&&&& RUNNING TensorRT.trtexec [TensorRT v8202] # trtexec --onnx=model/onnx-model/model.onnx --saveEngine=model/trt-model/engine.trt --optShapes=input:8x512x512x3 --workspace=8000 --verbose
[04/18/2022-02:33:54] [I] === Model Options ===
[04/18/2022-02:33:54] [I] Format: ONNX
[04/18/2022-02:33:54] [I] Model: model/onnx-model/model.onnx
[04/18/2022-02:33:54] [I] Output:
[04/18/2022-02:33:54] [I] === Build Options ===
[04/18/2022-02:33:54] [I] Max batch: explicit batch
[04/18/2022-02:33:54] [I] Workspace: 8000 MiB
[04/18/2022-02:33:54] [I] minTiming: 1
[04/18/2022-02:33:54] [I] avgTiming: 8
[04/18/2022-02:33:54] [I] Precision: FP32
[04/18/2022-02:33:54] [I] Calibration:
[04/18/2022-02:33:54] [I] Refit: Disabled
[04/18/2022-02:33:54] [I] Sparsity: Disabled
[04/18/2022-02:33:54] [I] Safe mode: Disabled
[04/18/2022-02:33:54] [I] DirectIO mode: Disabled
[04/18/2022-02:33:54] [I] Restricted mode: Disabled
[04/18/2022-02:33:54] [I] Save engine: model/trt-model/engine.trt
[04/18/2022-02:33:54] [I] Load engine:
[04/18/2022-02:33:54] [I] Profiling verbosity: 0
[04/18/2022-02:33:54] [I] Tactic sources: Using default tactic sources
[04/18/2022-02:33:54] [I] timingCacheMode: local
[04/18/2022-02:33:54] [I] timingCacheFile:
[04/18/2022-02:33:54] [I] Input(s)s format: fp32:CHW
[04/18/2022-02:33:54] [I] Output(s)s format: fp32:CHW
[04/18/2022-02:33:54] [I] Input build shape: input=8x512x512x3+8x512x512x3+8x512x512x3
[04/18/2022-02:33:54] [I] Input calibration shapes: model
[04/18/2022-02:33:54] [I] === System Options ===
[04/18/2022-02:33:54] [I] Device: 0
[04/18/2022-02:33:54] [I] DLACore:
[04/18/2022-02:33:54] [I] Plugins:
[04/18/2022-02:33:54] [I] === Inference Options ===
[04/18/2022-02:33:54] [I] Batch: Explicit
[04/18/2022-02:33:54] [I] Input inference shape: input=8x512x512x3
[04/18/2022-02:33:54] [I] Iterations: 10
[04/18/2022-02:33:54] [I] Duration: 3s (+ 200ms warm up)
[04/18/2022-02:33:54] [I] Sleep time: 0ms
[04/18/2022-02:33:54] [I] Idle time: 0ms
[04/18/2022-02:33:54] [I] Streams: 1
[04/18/2022-02:33:54] [I] ExposeDMA: Disabled
[04/18/2022-02:33:54] [I] Data transfers: Enabled
[04/18/2022-02:33:54] [I] Spin-wait: Disabled
[04/18/2022-02:33:54] [I] Multithreading: Disabled
[04/18/2022-02:33:54] [I] CUDA Graph: Disabled
[04/18/2022-02:33:54] [I] Separate profiling: Disabled
[04/18/2022-02:33:54] [I] Time Deserialize: Disabled
[04/18/2022-02:33:54] [I] Time Refit: Disabled
[04/18/2022-02:33:54] [I] Skip inference: Disabled
[04/18/2022-02:33:54] [I] Inputs:
[04/18/2022-02:33:54] [I] === Reporting Options ===
[04/18/2022-02:33:54] [I] Verbose: Enabled
[04/18/2022-02:33:54] [I] Averages: 10 inferences
[04/18/2022-02:33:54] [I] Percentile: 99
[04/18/2022-02:33:54] [I] Dump refittable layers: Disabled
[04/18/2022-02:33:54] [I] Dump output: Disabled
[04/18/2022-02:33:54] [I] Profile: Disabled
[04/18/2022-02:33:54] [I] Export timing to JSON file:
[04/18/2022-02:33:54] [I] Export output to JSON file:
[04/18/2022-02:33:54] [I] Export profile to JSON file:
[04/18/2022-02:33:54] [I]
[04/18/2022-02:33:54] [I] === Device Information ===
[04/18/2022-02:33:54] [I] Selected Device: NVIDIA GeForce RTX 3070 Laptop GPU
[04/18/2022-02:33:54] [I] Compute Capability: 8.6
[04/18/2022-02:33:54] [I] SMs: 40
[04/18/2022-02:33:54] [I] Compute Clock Rate: 1.56 GHz
[04/18/2022-02:33:54] [I] Device Global Memory: 8191 MiB
[04/18/2022-02:33:54] [I] Shared Memory per SM: 100 KiB
[04/18/2022-02:33:54] [I] Memory Bus Width: 256 bits (ECC disabled)
[04/18/2022-02:33:54] [I] Memory Clock Rate: 7.001 GHz
[04/18/2022-02:33:54] [I]
[04/18/2022-02:33:54] [I] TensorRT version: 8.2.2
[04/18/2022-02:33:54] [V] [TRT] Registered plugin creator - ::BatchTilePlugin_TRT version 1
[04/18/2022-02:33:54] [V] [TRT] Registered plugin creator - ::BatchedNMS_TRT version 1
[04/18/2022-02:33:54] [V] [TRT] Registered plugin creator - ::BatchedNMSDynamic_TRT version 1
[04/18/2022-02:33:54] [V] [TRT] Registered plugin creator - ::CoordConvAC version 1
[04/18/2022-02:33:54] [V] [TRT] Registered plugin creator - ::CropAndResize version 1
[04/18/2022-02:33:54] [V] [TRT] Registered plugin creator - ::CropAndResizeDynamic version 1
[04/18/2022-02:33:54] [V] [TRT] Registered plugin creator - ::DetectionLayer_TRT version 1
[04/18/2022-02:33:54] [V] [TRT] Registered plugin creator - ::EfficientNMS_TRT version 1
[04/18/2022-02:33:54] [V] [TRT] Registered plugin creator - ::EfficientNMS_ONNX_TRT version 1
[04/18/2022-02:33:54] [V] [TRT] Registered plugin creator - ::EfficientNMS_TFTRT_TRT version 1
[04/18/2022-02:33:54] [V] [TRT] Registered plugin creator - ::FlattenConcat_TRT version 1
[04/18/2022-02:33:54] [V] [TRT] Registered plugin creator - ::GenerateDetection_TRT version 1
[04/18/2022-02:33:54] [V] [TRT] Registered plugin creator - ::GridAnchor_TRT version 1
[04/18/2022-02:33:54] [V] [TRT] Registered plugin creator - ::GridAnchorRect_TRT version 1
[04/18/2022-02:33:54] [V] [TRT] Registered plugin creator - ::InstanceNormalization_TRT version 1
[04/18/2022-02:33:54] [V] [TRT] Registered plugin creator - ::LReLU_TRT version 1
[04/18/2022-02:33:54] [V] [TRT] Registered plugin creator - ::MultilevelCropAndResize_TRT version 1
[04/18/2022-02:33:54] [V] [TRT] Registered plugin creator - ::MultilevelProposeROI_TRT version 1
[04/18/2022-02:33:54] [V] [TRT] Registered plugin creator - ::NMS_TRT version 1
[04/18/2022-02:33:54] [V] [TRT] Registered plugin creator - ::NMSDynamic_TRT version 1
[04/18/2022-02:33:54] [V] [TRT] Registered plugin creator - ::Normalize_TRT version 1
[04/18/2022-02:33:54] [V] [TRT] Registered plugin creator - ::PriorBox_TRT version 1
[04/18/2022-02:33:54] [V] [TRT] Registered plugin creator - ::ProposalLayer_TRT version 1
[04/18/2022-02:33:54] [V] [TRT] Registered plugin creator - ::Proposal version 1
[04/18/2022-02:33:54] [V] [TRT] Registered plugin creator - ::ProposalDynamic version 1
[04/18/2022-02:33:54] [V] [TRT] Registered plugin creator - ::PyramidROIAlign_TRT version 1
[04/18/2022-02:33:54] [V] [TRT] Registered plugin creator - ::Region_TRT version 1
[04/18/2022-02:33:54] [V] [TRT] Registered plugin creator - ::Reorg_TRT version 1
[04/18/2022-02:33:54] [V] [TRT] Registered plugin creator - ::ResizeNearest_TRT version 1
[04/18/2022-02:33:54] [V] [TRT] Registered plugin creator - ::RPROI_TRT version 1
[04/18/2022-02:33:54] [V] [TRT] Registered plugin creator - ::ScatterND version 1
[04/18/2022-02:33:54] [V] [TRT] Registered plugin creator - ::SpecialSlice_TRT version 1
[04/18/2022-02:33:54] [V] [TRT] Registered plugin creator - ::Split version 1
[04/18/2022-02:33:55] [I] [TRT] [MemUsageChange] Init CUDA: CPU +455, GPU +0, now: CPU 467, GPU 1251 (MiB)
[04/18/2022-02:33:55] [I] [TRT] [MemUsageSnapshot] Begin constructing builder kernel library: CPU 467 MiB, GPU 1251 MiB
[04/18/2022-02:33:55] [I] [TRT] [MemUsageSnapshot] End constructing builder kernel library: CPU 621 MiB, GPU 1295 MiB
[04/18/2022-02:33:55] [I] Start parsing network model
[04/18/2022-02:33:55] [I] [TRT] ----------------------------------------------------------------
[04/18/2022-02:33:55] [I] [TRT] Input filename: model/onnx-model/model.onnx
[04/18/2022-02:33:55] [I] [TRT] ONNX IR version: 0.0.7
[04/18/2022-02:33:55] [I] [TRT] Opset version: 11
[04/18/2022-02:33:55] [I] [TRT] Producer name:
[04/18/2022-02:33:55] [I] [TRT] Producer version:
[04/18/2022-02:33:55] [I] [TRT] Domain:
[04/18/2022-02:33:55] [I] [TRT] Model version: 0
[04/18/2022-02:33:55] [I] [TRT] Doc string:
[04/18/2022-02:33:55] [I] [TRT] ----------------------------------------------------------------
[04/18/2022-02:33:55] [V] [TRT] Plugin creator already registered - ::BatchTilePlugin_TRT version 1
[04/18/2022-02:33:55] [V] [TRT] Plugin creator already registered - ::BatchedNMS_TRT version 1
[04/18/2022-02:33:55] [V] [TRT] Plugin creator already registered - ::BatchedNMSDynamic_TRT version 1
[04/18/2022-02:33:55] [V] [TRT] Plugin creator already registered - ::CoordConvAC version 1
[04/18/2022-02:33:55] [V] [TRT] Plugin creator already registered - ::CropAndResize version 1
[04/18/2022-02:33:55] [V] [TRT] Plugin creator already registered - ::CropAndResizeDynamic version 1
[04/18/2022-02:33:55] [V] [TRT] Plugin creator already registered - ::DetectionLayer_TRT version 1
[04/18/2022-02:33:55] [V] [TRT] Plugin creator already registered - ::EfficientNMS_TRT version 1
[04/18/2022-02:33:55] [V] [TRT] Plugin creator already registered - ::EfficientNMS_ONNX_TRT version 1
[04/18/2022-02:33:55] [V] [TRT] Plugin creator already registered - ::EfficientNMS_TFTRT_TRT version 1
[04/18/2022-02:33:55] [V] [TRT] Plugin creator already registered - ::FlattenConcat_TRT version 1
[04/18/2022-02:33:55] [V] [TRT] Plugin creator already registered - ::GenerateDetection_TRT version 1
[04/18/2022-02:33:55] [V] [TRT] Plugin creator already registered - ::GridAnchor_TRT version 1
[04/18/2022-02:33:55] [V] [TRT] Plugin creator already registered - ::GridAnchorRect_TRT version 1
[04/18/2022-02:33:55] [V] [TRT] Plugin creator already registered - ::InstanceNormalization_TRT version 1
[04/18/2022-02:33:55] [V] [TRT] Plugin creator already registered - ::LReLU_TRT version 1
[04/18/2022-02:33:55] [V] [TRT] Plugin creator already registered - ::MultilevelCropAndResize_TRT version 1
[04/18/2022-02:33:55] [V] [TRT] Plugin creator already registered - ::MultilevelProposeROI_TRT version 1
[04/18/2022-02:33:55] [V] [TRT] Plugin creator already registered - ::NMS_TRT version 1
[04/18/2022-02:33:55] [V] [TRT] Plugin creator already registered - ::NMSDynamic_TRT version 1
[04/18/2022-02:33:55] [V] [TRT] Plugin creator already registered - ::Normalize_TRT version 1
[04/18/2022-02:33:55] [V] [TRT] Plugin creator already registered - ::PriorBox_TRT version 1
[04/18/2022-02:33:55] [V] [TRT] Plugin creator already registered - ::ProposalLayer_TRT version 1
[04/18/2022-02:33:55] [V] [TRT] Plugin creator already registered - ::Proposal version 1
[04/18/2022-02:33:55] [V] [TRT] Plugin creator already registered - ::ProposalDynamic version 1
[04/18/2022-02:33:55] [V] [TRT] Plugin creator already registered - ::PyramidROIAlign_TRT version 1
[04/18/2022-02:33:55] [V] [TRT] Plugin creator already registered - ::Region_TRT version 1
[04/18/2022-02:33:55] [V] [TRT] Plugin creator already registered - ::Reorg_TRT version 1
[04/18/2022-02:33:55] [V] [TRT] Plugin creator already registered - ::ResizeNearest_TRT version 1
[04/18/2022-02:33:55] [V] [TRT] Plugin creator already registered - ::RPROI_TRT version 1
[04/18/2022-02:33:55] [V] [TRT] Plugin creator already registered - ::ScatterND version 1
[04/18/2022-02:33:55] [V] [TRT] Plugin creator already registered - ::SpecialSlice_TRT version 1
[04/18/2022-02:33:55] [V] [TRT] Plugin creator already registered - ::Split version 1
[04/18/2022-02:33:55] [V] [TRT] Adding network input: input with dtype: float32, dimensions: (-1, 512, 512, 3)
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: input for ONNX tensor: input
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: preprocessor/scale_value:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: preprocessor/mean_value:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_conv2d/depthwise_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_conv2d/depthwise_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/ReadVariableOp_1:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3/ReadVariableOp_1:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_conv2d/depthwise_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_conv2d/depthwise_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_conv2d/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_expand_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_expand_conv2d/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/project_conv2d/Conv2D_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/project_conv2d/Conv2D_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_bn/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_bn/ReadVariableOp_1:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_bn/FusedBatchNormV3/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_bn/FusedBatchNormV3/ReadVariableOp_1:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_conv2d/depthwise_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_conv2d/depthwise_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_conv2d/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_expand_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_expand_conv2d/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/project_conv2d/Conv2D_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/project_conv2d/Conv2D_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/expand_conv2d/Conv2D_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/expand_conv2d/Conv2D_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_conv2d/depthwise_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_conv2d/depthwise_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_reduce_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_reduce_conv2d/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_expand_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_expand_conv2d/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/project_conv2d/Conv2D_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/project_conv2d/Conv2D_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_bn/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_bn/ReadVariableOp_1:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_bn/FusedBatchNormV3/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_bn/FusedBatchNormV3/ReadVariableOp_1:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_conv2d/depthwise_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_conv2d/depthwise_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_conv2d/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_expand_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_expand_conv2d/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/project_conv2d/Conv2D_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/project_conv2d/Conv2D_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_conv2d/Conv2D_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_conv2d/Conv2D_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/conv/BiasAdd_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/conv/BiasAdd_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_conv2d/depthwise_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_conv2d/depthwise_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_reduce_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_reduce_conv2d/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_expand_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_expand_conv2d/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/project_conv2d/Conv2D_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/project_conv2d/Conv2D_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_bn/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_bn/ReadVariableOp_1:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_bn/FusedBatchNormV3/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_bn/FusedBatchNormV3/ReadVariableOp_1:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_conv2d/depthwise_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_conv2d/depthwise_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_conv2d/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_expand_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_expand_conv2d/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/project_conv2d/Conv2D_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/project_conv2d/Conv2D_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/expand_conv2d/Conv2D_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/expand_conv2d/Conv2D_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/depthwise_conv2d/depthwise_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/depthwise_conv2d/depthwise_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_reduce_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_reduce_conv2d/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_expand_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_expand_conv2d/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/project_conv2d/Conv2D_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/project_conv2d/Conv2D_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/expand_conv2d/Conv2D_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/expand_conv2d/Conv2D_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/depthwise_conv2d/depthwise_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/depthwise_conv2d/depthwise_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_reduce_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_reduce_conv2d/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_expand_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_expand_conv2d/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/project_conv2d/Conv2D_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/project_conv2d/Conv2D_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_bn/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_bn/ReadVariableOp_1:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_bn/FusedBatchNormV3/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_bn/FusedBatchNormV3/ReadVariableOp_1:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_conv2d/depthwise_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_conv2d/depthwise_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_conv2d/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_expand_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_expand_conv2d/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/project_conv2d/Conv2D_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/project_conv2d/Conv2D_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/expand_conv2d/Conv2D_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/expand_conv2d/Conv2D_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/depthwise_conv2d/depthwise_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/depthwise_conv2d/depthwise_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_reduce_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_reduce_conv2d/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_expand_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_expand_conv2d/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/project_conv2d/Conv2D_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/project_conv2d/Conv2D_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_conv2d/Conv2D_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_conv2d/Conv2D_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/conv/BiasAdd_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/conv/BiasAdd_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_0_up_lvl_4/1x1_pre_sample/conv/BiasAdd_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_0_up_lvl_4/1x1_pre_sample/conv/BiasAdd_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_conv2d/depthwise_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_conv2d/depthwise_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_reduce_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_reduce_conv2d/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_expand_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_expand_conv2d/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/project_conv2d/Conv2D_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/project_conv2d/Conv2D_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_bn/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_bn/ReadVariableOp_1:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_bn/FusedBatchNormV3/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_bn/FusedBatchNormV3/ReadVariableOp_1:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_conv2d/depthwise_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_conv2d/depthwise_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_conv2d/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_expand_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_expand_conv2d/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/project_conv2d/Conv2D_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/project_conv2d/Conv2D_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/expand_conv2d/Conv2D_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/expand_conv2d/Conv2D_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/depthwise_conv2d/depthwise_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/depthwise_conv2d/depthwise_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_reduce_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_reduce_conv2d/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_expand_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer:
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_expand_conv2d/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/project_conv2d/Conv2D_weights_fused_bn [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/project_conv2d/Conv2D_bias_fused_bn [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/expand_conv2d/Conv2D_weights_fused_bn [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/expand_conv2d/Conv2D_bias_fused_bn [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/depthwise_conv2d/depthwise_weights_fused_bn [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/depthwise_conv2d/depthwise_bias_fused_bn [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_reduce_conv2d/Conv2D/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_reduce_conv2d/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_expand_conv2d/Conv2D/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_expand_conv2d/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/project_conv2d/Conv2D_weights_fused_bn [04/18/2022-02:33:55] [V] [TRT] Importing initializer: 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/project_conv2d/Conv2D_bias_fused_bn [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/expand_conv2d/Conv2D_weights_fused_bn [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/expand_conv2d/Conv2D_bias_fused_bn [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/depthwise_conv2d/depthwise_weights_fused_bn [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/depthwise_conv2d/depthwise_bias_fused_bn [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_reduce_conv2d/Conv2D/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_reduce_conv2d/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_expand_conv2d/Conv2D/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_expand_conv2d/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/project_conv2d/Conv2D_weights_fused_bn [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/project_conv2d/Conv2D_bias_fused_bn [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/Conv2D/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/Conv2D/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/1x1_pre_sample/conv/Conv2D/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/ReadVariableOp_1:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/ReadVariableOp_1:0 [04/18/2022-02:33:55] [V] [TRT] 
Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/1x1_pre_sample/batchnorm/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/1x1_pre_sample/batchnorm/ReadVariableOp_1:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: const_fold_opt__5829 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/truediv:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/batchnorm/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/batchnorm/ReadVariableOp_1:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: const_fold_opt__6102 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/truediv:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/batchnorm/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/batchnorm/ReadVariableOp_1:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: const_fold_opt__5788 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/truediv:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/batchnorm/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/batchnorm/ReadVariableOp_1:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: const_fold_opt__5809 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/truediv:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing 
initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/batchnorm/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/batchnorm/ReadVariableOp_1:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/truediv:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/batchnorm/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/batchnorm/ReadVariableOp_1:0 
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/truediv:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/batchnorm/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/batchnorm/ReadVariableOp_1:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/truediv:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/batchnorm/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/batchnorm/ReadVariableOp_1:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/truediv:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/batchnorm/ReadVariableOp:0 
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/batchnorm/ReadVariableOp_1:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/truediv:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/batchnorm/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/batchnorm/ReadVariableOp_1:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/truediv:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/batchnorm/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/batchnorm/ReadVariableOp_1:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/truediv:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:55] 
[V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/batchnorm/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/batchnorm/ReadVariableOp_1:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/truediv:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/batchnorm/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/batchnorm/ReadVariableOp_1:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/truediv:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/batchnorm/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/batchnorm/ReadVariableOp_1:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/truediv:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/separable_conv/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/batchnorm/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/batchnorm/ReadVariableOp_1:0
[... several hundred similar verbose "Importing initializer" lines elided: the remaining EfficientDet-D0 BiFPN nodes (node_15 through node_25; separable_conv, BiasAdd, and batchnorm/FusedBatchNormV3 weights) and the WeightSharedConvolutionalBoxPredictor ClassPredictionTower / BoxPredictionTower / ClassPredictor / BoxPredictor initializers (conv2d_0 through conv2d_2, BatchNorm feature_0 through feature_4, and Reshape shapes) ...]
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_4/FusedBatchNormV3/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_4/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_4/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_4/ReadVariableOp_1:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_4/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_4/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_4/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_4/ReadVariableOp_1:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_4/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_4/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_4/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_4/ReadVariableOp_1:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_4/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_4/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_4_shape [04/18/2022-02:33:55] [V] [TRT] Importing initializer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_4_shape [04/18/2022-02:33:55] [V] [TRT] Importing initializer: nms/anchors:0 [04/18/2022-02:33:55] [V] [TRT] Parsing node: preprocessor/transpose [Transpose] [04/18/2022-02:33:55] [V] [TRT] Searching for input: input [04/18/2022-02:33:55] [V] [TRT] preprocessor/transpose [Transpose] inputs: [input -> (-1, 512, 512, 3)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: preprocessor/transpose for ONNX node: preprocessor/transpose [04/18/2022-02:33:55] [V] [TRT] Registering tensor: preprocessor/transpose:0_0 for ONNX tensor: preprocessor/transpose:0_0 [04/18/2022-02:33:55] [V] [TRT] preprocessor/transpose [Transpose] outputs: [preprocessor/transpose:0_0 -> (-1, 3, 512, 512)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: preprocessor/scale [Mul] [04/18/2022-02:33:55] [V] [TRT] Searching for input: preprocessor/transpose:0_0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: preprocessor/scale_value:0 [04/18/2022-02:33:55] [V] [TRT] preprocessor/scale [Mul] inputs: [preprocessor/transpose:0_0 -> (-1, 3, 512, 512)[FLOAT]], [preprocessor/scale_value:0 -> (1, 3, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: preprocessor/scale_value:0 for ONNX node: preprocessor/scale_value:0 [04/18/2022-02:33:55] [V] [TRT] Registering layer: preprocessor/scale for ONNX node: preprocessor/scale 
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: preprocessor/scale:0_1 for ONNX tensor: preprocessor/scale:0_1 [04/18/2022-02:33:55] [V] [TRT] preprocessor/scale [Mul] outputs: [preprocessor/scale:0_1 -> (-1, 3, 512, 512)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: preprocessor/mean [Add] [04/18/2022-02:33:55] [V] [TRT] Searching for input: preprocessor/scale:0_1 [04/18/2022-02:33:55] [V] [TRT] Searching for input: preprocessor/mean_value:0 [04/18/2022-02:33:55] [V] [TRT] preprocessor/mean [Add] inputs: [preprocessor/scale:0_1 -> (-1, 3, 512, 512)[FLOAT]], [preprocessor/mean_value:0 -> (1, 3, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: preprocessor/mean_value:0 for ONNX node: preprocessor/mean_value:0 [04/18/2022-02:33:55] [V] [TRT] Registering layer: preprocessor/mean for ONNX node: preprocessor/mean [04/18/2022-02:33:55] [V] [TRT] Registering tensor: preprocessor/mean:0_2 for ONNX tensor: preprocessor/mean:0_2 [04/18/2022-02:33:55] [V] [TRT] preprocessor/mean [Add] outputs: [preprocessor/mean:0_2 -> (-1, 3, 512, 512)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D [Conv] [04/18/2022-02:33:55] [V] [TRT] Searching for input: preprocessor/mean:0_2 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D_weights_fused_bn [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D_bias_fused_bn [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D [Conv] inputs: [preprocessor/mean:0_2 -> (-1, 3, 512, 512)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D_weights_fused_bn -> (32, 3, 3, 3)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D_bias_fused_bn -> (32)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Convolution 
input dimensions: (-1, 3, 512, 512) [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D [04/18/2022-02:33:55] [V] [TRT] Using kernel: (3, 3), strides: (2, 2), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 32 [04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 32, 256, 256) [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_bn/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_bn/FusedBatchNormV3:0 -> (-1, 32, 256, 256)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_activation/Sigmoid [Sigmoid] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_bn/FusedBatchNormV3:0 -> (-1, 32, 256, 256)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_activation/Sigmoid [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_activation/Sigmoid 
[Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_activation/Sigmoid:0 -> (-1, 32, 256, 256)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_activation/mul [Mul] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_bn/FusedBatchNormV3:0 -> (-1, 32, 256, 256)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_activation/Sigmoid:0 -> (-1, 32, 256, 256)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_activation/mul [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_activation/mul:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_activation/mul:0 -> (-1, 32, 256, 256)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_conv2d/depthwise [Conv] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_activation/mul:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_conv2d/depthwise_weights_fused_bn [04/18/2022-02:33:55] [V] [TRT] 
Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_conv2d/depthwise_bias_fused_bn [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_conv2d/depthwise [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_activation/mul:0 -> (-1, 32, 256, 256)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_conv2d/depthwise_weights_fused_bn -> (32, 1, 3, 3)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_conv2d/depthwise_bias_fused_bn -> (32)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 32, 256, 256) [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_conv2d/depthwise for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_conv2d/depthwise [04/18/2022-02:33:55] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 32 [04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 32, 256, 256) [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_bn/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_conv2d/depthwise [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_bn/FusedBatchNormV3:0 -> (-1, 32, 256, 256)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_activation/Sigmoid [Sigmoid] [04/18/2022-02:33:55] [V] [TRT] Searching for input: 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_bn/FusedBatchNormV3:0 -> (-1, 32, 256, 256)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_activation/Sigmoid [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_activation/Sigmoid:0 -> (-1, 32, 256, 256)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_activation/mul [Mul] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_bn/FusedBatchNormV3:0 -> (-1, 32, 256, 256)[FLOAT]], 
[StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_activation/Sigmoid:0 -> (-1, 32, 256, 256)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_activation/mul [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_activation/mul:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_activation/mul:0 -> (-1, 32, 256, 256)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_squeeze/Mean [ReduceMean] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_activation/mul:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_squeeze/Mean [ReduceMean] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_activation/mul:0 -> (-1, 32, 256, 256)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_squeeze/Mean for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_squeeze/Mean [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_squeeze/Mean:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_squeeze/Mean:0 [04/18/2022-02:33:55] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_squeeze/Mean [ReduceMean] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_squeeze/Mean:0 -> (-1, 32, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd [Conv] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_squeeze/Mean:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/Conv2D/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_squeeze/Mean:0 -> (-1, 32, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/Conv2D/ReadVariableOp:0 -> (8, 32, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd/ReadVariableOp:0 -> (8)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 32, 1, 1) [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd [04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 8 [04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 8, 1, 1) [04/18/2022-02:33:55] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd:0 -> (-1, 8, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_activation/Sigmoid [Sigmoid] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd:0 -> (-1, 8, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_activation/Sigmoid [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_activation/Sigmoid:0 -> (-1, 8, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_activation/mul 
[Mul] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd:0 -> (-1, 8, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_activation/Sigmoid:0 -> (-1, 8, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_activation/mul [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_activation/mul:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_activation/mul:0 -> (-1, 8, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd [Conv] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_activation/mul:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/Conv2D/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_activation/mul:0 -> (-1, 8, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/Conv2D/ReadVariableOp:0 -> (32, 8, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd/ReadVariableOp:0 -> (32)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 8, 1, 1) [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd [04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 32 [04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 32, 1, 1) [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd:0 -> (-1, 32, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_activation/Sigmoid [Sigmoid] [04/18/2022-02:33:55] [V] [TRT] Searching for input: 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd:0 -> (-1, 32, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_activation/Sigmoid [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_activation/Sigmoid:0 -> (-1, 32, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_excite/mul [Mul] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_activation/mul:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_excite/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_activation/mul:0 -> (-1, 32, 256, 256)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_activation/Sigmoid:0 -> (-1, 32, 1, 1)[FLOAT]], 
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_excite/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_excite/mul
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_excite/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_excite/mul:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_excite/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_excite/mul:0 -> (-1, 32, 256, 256)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D [Conv]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_excite/mul:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_excite/mul:0 -> (-1, 32, 256, 256)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D_weights_fused_bn -> (16, 32, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D_bias_fused_bn -> (16)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 32, 256, 256)
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D
[04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 16
[04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 16, 256, 256)
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_bn/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_bn/FusedBatchNormV3:0 -> (-1, 16, 256, 256)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D [Conv]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_bn/FusedBatchNormV3:0 -> (-1, 16, 256, 256)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D/ReadVariableOp:0 -> (96, 16, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 16, 256, 256)
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D
[04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 96
[04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 96, 256, 256)
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D:0 -> (-1, 96, 256, 256)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 [BatchNormalization]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/ReadVariableOp_1:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3/ReadVariableOp_1:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D:0 -> (-1, 96, 256, 256)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/ReadVariableOp:0 -> (96)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/ReadVariableOp_1:0 -> (96)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3/ReadVariableOp:0 -> (96)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3/ReadVariableOp_1:0 -> (96)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3:0 -> (-1, 96, 256, 256)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_activation/Sigmoid [Sigmoid]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3:0 -> (-1, 96, 256, 256)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_activation/Sigmoid
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_activation/Sigmoid:0 -> (-1, 96, 256, 256)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_activation/mul [Mul]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3:0 -> (-1, 96, 256, 256)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_activation/Sigmoid:0 -> (-1, 96, 256, 256)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_activation/mul
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_activation/mul:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_activation/mul:0 -> (-1, 96, 256, 256)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_conv2d/depthwise [Conv]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_activation/mul:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_conv2d/depthwise_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_conv2d/depthwise_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_conv2d/depthwise [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_activation/mul:0 -> (-1, 96, 256, 256)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_conv2d/depthwise_weights_fused_bn -> (96, 1, 3, 3)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_conv2d/depthwise_bias_fused_bn -> (96)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 96, 256, 256)
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_conv2d/depthwise for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_conv2d/depthwise
[04/18/2022-02:33:55] [V] [TRT] Using kernel: (3, 3), strides: (2, 2), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 96
[04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 96, 128, 128)
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_bn/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_conv2d/depthwise [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_bn/FusedBatchNormV3:0 -> (-1, 96, 128, 128)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_activation/Sigmoid [Sigmoid]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_bn/FusedBatchNormV3:0 -> (-1, 96, 128, 128)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_activation/Sigmoid
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_activation/Sigmoid:0 -> (-1, 96, 128, 128)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_activation/mul [Mul]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_bn/FusedBatchNormV3:0 -> (-1, 96, 128, 128)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_activation/Sigmoid:0 -> (-1, 96, 128, 128)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_activation/mul
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_activation/mul:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_activation/mul:0 -> (-1, 96, 128, 128)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_squeeze/Mean [ReduceMean]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_activation/mul:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_squeeze/Mean [ReduceMean] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_activation/mul:0 -> (-1, 96, 128, 128)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_squeeze/Mean for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_squeeze/Mean
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_squeeze/Mean:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_squeeze/Mean:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_squeeze/Mean [ReduceMean] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_squeeze/Mean:0 -> (-1, 96, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_conv2d/BiasAdd [Conv]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_squeeze/Mean:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_conv2d/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_conv2d/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_squeeze/Mean:0 -> (-1, 96, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_conv2d/Conv2D/ReadVariableOp:0 -> (4, 96, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_conv2d/BiasAdd/ReadVariableOp:0 -> (4)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 96, 1, 1)
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_conv2d/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_conv2d/BiasAdd
[04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 4
[04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 4, 1, 1)
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_conv2d/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_conv2d/BiasAdd:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_conv2d/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_conv2d/BiasAdd:0 -> (-1, 4, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_activation/Sigmoid [Sigmoid]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_conv2d/BiasAdd:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_conv2d/BiasAdd:0 -> (-1, 4, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_activation/Sigmoid
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_activation/Sigmoid:0 -> (-1, 4, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_activation/mul [Mul]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_conv2d/BiasAdd:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_conv2d/BiasAdd:0 -> (-1, 4, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_activation/Sigmoid:0 -> (-1, 4, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_activation/mul
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_activation/mul:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_activation/mul:0 -> (-1, 4, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_expand_conv2d/BiasAdd [Conv]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_activation/mul:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_expand_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_expand_conv2d/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_expand_conv2d/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_activation/mul:0 -> (-1, 4, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_expand_conv2d/Conv2D/ReadVariableOp:0 -> (96, 4, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_expand_conv2d/BiasAdd/ReadVariableOp:0 -> (96)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 4, 1, 1)
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_expand_conv2d/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_expand_conv2d/BiasAdd
[04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 96
[04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 96, 1, 1)
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_expand_conv2d/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_expand_conv2d/BiasAdd:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_expand_conv2d/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_expand_conv2d/BiasAdd:0 -> (-1, 96, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_expand_activation/Sigmoid [Sigmoid]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_expand_conv2d/BiasAdd:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_expand_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_expand_conv2d/BiasAdd:0 -> (-1, 96, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_expand_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_expand_activation/Sigmoid
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_expand_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_expand_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_expand_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_expand_activation/Sigmoid:0 -> (-1, 96, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_excite/mul [Mul]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_activation/mul:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_expand_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_excite/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_activation/mul:0 -> (-1, 96, 128, 128)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_expand_activation/Sigmoid:0 -> (-1, 96, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_excite/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_excite/mul
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_excite/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_excite/mul:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_excite/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_excite/mul:0 -> (-1, 96, 128, 128)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/project_conv2d/Conv2D [Conv]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_excite/mul:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/project_conv2d/Conv2D_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/project_conv2d/Conv2D_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/project_conv2d/Conv2D [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_excite/mul:0 -> (-1, 96, 128, 128)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/project_conv2d/Conv2D_weights_fused_bn -> (24, 96, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/project_conv2d/Conv2D_bias_fused_bn -> (24)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 96, 128, 128)
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/project_conv2d/Conv2D for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/project_conv2d/Conv2D
[04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 24
[04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 24, 128, 128)
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/project_bn/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/project_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/project_conv2d/Conv2D [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/project_bn/FusedBatchNormV3:0 -> (-1, 24, 128, 128)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_conv2d/Conv2D [Conv]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/project_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_conv2d/Conv2D [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/project_bn/FusedBatchNormV3:0 -> (-1, 24, 128, 128)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_conv2d/Conv2D/ReadVariableOp:0 -> (144, 24, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 24, 128, 128)
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_conv2d/Conv2D for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_conv2d/Conv2D
[04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 144
[04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 144, 128, 128)
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_conv2d/Conv2D:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_conv2d/Conv2D:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_conv2d/Conv2D [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_conv2d/Conv2D:0 -> (-1, 144, 128, 128)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_bn/FusedBatchNormV3 [BatchNormalization]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_conv2d/Conv2D:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_bn/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_bn/ReadVariableOp_1:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_bn/FusedBatchNormV3/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_bn/FusedBatchNormV3/ReadVariableOp_1:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_bn/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_conv2d/Conv2D:0 -> (-1, 144, 128, 128)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_bn/ReadVariableOp:0 -> (144)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_bn/ReadVariableOp_1:0 -> (144)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_bn/FusedBatchNormV3/ReadVariableOp:0 -> (144)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_bn/FusedBatchNormV3/ReadVariableOp_1:0 -> (144)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_bn/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_bn/FusedBatchNormV3
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_bn/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_bn/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_bn/FusedBatchNormV3:0 -> (-1, 144, 128, 128)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_activation/Sigmoid [Sigmoid]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_bn/FusedBatchNormV3:0 -> (-1, 144, 128, 128)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_activation/Sigmoid
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_activation/Sigmoid:0 -> (-1, 144, 128, 128)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_activation/mul [Mul]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_bn/FusedBatchNormV3:0 -> (-1, 144, 128, 128)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_activation/Sigmoid:0 -> (-1, 144, 128, 128)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_activation/mul
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_activation/mul:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_activation/mul:0 -> (-1, 144, 128, 128)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_conv2d/depthwise [Conv]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_activation/mul:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_conv2d/depthwise_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_conv2d/depthwise_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_conv2d/depthwise [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_activation/mul:0 -> (-1, 144, 128, 128)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_conv2d/depthwise_weights_fused_bn -> (144, 1, 3, 3)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_conv2d/depthwise_bias_fused_bn -> (144)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 144, 128, 128)
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_conv2d/depthwise for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_conv2d/depthwise
[04/18/2022-02:33:55] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 144
[04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 144, 128, 128)
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_bn/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_conv2d/depthwise [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_bn/FusedBatchNormV3:0 -> (-1, 144, 128, 128)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_activation/Sigmoid [Sigmoid]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_bn/FusedBatchNormV3:0 -> (-1, 144, 128, 128)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_activation/Sigmoid
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_activation/Sigmoid:0 -> (-1, 144, 128, 128)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_activation/mul [Mul]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_bn/FusedBatchNormV3:0 -> (-1, 144, 128, 128)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_activation/Sigmoid:0 -> (-1, 144, 128, 128)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_activation/mul [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_activation/mul:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_activation/mul:0 -> (-1, 144, 128, 128)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_squeeze/Mean [ReduceMean] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_activation/mul:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_squeeze/Mean [ReduceMean] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_activation/mul:0 -> (-1, 144, 128, 128)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_squeeze/Mean for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_squeeze/Mean 
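A note on the recurring pattern above: each `*_activation/Sigmoid` node followed by a `*_activation/mul` node that multiplies the Sigmoid's own input by its output is how TensorFlow exports the swish/SiLU activation to ONNX (older opsets have no dedicated SiLU op), which is why the parser registers two layers per activation. A minimal NumPy sketch of what the two-node pattern computes:

```python
import numpy as np

def silu(x: np.ndarray) -> np.ndarray:
    """swish/SiLU: x * sigmoid(x). Exported to ONNX as a Sigmoid
    node plus a Mul node that takes both the Sigmoid's input and
    its output, matching the node pairs in this log."""
    return x * (1.0 / (1.0 + np.exp(-x)))

# The Sigmoid -> Mul pair is equivalent to this single call:
x = np.array([-2.0, 0.0, 2.0])
y = silu(x)
```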
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_squeeze/Mean:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_squeeze/Mean:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_squeeze/Mean [ReduceMean] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_squeeze/Mean:0 -> (-1, 144, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_conv2d/BiasAdd [Conv] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_squeeze/Mean:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_conv2d/Conv2D/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_conv2d/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_conv2d/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_squeeze/Mean:0 -> (-1, 144, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_conv2d/Conv2D/ReadVariableOp:0 -> (6, 144, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_conv2d/BiasAdd/ReadVariableOp:0 -> (6)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 144, 1, 1) [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_conv2d/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_conv2d/BiasAdd [04/18/2022-02:33:55] [V] [TRT] 
Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 6 [04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 6, 1, 1) [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_conv2d/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_conv2d/BiasAdd:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_conv2d/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_conv2d/BiasAdd:0 -> (-1, 6, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_activation/Sigmoid [Sigmoid] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_conv2d/BiasAdd:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_conv2d/BiasAdd:0 -> (-1, 6, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_activation/Sigmoid [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_activation/Sigmoid [Sigmoid] outputs: 
[StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_activation/Sigmoid:0 -> (-1, 6, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_activation/mul [Mul] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_conv2d/BiasAdd:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_conv2d/BiasAdd:0 -> (-1, 6, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_activation/Sigmoid:0 -> (-1, 6, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_activation/mul [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_activation/mul:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_activation/mul:0 -> (-1, 6, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_expand_conv2d/BiasAdd [Conv] [04/18/2022-02:33:55] [V] [TRT] Searching for input: 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_activation/mul:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_expand_conv2d/Conv2D/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_expand_conv2d/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_expand_conv2d/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_activation/mul:0 -> (-1, 6, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_expand_conv2d/Conv2D/ReadVariableOp:0 -> (144, 6, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_expand_conv2d/BiasAdd/ReadVariableOp:0 -> (144)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 6, 1, 1) [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_expand_conv2d/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_expand_conv2d/BiasAdd [04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 144 [04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 144, 1, 1) [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_expand_conv2d/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_expand_conv2d/BiasAdd:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_expand_conv2d/BiasAdd [Conv] outputs: 
[StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_expand_conv2d/BiasAdd:0 -> (-1, 144, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_expand_activation/Sigmoid [Sigmoid] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_expand_conv2d/BiasAdd:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_expand_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_expand_conv2d/BiasAdd:0 -> (-1, 144, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_expand_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_expand_activation/Sigmoid [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_expand_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_expand_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_expand_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_expand_activation/Sigmoid:0 -> (-1, 144, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_excite/mul [Mul] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_activation/mul:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_expand_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_excite/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_activation/mul:0 -> (-1, 144, 128, 128)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_expand_activation/Sigmoid:0 -> (-1, 144, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_excite/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_excite/mul [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_excite/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_excite/mul:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_excite/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_excite/mul:0 -> (-1, 144, 128, 128)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/project_conv2d/Conv2D [Conv] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_excite/mul:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/project_conv2d/Conv2D_weights_fused_bn [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/project_conv2d/Conv2D_bias_fused_bn [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/project_conv2d/Conv2D [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_excite/mul:0 -> (-1, 144, 128, 128)[FLOAT]], 
[StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/project_conv2d/Conv2D_weights_fused_bn -> (24, 144, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/project_conv2d/Conv2D_bias_fused_bn -> (24)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 144, 128, 128) [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/project_conv2d/Conv2D for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/project_conv2d/Conv2D [04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 24 [04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 24, 128, 128) [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/project_bn/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/project_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/project_conv2d/Conv2D [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/project_bn/FusedBatchNormV3:0 -> (-1, 24, 128, 128)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/add/add [Add] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/project_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/project_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/add/add [Add] inputs: 
[StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/project_bn/FusedBatchNormV3:0 -> (-1, 24, 128, 128)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/project_bn/FusedBatchNormV3:0 -> (-1, 24, 128, 128)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/add/add for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/add/add [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/add/add:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/add/add:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/add/add [Add] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/add/add:0 -> (-1, 24, 128, 128)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/expand_conv2d/Conv2D [Conv] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/add/add:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/expand_conv2d/Conv2D_weights_fused_bn [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/expand_conv2d/Conv2D_bias_fused_bn [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/expand_conv2d/Conv2D [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/add/add:0 -> (-1, 24, 128, 128)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/expand_conv2d/Conv2D_weights_fused_bn -> (144, 24, 1, 1)[FLOAT]], 
[StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/expand_conv2d/Conv2D_bias_fused_bn -> (144)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 24, 128, 128) [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/expand_conv2d/Conv2D for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/expand_conv2d/Conv2D [04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 144 [04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 144, 128, 128) [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/expand_bn/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/expand_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/expand_conv2d/Conv2D [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/expand_bn/FusedBatchNormV3:0 -> (-1, 144, 128, 128)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/expand_activation/Sigmoid [Sigmoid] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/expand_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/expand_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/expand_bn/FusedBatchNormV3:0 -> (-1, 144, 128, 128)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/expand_activation/Sigmoid for ONNX node: 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/expand_activation/Sigmoid [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/expand_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/expand_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/expand_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/expand_activation/Sigmoid:0 -> (-1, 144, 128, 128)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/expand_activation/mul [Mul] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/expand_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/expand_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/expand_activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/expand_bn/FusedBatchNormV3:0 -> (-1, 144, 128, 128)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/expand_activation/Sigmoid:0 -> (-1, 144, 128, 128)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/expand_activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/expand_activation/mul [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/expand_activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/expand_activation/mul:0 
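The `se_squeeze/Mean` → `se_reduce_conv2d` → `se_reduce_activation` → `se_expand_conv2d` → `se_expand_activation/Sigmoid` → `se_excite/mul` chain parsed for each block above is a standard squeeze-and-excitation block: global average pool to (N, 144, 1, 1), a 1x1 conv reducing 144→6 channels, SiLU, a 1x1 conv expanding 6→144, sigmoid, then a broadcast multiply of the (N, 144, 1, 1) gate against the full feature map. A NumPy sketch with random stand-in weights (not the model's), using a small spatial size in place of 128x128:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(x, w_reduce, b_reduce, w_expand, b_expand):
    """Squeeze-and-excitation as laid out in the log:
    ReduceMean over H,W -> 1x1 conv (144->6) -> SiLU ->
    1x1 conv (6->144) -> Sigmoid -> broadcast Mul."""
    squeezed = x.mean(axis=(2, 3))              # se_squeeze/Mean: (N, C)
    reduced = squeezed @ w_reduce.T + b_reduce  # 1x1 conv == matmul on (N, C)
    reduced = reduced * sigmoid(reduced)        # se_reduce_activation (SiLU)
    expanded = reduced @ w_expand.T + b_expand  # 1x1 conv (6 -> 144)
    gate = sigmoid(expanded)                    # se_expand_activation/Sigmoid
    return x * gate[:, :, None, None]           # se_excite/mul (broadcast)

x = rng.standard_normal((1, 144, 8, 8))        # small stand-in for 128x128
w_r = rng.standard_normal((6, 144)) * 0.1
b_r = np.zeros(6)
w_e = rng.standard_normal((144, 6)) * 0.1
b_e = np.zeros(144)
out = se_block(x, w_r, b_r, w_e, b_e)
```

Because the gate is a sigmoid output in (0, 1), the block can only attenuate channels, never amplify them.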
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/expand_activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/expand_activation/mul:0 -> (-1, 144, 128, 128)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_conv2d/depthwise [Conv] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/expand_activation/mul:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_conv2d/depthwise_weights_fused_bn [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_conv2d/depthwise_bias_fused_bn [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_conv2d/depthwise [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/expand_activation/mul:0 -> (-1, 144, 128, 128)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_conv2d/depthwise_weights_fused_bn -> (144, 1, 5, 5)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_conv2d/depthwise_bias_fused_bn -> (144)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 144, 128, 128) [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_conv2d/depthwise for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_conv2d/depthwise [04/18/2022-02:33:55] [V] [TRT] Using kernel: (5, 5), strides: (2, 2), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 144 [04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 144, 64, 64) 
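The drop from (-1, 144, 128, 128) to (-1, 144, 64, 64) at this depthwise conv comes from the stride of 2. The (0, 0) pre/post padding printed at parse time is consistent with TF-style SAME auto-padding being resolved later (an inference from the shapes, not stated in the log): with SAME padding the output spatial size is ceil(input / stride), whereas a truly unpadded 5x5 stride-2 conv would give 62, not 64. A small sketch of that arithmetic:

```python
import math

def conv_out_same(in_size: int, stride: int) -> int:
    """Output spatial size for TF-style SAME padding."""
    return math.ceil(in_size / stride)

def conv_out_valid(in_size: int, kernel: int, stride: int, dilation: int = 1) -> int:
    """Output spatial size with no padding (VALID)."""
    eff_k = dilation * (kernel - 1) + 1
    return (in_size - eff_k) // stride + 1

# 5x5 depthwise conv, stride 2, on a 128x128 map:
same = conv_out_same(128, 2)        # matches the logged 64
valid = conv_out_valid(128, 5, 2)   # would be 62 without padding
```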
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_bn/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_conv2d/depthwise [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_bn/FusedBatchNormV3:0 -> (-1, 144, 64, 64)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_activation/Sigmoid [Sigmoid] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_bn/FusedBatchNormV3:0 -> (-1, 144, 64, 64)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_activation/Sigmoid [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_activation/Sigmoid:0 -> (-1, 144, 64, 64)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_activation/mul [Mul]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_bn/FusedBatchNormV3:0 -> (-1, 144, 64, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_activation/Sigmoid:0 -> (-1, 144, 64, 64)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_activation/mul
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_activation/mul:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_activation/mul:0 -> (-1, 144, 64, 64)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_squeeze/Mean [ReduceMean]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_activation/mul:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_squeeze/Mean [ReduceMean] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_activation/mul:0 -> (-1, 144, 64, 64)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_squeeze/Mean for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_squeeze/Mean
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_squeeze/Mean:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_squeeze/Mean:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_squeeze/Mean [ReduceMean] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_squeeze/Mean:0 -> (-1, 144, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_reduce_conv2d/BiasAdd [Conv]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_squeeze/Mean:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_reduce_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_reduce_conv2d/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_reduce_conv2d/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_squeeze/Mean:0 -> (-1, 144, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_reduce_conv2d/Conv2D/ReadVariableOp:0 -> (6, 144, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_reduce_conv2d/BiasAdd/ReadVariableOp:0 -> (6)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 144, 1, 1)
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_reduce_conv2d/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_reduce_conv2d/BiasAdd
[04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 6
[04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 6, 1, 1)
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_reduce_conv2d/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_reduce_conv2d/BiasAdd:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_reduce_conv2d/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_reduce_conv2d/BiasAdd:0 -> (-1, 6, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_reduce_activation/Sigmoid [Sigmoid]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_reduce_conv2d/BiasAdd:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_reduce_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_reduce_conv2d/BiasAdd:0 -> (-1, 6, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_reduce_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_reduce_activation/Sigmoid
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_reduce_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_reduce_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_reduce_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_reduce_activation/Sigmoid:0 -> (-1, 6, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_reduce_activation/mul [Mul]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_reduce_conv2d/BiasAdd:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_reduce_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_reduce_activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_reduce_conv2d/BiasAdd:0 -> (-1, 6, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_reduce_activation/Sigmoid:0 -> (-1, 6, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_reduce_activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_reduce_activation/mul
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_reduce_activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_reduce_activation/mul:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_reduce_activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_reduce_activation/mul:0 -> (-1, 6, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_expand_conv2d/BiasAdd [Conv]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_reduce_activation/mul:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_expand_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_expand_conv2d/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_expand_conv2d/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_reduce_activation/mul:0 -> (-1, 6, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_expand_conv2d/Conv2D/ReadVariableOp:0 -> (144, 6, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_expand_conv2d/BiasAdd/ReadVariableOp:0 -> (144)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 6, 1, 1)
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_expand_conv2d/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_expand_conv2d/BiasAdd
[04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 144
[04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 144, 1, 1)
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_expand_conv2d/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_expand_conv2d/BiasAdd:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_expand_conv2d/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_expand_conv2d/BiasAdd:0 -> (-1, 144, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_expand_activation/Sigmoid [Sigmoid]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_expand_conv2d/BiasAdd:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_expand_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_expand_conv2d/BiasAdd:0 -> (-1, 144, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_expand_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_expand_activation/Sigmoid
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_expand_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_expand_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_expand_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_expand_activation/Sigmoid:0 -> (-1, 144, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_excite/mul [Mul]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_activation/mul:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_expand_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_excite/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_activation/mul:0 -> (-1, 144, 64, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_expand_activation/Sigmoid:0 -> (-1, 144, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_excite/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_excite/mul
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_excite/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_excite/mul:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_excite/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_excite/mul:0 -> (-1, 144, 64, 64)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/project_conv2d/Conv2D [Conv]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_excite/mul:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/project_conv2d/Conv2D_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/project_conv2d/Conv2D_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/project_conv2d/Conv2D [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_excite/mul:0 -> (-1, 144, 64, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/project_conv2d/Conv2D_weights_fused_bn -> (40, 144, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/project_conv2d/Conv2D_bias_fused_bn -> (40)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 144, 64, 64)
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/project_conv2d/Conv2D for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/project_conv2d/Conv2D
[04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 40
[04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 40, 64, 64)
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/project_bn/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/project_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/project_conv2d/Conv2D [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/project_bn/FusedBatchNormV3:0 -> (-1, 40, 64, 64)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_conv2d/Conv2D [Conv]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/project_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_conv2d/Conv2D [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/project_bn/FusedBatchNormV3:0 -> (-1, 40, 64, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_conv2d/Conv2D/ReadVariableOp:0 -> (240, 40, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 40, 64, 64)
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_conv2d/Conv2D for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_conv2d/Conv2D
[04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 240
[04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 240, 64, 64)
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_conv2d/Conv2D:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_conv2d/Conv2D:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_conv2d/Conv2D [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_conv2d/Conv2D:0 -> (-1, 240, 64, 64)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_bn/FusedBatchNormV3 [BatchNormalization]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_conv2d/Conv2D:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_bn/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_bn/ReadVariableOp_1:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_bn/FusedBatchNormV3/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_bn/FusedBatchNormV3/ReadVariableOp_1:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_bn/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_conv2d/Conv2D:0 -> (-1, 240, 64, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_bn/ReadVariableOp:0 -> (240)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_bn/ReadVariableOp_1:0 -> (240)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_bn/FusedBatchNormV3/ReadVariableOp:0 -> (240)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_bn/FusedBatchNormV3/ReadVariableOp_1:0 -> (240)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_bn/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_bn/FusedBatchNormV3
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_bn/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_bn/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_bn/FusedBatchNormV3:0 -> (-1, 240, 64, 64)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_activation/Sigmoid [Sigmoid]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_bn/FusedBatchNormV3:0 -> (-1, 240, 64, 64)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_activation/Sigmoid
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_activation/Sigmoid:0 -> (-1, 240, 64, 64)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_activation/mul [Mul]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_bn/FusedBatchNormV3:0 -> (-1, 240, 64, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_activation/Sigmoid:0 -> (-1, 240, 64, 64)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_activation/mul
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_activation/mul:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_activation/mul:0 -> (-1, 240, 64, 64)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_conv2d/depthwise [Conv]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_activation/mul:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_conv2d/depthwise_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_conv2d/depthwise_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_conv2d/depthwise [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_activation/mul:0 -> (-1, 240, 64, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_conv2d/depthwise_weights_fused_bn -> (240, 1, 5, 5)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_conv2d/depthwise_bias_fused_bn -> (240)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 240, 64, 64)
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_conv2d/depthwise for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_conv2d/depthwise
[04/18/2022-02:33:55] [V] [TRT] Using kernel: (5, 5), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 240
[04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 240, 64, 64)
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_bn/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_conv2d/depthwise [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_bn/FusedBatchNormV3:0 -> (-1, 240, 64, 64)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_activation/Sigmoid [Sigmoid]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_bn/FusedBatchNormV3:0 -> (-1, 240, 64, 64)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_activation/Sigmoid
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_activation/Sigmoid:0 -> (-1, 240, 64, 64)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_activation/mul [Mul]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_bn/FusedBatchNormV3:0 -> (-1, 240, 64, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_activation/Sigmoid:0 -> (-1, 240, 64, 64)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_activation/mul
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_activation/mul:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_activation/mul:0 -> (-1, 240, 64, 64)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_squeeze/Mean [ReduceMean]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_activation/mul:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_squeeze/Mean [ReduceMean] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_activation/mul:0 -> (-1, 240, 64, 64)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_squeeze/Mean for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_squeeze/Mean
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_squeeze/Mean:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_squeeze/Mean:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_squeeze/Mean [ReduceMean] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_squeeze/Mean:0 -> (-1, 240, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_conv2d/BiasAdd [Conv]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_squeeze/Mean:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_conv2d/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_conv2d/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_squeeze/Mean:0 -> (-1, 240, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_conv2d/Conv2D/ReadVariableOp:0 -> (10, 240, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_conv2d/BiasAdd/ReadVariableOp:0 -> (10)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 240, 1, 1)
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_conv2d/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_conv2d/BiasAdd
[04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 10
[04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 10, 1, 1)
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_conv2d/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_conv2d/BiasAdd:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_conv2d/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_conv2d/BiasAdd:0 -> (-1, 10, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_activation/Sigmoid [Sigmoid]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_conv2d/BiasAdd:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_conv2d/BiasAdd:0 -> (-1, 10, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_activation/Sigmoid
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_activation/Sigmoid:0 -> (-1, 10, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_activation/mul [Mul]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_conv2d/BiasAdd:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_conv2d/BiasAdd:0 -> (-1, 10, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_activation/Sigmoid:0 -> (-1, 10, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_activation/mul
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_activation/mul:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_activation/mul:0 -> (-1, 10, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_expand_conv2d/BiasAdd [Conv]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_activation/mul:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_expand_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_expand_conv2d/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_expand_conv2d/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_activation/mul:0 -> (-1, 10, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_expand_conv2d/Conv2D/ReadVariableOp:0 -> (240, 10, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_expand_conv2d/BiasAdd/ReadVariableOp:0 -> (240)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 10, 1, 1)
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_expand_conv2d/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_expand_conv2d/BiasAdd
[04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 240
[04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 240, 1, 1)
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_expand_conv2d/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_expand_conv2d/BiasAdd:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_expand_conv2d/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_expand_conv2d/BiasAdd:0 -> (-1, 240, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_expand_activation/Sigmoid [Sigmoid]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_expand_conv2d/BiasAdd:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_expand_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_expand_conv2d/BiasAdd:0 -> (-1, 240, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_expand_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_expand_activation/Sigmoid
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_expand_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_expand_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_expand_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_expand_activation/Sigmoid:0 -> (-1, 240, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_excite/mul [Mul]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_activation/mul:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_expand_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_excite/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_activation/mul:0 -> (-1, 240, 64, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_expand_activation/Sigmoid:0 -> (-1, 240, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_excite/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_excite/mul [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_excite/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_excite/mul:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_excite/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_excite/mul:0 -> (-1, 240, 64, 64)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/project_conv2d/Conv2D [Conv] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_excite/mul:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/project_conv2d/Conv2D_weights_fused_bn [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/project_conv2d/Conv2D_bias_fused_bn [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/project_conv2d/Conv2D [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_excite/mul:0 -> (-1, 240, 64, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/project_conv2d/Conv2D_weights_fused_bn -> (40, 240, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/project_conv2d/Conv2D_bias_fused_bn -> (40)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 240, 64, 64) [04/18/2022-02:33:55] [V] [TRT] Registering layer: 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/project_conv2d/Conv2D for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/project_conv2d/Conv2D [04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 40 [04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 40, 64, 64) [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/project_bn/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/project_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/project_conv2d/Conv2D [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/project_bn/FusedBatchNormV3:0 -> (-1, 40, 64, 64)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/add/add [Add] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/project_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/project_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/add/add [Add] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/project_bn/FusedBatchNormV3:0 -> (-1, 40, 64, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/project_bn/FusedBatchNormV3:0 -> (-1, 40, 64, 64)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/add/add for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/add/add 
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/add/add:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/add/add:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/add/add [Add] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/add/add:0 -> (-1, 40, 64, 64)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_conv2d/Conv2D [Conv] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/add/add:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_conv2d/Conv2D_weights_fused_bn [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_conv2d/Conv2D_bias_fused_bn [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_conv2d/Conv2D [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/add/add:0 -> (-1, 40, 64, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_conv2d/Conv2D_weights_fused_bn -> (240, 40, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_conv2d/Conv2D_bias_fused_bn -> (240)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 40, 64, 64) [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_conv2d/Conv2D for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_conv2d/Conv2D [04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), 
dilations: (1, 1), numOutputs: 240 [04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 240, 64, 64) [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_bn/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_conv2d/Conv2D [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_bn/FusedBatchNormV3:0 -> (-1, 240, 64, 64)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/conv/BiasAdd [Conv] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/add/add:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/conv/BiasAdd_weights_fused_bn [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/conv/BiasAdd_bias_fused_bn [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/conv/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/add/add:0 -> (-1, 40, 64, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/conv/BiasAdd_weights_fused_bn -> (64, 40, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/conv/BiasAdd_bias_fused_bn -> (64)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 40, 64, 64) [04/18/2022-02:33:55] [V] [TRT] Registering layer: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/conv/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/conv/BiasAdd [04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 64, 64, 64) [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/conv/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_activation/Sigmoid [Sigmoid] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_bn/FusedBatchNormV3:0 -> (-1, 240, 64, 64)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_activation/Sigmoid [04/18/2022-02:33:55] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_activation/Sigmoid:0 -> (-1, 240, 64, 64)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/batchnorm/FusedBatchNormV3__968 [Transpose] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/batchnorm/FusedBatchNormV3__968 [Transpose] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/batchnorm/FusedBatchNormV3__968 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/batchnorm/FusedBatchNormV3__968 [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/batchnorm/FusedBatchNormV3__968:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/batchnorm/FusedBatchNormV3__968:0 [04/18/2022-02:33:55] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/batchnorm/FusedBatchNormV3__968 [Transpose] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/batchnorm/FusedBatchNormV3__968:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_activation/mul [Mul] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_bn/FusedBatchNormV3:0 -> (-1, 240, 64, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_activation/Sigmoid:0 -> (-1, 240, 64, 64)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_activation/mul [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_activation/mul:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_activation/mul:0 -> (-1, 240, 64, 64)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/stack_Unsqueeze__971 [Unsqueeze] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/batchnorm/FusedBatchNormV3__968:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/stack_Unsqueeze__971 [Unsqueeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/batchnorm/FusedBatchNormV3__968:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Original shape: (_, 64, 64, 64), unsqueezing to: (_, _, _, _, _) [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/stack_Unsqueeze__971 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/stack_Unsqueeze__971 [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/stack_Unsqueeze__971:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/stack_Unsqueeze__971:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/stack_Unsqueeze__971 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/stack_Unsqueeze__971:0 -> (-1, 64, 64, 64, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_conv2d/depthwise [Conv] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_activation/mul:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_conv2d/depthwise_weights_fused_bn [04/18/2022-02:33:55] [V] [TRT] Searching for input: 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_conv2d/depthwise_bias_fused_bn [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_conv2d/depthwise [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_activation/mul:0 -> (-1, 240, 64, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_conv2d/depthwise_weights_fused_bn -> (240, 1, 3, 3)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_conv2d/depthwise_bias_fused_bn -> (240)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 240, 64, 64) [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_conv2d/depthwise for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_conv2d/depthwise [04/18/2022-02:33:55] [V] [TRT] Using kernel: (3, 3), strides: (2, 2), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 240 [04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 240, 32, 32) [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_bn/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_conv2d/depthwise [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_bn/FusedBatchNormV3:0 -> (-1, 240, 32, 32)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_activation/Sigmoid [Sigmoid] [04/18/2022-02:33:55] [V] [TRT] Searching for input: 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_bn/FusedBatchNormV3:0 -> (-1, 240, 32, 32)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_activation/Sigmoid [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_activation/Sigmoid:0 -> (-1, 240, 32, 32)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_activation/mul [Mul] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_bn/FusedBatchNormV3:0 -> (-1, 240, 32, 32)[FLOAT]], 
[StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_activation/Sigmoid:0 -> (-1, 240, 32, 32)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_activation/mul [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_activation/mul:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_activation/mul:0 -> (-1, 240, 32, 32)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_squeeze/Mean [ReduceMean] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_activation/mul:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_squeeze/Mean [ReduceMean] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_activation/mul:0 -> (-1, 240, 32, 32)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_squeeze/Mean for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_squeeze/Mean [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_squeeze/Mean:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_squeeze/Mean:0 [04/18/2022-02:33:55] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_squeeze/Mean [ReduceMean] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_squeeze/Mean:0 -> (-1, 240, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_reduce_conv2d/BiasAdd [Conv] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_squeeze/Mean:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_reduce_conv2d/Conv2D/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_reduce_conv2d/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_reduce_conv2d/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_squeeze/Mean:0 -> (-1, 240, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_reduce_conv2d/Conv2D/ReadVariableOp:0 -> (10, 240, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_reduce_conv2d/BiasAdd/ReadVariableOp:0 -> (10)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 240, 1, 1) [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_reduce_conv2d/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_reduce_conv2d/BiasAdd [04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 10 [04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 10, 1, 1) [04/18/2022-02:33:55] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_reduce_conv2d/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_reduce_conv2d/BiasAdd:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_reduce_conv2d/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_reduce_conv2d/BiasAdd:0 -> (-1, 10, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_reduce_activation/Sigmoid [Sigmoid] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_reduce_conv2d/BiasAdd:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_reduce_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_reduce_conv2d/BiasAdd:0 -> (-1, 10, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_reduce_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_reduce_activation/Sigmoid [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_reduce_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_reduce_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_reduce_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_reduce_activation/Sigmoid:0 -> (-1, 10, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_reduce_activation/mul 
[Mul] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_reduce_conv2d/BiasAdd:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_reduce_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_reduce_activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_reduce_conv2d/BiasAdd:0 -> (-1, 10, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_reduce_activation/Sigmoid:0 -> (-1, 10, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_reduce_activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_reduce_activation/mul [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_reduce_activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_reduce_activation/mul:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_reduce_activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_reduce_activation/mul:0 -> (-1, 10, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_expand_conv2d/BiasAdd [Conv] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_reduce_activation/mul:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_expand_conv2d/Conv2D/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_expand_conv2d/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_expand_conv2d/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_reduce_activation/mul:0 -> (-1, 10, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_expand_conv2d/Conv2D/ReadVariableOp:0 -> (240, 10, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_expand_conv2d/BiasAdd/ReadVariableOp:0 -> (240)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 10, 1, 1) [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_expand_conv2d/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_expand_conv2d/BiasAdd [04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 240 [04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 240, 1, 1) [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_expand_conv2d/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_expand_conv2d/BiasAdd:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_expand_conv2d/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_expand_conv2d/BiasAdd:0 -> (-1, 240, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_expand_activation/Sigmoid [Sigmoid] [04/18/2022-02:33:55] [V] [TRT] Searching for input: 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_expand_conv2d/BiasAdd:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_expand_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_expand_conv2d/BiasAdd:0 -> (-1, 240, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_expand_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_expand_activation/Sigmoid
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_expand_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_expand_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_expand_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_expand_activation/Sigmoid:0 -> (-1, 240, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_excite/mul [Mul]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_activation/mul:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_expand_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_excite/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_activation/mul:0 -> (-1, 240, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_expand_activation/Sigmoid:0 -> (-1, 240, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_excite/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_excite/mul
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_excite/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_excite/mul:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_excite/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_excite/mul:0 -> (-1, 240, 32, 32)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/project_conv2d/Conv2D [Conv]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_excite/mul:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/project_conv2d/Conv2D_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/project_conv2d/Conv2D_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/project_conv2d/Conv2D [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_excite/mul:0 -> (-1, 240, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/project_conv2d/Conv2D_weights_fused_bn -> (80, 240, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/project_conv2d/Conv2D_bias_fused_bn -> (80)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 240, 32, 32)
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/project_conv2d/Conv2D for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/project_conv2d/Conv2D
[04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 80
[04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 80, 32, 32)
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/project_bn/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/project_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/project_conv2d/Conv2D [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/project_bn/FusedBatchNormV3:0 -> (-1, 80, 32, 32)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_conv2d/Conv2D [Conv]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/project_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_conv2d/Conv2D [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/project_bn/FusedBatchNormV3:0 -> (-1, 80, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_conv2d/Conv2D/ReadVariableOp:0 -> (480, 80, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 80, 32, 32)
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_conv2d/Conv2D for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_conv2d/Conv2D
[04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 480
[04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 480, 32, 32)
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_conv2d/Conv2D:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_conv2d/Conv2D:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_conv2d/Conv2D [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_conv2d/Conv2D:0 -> (-1, 480, 32, 32)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_bn/FusedBatchNormV3 [BatchNormalization]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_conv2d/Conv2D:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_bn/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_bn/ReadVariableOp_1:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_bn/FusedBatchNormV3/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_bn/FusedBatchNormV3/ReadVariableOp_1:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_bn/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_conv2d/Conv2D:0 -> (-1, 480, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_bn/ReadVariableOp:0 -> (480)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_bn/ReadVariableOp_1:0 -> (480)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_bn/FusedBatchNormV3/ReadVariableOp:0 -> (480)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_bn/FusedBatchNormV3/ReadVariableOp_1:0 -> (480)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_bn/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_bn/FusedBatchNormV3
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_bn/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_bn/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_bn/FusedBatchNormV3:0 -> (-1, 480, 32, 32)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_activation/Sigmoid [Sigmoid]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_bn/FusedBatchNormV3:0 -> (-1, 480, 32, 32)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_activation/Sigmoid
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_activation/Sigmoid:0 -> (-1, 480, 32, 32)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_activation/mul [Mul]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_bn/FusedBatchNormV3:0 -> (-1, 480, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_activation/Sigmoid:0 -> (-1, 480, 32, 32)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_activation/mul
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_activation/mul:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_activation/mul:0 -> (-1, 480, 32, 32)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_conv2d/depthwise [Conv]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_activation/mul:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_conv2d/depthwise_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_conv2d/depthwise_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_conv2d/depthwise [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_activation/mul:0 -> (-1, 480, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_conv2d/depthwise_weights_fused_bn -> (480, 1, 3, 3)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_conv2d/depthwise_bias_fused_bn -> (480)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 480, 32, 32)
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_conv2d/depthwise for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_conv2d/depthwise
[04/18/2022-02:33:55] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 480
[04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 480, 32, 32)
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_bn/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_conv2d/depthwise [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_bn/FusedBatchNormV3:0 -> (-1, 480, 32, 32)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_activation/Sigmoid [Sigmoid]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_bn/FusedBatchNormV3:0 -> (-1, 480, 32, 32)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_activation/Sigmoid
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_activation/Sigmoid:0 -> (-1, 480, 32, 32)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_activation/mul [Mul]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_bn/FusedBatchNormV3:0 -> (-1, 480, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_activation/Sigmoid:0 -> (-1, 480, 32, 32)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_activation/mul
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_activation/mul:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_activation/mul:0 -> (-1, 480, 32, 32)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_squeeze/Mean [ReduceMean]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_activation/mul:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_squeeze/Mean [ReduceMean] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_activation/mul:0 -> (-1, 480, 32, 32)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_squeeze/Mean for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_squeeze/Mean
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_squeeze/Mean:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_squeeze/Mean:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_squeeze/Mean [ReduceMean] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_squeeze/Mean:0 -> (-1, 480, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_conv2d/BiasAdd [Conv]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_squeeze/Mean:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_conv2d/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_conv2d/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_squeeze/Mean:0 -> (-1, 480, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_conv2d/Conv2D/ReadVariableOp:0 -> (20, 480, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_conv2d/BiasAdd/ReadVariableOp:0 -> (20)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 480, 1, 1)
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_conv2d/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_conv2d/BiasAdd
[04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 20
[04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 20, 1, 1)
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_conv2d/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_conv2d/BiasAdd:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_conv2d/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_conv2d/BiasAdd:0 -> (-1, 20, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_activation/Sigmoid [Sigmoid]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_conv2d/BiasAdd:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_conv2d/BiasAdd:0 -> (-1, 20, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_activation/Sigmoid
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_activation/Sigmoid:0 -> (-1, 20, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_activation/mul [Mul]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_conv2d/BiasAdd:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_conv2d/BiasAdd:0 -> (-1, 20, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_activation/Sigmoid:0 -> (-1, 20, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_activation/mul
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_activation/mul:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_activation/mul:0 -> (-1, 20, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_expand_conv2d/BiasAdd [Conv]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_activation/mul:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_expand_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_expand_conv2d/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_expand_conv2d/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_activation/mul:0 -> (-1, 20, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_expand_conv2d/Conv2D/ReadVariableOp:0 -> (480, 20, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_expand_conv2d/BiasAdd/ReadVariableOp:0 -> (480)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 20, 1, 1)
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_expand_conv2d/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_expand_conv2d/BiasAdd
[04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 480
[04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 480, 1, 1)
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_expand_conv2d/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_expand_conv2d/BiasAdd:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_expand_conv2d/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_expand_conv2d/BiasAdd:0 -> (-1, 480, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_expand_activation/Sigmoid [Sigmoid]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_expand_conv2d/BiasAdd:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_expand_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_expand_conv2d/BiasAdd:0 -> (-1, 480, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_expand_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_expand_activation/Sigmoid
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_expand_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_expand_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_expand_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_expand_activation/Sigmoid:0 -> (-1, 480, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_excite/mul [Mul]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_activation/mul:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_expand_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_excite/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_activation/mul:0 -> (-1, 480, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_expand_activation/Sigmoid:0 -> (-1, 480, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_excite/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_excite/mul
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_excite/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_excite/mul:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_excite/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_excite/mul:0 -> (-1, 480, 32, 32)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/project_conv2d/Conv2D [Conv]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_excite/mul:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/project_conv2d/Conv2D_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/project_conv2d/Conv2D_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/project_conv2d/Conv2D [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_excite/mul:0 -> (-1, 480, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/project_conv2d/Conv2D_weights_fused_bn -> (80, 480, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/project_conv2d/Conv2D_bias_fused_bn -> (80)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 480, 32, 32)
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/project_conv2d/Conv2D for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/project_conv2d/Conv2D
[04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 80
[04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 80, 32, 32)
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/project_bn/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/project_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/project_conv2d/Conv2D [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/project_bn/FusedBatchNormV3:0 -> (-1, 80, 32, 32)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/add/add [Add]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/project_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/project_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/add/add [Add] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/project_bn/FusedBatchNormV3:0 -> (-1, 80, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/project_bn/FusedBatchNormV3:0 -> (-1, 80, 32, 32)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/add/add for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/add/add
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/add/add:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/add/add:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/add/add [Add] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/add/add:0 -> (-1, 80, 32, 32)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/expand_conv2d/Conv2D [Conv]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/add/add:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/expand_conv2d/Conv2D_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/expand_conv2d/Conv2D_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/expand_conv2d/Conv2D [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/add/add:0 -> (-1, 80, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/expand_conv2d/Conv2D_weights_fused_bn -> (480, 80, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/expand_conv2d/Conv2D_bias_fused_bn -> (480)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 80, 32, 32)
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/expand_conv2d/Conv2D for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/expand_conv2d/Conv2D
[04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 480
[04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 480, 32, 32)
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/expand_bn/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/expand_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/expand_conv2d/Conv2D [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/expand_bn/FusedBatchNormV3:0 -> (-1, 480, 32, 32)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/expand_activation/Sigmoid [Sigmoid]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/expand_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/expand_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/expand_bn/FusedBatchNormV3:0 -> (-1, 480, 32, 32)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/expand_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/expand_activation/Sigmoid
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/expand_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/expand_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/expand_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/expand_activation/Sigmoid:0 -> (-1, 480, 32, 32)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/expand_activation/mul [Mul]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/expand_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/expand_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/expand_activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/expand_bn/FusedBatchNormV3:0 -> (-1, 480, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/expand_activation/Sigmoid:0 -> (-1, 480, 32, 32)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/expand_activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/expand_activation/mul
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/expand_activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/expand_activation/mul:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/expand_activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/expand_activation/mul:0 -> (-1, 480, 32, 32)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/depthwise_conv2d/depthwise [Conv]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/expand_activation/mul:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/depthwise_conv2d/depthwise_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/depthwise_conv2d/depthwise_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/depthwise_conv2d/depthwise [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/expand_activation/mul:0 -> (-1, 480, 32, 32)[FLOAT]],
[StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/depthwise_conv2d/depthwise_weights_fused_bn -> (480, 1, 3, 3)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/depthwise_conv2d/depthwise_bias_fused_bn -> (480)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 480, 32, 32) [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/depthwise_conv2d/depthwise for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/depthwise_conv2d/depthwise [04/18/2022-02:33:55] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 480 [04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 480, 32, 32) [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/depthwise_bn/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/depthwise_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/depthwise_conv2d/depthwise [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/depthwise_bn/FusedBatchNormV3:0 -> (-1, 480, 32, 32)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/depthwise_activation/Sigmoid [Sigmoid] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/depthwise_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/depthwise_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/depthwise_bn/FusedBatchNormV3:0 -> (-1, 480, 32, 32)[FLOAT]], [04/18/2022-02:33:55] [V] 
[TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/depthwise_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/depthwise_activation/Sigmoid [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/depthwise_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/depthwise_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/depthwise_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/depthwise_activation/Sigmoid:0 -> (-1, 480, 32, 32)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/depthwise_activation/mul [Mul] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/depthwise_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/depthwise_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/depthwise_activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/depthwise_bn/FusedBatchNormV3:0 -> (-1, 480, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/depthwise_activation/Sigmoid:0 -> (-1, 480, 32, 32)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/depthwise_activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/depthwise_activation/mul [04/18/2022-02:33:55] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/depthwise_activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/depthwise_activation/mul:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/depthwise_activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/depthwise_activation/mul:0 -> (-1, 480, 32, 32)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_squeeze/Mean [ReduceMean] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/depthwise_activation/mul:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_squeeze/Mean [ReduceMean] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/depthwise_activation/mul:0 -> (-1, 480, 32, 32)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_squeeze/Mean for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_squeeze/Mean [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_squeeze/Mean:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_squeeze/Mean:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_squeeze/Mean [ReduceMean] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_squeeze/Mean:0 -> (-1, 480, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_reduce_conv2d/BiasAdd [Conv] [04/18/2022-02:33:55] [V] [TRT] Searching for input: 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_squeeze/Mean:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_reduce_conv2d/Conv2D/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_reduce_conv2d/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_reduce_conv2d/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_squeeze/Mean:0 -> (-1, 480, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_reduce_conv2d/Conv2D/ReadVariableOp:0 -> (20, 480, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_reduce_conv2d/BiasAdd/ReadVariableOp:0 -> (20)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 480, 1, 1) [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_reduce_conv2d/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_reduce_conv2d/BiasAdd [04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 20 [04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 20, 1, 1) [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_reduce_conv2d/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_reduce_conv2d/BiasAdd:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_reduce_conv2d/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_reduce_conv2d/BiasAdd:0 -> 
(-1, 20, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_reduce_activation/Sigmoid [Sigmoid] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_reduce_conv2d/BiasAdd:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_reduce_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_reduce_conv2d/BiasAdd:0 -> (-1, 20, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_reduce_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_reduce_activation/Sigmoid [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_reduce_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_reduce_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_reduce_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_reduce_activation/Sigmoid:0 -> (-1, 20, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_reduce_activation/mul [Mul] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_reduce_conv2d/BiasAdd:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_reduce_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_reduce_activation/mul [Mul] inputs: 
[StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_reduce_conv2d/BiasAdd:0 -> (-1, 20, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_reduce_activation/Sigmoid:0 -> (-1, 20, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_reduce_activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_reduce_activation/mul [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_reduce_activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_reduce_activation/mul:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_reduce_activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_reduce_activation/mul:0 -> (-1, 20, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_expand_conv2d/BiasAdd [Conv] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_reduce_activation/mul:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_expand_conv2d/Conv2D/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_expand_conv2d/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_expand_conv2d/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_reduce_activation/mul:0 -> (-1, 20, 1, 1)[FLOAT]], 
[StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_expand_conv2d/Conv2D/ReadVariableOp:0 -> (480, 20, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_expand_conv2d/BiasAdd/ReadVariableOp:0 -> (480)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 20, 1, 1) [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_expand_conv2d/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_expand_conv2d/BiasAdd [04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 480 [04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 480, 1, 1) [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_expand_conv2d/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_expand_conv2d/BiasAdd:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_expand_conv2d/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_expand_conv2d/BiasAdd:0 -> (-1, 480, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_expand_activation/Sigmoid [Sigmoid] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_expand_conv2d/BiasAdd:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_expand_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_expand_conv2d/BiasAdd:0 -> (-1, 480, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_expand_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_expand_activation/Sigmoid [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_expand_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_expand_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_expand_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_expand_activation/Sigmoid:0 -> (-1, 480, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_excite/mul [Mul] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/depthwise_activation/mul:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_expand_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_excite/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/depthwise_activation/mul:0 -> (-1, 480, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_expand_activation/Sigmoid:0 -> (-1, 480, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_excite/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_excite/mul [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_excite/mul:0 for ONNX tensor: 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_excite/mul:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_excite/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_excite/mul:0 -> (-1, 480, 32, 32)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/project_conv2d/Conv2D [Conv] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_excite/mul:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/project_conv2d/Conv2D_weights_fused_bn [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/project_conv2d/Conv2D_bias_fused_bn [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/project_conv2d/Conv2D [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_excite/mul:0 -> (-1, 480, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/project_conv2d/Conv2D_weights_fused_bn -> (80, 480, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/project_conv2d/Conv2D_bias_fused_bn -> (80)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 480, 32, 32) [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/project_conv2d/Conv2D for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/project_conv2d/Conv2D [04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 80 [04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 80, 32, 32) 
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/project_bn/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/project_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/project_conv2d/Conv2D [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/project_bn/FusedBatchNormV3:0 -> (-1, 80, 32, 32)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/add/add [Add] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/project_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/add/add:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/add/add [Add] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/project_bn/FusedBatchNormV3:0 -> (-1, 80, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/add/add:0 -> (-1, 80, 32, 32)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/add/add for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/add/add [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/add/add:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/add/add:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/add/add [Add] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/add/add:0 -> (-1, 80, 32, 32)[FLOAT]], 
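That completes stack_3/block_2: 1x1 expand conv (80 to 480), swish (each Sigmoid + Mul pair in the log), 3x3 depthwise conv, squeeze-excite (Mean, 1x1 reduce to 20, swish, 1x1 expand to 480, Sigmoid, excite Mul), 1x1 project conv back to 80, then the residual Add. A hedged numpy sketch of the same op sequence — weights are random and only the channel counts mirror the log; the 32x32 spatial size is reduced to 8x8 for speed:

```python
import numpy as np

def swish(x):
    # The Sigmoid node followed by a Mul with its own input, as in the log.
    return x * (1.0 / (1.0 + np.exp(-x)))

def conv1x1(x, w, b):
    # x: (Cin, H, W), w: (Cout, Cin) -- a 1x1 conv is a per-pixel matmul.
    return np.einsum('oc,chw->ohw', w, x) + b[:, None, None]

def depthwise3x3(x, w, b):
    # Per-channel 3x3 conv, stride 1, pad 1. x: (C, H, W), w: (C, 3, 3).
    c, h, wd = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            out += w[:, i, j, None, None] * xp[:, i:i + h, j:j + wd]
    return out + b[:, None, None]

rng = np.random.default_rng(0)
H = W = 8                                   # 32 in the log; smaller here
x = rng.normal(size=(80, H, W))

h = swish(conv1x1(x, rng.normal(size=(480, 80)) * 0.05, np.zeros(480)))
h = swish(depthwise3x3(h, rng.normal(size=(480, 3, 3)) * 0.05, np.zeros(480)))

s = h.mean(axis=(1, 2))                     # se_squeeze/Mean -> (480,)
s = swish(rng.normal(size=(20, 480)) * 0.05 @ s)                     # se_reduce
s = 1.0 / (1.0 + np.exp(-(rng.normal(size=(480, 20)) * 0.05 @ s)))   # se_expand + Sigmoid
h = h * s[:, None, None]                    # se_excite/mul, (480,1,1) broadcast

out = conv1x1(h, rng.normal(size=(80, 480)) * 0.05, np.zeros(80)) + x  # project + Add
assert out.shape == (80, H, W)
```

Note the broadcast in `se_excite/mul` matches the log's shapes exactly: a (-1, 480, 32, 32) tensor multiplied by a (-1, 480, 1, 1) gate.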
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/expand_conv2d/Conv2D [Conv] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/add/add:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/expand_conv2d/Conv2D_weights_fused_bn [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/expand_conv2d/Conv2D_bias_fused_bn [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/expand_conv2d/Conv2D [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/add/add:0 -> (-1, 80, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/expand_conv2d/Conv2D_weights_fused_bn -> (480, 80, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/expand_conv2d/Conv2D_bias_fused_bn -> (480)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 80, 32, 32) [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/expand_conv2d/Conv2D for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/expand_conv2d/Conv2D [04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 480 [04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 480, 32, 32) [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/expand_bn/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/expand_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/expand_conv2d/Conv2D [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/expand_bn/FusedBatchNormV3:0 -> (-1, 480, 32, 32)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/expand_activation/Sigmoid [Sigmoid] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/expand_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/expand_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/expand_bn/FusedBatchNormV3:0 -> (-1, 480, 32, 32)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/expand_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/expand_activation/Sigmoid [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/expand_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/expand_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/expand_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/expand_activation/Sigmoid:0 -> (-1, 480, 32, 32)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/expand_activation/mul [Mul] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/expand_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/expand_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/expand_activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/expand_bn/FusedBatchNormV3:0 -> (-1, 480, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/expand_activation/Sigmoid:0 -> (-1, 480, 32, 32)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/expand_activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/expand_activation/mul [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/expand_activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/expand_activation/mul:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/expand_activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/expand_activation/mul:0 -> (-1, 480, 32, 32)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/depthwise_conv2d/depthwise [Conv] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/expand_activation/mul:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/depthwise_conv2d/depthwise_weights_fused_bn [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/depthwise_conv2d/depthwise_bias_fused_bn [04/18/2022-02:33:55] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/depthwise_conv2d/depthwise [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/expand_activation/mul:0 -> (-1, 480, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/depthwise_conv2d/depthwise_weights_fused_bn -> (480, 1, 5, 5)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/depthwise_conv2d/depthwise_bias_fused_bn -> (480)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 480, 32, 32)
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/depthwise_conv2d/depthwise for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/depthwise_conv2d/depthwise
[04/18/2022-02:33:55] [V] [TRT] Using kernel: (5, 5), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 480
[04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 480, 32, 32)
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/depthwise_bn/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/depthwise_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/depthwise_conv2d/depthwise [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/depthwise_bn/FusedBatchNormV3:0 -> (-1, 480, 32, 32)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/depthwise_activation/Sigmoid [Sigmoid]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/depthwise_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/depthwise_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/depthwise_bn/FusedBatchNormV3:0 -> (-1, 480, 32, 32)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/depthwise_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/depthwise_activation/Sigmoid
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/depthwise_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/depthwise_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/depthwise_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/depthwise_activation/Sigmoid:0 -> (-1, 480, 32, 32)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/depthwise_activation/mul [Mul]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/depthwise_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/depthwise_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/depthwise_activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/depthwise_bn/FusedBatchNormV3:0 -> (-1, 480, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/depthwise_activation/Sigmoid:0 -> (-1, 480, 32, 32)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/depthwise_activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/depthwise_activation/mul
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/depthwise_activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/depthwise_activation/mul:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/depthwise_activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/depthwise_activation/mul:0 -> (-1, 480, 32, 32)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_squeeze/Mean [ReduceMean]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/depthwise_activation/mul:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_squeeze/Mean [ReduceMean] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/depthwise_activation/mul:0 -> (-1, 480, 32, 32)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_squeeze/Mean for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_squeeze/Mean
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_squeeze/Mean:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_squeeze/Mean:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_squeeze/Mean [ReduceMean] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_squeeze/Mean:0 -> (-1, 480, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_reduce_conv2d/BiasAdd [Conv]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_squeeze/Mean:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_reduce_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_reduce_conv2d/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_reduce_conv2d/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_squeeze/Mean:0 -> (-1, 480, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_reduce_conv2d/Conv2D/ReadVariableOp:0 -> (20, 480, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_reduce_conv2d/BiasAdd/ReadVariableOp:0 -> (20)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 480, 1, 1)
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_reduce_conv2d/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_reduce_conv2d/BiasAdd
[04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 20
[04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 20, 1, 1)
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_reduce_conv2d/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_reduce_conv2d/BiasAdd:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_reduce_conv2d/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_reduce_conv2d/BiasAdd:0 -> (-1, 20, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_reduce_activation/Sigmoid [Sigmoid]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_reduce_conv2d/BiasAdd:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_reduce_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_reduce_conv2d/BiasAdd:0 -> (-1, 20, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_reduce_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_reduce_activation/Sigmoid
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_reduce_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_reduce_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_reduce_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_reduce_activation/Sigmoid:0 -> (-1, 20, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_reduce_activation/mul [Mul]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_reduce_conv2d/BiasAdd:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_reduce_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_reduce_activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_reduce_conv2d/BiasAdd:0 -> (-1, 20, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_reduce_activation/Sigmoid:0 -> (-1, 20, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_reduce_activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_reduce_activation/mul
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_reduce_activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_reduce_activation/mul:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_reduce_activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_reduce_activation/mul:0 -> (-1, 20, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_expand_conv2d/BiasAdd [Conv]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_reduce_activation/mul:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_expand_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_expand_conv2d/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_expand_conv2d/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_reduce_activation/mul:0 -> (-1, 20, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_expand_conv2d/Conv2D/ReadVariableOp:0 -> (480, 20, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_expand_conv2d/BiasAdd/ReadVariableOp:0 -> (480)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 20, 1, 1)
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_expand_conv2d/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_expand_conv2d/BiasAdd
[04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 480
[04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 480, 1, 1)
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_expand_conv2d/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_expand_conv2d/BiasAdd:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_expand_conv2d/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_expand_conv2d/BiasAdd:0 -> (-1, 480, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_expand_activation/Sigmoid [Sigmoid]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_expand_conv2d/BiasAdd:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_expand_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_expand_conv2d/BiasAdd:0 -> (-1, 480, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_expand_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_expand_activation/Sigmoid
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_expand_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_expand_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_expand_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_expand_activation/Sigmoid:0 -> (-1, 480, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_excite/mul [Mul]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/depthwise_activation/mul:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_expand_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_excite/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/depthwise_activation/mul:0 -> (-1, 480, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_expand_activation/Sigmoid:0 -> (-1, 480, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_excite/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_excite/mul
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_excite/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_excite/mul:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_excite/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_excite/mul:0 -> (-1, 480, 32, 32)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/project_conv2d/Conv2D [Conv]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_excite/mul:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/project_conv2d/Conv2D_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/project_conv2d/Conv2D_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/project_conv2d/Conv2D [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_excite/mul:0 -> (-1, 480, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/project_conv2d/Conv2D_weights_fused_bn -> (112, 480, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/project_conv2d/Conv2D_bias_fused_bn -> (112)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 480, 32, 32)
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/project_conv2d/Conv2D for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/project_conv2d/Conv2D
[04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 112
[04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 112, 32, 32)
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/project_bn/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/project_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/project_conv2d/Conv2D [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/project_bn/FusedBatchNormV3:0 -> (-1, 112, 32, 32)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_conv2d/Conv2D [Conv]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/project_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_conv2d/Conv2D [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/project_bn/FusedBatchNormV3:0 -> (-1, 112, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_conv2d/Conv2D/ReadVariableOp:0 -> (672, 112, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 112, 32, 32)
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_conv2d/Conv2D for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_conv2d/Conv2D
[04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 672
[04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 672, 32, 32)
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_conv2d/Conv2D:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_conv2d/Conv2D:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_conv2d/Conv2D [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_conv2d/Conv2D:0 -> (-1, 672, 32, 32)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_bn/FusedBatchNormV3 [BatchNormalization]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_conv2d/Conv2D:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_bn/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_bn/ReadVariableOp_1:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_bn/FusedBatchNormV3/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_bn/FusedBatchNormV3/ReadVariableOp_1:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_bn/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_conv2d/Conv2D:0 -> (-1, 672, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_bn/ReadVariableOp:0 -> (672)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_bn/ReadVariableOp_1:0 -> (672)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_bn/FusedBatchNormV3/ReadVariableOp:0 -> (672)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_bn/FusedBatchNormV3/ReadVariableOp_1:0 -> (672)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_bn/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_bn/FusedBatchNormV3
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_bn/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_bn/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_bn/FusedBatchNormV3:0 -> (-1, 672, 32, 32)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_activation/Sigmoid [Sigmoid]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_bn/FusedBatchNormV3:0 -> (-1, 672, 32, 32)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_activation/Sigmoid
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_activation/Sigmoid:0 -> (-1, 672, 32, 32)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_activation/mul [Mul]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_bn/FusedBatchNormV3:0 -> (-1, 672, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_activation/Sigmoid:0 -> (-1, 672, 32, 32)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_activation/mul
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_activation/mul:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_activation/mul:0 -> (-1, 672, 32, 32)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_conv2d/depthwise [Conv]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_activation/mul:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_conv2d/depthwise_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_conv2d/depthwise_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_conv2d/depthwise [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_activation/mul:0 -> (-1, 672, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_conv2d/depthwise_weights_fused_bn -> (672, 1, 5, 5)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_conv2d/depthwise_bias_fused_bn -> (672)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 672, 32, 32)
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_conv2d/depthwise for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_conv2d/depthwise
[04/18/2022-02:33:55] [V] [TRT] Using kernel: (5, 5), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 672
[04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 672, 32, 32)
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_bn/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_conv2d/depthwise [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_bn/FusedBatchNormV3:0 -> (-1, 672, 32, 32)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_activation/Sigmoid [Sigmoid]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_bn/FusedBatchNormV3:0 -> (-1, 672, 32, 32)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_activation/Sigmoid
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_activation/Sigmoid:0 -> (-1, 672, 32, 32)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_activation/mul [Mul]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_bn/FusedBatchNormV3:0 -> (-1, 672, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_activation/Sigmoid:0 -> (-1, 672, 32, 32)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_activation/mul
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_activation/mul:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_activation/mul:0 -> (-1, 672, 32, 32)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_squeeze/Mean [ReduceMean]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_activation/mul:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_squeeze/Mean [ReduceMean] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_activation/mul:0 -> (-1, 672, 32, 32)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_squeeze/Mean for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_squeeze/Mean
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_squeeze/Mean:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_squeeze/Mean:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_squeeze/Mean [ReduceMean] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_squeeze/Mean:0 -> (-1, 672, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_conv2d/BiasAdd [Conv]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_squeeze/Mean:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_conv2d/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_conv2d/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_squeeze/Mean:0 -> (-1, 672, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_conv2d/Conv2D/ReadVariableOp:0 -> (28, 672, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_conv2d/BiasAdd/ReadVariableOp:0 -> (28)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 672, 1, 1)
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_conv2d/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_conv2d/BiasAdd
[04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 28
[04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 28, 1, 1)
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_conv2d/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_conv2d/BiasAdd:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_conv2d/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_conv2d/BiasAdd:0 -> (-1, 28, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_activation/Sigmoid [Sigmoid]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_conv2d/BiasAdd:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_conv2d/BiasAdd:0 -> (-1, 28, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_activation/Sigmoid
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_activation/Sigmoid:0 -> (-1, 28, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_activation/mul [Mul]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_conv2d/BiasAdd:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_conv2d/BiasAdd:0 -> (-1, 28, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_activation/Sigmoid:0 -> (-1, 28, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_activation/mul
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_activation/mul:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_activation/mul:0 -> (-1, 28, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_expand_conv2d/BiasAdd [Conv]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_activation/mul:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_expand_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_expand_conv2d/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_expand_conv2d/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_activation/mul:0 -> (-1, 28, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_expand_conv2d/Conv2D/ReadVariableOp:0 -> (672, 28, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_expand_conv2d/BiasAdd/ReadVariableOp:0 -> (672)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 28, 1, 1)
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_expand_conv2d/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_expand_conv2d/BiasAdd
[04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 672
[04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 672, 1, 1)
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_expand_conv2d/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_expand_conv2d/BiasAdd:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_expand_conv2d/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_expand_conv2d/BiasAdd:0 -> (-1, 672, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_expand_activation/Sigmoid [Sigmoid]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_expand_conv2d/BiasAdd:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_expand_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_expand_conv2d/BiasAdd:0 -> (-1, 672, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_expand_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_expand_activation/Sigmoid
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_expand_activation/Sigmoid:0 for ONNX tensor:
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_expand_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_expand_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_expand_activation/Sigmoid:0 -> (-1, 672, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_excite/mul [Mul] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_activation/mul:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_expand_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_excite/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_activation/mul:0 -> (-1, 672, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_expand_activation/Sigmoid:0 -> (-1, 672, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_excite/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_excite/mul [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_excite/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_excite/mul:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_excite/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_excite/mul:0 -> (-1, 672, 32, 32)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/project_conv2d/Conv2D [Conv] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_excite/mul:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/project_conv2d/Conv2D_weights_fused_bn [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/project_conv2d/Conv2D_bias_fused_bn [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/project_conv2d/Conv2D [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_excite/mul:0 -> (-1, 672, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/project_conv2d/Conv2D_weights_fused_bn -> (112, 672, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/project_conv2d/Conv2D_bias_fused_bn -> (112)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 672, 32, 32) [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/project_conv2d/Conv2D for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/project_conv2d/Conv2D [04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 112 [04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 112, 32, 32) [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/project_bn/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/project_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/project_conv2d/Conv2D [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/project_bn/FusedBatchNormV3:0 -> (-1, 112, 32, 32)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/add/add [Add] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/project_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/project_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/add/add [Add] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/project_bn/FusedBatchNormV3:0 -> (-1, 112, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/project_bn/FusedBatchNormV3:0 -> (-1, 112, 32, 32)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/add/add for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/add/add [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/add/add:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/add/add:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/add/add [Add] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/add/add:0 -> (-1, 112, 32, 32)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/expand_conv2d/Conv2D [Conv] [04/18/2022-02:33:55] [V] [TRT] Searching for input: 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/add/add:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/expand_conv2d/Conv2D_weights_fused_bn [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/expand_conv2d/Conv2D_bias_fused_bn [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/expand_conv2d/Conv2D [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/add/add:0 -> (-1, 112, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/expand_conv2d/Conv2D_weights_fused_bn -> (672, 112, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/expand_conv2d/Conv2D_bias_fused_bn -> (672)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 112, 32, 32) [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/expand_conv2d/Conv2D for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/expand_conv2d/Conv2D [04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 672 [04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 672, 32, 32) [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/expand_bn/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/expand_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/expand_conv2d/Conv2D [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/expand_bn/FusedBatchNormV3:0 -> (-1, 672, 32, 32)[FLOAT]], 
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/expand_activation/Sigmoid [Sigmoid] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/expand_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/expand_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/expand_bn/FusedBatchNormV3:0 -> (-1, 672, 32, 32)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/expand_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/expand_activation/Sigmoid [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/expand_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/expand_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/expand_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/expand_activation/Sigmoid:0 -> (-1, 672, 32, 32)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/expand_activation/mul [Mul] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/expand_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/expand_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/expand_activation/mul [Mul] inputs: 
[StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/expand_bn/FusedBatchNormV3:0 -> (-1, 672, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/expand_activation/Sigmoid:0 -> (-1, 672, 32, 32)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/expand_activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/expand_activation/mul [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/expand_activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/expand_activation/mul:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/expand_activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/expand_activation/mul:0 -> (-1, 672, 32, 32)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/depthwise_conv2d/depthwise [Conv] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/expand_activation/mul:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/depthwise_conv2d/depthwise_weights_fused_bn [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/depthwise_conv2d/depthwise_bias_fused_bn [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/depthwise_conv2d/depthwise [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/expand_activation/mul:0 -> (-1, 672, 32, 32)[FLOAT]], 
[StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/depthwise_conv2d/depthwise_weights_fused_bn -> (672, 1, 5, 5)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/depthwise_conv2d/depthwise_bias_fused_bn -> (672)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 672, 32, 32) [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/depthwise_conv2d/depthwise for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/depthwise_conv2d/depthwise [04/18/2022-02:33:55] [V] [TRT] Using kernel: (5, 5), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 672 [04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 672, 32, 32) [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/depthwise_bn/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/depthwise_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/depthwise_conv2d/depthwise [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/depthwise_bn/FusedBatchNormV3:0 -> (-1, 672, 32, 32)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/depthwise_activation/Sigmoid [Sigmoid] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/depthwise_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/depthwise_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/depthwise_bn/FusedBatchNormV3:0 -> (-1, 672, 32, 32)[FLOAT]], [04/18/2022-02:33:55] [V] 
[TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/depthwise_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/depthwise_activation/Sigmoid [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/depthwise_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/depthwise_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/depthwise_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/depthwise_activation/Sigmoid:0 -> (-1, 672, 32, 32)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/depthwise_activation/mul [Mul] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/depthwise_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/depthwise_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/depthwise_activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/depthwise_bn/FusedBatchNormV3:0 -> (-1, 672, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/depthwise_activation/Sigmoid:0 -> (-1, 672, 32, 32)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/depthwise_activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/depthwise_activation/mul [04/18/2022-02:33:55] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/depthwise_activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/depthwise_activation/mul:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/depthwise_activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/depthwise_activation/mul:0 -> (-1, 672, 32, 32)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_squeeze/Mean [ReduceMean] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/depthwise_activation/mul:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_squeeze/Mean [ReduceMean] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/depthwise_activation/mul:0 -> (-1, 672, 32, 32)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_squeeze/Mean for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_squeeze/Mean [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_squeeze/Mean:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_squeeze/Mean:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_squeeze/Mean [ReduceMean] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_squeeze/Mean:0 -> (-1, 672, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_reduce_conv2d/BiasAdd [Conv] [04/18/2022-02:33:55] [V] [TRT] Searching for input: 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_squeeze/Mean:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_reduce_conv2d/Conv2D/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_reduce_conv2d/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_reduce_conv2d/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_squeeze/Mean:0 -> (-1, 672, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_reduce_conv2d/Conv2D/ReadVariableOp:0 -> (28, 672, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_reduce_conv2d/BiasAdd/ReadVariableOp:0 -> (28)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 672, 1, 1) [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_reduce_conv2d/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_reduce_conv2d/BiasAdd [04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 28 [04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 28, 1, 1) [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_reduce_conv2d/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_reduce_conv2d/BiasAdd:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_reduce_conv2d/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_reduce_conv2d/BiasAdd:0 -> 
(-1, 28, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_reduce_activation/Sigmoid [Sigmoid] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_reduce_conv2d/BiasAdd:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_reduce_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_reduce_conv2d/BiasAdd:0 -> (-1, 28, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_reduce_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_reduce_activation/Sigmoid [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_reduce_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_reduce_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_reduce_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_reduce_activation/Sigmoid:0 -> (-1, 28, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_reduce_activation/mul [Mul] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_reduce_conv2d/BiasAdd:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_reduce_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_reduce_activation/mul [Mul] inputs: 
[StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_reduce_conv2d/BiasAdd:0 -> (-1, 28, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_reduce_activation/Sigmoid:0 -> (-1, 28, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_reduce_activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_reduce_activation/mul [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_reduce_activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_reduce_activation/mul:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_reduce_activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_reduce_activation/mul:0 -> (-1, 28, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_expand_conv2d/BiasAdd [Conv] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_reduce_activation/mul:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_expand_conv2d/Conv2D/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_expand_conv2d/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_expand_conv2d/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_reduce_activation/mul:0 -> (-1, 28, 1, 1)[FLOAT]], 
[StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_expand_conv2d/Conv2D/ReadVariableOp:0 -> (672, 28, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_expand_conv2d/BiasAdd/ReadVariableOp:0 -> (672)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 28, 1, 1) [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_expand_conv2d/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_expand_conv2d/BiasAdd [04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 672 [04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 672, 1, 1) [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_expand_conv2d/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_expand_conv2d/BiasAdd:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_expand_conv2d/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_expand_conv2d/BiasAdd:0 -> (-1, 672, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_expand_activation/Sigmoid [Sigmoid] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_expand_conv2d/BiasAdd:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_expand_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_expand_conv2d/BiasAdd:0 -> (-1, 672, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_expand_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_expand_activation/Sigmoid [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_expand_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_expand_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_expand_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_expand_activation/Sigmoid:0 -> (-1, 672, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_excite/mul [Mul] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/depthwise_activation/mul:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_expand_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_excite/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/depthwise_activation/mul:0 -> (-1, 672, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_expand_activation/Sigmoid:0 -> (-1, 672, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_excite/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_excite/mul [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_excite/mul:0 for ONNX tensor: 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_excite/mul:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_excite/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_excite/mul:0 -> (-1, 672, 32, 32)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/project_conv2d/Conv2D [Conv] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_excite/mul:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/project_conv2d/Conv2D_weights_fused_bn [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/project_conv2d/Conv2D_bias_fused_bn [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/project_conv2d/Conv2D [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_excite/mul:0 -> (-1, 672, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/project_conv2d/Conv2D_weights_fused_bn -> (112, 672, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/project_conv2d/Conv2D_bias_fused_bn -> (112)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 672, 32, 32) [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/project_conv2d/Conv2D for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/project_conv2d/Conv2D [04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 112 [04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 112, 32, 
32) [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/project_bn/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/project_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/project_conv2d/Conv2D [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/project_bn/FusedBatchNormV3:0 -> (-1, 112, 32, 32)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/add/add [Add] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/project_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/add/add:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/add/add [Add] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/project_bn/FusedBatchNormV3:0 -> (-1, 112, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/add/add:0 -> (-1, 112, 32, 32)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/add/add for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/add/add [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/add/add:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/add/add:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/add/add [Add] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/add/add:0 -> (-1, 112, 32, 32)[FLOAT]], 
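Each Conv record above reports `Using kernel ... strides ... prepadding ... postpadding ... dilations ...` followed by the resulting output dimensions. As a sanity check on those numbers, the arithmetic can be sketched in Python; `conv_out_dim` is a hypothetical helper, not part of trtexec, assuming the standard convolution output formula:

```python
def conv_out_dim(in_dim, kernel, stride=1, pre_pad=0, post_pad=0, dilation=1):
    """Spatial output size of a convolution along one axis, assuming
    out = floor((in + pre_pad + post_pad - dilation * (kernel - 1) - 1) / stride) + 1,
    which matches the dimensions TensorRT logs for each registered Conv layer."""
    return (in_dim + pre_pad + post_pad - dilation * (kernel - 1) - 1) // stride + 1

# The 1x1 project conv above: kernel (1, 1), stride (1, 1), no padding,
# so the 32x32 spatial extent is preserved and only the channel count
# changes (672 -> 112), exactly as the log reports.
assert conv_out_dim(32, kernel=1) == 32

# A padded 3x3 stride-2 conv, for comparison: 32 -> 16.
assert conv_out_dim(32, kernel=3, stride=2, pre_pad=1, post_pad=1) == 16
```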
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_conv2d/Conv2D [Conv] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/add/add:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_conv2d/Conv2D_weights_fused_bn [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_conv2d/Conv2D_bias_fused_bn [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_conv2d/Conv2D [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/add/add:0 -> (-1, 112, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_conv2d/Conv2D_weights_fused_bn -> (672, 112, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_conv2d/Conv2D_bias_fused_bn -> (672)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 112, 32, 32) [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_conv2d/Conv2D for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_conv2d/Conv2D [04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 672 [04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 672, 32, 32) [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_bn/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_conv2d/Conv2D [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_bn/FusedBatchNormV3:0 -> (-1, 672, 32, 32)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/conv/BiasAdd [Conv] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/add/add:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/conv/BiasAdd_weights_fused_bn [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/conv/BiasAdd_bias_fused_bn [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/conv/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/add/add:0 -> (-1, 112, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/conv/BiasAdd_weights_fused_bn -> (64, 112, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/conv/BiasAdd_bias_fused_bn -> (64)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 112, 32, 32) [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/conv/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/conv/BiasAdd [04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:55] [V] [TRT] 
Convolution output dimensions: (-1, 64, 32, 32) [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/conv/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_0_up_lvl_4/1x1_pre_sample/conv/BiasAdd [Conv] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/add/add:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_0_up_lvl_4/1x1_pre_sample/conv/BiasAdd_weights_fused_bn [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_0_up_lvl_4/1x1_pre_sample/conv/BiasAdd_bias_fused_bn [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_0_up_lvl_4/1x1_pre_sample/conv/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/add/add:0 -> (-1, 112, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_0_up_lvl_4/1x1_pre_sample/conv/BiasAdd_weights_fused_bn -> (64, 112, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_0_up_lvl_4/1x1_pre_sample/conv/BiasAdd_bias_fused_bn -> (64)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 112, 32, 32) 
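Note that the same backbone tensor (`stack_4/block_2/add/add:0`, 112 channels) feeds two separate 1x1 "pre-sample" convolutions here: one for the BiFPN `node_06` up-level path and one for the `node_04` down-level path, each projecting 112 -> 64 channels. A 1x1 convolution is just a per-pixel linear map across channels, so the fan-out can be sketched in numpy (shapes taken from the log, batch size assumed to be 1; the weights here are random stand-ins):

```python
import numpy as np

x = np.random.rand(1, 112, 32, 32).astype(np.float32)  # shared backbone feature map
w_up = np.random.rand(64, 112).astype(np.float32)      # node_06 weights, (64, 112, 1, 1) squeezed
w_dn = np.random.rand(64, 112).astype(np.float32)      # node_04 weights, (64, 112, 1, 1) squeezed

def conv1x1(x, w, bias=None):
    # A 1x1 conv in NCHW is a channel-wise matmul applied at every pixel.
    y = np.einsum("oc,nchw->nohw", w, x)
    return y if bias is None else y + bias[None, :, None, None]

up = conv1x1(x, w_up)  # lateral input to the top-down (up-level) BiFPN node
dn = conv1x1(x, w_dn)  # lateral input to the bottom-up (down-level) BiFPN node
assert up.shape == (1, 64, 32, 32) and dn.shape == (1, 64, 32, 32)
```

The two branches carry independent fused-BN weights, which is why the parser registers two distinct layers for what is spatially the same input.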
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_0_up_lvl_4/1x1_pre_sample/conv/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_0_up_lvl_4/1x1_pre_sample/conv/BiasAdd [04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 64, 32, 32) [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_0_up_lvl_4/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_0_up_lvl_4/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_0_up_lvl_4/1x1_pre_sample/conv/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_0_up_lvl_4/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_activation/Sigmoid [Sigmoid] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_bn/FusedBatchNormV3:0 -> (-1, 672, 32, 32)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_activation/Sigmoid 
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_activation/Sigmoid:0 -> (-1, 672, 32, 32)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/batchnorm/FusedBatchNormV3__928 [Transpose] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/batchnorm/FusedBatchNormV3__928 [Transpose] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/batchnorm/FusedBatchNormV3__928 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/batchnorm/FusedBatchNormV3__928 [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/batchnorm/FusedBatchNormV3__928:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/batchnorm/FusedBatchNormV3__928:0 [04/18/2022-02:33:55] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/batchnorm/FusedBatchNormV3__928 [Transpose] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/batchnorm/FusedBatchNormV3__928:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_0_up_lvl_4/1x1_pre_sample/batchnorm/FusedBatchNormV3__940 [Transpose] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_0_up_lvl_4/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_0_up_lvl_4/1x1_pre_sample/batchnorm/FusedBatchNormV3__940 [Transpose] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_0_up_lvl_4/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_0_up_lvl_4/1x1_pre_sample/batchnorm/FusedBatchNormV3__940 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_0_up_lvl_4/1x1_pre_sample/batchnorm/FusedBatchNormV3__940 [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_0_up_lvl_4/1x1_pre_sample/batchnorm/FusedBatchNormV3__940:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_0_up_lvl_4/1x1_pre_sample/batchnorm/FusedBatchNormV3__940:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_0_up_lvl_4/1x1_pre_sample/batchnorm/FusedBatchNormV3__940 [Transpose] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_0_up_lvl_4/1x1_pre_sample/batchnorm/FusedBatchNormV3__940:0 -> (-1, 32, 32, 
64)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_activation/mul [Mul] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_bn/FusedBatchNormV3:0 -> (-1, 672, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_activation/Sigmoid:0 -> (-1, 672, 32, 32)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_activation/mul [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_activation/mul:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_activation/mul:0 -> (-1, 672, 32, 32)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Unsqueeze__982 [Unsqueeze] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/batchnorm/FusedBatchNormV3__928:0 [04/18/2022-02:33:55] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Unsqueeze__982 [Unsqueeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/batchnorm/FusedBatchNormV3__928:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Original shape: (_, 32, 32, 64), unsqueezing to: (_, _, _, _, _) [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Unsqueeze__982 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Unsqueeze__982 [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Unsqueeze__982:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Unsqueeze__982:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Unsqueeze__982 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Unsqueeze__982:0 -> (-1, 32, 32, 64, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/stack_Unsqueeze__943 [Unsqueeze] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_0_up_lvl_4/1x1_pre_sample/batchnorm/FusedBatchNormV3__940:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/stack_Unsqueeze__943 [Unsqueeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_0_up_lvl_4/1x1_pre_sample/batchnorm/FusedBatchNormV3__940:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Original shape: (_, 32, 32, 64), unsqueezing to: (_, _, _, _, _) [04/18/2022-02:33:55] [V] [TRT] Registering layer: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/stack_Unsqueeze__943 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/stack_Unsqueeze__943 [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/stack_Unsqueeze__943:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/stack_Unsqueeze__943:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/stack_Unsqueeze__943 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/stack_Unsqueeze__943:0 -> (-1, 32, 32, 64, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_conv2d/depthwise [Conv] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_activation/mul:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_conv2d/depthwise_weights_fused_bn [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_conv2d/depthwise_bias_fused_bn [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_conv2d/depthwise [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_activation/mul:0 -> (-1, 672, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_conv2d/depthwise_weights_fused_bn -> (672, 1, 5, 5)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_conv2d/depthwise_bias_fused_bn -> (672)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 672, 32, 32) 
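The recurring Sigmoid-then-Mul pairs in this trace (`expand_activation`, `depthwise_activation`, `se_reduce_activation`) are the two-node ONNX decomposition of the swish/SiLU activation, f(x) = x * sigmoid(x): the Mul node multiplies the Sigmoid output back onto the Sigmoid's own input. A minimal numpy sketch of what those node pairs compute:

```python
import numpy as np

def sigmoid(x):
    # The [Sigmoid] node in the log.
    return 1.0 / (1.0 + np.exp(-x))

def swish(x):
    # The [Mul] node: elementwise product of the input tensor
    # with the Sigmoid output, i.e. swish/SiLU.
    return x * sigmoid(x)

x = np.array([-2.0, 0.0, 2.0])
assert swish(x)[1] == 0.0            # swish(0) = 0 * 0.5 = 0
assert np.all(swish(x) <= np.maximum(x, 0.0) + 1e-9)  # bounded above by ReLU
```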
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_conv2d/depthwise for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_conv2d/depthwise [04/18/2022-02:33:55] [V] [TRT] Using kernel: (5, 5), strides: (2, 2), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 672 [04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 672, 16, 16) [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_bn/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_conv2d/depthwise [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_bn/FusedBatchNormV3:0 -> (-1, 672, 16, 16)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_activation/Sigmoid [Sigmoid] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_bn/FusedBatchNormV3:0 -> (-1, 672, 16, 16)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_activation/Sigmoid [04/18/2022-02:33:55] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_activation/Sigmoid:0 -> (-1, 672, 16, 16)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_activation/mul [Mul] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_bn/FusedBatchNormV3:0 -> (-1, 672, 16, 16)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_activation/Sigmoid:0 -> (-1, 672, 16, 16)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_activation/mul [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_activation/mul:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_activation/mul [Mul] 
outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_activation/mul:0 -> (-1, 672, 16, 16)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_squeeze/Mean [ReduceMean] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_activation/mul:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_squeeze/Mean [ReduceMean] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_activation/mul:0 -> (-1, 672, 16, 16)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_squeeze/Mean for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_squeeze/Mean [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_squeeze/Mean:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_squeeze/Mean:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_squeeze/Mean [ReduceMean] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_squeeze/Mean:0 -> (-1, 672, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_reduce_conv2d/BiasAdd [Conv] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_squeeze/Mean:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_reduce_conv2d/Conv2D/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_reduce_conv2d/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_reduce_conv2d/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_squeeze/Mean:0 -> (-1, 672, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_reduce_conv2d/Conv2D/ReadVariableOp:0 -> (28, 672, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_reduce_conv2d/BiasAdd/ReadVariableOp:0 -> (28)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 672, 1, 1) [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_reduce_conv2d/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_reduce_conv2d/BiasAdd [04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 28 [04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 28, 1, 1) [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_reduce_conv2d/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_reduce_conv2d/BiasAdd:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_reduce_conv2d/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_reduce_conv2d/BiasAdd:0 -> (-1, 28, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_reduce_activation/Sigmoid [Sigmoid] [04/18/2022-02:33:55] [V] [TRT] Searching for input: 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_reduce_conv2d/BiasAdd:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_reduce_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_reduce_conv2d/BiasAdd:0 -> (-1, 28, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_reduce_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_reduce_activation/Sigmoid [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_reduce_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_reduce_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_reduce_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_reduce_activation/Sigmoid:0 -> (-1, 28, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_reduce_activation/mul [Mul] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_reduce_conv2d/BiasAdd:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_reduce_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_reduce_activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_reduce_conv2d/BiasAdd:0 -> (-1, 28, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_reduce_activation/Sigmoid:0 -> (-1, 
28, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_reduce_activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_reduce_activation/mul [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_reduce_activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_reduce_activation/mul:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_reduce_activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_reduce_activation/mul:0 -> (-1, 28, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_expand_conv2d/BiasAdd [Conv] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_reduce_activation/mul:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_expand_conv2d/Conv2D/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_expand_conv2d/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_expand_conv2d/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_reduce_activation/mul:0 -> (-1, 28, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_expand_conv2d/Conv2D/ReadVariableOp:0 -> (672, 28, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_expand_conv2d/BiasAdd/ReadVariableOp:0 -> (672)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Convolution 
input dimensions: (-1, 28, 1, 1) [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_expand_conv2d/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_expand_conv2d/BiasAdd [04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 672 [04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 672, 1, 1) [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_expand_conv2d/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_expand_conv2d/BiasAdd:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_expand_conv2d/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_expand_conv2d/BiasAdd:0 -> (-1, 672, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_expand_activation/Sigmoid [Sigmoid] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_expand_conv2d/BiasAdd:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_expand_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_expand_conv2d/BiasAdd:0 -> (-1, 672, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_expand_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_expand_activation/Sigmoid [04/18/2022-02:33:55] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_expand_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_expand_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_expand_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_expand_activation/Sigmoid:0 -> (-1, 672, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_excite/mul [Mul] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_activation/mul:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_expand_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_excite/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_activation/mul:0 -> (-1, 672, 16, 16)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_expand_activation/Sigmoid:0 -> (-1, 672, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_excite/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_excite/mul [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_excite/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_excite/mul:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_excite/mul [Mul] outputs: 
[StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_excite/mul:0 -> (-1, 672, 16, 16)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/project_conv2d/Conv2D [Conv] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_excite/mul:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/project_conv2d/Conv2D_weights_fused_bn [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/project_conv2d/Conv2D_bias_fused_bn [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/project_conv2d/Conv2D [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_excite/mul:0 -> (-1, 672, 16, 16)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/project_conv2d/Conv2D_weights_fused_bn -> (192, 672, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/project_conv2d/Conv2D_bias_fused_bn -> (192)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 672, 16, 16) [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/project_conv2d/Conv2D for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/project_conv2d/Conv2D [04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 192 [04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 192, 16, 16) [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/project_bn/FusedBatchNormV3:0 for ONNX tensor: 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/project_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/project_conv2d/Conv2D [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/project_bn/FusedBatchNormV3:0 -> (-1, 192, 16, 16)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_conv2d/Conv2D [Conv] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/project_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_conv2d/Conv2D/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_conv2d/Conv2D [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/project_bn/FusedBatchNormV3:0 -> (-1, 192, 16, 16)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_conv2d/Conv2D/ReadVariableOp:0 -> (1152, 192, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 192, 16, 16) [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_conv2d/Conv2D for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_conv2d/Conv2D [04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 1152 [04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 1152, 16, 16) [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_conv2d/Conv2D:0 for ONNX tensor: 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_conv2d/Conv2D:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_conv2d/Conv2D [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_conv2d/Conv2D:0 -> (-1, 1152, 16, 16)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_bn/FusedBatchNormV3 [BatchNormalization] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_conv2d/Conv2D:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_bn/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_bn/ReadVariableOp_1:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_bn/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_bn/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_bn/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_conv2d/Conv2D:0 -> (-1, 1152, 16, 16)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_bn/ReadVariableOp:0 -> (1152)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_bn/ReadVariableOp_1:0 -> (1152)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_bn/FusedBatchNormV3/ReadVariableOp:0 -> (1152)[FLOAT]], 
[StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_bn/FusedBatchNormV3/ReadVariableOp_1:0 -> (1152)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_bn/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_bn/FusedBatchNormV3 [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_bn/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_bn/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_bn/FusedBatchNormV3:0 -> (-1, 1152, 16, 16)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_activation/Sigmoid [Sigmoid] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_bn/FusedBatchNormV3:0 -> (-1, 1152, 16, 16)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_activation/Sigmoid [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_activation/Sigmoid:0 for ONNX tensor: 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_activation/Sigmoid:0 -> (-1, 1152, 16, 16)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_activation/mul [Mul] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_bn/FusedBatchNormV3:0 -> (-1, 1152, 16, 16)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_activation/Sigmoid:0 -> (-1, 1152, 16, 16)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_activation/mul [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_activation/mul:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_activation/mul:0 -> (-1, 1152, 16, 16)[FLOAT]], [04/18/2022-02:33:55] 
[V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_conv2d/depthwise [Conv] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_activation/mul:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_conv2d/depthwise_weights_fused_bn [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_conv2d/depthwise_bias_fused_bn [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_conv2d/depthwise [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_activation/mul:0 -> (-1, 1152, 16, 16)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_conv2d/depthwise_weights_fused_bn -> (1152, 1, 5, 5)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_conv2d/depthwise_bias_fused_bn -> (1152)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 1152, 16, 16) [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_conv2d/depthwise for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_conv2d/depthwise [04/18/2022-02:33:55] [V] [TRT] Using kernel: (5, 5), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 1152 [04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 1152, 16, 16) [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_bn/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_bn/FusedBatchNormV3:0 
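The `depthwise_weights_fused_bn` / `depthwise_bias_fused_bn` inputs in the entry above show that the `FusedBatchNormV3` following this convolution was folded into the conv's weights and bias before TensorRT parsed the node. A minimal NumPy sketch of that folding (the function name and toy shapes are illustrative, not taken from the log):

```python
import numpy as np

def fold_bn_into_conv(W, b, gamma, beta, mean, var, eps=1e-3):
    """Fold a BatchNorm (gamma, beta, running mean/var) that follows a conv
    with weights W of shape (out_ch, in_ch, kH, kW) and bias b of shape
    (out_ch,) into new conv weights/bias, so the BN node can be removed."""
    scale = gamma / np.sqrt(var + eps)          # per-output-channel scale
    W_fused = W * scale[:, None, None, None]    # scale each output filter
    b_fused = beta + (b - mean) * scale         # fold mean/beta into the bias
    return W_fused, b_fused
```

Because BN(y) = gamma * (y - mean) / sqrt(var + eps) + beta is affine per channel, applying it after a convolution is equivalent to rescaling the filters and shifting the bias, which is why the parser sees only a Conv with `*_fused_bn` constants here.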
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_conv2d/depthwise [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_bn/FusedBatchNormV3:0 -> (-1, 1152, 16, 16)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_activation/Sigmoid [Sigmoid] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_bn/FusedBatchNormV3:0 -> (-1, 1152, 16, 16)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_activation/Sigmoid [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_activation/Sigmoid:0 -> (-1, 1152, 16, 16)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_activation/mul [Mul] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_bn/FusedBatchNormV3:0 [04/18/2022-02:33:55] 
[V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_bn/FusedBatchNormV3:0 -> (-1, 1152, 16, 16)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_activation/Sigmoid:0 -> (-1, 1152, 16, 16)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_activation/mul [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_activation/mul:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_activation/mul:0 -> (-1, 1152, 16, 16)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_squeeze/Mean [ReduceMean] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_activation/mul:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_squeeze/Mean [ReduceMean] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_activation/mul:0 -> (-1, 1152, 16, 16)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_squeeze/Mean for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_squeeze/Mean [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_squeeze/Mean:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_squeeze/Mean:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_squeeze/Mean [ReduceMean] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_squeeze/Mean:0 -> (-1, 1152, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_conv2d/BiasAdd [Conv] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_squeeze/Mean:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_conv2d/Conv2D/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_conv2d/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_conv2d/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_squeeze/Mean:0 -> (-1, 1152, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_conv2d/Conv2D/ReadVariableOp:0 -> (48, 1152, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_conv2d/BiasAdd/ReadVariableOp:0 -> (48)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 1152, 1, 1) [04/18/2022-02:33:55] [V] [TRT] Registering layer: 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_conv2d/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_conv2d/BiasAdd [04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 48 [04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 48, 1, 1) [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_conv2d/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_conv2d/BiasAdd:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_conv2d/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_conv2d/BiasAdd:0 -> (-1, 48, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_activation/Sigmoid [Sigmoid] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_conv2d/BiasAdd:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_conv2d/BiasAdd:0 -> (-1, 48, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_activation/Sigmoid [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_activation/Sigmoid:0 for ONNX tensor: 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_activation/Sigmoid:0 -> (-1, 48, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_activation/mul [Mul] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_conv2d/BiasAdd:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_conv2d/BiasAdd:0 -> (-1, 48, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_activation/Sigmoid:0 -> (-1, 48, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_activation/mul [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_activation/mul:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_activation/mul:0 -> (-1, 48, 1, 1)[FLOAT]], 
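The recurring pattern above, a `Sigmoid` node followed by a `Mul` that takes both the Sigmoid's input and its output (as in the `se_reduce_activation` pair), is how the Swish/SiLU activation used throughout EfficientNet is exported to ONNX. A minimal sketch of what the node pair computes (the function name is illustrative):

```python
import numpy as np

def swish(x):
    """Swish / SiLU: x * sigmoid(x) -- the Sigmoid + Mul node pair above."""
    return x * (1.0 / (1.0 + np.exp(-x)))
```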
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_expand_conv2d/BiasAdd [Conv] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_activation/mul:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_expand_conv2d/Conv2D/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_expand_conv2d/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_expand_conv2d/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_activation/mul:0 -> (-1, 48, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_expand_conv2d/Conv2D/ReadVariableOp:0 -> (1152, 48, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_expand_conv2d/BiasAdd/ReadVariableOp:0 -> (1152)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 48, 1, 1) [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_expand_conv2d/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_expand_conv2d/BiasAdd [04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 1152 [04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 1152, 1, 1) [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_expand_conv2d/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_expand_conv2d/BiasAdd:0 
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_expand_conv2d/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_expand_conv2d/BiasAdd:0 -> (-1, 1152, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_expand_activation/Sigmoid [Sigmoid] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_expand_conv2d/BiasAdd:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_expand_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_expand_conv2d/BiasAdd:0 -> (-1, 1152, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_expand_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_expand_activation/Sigmoid [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_expand_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_expand_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_expand_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_expand_activation/Sigmoid:0 -> (-1, 1152, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_excite/mul [Mul] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_activation/mul:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_expand_activation/Sigmoid:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_excite/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_activation/mul:0 -> (-1, 1152, 16, 16)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_expand_activation/Sigmoid:0 -> (-1, 1152, 1, 1)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_excite/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_excite/mul [04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_excite/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_excite/mul:0 [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_excite/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_excite/mul:0 -> (-1, 1152, 16, 16)[FLOAT]], [04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/project_conv2d/Conv2D [Conv] [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_excite/mul:0 [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/project_conv2d/Conv2D_weights_fused_bn [04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/project_conv2d/Conv2D_bias_fused_bn [04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/project_conv2d/Conv2D [Conv] inputs: 
[StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_excite/mul:0 -> (-1, 1152, 16, 16)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/project_conv2d/Conv2D_weights_fused_bn -> (192, 1152, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/project_conv2d/Conv2D_bias_fused_bn -> (192)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 1152, 16, 16)
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/project_conv2d/Conv2D for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/project_conv2d/Conv2D
[04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 192
[04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 192, 16, 16)
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/project_bn/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/project_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/project_conv2d/Conv2D [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/project_bn/FusedBatchNormV3:0 -> (-1, 192, 16, 16)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/add/add [Add]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/project_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/project_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/add/add [Add] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/project_bn/FusedBatchNormV3:0 -> (-1, 192, 16, 16)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/project_bn/FusedBatchNormV3:0 -> (-1, 192, 16, 16)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/add/add for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/add/add
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/add/add:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/add/add:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/add/add [Add] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/add/add:0 -> (-1, 192, 16, 16)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/expand_conv2d/Conv2D [Conv]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/add/add:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/expand_conv2d/Conv2D_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/expand_conv2d/Conv2D_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/expand_conv2d/Conv2D [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/add/add:0 -> (-1, 192, 16, 16)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/expand_conv2d/Conv2D_weights_fused_bn -> (1152, 192, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/expand_conv2d/Conv2D_bias_fused_bn -> (1152)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 192, 16, 16)
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/expand_conv2d/Conv2D for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/expand_conv2d/Conv2D
[04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 1152
[04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 1152, 16, 16)
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/expand_bn/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/expand_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/expand_conv2d/Conv2D [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/expand_bn/FusedBatchNormV3:0 -> (-1, 1152, 16, 16)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/expand_activation/Sigmoid [Sigmoid]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/expand_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/expand_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/expand_bn/FusedBatchNormV3:0 -> (-1, 1152, 16, 16)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/expand_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/expand_activation/Sigmoid
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/expand_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/expand_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/expand_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/expand_activation/Sigmoid:0 -> (-1, 1152, 16, 16)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/expand_activation/mul [Mul]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/expand_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/expand_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/expand_activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/expand_bn/FusedBatchNormV3:0 -> (-1, 1152, 16, 16)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/expand_activation/Sigmoid:0 -> (-1, 1152, 16, 16)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/expand_activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/expand_activation/mul
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/expand_activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/expand_activation/mul:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/expand_activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/expand_activation/mul:0 -> (-1, 1152, 16, 16)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/depthwise_conv2d/depthwise [Conv]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/expand_activation/mul:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/depthwise_conv2d/depthwise_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/depthwise_conv2d/depthwise_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/depthwise_conv2d/depthwise [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/expand_activation/mul:0 -> (-1, 1152, 16, 16)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/depthwise_conv2d/depthwise_weights_fused_bn -> (1152, 1, 5, 5)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/depthwise_conv2d/depthwise_bias_fused_bn -> (1152)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 1152, 16, 16)
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/depthwise_conv2d/depthwise for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/depthwise_conv2d/depthwise
[04/18/2022-02:33:55] [V] [TRT] Using kernel: (5, 5), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 1152
[04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 1152, 16, 16)
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/depthwise_bn/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/depthwise_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/depthwise_conv2d/depthwise [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/depthwise_bn/FusedBatchNormV3:0 -> (-1, 1152, 16, 16)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/depthwise_activation/Sigmoid [Sigmoid]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/depthwise_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/depthwise_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/depthwise_bn/FusedBatchNormV3:0 -> (-1, 1152, 16, 16)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/depthwise_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/depthwise_activation/Sigmoid
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/depthwise_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/depthwise_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/depthwise_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/depthwise_activation/Sigmoid:0 -> (-1, 1152, 16, 16)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/depthwise_activation/mul [Mul]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/depthwise_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/depthwise_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/depthwise_activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/depthwise_bn/FusedBatchNormV3:0 -> (-1, 1152, 16, 16)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/depthwise_activation/Sigmoid:0 -> (-1, 1152, 16, 16)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/depthwise_activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/depthwise_activation/mul
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/depthwise_activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/depthwise_activation/mul:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/depthwise_activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/depthwise_activation/mul:0 -> (-1, 1152, 16, 16)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_squeeze/Mean [ReduceMean]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/depthwise_activation/mul:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_squeeze/Mean [ReduceMean] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/depthwise_activation/mul:0 -> (-1, 1152, 16, 16)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_squeeze/Mean for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_squeeze/Mean
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_squeeze/Mean:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_squeeze/Mean:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_squeeze/Mean [ReduceMean] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_squeeze/Mean:0 -> (-1, 1152, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_reduce_conv2d/BiasAdd [Conv]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_squeeze/Mean:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_reduce_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_reduce_conv2d/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_reduce_conv2d/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_squeeze/Mean:0 -> (-1, 1152, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_reduce_conv2d/Conv2D/ReadVariableOp:0 -> (48, 1152, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_reduce_conv2d/BiasAdd/ReadVariableOp:0 -> (48)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 1152, 1, 1)
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_reduce_conv2d/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_reduce_conv2d/BiasAdd
[04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 48
[04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 48, 1, 1)
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_reduce_conv2d/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_reduce_conv2d/BiasAdd:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_reduce_conv2d/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_reduce_conv2d/BiasAdd:0 -> (-1, 48, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_reduce_activation/Sigmoid [Sigmoid]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_reduce_conv2d/BiasAdd:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_reduce_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_reduce_conv2d/BiasAdd:0 -> (-1, 48, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_reduce_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_reduce_activation/Sigmoid
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_reduce_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_reduce_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_reduce_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_reduce_activation/Sigmoid:0 -> (-1, 48, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_reduce_activation/mul [Mul]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_reduce_conv2d/BiasAdd:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_reduce_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_reduce_activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_reduce_conv2d/BiasAdd:0 -> (-1, 48, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_reduce_activation/Sigmoid:0 -> (-1, 48, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_reduce_activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_reduce_activation/mul
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_reduce_activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_reduce_activation/mul:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_reduce_activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_reduce_activation/mul:0 -> (-1, 48, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_expand_conv2d/BiasAdd [Conv]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_reduce_activation/mul:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_expand_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_expand_conv2d/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_expand_conv2d/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_reduce_activation/mul:0 -> (-1, 48, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_expand_conv2d/Conv2D/ReadVariableOp:0 -> (1152, 48, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_expand_conv2d/BiasAdd/ReadVariableOp:0 -> (1152)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 48, 1, 1)
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_expand_conv2d/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_expand_conv2d/BiasAdd
[04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 1152
[04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 1152, 1, 1)
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_expand_conv2d/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_expand_conv2d/BiasAdd:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_expand_conv2d/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_expand_conv2d/BiasAdd:0 -> (-1, 1152, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_expand_activation/Sigmoid [Sigmoid]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_expand_conv2d/BiasAdd:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_expand_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_expand_conv2d/BiasAdd:0 -> (-1, 1152, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_expand_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_expand_activation/Sigmoid
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_expand_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_expand_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_expand_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_expand_activation/Sigmoid:0 -> (-1, 1152, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_excite/mul [Mul]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/depthwise_activation/mul:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_expand_activation/Sigmoid:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_excite/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/depthwise_activation/mul:0 -> (-1, 1152, 16, 16)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_expand_activation/Sigmoid:0 -> (-1, 1152, 1, 1)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_excite/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_excite/mul
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_excite/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_excite/mul:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_excite/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_excite/mul:0 -> (-1, 1152, 16, 16)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/project_conv2d/Conv2D [Conv]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_excite/mul:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/project_conv2d/Conv2D_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/project_conv2d/Conv2D_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/project_conv2d/Conv2D [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_excite/mul:0 -> (-1, 1152, 16, 16)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/project_conv2d/Conv2D_weights_fused_bn -> (192, 1152, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/project_conv2d/Conv2D_bias_fused_bn -> (192)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 1152, 16, 16)
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/project_conv2d/Conv2D for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/project_conv2d/Conv2D
[04/18/2022-02:33:55] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 192
[04/18/2022-02:33:55] [V] [TRT] Convolution output dimensions: (-1, 192, 16, 16)
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/project_bn/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/project_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/project_conv2d/Conv2D [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/project_bn/FusedBatchNormV3:0 -> (-1, 192, 16, 16)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/add/add [Add]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/project_bn/FusedBatchNormV3:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/add/add:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/add/add [Add] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/project_bn/FusedBatchNormV3:0 -> (-1, 192, 16, 16)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/add/add:0 -> (-1, 192, 16, 16)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/add/add for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/add/add
[04/18/2022-02:33:55] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/add/add:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/add/add:0
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/add/add [Add] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/add/add:0 -> (-1, 192, 16, 16)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/expand_conv2d/Conv2D [Conv]
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/add/add:0
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/expand_conv2d/Conv2D_weights_fused_bn
[04/18/2022-02:33:55] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/expand_conv2d/Conv2D_bias_fused_bn
[04/18/2022-02:33:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/expand_conv2d/Conv2D [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/add/add:0 -> (-1, 192, 16, 16)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/expand_conv2d/Conv2D_weights_fused_bn -> (1152, 192, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/expand_conv2d/Conv2D_bias_fused_bn -> (1152)[FLOAT]],
[04/18/2022-02:33:55] [V] [TRT] Convolution input dimensions: (-1, 192, 16, 16)
[04/18/2022-02:33:55] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/expand_conv2d/Conv2D for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/expand_conv2d/Conv2D
[04/18/2022-02:33:56] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 1152
[04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 1152, 16, 16)
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/expand_bn/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/expand_bn/FusedBatchNormV3:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/expand_conv2d/Conv2D [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/expand_bn/FusedBatchNormV3:0 -> (-1, 1152, 16, 16)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/expand_activation/Sigmoid [Sigmoid]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/expand_bn/FusedBatchNormV3:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/expand_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/expand_bn/FusedBatchNormV3:0 -> (-1, 1152, 16, 16)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/expand_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/expand_activation/Sigmoid
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/expand_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/expand_activation/Sigmoid:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/expand_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/expand_activation/Sigmoid:0 -> (-1, 1152, 16, 16)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/expand_activation/mul [Mul]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/expand_bn/FusedBatchNormV3:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/expand_activation/Sigmoid:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/expand_activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/expand_bn/FusedBatchNormV3:0 -> (-1, 1152, 16, 16)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/expand_activation/Sigmoid:0 -> (-1, 1152, 16, 16)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/expand_activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/expand_activation/mul
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/expand_activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/expand_activation/mul:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/expand_activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/expand_activation/mul:0 -> (-1, 1152, 16, 16)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/depthwise_conv2d/depthwise [Conv]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/expand_activation/mul:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/depthwise_conv2d/depthwise_weights_fused_bn
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/depthwise_conv2d/depthwise_bias_fused_bn
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/depthwise_conv2d/depthwise [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/expand_activation/mul:0 -> (-1, 1152, 16, 16)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/depthwise_conv2d/depthwise_weights_fused_bn -> (1152, 1, 5, 5)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/depthwise_conv2d/depthwise_bias_fused_bn -> (1152)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 1152, 16, 16)
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/depthwise_conv2d/depthwise for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/depthwise_conv2d/depthwise
[04/18/2022-02:33:56] [V] [TRT] Using kernel: (5, 5), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 1152
[04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 1152, 16, 16)
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/depthwise_bn/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/depthwise_bn/FusedBatchNormV3:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/depthwise_conv2d/depthwise [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/depthwise_bn/FusedBatchNormV3:0 -> (-1, 1152, 16, 16)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/depthwise_activation/Sigmoid [Sigmoid]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/depthwise_bn/FusedBatchNormV3:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/depthwise_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/depthwise_bn/FusedBatchNormV3:0 -> (-1, 1152, 16, 16)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/depthwise_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/depthwise_activation/Sigmoid
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/depthwise_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/depthwise_activation/Sigmoid:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/depthwise_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/depthwise_activation/Sigmoid:0 -> (-1, 1152, 16, 16)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/depthwise_activation/mul [Mul]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/depthwise_bn/FusedBatchNormV3:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/depthwise_activation/Sigmoid:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/depthwise_activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/depthwise_bn/FusedBatchNormV3:0 -> (-1, 1152, 16, 16)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/depthwise_activation/Sigmoid:0 -> (-1, 1152, 16, 16)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/depthwise_activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/depthwise_activation/mul
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/depthwise_activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/depthwise_activation/mul:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/depthwise_activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/depthwise_activation/mul:0 -> (-1, 1152, 16, 16)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_squeeze/Mean [ReduceMean]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/depthwise_activation/mul:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_squeeze/Mean [ReduceMean] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/depthwise_activation/mul:0 -> (-1, 1152, 16, 16)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_squeeze/Mean for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_squeeze/Mean
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_squeeze/Mean:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_squeeze/Mean:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_squeeze/Mean [ReduceMean] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_squeeze/Mean:0 -> (-1, 1152, 1, 1)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_reduce_conv2d/BiasAdd [Conv]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_squeeze/Mean:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_reduce_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_reduce_conv2d/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_reduce_conv2d/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_squeeze/Mean:0 -> (-1, 1152, 1, 1)[FLOAT]],
[StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_reduce_conv2d/Conv2D/ReadVariableOp:0 -> (48, 1152, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_reduce_conv2d/BiasAdd/ReadVariableOp:0 -> (48)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 1152, 1, 1) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_reduce_conv2d/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_reduce_conv2d/BiasAdd [04/18/2022-02:33:56] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 48 [04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 48, 1, 1) [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_reduce_conv2d/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_reduce_conv2d/BiasAdd:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_reduce_conv2d/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_reduce_conv2d/BiasAdd:0 -> (-1, 48, 1, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_reduce_activation/Sigmoid [Sigmoid] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_reduce_conv2d/BiasAdd:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_reduce_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_reduce_conv2d/BiasAdd:0 -> (-1, 48, 1, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_reduce_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_reduce_activation/Sigmoid [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_reduce_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_reduce_activation/Sigmoid:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_reduce_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_reduce_activation/Sigmoid:0 -> (-1, 48, 1, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_reduce_activation/mul [Mul] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_reduce_conv2d/BiasAdd:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_reduce_activation/Sigmoid:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_reduce_activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_reduce_conv2d/BiasAdd:0 -> (-1, 48, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_reduce_activation/Sigmoid:0 -> (-1, 48, 1, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_reduce_activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_reduce_activation/mul [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_reduce_activation/mul:0 
for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_reduce_activation/mul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_reduce_activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_reduce_activation/mul:0 -> (-1, 48, 1, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_expand_conv2d/BiasAdd [Conv] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_reduce_activation/mul:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_expand_conv2d/Conv2D/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_expand_conv2d/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_expand_conv2d/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_reduce_activation/mul:0 -> (-1, 48, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_expand_conv2d/Conv2D/ReadVariableOp:0 -> (1152, 48, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_expand_conv2d/BiasAdd/ReadVariableOp:0 -> (1152)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 48, 1, 1) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_expand_conv2d/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_expand_conv2d/BiasAdd [04/18/2022-02:33:56] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), 
numOutputs: 1152 [04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 1152, 1, 1) [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_expand_conv2d/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_expand_conv2d/BiasAdd:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_expand_conv2d/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_expand_conv2d/BiasAdd:0 -> (-1, 1152, 1, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_expand_activation/Sigmoid [Sigmoid] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_expand_conv2d/BiasAdd:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_expand_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_expand_conv2d/BiasAdd:0 -> (-1, 1152, 1, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_expand_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_expand_activation/Sigmoid [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_expand_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_expand_activation/Sigmoid:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_expand_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_expand_activation/Sigmoid:0 -> (-1, 1152, 
1, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_excite/mul [Mul] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/depthwise_activation/mul:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_expand_activation/Sigmoid:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_excite/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/depthwise_activation/mul:0 -> (-1, 1152, 16, 16)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_expand_activation/Sigmoid:0 -> (-1, 1152, 1, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_excite/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_excite/mul [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_excite/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_excite/mul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_excite/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_excite/mul:0 -> (-1, 1152, 16, 16)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/project_conv2d/Conv2D [Conv] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_excite/mul:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/project_conv2d/Conv2D_weights_fused_bn 
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/project_conv2d/Conv2D_bias_fused_bn [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/project_conv2d/Conv2D [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_excite/mul:0 -> (-1, 1152, 16, 16)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/project_conv2d/Conv2D_weights_fused_bn -> (192, 1152, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/project_conv2d/Conv2D_bias_fused_bn -> (192)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 1152, 16, 16) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/project_conv2d/Conv2D for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/project_conv2d/Conv2D [04/18/2022-02:33:56] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 192 [04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 192, 16, 16) [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/project_bn/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/project_bn/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/project_conv2d/Conv2D [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/project_bn/FusedBatchNormV3:0 -> (-1, 192, 16, 16)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/add/add [Add] [04/18/2022-02:33:56] [V] [TRT] Searching for input: 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/project_bn/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/add/add:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/add/add [Add] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/project_bn/FusedBatchNormV3:0 -> (-1, 192, 16, 16)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/add/add:0 -> (-1, 192, 16, 16)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/add/add for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/add/add [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/add/add:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/add/add:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/add/add [Add] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/add/add:0 -> (-1, 192, 16, 16)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/expand_conv2d/Conv2D [Conv] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/add/add:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/expand_conv2d/Conv2D_weights_fused_bn [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/expand_conv2d/Conv2D_bias_fused_bn [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/expand_conv2d/Conv2D [Conv] 
inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/add/add:0 -> (-1, 192, 16, 16)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/expand_conv2d/Conv2D_weights_fused_bn -> (1152, 192, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/expand_conv2d/Conv2D_bias_fused_bn -> (1152)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 192, 16, 16) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/expand_conv2d/Conv2D for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/expand_conv2d/Conv2D [04/18/2022-02:33:56] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 1152 [04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 1152, 16, 16) [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/expand_bn/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/expand_bn/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/expand_conv2d/Conv2D [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/expand_bn/FusedBatchNormV3:0 -> (-1, 1152, 16, 16)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/expand_activation/Sigmoid [Sigmoid] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/expand_bn/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/expand_activation/Sigmoid [Sigmoid] inputs: 
[StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/expand_bn/FusedBatchNormV3:0 -> (-1, 1152, 16, 16)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/expand_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/expand_activation/Sigmoid [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/expand_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/expand_activation/Sigmoid:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/expand_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/expand_activation/Sigmoid:0 -> (-1, 1152, 16, 16)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/expand_activation/mul [Mul] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/expand_bn/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/expand_activation/Sigmoid:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/expand_activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/expand_bn/FusedBatchNormV3:0 -> (-1, 1152, 16, 16)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/expand_activation/Sigmoid:0 -> (-1, 1152, 16, 16)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/expand_activation/mul for ONNX node: 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/expand_activation/mul [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/expand_activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/expand_activation/mul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/expand_activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/expand_activation/mul:0 -> (-1, 1152, 16, 16)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/depthwise_conv2d/depthwise [Conv] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/expand_activation/mul:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/depthwise_conv2d/depthwise_weights_fused_bn [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/depthwise_conv2d/depthwise_bias_fused_bn [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/depthwise_conv2d/depthwise [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/expand_activation/mul:0 -> (-1, 1152, 16, 16)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/depthwise_conv2d/depthwise_weights_fused_bn -> (1152, 1, 3, 3)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/depthwise_conv2d/depthwise_bias_fused_bn -> (1152)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 1152, 16, 16) [04/18/2022-02:33:56] [V] [TRT] Registering layer: 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/depthwise_conv2d/depthwise for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/depthwise_conv2d/depthwise [04/18/2022-02:33:56] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 1152 [04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 1152, 16, 16) [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/depthwise_bn/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/depthwise_bn/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/depthwise_conv2d/depthwise [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/depthwise_bn/FusedBatchNormV3:0 -> (-1, 1152, 16, 16)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/depthwise_activation/Sigmoid [Sigmoid] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/depthwise_bn/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/depthwise_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/depthwise_bn/FusedBatchNormV3:0 -> (-1, 1152, 16, 16)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/depthwise_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/depthwise_activation/Sigmoid [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/depthwise_activation/Sigmoid:0 for ONNX 
tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/depthwise_activation/Sigmoid:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/depthwise_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/depthwise_activation/Sigmoid:0 -> (-1, 1152, 16, 16)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/depthwise_activation/mul [Mul] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/depthwise_bn/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/depthwise_activation/Sigmoid:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/depthwise_activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/depthwise_bn/FusedBatchNormV3:0 -> (-1, 1152, 16, 16)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/depthwise_activation/Sigmoid:0 -> (-1, 1152, 16, 16)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/depthwise_activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/depthwise_activation/mul [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/depthwise_activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/depthwise_activation/mul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/depthwise_activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/depthwise_activation/mul:0 
-> (-1, 1152, 16, 16)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_squeeze/Mean [ReduceMean] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/depthwise_activation/mul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_squeeze/Mean [ReduceMean] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/depthwise_activation/mul:0 -> (-1, 1152, 16, 16)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_squeeze/Mean for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_squeeze/Mean [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_squeeze/Mean:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_squeeze/Mean:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_squeeze/Mean [ReduceMean] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_squeeze/Mean:0 -> (-1, 1152, 1, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_reduce_conv2d/BiasAdd [Conv] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_squeeze/Mean:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_reduce_conv2d/Conv2D/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_reduce_conv2d/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_reduce_conv2d/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_squeeze/Mean:0 -> (-1, 1152, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_reduce_conv2d/Conv2D/ReadVariableOp:0 -> (48, 1152, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_reduce_conv2d/BiasAdd/ReadVariableOp:0 -> (48)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 1152, 1, 1)
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_reduce_conv2d/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_reduce_conv2d/BiasAdd
[04/18/2022-02:33:56] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 48
[04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 48, 1, 1)
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_reduce_conv2d/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_reduce_conv2d/BiasAdd:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_reduce_conv2d/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_reduce_conv2d/BiasAdd:0 -> (-1, 48, 1, 1)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_reduce_activation/Sigmoid [Sigmoid]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_reduce_conv2d/BiasAdd:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_reduce_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_reduce_conv2d/BiasAdd:0 -> (-1, 48, 1, 1)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_reduce_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_reduce_activation/Sigmoid
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_reduce_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_reduce_activation/Sigmoid:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_reduce_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_reduce_activation/Sigmoid:0 -> (-1, 48, 1, 1)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_reduce_activation/mul [Mul]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_reduce_conv2d/BiasAdd:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_reduce_activation/Sigmoid:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_reduce_activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_reduce_conv2d/BiasAdd:0 -> (-1, 48, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_reduce_activation/Sigmoid:0 -> (-1, 48, 1, 1)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_reduce_activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_reduce_activation/mul
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_reduce_activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_reduce_activation/mul:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_reduce_activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_reduce_activation/mul:0 -> (-1, 48, 1, 1)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_expand_conv2d/BiasAdd [Conv]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_reduce_activation/mul:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_expand_conv2d/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_expand_conv2d/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_expand_conv2d/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_reduce_activation/mul:0 -> (-1, 48, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_expand_conv2d/Conv2D/ReadVariableOp:0 -> (1152, 48, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_expand_conv2d/BiasAdd/ReadVariableOp:0 -> (1152)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 48, 1, 1)
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_expand_conv2d/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_expand_conv2d/BiasAdd
[04/18/2022-02:33:56] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 1152
[04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 1152, 1, 1)
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_expand_conv2d/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_expand_conv2d/BiasAdd:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_expand_conv2d/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_expand_conv2d/BiasAdd:0 -> (-1, 1152, 1, 1)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_expand_activation/Sigmoid [Sigmoid]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_expand_conv2d/BiasAdd:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_expand_activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_expand_conv2d/BiasAdd:0 -> (-1, 1152, 1, 1)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_expand_activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_expand_activation/Sigmoid
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_expand_activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_expand_activation/Sigmoid:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_expand_activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_expand_activation/Sigmoid:0 -> (-1, 1152, 1, 1)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_excite/mul [Mul]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/depthwise_activation/mul:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_expand_activation/Sigmoid:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_excite/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/depthwise_activation/mul:0 -> (-1, 1152, 16, 16)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_expand_activation/Sigmoid:0 -> (-1, 1152, 1, 1)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_excite/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_excite/mul
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_excite/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_excite/mul:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_excite/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_excite/mul:0 -> (-1, 1152, 16, 16)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/project_conv2d/Conv2D [Conv]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_excite/mul:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/project_conv2d/Conv2D_weights_fused_bn
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/project_conv2d/Conv2D_bias_fused_bn
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/project_conv2d/Conv2D [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_excite/mul:0 -> (-1, 1152, 16, 16)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/project_conv2d/Conv2D_weights_fused_bn -> (320, 1152, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/project_conv2d/Conv2D_bias_fused_bn -> (320)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 1152, 16, 16)
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/project_conv2d/Conv2D for ONNX node: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/project_conv2d/Conv2D
[04/18/2022-02:33:56] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 320
[04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 320, 16, 16)
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/project_bn/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/project_bn/FusedBatchNormV3:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/project_conv2d/Conv2D [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/project_bn/FusedBatchNormV3:0 -> (-1, 320, 16, 16)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd [Conv]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/project_bn/FusedBatchNormV3:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/project_bn/FusedBatchNormV3:0 -> (-1, 320, 16, 16)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/Conv2D/ReadVariableOp:0 -> (64, 320, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd/ReadVariableOp:0 -> (64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 320, 16, 16)
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd
[04/18/2022-02:33:56] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64
[04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 64, 16, 16)
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd:0 -> (-1, 64, 16, 16)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd [Conv]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/project_bn/FusedBatchNormV3:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/project_bn/FusedBatchNormV3:0 -> (-1, 320, 16, 16)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/Conv2D/ReadVariableOp:0 -> (64, 320, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd/ReadVariableOp:0 -> (64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 320, 16, 16)
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd
[04/18/2022-02:33:56] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64
[04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 64, 16, 16)
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd:0 -> (-1, 64, 16, 16)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd [Conv]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/project_bn/FusedBatchNormV3:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/1x1_pre_sample/conv/Conv2D/ReadVariableOp:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/project_bn/FusedBatchNormV3:0 -> (-1, 320, 16, 16)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/1x1_pre_sample/conv/Conv2D/ReadVariableOp:0 -> (64, 320, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd/ReadVariableOp:0 -> (64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 320, 16, 16)
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd
[04/18/2022-02:33:56] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64
[04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 64, 16, 16)
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd:0 -> (-1, 64, 16, 16)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3 [BatchNormalization]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/ReadVariableOp:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/ReadVariableOp_1:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3/ReadVariableOp:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd:0 -> (-1, 64, 16, 16)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 16, 16)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3 [BatchNormalization]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/ReadVariableOp:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/ReadVariableOp_1:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3/ReadVariableOp:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd:0 -> (-1, 64, 16, 16)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 16, 16)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3 [BatchNormalization]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/1x1_pre_sample/batchnorm/ReadVariableOp:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/1x1_pre_sample/batchnorm/ReadVariableOp_1:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3/ReadVariableOp:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd:0 -> (-1, 64, 16, 16)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/1x1_pre_sample/batchnorm/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/1x1_pre_sample/batchnorm/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 16, 16)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3__841 [Transpose]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3__841 [Transpose] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 16, 16)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3__841 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3__841
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3__841:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3__841:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3__841 [Transpose] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3__841:0 -> (-1, 16, 16, 64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3__853 [Transpose]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3__853 [Transpose] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 16, 16)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3__853 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3__853
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3__853:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3__853:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3__853 [Transpose] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3__853:0 -> (-1, 16, 16, 64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/downsample_max_x2/MaxPool [MaxPool]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/downsample_max_x2/MaxPool [MaxPool] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 16, 16)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/downsample_max_x2/MaxPool for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/downsample_max_x2/MaxPool
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/downsample_max_x2/MaxPool:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/downsample_max_x2/MaxPool:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/downsample_max_x2/MaxPool [MaxPool] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/downsample_max_x2/MaxPool:0 -> (-1, 64, 8, 8)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Unsqueeze__994 [Unsqueeze]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3__841:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Unsqueeze__994 [Unsqueeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3__841:0 -> (-1, 16, 16, 64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 16, 16, 64), unsqueezing to: (_, _, _, _, _)
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Unsqueeze__994 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Unsqueeze__994
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Unsqueeze__994:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Unsqueeze__994:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Unsqueeze__994 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Unsqueeze__994:0 -> (-1, 16, 16, 64, 1)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/stack_Unsqueeze__903 [Unsqueeze]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3__853:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/stack_Unsqueeze__903 [Unsqueeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3__853:0 -> (-1, 16, 16, 64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 16, 16, 64), unsqueezing to: (_, _, _, _, _)
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/stack_Unsqueeze__903 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/stack_Unsqueeze__903
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/stack_Unsqueeze__903:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/stack_Unsqueeze__903:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/stack_Unsqueeze__903 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/stack_Unsqueeze__903:0 -> (-1, 16, 16, 64, 1)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: Transpose__5436 [Transpose]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/downsample_max_x2/MaxPool:0
[04/18/2022-02:33:56] [V] [TRT] Transpose__5436 [Transpose] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/downsample_max_x2/MaxPool:0 -> (-1, 64, 8, 8)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Registering layer: Transpose__5436 for ONNX node: Transpose__5436
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: Transpose__5436:0 for ONNX tensor: Transpose__5436:0
[04/18/2022-02:33:56] [V] [TRT] Transpose__5436 [Transpose] outputs: [Transpose__5436:0 -> (-1, 8, 8, 64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_01/0_up_lvl_7/input_0_up_lvl_6/downsample_max_x2/MaxPool [MaxPool]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/downsample_max_x2/MaxPool:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_01/0_up_lvl_7/input_0_up_lvl_6/downsample_max_x2/MaxPool [MaxPool] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/downsample_max_x2/MaxPool:0 -> (-1, 64, 8, 8)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_01/0_up_lvl_7/input_0_up_lvl_6/downsample_max_x2/MaxPool for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_01/0_up_lvl_7/input_0_up_lvl_6/downsample_max_x2/MaxPool
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_01/0_up_lvl_7/input_0_up_lvl_6/downsample_max_x2/MaxPool:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_01/0_up_lvl_7/input_0_up_lvl_6/downsample_max_x2/MaxPool:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_01/0_up_lvl_7/input_0_up_lvl_6/downsample_max_x2/MaxPool [MaxPool] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_01/0_up_lvl_7/input_0_up_lvl_6/downsample_max_x2/MaxPool:0 -> (-1, 64, 4, 4)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1006 [Unsqueeze]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: Transpose__5436:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1006 [Unsqueeze] inputs: [Transpose__5436:0 -> (-1, 8, 8, 64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 8, 8, 64), unsqueezing to: (_, _, _, _, _)
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1006 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1006
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1006:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1006:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1006 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1006:0 -> (-1, 8, 8, 64, 1)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: Transpose__6299 [Transpose]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_01/0_up_lvl_7/input_0_up_lvl_6/downsample_max_x2/MaxPool:0
[04/18/2022-02:33:56] [V] [TRT] Transpose__6299 [Transpose] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_01/0_up_lvl_7/input_0_up_lvl_6/downsample_max_x2/MaxPool:0 -> (-1, 64, 4, 4)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Registering layer: Transpose__6299 for ONNX node: Transpose__6299
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: Transpose__6299:0 for ONNX tensor: Transpose__6299:0
[04/18/2022-02:33:56] [V] [TRT] Transpose__6299 [Transpose] outputs: [Transpose__6299:0 -> (-1, 4, 4, 64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/stack_Unsqueeze__1018 [Unsqueeze]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: Transpose__6299:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/stack_Unsqueeze__1018 [Unsqueeze] inputs: [Transpose__6299:0 -> (-1, 4, 4, 64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 4, 4, 64), unsqueezing to: (_, _, _, _, _)
[04/18/2022-02:33:56] [V] [TRT] Registering layer:
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/stack_Unsqueeze__1018 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/stack_Unsqueeze__1018 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/stack_Unsqueeze__1018:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/stack_Unsqueeze__1018:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/stack_Unsqueeze__1018 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/stack_Unsqueeze__1018:0 -> (-1, 4, 4, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: Transpose__6299:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881 [Unsqueeze] inputs: [Transpose__6299:0 -> (-1, 4, 4, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 4, 4, 64), unsqueezing to: (_, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881:0 for 
ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881:0 -> (-1, 4, 4, 1, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__882 [Concat] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__882 [Concat] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881:0 -> (-1, 4, 4, 1, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881:0 -> (-1, 4, 4, 1, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__882 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__882 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__882:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__882:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__882 [Concat] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__882:0 -> (-1, 4, 4, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__882:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883 [Unsqueeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__882:0 -> (-1, 4, 4, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] 
[TRT] Original shape: (_, 4, 4, 2, 64), unsqueezing to: (_, _, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883:0 -> (-1, 4, 1, 4, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__885 [Concat] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883:0 [04/18/2022-02:33:56] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__885 [Concat] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883:0 -> (-1, 4, 1, 4, 2, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883:0 -> (-1, 4, 1, 4, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__885 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__885 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__885:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__885:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__885 [Concat] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__885:0 -> (-1, 4, 2, 4, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape [Reshape] [04/18/2022-02:33:56] [V] 
[TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__885:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: const_fold_opt__5829 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape [Reshape] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__885:0 -> (-1, 4, 2, 4, 2, 64)[FLOAT]], [const_fold_opt__5829 -> (4)[INT32]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape [Reshape] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 -> (-1, 8, 8, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/stack_Unsqueeze__888 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] 
Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/stack_Unsqueeze__888 [Unsqueeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 -> (-1, 8, 8, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 8, 8, 64), unsqueezing to: (_, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/stack_Unsqueeze__888 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/stack_Unsqueeze__888 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/stack_Unsqueeze__888:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/stack_Unsqueeze__888:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/stack_Unsqueeze__888 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/stack_Unsqueeze__888:0 -> (-1, 8, 8, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/stack_Concat__889 [Concat] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1006:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/stack_Unsqueeze__888:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/stack_Concat__889 [Concat] inputs: 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1006:0 -> (-1, 8, 8, 64, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/stack_Unsqueeze__888:0 -> (-1, 8, 8, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/stack_Concat__889 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/stack_Concat__889 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/stack_Concat__889:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/stack_Concat__889:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/stack_Concat__889 [Concat] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/stack_Concat__889:0 -> (-1, 8, 8, 64, 2)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul [MatMul] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/stack_Concat__889:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/truediv:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul [MatMul] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/stack_Concat__889:0 -> (-1, 8, 8, 64, 2)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/truediv:0 -> (2, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/truediv:0 for ONNX node: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/truediv:0 [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul [MatMul] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul:0 -> (-1, 8, 8, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/Squeeze [Squeeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/Squeeze [Squeeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul:0 -> (-1, 8, 8, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 8, 8, 64, 1), squeezing to: (_, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/Squeeze for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/Squeeze [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors 
conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/Squeeze:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/Squeeze [Squeeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/Squeeze:0 -> (-1, 8, 8, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/activation/Sigmoid [Sigmoid] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/Squeeze:0 -> (-1, 8, 8, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/activation/Sigmoid [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/activation/Sigmoid:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/activation/Sigmoid:0 -> (-1, 8, 8, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/activation/mul [Mul] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/activation/Sigmoid:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/Squeeze:0 -> (-1, 8, 8, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/activation/Sigmoid:0 -> (-1, 8, 8, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/activation/mul [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/activation/mul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/activation/mul:0 -> (-1, 8, 8, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__890 [Transpose] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/activation/mul:0 [04/18/2022-02:33:56] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__890 [Transpose] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/activation/mul:0 -> (-1, 8, 8, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__890 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__890 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__890:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__890:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__890 [Transpose] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__890:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise [Conv] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__890:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise [Conv] inputs: 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__890:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise [04/18/2022-02:33:56] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 64, 8, 8) [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/separable_conv/BiasAdd [Conv] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/separable_conv/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 -> (64, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 -> (64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/separable_conv/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/separable_conv/BiasAdd [04/18/2022-02:33:56] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 64, 8, 8) [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/separable_conv/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/separable_conv/BiasAdd:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/separable_conv/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/separable_conv/BiasAdd:0 -> (-1, 64, 
8, 8)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/separable_conv/BiasAdd:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/batchnorm/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/batchnorm/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/separable_conv/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/batchnorm/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/batchnorm/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/batchnorm/FusedBatchNormV3 for 
ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/batchnorm/FusedBatchNormV3 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/batchnorm/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: Transpose__6311 [Transpose] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] Transpose__6311 [Transpose] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: Transpose__6311 for ONNX node: Transpose__6311 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: Transpose__6311:0 for ONNX tensor: Transpose__6311:0 [04/18/2022-02:33:56] [V] [TRT] Transpose__6311 [Transpose] outputs: [Transpose__6311:0 -> (-1, 8, 8, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1007 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: Transpose__6311:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1007 [Unsqueeze] inputs: [Transpose__6311:0 -> (-1, 8, 8, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 8, 8, 64), unsqueezing to: (_, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] 
Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1007 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1007 [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1007:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1007:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1007 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1007:0 -> (-1, 8, 8, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: Transpose__6311:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897 [Unsqueeze] inputs: [Transpose__6311:0 -> (-1, 8, 8, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 8, 8, 64), unsqueezing to: (_, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897 for ONNX node: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897 [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897:0 -> (-1, 8, 8, 1, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__898 [Concat] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897:0 [04/18/2022-02:33:56] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__898 [Concat] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897:0 -> (-1, 8, 8, 1, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897:0 -> (-1, 8, 8, 1, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__898 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__898 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__898:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__898:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__898 [Concat] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__898:0 -> (-1, 8, 8, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] 
Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__898:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900 [Unsqueeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__898:0 -> (-1, 8, 8, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 8, 8, 2, 64), unsqueezing to: (_, _, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900 [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. 
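The `nearest_neighbor_upsampling_x2` subgraph being parsed here (stack_Unsqueeze__897, stack_Concat__898, stack_1_Unsqueeze__900, stack_1_Concat__901, followed by the Reshape a few entries below) is how tf2onnx lowers TF's nearest-neighbor 2x upsampling: duplicate each pixel along two inserted axes, then reshape so the duplicates become neighboring rows and columns. A minimal NumPy sketch of what this operator chain computes (illustration only, not TRT code):

```python
import numpy as np

def nearest_neighbor_upsample_x2(x):
    """2x nearest-neighbor upsampling via the same Unsqueeze/Concat/
    Unsqueeze/Concat/Reshape chain seen in the log.
    x is NHWC: (N, H, W, C) -> (N, 2H, 2W, C)."""
    n, h, w, c = x.shape
    x = x[:, :, :, None, :]               # Unsqueeze: (N, H, W, 1, C)
    x = np.concatenate([x, x], axis=3)    # Concat:    (N, H, W, 2, C)
    x = x[:, :, None, :, :, :]            # Unsqueeze: (N, H, 1, W, 2, C)
    x = np.concatenate([x, x], axis=2)    # Concat:    (N, H, 2, W, 2, C)
    return x.reshape(n, 2 * h, 2 * w, c)  # Reshape:   (N, 2H, 2W, C)
```

The intermediate shapes match the log exactly: (-1, 8, 8, 64) -> (-1, 8, 8, 2, 64) -> (-1, 8, 2, 8, 2, 64) -> (-1, 16, 16, 64).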
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900:0 -> (-1, 8, 1, 8, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__901 [Concat] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__901 [Concat] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900:0 -> (-1, 8, 1, 8, 2, 64)[FLOAT]], 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900:0 -> (-1, 8, 1, 8, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__901 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__901 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__901:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__901:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__901 [Concat] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__901:0 -> (-1, 8, 2, 8, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape [Reshape] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__901:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: const_fold_opt__6102 [04/18/2022-02:33:56] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape [Reshape] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__901:0 -> (-1, 8, 2, 8, 2, 64)[FLOAT]], [const_fold_opt__6102 -> (4)[INT32]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape [Reshape] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/stack_Unsqueeze__904 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/stack_Unsqueeze__904 
[Unsqueeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 16, 16, 64), unsqueezing to: (_, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/stack_Unsqueeze__904 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/stack_Unsqueeze__904 [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/stack_Unsqueeze__904:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/stack_Unsqueeze__904:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/stack_Unsqueeze__904 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/stack_Unsqueeze__904:0 -> (-1, 16, 16, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/stack_Concat__905 [Concat] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/stack_Unsqueeze__903:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/stack_Unsqueeze__904:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/stack_Concat__905 [Concat] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/stack_Unsqueeze__903:0 -> 
(-1, 16, 16, 64, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/stack_Unsqueeze__904:0 -> (-1, 16, 16, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/stack_Concat__905 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/stack_Concat__905 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/stack_Concat__905:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/stack_Concat__905:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/stack_Concat__905 [Concat] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/stack_Concat__905:0 -> (-1, 16, 16, 64, 2)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul [MatMul] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/stack_Concat__905:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/truediv:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul [MatMul] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/stack_Concat__905:0 -> (-1, 16, 16, 64, 2)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/truediv:0 -> (2, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/truediv:0 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/truediv:0 [04/18/2022-02:33:56] [V] [TRT] Registering layer: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul [MatMul] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul:0 -> (-1, 16, 16, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/Squeeze [Squeeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/Squeeze [Squeeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul:0 -> (-1, 16, 16, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 16, 16, 64, 1), squeezing to: (_, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/Squeeze for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/Squeeze [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. 
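The `combine` subgraphs parsed above (stack_Unsqueeze -> stack_Concat -> MatMul with the (2, 1) `truediv:0` constant -> Squeeze) are BiFPN's weighted feature fusion: two feature maps are stacked on a new trailing axis and contracted against a learned 2-element weight vector. Assuming the `truediv` name means the exported weights were already divided by their sum (BiFPN's fast normalized fusion), the pattern computes the following sketch; the explicit normalization here is an assumption, not something the log confirms:

```python
import numpy as np

def weighted_combine(feat_a, feat_b, w):
    """BiFPN-style fusion matching the log's stack/MatMul/Squeeze chain.
    feat_a, feat_b: (N, H, W, C); w: 2 fusion weights."""
    w = np.asarray(w, dtype=feat_a.dtype)
    w = w / w.sum()                                # assumed 'truediv' normalization
    stacked = np.stack([feat_a, feat_b], axis=-1)  # (N, H, W, C, 2), cf. stack_Concat__905
    fused = stacked @ w.reshape(2, 1)              # (N, H, W, C, 1), cf. combine/MatMul
    return np.squeeze(fused, axis=-1)              # (N, H, W, C),    cf. combine/Squeeze
```

This also explains the repeated `broadcasting input1 to make tensors conform` info messages: the (2, 1) weight matrix is broadcast against the stacked (-1, 16, 16, 64, 2) tensor's batch dimensions.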
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/Squeeze:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/Squeeze [Squeeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/Squeeze:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/activation/Sigmoid [Sigmoid] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/Squeeze:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/activation/Sigmoid [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/activation/Sigmoid:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/activation/Sigmoid [Sigmoid] outputs: 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/activation/Sigmoid:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/activation/mul [Mul] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/activation/Sigmoid:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/Squeeze:0 -> (-1, 16, 16, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/activation/Sigmoid:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/activation/mul [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/activation/mul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/activation/mul:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__906 [Transpose] [04/18/2022-02:33:56] [V] [TRT] Searching for input: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/activation/mul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__906 [Transpose] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/activation/mul:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__906 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__906 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__906:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__906:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__906 [Transpose] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__906:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise [Conv] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__906:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__906:0 -> (-1, 64, 16, 16)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 64, 16, 16) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise [04/18/2022-02:33:56] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 64, 16, 16) [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/separable_conv/BiasAdd [Conv] [04/18/2022-02:33:56] [V] [TRT] Searching for input: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/separable_conv/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise:0 -> (-1, 64, 16, 16)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 -> (64, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 -> (64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 64, 16, 16) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/separable_conv/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/separable_conv/BiasAdd [04/18/2022-02:33:56] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 64, 16, 16) [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/separable_conv/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/separable_conv/BiasAdd:0 [04/18/2022-02:33:56] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/separable_conv/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/separable_conv/BiasAdd:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/separable_conv/BiasAdd:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/batchnorm/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/batchnorm/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/separable_conv/BiasAdd:0 -> (-1, 64, 16, 16)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/batchnorm/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/batchnorm/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/batchnorm/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/batchnorm/FusedBatchNormV3 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/batchnorm/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: Transpose__6317 [Transpose] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] Transpose__6317 [Transpose] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: Transpose__6317 for ONNX node: Transpose__6317 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: Transpose__6317:0 for ONNX tensor: Transpose__6317:0 [04/18/2022-02:33:56] [V] [TRT] Transpose__6317 [Transpose] outputs: [Transpose__6317:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Unsqueeze__995 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: Transpose__6317:0 
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Unsqueeze__995 [Unsqueeze] inputs: [Transpose__6317:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 16, 16, 64), unsqueezing to: (_, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Unsqueeze__995 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Unsqueeze__995 [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Unsqueeze__995:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Unsqueeze__995:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Unsqueeze__995 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Unsqueeze__995:0 -> (-1, 16, 16, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: Transpose__6317:0 [04/18/2022-02:33:56] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912 [Unsqueeze] inputs: [Transpose__6317:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 16, 16, 64), unsqueezing to: (_, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912 [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. 
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912:0 -> (-1, 16, 16, 1, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__914 [Concat] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__914 [Concat] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912:0 -> (-1, 16, 16, 1, 64)[FLOAT]], 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912:0 -> (-1, 16, 16, 1, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__914 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__914 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__914:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__914:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__914 [Concat] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__914:0 -> (-1, 16, 16, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__914:0 [04/18/2022-02:33:56] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916 [Unsqueeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__914:0 -> (-1, 16, 16, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 16, 16, 2, 64), unsqueezing to: (_, _, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916 [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. 
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916:0 -> (-1, 16, 1, 16, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__917 [Concat] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__917 [Concat] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916:0 -> (-1, 16, 1, 16, 2, 64)[FLOAT]], 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916:0 -> (-1, 16, 1, 16, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__917 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__917 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__917:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__917:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__917 [Concat] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__917:0 -> (-1, 16, 2, 16, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape [Reshape] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__917:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: const_fold_opt__5788 [04/18/2022-02:33:56] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape [Reshape] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__917:0 -> (-1, 16, 2, 16, 2, 64)[FLOAT]], [const_fold_opt__5788 -> (4)[INT32]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape [Reshape] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/stack_Unsqueeze__944 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 [04/18/2022-02:33:56] [V] [TRT] 
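The Unsqueeze → Concat → Unsqueeze → Concat → Reshape chain just parsed, taking `(-1, 16, 16, 64)` to `(-1, 32, 32, 64)`, is the exporter's expansion of 2x nearest-neighbor upsampling in NHWC layout: each pixel is duplicated along a new width axis, then along a new height axis, and the repeats are folded back into H and W by the final Reshape. A minimal NumPy sketch of the equivalent transform (the function name is mine, not from the log):

```python
import numpy as np

def nearest_neighbor_upsample_x2(x):
    """2x nearest-neighbor upsampling for an NHWC tensor, expressed with the
    same Unsqueeze/Concat/Reshape steps seen in the trtexec parse log."""
    n, h, w, c = x.shape
    t = np.expand_dims(x, axis=3)            # (n, h, w, 1, c)  - Unsqueeze__912
    t = np.concatenate([t, t], axis=3)       # (n, h, w, 2, c)  - Concat__914
    t = np.expand_dims(t, axis=2)            # (n, h, 1, w, 2, c) - Unsqueeze__916
    t = np.concatenate([t, t], axis=2)       # (n, h, 2, w, 2, c) - Concat__917
    return t.reshape(n, 2 * h, 2 * w, c)     # (n, 2h, 2w, c)   - Reshape

x = np.arange(4, dtype=np.float32).reshape(1, 2, 2, 1)
y = nearest_neighbor_upsample_x2(x)
print(y.shape)  # (1, 4, 4, 1)
```

Merging the `(h, 2)` and `(w, 2)` axis pairs in C order gives `y[n, 2i+a, 2j+b, c] = x[n, i, j, c]`, i.e. exact pixel duplication, matching the `(-1, 16, 2, 16, 2, 64)` → `(-1, 32, 32, 64)` shapes in the log.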
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/stack_Unsqueeze__944 [Unsqueeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 32, 32, 64), unsqueezing to: (_, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/stack_Unsqueeze__944 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/stack_Unsqueeze__944 [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. 
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/stack_Unsqueeze__944:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/stack_Unsqueeze__944:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/stack_Unsqueeze__944 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/stack_Unsqueeze__944:0 -> (-1, 32, 32, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/stack_Concat__945 [Concat] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/stack_Unsqueeze__943:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/stack_Unsqueeze__944:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/stack_Concat__945 [Concat] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/stack_Unsqueeze__943:0 -> (-1, 32, 32, 64, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/stack_Unsqueeze__944:0 -> (-1, 32, 32, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/stack_Concat__945 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/stack_Concat__945 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/stack_Concat__945:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/stack_Concat__945:0 [04/18/2022-02:33:56] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/stack_Concat__945 [Concat] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/stack_Concat__945:0 -> (-1, 32, 32, 64, 2)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul [MatMul] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/stack_Concat__945:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/truediv:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul [MatMul] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/stack_Concat__945:0 -> (-1, 32, 32, 64, 2)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/truediv:0 -> (2, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/truediv:0 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/truediv:0 [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. 
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul [MatMul] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul:0 -> (-1, 32, 32, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/Squeeze [Squeeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/Squeeze [Squeeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul:0 -> (-1, 32, 32, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 32, 32, 64, 1), squeezing to: (_, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/Squeeze for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/Squeeze [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. 
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/Squeeze:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/Squeeze [Squeeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/Squeeze:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/activation/Sigmoid [Sigmoid] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/Squeeze:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/activation/Sigmoid [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/activation/Sigmoid:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/activation/Sigmoid:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/activation/mul [Mul] [04/18/2022-02:33:56] 
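The `combine` block parsed above (stack_Concat → MatMul against the `(2, 1)` `truediv` tensor → Squeeze) is a weighted sum of two feature maps: the inputs are stacked on a trailing axis of size 2 and contracted against normalized per-input weights, which is the fast normalized fusion used in BiFPN-style networks. A minimal sketch, with hypothetical names and example weights (the log's actual weight values are baked into the `truediv` constant and are not shown):

```python
import numpy as np

def weighted_fuse(a, b, w):
    """Fuse two NHWC feature maps by stacking on a trailing axis and
    contracting against a (2, 1) weight column, as in the log's
    stack_Concat -> MatMul -> Squeeze pattern."""
    stacked = np.stack([a, b], axis=-1)   # (n, h, w, c, 2) - stack_Concat__945
    fused = stacked @ w                   # (n, h, w, c, 1) - MatMul with truediv:0
    return np.squeeze(fused, axis=-1)     # (n, h, w, c)    - Squeeze

a = np.ones((1, 2, 2, 3), dtype=np.float32)
b = 3 * np.ones((1, 2, 2, 3), dtype=np.float32)
w = np.array([[0.25], [0.75]], dtype=np.float32)  # assumed normalized weights
print(weighted_fuse(a, b, w)[0, 0, 0, 0])  # 0.25*1 + 0.75*3 = 2.5
```

The `(1, 1, 1, 2, 1)` shape in the log's "broadcasting input1" messages is this same `(2, 1)` weight column after TensorRT pads it with leading 1-dims to match the 5-D stacked input.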
[V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/activation/Sigmoid:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/Squeeze:0 -> (-1, 32, 32, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/activation/Sigmoid:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/activation/mul [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/activation/mul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/activation/mul:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__946 [Transpose] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/activation/mul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__946 [Transpose] inputs: 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/activation/mul:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__946 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__946 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__946:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__946:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__946 [Transpose] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__946:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise [Conv] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__946:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__946:0 -> (-1, 64, 32, 32)[FLOAT]], 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 64, 32, 32) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise [04/18/2022-02:33:56] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 64, 32, 32) [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/separable_conv/BiasAdd [Conv] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/separable_conv/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise:0 -> (-1, 64, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 -> (64, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 -> (64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 64, 32, 32) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/separable_conv/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/separable_conv/BiasAdd [04/18/2022-02:33:56] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 64, 32, 32) [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/separable_conv/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/separable_conv/BiasAdd:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/separable_conv/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/separable_conv/BiasAdd:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/batchnorm/FusedBatchNormV3 
[BatchNormalization] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/separable_conv/BiasAdd:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/batchnorm/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/batchnorm/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/separable_conv/BiasAdd:0 -> (-1, 64, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/batchnorm/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/batchnorm/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/batchnorm/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/batchnorm/FusedBatchNormV3 [04/18/2022-02:33:56] [V] [TRT] Registering 
tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/batchnorm/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: Transpose__6293 [Transpose] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] Transpose__6293 [Transpose] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: Transpose__6293 for ONNX node: Transpose__6293 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: Transpose__6293:0 for ONNX tensor: Transpose__6293:0 [04/18/2022-02:33:56] [V] [TRT] Transpose__6293 [Transpose] outputs: [Transpose__6293:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Unsqueeze__983 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: Transpose__6293:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Unsqueeze__983 [Unsqueeze] inputs: [Transpose__6293:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 32, 32, 64), unsqueezing to: (_, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Unsqueeze__983 for ONNX node: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Unsqueeze__983 [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Unsqueeze__983:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Unsqueeze__983:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Unsqueeze__983 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Unsqueeze__983:0 -> (-1, 32, 32, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: Transpose__6293:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953 [Unsqueeze] inputs: [Transpose__6293:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 32, 32, 64), unsqueezing to: (_, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering 
layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953
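The depthwise [Conv] / BiasAdd [Conv] pair parsed above is a depthwise-separable convolution that the ONNX exporter split into two nodes: a 3x3 depthwise convolution with a (64, 1, 3, 3) weight (one filter per channel, padding 1) followed by a 1x1 pointwise convolution with a (64, 64, 1, 1) weight plus bias. A minimal numpy sketch of that decomposition (function name and loop-based implementation are illustrative, not TensorRT's kernel):

```python
import numpy as np

def depthwise_separable_conv(x, dw_kernel, pw_kernel, bias):
    """Depthwise 3x3 (one filter per channel) followed by pointwise 1x1 conv.

    x:         (N, C, H, W)  input, NCHW as in the log
    dw_kernel: (C, 1, 3, 3)  one 3x3 filter per input channel
    pw_kernel: (K, C, 1, 1)  1x1 cross-channel mixing
    bias:      (K,)
    """
    n, c, h, w = x.shape
    # depthwise: pad H/W by 1, matching "prepadding: (1, 1), postpadding: (1, 1)"
    xp = np.pad(x, ((0, 0), (0, 0), (1, 1), (1, 1)))
    dw = np.zeros_like(x)
    for i in range(h):
        for j in range(w):
            patch = xp[:, :, i:i + 3, j:j + 3]                # (N, C, 3, 3)
            dw[:, :, i, j] = (patch * dw_kernel[:, 0]).sum(axis=(2, 3))
    # pointwise: a 1x1 conv is just a matmul over the channel axis
    pw = np.einsum('nchw,kc->nkhw', dw, pw_kernel[:, :, 0, 0])
    return pw + bias[None, :, None, None]
```

With identity kernels this reduces to `x + bias`, which makes the two-stage structure easy to sanity-check against a full 3x3 convolution's cost: C*9 + K*C multiplies per pixel instead of K*C*9.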
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953:0 -> (-1, 32, 32, 1, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__954 [Concat] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__954 [Concat] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953:0 -> (-1, 32, 32, 1, 64)[FLOAT]], 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953:0 -> (-1, 32, 32, 1, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__954 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__954 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__954:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__954:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__954 [Concat] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__954:0 -> (-1, 32, 32, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__954:0 [04/18/2022-02:33:56] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956 [Unsqueeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__954:0 -> (-1, 32, 32, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 32, 32, 2, 64), unsqueezing to: (_, _, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956:0 -> (-1, 32, 1, 32, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__957 [Concat] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__957 [Concat] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956:0 -> (-1, 32, 1, 32, 2, 64)[FLOAT]], 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956:0 -> (-1, 32, 1, 32, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__957 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__957 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__957:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__957:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__957 [Concat] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__957:0 -> (-1, 32, 2, 32, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape [Reshape] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__957:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: const_fold_opt__5809 [04/18/2022-02:33:56] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape [Reshape] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__957:0 -> (-1, 32, 2, 32, 2, 64)[FLOAT]], [const_fold_opt__5809 -> (4)[INT32]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape [Reshape] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/stack_Unsqueeze__972 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 [04/18/2022-02:33:56] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/stack_Unsqueeze__972 [Unsqueeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 64, 64, 64), unsqueezing to: (_, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/stack_Unsqueeze__972 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/stack_Unsqueeze__972
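The `nearest_neighbor_upsampling_x2` subgraph parsed above (stack_Unsqueeze__953 → stack_Concat__954 → stack_1_Unsqueeze__956 → stack_1_Concat__957 → Reshape) is how the TF Object Detection exporter expresses 2x nearest-neighbor upsampling without a Resize op: duplicate each pixel along two inserted axes, then flatten. A numpy sketch following the exact shapes in the log, NHWC (-1, 32, 32, 64) → (-1, 64, 64, 64):

```python
import numpy as np

def nearest_upsample_x2(x):
    """2x nearest-neighbor upsampling (NHWC), written as the same
    Unsqueeze/Concat/Reshape chain the ONNX graph uses."""
    n, h, w, c = x.shape
    t = x[:, :, :, None, :]               # (N, H, W, 1, C)    stack_Unsqueeze
    t = np.concatenate([t, t], axis=3)    # (N, H, W, 2, C)    stack_Concat
    t = t[:, :, None, :, :, :]            # (N, H, 1, W, 2, C) stack_1_Unsqueeze
    t = np.concatenate([t, t], axis=2)    # (N, H, 2, W, 2, C) stack_1_Concat
    return t.reshape(n, h * 2, w * 2, c)  # (N, 2H, 2W, C)     Reshape
```

Row-major reshape merges the (H, 2) and (W, 2) axis pairs, so each source pixel lands in a 2x2 block of the output, identical to repeating along H and W.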
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/stack_Unsqueeze__972:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/stack_Unsqueeze__972:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/stack_Unsqueeze__972 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/stack_Unsqueeze__972:0 -> (-1, 64, 64, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/stack_Concat__973 [Concat] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/stack_Unsqueeze__971:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/stack_Unsqueeze__972:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/stack_Concat__973 [Concat] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/stack_Unsqueeze__971:0 -> (-1, 64, 64, 64, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/stack_Unsqueeze__972:0 -> (-1, 64, 64, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/stack_Concat__973 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/stack_Concat__973 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/stack_Concat__973:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/stack_Concat__973:0 [04/18/2022-02:33:56] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/stack_Concat__973 [Concat] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/stack_Concat__973:0 -> (-1, 64, 64, 64, 2)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul [MatMul] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/stack_Concat__973:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/truediv:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul [MatMul] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/stack_Concat__973:0 -> (-1, 64, 64, 64, 2)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/truediv:0 -> (2, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/truediv:0 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/truediv:0 [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. 
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul [MatMul] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul:0 -> (-1, 64, 64, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/Squeeze [Squeeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/Squeeze [Squeeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul:0 -> (-1, 64, 64, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 64, 64, 64, 1), squeezing to: (_, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/Squeeze for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/Squeeze
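The `combine` subgraph parsed above (stack_Concat__973 → MatMul against truediv:0 → Squeeze) fuses two feature maps by stacking them into a trailing axis of size 2 and multiplying by a (2, 1) weight vector; TensorRT's "broadcasting input1" messages record it expanding that (2, 1) to (1, 1, 1, 2, 1) so the batched MatMul conforms. The `truediv` name suggests BiFPN-style fast fusion, where raw per-input weights are normalized by their sum; that normalization is an assumption here, as the log only shows the already-divided tensor. A numpy sketch (function name illustrative):

```python
import numpy as np

def combine(a, b, raw_w):
    """Fuse two NHWC feature maps with scalar weights, written as the
    stack/MatMul/Squeeze chain in the log.

    raw_w: (2,) unnormalized weights; dividing by their sum mimics the
    graph's truediv node (assumed BiFPN fast fusion).
    """
    w = (raw_w / raw_w.sum()).reshape(2, 1)  # truediv:0 -> (2, 1)
    stacked = np.stack([a, b], axis=-1)      # (N, H, W, C, 2)  stack_Concat
    # numpy, like TensorRT, broadcasts w across the leading batch axes
    fused = stacked @ w                      # (N, H, W, C, 1)  MatMul
    return fused.squeeze(-1)                 # (N, H, W, C)     Squeeze
```

So the whole subgraph is just `w0 * a + w1 * b` per element, expressed as a matmul so a single op handles any number of stacked inputs.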
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/Squeeze:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/Squeeze [Squeeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/Squeeze:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/activation/Sigmoid [Sigmoid] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/Squeeze:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/activation/Sigmoid [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/activation/Sigmoid:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/activation/Sigmoid [Sigmoid] outputs: 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/activation/Sigmoid:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/activation/mul [Mul] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/activation/Sigmoid:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/Squeeze:0 -> (-1, 64, 64, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/activation/Sigmoid:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/activation/mul [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/activation/mul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/activation/mul:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise__974 [Transpose] [04/18/2022-02:33:56] [V] [TRT] Searching for input: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/activation/mul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise__974 [Transpose] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/activation/mul:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise__974 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise__974 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise__974:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise__974:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise__974 [Transpose] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise__974:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise [Conv] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise__974:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise__974:0 -> (-1, 64, 64, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 64, 64, 64) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise [04/18/2022-02:33:56] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 64, 64, 64) [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/separable_conv/BiasAdd [Conv] [04/18/2022-02:33:56] [V] [TRT] Searching for input: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/separable_conv/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise:0 -> (-1, 64, 64, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 -> (64, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 -> (64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 64, 64, 64) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/separable_conv/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/separable_conv/BiasAdd [04/18/2022-02:33:56] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 64, 64, 64) [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/separable_conv/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/separable_conv/BiasAdd:0 [04/18/2022-02:33:56] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/separable_conv/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/separable_conv/BiasAdd:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/separable_conv/BiasAdd:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/batchnorm/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/batchnorm/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/separable_conv/BiasAdd:0 -> (-1, 64, 64, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/batchnorm/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/batchnorm/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: Transpose__5460 [Transpose] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] Transpose__5460 [Transpose] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: Transpose__5460 for ONNX node: Transpose__5460 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: Transpose__5460:0 for ONNX tensor: Transpose__5460:0 [04/18/2022-02:33:56] [V] [TRT] Transpose__5460 [Transpose] outputs: [Transpose__5460:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_1_dn_lvl_3/downsample_max_x2/MaxPool [MaxPool] [04/18/2022-02:33:56] [V] [TRT] Searching for input: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_1_dn_lvl_3/downsample_max_x2/MaxPool [MaxPool] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_1_dn_lvl_3/downsample_max_x2/MaxPool for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_1_dn_lvl_3/downsample_max_x2/MaxPool [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_1_dn_lvl_3/downsample_max_x2/MaxPool:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_1_dn_lvl_3/downsample_max_x2/MaxPool:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_1_dn_lvl_3/downsample_max_x2/MaxPool [MaxPool] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_1_dn_lvl_3/downsample_max_x2/MaxPool:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/stack_Unsqueeze__1082 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: Transpose__5460:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/stack_Unsqueeze__1082 [Unsqueeze] inputs: [Transpose__5460:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 64, 64, 64), unsqueezing to: (_, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/stack_Unsqueeze__1082 for ONNX node: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/stack_Unsqueeze__1082 [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/stack_Unsqueeze__1082:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/stack_Unsqueeze__1082:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/stack_Unsqueeze__1082 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/stack_Unsqueeze__1082:0 -> (-1, 64, 64, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_1_dn_lvl_3/downsample_max_x2/MaxPool__981 [Transpose] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_1_dn_lvl_3/downsample_max_x2/MaxPool:0 [04/18/2022-02:33:56] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_1_dn_lvl_3/downsample_max_x2/MaxPool__981 [Transpose] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_1_dn_lvl_3/downsample_max_x2/MaxPool:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_1_dn_lvl_3/downsample_max_x2/MaxPool__981 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_1_dn_lvl_3/downsample_max_x2/MaxPool__981 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_1_dn_lvl_3/downsample_max_x2/MaxPool__981:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_1_dn_lvl_3/downsample_max_x2/MaxPool__981:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_1_dn_lvl_3/downsample_max_x2/MaxPool__981 [Transpose] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_1_dn_lvl_3/downsample_max_x2/MaxPool__981:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Unsqueeze__984 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_1_dn_lvl_3/downsample_max_x2/MaxPool__981:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Unsqueeze__984 [Unsqueeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_1_dn_lvl_3/downsample_max_x2/MaxPool__981:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 32, 32, 64), unsqueezing to: (_, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Unsqueeze__984 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Unsqueeze__984 [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. 
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Unsqueeze__984:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Unsqueeze__984:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Unsqueeze__984 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Unsqueeze__984:0 -> (-1, 32, 32, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Concat__985 [Concat] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Unsqueeze__982:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Unsqueeze__983:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Unsqueeze__984:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Concat__985 [Concat] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Unsqueeze__982:0 -> (-1, 32, 32, 64, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Unsqueeze__983:0 -> (-1, 32, 32, 64, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Unsqueeze__984:0 -> (-1, 32, 32, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Concat__985 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Concat__985 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Concat__985:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Concat__985:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Concat__985 [Concat] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Concat__985:0 -> (-1, 32, 32, 64, 3)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/MatMul [MatMul] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Concat__985:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/truediv:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/MatMul [MatMul] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Concat__985:0 -> (-1, 32, 32, 64, 3)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/truediv:0 -> (3, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/truediv:0 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/truediv:0 [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/MatMul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/MatMul [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. 
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/MatMul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/MatMul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/MatMul [MatMul] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/MatMul:0 -> (-1, 32, 32, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/Squeeze [Squeeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/MatMul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/Squeeze [Squeeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/MatMul:0 -> (-1, 32, 32, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 32, 32, 64, 1), squeezing to: (_, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/Squeeze for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/Squeeze [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. 
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/Squeeze:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/Squeeze [Squeeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/Squeeze:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/activation/Sigmoid [Sigmoid] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/Squeeze:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/activation/Sigmoid [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/activation/Sigmoid:0 for ONNX tensor: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/activation/Sigmoid:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/activation/Sigmoid:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/activation/mul [Mul] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/activation/Sigmoid:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/Squeeze:0 -> (-1, 32, 32, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/activation/Sigmoid:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/activation/mul [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/activation/mul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/activation/mul:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] 
Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__986 [Transpose] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/activation/mul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__986 [Transpose] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/activation/mul:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__986 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__986 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__986:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__986:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__986 [Transpose] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__986:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise [Conv] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__986:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__986:0 -> (-1, 64, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 64, 32, 32) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise [04/18/2022-02:33:56] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 64, 32, 32) [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/separable_conv/BiasAdd 
[Conv] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/separable_conv/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise:0 -> (-1, 64, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 -> (64, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 -> (64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 64, 32, 32) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/separable_conv/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/separable_conv/BiasAdd [04/18/2022-02:33:56] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 64, 32, 32) [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/separable_conv/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/separable_conv/BiasAdd:0 [04/18/2022-02:33:56] 
[V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/separable_conv/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/separable_conv/BiasAdd:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/separable_conv/BiasAdd:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/batchnorm/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/batchnorm/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/separable_conv/BiasAdd:0 -> (-1, 64, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/batchnorm/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/batchnorm/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: Transpose__5463 [Transpose] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] Transpose__5463 [Transpose] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: Transpose__5463 for ONNX node: Transpose__5463 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: Transpose__5463:0 for ONNX tensor: Transpose__5463:0 [04/18/2022-02:33:56] [V] [TRT] Transpose__5463 [Transpose] outputs: [Transpose__5463:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_1_up_lvl_4/downsample_max_x2/MaxPool [MaxPool] [04/18/2022-02:33:56] [V] [TRT] Searching for input: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_1_up_lvl_4/downsample_max_x2/MaxPool [MaxPool] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_1_up_lvl_4/downsample_max_x2/MaxPool for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_1_up_lvl_4/downsample_max_x2/MaxPool [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_1_up_lvl_4/downsample_max_x2/MaxPool:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_1_up_lvl_4/downsample_max_x2/MaxPool:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_1_up_lvl_4/downsample_max_x2/MaxPool [MaxPool] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_1_up_lvl_4/downsample_max_x2/MaxPool:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Unsqueeze__1066 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: Transpose__5463:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Unsqueeze__1066 [Unsqueeze] inputs: [Transpose__5463:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 32, 32, 64), unsqueezing to: (_, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Unsqueeze__1066 for ONNX node: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Unsqueeze__1066 [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. 
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Unsqueeze__1066:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Unsqueeze__1066:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Unsqueeze__1066 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Unsqueeze__1066:0 -> (-1, 32, 32, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_1_up_lvl_4/downsample_max_x2/MaxPool__993 [Transpose] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_1_up_lvl_4/downsample_max_x2/MaxPool:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_1_up_lvl_4/downsample_max_x2/MaxPool__993 [Transpose] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_1_up_lvl_4/downsample_max_x2/MaxPool:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_1_up_lvl_4/downsample_max_x2/MaxPool__993 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_1_up_lvl_4/downsample_max_x2/MaxPool__993 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_1_up_lvl_4/downsample_max_x2/MaxPool__993:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_1_up_lvl_4/downsample_max_x2/MaxPool__993:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_1_up_lvl_4/downsample_max_x2/MaxPool__993 [Transpose] outputs: 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_1_up_lvl_4/downsample_max_x2/MaxPool__993:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Unsqueeze__996 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_1_up_lvl_4/downsample_max_x2/MaxPool__993:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Unsqueeze__996 [Unsqueeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_1_up_lvl_4/downsample_max_x2/MaxPool__993:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 16, 16, 64), unsqueezing to: (_, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Unsqueeze__996 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Unsqueeze__996 
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Unsqueeze__996:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Unsqueeze__996:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Unsqueeze__996 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Unsqueeze__996:0 -> (-1, 16, 16, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Concat__997 [Concat] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Unsqueeze__994:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Unsqueeze__995:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Unsqueeze__996:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Concat__997 [Concat] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Unsqueeze__994:0 -> (-1, 16, 16, 64, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Unsqueeze__995:0 -> (-1, 16, 16, 64, 1)[FLOAT]], 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Unsqueeze__996:0 -> (-1, 16, 16, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Concat__997 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Concat__997 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Concat__997:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Concat__997:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Concat__997 [Concat] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Concat__997:0 -> (-1, 16, 16, 64, 3)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/MatMul [MatMul] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Concat__997:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/truediv:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/MatMul [MatMul] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Concat__997:0 -> (-1, 16, 16, 64, 3)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/truediv:0 -> (3, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/truediv:0 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/truediv:0 [04/18/2022-02:33:56] [V] [TRT] Registering layer: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/MatMul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/MatMul [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/MatMul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/MatMul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/MatMul [MatMul] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/MatMul:0 -> (-1, 16, 16, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/Squeeze [Squeeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/MatMul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/Squeeze [Squeeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/MatMul:0 -> (-1, 16, 16, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 16, 16, 64, 1), squeezing to: (_, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/Squeeze for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/Squeeze 
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/Squeeze:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/Squeeze [Squeeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/Squeeze:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/activation/Sigmoid [Sigmoid] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/Squeeze:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/activation/Sigmoid [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/activation/Sigmoid:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/activation/Sigmoid:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/activation/mul [Mul] [04/18/2022-02:33:56] 
[V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/activation/Sigmoid:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/Squeeze:0 -> (-1, 16, 16, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/activation/Sigmoid:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/activation/mul [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/activation/mul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/activation/mul:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__998 [Transpose] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/activation/mul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__998 [Transpose] inputs: 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/activation/mul:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__998 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__998 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__998:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__998:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__998 [Transpose] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__998:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise [Conv] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__998:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__998:0 -> (-1, 64, 16, 16)[FLOAT]], 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 64, 16, 16) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise [04/18/2022-02:33:56] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 64, 16, 16) [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/separable_conv/BiasAdd [Conv] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/separable_conv/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise:0 -> (-1, 64, 16, 16)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 -> (64, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 -> (64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 64, 16, 16) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/separable_conv/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/separable_conv/BiasAdd [04/18/2022-02:33:56] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 64, 16, 16) [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/separable_conv/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/separable_conv/BiasAdd:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/separable_conv/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/separable_conv/BiasAdd:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3 
[BatchNormalization] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/separable_conv/BiasAdd:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/batchnorm/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/batchnorm/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/separable_conv/BiasAdd:0 -> (-1, 64, 16, 16)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/batchnorm/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/batchnorm/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3 [04/18/2022-02:33:56] [V] [TRT] Registering 
tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: Transpose__5465 [Transpose] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] Transpose__5465 [Transpose] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: Transpose__5465 for ONNX node: Transpose__5465 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: Transpose__5465:0 for ONNX tensor: Transpose__5465:0 [04/18/2022-02:33:56] [V] [TRT] Transpose__5465 [Transpose] outputs: [Transpose__5465:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/input_1_up_lvl_5/downsample_max_x2/MaxPool [MaxPool] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/input_1_up_lvl_5/downsample_max_x2/MaxPool [MaxPool] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/input_1_up_lvl_5/downsample_max_x2/MaxPool for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/input_1_up_lvl_5/downsample_max_x2/MaxPool [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/input_1_up_lvl_5/downsample_max_x2/MaxPool:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/input_1_up_lvl_5/downsample_max_x2/MaxPool:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/input_1_up_lvl_5/downsample_max_x2/MaxPool [MaxPool] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/input_1_up_lvl_5/downsample_max_x2/MaxPool:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1105 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: Transpose__5465:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1105 [Unsqueeze] inputs: [Transpose__5465:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 16, 16, 64), unsqueezing to: (_, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1105 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1105 
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1105:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1105:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1105 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1105:0 -> (-1, 16, 16, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/input_1_up_lvl_5/downsample_max_x2/MaxPool__1005 [Transpose] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/input_1_up_lvl_5/downsample_max_x2/MaxPool:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/input_1_up_lvl_5/downsample_max_x2/MaxPool__1005 [Transpose] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/input_1_up_lvl_5/downsample_max_x2/MaxPool:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/input_1_up_lvl_5/downsample_max_x2/MaxPool__1005 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/input_1_up_lvl_5/downsample_max_x2/MaxPool__1005 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/input_1_up_lvl_5/downsample_max_x2/MaxPool__1005:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/input_1_up_lvl_5/downsample_max_x2/MaxPool__1005:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/input_1_up_lvl_5/downsample_max_x2/MaxPool__1005 [Transpose] outputs: 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/input_1_up_lvl_5/downsample_max_x2/MaxPool__1005:0 -> (-1, 8, 8, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1008 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/input_1_up_lvl_5/downsample_max_x2/MaxPool__1005:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1008 [Unsqueeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/input_1_up_lvl_5/downsample_max_x2/MaxPool__1005:0 -> (-1, 8, 8, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 8, 8, 64), unsqueezing to: (_, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1008 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1008 [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. 
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1008:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1008:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1008 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1008:0 -> (-1, 8, 8, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Concat__1009 [Concat] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1006:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1007:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1008:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Concat__1009 [Concat] inputs: 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1006:0 -> (-1, 8, 8, 64, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1007:0 -> (-1, 8, 8, 64, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1008:0 -> (-1, 8, 8, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Concat__1009 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Concat__1009 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Concat__1009:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Concat__1009:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Concat__1009 [Concat] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Concat__1009:0 -> (-1, 8, 8, 64, 3)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/MatMul [MatMul] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Concat__1009:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/truediv:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/MatMul [MatMul] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Concat__1009:0 -> (-1, 8, 8, 64, 3)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/truediv:0 -> (3, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/truediv:0 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/truediv:0 [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/MatMul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/MatMul [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/MatMul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/MatMul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/MatMul [MatMul] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/MatMul:0 -> (-1, 8, 8, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/Squeeze [Squeeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/MatMul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/Squeeze [Squeeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/MatMul:0 -> (-1, 8, 8, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 8, 8, 64, 1), squeezing to: (_, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/Squeeze for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/Squeeze [04/18/2022-02:33:56] [I] [TRT] 
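The Unsqueeze → Concat → MatMul (against the `truediv:0` constant) → Squeeze chain just parsed for `node_08/1_up_lvl_6/combine` is how the exporter lowers BiFPN's weighted feature fusion to ONNX: each input map gains a trailing axis, the maps are concatenated along it, and a MatMul with the normalized weight column collapses it again. A NumPy sketch of the equivalent computation (function name and the eps value are illustrative assumptions, not from the log):

```python
import numpy as np

def fuse_features(feature_maps, raw_weights, eps=1e-4):
    """BiFPN-style fusion as unsqueeze -> concat -> matmul -> squeeze.

    feature_maps: list of N arrays, each (batch, H, W, C).
    raw_weights: (N,) weights, normalized as w_i / (sum_j w_j + eps);
    the reshaped (N, 1) column plays the role of the 'truediv:0' input.
    """
    stacked = np.concatenate([f[..., None] for f in feature_maps], axis=-1)  # (b,H,W,C,N)
    norm = (raw_weights / (raw_weights.sum() + eps)).reshape(-1, 1)          # (N,1)
    fused = np.matmul(stacked, norm)                                         # (b,H,W,C,1)
    return np.squeeze(fused, axis=-1)                                        # (b,H,W,C)

maps = [np.full((2, 8, 8, 64), v, np.float32) for v in (1.0, 2.0, 3.0)]
out = fuse_features(maps, np.array([1.0, 1.0, 1.0], np.float32))
print(out.shape)  # (2, 8, 8, 64)
```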
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/Squeeze:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/Squeeze [Squeeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/Squeeze:0 -> (-1, 8, 8, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/activation/Sigmoid [Sigmoid] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/Squeeze:0 -> (-1, 8, 8, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/activation/Sigmoid [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/activation/Sigmoid:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/activation/Sigmoid:0 -> (-1, 8, 8, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/activation/mul [Mul] [04/18/2022-02:33:56] [V] 
[TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/activation/Sigmoid:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/Squeeze:0 -> (-1, 8, 8, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/activation/Sigmoid:0 -> (-1, 8, 8, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/activation/mul [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/activation/mul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/activation/mul:0 -> (-1, 8, 8, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__1010 [Transpose] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/activation/mul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__1010 [Transpose] inputs: 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/activation/mul:0 -> (-1, 8, 8, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__1010 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__1010 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__1010:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__1010:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__1010 [Transpose] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__1010:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise [Conv] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__1010:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__1010:0 -> (-1, 64, 8, 8)[FLOAT]], 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise [04/18/2022-02:33:56] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 64, 8, 8) [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/separable_conv/BiasAdd [Conv] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/separable_conv/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 -> (64, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 -> (64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/separable_conv/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/separable_conv/BiasAdd [04/18/2022-02:33:56] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 64, 8, 8) [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/separable_conv/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/separable_conv/BiasAdd:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/separable_conv/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/separable_conv/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3 
[BatchNormalization] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/separable_conv/BiasAdd:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/batchnorm/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/batchnorm/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/separable_conv/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/batchnorm/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/batchnorm/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3 [04/18/2022-02:33:56] [V] [TRT] Registering 
tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: Transpose__5468 [Transpose] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] Transpose__5468 [Transpose] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: Transpose__5468 for ONNX node: Transpose__5468 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: Transpose__5468:0 for ONNX tensor: Transpose__5468:0 [04/18/2022-02:33:56] [V] [TRT] Transpose__5468 [Transpose] outputs: [Transpose__5468:0 -> (-1, 8, 8, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/input_1_up_lvl_6/downsample_max_x2/MaxPool [MaxPool] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/input_1_up_lvl_6/downsample_max_x2/MaxPool [MaxPool] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/input_1_up_lvl_6/downsample_max_x2/MaxPool for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/input_1_up_lvl_6/downsample_max_x2/MaxPool [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/input_1_up_lvl_6/downsample_max_x2/MaxPool:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/input_1_up_lvl_6/downsample_max_x2/MaxPool:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/input_1_up_lvl_6/downsample_max_x2/MaxPool [MaxPool] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/input_1_up_lvl_6/downsample_max_x2/MaxPool:0 -> (-1, 64, 4, 4)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/stack_Unsqueeze__1034 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: Transpose__5468:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/stack_Unsqueeze__1034 [Unsqueeze] inputs: [Transpose__5468:0 -> (-1, 8, 8, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 8, 8, 64), unsqueezing to: (_, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/stack_Unsqueeze__1034 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/stack_Unsqueeze__1034 [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. 
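The Sigmoid followed by Mul pair parsed earlier for `post_combine/activation` (where the Mul consumes both the raw Squeeze output and its Sigmoid) is the swish/SiLU activation, x · sigmoid(x). A one-line NumPy sketch:

```python
import numpy as np

def silu(x):
    # Swish / SiLU: the Sigmoid -> Mul pair in the graph, where Mul
    # takes both the pre-activation tensor and the Sigmoid output.
    return x * (1.0 / (1.0 + np.exp(-x)))

print(silu(np.array([-1.0, 0.0, 1.0])))
```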
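The two Conv nodes parsed for `post_combine/separable_conv` form one depthwise-separable convolution: a per-channel 3×3 depthwise kernel of shape (64, 1, 3, 3), then a 1×1 pointwise kernel of shape (64, 64, 1, 1) that also carries the bias. A naive NumPy sketch with the log's NCHW shapes (illustration only, deliberately unoptimized):

```python
import numpy as np

def depthwise_separable_conv(x, dw_kernel, pw_kernel, bias):
    """Naive NCHW depthwise (3x3, pad 1, stride 1) + pointwise (1x1) conv.

    x: (b, 64, 8, 8), dw_kernel: (64, 1, 3, 3),
    pw_kernel: (64, 64, 1, 1), bias: (64,) -- shapes as in the log.
    """
    b, c, h, w = x.shape
    xp = np.pad(x, ((0, 0), (0, 0), (1, 1), (1, 1)))
    dw = np.zeros_like(x)
    for ch in range(c):                     # one 3x3 filter per channel
        k = dw_kernel[ch, 0]
        for i in range(h):
            for j in range(w):
                dw[:, ch, i, j] = np.sum(xp[:, ch, i:i+3, j:j+3] * k, axis=(1, 2))
    # 1x1 conv = per-pixel matmul over channels, then the bias add
    pw = np.einsum('oc,bchw->bohw', pw_kernel[:, :, 0, 0], dw)
    return pw + bias[None, :, None, None]

x = np.random.rand(1, 64, 8, 8).astype(np.float32)
y = depthwise_separable_conv(x, np.random.rand(64, 1, 3, 3),
                             np.random.rand(64, 64, 1, 1), np.zeros(64))
print(y.shape)  # (1, 64, 8, 8)
```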
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/stack_Unsqueeze__1034:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/stack_Unsqueeze__1034:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/stack_Unsqueeze__1034 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/stack_Unsqueeze__1034:0 -> (-1, 8, 8, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/input_1_up_lvl_6/downsample_max_x2/MaxPool__1017 [Transpose] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/input_1_up_lvl_6/downsample_max_x2/MaxPool:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/input_1_up_lvl_6/downsample_max_x2/MaxPool__1017 [Transpose] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/input_1_up_lvl_6/downsample_max_x2/MaxPool:0 -> (-1, 64, 4, 4)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/input_1_up_lvl_6/downsample_max_x2/MaxPool__1017 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/input_1_up_lvl_6/downsample_max_x2/MaxPool__1017 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/input_1_up_lvl_6/downsample_max_x2/MaxPool__1017:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/input_1_up_lvl_6/downsample_max_x2/MaxPool__1017:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/input_1_up_lvl_6/downsample_max_x2/MaxPool__1017 [Transpose] outputs: 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/input_1_up_lvl_6/downsample_max_x2/MaxPool__1017:0 -> (-1, 4, 4, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/stack_Unsqueeze__1019 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/input_1_up_lvl_6/downsample_max_x2/MaxPool__1017:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/stack_Unsqueeze__1019 [Unsqueeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/input_1_up_lvl_6/downsample_max_x2/MaxPool__1017:0 -> (-1, 4, 4, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 4, 4, 64), unsqueezing to: (_, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/stack_Unsqueeze__1019 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/stack_Unsqueeze__1019 [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. 
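The MaxPool/Transpose pairs here (e.g. `MaxPool` producing (-1, 64, 4, 4) NCHW, then `MaxPool__1017` transposing to (-1, 4, 4, 64) NHWC) come from tf2onnx bridging TensorFlow's NHWC layout to ONNX's NCHW. A small NumPy sketch of the 2× max-pool downsample plus the layout transpose (toy input values):

```python
import numpy as np

def maxpool2x(x_nchw):
    # 2x2 max-pool, stride 2, on an NCHW tensor with even H and W.
    b, c, h, w = x_nchw.shape
    return x_nchw.reshape(b, c, h // 2, 2, w // 2, 2).max(axis=(3, 5))

x = np.arange(1 * 64 * 8 * 8, dtype=np.float32).reshape(1, 64, 8, 8)
pooled = maxpool2x(x)                 # (1, 64, 4, 4), as in the log
nhwc = pooled.transpose(0, 2, 3, 1)   # (1, 4, 4, 64) after the Transpose
print(pooled.shape, nhwc.shape)
```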
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/stack_Unsqueeze__1019:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/stack_Unsqueeze__1019:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/stack_Unsqueeze__1019 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/stack_Unsqueeze__1019:0 -> (-1, 4, 4, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/stack_Concat__1020 [Concat] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/stack_Unsqueeze__1018:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/stack_Unsqueeze__1019:0 [04/18/2022-02:33:56] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/stack_Concat__1020 [Concat] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/stack_Unsqueeze__1018:0 -> (-1, 4, 4, 64, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/stack_Unsqueeze__1019:0 -> (-1, 4, 4, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/stack_Concat__1020 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/stack_Concat__1020 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/stack_Concat__1020:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/stack_Concat__1020:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/stack_Concat__1020 [Concat] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/stack_Concat__1020:0 -> (-1, 4, 4, 64, 2)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/MatMul [MatMul] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/stack_Concat__1020:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/truediv:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/MatMul [MatMul] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/stack_Concat__1020:0 -> (-1, 4, 4, 64, 2)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/truediv:0 -> (2, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/truediv:0 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/truediv:0 [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/MatMul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/MatMul [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,4,4,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/MatMul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/MatMul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/MatMul [MatMul] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/MatMul:0 -> (-1, 4, 4, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/Squeeze [Squeeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/MatMul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/Squeeze [Squeeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/MatMul:0 -> (-1, 4, 4, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 4, 4, 64, 1), squeezing to: (_, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/Squeeze for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/Squeeze [04/18/2022-02:33:56] [I] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,4,4,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. 
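The Unsqueeze → Concat → MatMul → Squeeze chain being parsed here is how the exporter lowers EfficientDet's fast normalized weighted feature fusion: the input feature maps are stacked on a trailing axis and multiplied by a small weight vector (the `truediv:0` tensor of shape (2, 1), i.e. the normalized fusion weights), and the "broadcasting input1 to make tensors conform" messages are informational, not errors. A minimal NumPy sketch of the same computation, using the shapes from the log (the weight values here are made up for illustration):

```python
import numpy as np

N = 8                                                 # batch, from --optShapes input:8x512x512x3
a = np.random.rand(N, 4, 4, 64).astype(np.float32)    # one input feature map (NHWC)
b = np.random.rand(N, 4, 4, 64).astype(np.float32)    # the other input feature map
w = np.array([[0.6], [0.4]], dtype=np.float32)        # normalized fusion weights, shape (2, 1)

# Unsqueeze both inputs to (N, 4, 4, 64, 1) and Concat on the last axis
stacked = np.concatenate([a[..., None], b[..., None]], axis=-1)   # (N, 4, 4, 64, 2)

# MatMul broadcasts the (2, 1) weights against the batched (..., 64, 2) operand,
# which is exactly what the [I] "broadcasting input1" messages describe
fused = stacked @ w                                   # (N, 4, 4, 64, 1)

# Squeeze drops the trailing singleton, giving the combined feature map
fused = np.squeeze(fused, axis=-1)                    # (N, 4, 4, 64)
```

Elementwise this is just `0.6 * a + 0.4 * b`; expressing it as a stacked MatMul is what produces the 5-D intermediate shapes seen in the log.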
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/Squeeze:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/Squeeze [Squeeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/Squeeze:0 -> (-1, 4, 4, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/activation/Sigmoid [Sigmoid] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/Squeeze:0 -> (-1, 4, 4, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/activation/Sigmoid [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/activation/Sigmoid:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/activation/Sigmoid:0 -> (-1, 4, 4, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/activation/mul [Mul] [04/18/2022-02:33:56] [V] 
[TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/activation/Sigmoid:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/Squeeze:0 -> (-1, 4, 4, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/activation/Sigmoid:0 -> (-1, 4, 4, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/activation/mul [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/activation/mul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/activation/mul:0 -> (-1, 4, 4, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise__1021 [Transpose] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/activation/mul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise__1021 [Transpose] inputs: 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/activation/mul:0 -> (-1, 4, 4, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise__1021 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise__1021 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise__1021:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise__1021:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise__1021 [Transpose] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise__1021:0 -> (-1, 64, 4, 4)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise [Conv] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise__1021:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise__1021:0 -> (-1, 64, 4, 4)[FLOAT]], 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 64, 4, 4) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise [04/18/2022-02:33:56] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 64, 4, 4) [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise:0 -> (-1, 64, 4, 4)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/separable_conv/BiasAdd [Conv] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/separable_conv/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise:0 -> (-1, 64, 4, 4)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 -> (64, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 -> (64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 64, 4, 4) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/separable_conv/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/separable_conv/BiasAdd [04/18/2022-02:33:56] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 64, 4, 4) [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/separable_conv/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/separable_conv/BiasAdd:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/separable_conv/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/separable_conv/BiasAdd:0 -> (-1, 64, 4, 4)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/batchnorm/FusedBatchNormV3 
[BatchNormalization] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/separable_conv/BiasAdd:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/batchnorm/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/batchnorm/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/separable_conv/BiasAdd:0 -> (-1, 64, 4, 4)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/batchnorm/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/batchnorm/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/batchnorm/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/batchnorm/FusedBatchNormV3 [04/18/2022-02:33:56] [V] [TRT] Registering 
tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/batchnorm/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 4, 4)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: Transpose__6315 [Transpose] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] Transpose__6315 [Transpose] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 4, 4)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: Transpose__6315 for ONNX node: Transpose__6315 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: Transpose__6315:0 for ONNX tensor: Transpose__6315:0 [04/18/2022-02:33:56] [V] [TRT] Transpose__6315 [Transpose] outputs: [Transpose__6315:0 -> (-1, 4, 4, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/stack_Unsqueeze__1129 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: Transpose__6315:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/stack_Unsqueeze__1129 [Unsqueeze] inputs: [Transpose__6315:0 -> (-1, 4, 4, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 4, 4, 64), unsqueezing to: (_, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/stack_Unsqueeze__1129 for ONNX node: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/stack_Unsqueeze__1129 
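The post_combine sequence parsed above is the standard EfficientDet block: a swish activation (the Sigmoid → Mul pair, x·σ(x)), then a depthwise 3×3 convolution (weights (64, 1, 3, 3)) followed by a pointwise 1×1 convolution (weights (64, 64, 1, 1)) with bias and batch norm; the surrounding Transposes shuttle between the model's NHWC layout and the NCHW layout the Conv layers use. A rough NumPy sketch of the activation and the depthwise step, with shapes from the log (function names are mine, weights random, loop-based for clarity rather than speed):

```python
import numpy as np

def swish(x):
    # Sigmoid -> Mul pair from the log: x * sigmoid(x) == x / (1 + e^-x)
    return x / (1.0 + np.exp(-x))

def depthwise_conv3x3(x, w):
    # x: (N, C, H, W) in NCHW, w: (C, 1, 3, 3): one 3x3 kernel per channel
    _, _, h, wid = x.shape
    xp = np.pad(x, ((0, 0), (0, 0), (1, 1), (1, 1)))  # prepadding (1, 1), postpadding (1, 1)
    out = np.zeros_like(x)
    for i in range(h):
        for j in range(wid):
            # (N, C, 3, 3) window times per-channel kernel (C, 3, 3), summed over the window
            out[:, :, i, j] = (xp[:, :, i:i + 3, j:j + 3] * w[:, 0]).sum(axis=(2, 3))
    return out

x = np.random.rand(1, 64, 4, 4).astype(np.float32)     # NCHW, after the Transpose
w_dw = np.random.rand(64, 1, 3, 3).astype(np.float32)  # depthwise kernel (ReadVariableOp:0)
y = depthwise_conv3x3(swish(x), w_dw)                  # (1, 64, 4, 4), matching the log
```

Splitting the conv into depthwise 3×3 plus pointwise 1×1 is what makes the two Conv nodes appear back-to-back in the parse, each with its own weight tensor.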
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/stack_Unsqueeze__1129:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/stack_Unsqueeze__1129:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/stack_Unsqueeze__1129 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/stack_Unsqueeze__1129:0 -> (-1, 4, 4, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1028 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: Transpose__6315:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1028 [Unsqueeze] inputs: [Transpose__6315:0 -> (-1, 4, 4, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 4, 4, 64), unsqueezing to: (_, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1028 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1028 [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. 
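The nearest_neighbor_upsampling_x2 subgraph being parsed here (Unsqueeze → Concat → Unsqueeze → Concat → Reshape) is the exporter's nearest-neighbor ×2 upsample: each pixel is duplicated along a new axis after W, each row along a new axis after H, and the final Reshape (driven by the folded constant `const_fold_opt__5829`) collapses the pairs back into the spatial dims, turning the 4×4 P7 map into 8×8. An equivalent NumPy sketch with the shapes from the log:

```python
import numpy as np

x = np.random.rand(8, 4, 4, 64).astype(np.float32)       # NHWC input, 4x4 level

# stack_Unsqueeze/stack_Concat: duplicate each pixel along a new axis after W
t = np.concatenate([x[:, :, :, None, :]] * 2, axis=3)    # (8, 4, 4, 2, 64)
# stack_1_Unsqueeze/stack_1_Concat: duplicate each row along a new axis after H
t = np.concatenate([t[:, :, None, :, :, :]] * 2, axis=2) # (8, 4, 2, 4, 2, 64)
# Reshape folds the (H, 2) and (W, 2) pairs into doubled spatial dims
up = t.reshape(8, 8, 8, 64)                              # (N, 2H, 2W, C)
```

This is the same result as repeating the tensor twice along H and twice along W (i.e. nearest-neighbor interpolation), just expressed with ops the ONNX exporter could emit without a Resize node.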
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1028:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1028:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1028 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1028:0 -> (-1, 4, 4, 1, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1029 [Concat] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1028:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1028:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1029 [Concat] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1028:0 -> (-1, 4, 4, 1, 64)[FLOAT]], 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1028:0 -> (-1, 4, 4, 1, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1029 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1029 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1029:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1029:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1029 [Concat] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1029:0 -> (-1, 4, 4, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1030 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1029:0 [04/18/2022-02:33:56] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1030 [Unsqueeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1029:0 -> (-1, 4, 4, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 4, 4, 2, 64), unsqueezing to: (_, _, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1030 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1030 
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1030:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1030:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1030 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1030:0 -> (-1, 4, 1, 4, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1032 [Concat] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1030:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1030:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1032 [Concat] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1030:0 -> (-1, 4, 1, 4, 2, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1030:0 -> (-1, 4, 1, 4, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1032 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1032 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1032:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1032:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1032 [Concat] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1032:0 -> (-1, 4, 2, 4, 2, 64)[FLOAT]], [04/18/2022-02:33:56] 
[V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape [Reshape] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1032:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: const_fold_opt__5829 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape [Reshape] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1032:0 -> (-1, 4, 2, 4, 2, 64)[FLOAT]], [const_fold_opt__5829 -> (4)[INT32]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape [Reshape] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 -> 
(-1, 8, 8, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/stack_Unsqueeze__1035 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/stack_Unsqueeze__1035 [Unsqueeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 -> (-1, 8, 8, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 8, 8, 64), unsqueezing to: (_, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/stack_Unsqueeze__1035 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/stack_Unsqueeze__1035 [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. 
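The Unsqueeze/Concat/Reshape chain parsed above is how the exported graph implements nearest_neighbor_upsampling_x2: each spatial cell is duplicated along two inserted axes, then a Reshape folds the duplicates into the doubled height and width (here (-1, 4, 4, 64) -> (-1, 8, 8, 64)). A minimal NumPy sketch of that pattern (the function name is mine, not from the graph):

```python
import numpy as np

# Sketch of the exported nearest_neighbor_upsampling_x2 pattern in the log:
# Unsqueeze -> Concat (twice) -> Reshape, instead of a single Resize op.
def nn_upsample_x2(x):
    n, h, w, c = x.shape                    # NHWC, as the graph parses it
    y = np.expand_dims(x, 3)                # (N, H, W, 1, C)    stack_Unsqueeze
    y = np.concatenate([y, y], axis=3)      # (N, H, W, 2, C)    stack_Concat
    y = np.expand_dims(y, 2)                # (N, H, 1, W, 2, C) stack_1_Unsqueeze
    y = np.concatenate([y, y], axis=2)      # (N, H, 2, W, 2, C) stack_1_Concat
    return y.reshape(n, 2 * h, 2 * w, c)    # final Reshape to (-1, 2H, 2W, C)

x = np.arange(2 * 4 * 4 * 64, dtype=np.float32).reshape(2, 4, 4, 64)
up = nn_upsample_x2(x)                      # (2, 8, 8, 64), as in the log
```

The result equals repeating every pixel twice along H and W, i.e. `np.repeat(np.repeat(x, 2, axis=1), 2, axis=2)`.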
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,4,4,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/stack_Unsqueeze__1035:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/stack_Unsqueeze__1035:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/stack_Unsqueeze__1035 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/stack_Unsqueeze__1035:0 -> (-1, 8, 8, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/stack_Concat__1036 [Concat] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/stack_Unsqueeze__1034:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/stack_Unsqueeze__1035:0 [04/18/2022-02:33:56] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/stack_Concat__1036 [Concat] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/stack_Unsqueeze__1034:0 -> (-1, 8, 8, 64, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/stack_Unsqueeze__1035:0 -> (-1, 8, 8, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/stack_Concat__1036 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/stack_Concat__1036 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/stack_Concat__1036:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/stack_Concat__1036:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/stack_Concat__1036 [Concat] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/stack_Concat__1036:0 -> (-1, 8, 8, 64, 2)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/MatMul [MatMul] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/stack_Concat__1036:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/truediv:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/MatMul [MatMul] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/stack_Concat__1036:0 -> (-1, 8, 8, 64, 2)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/truediv:0 -> (2, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/truediv:0 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/truediv:0 [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/MatMul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/MatMul [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/MatMul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/MatMul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/MatMul [MatMul] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/MatMul:0 -> (-1, 8, 8, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/Squeeze [Squeeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/MatMul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/Squeeze [Squeeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/MatMul:0 -> (-1, 8, 8, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 8, 8, 64, 1), squeezing to: (_, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/Squeeze for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/Squeeze
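The stack_Concat -> MatMul -> Squeeze sequence registered for node_10 is BiFPN's weighted feature fusion: the inputs are stacked on a trailing axis and multiplied by the pre-normalized weight column combine/truediv:0 of shape (2, 1), which TensorRT broadcasts to [1,1,1,2,1] as the [I] messages report. A hedged NumPy sketch, assuming the weights are simply normalized to sum to 1 (the real EfficientDet "fast attention" also applies ReLU and an epsilon before the division):

```python
import numpy as np

# Hedged sketch of the combine/stack_Concat -> MatMul -> Squeeze fusion.
def combine(feats, w):
    stacked = np.stack(feats, axis=-1)        # (N, 8, 8, 64, k)  stack_Concat
    w_col = (w / w.sum()).reshape(-1, 1)      # (k, 1), stand-in for truediv:0
    fused = stacked @ w_col                   # MatMul; input1 broadcast to (1,1,1,k,1)
    return np.squeeze(fused, axis=-1)         # (N, 8, 8, 64) after Squeeze

a = np.ones((1, 8, 8, 64), dtype=np.float32)
b = 3 * np.ones((1, 8, 8, 64), dtype=np.float32)
out = combine([a, b], np.array([1.0, 1.0], dtype=np.float32))  # equal weights
```

With equal weights the fusion reduces to a plain average of the two feature maps.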
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/Squeeze:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/Squeeze [Squeeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/Squeeze:0 -> (-1, 8, 8, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/activation/Sigmoid [Sigmoid] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/Squeeze:0 -> (-1, 8, 8, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/activation/Sigmoid [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/activation/Sigmoid:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/activation/Sigmoid:0 -> (-1, 8, 8, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/activation/mul [Mul] [04/18/2022-02:33:56] [V] 
[TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/activation/Sigmoid:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/Squeeze:0 -> (-1, 8, 8, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/activation/Sigmoid:0 -> (-1, 8, 8, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/activation/mul [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/activation/mul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/activation/mul:0 -> (-1, 8, 8, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__1037 [Transpose] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/activation/mul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__1037 [Transpose] inputs: 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/activation/mul:0 -> (-1, 8, 8, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__1037 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__1037 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__1037:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__1037:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__1037 [Transpose] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__1037:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise [Conv] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__1037:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__1037:0 -> (-1, 64, 8, 8)[FLOAT]], 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise [04/18/2022-02:33:56] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 64, 8, 8) [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/separable_conv/BiasAdd [Conv] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/separable_conv/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 -> (64, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 -> (64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/separable_conv/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/separable_conv/BiasAdd [04/18/2022-02:33:56] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 64, 8, 8) [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/separable_conv/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/separable_conv/BiasAdd:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/separable_conv/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/separable_conv/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/batchnorm/FusedBatchNormV3 
[BatchNormalization] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/separable_conv/BiasAdd:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/batchnorm/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/batchnorm/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/separable_conv/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/batchnorm/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/batchnorm/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/batchnorm/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/batchnorm/FusedBatchNormV3 [04/18/2022-02:33:56] [V] [TRT] Registering 
tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/batchnorm/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: Transpose__6307 [Transpose] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] Transpose__6307 [Transpose] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: Transpose__6307 for ONNX node: Transpose__6307 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: Transpose__6307:0 for ONNX tensor: Transpose__6307:0 [04/18/2022-02:33:56] [V] [TRT] Transpose__6307 [Transpose] outputs: [Transpose__6307:0 -> (-1, 8, 8, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/stack_Unsqueeze__1118 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: Transpose__6307:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/stack_Unsqueeze__1118 [Unsqueeze] inputs: [Transpose__6307:0 -> (-1, 8, 8, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 8, 8, 64), unsqueezing to: (_, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/stack_Unsqueeze__1118 for ONNX node: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/stack_Unsqueeze__1118 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/stack_Unsqueeze__1118:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/stack_Unsqueeze__1118:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/stack_Unsqueeze__1118 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/stack_Unsqueeze__1118:0 -> (-1, 8, 8, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1043 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: Transpose__6307:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1043 [Unsqueeze] inputs: [Transpose__6307:0 -> (-1, 8, 8, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 8, 8, 64), unsqueezing to: (_, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1043 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1043
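The post_combine block parsed just before this point is a Swish activation exported as Sigmoid + Mul, a Transpose from NHWC to NCHW, a depthwise 3x3 Conv (kernel (64, 1, 3, 3), stride 1, padding 1), and a pointwise 1x1 Conv with bias (the BiasAdd [Conv] with kernel (64, 64, 1, 1)), feeding FusedBatchNormV3. A NumPy sketch of those shapes (helper names are mine, not from the graph):

```python
import numpy as np

def swish(x):
    # activation/Sigmoid followed by activation/mul: x * sigmoid(x)
    return x * (1.0 / (1.0 + np.exp(-x)))

def depthwise3x3(x, k):
    # x: (N, C, H, W), k: (C, 1, 3, 3); stride 1, padding 1 -- matches
    # "Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1)" in the log
    n, c, h, w = x.shape
    xp = np.pad(x, ((0, 0), (0, 0), (1, 1), (1, 1)))
    out = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            out += xp[:, :, i:i + h, j:j + w] * k[:, 0, i, j][None, :, None, None]
    return out

def pointwise1x1(x, k, b):
    # k: (Cout, Cin, 1, 1), b: (Cout,) -- the BiasAdd [Conv] with a 1x1 kernel
    return np.einsum('nchw,oc->nohw', x, k[:, :, 0, 0]) + b[None, :, None, None]

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 8, 8, 64)).astype(np.float32)   # NHWC, as parsed
y = swish(x).transpose(0, 3, 1, 2)                          # depthwise__1037 [Transpose]
y = depthwise3x3(y, rng.standard_normal((64, 1, 3, 3)).astype(np.float32))
y = pointwise1x1(y, rng.standard_normal((64, 64, 1, 1)).astype(np.float32),
                 np.zeros(64, dtype=np.float32))            # (1, 64, 8, 8)
```

Splitting the separable conv this way is why the log shows two Conv nodes per post_combine block instead of one.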
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1043:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1043:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1043 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1043:0 -> (-1, 8, 8, 1, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1045 [Concat] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1043:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1043:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1045 [Concat] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1043:0 -> (-1, 8, 8, 1, 64)[FLOAT]], 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1043:0 -> (-1, 8, 8, 1, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1045 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1045 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1045:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1045:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1045 [Concat] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1045:0 -> (-1, 8, 8, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1046 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1045:0 [04/18/2022-02:33:56] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1046 [Unsqueeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1045:0 -> (-1, 8, 8, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 8, 8, 2, 64), unsqueezing to: (_, _, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1046 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1046
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1046:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1046:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1046 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1046:0 -> (-1, 8, 1, 8, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1048 [Concat] [04/18/2022-02:33:56] [V] [TRT] Searching for input:
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1046:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1046:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1048 [Concat] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1046:0 -> (-1, 8, 1, 8, 2, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1046:0 -> (-1, 8, 1, 8, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1048 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1048 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1048:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1048:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1048 [Concat] outputs: 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1048:0 -> (-1, 8, 2, 8, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape [Reshape] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1048:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: const_fold_opt__6102 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape [Reshape] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1048:0 -> (-1, 8, 2, 8, 2, 64)[FLOAT]], [const_fold_opt__6102 -> (4)[INT32]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 [04/18/2022-02:33:56] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape [Reshape] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/stack_Unsqueeze__1051 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/stack_Unsqueeze__1051 [Unsqueeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 16, 16, 64), unsqueezing to: (_, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/stack_Unsqueeze__1051 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/stack_Unsqueeze__1051
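The Unsqueeze → Concat → Unsqueeze → Concat → Reshape chain just parsed is TensorFlow's `nearest_neighbor_upsampling_x2` lowered to ONNX: every pixel is duplicated into a 2x2 block, taking (N, 8, 8, 64) to (N, 16, 16, 64). A rough numpy equivalent (not TensorRT code; batch size is assumed, other sizes follow the logged shapes):

```python
import numpy as np

# Duplicate each pixel into a 2x2 block via the same unsqueeze/concat/reshape trick.
x = np.random.rand(2, 8, 8, 64).astype(np.float32)        # (N, H, W, C), N assumed

h = np.concatenate([x[:, :, :, None, :]] * 2, axis=3)     # (N, 8, 8, 2, 64)    ~ stack_Concat__1045
h = np.concatenate([h[:, :, None, :, :, :]] * 2, axis=2)  # (N, 8, 2, 8, 2, 64) ~ stack_1_Concat__1048
y = h.reshape(-1, 16, 16, 64)                             # ~ Reshape

assert y.shape == (2, 16, 16, 64)
assert np.array_equal(y[:, ::2, ::2, :], x)  # even rows/cols hold the source pixels
```

The final reshape interleaves the duplicated axes into H and W, which is why the logged output shape is (-1, 16, 16, 64).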
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/stack_Unsqueeze__1051:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/stack_Unsqueeze__1051:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/stack_Unsqueeze__1051 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/stack_Unsqueeze__1051:0 -> (-1, 16, 16, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/stack_Concat__1052 [Concat] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1105:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/stack_Unsqueeze__1051:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/stack_Concat__1052 [Concat] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1105:0 -> (-1, 16, 16, 64, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/stack_Unsqueeze__1051:0 -> (-1, 16, 16, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/stack_Concat__1052 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/stack_Concat__1052 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/stack_Concat__1052:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/stack_Concat__1052:0 [04/18/2022-02:33:56] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/stack_Concat__1052 [Concat] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/stack_Concat__1052:0 -> (-1, 16, 16, 64, 2)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/MatMul [MatMul] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/stack_Concat__1052:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/truediv:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/MatMul [MatMul] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/stack_Concat__1052:0 -> (-1, 16, 16, 64, 2)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/truediv:0 -> (2, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/truediv:0 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/truediv:0 [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/MatMul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/MatMul [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. 
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/MatMul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/MatMul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/MatMul [MatMul] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/MatMul:0 -> (-1, 16, 16, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/Squeeze [Squeeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/MatMul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/Squeeze [Squeeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/MatMul:0 -> (-1, 16, 16, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 16, 16, 64, 1), squeezing to: (_, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/Squeeze for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/Squeeze
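The `combine` subgraph traced around here (stack_Concat__1052 → MatMul against the (2, 1) `truediv` weights → Squeeze) is BiFPN's weighted feature fusion, and the Sigmoid/Mul pair that follows is the swish activation. A hedged numpy sketch (not TensorRT code; the weight values are made up and assumed already normalized, i.e. the output of `combine/truediv`):

```python
import numpy as np

# Two 16x16x64 features are stacked on a new trailing axis and fused by a
# batched matmul with a normalized (2, 1) weight column, then swish is applied.
p_lat = np.random.rand(1, 16, 16, 64).astype(np.float32)  # lateral level-5 feature
p_up = np.random.rand(1, 16, 16, 64).astype(np.float32)   # upsampled level-6 feature

w = np.array([[0.6], [0.4]], dtype=np.float32)            # (2, 1), ~ combine/truediv:0 (assumed values)
stacked = np.stack([p_lat, p_up], axis=-1)                # (1, 16, 16, 64, 2) ~ stack_Concat__1052
fused = (stacked @ w).squeeze(-1)                         # ~ combine/MatMul + combine/Squeeze
act = fused * (1.0 / (1.0 + np.exp(-fused)))              # swish: ~ activation/Sigmoid * activation/mul

assert np.allclose(fused, 0.6 * p_lat + 0.4 * p_up, atol=1e-5)
```

Expressing the weighted sum as a (64, 2) x (2, 1) matmul is what produces the broadcast messages scattered through this log.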
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/Squeeze:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/Squeeze [Squeeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/Squeeze:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/activation/Sigmoid [Sigmoid] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/Squeeze:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/activation/Sigmoid [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/activation/Sigmoid:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/activation/Sigmoid:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/activation/mul [Mul] [04/18/2022-02:33:56] 
[V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/activation/Sigmoid:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/Squeeze:0 -> (-1, 16, 16, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/activation/Sigmoid:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/activation/mul [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/activation/mul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/activation/mul:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__1053 [Transpose] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/activation/mul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__1053 [Transpose] inputs: 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/activation/mul:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__1053 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__1053 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__1053:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__1053:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__1053 [Transpose] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__1053:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise [Conv] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__1053:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__1053:0 -> (-1, 64, 16, 16)[FLOAT]], 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 64, 16, 16) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise [04/18/2022-02:33:56] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 64, 16, 16) [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/separable_conv/BiasAdd [Conv] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/separable_conv/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise:0 -> (-1, 64, 16, 16)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 -> (64, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 -> (64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 64, 16, 16) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/separable_conv/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/separable_conv/BiasAdd [04/18/2022-02:33:56] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 64, 16, 16) [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/separable_conv/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/separable_conv/BiasAdd:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/separable_conv/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/separable_conv/BiasAdd:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/batchnorm/FusedBatchNormV3 
[BatchNormalization] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/separable_conv/BiasAdd:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/batchnorm/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/batchnorm/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/separable_conv/BiasAdd:0 -> (-1, 64, 16, 16)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/batchnorm/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/batchnorm/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/batchnorm/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/batchnorm/FusedBatchNormV3 [04/18/2022-02:33:56] [V] [TRT] Registering 
tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/batchnorm/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: Transpose__6313 [Transpose] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] Transpose__6313 [Transpose] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: Transpose__6313 for ONNX node: Transpose__6313 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: Transpose__6313:0 for ONNX tensor: Transpose__6313:0 [04/18/2022-02:33:56] [V] [TRT] Transpose__6313 [Transpose] outputs: [Transpose__6313:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1106 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: Transpose__6313:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1106 [Unsqueeze] inputs: [Transpose__6313:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 16, 16, 64), unsqueezing to: (_, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1106 for ONNX node: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1106 [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,4,4,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. 
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1106:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1106:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1106 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1106:0 -> (-1, 16, 16, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1060 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: Transpose__6313:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1060 [Unsqueeze] inputs: [Transpose__6313:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 16, 16, 64), unsqueezing to: (_, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1060 for ONNX node: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1060 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1060:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1060:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1060 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1060:0 -> (-1, 16, 16, 1, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1061 [Concat] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1060:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1060:0 [04/18/2022-02:33:56] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1061 [Concat] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1060:0 -> (-1, 16, 16, 1, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1060:0 -> (-1, 16, 16, 1, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1061 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1061 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1061:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1061:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1061 [Concat] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1061:0 -> (-1, 16, 16, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1062 [Unsqueeze] 
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1061:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1062 [Unsqueeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1061:0 -> (-1, 16, 16, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 16, 16, 2, 64), unsqueezing to: (_, _, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1062 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1062 
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1062:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1062:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1062 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1062:0 -> (-1, 16, 1, 16, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1064 [Concat] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1062:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1062:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1064 [Concat] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1062:0 -> (-1, 16, 1, 16, 2, 64)[FLOAT]], 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1062:0 -> (-1, 16, 1, 16, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1064 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1064 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1064:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1064:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1064 [Concat] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1064:0 -> (-1, 16, 2, 16, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape [Reshape] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1064:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: const_fold_opt__5788 [04/18/2022-02:33:56] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape [Reshape] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1064:0 -> (-1, 16, 2, 16, 2, 64)[FLOAT]], [const_fold_opt__5788 -> (4)[INT32]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape [Reshape] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Unsqueeze__1067 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 [04/18/2022-02:33:56] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Unsqueeze__1067 [Unsqueeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 32, 32, 64), unsqueezing to: (_, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Unsqueeze__1067 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Unsqueeze__1067 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Unsqueeze__1067:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Unsqueeze__1067:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Unsqueeze__1067 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Unsqueeze__1067:0 -> (-1, 32, 32, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Concat__1068 [Concat] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Unsqueeze__1066:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Unsqueeze__1067:0 [04/18/2022-02:33:56] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Concat__1068 [Concat] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Unsqueeze__1066:0 -> (-1, 32, 32, 64, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Unsqueeze__1067:0 -> (-1, 32, 32, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Concat__1068 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Concat__1068 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Concat__1068:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Concat__1068:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Concat__1068 [Concat] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Concat__1068:0 -> (-1, 32, 32, 64, 2)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/MatMul [MatMul] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Concat__1068:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/truediv:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/MatMul [MatMul] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Concat__1068:0 -> (-1, 32, 32, 64, 2)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/truediv:0 -> (2, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/truediv:0 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/truediv:0 [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/MatMul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/MatMul [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/MatMul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/MatMul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/MatMul [MatMul] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/MatMul:0 -> (-1, 32, 32, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/Squeeze [Squeeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/MatMul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/Squeeze [Squeeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/MatMul:0 -> (-1, 32, 32, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 32, 32, 64, 1), squeezing to: (_, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/Squeeze for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/Squeeze [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/Squeeze:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/Squeeze [Squeeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/Squeeze:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/activation/Sigmoid [Sigmoid] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/Squeeze:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/activation/Sigmoid [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/activation/Sigmoid:0 for ONNX tensor: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/activation/Sigmoid:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/activation/Sigmoid:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/activation/mul [Mul] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/activation/Sigmoid:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/Squeeze:0 -> (-1, 32, 32, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/activation/Sigmoid:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/activation/mul [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/activation/mul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/activation/mul:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] 
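The Sigmoid node followed by a Mul on the same Squeeze output is the standard ONNX decomposition of EfficientDet's swish (SiLU) activation, x * sigmoid(x). A minimal NumPy sketch of what that node pair computes together (tensor shape taken from the log):

```python
import numpy as np

def swish(x: np.ndarray) -> np.ndarray:
    """x * sigmoid(x): the Sigmoid + Mul pair the parser just registered."""
    return x * (1.0 / (1.0 + np.exp(-x)))

# Same shape as the combine/Squeeze output in the log: (-1, 32, 32, 64)
feat = np.zeros((1, 32, 32, 64), dtype=np.float32)
out = swish(feat)
```

TensorRT typically fuses this pattern into a single pointwise activation during the later optimization passes.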
Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__1069 [Transpose] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/activation/mul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__1069 [Transpose] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/activation/mul:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__1069 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__1069 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__1069:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__1069:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__1069 [Transpose] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__1069:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise [Conv] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__1069:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__1069:0 -> (-1, 64, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 64, 32, 32) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise [04/18/2022-02:33:56] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 64, 32, 32) [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/separable_conv/BiasAdd 
[Conv] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/separable_conv/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise:0 -> (-1, 64, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 -> (64, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 -> (64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 64, 32, 32) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/separable_conv/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/separable_conv/BiasAdd [04/18/2022-02:33:56] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 64, 32, 32) [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/separable_conv/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/separable_conv/BiasAdd:0 [04/18/2022-02:33:56] 
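The two Conv records above together form a depthwise-separable convolution: a 3x3 depthwise conv (weights (64, 1, 3, 3)) followed by a 1x1 pointwise conv with bias (weights (64, 64, 1, 1)). The "Using kernel / strides / prepadding / postpadding" lines fully determine the reported output sizes; a small helper reproducing that arithmetic (an illustrative sketch, not TensorRT's own code):

```python
def conv_out_dim(size: int, kernel: int, stride: int,
                 pad_pre: int, pad_post: int, dilation: int = 1) -> int:
    """Output length of one spatial dim, matching the parser's
    'Convolution output dimensions' reports."""
    effective_kernel = dilation * (kernel - 1) + 1
    return (size + pad_pre + pad_post - effective_kernel) // stride + 1

# Depthwise 3x3, stride 1, prepadding 1, postpadding 1: 32 -> 32
depthwise_out = conv_out_dim(32, kernel=3, stride=1, pad_pre=1, pad_post=1)

# Pointwise 1x1, stride 1, no padding: 32 -> 32
pointwise_out = conv_out_dim(32, kernel=1, stride=1, pad_pre=0, pad_post=0)
```

Both stages preserve the (-1, 64, 32, 32) spatial extent, exactly as the log shows.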
[V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/separable_conv/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/separable_conv/BiasAdd:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/separable_conv/BiasAdd:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/batchnorm/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/batchnorm/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/separable_conv/BiasAdd:0 -> (-1, 64, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/batchnorm/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/batchnorm/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/batchnorm/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/batchnorm/FusedBatchNormV3 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/batchnorm/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: Transpose__6291 [Transpose] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] Transpose__6291 [Transpose] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: Transpose__6291 for ONNX node: Transpose__6291 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: Transpose__6291:0 for ONNX tensor: Transpose__6291:0 [04/18/2022-02:33:56] [V] [TRT] Transpose__6291 [Transpose] outputs: [Transpose__6291:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/stack_Unsqueeze__1094 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: Transpose__6291:0 
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/stack_Unsqueeze__1094 [Unsqueeze] inputs: [Transpose__6291:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 32, 32, 64), unsqueezing to: (_, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/stack_Unsqueeze__1094 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/stack_Unsqueeze__1094
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/stack_Unsqueeze__1094:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/stack_Unsqueeze__1094:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/stack_Unsqueeze__1094 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/stack_Unsqueeze__1094:0 -> (-1, 32, 32, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1075 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: Transpose__6291:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1075 [Unsqueeze] inputs: [Transpose__6291:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 32, 32, 64), unsqueezing to: (_, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1075 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1075
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1075:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1075:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1075 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1075:0 -> (-1, 32, 32, 1, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1077 [Concat] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1075:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1075:0 [04/18/2022-02:33:56] [V] [TRT]
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1077 [Concat] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1075:0 -> (-1, 32, 32, 1, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1075:0 -> (-1, 32, 32, 1, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1077 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1077 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1077:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1077:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1077 [Concat] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1077:0 -> (-1, 32, 32, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1079 [Unsqueeze] 
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1077:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1079 [Unsqueeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1077:0 -> (-1, 32, 32, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 32, 32, 2, 64), unsqueezing to: (_, _, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1079 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1079
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1079:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1079:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1079 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1079:0 -> (-1, 32, 1, 32, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1080 [Concat] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1079:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1079:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1080 [Concat] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1079:0 -> (-1, 32, 1, 32, 2, 64)[FLOAT]], 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1079:0 -> (-1, 32, 1, 32, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1080 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1080 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1080:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1080:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1080 [Concat] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1080:0 -> (-1, 32, 2, 32, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape [Reshape] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1080:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: const_fold_opt__5809 [04/18/2022-02:33:56] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape [Reshape] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1080:0 -> (-1, 32, 2, 32, 2, 64)[FLOAT]], [const_fold_opt__5809 -> (4)[INT32]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape [Reshape] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/stack_Unsqueeze__1083 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 [04/18/2022-02:33:56] [V] [TRT] 
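The Unsqueeze / Concat / Unsqueeze / Concat / Reshape chain just parsed is how the exporter expresses TensorFlow's nearest_neighbor_upsampling_x2: duplicate every pixel along two new axes, then fold those axes into the spatial dims. A NumPy sketch on a small NHWC tensor (axes match the log; sizes scaled down from its (-1, 32, 32, 64) -> (-1, 64, 64, 64)):

```python
import numpy as np

x = np.arange(1 * 2 * 2 * 3, dtype=np.float32).reshape(1, 2, 2, 3)  # NHWC

# stack_Unsqueeze__1075 + stack_Concat__1077: duplicate along a new axis after W
a = np.expand_dims(x, axis=3)          # (1, 2, 2, 1, 3)
a = np.concatenate([a, a], axis=3)     # (1, 2, 2, 2, 3)

# stack_1_Unsqueeze__1079 + stack_1_Concat__1080: duplicate along a new axis after H
b = np.expand_dims(a, axis=2)          # (1, 2, 1, 2, 2, 3)
b = np.concatenate([b, b], axis=2)     # (1, 2, 2, 2, 2, 3)

# Reshape folds the duplicated axes into H and W: a 2x nearest-neighbor upsample
y = b.reshape(1, 4, 4, 3)

# Equivalent direct formulation
reference = x.repeat(2, axis=1).repeat(2, axis=2)
```

The Reshape's target shape comes in via the folded constant `const_fold_opt__5809` in the log, which plays the role of the hard-coded `(1, 4, 4, 3)` here.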
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/stack_Unsqueeze__1083 [Unsqueeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 64, 64, 64), unsqueezing to: (_, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/stack_Unsqueeze__1083 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/stack_Unsqueeze__1083
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/stack_Unsqueeze__1083:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/stack_Unsqueeze__1083:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/stack_Unsqueeze__1083 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/stack_Unsqueeze__1083:0 -> (-1, 64, 64, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/stack_Concat__1084 [Concat] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/stack_Unsqueeze__1082:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/stack_Unsqueeze__1083:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/stack_Concat__1084 [Concat] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/stack_Unsqueeze__1082:0 -> (-1, 64, 64, 64, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/stack_Unsqueeze__1083:0 -> (-1, 64, 64, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/stack_Concat__1084 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/stack_Concat__1084 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/stack_Concat__1084:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/stack_Concat__1084:0 [04/18/2022-02:33:56] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/stack_Concat__1084 [Concat] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/stack_Concat__1084:0 -> (-1, 64, 64, 64, 2)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/MatMul [MatMul] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/stack_Concat__1084:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/truediv:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/MatMul [MatMul] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/stack_Concat__1084:0 -> (-1, 64, 64, 64, 2)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/truediv:0 -> (2, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/truediv:0 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/truediv:0 [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/MatMul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/MatMul [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. 
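The Unsqueeze -> Concat -> MatMul(truediv:0) -> Squeeze chain parsed here is BiFPN's fast-normalized feature fusion: each input map gains a trailing singleton axis, the stack of k maps is multiplied by a (k, 1) column of normalized weights (the broadcast reported in the INFO messages), and the singleton result axis is squeezed away. A minimal NumPy sketch, assuming the usual EfficientDet normalization (the `weighted_fuse` name and eps value are illustrative assumptions):

```python
import numpy as np

def weighted_fuse(inputs, weights, eps=1e-4):
    """Fast-normalized fusion: Unsqueeze -> Concat -> MatMul -> Squeeze."""
    w = np.maximum(np.asarray(weights, dtype=np.float32), 0.0)
    w = (w / (w.sum() + eps)).reshape(-1, 1)  # (k, 1), the truediv:0 operand
    stacked = np.stack(inputs, axis=-1)       # trailing axis = Unsqueeze + Concat
    return np.squeeze(stacked @ w, axis=-1)   # broadcasted MatMul, then Squeeze

a = np.ones((1, 4, 4, 64), dtype=np.float32)
b = 3 * np.ones((1, 4, 4, 64), dtype=np.float32)
fused = weighted_fuse([a, b], [1.0, 1.0])  # ~= elementwise mean of a and b
```

With equal weights the result is (up to eps) the mean of the two maps, matching the (-1, 64, 64, 64, 2) @ (2, 1) -> (-1, 64, 64, 64, 1) shapes in the log.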
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/MatMul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/MatMul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/MatMul [MatMul] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/MatMul:0 -> (-1, 64, 64, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/Squeeze [Squeeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/MatMul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/Squeeze [Squeeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/MatMul:0 -> (-1, 64, 64, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 64, 64, 64, 1), squeezing to: (_, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/Squeeze for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/Squeeze
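After the fused map is squeezed back to (-1, 64, 64, 64), the parser registers a Sigmoid node followed by a Mul of the Sigmoid output with its own input — TensorFlow's swish/SiLU activation exported as two ONNX ops. Sketch:

```python
import numpy as np

def swish(x):
    """swish(x) = x * sigmoid(x), as the Sigmoid + Mul pair in the graph."""
    sig = 1.0 / (1.0 + np.exp(-x))  # the Sigmoid node
    return x * sig                  # the Mul node (reuses the Sigmoid input)
```

TensorRT later fuses this pair into a single activation kernel during tactic selection.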
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/Squeeze:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/Squeeze [Squeeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/Squeeze:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/activation/Sigmoid [Sigmoid] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/Squeeze:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/activation/Sigmoid [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/activation/Sigmoid:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/activation/Sigmoid [Sigmoid] outputs:
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/activation/Sigmoid:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/activation/mul [Mul] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/activation/Sigmoid:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/Squeeze:0 -> (-1, 64, 64, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/activation/Sigmoid:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/activation/mul [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/activation/mul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/activation/mul:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise__1085 [Transpose] [04/18/2022-02:33:56] [V] [TRT] Searching for input: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/activation/mul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise__1085 [Transpose] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/activation/mul:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise__1085 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise__1085 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise__1085:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise__1085:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise__1085 [Transpose] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise__1085:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise [Conv] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise__1085:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise__1085:0 -> (-1, 64, 64, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 64, 64, 64) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise [04/18/2022-02:33:56] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 64, 64, 64) [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/separable_conv/BiasAdd [Conv] [04/18/2022-02:33:56] [V] [TRT] Searching for input: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/separable_conv/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise:0 -> (-1, 64, 64, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 -> (64, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 -> (64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 64, 64, 64) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/separable_conv/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/separable_conv/BiasAdd [04/18/2022-02:33:56] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 64, 64, 64) [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/separable_conv/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/separable_conv/BiasAdd:0 [04/18/2022-02:33:56] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/separable_conv/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/separable_conv/BiasAdd:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/separable_conv/BiasAdd:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/batchnorm/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/batchnorm/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/separable_conv/BiasAdd:0 -> (-1, 64, 64, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/batchnorm/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/batchnorm/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: Transpose__5484 [Transpose] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] Transpose__5484 [Transpose] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: Transpose__5484 for ONNX node: Transpose__5484 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: Transpose__5484:0 for ONNX tensor: Transpose__5484:0 [04/18/2022-02:33:56] [V] [TRT] Transpose__5484 [Transpose] outputs: [Transpose__5484:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/input_2_dn_lvl_3/downsample_max_x2/MaxPool [MaxPool] [04/18/2022-02:33:56] [V] [TRT] Searching for input: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/input_2_dn_lvl_3/downsample_max_x2/MaxPool [MaxPool] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/input_2_dn_lvl_3/downsample_max_x2/MaxPool for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/input_2_dn_lvl_3/downsample_max_x2/MaxPool [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/input_2_dn_lvl_3/downsample_max_x2/MaxPool:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/input_2_dn_lvl_3/downsample_max_x2/MaxPool:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/input_2_dn_lvl_3/downsample_max_x2/MaxPool [MaxPool] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/input_2_dn_lvl_3/downsample_max_x2/MaxPool:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/stack_Unsqueeze__1193 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: Transpose__5484:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/stack_Unsqueeze__1193 [Unsqueeze] inputs: [Transpose__5484:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 64, 64, 64), unsqueezing to: (_, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/stack_Unsqueeze__1193 for ONNX node: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/stack_Unsqueeze__1193
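The depthwise [Conv] above (its (64, 1, 3, 3) kernel implies one filter per channel) followed by the 1x1 BiasAdd [Conv] with a (64, 64, 1, 1) kernel together form the separable_conv block of post_combine. A NumPy sketch of the pair, assuming stride 1 and the (1, 1) pre/postpadding reported in the log (function names are illustrative, not from the graph):

```python
import numpy as np

def depthwise_conv3x3(x, w):
    """x: (N, C, H, W); w: (C, 1, 3, 3); stride 1, padding (1, 1)."""
    n, c, h, wd = x.shape
    xp = np.pad(x, ((0, 0), (0, 0), (1, 1), (1, 1)))
    out = np.zeros_like(x)
    for i in range(3):  # one channel-wise multiply-accumulate per kernel tap
        for j in range(3):
            out += xp[:, :, i:i + h, j:j + wd] * w[:, 0, i, j][None, :, None, None]
    return out

def pointwise_conv(x, w, b):
    """1x1 conv + bias: w: (Cout, Cin, 1, 1), b: (Cout,) -- the BiasAdd Conv."""
    return np.einsum('nchw,oc->nohw', x, w[:, :, 0, 0]) + b[None, :, None, None]

def separable_conv(x, dw, pw, b):
    return pointwise_conv(depthwise_conv3x3(x, dw), pw, b)

# Identity kernels: center-tap depthwise + identity pointwise leave x unchanged.
x = np.arange(2 * 3 * 4 * 4, dtype=np.float32).reshape(2, 3, 4, 4)
dw = np.zeros((3, 1, 3, 3), dtype=np.float32)
dw[:, 0, 1, 1] = 1.0
pw = np.eye(3, dtype=np.float32).reshape(3, 3, 1, 1)
y = separable_conv(x, dw, pw, np.zeros(3, dtype=np.float32))
```

Splitting a 3x3 conv this way costs roughly C·(9 + Cout) multiplies per pixel instead of 9·C·Cout, which is the point of separable convs in EfficientDet.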
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/stack_Unsqueeze__1193:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/stack_Unsqueeze__1193:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/stack_Unsqueeze__1193 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/stack_Unsqueeze__1193:0 -> (-1, 64, 64, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/input_2_dn_lvl_3/downsample_max_x2/MaxPool__1092 [Transpose] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/input_2_dn_lvl_3/downsample_max_x2/MaxPool:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/input_2_dn_lvl_3/downsample_max_x2/MaxPool__1092 [Transpose] inputs:
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/input_2_dn_lvl_3/downsample_max_x2/MaxPool:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/input_2_dn_lvl_3/downsample_max_x2/MaxPool__1092 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/input_2_dn_lvl_3/downsample_max_x2/MaxPool__1092 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/input_2_dn_lvl_3/downsample_max_x2/MaxPool__1092:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/input_2_dn_lvl_3/downsample_max_x2/MaxPool__1092:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/input_2_dn_lvl_3/downsample_max_x2/MaxPool__1092 [Transpose] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/input_2_dn_lvl_3/downsample_max_x2/MaxPool__1092:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/stack_Unsqueeze__1095 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/input_2_dn_lvl_3/downsample_max_x2/MaxPool__1092:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/stack_Unsqueeze__1095 [Unsqueeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/input_2_dn_lvl_3/downsample_max_x2/MaxPool__1092:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 32, 32, 64), unsqueezing to: (_, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/stack_Unsqueeze__1095 for ONNX node: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/stack_Unsqueeze__1095
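The downsample_max_x2/MaxPool above halves the spatial dims, (-1, 64, 64, 64) -> (-1, 64, 32, 32), so node_13's level-3 output can feed node_14 at level 4. EfficientDet typically uses a 3x3/stride-2 pool with SAME padding; the log does not print the pooling parameters, so here is a minimal 2x2/stride-2 NumPy sketch that reproduces the same shape change (kernel choice is an assumption):

```python
import numpy as np

def maxpool_2x2(x):
    """2x2/stride-2 max pool (NCHW) via reshape; assumes even H and W."""
    n, c, h, w = x.shape
    return x.reshape(n, c, h // 2, 2, w // 2, 2).max(axis=(3, 5))

pooled = maxpool_2x2(np.zeros((1, 64, 64, 64), dtype=np.float32))
grid = maxpool_2x2(np.arange(16, dtype=np.float32).reshape(1, 1, 4, 4))
```

Each output pixel is the max over a disjoint 2x2 window, so a 64x64 map becomes 32x32 as in the MaxPool output shape above.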
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/stack_Unsqueeze__1095:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/stack_Unsqueeze__1095:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/stack_Unsqueeze__1095 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/stack_Unsqueeze__1095:0 -> (-1, 32, 32, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/stack_Concat__1096 [Concat] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Unsqueeze__1066:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/stack_Unsqueeze__1094:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input:
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/stack_Unsqueeze__1095:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/stack_Concat__1096 [Concat] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Unsqueeze__1066:0 -> (-1, 32, 32, 64, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/stack_Unsqueeze__1094:0 -> (-1, 32, 32, 64, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/stack_Unsqueeze__1095:0 -> (-1, 32, 32, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/stack_Concat__1096 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/stack_Concat__1096 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/stack_Concat__1096:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/stack_Concat__1096:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/stack_Concat__1096 [Concat] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/stack_Concat__1096:0 -> (-1, 32, 32, 64, 3)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/MatMul [MatMul] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/stack_Concat__1096:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/truediv:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/MatMul [MatMul] inputs: 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/stack_Concat__1096:0 -> (-1, 32, 32, 64, 3)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/truediv:0 -> (3, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/truediv:0 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/truediv:0 [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/MatMul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/MatMul [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/MatMul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/MatMul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/MatMul [MatMul] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/MatMul:0 -> (-1, 32, 32, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/Squeeze [Squeeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/MatMul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/Squeeze [Squeeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/MatMul:0 -> (-1, 32, 32, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 32, 32, 64, 1), squeezing to: (_, _, 
_, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/Squeeze for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/Squeeze
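The "broadcasting input1 to make tensors conform" messages above are informational, not errors: the parser pads the small (3, 1) or (2, 1) weight tensor on the right-hand side of each MatMul with leading 1-dimensions (e.g. to [1,1,1,3,1]) so its batch dimensions line up with the [-1,32,32,64,3] activation. A minimal NumPy sketch of the same batched-matmul broadcast (shapes taken from the log, N=2 standing in for the dynamic -1):

```python
import numpy as np

# Left side: stacked feature maps, (N, H, W, C, k) as in the log.
lhs = np.random.rand(2, 32, 32, 64, 3).astype(np.float32)
# Right side: the small per-input weight vector, shape (k, 1);
# it is implicitly treated as (1, 1, 1, k, 1) for the batched matmul.
rhs = np.random.rand(3, 1).astype(np.float32)

out = lhs @ rhs  # NumPy applies the same leading-dimension broadcast
assert out.shape == (2, 32, 32, 64, 1)
```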
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/Squeeze:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/Squeeze [Squeeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/Squeeze:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/activation/Sigmoid [Sigmoid] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/Squeeze:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/activation/Sigmoid [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/activation/Sigmoid:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/activation/Sigmoid:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/activation/mul [Mul] [04/18/2022-02:33:56] 
[V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/activation/Sigmoid:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/Squeeze:0 -> (-1, 32, 32, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/activation/Sigmoid:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/activation/mul [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/activation/mul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/activation/mul:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__1097 [Transpose] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/activation/mul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__1097 [Transpose] inputs: 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/activation/mul:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__1097 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__1097 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__1097:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__1097:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__1097 [Transpose] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__1097:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise [Conv] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__1097:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__1097:0 -> (-1, 64, 32, 32)[FLOAT]], 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 64, 32, 32) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise [04/18/2022-02:33:56] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 64, 32, 32) [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/separable_conv/BiasAdd [Conv] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/separable_conv/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise:0 -> (-1, 64, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 -> (64, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 -> (64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 64, 32, 32) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/separable_conv/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/separable_conv/BiasAdd [04/18/2022-02:33:56] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 64, 32, 32) [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/separable_conv/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/separable_conv/BiasAdd:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/separable_conv/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/separable_conv/BiasAdd:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3 
[BatchNormalization] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/separable_conv/BiasAdd:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/batchnorm/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/batchnorm/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/separable_conv/BiasAdd:0 -> (-1, 64, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/batchnorm/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/batchnorm/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3 [04/18/2022-02:33:56] [V] [TRT] Registering 
tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: Transpose__5485 [Transpose] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] Transpose__5485 [Transpose] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: Transpose__5485 for ONNX node: Transpose__5485 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: Transpose__5485:0 for ONNX tensor: Transpose__5485:0 [04/18/2022-02:33:56] [V] [TRT] Transpose__5485 [Transpose] outputs: [Transpose__5485:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/input_2_up_lvl_4/downsample_max_x2/MaxPool [MaxPool] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/input_2_up_lvl_4/downsample_max_x2/MaxPool [MaxPool] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/input_2_up_lvl_4/downsample_max_x2/MaxPool for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/input_2_up_lvl_4/downsample_max_x2/MaxPool [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/input_2_up_lvl_4/downsample_max_x2/MaxPool:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/input_2_up_lvl_4/downsample_max_x2/MaxPool:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/input_2_up_lvl_4/downsample_max_x2/MaxPool [MaxPool] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/input_2_up_lvl_4/downsample_max_x2/MaxPool:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/stack_Unsqueeze__1177 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: Transpose__5485:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/stack_Unsqueeze__1177 [Unsqueeze] inputs: [Transpose__5485:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 32, 32, 64), unsqueezing to: (_, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/stack_Unsqueeze__1177 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/stack_Unsqueeze__1177
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/stack_Unsqueeze__1177:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/stack_Unsqueeze__1177:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/stack_Unsqueeze__1177 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/stack_Unsqueeze__1177:0 -> (-1, 32, 32, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/input_2_up_lvl_4/downsample_max_x2/MaxPool__1104 [Transpose] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/input_2_up_lvl_4/downsample_max_x2/MaxPool:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/input_2_up_lvl_4/downsample_max_x2/MaxPool__1104 [Transpose] inputs:
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/input_2_up_lvl_4/downsample_max_x2/MaxPool:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/input_2_up_lvl_4/downsample_max_x2/MaxPool__1104 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/input_2_up_lvl_4/downsample_max_x2/MaxPool__1104 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/input_2_up_lvl_4/downsample_max_x2/MaxPool__1104:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/input_2_up_lvl_4/downsample_max_x2/MaxPool__1104:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/input_2_up_lvl_4/downsample_max_x2/MaxPool__1104 [Transpose] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/input_2_up_lvl_4/downsample_max_x2/MaxPool__1104:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1107 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/input_2_up_lvl_4/downsample_max_x2/MaxPool__1104:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1107 [Unsqueeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/input_2_up_lvl_4/downsample_max_x2/MaxPool__1104:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 16, 16, 64), unsqueezing to: (_, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1107 for ONNX node: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1107
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1107:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1107:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1107 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1107:0 -> (-1, 16, 16, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Concat__1108 [Concat] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1105:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1106:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1107:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Concat__1108 [Concat] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1105:0 -> (-1, 16, 16, 64, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1106:0 -> (-1, 16, 16, 64, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1107:0 -> (-1, 16, 16, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Concat__1108 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Concat__1108 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Concat__1108:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Concat__1108:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Concat__1108 [Concat] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Concat__1108:0 -> (-1, 16, 16, 64, 3)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/MatMul [MatMul] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Concat__1108:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/truediv:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/MatMul [MatMul] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Concat__1108:0 -> (-1, 16, 16, 64, 3)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/truediv:0 -> (3, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/truediv:0 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/truediv:0 [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/MatMul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/MatMul [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. 
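[Editor's note] The Unsqueeze → Concat → MatMul → Squeeze sequence the parser walks through here is how the BiFPN's weighted feature fusion exports to ONNX: the input feature maps are stacked on a new trailing axis and multiplied by the `(3, 1)` weight tensor (`combine/truediv:0`), which TensorRT broadcasts to `(1, 1, 1, 3, 1)` as the info messages report. A minimal numpy sketch of that shape flow, with illustrative (not the model's) fusion weights:

```python
import numpy as np

# Three level-5 feature maps, NHWC, matching the logged (-1, 16, 16, 64) shape.
feats = [np.random.rand(2, 16, 16, 64).astype(np.float32) for _ in range(3)]

# Unsqueeze each to (..., 64, 1), then Concat on the trailing axis -> (..., 64, 3).
stacked = np.concatenate([f[..., None] for f in feats], axis=-1)

# combine/truediv:0 is the (3, 1) fusion-weight tensor; in the real graph these
# are the learned, normalized BiFPN weights -- the values below are illustrative.
w = np.array([[0.5], [0.3], [0.2]], dtype=np.float32)

# MatMul with input1 broadcast to (1, 1, 1, 3, 1) -> output (..., 64, 1).
fused = stacked @ w

# Squeeze the trailing unit axis back to (batch, 16, 16, 64).
fused = fused.squeeze(-1)
```

The matrix product over the stacked axis is just the per-pixel weighted sum of the three inputs, which is why the exporter emits it as a MatMul against a `(3, 1)` column vector.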
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/MatMul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/MatMul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/MatMul [MatMul] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/MatMul:0 -> (-1, 16, 16, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/Squeeze [Squeeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/MatMul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/Squeeze [Squeeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/MatMul:0 -> (-1, 16, 16, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 16, 16, 64, 1), squeezing to: (_, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/Squeeze for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/Squeeze [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. 
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,4,4,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. 
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/Squeeze:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/Squeeze [Squeeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/Squeeze:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/activation/Sigmoid [Sigmoid] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/Squeeze:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/activation/Sigmoid [04/18/2022-02:33:56] [V] [TRT] 
Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/activation/Sigmoid:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/activation/Sigmoid:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/activation/mul [Mul] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/activation/Sigmoid:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/Squeeze:0 -> (-1, 16, 16, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/activation/Sigmoid:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/activation/mul [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/activation/mul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/activation/mul [Mul] outputs: 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/activation/mul:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__1109 [Transpose] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/activation/mul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__1109 [Transpose] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/activation/mul:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__1109 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__1109 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__1109:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__1109:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__1109 [Transpose] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__1109:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise [Conv] [04/18/2022-02:33:56] [V] [TRT] Searching for input: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__1109:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__1109:0 -> (-1, 64, 16, 16)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 64, 16, 16) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise [04/18/2022-02:33:56] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 64, 16, 16) [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise [Conv] outputs: 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/separable_conv/BiasAdd [Conv] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/separable_conv/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise:0 -> (-1, 64, 16, 16)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 -> (64, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 -> (64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 64, 16, 16) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/separable_conv/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/separable_conv/BiasAdd [04/18/2022-02:33:56] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 64, 16, 16) 
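[Editor's note] The two Conv entries above are the ONNX export of a depthwise-separable convolution: a `(64, 1, 3, 3)` depthwise kernel (one 3x3 filter per channel, padding 1, stride 1), followed by a `(64, 64, 1, 1)` pointwise kernel that also carries the bias (the `BiasAdd` node). A rough numpy sketch of the same factorization, using explicit broadcasting rather than a grouped-conv API:

```python
import numpy as np

def depthwise_conv3x3(x, w):
    """x: (N, C, H, W); w: (C, 1, 3, 3). Stride 1, 'same' padding, one filter per channel."""
    n, c, h, wd = x.shape
    xp = np.pad(x, ((0, 0), (0, 0), (1, 1), (1, 1)))
    out = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            # Each channel is correlated only with its own 3x3 filter tap.
            out += xp[:, :, i:i + h, j:j + wd] * w[:, 0, i, j][None, :, None, None]
    return out

def pointwise_conv(x, w, b):
    """w: (C_out, C_in, 1, 1), b: (C_out,) -- the 1x1 BiasAdd Conv in the log."""
    return np.einsum('nchw,oc->nohw', x, w[:, :, 0, 0]) + b[None, :, None, None]

x  = np.random.rand(2, 64, 16, 16).astype(np.float32)   # logged conv input shape
dw = np.random.rand(64, 1, 3, 3).astype(np.float32)     # depthwise ReadVariableOp:0
pw = np.random.rand(64, 64, 1, 1).astype(np.float32)    # pointwise ReadVariableOp_1:0
b  = np.zeros(64, dtype=np.float32)                     # BiasAdd/ReadVariableOp:0

y = pointwise_conv(depthwise_conv3x3(x, dw), pw, b)
```

The factorization is why the weights split as they do: 64x9 + 64x64 = 4,672 parameters versus 36,864 for a dense 64-to-64 3x3 convolution.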
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/separable_conv/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/separable_conv/BiasAdd:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/separable_conv/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/separable_conv/BiasAdd:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/separable_conv/BiasAdd:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/batchnorm/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/batchnorm/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/separable_conv/BiasAdd:0 -> (-1, 64, 16, 16)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/batchnorm/ReadVariableOp:0 -> (64)[FLOAT]], 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/batchnorm/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: Transpose__5488 [Transpose] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] Transpose__5488 [Transpose] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: Transpose__5488 for ONNX node: Transpose__5488 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: Transpose__5488:0 for ONNX tensor: Transpose__5488:0 [04/18/2022-02:33:56] [V] [TRT] Transpose__5488 [Transpose] outputs: [Transpose__5488:0 
-> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/input_2_up_lvl_5/downsample_max_x2/MaxPool [MaxPool] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/input_2_up_lvl_5/downsample_max_x2/MaxPool [MaxPool] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/input_2_up_lvl_5/downsample_max_x2/MaxPool for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/input_2_up_lvl_5/downsample_max_x2/MaxPool [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/input_2_up_lvl_5/downsample_max_x2/MaxPool:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/input_2_up_lvl_5/downsample_max_x2/MaxPool:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/input_2_up_lvl_5/downsample_max_x2/MaxPool [MaxPool] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/input_2_up_lvl_5/downsample_max_x2/MaxPool:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/stack_Unsqueeze__1161 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: Transpose__5488:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/stack_Unsqueeze__1161 [Unsqueeze] inputs: [Transpose__5488:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 16, 16, 64), 
unsqueezing to: (_, _, _, _, _)
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/stack_Unsqueeze__1161 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/stack_Unsqueeze__1161
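[Editor's note] The `downsample_max_x2/MaxPool` parsed above halves the level-5 map from 16x16 to the level-6 size of 8x8 (`(-1, 64, 16, 16)` -> `(-1, 64, 8, 8)`). The kernel and stride are not printed in this excerpt; a plain 2x2, stride-2 pool (an assumption -- EfficientDet implementations commonly use 3x3/stride-2 with 'same' padding, which yields the same output shape) reproduces the logged shape change:

```python
import numpy as np

def maxpool_x2(x):
    """2x2, stride-2 max pool over an NCHW tensor (kernel size is an assumption)."""
    n, c, h, w = x.shape
    # Split each spatial dim into (out, 2) and reduce the window axes.
    return x.reshape(n, c, h // 2, 2, w // 2, 2).max(axis=(3, 5))

x = np.random.rand(2, 64, 16, 16).astype(np.float32)
y = maxpool_x2(x)  # (-1, 64, 16, 16) -> (-1, 64, 8, 8), as in the log
```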
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/stack_Unsqueeze__1161:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/stack_Unsqueeze__1161:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/stack_Unsqueeze__1161 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/stack_Unsqueeze__1161:0 -> (-1, 16, 16, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/input_2_up_lvl_5/downsample_max_x2/MaxPool__1116 [Transpose] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/input_2_up_lvl_5/downsample_max_x2/MaxPool:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/input_2_up_lvl_5/downsample_max_x2/MaxPool__1116 [Transpose] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/input_2_up_lvl_5/downsample_max_x2/MaxPool:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/input_2_up_lvl_5/downsample_max_x2/MaxPool__1116 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/input_2_up_lvl_5/downsample_max_x2/MaxPool__1116 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/input_2_up_lvl_5/downsample_max_x2/MaxPool__1116:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/input_2_up_lvl_5/downsample_max_x2/MaxPool__1116:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/input_2_up_lvl_5/downsample_max_x2/MaxPool__1116 [Transpose] outputs: 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/input_2_up_lvl_5/downsample_max_x2/MaxPool__1116:0 -> (-1, 8, 8, 64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/stack_Unsqueeze__1119 [Unsqueeze]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/input_2_up_lvl_5/downsample_max_x2/MaxPool__1116:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/stack_Unsqueeze__1119 [Unsqueeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/input_2_up_lvl_5/downsample_max_x2/MaxPool__1116:0 -> (-1, 8, 8, 64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 8, 8, 64), unsqueezing to: (_, _, _, _, _)
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/stack_Unsqueeze__1119 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/stack_Unsqueeze__1119
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/stack_Unsqueeze__1119:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/stack_Unsqueeze__1119:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/stack_Unsqueeze__1119 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/stack_Unsqueeze__1119:0 -> (-1, 8, 8, 64, 1)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/stack_Concat__1120 [Concat]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/stack_Unsqueeze__1034:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/stack_Unsqueeze__1118:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/stack_Unsqueeze__1119:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/stack_Concat__1120 [Concat] inputs:
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/stack_Unsqueeze__1034:0 -> (-1, 8, 8, 64, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/stack_Unsqueeze__1118:0 -> (-1, 8, 8, 64, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/stack_Unsqueeze__1119:0 -> (-1, 8, 8, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/stack_Concat__1120 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/stack_Concat__1120 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/stack_Concat__1120:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/stack_Concat__1120:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/stack_Concat__1120 [Concat] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/stack_Concat__1120:0 -> (-1, 8, 8, 64, 3)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/MatMul [MatMul] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/stack_Concat__1120:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/truediv:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/MatMul [MatMul] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/stack_Concat__1120:0 -> (-1, 8, 8, 64, 3)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/truediv:0 -> (3, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/truediv:0 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/truediv:0 [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/MatMul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/MatMul [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/MatMul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/MatMul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/MatMul [MatMul] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/MatMul:0 -> (-1, 8, 8, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/Squeeze [Squeeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/MatMul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/Squeeze [Squeeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/MatMul:0 -> (-1, 8, 8, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 8, 8, 64, 1), squeezing to: (_, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/Squeeze for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/Squeeze [04/18/2022-02:33:56] [I] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,4,4,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. 
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE].
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/Squeeze:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/Squeeze:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/Squeeze [Squeeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/Squeeze:0 -> (-1, 8, 8, 64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/activation/Sigmoid [Sigmoid]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/Squeeze:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/Squeeze:0 -> (-1, 8, 8, 64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/activation/Sigmoid
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/activation/Sigmoid:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/activation/Sigmoid:0 -> (-1, 8, 8, 64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/activation/mul [Mul]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/Squeeze:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/activation/Sigmoid:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/Squeeze:0 -> (-1, 8, 8, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/activation/Sigmoid:0 -> (-1, 8, 8, 64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/activation/mul
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/activation/mul:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/activation/mul:0 -> (-1, 8, 8, 64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__1121 [Transpose]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/activation/mul:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__1121 [Transpose] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/activation/mul:0 -> (-1, 8, 8, 64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__1121 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__1121
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__1121:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__1121:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__1121 [Transpose] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__1121:0 -> (-1, 64, 8, 8)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise [Conv]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__1121:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__1121:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8)
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise
[04/18/2022-02:33:56] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64
[04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 64, 8, 8)
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise:0 -> (-1, 64, 8, 8)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/separable_conv/BiasAdd [Conv]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/separable_conv/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/separable_conv/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 -> (64, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 -> (64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8)
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/separable_conv/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/separable_conv/BiasAdd
[04/18/2022-02:33:56] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64
[04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 64, 8, 8)
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/separable_conv/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/separable_conv/BiasAdd:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/separable_conv/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/separable_conv/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/separable_conv/BiasAdd:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/batchnorm/ReadVariableOp:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/batchnorm/ReadVariableOp_1:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/separable_conv/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/batchnorm/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/batchnorm/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: Transpose__5492 [Transpose]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3:0
[04/18/2022-02:33:56] [V] [TRT] Transpose__5492 [Transpose] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Registering layer: Transpose__5492 for ONNX node: Transpose__5492
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: Transpose__5492:0 for ONNX tensor: Transpose__5492:0
[04/18/2022-02:33:56] [V] [TRT] Transpose__5492 [Transpose] outputs: [Transpose__5492:0 -> (-1, 8, 8, 64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/input_2_up_lvl_6/downsample_max_x2/MaxPool [MaxPool]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/input_2_up_lvl_6/downsample_max_x2/MaxPool [MaxPool] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/input_2_up_lvl_6/downsample_max_x2/MaxPool for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/input_2_up_lvl_6/downsample_max_x2/MaxPool
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/input_2_up_lvl_6/downsample_max_x2/MaxPool:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/input_2_up_lvl_6/downsample_max_x2/MaxPool:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/input_2_up_lvl_6/downsample_max_x2/MaxPool [MaxPool] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/input_2_up_lvl_6/downsample_max_x2/MaxPool:0 -> (-1, 64, 4, 4)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/stack_Unsqueeze__1145 [Unsqueeze]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: Transpose__5492:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/stack_Unsqueeze__1145 [Unsqueeze] inputs: [Transpose__5492:0 -> (-1, 8, 8, 64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 8, 8, 64), unsqueezing to: (_, _, _, _, _)
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/stack_Unsqueeze__1145 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/stack_Unsqueeze__1145
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,4,4,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE].
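(Editor's note, not part of the trtexec output: the repeated `broadcasting input1 to make tensors conform` lines are informational, not errors. Each BiFPN `combine/MatMul` multiplies a stack of feature maps, last dimension 2 or 3, by a normalized fusion-weight vector of shape (2, 1) or (3, 1); TensorRT pads that weight to rank 5, e.g. (1, 1, 1, 3, 1), so the shapes conform under NumPy-style MatMul broadcasting. A minimal NumPy sketch with illustrative values, using the shapes reported in the log:)

```python
import numpy as np

# Stack of three feature maps, as produced by stack_Concat__1120:
# shape (N, H, W, C, 3); batch size 1 stands in for the dynamic -1.
stacked = np.ones((1, 8, 8, 64, 3), dtype=np.float32)

# Normalized fusion weights, as produced by combine/truediv: shape (3, 1).
weights = np.full((3, 1), 1.0 / 3.0, dtype=np.float32)

# ONNX MatMul broadcasts weights to (1, 1, 1, 3, 1) and contracts the last
# axis: (..., 64, 3) @ (3, 1) -> (..., 64, 1); Squeeze then drops the final 1.
fused = stacked @ weights
print(fused.shape)  # (1, 8, 8, 64, 1)
```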
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/stack_Unsqueeze__1145:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/stack_Unsqueeze__1145:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/stack_Unsqueeze__1145 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/stack_Unsqueeze__1145:0 -> (-1, 8, 8, 64, 1)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/input_2_up_lvl_6/downsample_max_x2/MaxPool__1128 [Transpose]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/input_2_up_lvl_6/downsample_max_x2/MaxPool:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/input_2_up_lvl_6/downsample_max_x2/MaxPool__1128 [Transpose] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/input_2_up_lvl_6/downsample_max_x2/MaxPool:0 -> (-1, 64, 4, 4)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/input_2_up_lvl_6/downsample_max_x2/MaxPool__1128 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/input_2_up_lvl_6/downsample_max_x2/MaxPool__1128
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/input_2_up_lvl_6/downsample_max_x2/MaxPool__1128:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/input_2_up_lvl_6/downsample_max_x2/MaxPool__1128:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/input_2_up_lvl_6/downsample_max_x2/MaxPool__1128 [Transpose] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/input_2_up_lvl_6/downsample_max_x2/MaxPool__1128:0 -> (-1, 4, 4, 64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/stack_Unsqueeze__1130 [Unsqueeze]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/input_2_up_lvl_6/downsample_max_x2/MaxPool__1128:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/stack_Unsqueeze__1130 [Unsqueeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/input_2_up_lvl_6/downsample_max_x2/MaxPool__1128:0 -> (-1, 4, 4, 64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 4, 4, 64), unsqueezing to: (_, _, _, _, _)
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/stack_Unsqueeze__1130 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/stack_Unsqueeze__1130
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,4,4,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE].
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/stack_Unsqueeze__1130:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/stack_Unsqueeze__1130:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/stack_Unsqueeze__1130 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/stack_Unsqueeze__1130:0 -> (-1, 4, 4, 64, 1)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/stack_Concat__1131 [Concat]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/stack_Unsqueeze__1129:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/stack_Unsqueeze__1130:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/stack_Concat__1131 [Concat] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/stack_Unsqueeze__1129:0 -> (-1, 4, 4, 64, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/stack_Unsqueeze__1130:0 -> (-1, 4, 4, 64, 1)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/stack_Concat__1131 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/stack_Concat__1131
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/stack_Concat__1131:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/stack_Concat__1131:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/stack_Concat__1131 [Concat] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/stack_Concat__1131:0 -> (-1, 4, 4, 64, 2)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/MatMul [MatMul]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/stack_Concat__1131:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/truediv:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/MatMul [MatMul] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/stack_Concat__1131:0 -> (-1, 4, 4, 64, 2)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/truediv:0 -> (2, 1)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/truediv:0 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/truediv:0
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/MatMul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/MatMul
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,4,4,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/MatMul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/MatMul:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/MatMul [MatMul] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/MatMul:0 -> (-1, 4, 4, 64, 1)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/Squeeze [Squeeze]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/MatMul:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/Squeeze [Squeeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/MatMul:0 -> (-1, 4, 4, 64, 1)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 4, 4, 64, 1), squeezing to: (_, _, _, _)
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/Squeeze for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/Squeeze
[04/18/2022-02:33:56] [I] [TRT]
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,4,4,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. 
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,4,4,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. 
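The `stack_Unsqueeze -> stack_Concat -> MatMul -> Squeeze` pattern parsed above is how the exporter lowers EfficientDet's weighted BiFPN feature fusion: the candidate feature maps are stacked into a trailing axis and multiplied by the normalized fusion weights (`truediv:0`, shape (2, 1)), which TensorRT broadcasts to [1,1,1,2,1] as the INFO messages report. A minimal NumPy sketch of the equivalent computation (the weight values here are illustrative, not taken from the model):

```python
import numpy as np

# Two candidate feature maps at level 7, NHWC, matching the log: (-1, 4, 4, 64)
a = np.random.rand(1, 4, 4, 64).astype(np.float32)
b = np.random.rand(1, 4, 4, 64).astype(np.float32)

# stack_Unsqueeze + stack_Concat: add a trailing axis and join -> (1, 4, 4, 64, 2)
stacked = np.stack([a, b], axis=-1)

# Normalized fusion weights playing the role of truediv:0 -> (2, 1).
# Hypothetical values; in the model they come from learned, normalized weights.
w = np.array([[0.6], [0.4]], dtype=np.float32)

# MatMul broadcasts w against the leading dims, exactly as TRT reports:
# (1, 4, 4, 64, 2) @ (2, 1) -> (1, 4, 4, 64, 1)
fused = stacked @ w

# Squeeze the trailing singleton axis -> (1, 4, 4, 64)
fused = np.squeeze(fused, axis=-1)
```

With these weights the result is simply the elementwise blend `0.6 * a + 0.4 * b`.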
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/Squeeze:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/Squeeze:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/Squeeze [Squeeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/Squeeze:0 -> (-1, 4, 4, 64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/activation/Sigmoid [Sigmoid]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/Squeeze:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/Squeeze:0 -> (-1, 4, 4, 64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/activation/Sigmoid
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/activation/Sigmoid:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/activation/Sigmoid:0 -> (-1, 4, 4, 64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/activation/mul [Mul]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/Squeeze:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/activation/Sigmoid:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/Squeeze:0 -> (-1, 4, 4, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/activation/Sigmoid:0 -> (-1, 4, 4, 64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/activation/mul
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/activation/mul:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/activation/mul:0 -> (-1, 4, 4, 64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise__1132 [Transpose]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/activation/mul:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise__1132 [Transpose] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/activation/mul:0 -> (-1, 4, 4, 64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise__1132 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise__1132
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise__1132:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise__1132:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise__1132 [Transpose] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise__1132:0 -> (-1, 64, 4, 4)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise [Conv]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise__1132:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise__1132:0 -> (-1, 64, 4, 4)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 64, 4, 4)
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise
[04/18/2022-02:33:56] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64
[04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 64, 4, 4)
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise:0 -> (-1, 64, 4, 4)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/separable_conv/BiasAdd [Conv]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/separable_conv/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/separable_conv/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise:0 -> (-1, 64, 4, 4)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 -> (64, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 -> (64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 64, 4, 4)
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/separable_conv/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/separable_conv/BiasAdd
[04/18/2022-02:33:56] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64
[04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 64, 4, 4)
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/separable_conv/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/separable_conv/BiasAdd:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/separable_conv/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/separable_conv/BiasAdd:0 -> (-1, 64, 4, 4)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/separable_conv/BiasAdd:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/batchnorm/ReadVariableOp:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/batchnorm/ReadVariableOp_1:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/separable_conv/BiasAdd:0 -> (-1, 64, 4, 4)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/batchnorm/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/batchnorm/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/batchnorm/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/batchnorm/FusedBatchNormV3
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/batchnorm/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/batchnorm/FusedBatchNormV3:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 4, 4)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: Transpose__6305 [Transpose]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/batchnorm/FusedBatchNormV3:0
[04/18/2022-02:33:56] [V] [TRT] Transpose__6305 [Transpose] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 4, 4)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Registering layer: Transpose__6305 for ONNX node: Transpose__6305
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: Transpose__6305:0 for ONNX tensor: Transpose__6305:0
[04/18/2022-02:33:56] [V] [TRT] Transpose__6305 [Transpose] outputs: [Transpose__6305:0 -> (-1, 4, 4, 64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/stack_Unsqueeze__1424 [Unsqueeze]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: Transpose__6305:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/stack_Unsqueeze__1424 [Unsqueeze] inputs: [Transpose__6305:0 -> (-1, 4, 4, 64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 4, 4, 64), unsqueezing to: (_, _, _, _, _)
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/stack_Unsqueeze__1424 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/stack_Unsqueeze__1424
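The `post_combine` block parsed above is EfficientDet's standard per-node head: a swish activation (the Sigmoid + Mul pair on the same tensor) followed by a depthwise-separable convolution, visible in the log as a 3x3 depthwise Conv with kernel (64, 1, 3, 3) and a 1x1 pointwise Conv with kernel (64, 64, 1, 1) plus a (64,) bias. A naive NumPy reference for those two steps under the same (1, 1) padding, as a sketch only (not TensorRT's fused implementation):

```python
import numpy as np

def swish(x):
    # The Sigmoid + Mul pair in the log: x * sigmoid(x), i.e. SiLU.
    return x / (1.0 + np.exp(-x))

def depthwise_separable(x, dw, pw, bias):
    # x: (N, C, H, W); dw: (C, 1, 3, 3), one 3x3 filter per channel;
    # pw: (C_out, C, 1, 1) pointwise channel mixing; bias: (C_out,)
    n, c, h, w = x.shape
    xp = np.pad(x, ((0, 0), (0, 0), (1, 1), (1, 1)))  # prepadding/postpadding (1, 1)
    dw_out = np.empty_like(x)
    for i in range(h):
        for j in range(w):
            patch = xp[:, :, i:i + 3, j:j + 3]             # (N, C, 3, 3)
            dw_out[:, :, i, j] = (patch * dw[:, 0]).sum(axis=(2, 3))
    # The BiasAdd [Conv] step: a 1x1 conv is a per-pixel channel matmul.
    pw_out = np.einsum('nchw,oc->nohw', dw_out, pw[:, :, 0, 0])
    return pw_out + bias[None, :, None, None]
```

A quick sanity check: with a delta depthwise kernel (center tap = 1) and an identity pointwise kernel, the block reduces to `x + bias`.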
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/stack_Unsqueeze__1424:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/stack_Unsqueeze__1424:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/stack_Unsqueeze__1424 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/stack_Unsqueeze__1424:0 -> (-1, 4, 4, 64, 1)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1139 [Unsqueeze]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: Transpose__6305:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1139 [Unsqueeze] inputs: [Transpose__6305:0 -> (-1, 4, 4, 64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 4, 4, 64), unsqueezing to: (_, _, _, _, _)
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1139 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1139
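The `node_18` `nearest_neighbor_upsampling_x2` subgraph being parsed here (the `stack_Unsqueeze`/`stack_Concat` pairs, including the 6-D `stack_1_Unsqueeze` that follows) is the exporter's nearest-neighbor 2x upsampling: each spatial position is duplicated along freshly inserted axes and the result is reshaped to twice the height and width. A NumPy sketch of the same trick on an NHWC tensor (the exact axis of the second unsqueeze and the final reshape happen in later nodes of the graph, so this is an assumed but standard reconstruction):

```python
import numpy as np

def nn_upsample_x2(x):
    # x: (N, H, W, C), e.g. (-1, 4, 4, 64) in the log.
    n, h, w, c = x.shape
    y = x[:, :, :, None, :]               # Unsqueeze -> (N, H, W, 1, C)
    y = np.concatenate([y, y], axis=3)    # Concat    -> (N, H, W, 2, C)
    y = y[:, :, None, :, :, :]            # Unsqueeze -> (N, H, 1, W, 2, C)
    y = np.concatenate([y, y], axis=2)    #           -> (N, H, 2, W, 2, C)
    return y.reshape(n, h * 2, w * 2, c)  # -> (N, 2H, 2W, C)
```

Each input pixel ends up replicated in a 2x2 block of the output, which is exactly nearest-neighbor interpolation at integer scale 2.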
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1139:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1139:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1139 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1139:0 -> (-1, 4, 4, 1, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1140 [Concat] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1139:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1139:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1140 [Concat] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1139:0 -> (-1, 4, 4, 1, 64)[FLOAT]], 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1139:0 -> (-1, 4, 4, 1, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1140 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1140 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1140:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1140:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1140 [Concat] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1140:0 -> (-1, 4, 4, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1142 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1140:0 [04/18/2022-02:33:56] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1142 [Unsqueeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1140:0 -> (-1, 4, 4, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 4, 4, 2, 64), unsqueezing to: (_, _, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1142 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1142 
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1142:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1142:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1142 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1142:0 -> (-1, 4, 1, 4, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1143 [Concat] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1142:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1142:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1143 [Concat] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1142:0 -> (-1, 4, 1, 4, 2, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1142:0 -> (-1, 4, 1, 4, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1143 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1143 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1143:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1143:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1143 [Concat] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1143:0 -> (-1, 4, 2, 4, 2, 64)[FLOAT]], [04/18/2022-02:33:56] 
[V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape [Reshape] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1143:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: const_fold_opt__5829 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape [Reshape] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1143:0 -> (-1, 4, 2, 4, 2, 64)[FLOAT]], [const_fold_opt__5829 -> (4)[INT32]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape [Reshape] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 -> 
(-1, 8, 8, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/stack_Unsqueeze__1146 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/stack_Unsqueeze__1146 [Unsqueeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 -> (-1, 8, 8, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 8, 8, 64), unsqueezing to: (_, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/stack_Unsqueeze__1146 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/stack_Unsqueeze__1146 
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/stack_Unsqueeze__1146:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/stack_Unsqueeze__1146:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/stack_Unsqueeze__1146 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/stack_Unsqueeze__1146:0 -> (-1, 8, 8, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/stack_Concat__1147 [Concat] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/stack_Unsqueeze__1145:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/stack_Unsqueeze__1146:0 [04/18/2022-02:33:56] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/stack_Concat__1147 [Concat] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/stack_Unsqueeze__1145:0 -> (-1, 8, 8, 64, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/stack_Unsqueeze__1146:0 -> (-1, 8, 8, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/stack_Concat__1147 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/stack_Concat__1147 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/stack_Concat__1147:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/stack_Concat__1147:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/stack_Concat__1147 [Concat] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/stack_Concat__1147:0 -> (-1, 8, 8, 64, 2)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/MatMul [MatMul] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/stack_Concat__1147:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/truediv:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/MatMul [MatMul] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/stack_Concat__1147:0 -> (-1, 8, 8, 64, 2)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/truediv:0 -> (2, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/truediv:0 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/truediv:0 [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/MatMul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/MatMul [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/MatMul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/MatMul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/MatMul [MatMul] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/MatMul:0 -> (-1, 8, 8, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/Squeeze [Squeeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/MatMul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/Squeeze [Squeeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/MatMul:0 -> (-1, 8, 8, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 8, 8, 64, 1), squeezing to: (_, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/Squeeze for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/Squeeze 
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/Squeeze:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/Squeeze [Squeeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/Squeeze:0 -> (-1, 8, 8, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/activation/Sigmoid [Sigmoid] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/Squeeze:0 -> (-1, 8, 8, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/activation/Sigmoid [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/activation/Sigmoid:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/activation/Sigmoid:0 -> (-1, 8, 8, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/activation/mul [Mul] [04/18/2022-02:33:56] [V] 
[TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/activation/Sigmoid:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/Squeeze:0 -> (-1, 8, 8, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/activation/Sigmoid:0 -> (-1, 8, 8, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/activation/mul [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/activation/mul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/activation/mul:0 -> (-1, 8, 8, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__1148 [Transpose] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/activation/mul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__1148 [Transpose] inputs: 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/activation/mul:0 -> (-1, 8, 8, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__1148 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__1148 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__1148:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__1148:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__1148 [Transpose] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__1148:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise [Conv] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__1148:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__1148:0 -> (-1, 64, 8, 8)[FLOAT]], 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise [04/18/2022-02:33:56] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 64, 8, 8) [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/separable_conv/BiasAdd [Conv] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/separable_conv/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 -> (64, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 -> (64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/separable_conv/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/separable_conv/BiasAdd [04/18/2022-02:33:56] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 64, 8, 8) [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/separable_conv/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/separable_conv/BiasAdd:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/separable_conv/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/separable_conv/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/batchnorm/FusedBatchNormV3 
[BatchNormalization] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/separable_conv/BiasAdd:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/batchnorm/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/batchnorm/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/separable_conv/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/batchnorm/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/batchnorm/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/batchnorm/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/batchnorm/FusedBatchNormV3 [04/18/2022-02:33:56] [V] [TRT] Registering 
tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/batchnorm/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: Transpose__6297 [Transpose] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] Transpose__6297 [Transpose] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: Transpose__6297 for ONNX node: Transpose__6297 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: Transpose__6297:0 for ONNX tensor: Transpose__6297:0 [04/18/2022-02:33:56] [V] [TRT] Transpose__6297 [Transpose] outputs: [Transpose__6297:0 -> (-1, 8, 8, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/stack_Unsqueeze__1367 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: Transpose__6297:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/stack_Unsqueeze__1367 [Unsqueeze] inputs: [Transpose__6297:0 -> (-1, 8, 8, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 8, 8, 64), unsqueezing to: (_, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/stack_Unsqueeze__1367 for ONNX node: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/stack_Unsqueeze__1367 [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,4,4,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. 
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,4,4,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. 
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/stack_Unsqueeze__1367:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/stack_Unsqueeze__1367:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/stack_Unsqueeze__1367 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/stack_Unsqueeze__1367:0 -> (-1, 8, 8, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1155 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: Transpose__6297:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1155 [Unsqueeze] inputs: [Transpose__6297:0 -> (-1, 8, 8, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 8, 8, 64), unsqueezing to: (_, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1155 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1155
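Editor's note: the repeated `[I] ... MatMul: broadcasting input1 to make tensors conform` messages above describe the BiFPN weighted feature fusion: stacked inputs of shape `[-1,H,W,64,k]` are multiplied by a per-input weight vector that TensorRT broadcasts from `[k,1]` up to `[1,1,1,k,1]`. A minimal NumPy sketch of that shape arithmetic (array names and random contents are illustrative, not taken from the model):

```python
import numpy as np

# Stacked BiFPN inputs: batch x H x W x C x num_inputs, e.g. (-1, 8, 8, 64, 2).
stacked = np.random.rand(1, 8, 8, 64, 2).astype(np.float32)

# Per-input fusion weights of shape (2, 1); reshaped/broadcast to
# (1, 1, 1, 2, 1) so the batched MatMul conforms, as the log reports.
weights = np.random.rand(2, 1).astype(np.float32)
w = weights.reshape(1, 1, 1, 2, 1)

# (1, 8, 8, 64, 2) @ (1, 1, 1, 2, 1) -> (1, 8, 8, 64, 1): a weighted sum
# over the two input feature maps at every spatial location and channel.
fused = stacked @ w
assert fused.shape == (1, 8, 8, 64, 1)

# The MatMul is equivalent to an explicit weighted sum over the last axis.
explicit = (stacked * weights[:, 0]).sum(axis=-1, keepdims=True)
assert np.allclose(fused, explicit)
```

The `k` in the log alternates between 2 and 3 depending on how many feature maps each BiFPN node combines, which is why the broadcast shapes differ per node.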
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1155:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1155:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1155 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1155:0 -> (-1, 8, 8, 1, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1156 [Concat] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1155:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1155:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1156 [Concat] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1155:0 -> (-1, 8, 8, 1, 64)[FLOAT]], 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1155:0 -> (-1, 8, 8, 1, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1156 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1156 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1156:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1156:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1156 [Concat] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1156:0 -> (-1, 8, 8, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1158 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1156:0 [04/18/2022-02:33:56] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1158 [Unsqueeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1156:0 -> (-1, 8, 8, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 8, 8, 2, 64), unsqueezing to: (_, _, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1158 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1158 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1158:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1158:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1158 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1158:0 -> (-1, 8, 1, 8, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1159 [Concat] [04/18/2022-02:33:56] [V] [TRT] Searching for input: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1158:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1158:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1159 [Concat] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1158:0 -> (-1, 8, 1, 8, 2, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1158:0 -> (-1, 8, 1, 8, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1159 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1159 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1159:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1159:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1159 [Concat] outputs: 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1159:0 -> (-1, 8, 2, 8, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape [Reshape] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1159:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: const_fold_opt__6102 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape [Reshape] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1159:0 -> (-1, 8, 2, 8, 2, 64)[FLOAT]], [const_fold_opt__6102 -> (4)[INT32]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 [04/18/2022-02:33:56] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape [Reshape] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/stack_Unsqueeze__1162 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/stack_Unsqueeze__1162 [Unsqueeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 16, 16, 64), unsqueezing to: (_, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/stack_Unsqueeze__1162 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/stack_Unsqueeze__1162 [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. 
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,4,4,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. 
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,4,4,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. 
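The "broadcasting input1 to make tensors conform" messages above describe standard batched-matmul broadcasting: the small (1, 1, 1, 2, 1) weight tensor is stretched across the leading batch dimensions of the stacked feature tensor. A minimal NumPy sketch of the same shape arithmetic (shapes taken from the log; the data is random and purely illustrative):

```python
import numpy as np

# Mimic the broadcast described in the log messages:
# dims(input0) = [-1, 16, 16, 64, 2], dims(input1) = [1, 1, 1, 2, 1].
features = np.random.rand(8, 16, 16, 64, 2).astype(np.float32)  # input0
weights = np.random.rand(1, 1, 1, 2, 1).astype(np.float32)      # input1

# np.matmul broadcasts the leading (batch) dims, then multiplies the
# trailing (64, 2) @ (2, 1) matrices, giving (8, 16, 16, 64, 1).
out = np.matmul(features, weights)
print(out.shape)  # (8, 16, 16, 64, 1)
```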
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/stack_Unsqueeze__1162:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/stack_Unsqueeze__1162:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/stack_Unsqueeze__1162 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/stack_Unsqueeze__1162:0 -> (-1, 16, 16, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/stack_Concat__1163 [Concat] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/stack_Unsqueeze__1161:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/stack_Unsqueeze__1162:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/stack_Concat__1163 [Concat] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/stack_Unsqueeze__1161:0 -> (-1, 16, 16, 64, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/stack_Unsqueeze__1162:0 -> (-1, 16, 16, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/stack_Concat__1163 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/stack_Concat__1163 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/stack_Concat__1163:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/stack_Concat__1163:0 [04/18/2022-02:33:56] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/stack_Concat__1163 [Concat] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/stack_Concat__1163:0 -> (-1, 16, 16, 64, 2)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/MatMul [MatMul] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/stack_Concat__1163:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/truediv:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/MatMul [MatMul] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/stack_Concat__1163:0 -> (-1, 16, 16, 64, 2)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/truediv:0 -> (2, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/truediv:0 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/truediv:0 [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/MatMul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/MatMul [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. 
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/MatMul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/MatMul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/MatMul [MatMul] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/MatMul:0 -> (-1, 16, 16, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/Squeeze [Squeeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/MatMul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/Squeeze [Squeeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/MatMul:0 -> (-1, 16, 16, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 16, 16, 64, 1), squeezing to: (_, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/Squeeze for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/Squeeze
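The combine chain just parsed (stack_Unsqueeze → stack_Concat → MatMul with the truediv:0 weight tensor → Squeeze) is how the exporter expresses BiFPN's weighted feature fusion. A hedged NumPy sketch with made-up weights (shapes follow the log; the 0.6/0.4 split is an assumption standing in for the real pre-normalized truediv:0 values):

```python
import numpy as np

# Two fused BiFPN inputs at level 5, batch 8 (shapes from the log).
a = np.random.rand(8, 16, 16, 64).astype(np.float32)
b = np.random.rand(8, 16, 16, 64).astype(np.float32)

# stack_Unsqueeze adds a trailing axis; stack_Concat stacks the inputs.
stacked = np.concatenate([a[..., None], b[..., None]], axis=-1)  # (8,16,16,64,2)

# truediv:0 -> (2, 1): fusion weights already normalized to sum to 1
# (values here are illustrative placeholders).
w = np.array([[0.6], [0.4]], dtype=np.float32)

# MatMul broadcasts w, Squeeze drops the trailing axis again.
fused = np.matmul(stacked, w).squeeze(-1)  # (8, 16, 16, 64)
# Equivalent to the weighted sum 0.6 * a + 0.4 * b.
```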
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/Squeeze:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/Squeeze [Squeeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/Squeeze:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/activation/Sigmoid [Sigmoid] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/Squeeze:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/activation/Sigmoid [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/activation/Sigmoid:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/activation/Sigmoid:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/activation/mul [Mul] [04/18/2022-02:33:56] 
[V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/activation/Sigmoid:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/Squeeze:0 -> (-1, 16, 16, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/activation/Sigmoid:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/activation/mul [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/activation/mul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/activation/mul:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__1164 [Transpose] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/activation/mul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__1164 [Transpose] inputs: 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/activation/mul:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__1164 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__1164 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__1164:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__1164:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__1164 [Transpose] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__1164:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise [Conv] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__1164:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__1164:0 -> (-1, 64, 16, 16)[FLOAT]], 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 64, 16, 16) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise [04/18/2022-02:33:56] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 64, 16, 16) [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/separable_conv/BiasAdd [Conv] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/separable_conv/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise:0 -> (-1, 64, 16, 16)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 -> (64, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 -> (64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 64, 16, 16) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/separable_conv/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/separable_conv/BiasAdd [04/18/2022-02:33:56] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 64, 16, 16) [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/separable_conv/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/separable_conv/BiasAdd:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/separable_conv/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/separable_conv/BiasAdd:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/batchnorm/FusedBatchNormV3 
[BatchNormalization] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/separable_conv/BiasAdd:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/batchnorm/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/batchnorm/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/separable_conv/BiasAdd:0 -> (-1, 64, 16, 16)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/batchnorm/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/batchnorm/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/batchnorm/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/batchnorm/FusedBatchNormV3 [04/18/2022-02:33:56] [V] [TRT] Registering 
tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/batchnorm/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: Transpose__6309 [Transpose] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] Transpose__6309 [Transpose] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: Transpose__6309 for ONNX node: Transpose__6309 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: Transpose__6309:0 for ONNX tensor: Transpose__6309:0 [04/18/2022-02:33:56] [V] [TRT] Transpose__6309 [Transpose] outputs: [Transpose__6309:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/stack_Unsqueeze__1309 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: Transpose__6309:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/stack_Unsqueeze__1309 [Unsqueeze] inputs: [Transpose__6309:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 16, 16, 64), unsqueezing to: (_, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/stack_Unsqueeze__1309 for ONNX node: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/stack_Unsqueeze__1309
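The post_combine block parsed earlier (Sigmoid → Mul, a [Conv] with a (64, 1, 3, 3) depthwise kernel, then a second [Conv] named BiasAdd with a (64, 64, 1, 1) kernel and a (64,) bias) is Swish activation followed by a depthwise-separable convolution. A minimal NumPy sketch of the same computation, not TensorRT code; shapes follow the log, data and helper names are made up:

```python
import numpy as np

def swish(x):
    # The Sigmoid -> Mul pair on the same tensor: x * sigmoid(x)
    return x / (1.0 + np.exp(-x))

def depthwise_conv3x3(x, w):
    # x: (N, C, H, W), w: (C, 1, 3, 3); prepadding/postpadding (1, 1)
    # keeps H and W, matching "Convolution output dimensions" in the log.
    n, c, h, wd = x.shape
    xp = np.pad(x, ((0, 0), (0, 0), (1, 1), (1, 1)))
    out = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            out += xp[:, :, i:i + h, j:j + wd] * w[:, 0, i, j][None, :, None, None]
    return out

def pointwise_conv1x1(x, w, b):
    # w: (Cout, Cin, 1, 1), b: (Cout,) -- the BiasAdd [Conv] step.
    return np.einsum('nchw,oc->nohw', x, w[:, :, 0, 0]) + b[None, :, None, None]
```

A depthwise pass followed by a pointwise pass reproduces one separable_conv layer at a fraction of a dense 3x3 convolution's multiply count.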
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/stack_Unsqueeze__1309:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/stack_Unsqueeze__1309:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/stack_Unsqueeze__1309 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/stack_Unsqueeze__1309:0 -> (-1, 16, 16, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1171 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: Transpose__6309:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1171 [Unsqueeze] inputs: [Transpose__6309:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 16, 16, 64), unsqueezing to: (_, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1171 for ONNX node:
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1171
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1171:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1171:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1171 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1171:0 -> (-1, 16, 16, 1, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1172 [Concat] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1171:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1171:0 [04/18/2022-02:33:56] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1172 [Concat] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1171:0 -> (-1, 16, 16, 1, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1171:0 -> (-1, 16, 16, 1, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1172 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1172 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1172:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1172:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1172 [Concat] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1172:0 -> (-1, 16, 16, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1174 [Unsqueeze] 
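The Unsqueeze/Concat pairs and the final Reshape being parsed here are how the exporter expresses 2x nearest-neighbor upsampling in ONNX: duplicate each pixel along a new axis after W, then along a new axis after H, then collapse back to NHWC. A numpy sketch with the shapes from this trace (batch fixed to 1 for illustration):

```python
import numpy as np

x = np.random.rand(1, 16, 16, 64).astype(np.float32)  # NHWC level-5 feature map

# stack_Unsqueeze__1171 / stack_Concat__1172: duplicate along a new axis after W
a = np.expand_dims(x, axis=3)            # (1, 16, 16, 1, 64)
a = np.concatenate([a, a], axis=3)       # (1, 16, 16, 2, 64)

# stack_1_Unsqueeze__1174 / stack_1_Concat__1175: duplicate along a new axis after H
b = np.expand_dims(a, axis=2)            # (1, 16, 1, 16, 2, 64)
b = np.concatenate([b, b], axis=2)       # (1, 16, 2, 16, 2, 64)

# Reshape: interleave the copies back into H and W
y = b.reshape(1, 32, 32, 64)             # 2x nearest-neighbor upsampled

# Equivalent to plain nearest-neighbor repetition along H and W
assert np.array_equal(y, np.repeat(np.repeat(x, 2, axis=1), 2, axis=2))
```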
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1172:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1174 [Unsqueeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1172:0 -> (-1, 16, 16, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 16, 16, 2, 64), unsqueezing to: (_, _, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1174 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1174 
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1174:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1174:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1174 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1174:0 -> (-1, 16, 1, 16, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1175 [Concat] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1174:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1174:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1175 [Concat] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1174:0 -> (-1, 16, 1, 16, 2, 64)[FLOAT]], 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1174:0 -> (-1, 16, 1, 16, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1175 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1175 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1175:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1175:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1175 [Concat] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1175:0 -> (-1, 16, 2, 16, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape [Reshape] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1175:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: const_fold_opt__5788 [04/18/2022-02:33:56] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape [Reshape] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1175:0 -> (-1, 16, 2, 16, 2, 64)[FLOAT]], [const_fold_opt__5788 -> (4)[INT32]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape [Reshape] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/stack_Unsqueeze__1178 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 [04/18/2022-02:33:56] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/stack_Unsqueeze__1178 [Unsqueeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 32, 32, 64), unsqueezing to: (_, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/stack_Unsqueeze__1178 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/stack_Unsqueeze__1178 
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/stack_Unsqueeze__1178:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/stack_Unsqueeze__1178:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/stack_Unsqueeze__1178 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/stack_Unsqueeze__1178:0 -> (-1, 32, 32, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/stack_Concat__1179 [Concat] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/stack_Unsqueeze__1177:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/stack_Unsqueeze__1178:0 [04/18/2022-02:33:56] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/stack_Concat__1179 [Concat] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/stack_Unsqueeze__1177:0 -> (-1, 32, 32, 64, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/stack_Unsqueeze__1178:0 -> (-1, 32, 32, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/stack_Concat__1179 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/stack_Concat__1179 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/stack_Concat__1179:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/stack_Concat__1179:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/stack_Concat__1179 [Concat] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/stack_Concat__1179:0 -> (-1, 32, 32, 64, 2)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/MatMul [MatMul] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/stack_Concat__1179:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/truediv:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/MatMul [MatMul] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/stack_Concat__1179:0 -> (-1, 32, 32, 64, 2)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/truediv:0 -> (2, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/truediv:0 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/truediv:0 [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/MatMul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/MatMul [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/MatMul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/MatMul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/MatMul [MatMul] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/MatMul:0 -> (-1, 32, 32, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/Squeeze [Squeeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/MatMul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/Squeeze [Squeeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/MatMul:0 -> (-1, 32, 32, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 32, 32, 64, 1), squeezing to: (_, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/Squeeze for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/Squeeze 
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/Squeeze:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/Squeeze [Squeeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/Squeeze:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/activation/Sigmoid [Sigmoid] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/Squeeze:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/activation/Sigmoid [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/activation/Sigmoid:0 for ONNX tensor: 
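The stack_Concat__1179 -> MatMul -> Squeeze chain traced above is BiFPN weighted feature fusion for node_20: the two (N, 32, 32, 64) inputs are stacked on a trailing axis, multiplied by the pre-normalized weight vector `truediv:0` of shape (2, 1), and the singleton result axis is squeezed away. A numpy sketch under those shapes (the weight values and input tensors here are illustrative stand-ins, not the trained ones):

```python
import numpy as np

p4_in = np.random.rand(1, 32, 32, 64).astype(np.float32)   # level-4 input
p5_up = np.random.rand(1, 32, 32, 64).astype(np.float32)   # upsampled level-5 input

# Fast normalized fusion weights; the graph's truediv:0 constant is the
# already-normalized w / sum(w). The 0.6 / 0.4 split is made up.
w = np.array([0.6, 0.4], dtype=np.float32)
weights = (w / w.sum()).reshape(2, 1)                      # (2, 1)

stacked = np.stack([p4_in, p5_up], axis=-1)                # (1, 32, 32, 64, 2)
fused = np.matmul(stacked, weights)                        # (1, 32, 32, 64, 1)
fused = np.squeeze(fused, axis=-1)                         # (1, 32, 32, 64)

# The whole chain is just a per-element weighted sum of the two inputs
assert np.allclose(fused, 0.6 * p4_in + 0.4 * p5_up, atol=1e-5)
```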
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/activation/Sigmoid:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/activation/Sigmoid:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/activation/mul [Mul] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/activation/Sigmoid:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/Squeeze:0 -> (-1, 32, 32, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/activation/Sigmoid:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/activation/mul [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/activation/mul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/activation/mul:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] 
Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__1180 [Transpose] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/activation/mul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__1180 [Transpose] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/activation/mul:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__1180 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__1180 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__1180:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__1180:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__1180 [Transpose] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__1180:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise [Conv] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__1180:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__1180:0 -> (-1, 64, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 64, 32, 32) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise [04/18/2022-02:33:56] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 64, 32, 32) [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/separable_conv/BiasAdd 
[Conv] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/separable_conv/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise:0 -> (-1, 64, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 -> (64, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 -> (64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 64, 32, 32) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/separable_conv/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/separable_conv/BiasAdd [04/18/2022-02:33:56] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 64, 32, 32) [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/separable_conv/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/separable_conv/BiasAdd:0 [04/18/2022-02:33:56] 
[V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/separable_conv/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/separable_conv/BiasAdd:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/separable_conv/BiasAdd:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/batchnorm/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/batchnorm/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/separable_conv/BiasAdd:0 -> (-1, 64, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/batchnorm/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/batchnorm/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/batchnorm/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/batchnorm/FusedBatchNormV3 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/batchnorm/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: Transpose__6295 [Transpose] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] Transpose__6295 [Transpose] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: Transpose__6295 for ONNX node: Transpose__6295 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: Transpose__6295:0 for ONNX tensor: Transpose__6295:0 [04/18/2022-02:33:56] [V] [TRT] Transpose__6295 [Transpose] outputs: [Transpose__6295:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/stack_Unsqueeze__1251 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: Transpose__6295:0 
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/stack_Unsqueeze__1251 [Unsqueeze] inputs: [Transpose__6295:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 32, 32, 64), unsqueezing to: (_, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/stack_Unsqueeze__1251 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/stack_Unsqueeze__1251 [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. 
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,4,4,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. 
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,4,4,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. 
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/stack_Unsqueeze__1251:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/stack_Unsqueeze__1251:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/stack_Unsqueeze__1251 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/stack_Unsqueeze__1251:0 -> (-1, 32, 32, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1186 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: Transpose__6295:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1186 [Unsqueeze] inputs: [Transpose__6295:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 32, 32, 64), unsqueezing to: (_, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1186 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1186 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1186:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1186:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1186 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1186:0 -> (-1, 32, 32, 1, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1188 [Concat] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1186:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1186:0 [04/18/2022-02:33:56] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1188 [Concat] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1186:0 -> (-1, 32, 32, 1, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1186:0 -> (-1, 32, 32, 1, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1188 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1188 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1188:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1188:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1188 [Concat] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1188:0 -> (-1, 32, 32, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1190 [Unsqueeze] 
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1188:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1190 [Unsqueeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1188:0 -> (-1, 32, 32, 2, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 32, 32, 2, 64), unsqueezing to: (_, _, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1190 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1190 
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1190:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1190:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1190 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1190:0 -> (-1, 32, 1, 32, 2, 64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1191 [Concat]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1190:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1190:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1191 [Concat] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1190:0 -> (-1, 32, 1, 32, 2, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1190:0 -> (-1, 32, 1, 32, 2, 64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1191 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1191
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1191:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1191:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1191 [Concat] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1191:0 -> (-1, 32, 2, 32, 2, 64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape [Reshape]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1191:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: const_fold_opt__5809
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape [Reshape] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1191:0 -> (-1, 32, 2, 32, 2, 64)[FLOAT]], [const_fold_opt__5809 -> (4)[INT32]],
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape [Reshape] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 -> (-1, 64, 64, 64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/stack_Unsqueeze__1194 [Unsqueeze]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/stack_Unsqueeze__1194 [Unsqueeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape:0 -> (-1, 64, 64, 64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 64, 64, 64), unsqueezing to: (_, _, _, _, _)
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/stack_Unsqueeze__1194 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/stack_Unsqueeze__1194
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,4,4,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,4,4,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/stack_Unsqueeze__1194:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/stack_Unsqueeze__1194:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/stack_Unsqueeze__1194 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/stack_Unsqueeze__1194:0 -> (-1, 64, 64, 64, 1)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/stack_Concat__1195 [Concat]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/stack_Unsqueeze__1193:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/stack_Unsqueeze__1194:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/stack_Concat__1195 [Concat] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/stack_Unsqueeze__1193:0 -> (-1, 64, 64, 64, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/stack_Unsqueeze__1194:0 -> (-1, 64, 64, 64, 1)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/stack_Concat__1195 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/stack_Concat__1195
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/stack_Concat__1195:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/stack_Concat__1195:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/stack_Concat__1195 [Concat] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/stack_Concat__1195:0 -> (-1, 64, 64, 64, 2)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/MatMul [MatMul]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/stack_Concat__1195:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/truediv:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/MatMul [MatMul] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/stack_Concat__1195:0 -> (-1, 64, 64, 64, 2)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/truediv:0 -> (2, 1)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/truediv:0 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/truediv:0
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/MatMul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/MatMul
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/MatMul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/MatMul:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/MatMul [MatMul] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/MatMul:0 -> (-1, 64, 64, 64, 1)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/Squeeze [Squeeze]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/MatMul:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/Squeeze [Squeeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/MatMul:0 -> (-1, 64, 64, 64, 1)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 64, 64, 64, 1), squeezing to: (_, _, _, _)
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/Squeeze for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/Squeeze
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,4,4,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,4,4,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/Squeeze:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/Squeeze:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/Squeeze [Squeeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/Squeeze:0 -> (-1, 64, 64, 64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/activation/Sigmoid [Sigmoid]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/Squeeze:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/Squeeze:0 -> (-1, 64, 64, 64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/activation/Sigmoid
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/activation/Sigmoid:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/activation/Sigmoid:0 -> (-1, 64, 64, 64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/activation/mul [Mul]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/Squeeze:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/activation/Sigmoid:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/Squeeze:0 -> (-1, 64, 64, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/activation/Sigmoid:0 -> (-1, 64, 64, 64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/activation/mul
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/activation/mul:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/activation/mul:0 -> (-1, 64, 64, 64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise__1196 [Transpose]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/activation/mul:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise__1196 [Transpose] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/activation/mul:0 -> (-1, 64, 64, 64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise__1196 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise__1196
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise__1196:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise__1196:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise__1196 [Transpose] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise__1196:0 -> (-1, 64, 64, 64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise [Conv]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise__1196:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise__1196:0 -> (-1, 64, 64, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 64, 64, 64)
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise
[04/18/2022-02:33:56] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64
[04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 64, 64, 64)
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise:0 -> (-1, 64, 64, 64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/separable_conv/BiasAdd [Conv]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/separable_conv/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/separable_conv/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/separable_conv/separable_conv2d/depthwise:0 -> (-1, 64, 64, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 -> (64, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 -> (64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 64, 64, 64)
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/separable_conv/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/separable_conv/BiasAdd
[04/18/2022-02:33:56] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64
[04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 64, 64, 64)
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/separable_conv/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/separable_conv/BiasAdd:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/separable_conv/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/separable_conv/BiasAdd:0 -> (-1, 64, 64, 64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/separable_conv/BiasAdd:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/batchnorm/ReadVariableOp:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/batchnorm/ReadVariableOp_1:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/separable_conv/BiasAdd:0 -> (-1, 64, 64, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/batchnorm/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/batchnorm/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 64, 64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d/depthwise [Conv]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_4/ReadVariableOp:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d/depthwise [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 64, 64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_4/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 64, 64, 64)
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d/depthwise for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d/depthwise
[04/18/2022-02:33:56] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64
[04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 64, 64, 64)
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d/depthwise:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d/depthwise:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d/depthwise [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d/depthwise:0 -> (-1, 64, 64, 64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d/depthwise [Conv]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_3/ReadVariableOp:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d/depthwise [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 64, 64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_3/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 64, 64, 64)
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d/depthwise for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d/depthwise
[04/18/2022-02:33:56] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64
[04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 64, 64, 64)
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d/depthwise:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d/depthwise:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d/depthwise [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d/depthwise:0 -> (-1, 64, 64, 64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/input_3_dn_lvl_3/downsample_max_x2/MaxPool [MaxPool]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/input_3_dn_lvl_3/downsample_max_x2/MaxPool [MaxPool] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 64, 64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/input_3_dn_lvl_3/downsample_max_x2/MaxPool for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/input_3_dn_lvl_3/downsample_max_x2/MaxPool
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/input_3_dn_lvl_3/downsample_max_x2/MaxPool:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/input_3_dn_lvl_3/downsample_max_x2/MaxPool:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/input_3_dn_lvl_3/downsample_max_x2/MaxPool [MaxPool] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/input_3_dn_lvl_3/downsample_max_x2/MaxPool:0 -> (-1, 64, 32, 32)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd [Conv]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d/depthwise:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_4/ReadVariableOp_1:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d/depthwise:0 -> (-1, 64, 64, 64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_4/ReadVariableOp_1:0 -> (64, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd/ReadVariableOp:0 -> (64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 64, 64, 64)
[04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd
[04/18/2022-02:33:56] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64
[04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 64, 64, 64)
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd:0
[04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd:0 -> (-1, 64, 64, 64)[FLOAT]],
[04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd [Conv]
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d/depthwise:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_3/ReadVariableOp_1:0
[04/18/2022-02:33:56] [V] [TRT] Searching for input:
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_1/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d/depthwise:0 -> (-1, 64, 64, 64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_3/ReadVariableOp_1:0 -> (64, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_1/ReadVariableOp:0 -> (64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 64, 64, 64) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd [04/18/2022-02:33:56] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 64, 64, 64) [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/input_3_dn_lvl_3/downsample_max_x2/MaxPool__1249 [Transpose] 
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/input_3_dn_lvl_3/downsample_max_x2/MaxPool:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/input_3_dn_lvl_3/downsample_max_x2/MaxPool__1249 [Transpose] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/input_3_dn_lvl_3/downsample_max_x2/MaxPool:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/input_3_dn_lvl_3/downsample_max_x2/MaxPool__1249 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/input_3_dn_lvl_3/downsample_max_x2/MaxPool__1249 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/input_3_dn_lvl_3/downsample_max_x2/MaxPool__1249:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/input_3_dn_lvl_3/downsample_max_x2/MaxPool__1249:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/input_3_dn_lvl_3/downsample_max_x2/MaxPool__1249 [Transpose] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/input_3_dn_lvl_3/downsample_max_x2/MaxPool__1249:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_0/FusedBatchNormV3 [BatchNormalization] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_0/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_0/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_0/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_0/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_0/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd:0 -> (-1, 64, 64, 64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_0/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_0/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_0/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_0/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_0/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_0/FusedBatchNormV3 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_0/FusedBatchNormV3:0 for ONNX 
tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_0/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_0/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_0/FusedBatchNormV3:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_0/FusedBatchNormV3 [BatchNormalization] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_0/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_0/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_0/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_0/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_0/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd:0 -> (-1, 64, 64, 64)[FLOAT]], 
[StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_0/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_0/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_0/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_0/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_0/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_0/FusedBatchNormV3 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_0/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_0/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_0/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_0/FusedBatchNormV3:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/stack_Unsqueeze__1252 [Unsqueeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/input_3_dn_lvl_3/downsample_max_x2/MaxPool__1249:0 [04/18/2022-02:33:56] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/stack_Unsqueeze__1252 [Unsqueeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/input_3_dn_lvl_3/downsample_max_x2/MaxPool__1249:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 32, 32, 64), unsqueezing to: (_, _, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/stack_Unsqueeze__1252 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/stack_Unsqueeze__1252 [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. 
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,4,4,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. 
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,4,4,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. 
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/stack_Unsqueeze__1252:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/stack_Unsqueeze__1252:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/stack_Unsqueeze__1252 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/stack_Unsqueeze__1252:0 -> (-1, 32, 32, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_0/Sigmoid [Sigmoid] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_0/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_0/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_0/FusedBatchNormV3:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_0/Sigmoid for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_0/Sigmoid [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_0/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_0/Sigmoid:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_0/Sigmoid [Sigmoid] outputs: 
[StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_0/Sigmoid:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_0/Sigmoid [Sigmoid] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_0/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_0/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_0/FusedBatchNormV3:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_0/Sigmoid for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_0/Sigmoid [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_0/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_0/Sigmoid:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_0/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_0/Sigmoid:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/stack_Concat__1253 [Concat] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/stack_Unsqueeze__1177:0 
[04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/stack_Unsqueeze__1251:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/stack_Unsqueeze__1252:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/stack_Concat__1253 [Concat] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/stack_Unsqueeze__1177:0 -> (-1, 32, 32, 64, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/stack_Unsqueeze__1251:0 -> (-1, 32, 32, 64, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/stack_Unsqueeze__1252:0 -> (-1, 32, 32, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/stack_Concat__1253 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/stack_Concat__1253 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/stack_Concat__1253:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/stack_Concat__1253:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/stack_Concat__1253 [Concat] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/stack_Concat__1253:0 -> (-1, 32, 32, 64, 3)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_0/mul [Mul] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_0/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] Searching for 
input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_0/Sigmoid:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_0/mul [Mul] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_0/FusedBatchNormV3:0 -> (-1, 64, 64, 64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_0/Sigmoid:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_0/mul for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_0/mul [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_0/mul:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_0/mul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_0/mul [Mul] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_0/mul:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_0/mul [Mul] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_0/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_0/Sigmoid:0 [04/18/2022-02:33:56] [V] [TRT] 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_0/mul [Mul] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_0/FusedBatchNormV3:0 -> (-1, 64, 64, 64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_0/Sigmoid:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_0/mul for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_0/mul [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_0/mul:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_0/mul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_0/mul [Mul] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_0/mul:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/MatMul [MatMul] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/stack_Concat__1253:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/truediv:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/MatMul [MatMul] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/stack_Concat__1253:0 -> (-1, 32, 32, 64, 3)[FLOAT]], 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/truediv:0 -> (3, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/truediv:0 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/truediv:0 [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/MatMul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/MatMul [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/MatMul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/MatMul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/MatMul [MatMul] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/MatMul:0 -> (-1, 32, 32, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d/depthwise [Conv] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_0/mul:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d_1/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d/depthwise [Conv] inputs: 
[StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_0/mul:0 -> (-1, 64, 64, 64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d_1/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 64, 64, 64) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d/depthwise for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d/depthwise [04/18/2022-02:33:56] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 64, 64, 64) [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d/depthwise:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d/depthwise:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d/depthwise [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d/depthwise:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d/depthwise [Conv] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_0/mul:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d/depthwise [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_0/mul:0 -> (-1, 64, 64, 64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 64, 64, 64) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d/depthwise for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d/depthwise [04/18/2022-02:33:56] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 64, 64, 64) [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d/depthwise:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d/depthwise:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d/depthwise [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d/depthwise:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/Squeeze [Squeeze] [04/18/2022-02:33:56] [V] [TRT] Searching for input: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/MatMul:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/Squeeze [Squeeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/MatMul:0 -> (-1, 32, 32, 64, 1)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Original shape: (_, 32, 32, 64, 1), squeezing to: (_, _, _, _) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/Squeeze for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/Squeeze [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. 
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,4,4,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. 
[04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,4,4,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:56] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. 
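An aside on the broadcast messages above: they are informational, not errors. Each BiFPN combine node stacks its N incoming feature maps on a trailing axis, so input0 has shape [-1, H, W, 64, N] while the fusion weights arrive as [1, 1, 1, N, 1]; TensorRT broadcasts the weights over the leading dims and the MatMul reduces to a per-channel weighted sum, which the following Squeeze flattens back to [-1, H, W, 64]. A minimal NumPy sketch (not the author's code; the batch size is hypothetical, standing in for the -1 in the log):

```python
import numpy as np

# Shapes mirror node_22/3_up_lvl_4 in the log:
# dims(input0)=[-1,32,32,64,3], dims(input1)=[1,1,1,3,1]
rng = np.random.default_rng(0)
batch, h, w, c, n = 2, 32, 32, 64, 3   # batch=2 is a made-up stand-in for -1
stacked = rng.standard_normal((batch, h, w, c, n)).astype(np.float32)  # input0
weights = rng.random((1, 1, 1, n, 1)).astype(np.float32)               # input1

# Broadcast over leading dims, then contract the last axis pair:
# (..., c, n) @ (..., n, 1) -> (..., c, 1), i.e. (-1, 32, 32, 64, 1) in the log
fused = np.matmul(stacked, weights)

# The Squeeze node drops the trailing singleton: (-1, 32, 32, 64)
fused = np.squeeze(fused, axis=-1)

# Same result written as an explicit weighted sum over the n fused inputs
expected = np.einsum('bhwcn,n->bhwc', stacked, weights.reshape(n))
```

This is why the shapes in the subsequent Squeeze entry go from (-1, 32, 32, 64, 1) to (-1, 32, 32, 64): the MatMul is just EfficientDet's weighted feature fusion expressed with broadcasting.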
[04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/Squeeze:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/Squeeze [Squeeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/Squeeze:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd [Conv] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d/depthwise:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_2/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d/depthwise:0 -> (-1, 64, 64, 64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d/ReadVariableOp_1:0 -> (64, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_2/ReadVariableOp:0 -> (64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 64, 64, 64) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd for ONNX node: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd [04/18/2022-02:33:56] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 64, 64, 64) [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd [Conv] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d/depthwise:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_3/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d/depthwise:0 -> (-1, 64, 64, 64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d/ReadVariableOp_1:0 -> (64, 64, 1, 1)[FLOAT]], 
[StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_3/ReadVariableOp:0 -> (64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Convolution input dimensions: (-1, 64, 64, 64) [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd [04/18/2022-02:33:56] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:56] [V] [TRT] Convolution output dimensions: (-1, 64, 64, 64) [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/activation/Sigmoid [Sigmoid] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/Squeeze:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/activation/Sigmoid for ONNX node: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/activation/Sigmoid [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/activation/Sigmoid:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/activation/Sigmoid:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_0/FusedBatchNormV3 [BatchNormalization] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_0/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_0/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_0/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_0/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_0/FusedBatchNormV3 [BatchNormalization] inputs: 
[StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd:0 -> (-1, 64, 64, 64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_0/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_0/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_0/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_0/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_0/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_0/FusedBatchNormV3 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_0/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_0/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_0/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_0/FusedBatchNormV3:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_0/FusedBatchNormV3 [BatchNormalization] [04/18/2022-02:33:56] [V] [TRT] 
Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_0/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_0/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_0/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_0/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_0/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd:0 -> (-1, 64, 64, 64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_0/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_0/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_0/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_0/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_0/FusedBatchNormV3 for 
ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_0/FusedBatchNormV3 [04/18/2022-02:33:56] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_0/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_0/FusedBatchNormV3:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_0/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_0/FusedBatchNormV3:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/activation/mul [Mul] [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/Squeeze:0 [04/18/2022-02:33:56] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/activation/Sigmoid:0 [04/18/2022-02:33:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/Squeeze:0 -> (-1, 32, 32, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/activation/Sigmoid:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:56] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/activation/mul [04/18/2022-02:33:57] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/activation/mul:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/activation/mul:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_0/Sigmoid [Sigmoid] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_0/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_0/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_0/FusedBatchNormV3:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_0/Sigmoid for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_0/Sigmoid [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_0/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_0/Sigmoid:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_0/Sigmoid [Sigmoid] outputs: 
[StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_0/Sigmoid:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_0/Sigmoid [Sigmoid] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_0/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_0/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_0/FusedBatchNormV3:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_0/Sigmoid for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_0/Sigmoid [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_0/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_0/Sigmoid:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_0/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_0/Sigmoid:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__1254 [Transpose] [04/18/2022-02:33:57] [V] [TRT] Searching for input: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/activation/mul:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__1254 [Transpose] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/activation/mul:0 -> (-1, 32, 32, 64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__1254 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__1254 [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__1254:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__1254:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__1254 [Transpose] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__1254:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_0/mul [Mul] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_0/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_0/Sigmoid:0 [04/18/2022-02:33:57] [V] [TRT] 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_0/mul [Mul] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_0/FusedBatchNormV3:0 -> (-1, 64, 64, 64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_0/Sigmoid:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_0/mul for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_0/mul [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_0/mul:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_0/mul:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_0/mul [Mul] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_0/mul:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_0/mul [Mul] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_0/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_0/Sigmoid:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_0/mul [Mul] inputs: 
[StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_0/FusedBatchNormV3:0 -> (-1, 64, 64, 64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_0/Sigmoid:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_0/mul for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_0/mul [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_0/mul:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_0/mul:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_0/mul [Mul] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_0/mul:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__1254:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__1254:0 -> (-1, 
64, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 32, 32) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise [04/18/2022-02:33:57] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 32, 32) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d/depthwise [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_0/mul:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_2/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d/depthwise [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_0/mul:0 -> (-1, 64, 64, 64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_2/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 64, 64) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d/depthwise for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d/depthwise [04/18/2022-02:33:57] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 64, 64) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d/depthwise:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d/depthwise [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d/depthwise:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d/depthwise [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_0/mul:0 
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_4/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d/depthwise [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_0/mul:0 -> (-1, 64, 64, 64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_4/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 64, 64) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d/depthwise for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d/depthwise [04/18/2022-02:33:57] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 64, 64) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d/depthwise:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d/depthwise [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d/depthwise:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/separable_conv/BiasAdd [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/separable_conv/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise:0 -> (-1, 64, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 -> (64, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 32, 32) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/separable_conv/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/separable_conv/BiasAdd [04/18/2022-02:33:57] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 32, 32) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/separable_conv/BiasAdd:0 for ONNX tensor: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/separable_conv/BiasAdd:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/separable_conv/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/separable_conv/BiasAdd:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_4/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d/depthwise:0 -> (-1, 64, 64, 64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_4/ReadVariableOp_1:0 -> (64, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd/ReadVariableOp:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 64, 64) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd for ONNX node: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd [04/18/2022-02:33:57] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 64, 64) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_1/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_3/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d/depthwise:0 -> (-1, 64, 64, 64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_1/ReadVariableOp_1:0 -> (64, 64, 1, 
1)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_3/ReadVariableOp:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 64, 64) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd [04/18/2022-02:33:57] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 64, 64) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/separable_conv/BiasAdd:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/batchnorm/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/batchnorm/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/separable_conv/BiasAdd:0 -> (-1, 64, 32, 32)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/batchnorm/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/batchnorm/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 32, 32)[FLOAT]], 
[04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_0/FusedBatchNormV3 [BatchNormalization] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_0/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_0/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_0/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_0/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_0/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd:0 -> (-1, 64, 64, 64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_0/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_0/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_0/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], 
[StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_0/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_0/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_0/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_0/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_0/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_0/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_0/FusedBatchNormV3:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_0/FusedBatchNormV3 [BatchNormalization] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_0/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_0/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_0/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_0/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_0/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd:0 -> (-1, 64, 64, 64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_0/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_0/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_0/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_0/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_0/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_0/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_0/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_0/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_0/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_0/FusedBatchNormV3:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_1/depthwise [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_4/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_1/depthwise [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 32, 32)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_4/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 32, 32) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_1/depthwise for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_1/depthwise [04/18/2022-02:33:57] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 32, 32) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_1/depthwise:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_1/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_1/depthwise [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_1/depthwise:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_1/depthwise [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_3/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_1/depthwise [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 32, 32)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_3/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 32, 32) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_1/depthwise for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_1/depthwise [04/18/2022-02:33:57] [V] [TRT] Using kernel: (3, 3), strides: 
(1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 32, 32) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_1/depthwise:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_1/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_1/depthwise [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_1/depthwise:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/input_3_up_lvl_4/downsample_max_x2/MaxPool [MaxPool] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/input_3_up_lvl_4/downsample_max_x2/MaxPool [MaxPool] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/input_3_up_lvl_4/downsample_max_x2/MaxPool for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/input_3_up_lvl_4/downsample_max_x2/MaxPool [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/input_3_up_lvl_4/downsample_max_x2/MaxPool:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/input_3_up_lvl_4/downsample_max_x2/MaxPool:0 
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/input_3_up_lvl_4/downsample_max_x2/MaxPool [MaxPool] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/input_3_up_lvl_4/downsample_max_x2/MaxPool:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_0/Sigmoid [Sigmoid] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_0/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_0/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_0/FusedBatchNormV3:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_0/Sigmoid for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_0/Sigmoid [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_0/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_0/Sigmoid:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_0/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_0/Sigmoid:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_0/Sigmoid [Sigmoid] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_0/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_0/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_0/FusedBatchNormV3:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_0/Sigmoid for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_0/Sigmoid [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_0/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_0/Sigmoid:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_0/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_0/Sigmoid:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd_1 [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_1/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_4/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd_1 [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_1/depthwise:0 -> (-1, 64, 32, 32)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_4/ReadVariableOp_1:0 -> (64, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd/ReadVariableOp:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 32, 32) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd_1 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd_1 [04/18/2022-02:33:57] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 32, 32) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd_1:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd_1:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd_1 [Conv] outputs: 
[StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd_1:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_1 [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_1/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_3/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_1/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_1 [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_1/depthwise:0 -> (-1, 64, 32, 32)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_3/ReadVariableOp_1:0 -> (64, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_1/ReadVariableOp:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 32, 32) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_1 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_1 [04/18/2022-02:33:57] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 32, 32) [04/18/2022-02:33:57] [V] 
[TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_1:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_1:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_1 [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_1:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/input_3_up_lvl_4/downsample_max_x2/MaxPool__1307 [Transpose] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/input_3_up_lvl_4/downsample_max_x2/MaxPool:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/input_3_up_lvl_4/downsample_max_x2/MaxPool__1307 [Transpose] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/input_3_up_lvl_4/downsample_max_x2/MaxPool:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/input_3_up_lvl_4/downsample_max_x2/MaxPool__1307 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/input_3_up_lvl_4/downsample_max_x2/MaxPool__1307 [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/input_3_up_lvl_4/downsample_max_x2/MaxPool__1307:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/input_3_up_lvl_4/downsample_max_x2/MaxPool__1307:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/input_3_up_lvl_4/downsample_max_x2/MaxPool__1307 [Transpose] outputs: 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/input_3_up_lvl_4/downsample_max_x2/MaxPool__1307:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_0/mul [Mul] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_0/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_0/Sigmoid:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_0/mul [Mul] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_0/FusedBatchNormV3:0 -> (-1, 64, 64, 64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_0/Sigmoid:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_0/mul for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_0/mul [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_0/mul:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_0/mul:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_0/mul [Mul] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_0/mul:0 -> (-1, 64, 64, 64)[FLOAT]], 
[04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_0/mul [Mul] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_0/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_0/Sigmoid:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_0/mul [Mul] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_0/FusedBatchNormV3:0 -> (-1, 64, 64, 64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_0/Sigmoid:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_0/mul for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_0/mul [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_0/mul:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_0/mul:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_0/mul [Mul] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_0/mul:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_1/FusedBatchNormV3 
[BatchNormalization] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_1/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_1/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_1/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_1/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_1/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd_1:0 -> (-1, 64, 32, 32)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_1/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_1/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_1/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_1/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_1/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_1/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_1/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_1/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_1/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_1/FusedBatchNormV3:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_1/FusedBatchNormV3 [BatchNormalization] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_1/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_1/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_1/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_1/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_1/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_1:0 -> (-1, 64, 32, 32)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_1/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_1/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_1/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_1/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_1/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_1/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_1/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_1/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_1/FusedBatchNormV3 [BatchNormalization] outputs: 
[StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_1/FusedBatchNormV3:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/stack_Unsqueeze__1310 [Unsqueeze] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/input_3_up_lvl_4/downsample_max_x2/MaxPool__1307:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/stack_Unsqueeze__1310 [Unsqueeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/input_3_up_lvl_4/downsample_max_x2/MaxPool__1307:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Original shape: (_, 16, 16, 64), unsqueezing to: (_, _, _, _, _) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/stack_Unsqueeze__1310 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/stack_Unsqueeze__1310 [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. 
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,4,4,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. 
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,4,4,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. 
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/stack_Unsqueeze__1310:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/stack_Unsqueeze__1310:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/stack_Unsqueeze__1310 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/stack_Unsqueeze__1310:0 -> (-1, 16, 16, 64, 1)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d/depthwise [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_0/mul:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_3/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d/depthwise [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_0/mul:0 -> (-1, 64, 64, 64)[FLOAT]], 
[StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_3/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 64, 64) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d/depthwise for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d/depthwise [04/18/2022-02:33:57] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 64, 64) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d/depthwise:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d/depthwise [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d/depthwise:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d/depthwise [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_0/mul:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_1/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d/depthwise [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_0/mul:0 -> (-1, 64, 64, 64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_1/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 64, 64) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d/depthwise for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d/depthwise [04/18/2022-02:33:57] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 64, 64) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d/depthwise:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d/depthwise [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d/depthwise:0 -> (-1, 64, 64, 64)[FLOAT]], [04/18/2022-02:33:57] [V] 
[TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_1/Sigmoid [Sigmoid] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_1/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_1/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_1/FusedBatchNormV3:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_1/Sigmoid for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_1/Sigmoid [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_1/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_1/Sigmoid:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_1/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_1/Sigmoid:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_1/Sigmoid [Sigmoid] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_1/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_1/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_1/FusedBatchNormV3:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_1/Sigmoid for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_1/Sigmoid [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_1/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_1/Sigmoid:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_1/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_1/Sigmoid:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/stack_Concat__1311 [Concat] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/stack_Unsqueeze__1161:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/stack_Unsqueeze__1309:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/stack_Unsqueeze__1310:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/stack_Concat__1311 [Concat] inputs: 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/stack_Unsqueeze__1161:0 -> (-1, 16, 16, 64, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/stack_Unsqueeze__1309:0 -> (-1, 16, 16, 64, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/stack_Unsqueeze__1310:0 -> (-1, 16, 16, 64, 1)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/stack_Concat__1311 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/stack_Concat__1311 [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/stack_Concat__1311:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/stack_Concat__1311:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/stack_Concat__1311 [Concat] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/stack_Concat__1311:0 -> (-1, 16, 16, 64, 3)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_3/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d/depthwise:0 -> (-1, 64, 64, 64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_3/ReadVariableOp_1:0 -> (810, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd/ReadVariableOp:0 -> (810)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 64, 64) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd [04/18/2022-02:33:57] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 810 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 810, 64, 64) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd:0 -> (-1, 810, 64, 64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_4/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_3/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d/depthwise:0 -> (-1, 64, 64, 64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_4/ReadVariableOp_1:0 -> (36, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_3/ReadVariableOp:0 -> (36)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 64, 64) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd [04/18/2022-02:33:57] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 36 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 36, 64, 64) [04/18/2022-02:33:57] [V] [TRT] 
Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd:0 -> (-1, 36, 64, 64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_1/mul [Mul] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_1/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_1/Sigmoid:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_1/mul [Mul] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_1/FusedBatchNormV3:0 -> (-1, 64, 32, 32)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_1/Sigmoid:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_1/mul for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_1/mul [04/18/2022-02:33:57] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_1/mul:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_1/mul:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_1/mul [Mul] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_1/mul:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_1/mul [Mul] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_1/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_1/Sigmoid:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_1/mul [Mul] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_1/FusedBatchNormV3:0 -> (-1, 64, 32, 32)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_1/Sigmoid:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_1/mul for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_1/mul [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_1/mul:0 for ONNX tensor: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_1/mul:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_1/mul [Mul] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_1/mul:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/MatMul [MatMul] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/stack_Concat__1311:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/truediv:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/MatMul [MatMul] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/stack_Concat__1311:0 -> (-1, 16, 16, 64, 3)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/truediv:0 -> (3, 1)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/truediv:0 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/truediv:0 [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/MatMul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/MatMul [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. 
[04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/MatMul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/MatMul:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/MatMul [MatMul] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/MatMul:0 -> (-1, 16, 16, 64, 1)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd__1223 [Transpose] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd__1223 [Transpose] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd:0 -> (-1, 810, 64, 64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd__1223 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd__1223 [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd__1223:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd__1223:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd__1223 [Transpose] outputs: 
[StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd__1223:0 -> (-1, 64, 64, 810)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd__1246 [Transpose] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd__1246 [Transpose] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd:0 -> (-1, 36, 64, 64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd__1246 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd__1246 [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd__1246:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd__1246:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd__1246 [Transpose] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd__1246:0 -> (-1, 64, 64, 36)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d_1/depthwise [Conv] [04/18/2022-02:33:57] [V] [TRT] 
Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_1/mul:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d_1/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d_1/depthwise [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_1/mul:0 -> (-1, 64, 32, 32)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d_1/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 32, 32) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d_1/depthwise for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d_1/depthwise [04/18/2022-02:33:57] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 32, 32) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d_1/depthwise:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d_1/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d_1/depthwise [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d_1/depthwise:0 
-> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d_1/depthwise [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_1/mul:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d_1/depthwise [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_1/mul:0 -> (-1, 64, 32, 32)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 32, 32) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d_1/depthwise for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d_1/depthwise [04/18/2022-02:33:57] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 32, 32) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d_1/depthwise:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d_1/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d_1/depthwise [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d_1/depthwise:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/Squeeze [Squeeze] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/MatMul:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/Squeeze [Squeeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/MatMul:0 -> (-1, 16, 16, 64, 1)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Original shape: (_, 16, 16, 64, 1), squeezing to: (_, _, _, _) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/Squeeze for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/Squeeze [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. 
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,4,4,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. 
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,4,4,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. 
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/Squeeze:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/Squeeze:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/Squeeze [Squeeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/Squeeze:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape [Reshape] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd__1223:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_shape [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape [Reshape] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd__1223:0 -> (-1, 64, 64, 810)[FLOAT]], 
[StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_shape -> (3)[INT32]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape [Reshape] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape:0 -> (-1, 36864, 90)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape [Reshape] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd__1246:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_shape [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape [Reshape] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd__1246:0 -> (-1, 64, 64, 36)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_shape -> (3)[INT32]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape [Reshape] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape:0 -> (-1, 36864, 4)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_1 [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d_1/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_2/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_1 [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d_1/depthwise:0 -> (-1, 64, 32, 32)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d/ReadVariableOp_1:0 -> (64, 64, 1, 1)[FLOAT]], 
[StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_2/ReadVariableOp:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 32, 32) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_1 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_1 [04/18/2022-02:33:57] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 32, 32) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_1:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_1:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_1 [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_1:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_1 [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d_1/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_3/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_1 [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d_1/depthwise:0 -> (-1, 64, 32, 32)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d/ReadVariableOp_1:0 -> (64, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_3/ReadVariableOp:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 32, 32) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_1 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_1 [04/18/2022-02:33:57] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 32, 32) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_1:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_1:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_1 [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_1:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/activation/Sigmoid [Sigmoid] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/Squeeze:0 [04/18/2022-02:33:57] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/Squeeze:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/activation/Sigmoid [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/activation/Sigmoid:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/activation/Sigmoid:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_1/FusedBatchNormV3 [BatchNormalization] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_1/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_1/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_1/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for 
input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_1/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_1/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_1:0 -> (-1, 64, 32, 32)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_1/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_1/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_1/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_1/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_1/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_1/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_1/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_1/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_1/FusedBatchNormV3 [BatchNormalization] outputs: 
[StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_1/FusedBatchNormV3:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_1/FusedBatchNormV3 [BatchNormalization] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_1/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_1/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_1/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_1/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_1/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_1:0 -> (-1, 64, 32, 32)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_1/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_1/ReadVariableOp_1:0 -> (64)[FLOAT]], 
[StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_1/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_1/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_1/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_1/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_1/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_1/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_1/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_1/FusedBatchNormV3:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/activation/mul [Mul] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/Squeeze:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/activation/Sigmoid:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/Squeeze:0 -> (-1, 16, 16, 
64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/activation/Sigmoid:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/activation/mul [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/activation/mul:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/activation/mul:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_1/Sigmoid [Sigmoid] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_1/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_1/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_1/FusedBatchNormV3:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_1/Sigmoid for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_1/Sigmoid [04/18/2022-02:33:57] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_1/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_1/Sigmoid:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_1/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_1/Sigmoid:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_1/Sigmoid [Sigmoid] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_1/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_1/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_1/FusedBatchNormV3:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_1/Sigmoid for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_1/Sigmoid [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_1/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_1/Sigmoid:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_1/Sigmoid [Sigmoid] outputs: 
[StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_1/Sigmoid:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__1312 [Transpose] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/activation/mul:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__1312 [Transpose] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/activation/mul:0 -> (-1, 16, 16, 64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__1312 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__1312 [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__1312:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__1312:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__1312 [Transpose] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__1312:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_1/mul [Mul] [04/18/2022-02:33:57] [V] [TRT] Searching for input: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_1/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_1/Sigmoid:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_1/mul [Mul] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_1/FusedBatchNormV3:0 -> (-1, 64, 32, 32)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_1/Sigmoid:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_1/mul for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_1/mul [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_1/mul:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_1/mul:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_1/mul [Mul] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_1/mul:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_1/mul [Mul] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_1/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] 
[TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_1/Sigmoid:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_1/mul [Mul] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_1/FusedBatchNormV3:0 -> (-1, 64, 32, 32)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_1/Sigmoid:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_1/mul for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_1/mul [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_1/mul:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_1/mul:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_1/mul [Mul] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_1/mul:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__1312:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__1312:0 -> (-1, 64, 16, 16)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 16, 16) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise [04/18/2022-02:33:57] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 16, 16) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_1/depthwise [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_1/mul:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_2/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_1/depthwise [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_1/mul:0 -> (-1, 64, 32, 32)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_2/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 32, 32) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_1/depthwise for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_1/depthwise [04/18/2022-02:33:57] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 32, 32) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_1/depthwise:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_1/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_1/depthwise [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_1/depthwise:0 -> (-1, 64, 32, 
32)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_1/depthwise [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_1/mul:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_4/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_1/depthwise [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_1/mul:0 -> (-1, 64, 32, 32)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_4/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 32, 32) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_1/depthwise for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_1/depthwise [04/18/2022-02:33:57] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 32, 32) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_1/depthwise:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_1/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_1/depthwise [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_1/depthwise:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/separable_conv/BiasAdd [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/separable_conv/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise:0 -> (-1, 64, 16, 16)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 -> (64, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 16, 16) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/separable_conv/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/separable_conv/BiasAdd [04/18/2022-02:33:57] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), 
postpadding: (0, 0), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 16, 16) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/separable_conv/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/separable_conv/BiasAdd:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/separable_conv/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/separable_conv/BiasAdd:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd_1 [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_1/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_4/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd_1 [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_1/depthwise:0 -> (-1, 64, 32, 32)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_4/ReadVariableOp_1:0 -> (64, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd/ReadVariableOp:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] 
[TRT] Convolution input dimensions: (-1, 64, 32, 32) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd_1 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd_1 [04/18/2022-02:33:57] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 32, 32) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd_1:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd_1:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd_1 [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd_1:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_1 [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_1/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_1/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_3/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_1 [Conv] inputs: 
[StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_1/depthwise:0 -> (-1, 64, 32, 32)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_1/ReadVariableOp_1:0 -> (64, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_3/ReadVariableOp:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 32, 32) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_1 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_1 [04/18/2022-02:33:57] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 32, 32) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_1:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_1:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_1 [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_1:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/separable_conv/BiasAdd:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/batchnorm/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/batchnorm/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/separable_conv/BiasAdd:0 -> (-1, 64, 16, 16)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/batchnorm/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/batchnorm/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3:0 for ONNX tensor: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_1/FusedBatchNormV3 [BatchNormalization] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_1/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_1/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_1/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_1/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_1/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd_1:0 -> (-1, 64, 32, 32)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_1/ReadVariableOp:0 -> 
(64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_1/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_1/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_1/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_1/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_1/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_1/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_1/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_1/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_1/FusedBatchNormV3:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_1/FusedBatchNormV3 [BatchNormalization] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_1/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_1/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_1/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_1/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_1/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_1:0 -> (-1, 64, 32, 32)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_1/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_1/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_1/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_1/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_1/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_1/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] 
Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_1/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_1/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_1/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_1/FusedBatchNormV3:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_2/depthwise [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_4/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_2/depthwise [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 16, 16)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_4/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 16, 16) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_2/depthwise for ONNX node: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_2/depthwise [04/18/2022-02:33:57] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 16, 16) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_2/depthwise:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_2/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_2/depthwise [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_2/depthwise:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_2/depthwise [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_3/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_2/depthwise [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 16, 16)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_3/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution 
input dimensions: (-1, 64, 16, 16) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_2/depthwise for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_2/depthwise [04/18/2022-02:33:57] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 16, 16) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_2/depthwise:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_2/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_2/depthwise [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_2/depthwise:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/input_3_up_lvl_5/downsample_max_x2/MaxPool [MaxPool] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/input_3_up_lvl_5/downsample_max_x2/MaxPool [MaxPool] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/input_3_up_lvl_5/downsample_max_x2/MaxPool for ONNX node: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/input_3_up_lvl_5/downsample_max_x2/MaxPool [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/input_3_up_lvl_5/downsample_max_x2/MaxPool:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/input_3_up_lvl_5/downsample_max_x2/MaxPool:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/input_3_up_lvl_5/downsample_max_x2/MaxPool [MaxPool] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/input_3_up_lvl_5/downsample_max_x2/MaxPool:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_1/Sigmoid [Sigmoid] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_1/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_1/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_1/FusedBatchNormV3:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_1/Sigmoid for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_1/Sigmoid [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_1/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_1/Sigmoid:0 [04/18/2022-02:33:57] [V] [TRT] 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_1/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_1/Sigmoid:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_1/Sigmoid [Sigmoid] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_1/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_1/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_1/FusedBatchNormV3:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_1/Sigmoid for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_1/Sigmoid [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_1/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_1/Sigmoid:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_1/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_1/Sigmoid:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd_2 [Conv] 
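An annotation on the entries above: the `activation_1/Sigmoid` nodes here, together with the `activation_1/mul` nodes parsed a few entries later (which multiply the BatchNorm output by its own sigmoid), are how the TF-to-ONNX exporter decomposes the swish/SiLU activation used in the EfficientDet prediction towers. A minimal numpy sketch of the same computation, assuming only the shapes shown in the log:

```python
import numpy as np

def swish(x):
    """Swish/SiLU: x * sigmoid(x). The ONNX export emits this as a
    separate Sigmoid node followed by a Mul node, which is the
    activation_1/Sigmoid + activation_1/mul pair in the parser log."""
    return x * (1.0 / (1.0 + np.exp(-x)))

# Same layout as the tower feature map in the log: (N, 64, 32, 32)
x = np.random.randn(2, 64, 32, 32).astype(np.float32)
y = swish(x)
assert y.shape == x.shape
```

Because the pattern is two elementwise ops on the same tensor, TensorRT will typically fuse it into a single pointwise layer during the later optimization phase; at parse time it still appears as the two distinct nodes logged here.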
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_2/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_4/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd_2 [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_2/depthwise:0 -> (-1, 64, 16, 16)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_4/ReadVariableOp_1:0 -> (64, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd/ReadVariableOp:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 16, 16) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd_2 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd_2 [04/18/2022-02:33:57] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 16, 16) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd_2:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd_2:0 [04/18/2022-02:33:57] 
[V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd_2 [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd_2:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_2 [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_2/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_3/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_1/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_2 [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_2/depthwise:0 -> (-1, 64, 16, 16)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_3/ReadVariableOp_1:0 -> (64, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_1/ReadVariableOp:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 16, 16) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_2 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_2 [04/18/2022-02:33:57] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: 
(1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 16, 16) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_2:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_2:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_2 [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_2:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/input_3_up_lvl_5/downsample_max_x2/MaxPool__1365 [Transpose] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/input_3_up_lvl_5/downsample_max_x2/MaxPool:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/input_3_up_lvl_5/downsample_max_x2/MaxPool__1365 [Transpose] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/input_3_up_lvl_5/downsample_max_x2/MaxPool:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/input_3_up_lvl_5/downsample_max_x2/MaxPool__1365 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/input_3_up_lvl_5/downsample_max_x2/MaxPool__1365 [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/input_3_up_lvl_5/downsample_max_x2/MaxPool__1365:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/input_3_up_lvl_5/downsample_max_x2/MaxPool__1365:0 [04/18/2022-02:33:57] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/input_3_up_lvl_5/downsample_max_x2/MaxPool__1365 [Transpose] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/input_3_up_lvl_5/downsample_max_x2/MaxPool__1365:0 -> (-1, 8, 8, 64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_1/mul [Mul] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_1/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_1/Sigmoid:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_1/mul [Mul] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_1/FusedBatchNormV3:0 -> (-1, 64, 32, 32)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_1/Sigmoid:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_1/mul for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_1/mul [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_1/mul:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_1/mul:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_1/mul [Mul] outputs: 
[StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_1/mul:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_1/mul [Mul] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_1/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_1/Sigmoid:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_1/mul [Mul] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_1/FusedBatchNormV3:0 -> (-1, 64, 32, 32)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_1/Sigmoid:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_1/mul for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_1/mul [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_1/mul:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_1/mul:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_1/mul [Mul] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_1/mul:0 -> (-1, 64, 32, 32)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] 
Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_2/FusedBatchNormV3 [BatchNormalization] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd_2:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_2/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_2/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_2/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_2/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_2/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd_2:0 -> (-1, 64, 16, 16)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_2/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_2/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_2/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], 
[StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_2/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_2/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_2/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_2/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_2/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_2/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_2/FusedBatchNormV3:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_2/FusedBatchNormV3 [BatchNormalization] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_2:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_2/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_2/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_2/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_2/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_2/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_2:0 -> (-1, 64, 16, 16)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_2/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_2/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_2/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_2/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_2/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_2/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_2/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_2/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_2/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_2/FusedBatchNormV3:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/stack_Unsqueeze__1368 [Unsqueeze] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/input_3_up_lvl_5/downsample_max_x2/MaxPool__1365:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/stack_Unsqueeze__1368 [Unsqueeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/input_3_up_lvl_5/downsample_max_x2/MaxPool__1365:0 -> (-1, 8, 8, 64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Original shape: (_, 8, 8, 64), unsqueezing to: (_, _, _, _, _) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/stack_Unsqueeze__1368 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/stack_Unsqueeze__1368 [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. 
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE].
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE].
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE].
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,4,4,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE].
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE].
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE].
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,4,4,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
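These "broadcasting input1" INFO messages are benign: each BiFPN combine stacks its k incoming feature maps along a trailing axis and contracts them with a (k, 1) fusion-weight vector via MatMul, and TensorRT simply broadcasts that small weight tensor up to the stacked tensor's rank. A numpy sketch under that reading, with shapes mirroring the log (hypothetical data, not the model's actual weights):

```python
import numpy as np

# input0 in the log: (-1, 8, 8, 64, 2) -- two feature maps stacked on the
# last axis; input1: a (2, 1) weight vector broadcast from (1, 1, 1, 2, 1).
n, h, w, c, k = 4, 8, 8, 64, 2
stacked = np.random.rand(n, h, w, c, k).astype(np.float32)
weights = np.random.rand(k, 1).astype(np.float32)
weights /= weights.sum()       # normalized fusion weights, as in BiFPN

fused = stacked @ weights      # numpy broadcasts (k, 1) across leading dims
assert fused.shape == (n, h, w, c, 1)
```

The trailing singleton axis is then squeezed away downstream, which is why the subsequent combine outputs in the log return to plain (-1, H, W, 64) shapes.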
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE].
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE].
[04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/stack_Unsqueeze__1368:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/stack_Unsqueeze__1368:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/stack_Unsqueeze__1368 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/stack_Unsqueeze__1368:0 -> (-1, 8, 8, 64, 1)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_1/depthwise [Conv]
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_1/mul:0
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_3/ReadVariableOp:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_1/depthwise [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_1/mul:0 -> (-1, 64, 32, 32)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_3/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 32, 32)
[04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_1/depthwise for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_1/depthwise
[04/18/2022-02:33:57] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64
[04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 32, 32)
[04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_1/depthwise:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_1/depthwise:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_1/depthwise [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_1/depthwise:0 -> (-1, 64, 32, 32)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_1/depthwise [Conv]
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_1/mul:0
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_1/ReadVariableOp:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_1/depthwise [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_1/mul:0 -> (-1, 64, 32, 32)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_1/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 32, 32)
[04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_1/depthwise for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_1/depthwise
[04/18/2022-02:33:57] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64
[04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 32, 32)
[04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_1/depthwise:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_1/depthwise:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_1/depthwise [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_1/depthwise:0 -> (-1, 64, 32, 32)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_2/Sigmoid [Sigmoid]
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_2/FusedBatchNormV3:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_2/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_2/FusedBatchNormV3:0 -> (-1, 64, 16, 16)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_2/Sigmoid for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_2/Sigmoid
[04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_2/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_2/Sigmoid:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_2/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_2/Sigmoid:0 -> (-1, 64, 16, 16)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_2/Sigmoid [Sigmoid]
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_2/FusedBatchNormV3:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_2/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_2/FusedBatchNormV3:0 -> (-1, 64, 16, 16)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_2/Sigmoid for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_2/Sigmoid
[04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_2/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_2/Sigmoid:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_2/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_2/Sigmoid:0 -> (-1, 64, 16, 16)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/stack_Concat__1369 [Concat]
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/stack_Unsqueeze__1145:0
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/stack_Unsqueeze__1367:0
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/stack_Unsqueeze__1368:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/stack_Concat__1369 [Concat] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/stack_Unsqueeze__1145:0 -> (-1, 8, 8, 64, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/stack_Unsqueeze__1367:0 -> (-1, 8, 8, 64, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/stack_Unsqueeze__1368:0 -> (-1, 8, 8, 64, 1)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/stack_Concat__1369 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/stack_Concat__1369
[04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/stack_Concat__1369:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/stack_Concat__1369:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/stack_Concat__1369 [Concat] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/stack_Concat__1369:0 -> (-1, 8, 8, 64, 3)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_1 [Conv]
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_1/depthwise:0
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_3/ReadVariableOp_1:0
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_1 [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_1/depthwise:0 -> (-1, 64, 32, 32)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_3/ReadVariableOp_1:0 -> (810, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd/ReadVariableOp:0 -> (810)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 32, 32)
[04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_1 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_1
[04/18/2022-02:33:57] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 810
[04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 810, 32, 32)
[04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_1:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_1:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_1 [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_1:0 -> (-1, 810, 32, 32)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_1 [Conv]
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_1/depthwise:0
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_4/ReadVariableOp_1:0
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_3/ReadVariableOp:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_1 [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_1/depthwise:0 -> (-1, 64, 32, 32)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_4/ReadVariableOp_1:0 -> (36, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_3/ReadVariableOp:0 -> (36)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 32, 32)
[04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_1 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_1
[04/18/2022-02:33:57] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 36
[04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 36, 32, 32)
[04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_1:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_1:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_1 [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_1:0 -> (-1, 36, 32, 32)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_2/mul [Mul]
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_2/FusedBatchNormV3:0
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_2/Sigmoid:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_2/mul [Mul] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_2/FusedBatchNormV3:0 -> (-1, 64, 16, 16)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_2/Sigmoid:0 -> (-1, 64, 16, 16)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_2/mul for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_2/mul
[04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_2/mul:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_2/mul:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_2/mul [Mul] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_2/mul:0 -> (-1, 64, 16, 16)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_2/mul [Mul]
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_2/FusedBatchNormV3:0
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_2/Sigmoid:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_2/mul [Mul] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_2/FusedBatchNormV3:0 -> (-1, 64, 16, 16)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_2/Sigmoid:0 -> (-1, 64, 16, 16)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_2/mul for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_2/mul
[04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_2/mul:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_2/mul:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_2/mul [Mul] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_2/mul:0 -> (-1, 64, 16, 16)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/MatMul [MatMul]
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/stack_Concat__1369:0
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/truediv:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/MatMul [MatMul] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/stack_Concat__1369:0 -> (-1, 8, 8, 64, 3)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/truediv:0 -> (3, 1)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/truediv:0 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/truediv:0
[04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/MatMul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/MatMul
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE].
[04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/MatMul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/MatMul:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/MatMul [MatMul] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/MatMul:0 -> (-1, 8, 8, 64, 1)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_1__1281 [Transpose]
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_1:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_1__1281 [Transpose] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_1:0 -> (-1, 810, 32, 32)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_1__1281 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_1__1281
[04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_1__1281:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_1__1281:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_1__1281 [Transpose] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_1__1281:0 -> (-1, 32, 32, 810)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_1__1304 [Transpose]
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_1:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_1__1304 [Transpose] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_1:0 -> (-1, 36, 32, 32)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_1__1304 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_1__1304
[04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_1__1304:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_1__1304:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_1__1304 [Transpose] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_1__1304:0 -> (-1, 32, 32, 36)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d_2/depthwise [Conv]
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_2/mul:0
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d_1/ReadVariableOp:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d_2/depthwise [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_2/mul:0 -> (-1, 64, 16, 16)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d_1/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 16, 16)
[04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d_2/depthwise for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d_2/depthwise
[04/18/2022-02:33:57] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64
[04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 16, 16)
[04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d_2/depthwise:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d_2/depthwise:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d_2/depthwise [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d_2/depthwise:0 -> (-1, 64, 16, 16)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d_2/depthwise [Conv]
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_2/mul:0
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d/ReadVariableOp:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d_2/depthwise [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_2/mul:0 -> (-1, 64, 16, 16)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 16, 16)
[04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d_2/depthwise for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d_2/depthwise
[04/18/2022-02:33:57] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64
[04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 16, 16)
[04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d_2/depthwise:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d_2/depthwise:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d_2/depthwise [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d_2/depthwise:0 -> (-1, 64, 16, 16)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/Squeeze [Squeeze]
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/MatMul:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/Squeeze [Squeeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/MatMul:0 -> (-1, 8, 8, 64, 1)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Original shape: (_, 8, 8, 64, 1), squeezing to: (_, _, _, _)
[04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/Squeeze for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/Squeeze
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE].
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE].
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE].
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,4,4,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,4,4,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. 
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/Squeeze:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/Squeeze:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/Squeeze [Squeeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/Squeeze:0 -> (-1, 8, 8, 64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_1 [Reshape] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_1__1281:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_1_shape [04/18/2022-02:33:57] [V] [TRT] 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_1 [Reshape] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_1__1281:0 -> (-1, 32, 32, 810)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_1_shape -> (3)[INT32]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_1 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_1 [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_1:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_1:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_1 [Reshape] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_1:0 -> (-1, 9216, 90)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_1 [Reshape] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_1__1304:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_1_shape [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_1 [Reshape] inputs: 
[StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_1__1304:0 -> (-1, 32, 32, 36)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_1_shape -> (3)[INT32]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_1 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_1 [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_1:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_1:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_1 [Reshape] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_1:0 -> (-1, 9216, 4)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_2 [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d_2/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_2/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_2 [Conv] inputs: 
[StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d_2/depthwise:0 -> (-1, 64, 16, 16)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d/ReadVariableOp_1:0 -> (64, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_2/ReadVariableOp:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 16, 16) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_2 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_2 [04/18/2022-02:33:57] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 16, 16) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_2:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_2:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_2 [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_2:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_2 [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d_2/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_3/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_2 [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d_2/depthwise:0 -> (-1, 64, 16, 16)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d/ReadVariableOp_1:0 -> (64, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_3/ReadVariableOp:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 16, 16) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_2 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_2 [04/18/2022-02:33:57] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 16, 16) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_2:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_2:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_2 [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_2:0 -> (-1, 64, 16, 
16)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/activation/Sigmoid [Sigmoid] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/Squeeze:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/Squeeze:0 -> (-1, 8, 8, 64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/activation/Sigmoid [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/activation/Sigmoid:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/activation/Sigmoid:0 -> (-1, 8, 8, 64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_2/FusedBatchNormV3 [BatchNormalization] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_2:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_2/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_2/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_2/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_2/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_2/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_2:0 -> (-1, 64, 16, 16)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_2/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_2/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_2/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_2/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_2/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_2/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_2/FusedBatchNormV3:0 for ONNX 
tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_2/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_2/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_2/FusedBatchNormV3:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_2/FusedBatchNormV3 [BatchNormalization] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_2:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_2/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_2/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_2/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_2/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_2/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_2:0 -> (-1, 64, 16, 16)[FLOAT]], 
[StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_2/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_2/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_2/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_2/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_2/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_2/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_2/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_2/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_2/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_2/FusedBatchNormV3:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/activation/mul [Mul] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/Squeeze:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/activation/Sigmoid:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/Squeeze:0 -> (-1, 8, 8, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/activation/Sigmoid:0 -> (-1, 8, 8, 64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/activation/mul [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/activation/mul:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/activation/mul:0 -> (-1, 8, 8, 64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_2/Sigmoid [Sigmoid] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_2/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_2/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_2/FusedBatchNormV3:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_2/Sigmoid for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_2/Sigmoid [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_2/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_2/Sigmoid:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_2/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_2/Sigmoid:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_2/Sigmoid [Sigmoid] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_2/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_2/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_2/FusedBatchNormV3:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_2/Sigmoid for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_2/Sigmoid [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_2/Sigmoid:0 for ONNX tensor: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_2/Sigmoid:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_2/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_2/Sigmoid:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__1370 [Transpose] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/activation/mul:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__1370 [Transpose] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/activation/mul:0 -> (-1, 8, 8, 64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__1370 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__1370 [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__1370:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__1370:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__1370 [Transpose] outputs: 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__1370:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_2/mul [Mul] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_2/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_2/Sigmoid:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_2/mul [Mul] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_2/FusedBatchNormV3:0 -> (-1, 64, 16, 16)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_2/Sigmoid:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_2/mul for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_2/mul [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_2/mul:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_2/mul:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_2/mul [Mul] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_2/mul:0 -> (-1, 64, 16, 
16)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_2/mul [Mul] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_2/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_2/Sigmoid:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_2/mul [Mul] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_2/FusedBatchNormV3:0 -> (-1, 64, 16, 16)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_2/Sigmoid:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_2/mul for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_2/mul [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_2/mul:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_2/mul:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_2/mul [Mul] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_2/mul:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__1370:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise__1370:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise [04/18/2022-02:33:57] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 8, 8) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_2/depthwise [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_2/mul:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_2/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_2/depthwise [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_2/mul:0 -> (-1, 64, 16, 16)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_2/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 16, 16) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_2/depthwise for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_2/depthwise [04/18/2022-02:33:57] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 16, 16) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_2/depthwise:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_2/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_2/depthwise [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_2/depthwise:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_2/depthwise [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_2/mul:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_4/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_2/depthwise [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_2/mul:0 -> (-1, 64, 16, 16)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_4/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 16, 16) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_2/depthwise for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_2/depthwise [04/18/2022-02:33:57] [V] [TRT] Using kernel: (3, 3), 
strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 16, 16) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_2/depthwise:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_2/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_2/depthwise [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_2/depthwise:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/separable_conv/BiasAdd [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/separable_conv/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/separable_conv/separable_conv2d/depthwise:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 -> (64, 64, 1, 1)[FLOAT]], 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/separable_conv/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/separable_conv/BiasAdd [04/18/2022-02:33:57] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 8, 8) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/separable_conv/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/separable_conv/BiasAdd:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/separable_conv/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/separable_conv/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd_2 [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_2/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_4/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd_2 [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_2/depthwise:0 -> (-1, 64, 16, 16)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_4/ReadVariableOp_1:0 -> (64, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd/ReadVariableOp:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 16, 16) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd_2 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd_2 [04/18/2022-02:33:57] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 16, 16) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd_2:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd_2:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd_2 [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd_2:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_2 [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_2/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_1/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_3/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_2 [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_2/depthwise:0 -> (-1, 64, 16, 16)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_1/ReadVariableOp_1:0 -> (64, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_3/ReadVariableOp:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 16, 16) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_2 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_2 [04/18/2022-02:33:57] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 16, 16) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_2:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_2:0 [04/18/2022-02:33:57] [V] [TRT] 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_2 [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_2:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/separable_conv/BiasAdd:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/batchnorm/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/batchnorm/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/separable_conv/BiasAdd:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/batchnorm/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/batchnorm/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], 
[StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_2/FusedBatchNormV3 [BatchNormalization] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd_2:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_2/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_2/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_2/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_2/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_2/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd_2:0 -> (-1, 64, 16, 16)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_2/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_2/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_2/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_2/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_2/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_2/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_2/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_2/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_2/FusedBatchNormV3 [BatchNormalization] outputs: 
[StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_2/FusedBatchNormV3:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_2/FusedBatchNormV3 [BatchNormalization] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_2:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_2/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_2/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_2/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_2/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_2/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_2:0 -> (-1, 64, 16, 16)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_2/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_2/ReadVariableOp_1:0 -> (64)[FLOAT]], 
[StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_2/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_2/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_2/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_2/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_2/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_2/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_2/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_2/FusedBatchNormV3:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_3/depthwise [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_4/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_3/depthwise [Conv] 
inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_4/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_3/depthwise for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_3/depthwise [04/18/2022-02:33:57] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 8, 8) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_3/depthwise:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_3/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_3/depthwise [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_3/depthwise:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_3/depthwise [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_3/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_3/depthwise [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_3/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_3/depthwise for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_3/depthwise [04/18/2022-02:33:57] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 8, 8) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_3/depthwise:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_3/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_3/depthwise [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_3/depthwise:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/input_3_up_lvl_6/downsample_max_x2/MaxPool [MaxPool] [04/18/2022-02:33:57] 
[V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/input_3_up_lvl_6/downsample_max_x2/MaxPool [MaxPool] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/input_3_up_lvl_6/downsample_max_x2/MaxPool for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/input_3_up_lvl_6/downsample_max_x2/MaxPool [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/input_3_up_lvl_6/downsample_max_x2/MaxPool:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/input_3_up_lvl_6/downsample_max_x2/MaxPool:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/input_3_up_lvl_6/downsample_max_x2/MaxPool [MaxPool] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/input_3_up_lvl_6/downsample_max_x2/MaxPool:0 -> (-1, 64, 4, 4)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_2/Sigmoid [Sigmoid] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_2/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_2/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_2/FusedBatchNormV3:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:57] 
[V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_2/Sigmoid for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_2/Sigmoid [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_2/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_2/Sigmoid:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_2/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_2/Sigmoid:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_2/Sigmoid [Sigmoid] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_2/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_2/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_2/FusedBatchNormV3:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_2/Sigmoid for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_2/Sigmoid [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_2/Sigmoid:0 
for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_2/Sigmoid:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_2/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_2/Sigmoid:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd_3 [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_3/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_4/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd_3 [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_3/depthwise:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_4/ReadVariableOp_1:0 -> (64, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd/ReadVariableOp:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd_3 for ONNX node: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd_3 [04/18/2022-02:33:57] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 8, 8) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd_3:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd_3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd_3 [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd_3:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_3 [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_3/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_3/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_1/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_3 [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_3/depthwise:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_3/ReadVariableOp_1:0 -> (64, 64, 
1, 1)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_1/ReadVariableOp:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_3 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_3 [04/18/2022-02:33:57] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 8, 8) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_3:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_3 [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_3:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/input_3_up_lvl_6/downsample_max_x2/MaxPool__1423 [Transpose] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/input_3_up_lvl_6/downsample_max_x2/MaxPool:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/input_3_up_lvl_6/downsample_max_x2/MaxPool__1423 [Transpose] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/input_3_up_lvl_6/downsample_max_x2/MaxPool:0 -> (-1, 64, 4, 4)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/input_3_up_lvl_6/downsample_max_x2/MaxPool__1423 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/input_3_up_lvl_6/downsample_max_x2/MaxPool__1423 [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/input_3_up_lvl_6/downsample_max_x2/MaxPool__1423:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/input_3_up_lvl_6/downsample_max_x2/MaxPool__1423:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/input_3_up_lvl_6/downsample_max_x2/MaxPool__1423 [Transpose] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/input_3_up_lvl_6/downsample_max_x2/MaxPool__1423:0 -> (-1, 4, 4, 64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_2/mul [Mul] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_2/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_2/Sigmoid:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_2/mul [Mul] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_2/FusedBatchNormV3:0 -> (-1, 64, 16, 16)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_2/Sigmoid:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_2/mul for ONNX 
node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_2/mul [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_2/mul:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_2/mul:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_2/mul [Mul] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_2/mul:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_2/mul [Mul] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_2/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_2/Sigmoid:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_2/mul [Mul] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_2/FusedBatchNormV3:0 -> (-1, 64, 16, 16)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_2/Sigmoid:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_2/mul for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_2/mul [04/18/2022-02:33:57] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_2/mul:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_2/mul:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_2/mul [Mul] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_2/mul:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_3/FusedBatchNormV3 [BatchNormalization] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd_3:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_3/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_3/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_3/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_3/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_3/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd_3:0 -> (-1, 64, 8, 8)[FLOAT]], 
[StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_3/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_3/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_3/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_3/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_3/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_3/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_3/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_3/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_3/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_3/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_3/FusedBatchNormV3 [BatchNormalization] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_3:0 
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_3/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_3/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_3/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_3/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_3/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_3:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_3/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_3/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_3/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_3/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_3/FusedBatchNormV3 for ONNX node: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_3/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_3/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_3/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_3/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_3/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/stack_Unsqueeze__1425 [Unsqueeze] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/input_3_up_lvl_6/downsample_max_x2/MaxPool__1423:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/stack_Unsqueeze__1425 [Unsqueeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/input_3_up_lvl_6/downsample_max_x2/MaxPool__1423:0 -> (-1, 4, 4, 64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Original shape: (_, 4, 4, 64), unsqueezing to: (_, _, _, _, _) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/stack_Unsqueeze__1425 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/stack_Unsqueeze__1425 [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. 
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,4,4,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. 
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,4,4,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. 
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. 
[04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/stack_Unsqueeze__1425:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/stack_Unsqueeze__1425:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/stack_Unsqueeze__1425 [Unsqueeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/stack_Unsqueeze__1425:0 -> (-1, 4, 4, 64, 1)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_2/depthwise [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_2/mul:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_3/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_2/depthwise [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_2/mul:0 -> (-1, 64, 16, 16)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_3/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 16, 16) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_2/depthwise for ONNX node: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_2/depthwise [04/18/2022-02:33:57] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 16, 16) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_2/depthwise:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_2/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_2/depthwise [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_2/depthwise:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_2/depthwise [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_2/mul:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_1/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_2/depthwise [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_2/mul:0 -> (-1, 64, 16, 16)[FLOAT]], 
[StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_1/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 16, 16) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_2/depthwise for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_2/depthwise [04/18/2022-02:33:57] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 16, 16) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_2/depthwise:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_2/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_2/depthwise [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_2/depthwise:0 -> (-1, 64, 16, 16)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_3/Sigmoid [Sigmoid] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_3/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_3/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_3/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_3/Sigmoid for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_3/Sigmoid [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_3/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_3/Sigmoid:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_3/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_3/Sigmoid:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_3/Sigmoid [Sigmoid] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_3/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_3/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_3/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_3/Sigmoid for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_3/Sigmoid [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_3/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_3/Sigmoid:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_3/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_3/Sigmoid:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/stack_Concat__1426 [Concat] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/stack_Unsqueeze__1424:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/stack_Unsqueeze__1425:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/stack_Concat__1426 [Concat] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/stack_Unsqueeze__1424:0 -> (-1, 4, 4, 64, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/stack_Unsqueeze__1425:0 -> (-1, 4, 4, 64, 1)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/stack_Concat__1426 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/stack_Concat__1426 [04/18/2022-02:33:57] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/stack_Concat__1426:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/stack_Concat__1426:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/stack_Concat__1426 [Concat] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/stack_Concat__1426:0 -> (-1, 4, 4, 64, 2)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_2 [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_2/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_3/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_2 [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_2/depthwise:0 -> (-1, 64, 16, 16)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_3/ReadVariableOp_1:0 -> (810, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd/ReadVariableOp:0 -> (810)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 16, 16) 
[04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_2 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_2 [04/18/2022-02:33:57] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 810 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 810, 16, 16) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_2:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_2:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_2 [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_2:0 -> (-1, 810, 16, 16)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_2 [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_2/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_4/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_3/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_2 [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_2/depthwise:0 -> (-1, 64, 16, 16)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_4/ReadVariableOp_1:0 -> (36, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_3/ReadVariableOp:0 -> (36)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 16, 16) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_2 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_2 [04/18/2022-02:33:57] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 36 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 36, 16, 16) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_2:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_2:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_2 [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_2:0 -> (-1, 36, 16, 16)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_3/mul [Mul] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_3/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_3/Sigmoid:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_3/mul [Mul] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_3/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_3/Sigmoid:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_3/mul for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_3/mul [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_3/mul:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_3/mul:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_3/mul [Mul] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_3/mul:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_3/mul [Mul] [04/18/2022-02:33:57] [V] [TRT] Searching for 
input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_3/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_3/Sigmoid:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_3/mul [Mul] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_3/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_3/Sigmoid:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_3/mul for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_3/mul [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_3/mul:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_3/mul:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_3/mul [Mul] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_3/mul:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/MatMul [MatMul] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/stack_Concat__1426:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/truediv:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/MatMul [MatMul] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/stack_Concat__1426:0 -> (-1, 4, 4, 64, 2)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/truediv:0 -> (2, 1)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/truediv:0 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/truediv:0 [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/MatMul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/MatMul [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,4,4,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. 
[04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/MatMul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/MatMul:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/MatMul [MatMul] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/MatMul:0 -> (-1, 4, 4, 64, 1)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_2__1339 [Transpose] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_2:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_2__1339 [Transpose] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_2:0 -> (-1, 810, 16, 16)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_2__1339 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_2__1339 [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_2__1339:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_2__1339:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_2__1339 [Transpose] 
outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_2__1339:0 -> (-1, 16, 16, 810)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_2__1362 [Transpose] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_2:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_2__1362 [Transpose] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_2:0 -> (-1, 36, 16, 16)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_2__1362 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_2__1362 [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_2__1362:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_2__1362:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_2__1362 [Transpose] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_2__1362:0 -> (-1, 16, 16, 36)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d_3/depthwise [Conv] 
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_3/mul:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d_1/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d_3/depthwise [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_3/mul:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d_1/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d_3/depthwise for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d_3/depthwise [04/18/2022-02:33:57] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 8, 8) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d_3/depthwise:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d_3/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d_3/depthwise [Conv] outputs: 
[StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d_3/depthwise:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d_3/depthwise [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_3/mul:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d_3/depthwise [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_3/mul:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d_3/depthwise for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d_3/depthwise [04/18/2022-02:33:57] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 8, 8) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d_3/depthwise:0 for ONNX tensor: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d_3/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d_3/depthwise [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d_3/depthwise:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/Squeeze [Squeeze] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/MatMul:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/Squeeze [Squeeze] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/MatMul:0 -> (-1, 4, 4, 64, 1)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Original shape: (_, 4, 4, 64, 1), squeezing to: (_, _, _, _) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/Squeeze for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/Squeeze [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. 
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,4,4,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. 
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,4,4,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. 
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE]. [04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,4,4,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE]. 
[04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/Squeeze:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/Squeeze:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/Squeeze [Squeeze] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/Squeeze:0 -> (-1, 4, 4, 64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_2 [Reshape] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_2__1339:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_2_shape [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_2 [Reshape] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_2__1339:0 -> (-1, 16, 16, 810)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_2_shape -> (3)[INT32]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_2 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_2 [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_2:0 for ONNX tensor: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_2:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_2 [Reshape] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_2:0 -> (-1, 2304, 90)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_2 [Reshape] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_2__1362:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_2_shape [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_2 [Reshape] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_2__1362:0 -> (-1, 16, 16, 36)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_2_shape -> (3)[INT32]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_2 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_2 [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_2:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_2:0 [04/18/2022-02:33:57] [V] [TRT] 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_2 [Reshape] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_2:0 -> (-1, 2304, 4)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_3 [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d_3/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_2/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_3 [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d_3/depthwise:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d/ReadVariableOp_1:0 -> (64, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_2/ReadVariableOp:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_3 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_3 [04/18/2022-02:33:57] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), 
dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 8, 8) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_3:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_3 [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_3:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_3 [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d_3/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_3/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_3 [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d_3/depthwise:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d/ReadVariableOp_1:0 -> (64, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_3/ReadVariableOp:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 
64, 8, 8) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_3 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_3 [04/18/2022-02:33:57] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 8, 8) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_3:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_3 [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_3:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/activation/Sigmoid [Sigmoid] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/Squeeze:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/activation/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/Squeeze:0 -> (-1, 4, 4, 64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/activation/Sigmoid for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/activation/Sigmoid [04/18/2022-02:33:57] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/activation/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/activation/Sigmoid:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/activation/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/activation/Sigmoid:0 -> (-1, 4, 4, 64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_3/FusedBatchNormV3 [BatchNormalization] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_3:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_3/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_3/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_3/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_3/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_3/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_3:0 -> (-1, 64, 8, 8)[FLOAT]], 
[StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_3/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_3/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_3/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_3/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_3/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_3/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_3/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_3/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_3/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_3/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_3/FusedBatchNormV3 [BatchNormalization] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_3:0 
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_3/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_3/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_3/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_3/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_3/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_3:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_3/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_3/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_3/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_3/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_3/FusedBatchNormV3 for ONNX node: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_3/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_3/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_3/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_3/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_3/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/activation/mul [Mul] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/Squeeze:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/activation/Sigmoid:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/activation/mul [Mul] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/Squeeze:0 -> (-1, 4, 4, 64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/activation/Sigmoid:0 -> (-1, 4, 4, 64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/activation/mul for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/activation/mul [04/18/2022-02:33:57] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/activation/mul:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/activation/mul:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/activation/mul [Mul] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/activation/mul:0 -> (-1, 4, 4, 64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_3/Sigmoid [Sigmoid] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_3/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_3/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_3/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_3/Sigmoid for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_3/Sigmoid [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_3/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_3/Sigmoid:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_3/Sigmoid [Sigmoid] outputs: 
[StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_3/Sigmoid:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_3/Sigmoid [Sigmoid] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_3/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_3/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_3/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_3/Sigmoid for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_3/Sigmoid [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_3/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_3/Sigmoid:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_3/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_3/Sigmoid:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise__1427 [Transpose] [04/18/2022-02:33:57] [V] [TRT] Searching for input: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/activation/mul:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise__1427 [Transpose] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/activation/mul:0 -> (-1, 4, 4, 64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise__1427 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise__1427 [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise__1427:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise__1427:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise__1427 [Transpose] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise__1427:0 -> (-1, 64, 4, 4)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_3/mul [Mul] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_3/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_3/Sigmoid:0 [04/18/2022-02:33:57] [V] [TRT] 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_3/mul [Mul] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_3/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_3/Sigmoid:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_3/mul for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_3/mul [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_3/mul:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_3/mul:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_3/mul [Mul] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_3/mul:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_3/mul [Mul] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_3/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_3/Sigmoid:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_3/mul [Mul] inputs: 
[StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_3/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_3/Sigmoid:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_3/mul for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_3/mul [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_3/mul:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_3/mul:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_3/mul [Mul] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_3/mul:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise__1427:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise__1427:0 -> (-1, 64, 
4, 4)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/separable_conv/separable_conv2d/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 4, 4) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise [04/18/2022-02:33:57] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 4, 4) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise:0 -> (-1, 64, 4, 4)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_3/depthwise [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_3/mul:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_2/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_3/depthwise [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_3/mul:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_2/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_3/depthwise for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_3/depthwise [04/18/2022-02:33:57] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 8, 8) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_3/depthwise:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_3/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_3/depthwise [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_3/depthwise:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_3/depthwise [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_3/mul:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_4/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_3/depthwise [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_3/mul:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_4/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_3/depthwise for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_3/depthwise [04/18/2022-02:33:57] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 8, 8) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_3/depthwise:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_3/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_3/depthwise [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_3/depthwise:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:57] 
[V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/separable_conv/BiasAdd [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/separable_conv/BiasAdd [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/separable_conv/separable_conv2d/depthwise:0 -> (-1, 64, 4, 4)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/separable_conv/separable_conv2d/ReadVariableOp_1:0 -> (64, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/separable_conv/BiasAdd/ReadVariableOp:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 4, 4) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/separable_conv/BiasAdd for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/separable_conv/BiasAdd [04/18/2022-02:33:57] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 4, 4) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/separable_conv/BiasAdd:0 for ONNX tensor: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/separable_conv/BiasAdd:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/separable_conv/BiasAdd [Conv] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/separable_conv/BiasAdd:0 -> (-1, 64, 4, 4)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd_3 [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_3/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_4/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd_3 [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_3/depthwise:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_4/ReadVariableOp_1:0 -> (64, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd/ReadVariableOp:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd_3 for ONNX node: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd_3 [04/18/2022-02:33:57] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 8, 8) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd_3:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd_3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd_3 [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd_3:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_3 [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_3/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_1/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_3/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_3 [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_3/depthwise:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_1/ReadVariableOp_1:0 -> (64, 64, 
1, 1)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_3/ReadVariableOp:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_3 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_3 [04/18/2022-02:33:57] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 8, 8) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_3:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_3 [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_3:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/separable_conv/BiasAdd:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/batchnorm/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/batchnorm/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/separable_conv/BiasAdd:0 -> (-1, 64, 4, 4)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/batchnorm/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/batchnorm/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/batchnorm/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/batchnorm/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/batchnorm/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/batchnorm/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/batchnorm/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 4, 4)[FLOAT]], 
[04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_3/FusedBatchNormV3 [BatchNormalization] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd_3:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_3/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_3/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_3/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_3/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_3/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd_3:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_3/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_3/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_3/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], 
[StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_3/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_3/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_3/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_3/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_3/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_3/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_3/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_3/FusedBatchNormV3 [BatchNormalization] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_3:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_3/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_3/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_3/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_3/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_3/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_3:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_3/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_3/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_3/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_3/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_3/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_3/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_3/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_3/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_3/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_3/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_4/depthwise [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_4/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_4/depthwise [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 4, 4)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_4/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 4, 4) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_4/depthwise for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_4/depthwise [04/18/2022-02:33:57] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 4, 4) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_4/depthwise:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_4/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_4/depthwise [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_4/depthwise:0 -> (-1, 64, 4, 4)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_4/depthwise [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/batchnorm/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_3/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_4/depthwise [Conv] inputs: [StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/batchnorm/FusedBatchNormV3:0 -> (-1, 64, 4, 4)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_3/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 4, 4) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_4/depthwise for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_4/depthwise [04/18/2022-02:33:57] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), 
prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 4, 4) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_4/depthwise:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_4/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_4/depthwise [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_4/depthwise:0 -> (-1, 64, 4, 4)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_3/Sigmoid [Sigmoid] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_3/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_3/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_3/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_3/Sigmoid for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_3/Sigmoid [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_3/Sigmoid:0 for ONNX tensor: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_3/Sigmoid:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_3/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_3/Sigmoid:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_3/Sigmoid [Sigmoid] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_3/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_3/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_3/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_3/Sigmoid for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_3/Sigmoid [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_3/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_3/Sigmoid:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_3/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_3/Sigmoid:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:57] 
[V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd_4 [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_4/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_4/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd_4 [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_4/depthwise:0 -> (-1, 64, 4, 4)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/separable_conv2d_4/ReadVariableOp_1:0 -> (64, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd/ReadVariableOp:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 4, 4) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd_4 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd_4 [04/18/2022-02:33:57] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 4, 4) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd_4:0 for ONNX tensor: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd_4:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd_4 [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd_4:0 -> (-1, 64, 4, 4)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_4 [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_4/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_3/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_1/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_4 [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_4/depthwise:0 -> (-1, 64, 4, 4)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/separable_conv2d_3/ReadVariableOp_1:0 -> (64, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_1/ReadVariableOp:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 4, 4) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_4 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_4 
[04/18/2022-02:33:57] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 4, 4) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_4:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_4:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_4 [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_4:0 -> (-1, 64, 4, 4)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_3/mul [Mul] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_3/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_3/Sigmoid:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_3/mul [Mul] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_3/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_3/Sigmoid:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_3/mul for ONNX node: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_3/mul [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_3/mul:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_3/mul:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_3/mul [Mul] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_3/mul:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_3/mul [Mul] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_3/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_3/Sigmoid:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_3/mul [Mul] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_3/FusedBatchNormV3:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_3/Sigmoid:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_3/mul for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_3/mul [04/18/2022-02:33:57] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_3/mul:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_3/mul:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_3/mul [Mul] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_3/mul:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_4/FusedBatchNormV3 [BatchNormalization] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd_4:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_4/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_4/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_4/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_4/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_4/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd_4:0 -> (-1, 64, 4, 4)[FLOAT]], 
[StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_4/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_4/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_4/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_4/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_4/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_4/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_4/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_4/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_4/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_4/FusedBatchNormV3:0 -> (-1, 64, 4, 4)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_4/FusedBatchNormV3 [BatchNormalization] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_4:0 
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_4/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_4/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_4/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_4/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_4/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_4:0 -> (-1, 64, 4, 4)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_4/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_4/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_4/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_4/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_4/FusedBatchNormV3 for ONNX node: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_4/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_4/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_4/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_4/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_4/FusedBatchNormV3:0 -> (-1, 64, 4, 4)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_3/depthwise [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_3/mul:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_3/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_3/depthwise [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_3/mul:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_3/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [04/18/2022-02:33:57] [V] [TRT] Registering layer: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_3/depthwise for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_3/depthwise [04/18/2022-02:33:57] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 8, 8) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_3/depthwise:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_3/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_3/depthwise [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_3/depthwise:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_3/depthwise [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_3/mul:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_1/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_3/depthwise [Conv] inputs: 
[StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_3/mul:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_1/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_3/depthwise for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_3/depthwise [04/18/2022-02:33:57] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 8, 8) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_3/depthwise:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_3/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_3/depthwise [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_3/depthwise:0 -> (-1, 64, 8, 8)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_4/Sigmoid [Sigmoid] [04/18/2022-02:33:57] [V] [TRT] Searching for input: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_4/FusedBatchNormV3:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_4/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_4/FusedBatchNormV3:0 -> (-1, 64, 4, 4)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_4/Sigmoid for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_4/Sigmoid
[04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_4/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_4/Sigmoid:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_4/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_4/Sigmoid:0 -> (-1, 64, 4, 4)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_4/Sigmoid [Sigmoid]
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_4/FusedBatchNormV3:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_4/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_4/FusedBatchNormV3:0 -> (-1, 64, 4, 4)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_4/Sigmoid for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_4/Sigmoid
[04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_4/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_4/Sigmoid:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_4/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_4/Sigmoid:0 -> (-1, 64, 4, 4)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_3 [Conv]
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_3/depthwise:0
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_3/ReadVariableOp_1:0
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_3 [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_3/depthwise:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_3/ReadVariableOp_1:0 -> (810, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd/ReadVariableOp:0 -> (810)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8)
[04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_3 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_3
[04/18/2022-02:33:57] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 810
[04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 810, 8, 8)
[04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_3:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_3:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_3 [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_3:0 -> (-1, 810, 8, 8)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_3 [Conv]
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_3/depthwise:0
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_4/ReadVariableOp_1:0
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_3/ReadVariableOp:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_3 [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_3/depthwise:0 -> (-1, 64, 8, 8)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_4/ReadVariableOp_1:0 -> (36, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_3/ReadVariableOp:0 -> (36)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 8, 8)
[04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_3 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_3
[04/18/2022-02:33:57] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 36
[04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 36, 8, 8)
[04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_3:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_3:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_3 [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_3:0 -> (-1, 36, 8, 8)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_4/mul [Mul]
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_4/FusedBatchNormV3:0
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_4/Sigmoid:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_4/mul [Mul] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_4/FusedBatchNormV3:0 -> (-1, 64, 4, 4)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_4/Sigmoid:0 -> (-1, 64, 4, 4)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_4/mul for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_4/mul
[04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_4/mul:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_4/mul:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_4/mul [Mul] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_4/mul:0 -> (-1, 64, 4, 4)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_4/mul [Mul]
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_4/FusedBatchNormV3:0
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_4/Sigmoid:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_4/mul [Mul] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_4/FusedBatchNormV3:0 -> (-1, 64, 4, 4)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_4/Sigmoid:0 -> (-1, 64, 4, 4)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_4/mul for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_4/mul
[04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_4/mul:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_4/mul:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_4/mul [Mul] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_4/mul:0 -> (-1, 64, 4, 4)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_3__1397 [Transpose]
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_3:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_3__1397 [Transpose] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_3:0 -> (-1, 810, 8, 8)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_3__1397 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_3__1397
[04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_3__1397:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_3__1397:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_3__1397 [Transpose] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_3__1397:0 -> (-1, 8, 8, 810)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_3__1420 [Transpose]
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_3:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_3__1420 [Transpose] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_3:0 -> (-1, 36, 8, 8)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_3__1420 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_3__1420
[04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_3__1420:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_3__1420:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_3__1420 [Transpose] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_3__1420:0 -> (-1, 8, 8, 36)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d_4/depthwise [Conv]
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_4/mul:0
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d_1/ReadVariableOp:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d_4/depthwise [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_4/mul:0 -> (-1, 64, 4, 4)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d_1/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 4, 4)
[04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d_4/depthwise for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d_4/depthwise
[04/18/2022-02:33:57] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64
[04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 4, 4)
[04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d_4/depthwise:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d_4/depthwise:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d_4/depthwise [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d_4/depthwise:0 -> (-1, 64, 4, 4)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d_4/depthwise [Conv]
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_4/mul:0
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d/ReadVariableOp:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d_4/depthwise [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_4/mul:0 -> (-1, 64, 4, 4)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 4, 4)
[04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d_4/depthwise for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d_4/depthwise
[04/18/2022-02:33:57] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64
[04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 4, 4)
[04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d_4/depthwise:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d_4/depthwise:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d_4/depthwise [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d_4/depthwise:0 -> (-1, 64, 4, 4)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_3 [Reshape]
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_3__1397:0
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_3_shape
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_3 [Reshape] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_3__1397:0 -> (-1, 8, 8, 810)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_3_shape -> (3)[INT32]],
[04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_3 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_3
[04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_3:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_3:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_3 [Reshape] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_3:0 -> (-1, 576, 90)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_3 [Reshape]
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_3__1420:0
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_3_shape
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_3 [Reshape] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_3__1420:0 -> (-1, 8, 8, 36)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_3_shape -> (3)[INT32]],
[04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_3 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_3
[04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_3:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_3:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_3 [Reshape] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_3:0 -> (-1, 576, 4)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_4 [Conv]
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d_4/depthwise:0
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d/ReadVariableOp_1:0
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_2/ReadVariableOp:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_4 [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d_4/depthwise:0 -> (-1, 64, 4, 4)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/separable_conv2d/ReadVariableOp_1:0 -> (64, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_2/ReadVariableOp:0 -> (64)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 4, 4)
[04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_4 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_4
[04/18/2022-02:33:57] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64
[04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 4, 4)
[04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_4:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_4:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_4 [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_4:0 -> (-1, 64, 4, 4)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_4 [Conv]
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d_4/depthwise:0
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d/ReadVariableOp_1:0
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_3/ReadVariableOp:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_4 [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d_4/depthwise:0 -> (-1, 64, 4, 4)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/separable_conv2d/ReadVariableOp_1:0 -> (64, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_3/ReadVariableOp:0 -> (64)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 4, 4)
[04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_4 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_4
[04/18/2022-02:33:57] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64
[04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 4, 4)
[04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_4:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_4:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_4 [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_4:0 -> (-1, 64, 4, 4)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_4/FusedBatchNormV3 [BatchNormalization]
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_4:0
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_4/ReadVariableOp:0
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_4/ReadVariableOp_1:0
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_4/FusedBatchNormV3/ReadVariableOp:0
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_4/FusedBatchNormV3/ReadVariableOp_1:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_4/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_4:0 -> (-1, 64, 4, 4)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_4/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_4/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_4/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_4/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_4/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_4/FusedBatchNormV3
[04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_4/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_4/FusedBatchNormV3:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_4/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_4/FusedBatchNormV3:0 -> (-1, 64, 4, 4)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_4/FusedBatchNormV3 [BatchNormalization]
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_4:0
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_4/ReadVariableOp:0
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_4/ReadVariableOp_1:0
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_4/FusedBatchNormV3/ReadVariableOp:0
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_4/FusedBatchNormV3/ReadVariableOp_1:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_4/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_4:0 -> (-1, 64, 4, 4)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_4/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_4/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_4/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_4/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_4/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_4/FusedBatchNormV3
[04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_4/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_4/FusedBatchNormV3:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_4/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_4/FusedBatchNormV3:0 -> (-1, 64, 4, 4)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_4/Sigmoid [Sigmoid]
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_4/FusedBatchNormV3:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_4/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_4/FusedBatchNormV3:0 -> (-1, 64, 4, 4)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_4/Sigmoid for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_4/Sigmoid
[04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_4/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_4/Sigmoid:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_4/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_4/Sigmoid:0 -> (-1, 64, 4, 4)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_4/Sigmoid [Sigmoid]
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_4/FusedBatchNormV3:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_4/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_4/FusedBatchNormV3:0 -> (-1, 64, 4, 4)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_4/Sigmoid for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_4/Sigmoid
[04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_4/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_4/Sigmoid:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_4/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_4/Sigmoid:0 -> (-1, 64, 4, 4)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_4/mul [Mul]
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_4/FusedBatchNormV3:0
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_4/Sigmoid:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_4/mul [Mul] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_4/FusedBatchNormV3:0 -> (-1, 64, 4, 4)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_4/Sigmoid:0 -> (-1, 64, 4, 4)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_4/mul for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_4/mul
[04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_4/mul:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_4/mul:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_4/mul [Mul] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_4/mul:0 -> (-1, 64, 4, 4)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_4/mul [Mul]
[04/18/2022-02:33:57]
[V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_4/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_4/Sigmoid:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_4/mul [Mul] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_4/FusedBatchNormV3:0 -> (-1, 64, 4, 4)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_4/Sigmoid:0 -> (-1, 64, 4, 4)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_4/mul for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_4/mul [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_4/mul:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_4/mul:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_4/mul [Mul] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_4/mul:0 -> (-1, 64, 4, 4)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_4/depthwise [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_4/mul:0 [04/18/2022-02:33:57] [V] 
[TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_2/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_4/depthwise [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_4/mul:0 -> (-1, 64, 4, 4)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_2/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 4, 4) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_4/depthwise for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_4/depthwise [04/18/2022-02:33:57] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 4, 4) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_4/depthwise:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_4/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_4/depthwise [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_4/depthwise:0 -> (-1, 64, 4, 4)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_4/depthwise [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_4/mul:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_4/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_4/depthwise [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_4/mul:0 -> (-1, 64, 4, 4)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_4/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 4, 4) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_4/depthwise for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_4/depthwise [04/18/2022-02:33:57] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 4, 4) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_4/depthwise:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_4/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_4/depthwise [Conv] 
outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_4/depthwise:0 -> (-1, 64, 4, 4)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd_4 [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_4/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_4/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd_4 [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_4/depthwise:0 -> (-1, 64, 4, 4)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/separable_conv2d_4/ReadVariableOp_1:0 -> (64, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd/ReadVariableOp:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 4, 4) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd_4 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd_4 [04/18/2022-02:33:57] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 
4, 4) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd_4:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd_4:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd_4 [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd_4:0 -> (-1, 64, 4, 4)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_4 [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_4/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_1/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_3/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_4 [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_4/depthwise:0 -> (-1, 64, 4, 4)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/separable_conv2d_1/ReadVariableOp_1:0 -> (64, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_3/ReadVariableOp:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 4, 4) [04/18/2022-02:33:57] [V] [TRT] Registering layer: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_4 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_4 [04/18/2022-02:33:57] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 4, 4) [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_4:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_4:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_4 [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_4:0 -> (-1, 64, 4, 4)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_4/FusedBatchNormV3 [BatchNormalization] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd_4:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_4/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_4/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_4/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_4/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_4/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd_4:0 -> (-1, 64, 4, 4)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_4/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_4/ReadVariableOp_1:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_4/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_4/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_4/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_4/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_4/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_4/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_4/FusedBatchNormV3 [BatchNormalization] outputs: 
[StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_4/FusedBatchNormV3:0 -> (-1, 64, 4, 4)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_4/FusedBatchNormV3 [BatchNormalization] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_4:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_4/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_4/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_4/FusedBatchNormV3/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_4/FusedBatchNormV3/ReadVariableOp_1:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_4/FusedBatchNormV3 [BatchNormalization] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_4:0 -> (-1, 64, 4, 4)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_4/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_4/ReadVariableOp_1:0 -> (64)[FLOAT]], 
[StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_4/FusedBatchNormV3/ReadVariableOp:0 -> (64)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_4/FusedBatchNormV3/ReadVariableOp_1:0 -> (64)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_4/FusedBatchNormV3 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_4/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_4/FusedBatchNormV3:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_4/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_4/FusedBatchNormV3 [BatchNormalization] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_4/FusedBatchNormV3:0 -> (-1, 64, 4, 4)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_4/Sigmoid [Sigmoid] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_4/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_4/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_4/FusedBatchNormV3:0 -> (-1, 64, 4, 4)[FLOAT]], 
[04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_4/Sigmoid for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_4/Sigmoid [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_4/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_4/Sigmoid:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_4/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_4/Sigmoid:0 -> (-1, 64, 4, 4)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_4/Sigmoid [Sigmoid] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_4/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_4/Sigmoid [Sigmoid] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_4/FusedBatchNormV3:0 -> (-1, 64, 4, 4)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_4/Sigmoid for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_4/Sigmoid [04/18/2022-02:33:57] [V] [TRT] Registering tensor: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_4/Sigmoid:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_4/Sigmoid:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_4/Sigmoid [Sigmoid] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_4/Sigmoid:0 -> (-1, 64, 4, 4)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_4/mul [Mul] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_4/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_4/Sigmoid:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_4/mul [Mul] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_4/FusedBatchNormV3:0 -> (-1, 64, 4, 4)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_4/Sigmoid:0 -> (-1, 64, 4, 4)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_4/mul for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_4/mul [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_4/mul:0 for ONNX tensor: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_4/mul:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_4/mul [Mul] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_4/mul:0 -> (-1, 64, 4, 4)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_4/mul [Mul] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_4/FusedBatchNormV3:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_4/Sigmoid:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_4/mul [Mul] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_4/FusedBatchNormV3:0 -> (-1, 64, 4, 4)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_4/Sigmoid:0 -> (-1, 64, 4, 4)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_4/mul for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_4/mul [04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_4/mul:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_4/mul:0 [04/18/2022-02:33:57] [V] [TRT] 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_4/mul [Mul] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_4/mul:0 -> (-1, 64, 4, 4)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_4/depthwise [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_4/mul:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_3/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_4/depthwise [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_4/mul:0 -> (-1, 64, 4, 4)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_3/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 4, 4) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_4/depthwise for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_4/depthwise [04/18/2022-02:33:57] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64 [04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 4, 4) [04/18/2022-02:33:57] [V] 
[TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_4/depthwise:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_4/depthwise:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_4/depthwise [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_4/depthwise:0 -> (-1, 64, 4, 4)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_4/depthwise [Conv] [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_4/mul:0 [04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_1/ReadVariableOp:0 [04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_4/depthwise [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_4/mul:0 -> (-1, 64, 4, 4)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_1/ReadVariableOp:0 -> (64, 1, 3, 3)[FLOAT]], [04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 4, 4) [04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_4/depthwise for 
ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_4/depthwise
[04/18/2022-02:33:57] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64
[04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 64, 4, 4)
[04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_4/depthwise:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_4/depthwise:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_4/depthwise [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_4/depthwise:0 -> (-1, 64, 4, 4)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_4 [Conv]
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_4/depthwise:0
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_3/ReadVariableOp_1:0
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd/ReadVariableOp:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_4 [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_4/depthwise:0 -> (-1, 64, 4, 4)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/separable_conv2d_3/ReadVariableOp_1:0 -> (810, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd/ReadVariableOp:0 -> (810)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 4, 4)
[04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_4 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_4
[04/18/2022-02:33:57] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 810
[04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 810, 4, 4)
[04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_4:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_4:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_4 [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_4:0 -> (-1, 810, 4, 4)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_4 [Conv]
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_4/depthwise:0
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_4/ReadVariableOp_1:0
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_3/ReadVariableOp:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_4 [Conv] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_4/depthwise:0 -> (-1, 64, 4, 4)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/separable_conv2d_4/ReadVariableOp_1:0 -> (36, 64, 1, 1)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_3/ReadVariableOp:0 -> (36)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Convolution input dimensions: (-1, 64, 4, 4)
[04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_4 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_4
[04/18/2022-02:33:57] [V] [TRT] Using kernel: (1, 1), strides: (1, 1), prepadding: (0, 0), postpadding: (0, 0), dilations: (1, 1), numOutputs: 36
[04/18/2022-02:33:57] [V] [TRT] Convolution output dimensions: (-1, 36, 4, 4)
[04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_4:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_4:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_4 [Conv] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_4:0 -> (-1, 36, 4, 4)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_4__1454 [Transpose]
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_4:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_4__1454 [Transpose] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_4:0 -> (-1, 810, 4, 4)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_4__1454 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_4__1454
[04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_4__1454:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_4__1454:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_4__1454 [Transpose] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_4__1454:0 -> (-1, 4, 4, 810)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_4__1749 [Transpose]
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_4:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_4__1749 [Transpose] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_4:0 -> (-1, 36, 4, 4)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_4__1749 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_4__1749
[04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_4__1749:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_4__1749:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_4__1749 [Transpose] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_4__1749:0 -> (-1, 4, 4, 36)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_4 [Reshape]
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_4__1454:0
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_4_shape
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_4 [Reshape] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_4__1454:0 -> (-1, 4, 4, 810)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_4_shape -> (3)[INT32]],
[04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_4 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_4
[04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_4:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_4:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_4 [Reshape] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_4:0 -> (-1, 144, 90)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_4 [Reshape]
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_4__1749:0
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_4_shape
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_4 [Reshape] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_4__1749:0 -> (-1, 4, 4, 36)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_4_shape -> (3)[INT32]],
[04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_4 for ONNX node: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_4
[04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_4:0 for ONNX tensor: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_4:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_4 [Reshape] outputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_4:0 -> (-1, 144, 4)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/concat_1 [Concat]
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape:0
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_1:0
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_2:0
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_3:0
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_4:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/concat_1 [Concat] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape:0 -> (-1, 36864, 90)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_1:0 -> (-1, 9216, 90)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_2:0 -> (-1, 2304, 90)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_3:0 -> (-1, 576, 90)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_4:0 -> (-1, 144, 90)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/concat_1 for ONNX node: StatefulPartitionedCall/concat_1
[04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/concat_1:0 for ONNX tensor: StatefulPartitionedCall/concat_1:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/concat_1 [Concat] outputs: [StatefulPartitionedCall/concat_1:0 -> (-1, 49104, 90)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Parsing node: StatefulPartitionedCall/concat [Concat]
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape:0
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_1:0
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_2:0
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_3:0
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_4:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/concat [Concat] inputs: [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape:0 -> (-1, 36864, 4)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_1:0 -> (-1, 9216, 4)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_2:0 -> (-1, 2304, 4)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_3:0 -> (-1, 576, 4)[FLOAT]], [StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_4:0 -> (-1, 144, 4)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Registering layer: StatefulPartitionedCall/concat for ONNX node: StatefulPartitionedCall/concat
[04/18/2022-02:33:57] [V] [TRT] Registering tensor: StatefulPartitionedCall/concat:0 for ONNX tensor: StatefulPartitionedCall/concat:0
[04/18/2022-02:33:57] [V] [TRT] StatefulPartitionedCall/concat [Concat] outputs: [StatefulPartitionedCall/concat:0 -> (-1, 49104, 4)[FLOAT]],
[04/18/2022-02:33:57] [V] [TRT] Parsing node: nms/non_maximum_suppression [EfficientNMS_TRT]
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/concat:0
[04/18/2022-02:33:57] [V] [TRT] Searching for input: StatefulPartitionedCall/concat_1:0
[04/18/2022-02:33:57] [V] [TRT] Searching for input: nms/anchors:0
[04/18/2022-02:33:57] [V] [TRT] nms/non_maximum_suppression [EfficientNMS_TRT] inputs: [StatefulPartitionedCall/concat:0 -> (-1, 49104, 4)[FLOAT]], [StatefulPartitionedCall/concat_1:0 -> (-1, 49104, 90)[FLOAT]], [nms/anchors:0 -> (1, 49104, 4)[FLOAT]],
[04/18/2022-02:33:57] [I] [TRT] No importer registered for op: EfficientNMS_TRT. Attempting to import as plugin.
[04/18/2022-02:33:57] [I] [TRT] Searching for plugin: EfficientNMS_TRT, plugin_version: 1, plugin_namespace:
[04/18/2022-02:33:57] [V] [TRT] Registering layer: nms/anchors:0 for ONNX node: nms/anchors:0
[04/18/2022-02:33:57] [I] [TRT] Successfully created plugin: EfficientNMS_TRT
[04/18/2022-02:33:57] [V] [TRT] Registering layer: nms/non_maximum_suppression for ONNX node: nms/non_maximum_suppression
[04/18/2022-02:33:57] [V] [TRT] Registering tensor: num_detections_0 for ONNX tensor: num_detections
[04/18/2022-02:33:57] [V] [TRT] Registering tensor: detection_boxes_1 for ONNX tensor: detection_boxes
[04/18/2022-02:33:57] [V] [TRT] Registering tensor: detection_scores_2 for ONNX tensor: detection_scores
[04/18/2022-02:33:57] [V] [TRT] Registering tensor: detection_classes_3 for ONNX tensor: detection_classes
[04/18/2022-02:33:57] [V] [TRT] nms/non_maximum_suppression [EfficientNMS_TRT] outputs: [num_detections -> (-1, 1)[INT32]], [detection_boxes -> (-1, 100, 4)[FLOAT]], [detection_scores -> (-1, 100)[FLOAT]], [detection_classes -> (-1, 100)[INT32]],
[04/18/2022-02:33:57] [V] [TRT] Marking num_detections_0 as output: num_detections
[04/18/2022-02:33:57] [V] [TRT] Marking detection_boxes_1 as output: detection_boxes
[04/18/2022-02:33:57] [V] [TRT] Marking detection_scores_2 as output: detection_scores
[04/18/2022-02:33:57] [V] [TRT] Marking detection_classes_3 as output: detection_classes
[04/18/2022-02:33:57] [I] Finish parsing network model
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE].
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE].
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE].
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,4,4,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE].
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE].
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE].
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,4,4,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,64,64,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,32,32,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE].
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,16,16,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE].
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,8,8,64,3][NONE] dims(input1)=[1,1,1,3,1][NONE].
[04/18/2022-02:33:57] [I] [TRT] StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/MatMul: broadcasting input1 to make tensors conform, dims(input0)=[-1,4,4,64,2][NONE] dims(input1)=[1,1,1,2,1][NONE].
[04/18/2022-02:33:57] [V] [TRT] Applying generic optimizations to the graph for inference.
[04/18/2022-02:33:57] [V] [TRT] Original: 873 layers
[04/18/2022-02:33:57] [V] [TRT] After dead-layer removal: 873 layers
[04/18/2022-02:33:57] [V] [TRT] Running: ConstShuffleFusion
[04/18/2022-02:33:57] [V] [TRT] ConstShuffleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/truediv:0 with (Unnamed Layer* 327) [Shuffle]
[04/18/2022-02:33:57] [V] [TRT] Running: ConstShuffleFusion
[04/18/2022-02:33:57] [V] [TRT] ConstShuffleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/truediv:0 with (Unnamed Layer* 369) [Shuffle]
[04/18/2022-02:33:57] [V] [TRT] Running: ConstShuffleFusion
[04/18/2022-02:33:57] [V] [TRT] ConstShuffleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/truediv:0 with (Unnamed Layer* 411) [Shuffle]
[04/18/2022-02:33:57] [V] [TRT] Running: ConstShuffleFusion
[04/18/2022-02:33:57] [V] [TRT] ConstShuffleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/truediv:0 with (Unnamed Layer* 453) [Shuffle]
[04/18/2022-02:33:57] [V] [TRT] Running: ConstShuffleFusion
[04/18/2022-02:33:57] [V] [TRT] ConstShuffleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/truediv:0 with (Unnamed Layer* 482) [Shuffle]
[04/18/2022-02:33:57] [V] [TRT] Running: ConstShuffleFusion
[04/18/2022-02:33:57] [V] [TRT] ConstShuffleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/truediv:0 with (Unnamed Layer* 511) [Shuffle]
[04/18/2022-02:33:57] [V] [TRT] Running: ConstShuffleFusion
[04/18/2022-02:33:57] [V] [TRT] ConstShuffleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/truediv:0 with (Unnamed Layer* 540) [Shuffle]
[04/18/2022-02:33:57] [V] [TRT] Running: ConstShuffleFusion
[04/18/2022-02:33:57] [V] [TRT] ConstShuffleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/truediv:0 with (Unnamed Layer* 569) [Shuffle]
[04/18/2022-02:33:57] [V] [TRT] Running: ConstShuffleFusion
[04/18/2022-02:33:57] [V] [TRT] ConstShuffleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/truediv:0 with (Unnamed Layer* 611) [Shuffle]
[04/18/2022-02:33:57] [V] [TRT] Running: ConstShuffleFusion
[04/18/2022-02:33:57] [V] [TRT] ConstShuffleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/truediv:0 with (Unnamed Layer* 653) [Shuffle]
[04/18/2022-02:33:57] [V] [TRT] Running: ConstShuffleFusion
[04/18/2022-02:33:57] [V] [TRT] ConstShuffleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/truediv:0 with (Unnamed Layer* 695) [Shuffle]
[04/18/2022-02:33:57] [V] [TRT] Running: ConstShuffleFusion
[04/18/2022-02:33:57] [V] [TRT] ConstShuffleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/truediv:0 with (Unnamed Layer* 737) [Shuffle]
[04/18/2022-02:33:57] [V] [TRT] Running: ConstShuffleFusion
[04/18/2022-02:33:57] [V] [TRT] ConstShuffleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/truediv:0 with (Unnamed Layer* 766) [Shuffle]
[04/18/2022-02:33:57] [V] [TRT] Running: ConstShuffleFusion
[04/18/2022-02:33:57] [V] [TRT] ConstShuffleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/truediv:0 with (Unnamed Layer* 795) [Shuffle]
[04/18/2022-02:33:57] [V] [TRT] Running: ConstShuffleFusion
[04/18/2022-02:33:57] [V] [TRT] ConstShuffleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/truediv:0 with (Unnamed Layer* 824) [Shuffle]
[04/18/2022-02:33:57] [V] [TRT] Running: ConstShuffleFusion
[04/18/2022-02:33:57] [V] [TRT] ConstShuffleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/truediv:0 with (Unnamed Layer* 853) [Shuffle]
[04/18/2022-02:33:57] [V] [TRT] Running: ConstShuffleFusion
[04/18/2022-02:33:57] [V] [TRT] ConstShuffleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/truediv:0 with (Unnamed Layer* 895) [Shuffle]
[04/18/2022-02:33:57] [V] [TRT] Running: ConstShuffleFusion
[04/18/2022-02:33:57] [V] [TRT] ConstShuffleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/truediv:0 with (Unnamed Layer* 937) [Shuffle]
[04/18/2022-02:33:57] [V] [TRT] Running: ConstShuffleFusion
[04/18/2022-02:33:57] [V] [TRT] ConstShuffleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/truediv:0 with (Unnamed Layer* 979) [Shuffle]
[04/18/2022-02:33:57] [V] [TRT] Running: ConstShuffleFusion
[04/18/2022-02:33:57] [V] [TRT] ConstShuffleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/truediv:0 with (Unnamed Layer* 1021) [Shuffle]
[04/18/2022-02:33:57] [V] [TRT] Running: ConstShuffleFusion
[04/18/2022-02:33:57] [V] [TRT] ConstShuffleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/truediv:0 with (Unnamed Layer* 1053) [Shuffle]
[04/18/2022-02:33:57] [V] [TRT] Running: ConstShuffleFusion
[04/18/2022-02:33:57] [V] [TRT] ConstShuffleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/truediv:0 with (Unnamed Layer* 1109) [Shuffle]
[04/18/2022-02:33:57] [V] [TRT] Running: ConstShuffleFusion
[04/18/2022-02:33:57] [V] [TRT] ConstShuffleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/truediv:0 with (Unnamed Layer* 1169) [Shuffle]
[04/18/2022-02:33:57] [V] [TRT] Running: ConstShuffleFusion
[04/18/2022-02:33:57] [V] [TRT] ConstShuffleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/truediv:0 with (Unnamed Layer* 1229) [Shuffle]
[04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion
[04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3
[04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion
[04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_conv2d/Conv2D with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_bn/FusedBatchNormV3
[04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion
[04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_conv2d/Conv2D with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_bn/FusedBatchNormV3
[04/18/2022-02:33:57] [V] [TRT] Running: ShuffleShuffleFusion
[04/18/2022-02:33:57] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/batchnorm/FusedBatchNormV3__968 with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/stack_Unsqueeze__971
[04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion
[04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_conv2d/Conv2D with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_bn/FusedBatchNormV3
[04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion
[04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_conv2d/Conv2D with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_bn/FusedBatchNormV3
[04/18/2022-02:33:57] [V] [TRT] Running: ShuffleShuffleFusion
[04/18/2022-02:33:57] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/batchnorm/FusedBatchNormV3__928 with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Unsqueeze__982
[04/18/2022-02:33:57] [V] [TRT] Running: ShuffleShuffleFusion
[04/18/2022-02:33:57] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_0_up_lvl_4/1x1_pre_sample/batchnorm/FusedBatchNormV3__940 with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/stack_Unsqueeze__943
[04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion
[04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_conv2d/Conv2D with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_bn/FusedBatchNormV3
[04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion
[04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3
[04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion
[04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3
[04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion
[04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3
[04/18/2022-02:33:57] [V] [TRT] Running: ShuffleShuffleFusion
[04/18/2022-02:33:57] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3__841 with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Unsqueeze__994
[04/18/2022-02:33:57] [V] [TRT] Running: ShuffleShuffleFusion
[04/18/2022-02:33:57] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3__853 with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/stack_Unsqueeze__903
[04/18/2022-02:33:57] [V] [TRT] Running: ShuffleShuffleFusion
[04/18/2022-02:33:57] [V] [TRT] ShuffleShuffleFusion: Fusing Transpose__5436 with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1006
[04/18/2022-02:33:57] [V] [TRT] Running: ShuffleShuffleFusion
[04/18/2022-02:33:57] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/stack_Unsqueeze__888
[04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion
[04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/separable_conv/BiasAdd with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/batchnorm/FusedBatchNormV3
[04/18/2022-02:33:57] [V] [TRT] Running: ShuffleShuffleFusion
[04/18/2022-02:33:57] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/stack_Unsqueeze__904
[04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion
[04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/separable_conv/BiasAdd with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/batchnorm/FusedBatchNormV3
[04/18/2022-02:33:57] [V] [TRT] Running: ShuffleShuffleFusion
[04/18/2022-02:33:57] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/stack_Unsqueeze__944
[04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion
[04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/separable_conv/BiasAdd with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/batchnorm/FusedBatchNormV3
[04/18/2022-02:33:57] [V] [TRT] Running: ShuffleShuffleFusion
[04/18/2022-02:33:57] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/stack_Unsqueeze__972
[04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion
[04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/separable_conv/BiasAdd with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3
[04/18/2022-02:33:57] [V] [TRT] Running: ShuffleShuffleFusion
[04/18/2022-02:33:57] [V] [TRT] ShuffleShuffleFusion: Fusing Transpose__5460 with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/stack_Unsqueeze__1082
[04/18/2022-02:33:57] [V] [TRT] Running: ShuffleShuffleFusion
[04/18/2022-02:33:57] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_1_dn_lvl_3/downsample_max_x2/MaxPool__981 with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Unsqueeze__984
[04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion
[04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/separable_conv/BiasAdd with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3
[04/18/2022-02:33:57] [V] [TRT] Running: ShuffleShuffleFusion
[04/18/2022-02:33:57] [V] [TRT] ShuffleShuffleFusion: Fusing Transpose__5463 with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Unsqueeze__1066
[04/18/2022-02:33:57] [V] [TRT] Running: ShuffleShuffleFusion
[04/18/2022-02:33:57] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_1_up_lvl_4/downsample_max_x2/MaxPool__993 with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Unsqueeze__996
[04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion
[04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/separable_conv/BiasAdd with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3
[04/18/2022-02:33:57] [V] [TRT] Running: ShuffleShuffleFusion
[04/18/2022-02:33:57] [V] [TRT] ShuffleShuffleFusion: Fusing Transpose__5465 with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1105
[04/18/2022-02:33:57] [V] [TRT] Running: ShuffleShuffleFusion
[04/18/2022-02:33:57] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/input_1_up_lvl_5/downsample_max_x2/MaxPool__1005 with
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1008 [04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion [04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/separable_conv/BiasAdd with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Running: ShuffleShuffleFusion [04/18/2022-02:33:57] [V] [TRT] ShuffleShuffleFusion: Fusing Transpose__5468 with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/stack_Unsqueeze__1034 [04/18/2022-02:33:57] [V] [TRT] Running: ShuffleShuffleFusion [04/18/2022-02:33:57] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/input_1_up_lvl_6/downsample_max_x2/MaxPool__1017 with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/stack_Unsqueeze__1019 [04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion [04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/separable_conv/BiasAdd with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/batchnorm/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Running: ShuffleShuffleFusion [04/18/2022-02:33:57] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/stack_Unsqueeze__1035 [04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion [04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/separable_conv/BiasAdd with 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/batchnorm/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Running: ShuffleShuffleFusion [04/18/2022-02:33:57] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/stack_Unsqueeze__1051 [04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion [04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/separable_conv/BiasAdd with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/batchnorm/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Running: ShuffleShuffleFusion [04/18/2022-02:33:57] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Unsqueeze__1067 [04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion [04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/separable_conv/BiasAdd with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/batchnorm/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Running: ShuffleShuffleFusion [04/18/2022-02:33:57] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/stack_Unsqueeze__1083 [04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion [04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/separable_conv/BiasAdd with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Running: ShuffleShuffleFusion [04/18/2022-02:33:57] [V] [TRT] ShuffleShuffleFusion: Fusing Transpose__5484 with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/stack_Unsqueeze__1193 [04/18/2022-02:33:57] [V] [TRT] Running: ShuffleShuffleFusion [04/18/2022-02:33:57] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/input_2_dn_lvl_3/downsample_max_x2/MaxPool__1092 with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/stack_Unsqueeze__1095 [04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion [04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/separable_conv/BiasAdd with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Running: ShuffleShuffleFusion [04/18/2022-02:33:57] [V] [TRT] ShuffleShuffleFusion: Fusing Transpose__5485 with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/stack_Unsqueeze__1177 [04/18/2022-02:33:57] [V] [TRT] Running: ShuffleShuffleFusion [04/18/2022-02:33:57] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/input_2_up_lvl_4/downsample_max_x2/MaxPool__1104 with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1107 [04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion [04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/separable_conv/BiasAdd with 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Running: ShuffleShuffleFusion [04/18/2022-02:33:57] [V] [TRT] ShuffleShuffleFusion: Fusing Transpose__5488 with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/stack_Unsqueeze__1161 [04/18/2022-02:33:57] [V] [TRT] Running: ShuffleShuffleFusion [04/18/2022-02:33:57] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/input_2_up_lvl_5/downsample_max_x2/MaxPool__1116 with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/stack_Unsqueeze__1119 [04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion [04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/separable_conv/BiasAdd with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Running: ShuffleShuffleFusion [04/18/2022-02:33:57] [V] [TRT] ShuffleShuffleFusion: Fusing Transpose__5492 with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/stack_Unsqueeze__1145 [04/18/2022-02:33:57] [V] [TRT] Running: ShuffleShuffleFusion [04/18/2022-02:33:57] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/input_2_up_lvl_6/downsample_max_x2/MaxPool__1128 with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/stack_Unsqueeze__1130 [04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion [04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/separable_conv/BiasAdd with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/batchnorm/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Running: ShuffleShuffleFusion [04/18/2022-02:33:57] [V] [TRT] 
ShuffleShuffleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/stack_Unsqueeze__1146 [04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion [04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/separable_conv/BiasAdd with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/batchnorm/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Running: ShuffleShuffleFusion [04/18/2022-02:33:57] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/stack_Unsqueeze__1162 [04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion [04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/separable_conv/BiasAdd with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/batchnorm/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Running: ShuffleShuffleFusion [04/18/2022-02:33:57] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/stack_Unsqueeze__1178 [04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion [04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/separable_conv/BiasAdd with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/batchnorm/FusedBatchNormV3 
[04/18/2022-02:33:57] [V] [TRT] Running: ShuffleShuffleFusion [04/18/2022-02:33:57] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/Reshape with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/stack_Unsqueeze__1194 [04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion [04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/separable_conv/BiasAdd with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion [04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_0/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion [04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_0/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Running: ShuffleShuffleFusion [04/18/2022-02:33:57] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/input_3_dn_lvl_3/downsample_max_x2/MaxPool__1249 with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/stack_Unsqueeze__1252 [04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion [04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd with 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_0/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion [04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_0/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion [04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_0/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion [04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_0/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion [04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/separable_conv/BiasAdd with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Running: ShuffleShuffleFusion [04/18/2022-02:33:57] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd__1223 with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape [04/18/2022-02:33:57] [V] [TRT] Running: ShuffleShuffleFusion [04/18/2022-02:33:57] [V] [TRT] ShuffleShuffleFusion: Fusing 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd__1246 with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape [04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion [04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd_1 with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_1/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion [04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_1 with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_1/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Running: ShuffleShuffleFusion [04/18/2022-02:33:57] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/input_3_up_lvl_4/downsample_max_x2/MaxPool__1307 with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/stack_Unsqueeze__1310 [04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion [04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_1 with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_1/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion [04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_1 with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_1/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Running: 
ConvScaleFusion [04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd_1 with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_1/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion [04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_1 with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_1/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion [04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/separable_conv/BiasAdd with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Running: ShuffleShuffleFusion [04/18/2022-02:33:57] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_1__1281 with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_1 [04/18/2022-02:33:57] [V] [TRT] Running: ShuffleShuffleFusion [04/18/2022-02:33:57] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_1__1304 with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_1 [04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion [04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd_2 with 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_2/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion [04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_2 with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_2/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Running: ShuffleShuffleFusion [04/18/2022-02:33:57] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/input_3_up_lvl_5/downsample_max_x2/MaxPool__1365 with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/stack_Unsqueeze__1368 [04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion [04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_2 with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_2/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion [04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_2 with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_2/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion [04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd_2 with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_2/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion [04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_2 with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_2/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion [04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/separable_conv/BiasAdd with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Running: ShuffleShuffleFusion [04/18/2022-02:33:57] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_2__1339 with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_2 [04/18/2022-02:33:57] [V] [TRT] Running: ShuffleShuffleFusion [04/18/2022-02:33:57] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_2__1362 with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_2 [04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion [04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd_3 with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_3/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion [04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_3 with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_3/FusedBatchNormV3 [04/18/2022-02:33:57] [V] 
[TRT] Running: ShuffleShuffleFusion [04/18/2022-02:33:57] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/input_3_up_lvl_6/downsample_max_x2/MaxPool__1423 with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/stack_Unsqueeze__1425 [04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion [04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_3 with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_3/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion [04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_3 with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_3/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion [04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/separable_conv/BiasAdd with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/batchnorm/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion [04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd_3 with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_3/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion [04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_3 with 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_3/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion [04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BiasAdd_4 with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_4/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion [04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BiasAdd_4 with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/BatchNorm/feature_4/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Running: ShuffleShuffleFusion [04/18/2022-02:33:57] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_3__1397 with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_3 [04/18/2022-02:33:57] [V] [TRT] Running: ShuffleShuffleFusion [04/18/2022-02:33:57] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_3__1420 with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_3 [04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion [04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BiasAdd_4 with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/BatchNorm/feature_4/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion [04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: 
Fusing StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BiasAdd_4 with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/BatchNorm/feature_4/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion [04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BiasAdd_4 with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/BatchNorm/feature_4/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Running: ConvScaleFusion [04/18/2022-02:33:57] [V] [TRT] ConvScaleFusion: Fusing StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BiasAdd_4 with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/BatchNorm/feature_4/FusedBatchNormV3 [04/18/2022-02:33:57] [V] [TRT] Running: ShuffleShuffleFusion [04/18/2022-02:33:57] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_4__1454 with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_4 [04/18/2022-02:33:57] [V] [TRT] Running: ShuffleShuffleFusion [04/18/2022-02:33:57] [V] [TRT] ShuffleShuffleFusion: Fusing StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_4__1749 with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_4 [04/18/2022-02:33:57] [V] [TRT] After Myelin optimization: 738 layers [04/18/2022-02:33:57] [V] [TRT] Applying ScaleNodes fusions. 
[04/18/2022-02:33:57] [V] [TRT] Running: ConstEltFusion
[04/18/2022-02:33:57] [V] [TRT] ConstEltFusion: Fusing preprocessor/scale_value:0 with preprocessor/scale
[04/18/2022-02:33:57] [V] [TRT] Running: ConstEltFusion
[04/18/2022-02:33:57] [V] [TRT] ConstEltFusion: Fusing preprocessor/mean_value:0 with preprocessor/mean
[04/18/2022-02:33:57] [V] [TRT] After scale fusion: 736 layers
[04/18/2022-02:33:57] [V] [TRT] Running: ScaleScaleFusion
[04/18/2022-02:33:57] [V] [TRT] ScaleScaleFusion: Fusing preprocessor/scale_value:0 + preprocessor/scale with preprocessor/mean_value:0 + preprocessor/mean
[04/18/2022-02:33:57] [V] [TRT] Running: ReduceToPoolingFusion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_squeeze/Mean from REDUCE to POOLING
[04/18/2022-02:33:57] [V] [TRT] Running: ReduceToPoolingFusion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_squeeze/Mean from REDUCE to POOLING
[04/18/2022-02:33:57] [V] [TRT] Running: ReduceToPoolingFusion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_squeeze/Mean from REDUCE to POOLING
[04/18/2022-02:33:57] [V] [TRT] Running: ConvEltwiseSumFusion
[04/18/2022-02:33:57] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/project_conv2d/Conv2D with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/add/add
[04/18/2022-02:33:57] [V] [TRT] Running: ReduceToPoolingFusion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_squeeze/Mean from REDUCE to POOLING
[04/18/2022-02:33:57] [V] [TRT] Running: ReduceToPoolingFusion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_squeeze/Mean from REDUCE to POOLING
[04/18/2022-02:33:57] [V] [TRT] Running: ConvEltwiseSumFusion
[04/18/2022-02:33:57] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/project_conv2d/Conv2D with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/add/add
[04/18/2022-02:33:57] [V] [TRT] Running: ReduceToPoolingFusion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_squeeze/Mean from REDUCE to POOLING
[04/18/2022-02:33:57] [V] [TRT] Running: ReduceToPoolingFusion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_squeeze/Mean from REDUCE to POOLING
[04/18/2022-02:33:57] [V] [TRT] Running: ConvEltwiseSumFusion
[04/18/2022-02:33:57] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/project_conv2d/Conv2D with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/add/add
[04/18/2022-02:33:57] [V] [TRT] Running: ReduceToPoolingFusion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_squeeze/Mean from REDUCE to POOLING
[04/18/2022-02:33:57] [V] [TRT] Running: ConvEltwiseSumFusion
[04/18/2022-02:33:57] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/project_conv2d/Conv2D with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/add/add
[04/18/2022-02:33:57] [V] [TRT] Running: ReduceToPoolingFusion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_squeeze/Mean from REDUCE to POOLING
[04/18/2022-02:33:57] [V] [TRT] Running: ReduceToPoolingFusion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_squeeze/Mean from REDUCE to POOLING
[04/18/2022-02:33:57] [V] [TRT] Running: ConvEltwiseSumFusion
[04/18/2022-02:33:57] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/project_conv2d/Conv2D with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/add/add
[04/18/2022-02:33:57] [V] [TRT] Running: ReduceToPoolingFusion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_squeeze/Mean from REDUCE to POOLING
[04/18/2022-02:33:57] [V] [TRT] Running: ConvEltwiseSumFusion
[04/18/2022-02:33:57] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/project_conv2d/Conv2D with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/add/add
[04/18/2022-02:33:57] [V] [TRT] Running: ReduceToPoolingFusion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_squeeze/Mean from REDUCE to POOLING
[04/18/2022-02:33:57] [V] [TRT] Running: ReduceToPoolingFusion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_squeeze/Mean from REDUCE to POOLING
[04/18/2022-02:33:57] [V] [TRT] Running: ConvEltwiseSumFusion
[04/18/2022-02:33:57] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/project_conv2d/Conv2D with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/add/add
[04/18/2022-02:33:57] [V] [TRT] Running: ReduceToPoolingFusion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_squeeze/Mean from REDUCE to POOLING
[04/18/2022-02:33:57] [V] [TRT] Running: ConvEltwiseSumFusion
[04/18/2022-02:33:57] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/project_conv2d/Conv2D with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/add/add
[04/18/2022-02:33:57] [V] [TRT] Running: ReduceToPoolingFusion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_squeeze/Mean from REDUCE to POOLING
[04/18/2022-02:33:57] [V] [TRT] Running: ConvEltwiseSumFusion
[04/18/2022-02:33:57] [V] [TRT] ConvEltwiseSumFusion: Fusing StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/project_conv2d/Conv2D with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/add/add
[04/18/2022-02:33:57] [V] [TRT] Running: ReduceToPoolingFusion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_squeeze/Mean from REDUCE to POOLING
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_expand_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_expand_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/expand_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_reduce_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_expand_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_expand_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_reduce_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_expand_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_expand_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/expand_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/depthwise_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_reduce_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_expand_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/expand_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/depthwise_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_reduce_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_expand_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_expand_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/expand_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/depthwise_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_reduce_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_expand_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_reduce_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_expand_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_expand_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/expand_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/depthwise_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_reduce_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_expand_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/expand_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/depthwise_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_reduce_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_expand_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/expand_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/depthwise_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_reduce_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_expand_activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_0/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_0/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_0/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_0/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_0/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_0/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_1/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_1/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_1/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_1/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_1/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_1/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_2/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_2/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_2/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_2/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_2/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_2/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_3/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_3/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/activation/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_3/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_3/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_3/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_3/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_4/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_4/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_4/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_4/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_4/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:57] [V] [TRT] Running: ActivationToPointwiseConversion
[04/18/2022-02:33:57] [V] [TRT] Swap the layer type of StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_4/Sigmoid from ACTIVATION to POINTWISE
[04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion
[04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_activation/mul
[04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion
[04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_activation/mul
[04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion
[04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_activation/mul
[04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion
[04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_excite/mul
[04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion
[04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_activation/mul
[04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion
[04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_activation/mul
[04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion
[04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_activation/mul
[04/18/2022-02:33:58] [V] [TRT] Running:
PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_expand_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_excite/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_expand_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_excite/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/expand_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/expand_activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing 
PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_reduce_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_reduce_activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_expand_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/se_excite/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_expand_activation/Sigmoid) with 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_excite/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_reduce_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_reduce_activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_expand_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/se_excite/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: 
PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_expand_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_excite/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/expand_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/expand_activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/depthwise_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/depthwise_activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_reduce_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_reduce_activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_expand_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_2/se_excite/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing 
PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/expand_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/expand_activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/depthwise_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/depthwise_activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_reduce_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_reduce_activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_expand_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/se_excite/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_activation/Sigmoid) with 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_expand_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_excite/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/expand_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/expand_activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/depthwise_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/depthwise_activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_reduce_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_reduce_activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_expand_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_2/se_excite/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: 
PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_reduce_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_reduce_activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_expand_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/se_excite/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing 
PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_expand_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_excite/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/expand_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/expand_activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/depthwise_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/depthwise_activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_reduce_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_reduce_activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_expand_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_2/se_excite/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/expand_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/expand_activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/depthwise_activation/Sigmoid) with 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/depthwise_activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_reduce_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_reduce_activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_expand_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_3/se_excite/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/expand_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/expand_activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/depthwise_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/depthwise_activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_reduce_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_reduce_activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_expand_activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/se_excite/mul [04/18/2022-02:33:58] [V] [TRT] Running: 
PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/post_combine/activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing 
PWN(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/post_combine/activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/post_combine/activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/post_combine/activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/post_combine/activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/post_combine/activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/activation/Sigmoid) with 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/post_combine/activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/post_combine/activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/post_combine/activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/post_combine/activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/post_combine/activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/post_combine/activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/post_combine/activation/mul [04/18/2022-02:33:58] [V] 
[TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/post_combine/activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_0/Sigmoid) with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_0/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_0/Sigmoid) with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_0/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_0/Sigmoid) with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_0/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_0/Sigmoid) with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_0/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/post_combine/activation/mul [04/18/2022-02:33:58] [V] 
[TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_0/Sigmoid) with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_0/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_0/Sigmoid) with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_0/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_1/Sigmoid) with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_1/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_1/Sigmoid) with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_1/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_1/Sigmoid) with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_1/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_1/Sigmoid) with 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_1/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/post_combine/activation/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_1/Sigmoid) with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_1/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_1/Sigmoid) with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_1/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_2/Sigmoid) with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_2/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_2/Sigmoid) with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_2/mul [04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion [04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing 
PWN(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_2/Sigmoid) with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_2/mul
[04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion
[04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_2/Sigmoid) with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_2/mul
[04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion
[04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/post_combine/activation/mul
[04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion
[04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_2/Sigmoid) with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_2/mul
[04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion
[04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_2/Sigmoid) with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_2/mul
[04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion
[04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_3/Sigmoid) with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_3/mul
[04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion
[04/18/2022-02:33:58] [V]
[TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_3/Sigmoid) with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_3/mul
[04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion
[04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/activation/Sigmoid) with StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/activation/mul
[04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion
[04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_3/Sigmoid) with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_3/mul
[04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion
[04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_3/Sigmoid) with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_3/mul
[04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion
[04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_3/Sigmoid) with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_3/mul
[04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion
[04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_3/Sigmoid) with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_3/mul
[04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion
[04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_4/Sigmoid) with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/activation_4/mul
[04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion
[04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_4/Sigmoid) with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_0/activation_4/mul
[04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion
[04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_4/Sigmoid) with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_1/activation_4/mul
[04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion
[04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_4/Sigmoid) with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_1/activation_4/mul
[04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion
[04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_4/Sigmoid) with StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_2/activation_4/mul
[04/18/2022-02:33:58] [V] [TRT] Running: PointWiseFusion
[04/18/2022-02:33:58] [V] [TRT] PointWiseFusion: Fusing PWN(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_4/Sigmoid) with
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/BoxPredictionTower/conv2d_2/activation_4/mul
[04/18/2022-02:33:58] [V] [TRT] After vertical fusions: 608 layers
[04/18/2022-02:33:58] [V] [TRT] After dupe layer removal: 608 layers
[04/18/2022-02:33:58] [V] [TRT] After final dead-layer removal: 608 layers
[04/18/2022-02:33:58] [V] [TRT] Merging layers: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/conv/BiasAdd || StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_conv2d/Conv2D
[04/18/2022-02:33:58] [V] [TRT] Merging layers: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/conv/BiasAdd || StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_0_up_lvl_4/1x1_pre_sample/conv/BiasAdd || StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_conv2d/Conv2D
[04/18/2022-02:33:58] [V] [TRT] Merging layers: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd + StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3 || StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd + StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3
[04/18/2022-02:33:58] [V] [TRT] After tensor merging: 604 layers
[04/18/2022-02:33:58] [V] [TRT] Eliminating concatenation StatefulPartitionedCall/concat
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape:0 to StatefulPartitionedCall/concat:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_1:0 to StatefulPartitionedCall/concat:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_2:0 to StatefulPartitionedCall/concat:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_3:0 to StatefulPartitionedCall/concat:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_4:0 to StatefulPartitionedCall/concat:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Eliminating concatenation StatefulPartitionedCall/concat_1
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape:0 to StatefulPartitionedCall/concat_1:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_1:0 to StatefulPartitionedCall/concat_1:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_2:0 to StatefulPartitionedCall/concat_1:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_3:0 to StatefulPartitionedCall/concat_1:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_4:0 to StatefulPartitionedCall/concat_1:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Eliminating concatenation StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/stack_Concat__1426
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/stack_Unsqueeze__1424:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/stack_Concat__1426:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/stack_Unsqueeze__1425:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/stack_Concat__1426:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Eliminating concatenation StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/stack_Concat__1369
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/stack_Unsqueeze__1145:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/stack_Concat__1369:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/stack_Unsqueeze__1367:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/stack_Concat__1369:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/stack_Unsqueeze__1368:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/stack_Concat__1369:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Eliminating concatenation StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/stack_Concat__1311
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/stack_Unsqueeze__1161:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/stack_Concat__1311:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/stack_Unsqueeze__1309:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/stack_Concat__1311:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/stack_Unsqueeze__1310:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/stack_Concat__1311:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Eliminating concatenation StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/stack_Concat__1253
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/stack_Unsqueeze__1177:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/stack_Concat__1253:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/stack_Unsqueeze__1251:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/stack_Concat__1253:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/stack_Unsqueeze__1252:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/stack_Concat__1253:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Eliminating concatenation StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/stack_Concat__1195
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/stack_Unsqueeze__1193:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/stack_Concat__1195:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/stack_Unsqueeze__1194:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/stack_Concat__1195:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Eliminating concatenation StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1191
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1190:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1191:0 because input does not support striding.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1190:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1191:0 because input does not support striding.
[04/18/2022-02:33:58] [V] [TRT] Eliminating concatenation StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1188
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1186:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1188:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1186:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1188:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Eliminating concatenation StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/stack_Concat__1179
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/stack_Unsqueeze__1177:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/stack_Concat__1179:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/stack_Unsqueeze__1178:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/stack_Concat__1179:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Eliminating concatenation StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1175
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1174:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1175:0 because input does not support striding.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1174:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1175:0 because input does not support striding.
[04/18/2022-02:33:58] [V] [TRT] Eliminating concatenation StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1172
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1171:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1172:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1171:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1172:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Eliminating concatenation StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/stack_Concat__1163
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/stack_Unsqueeze__1161:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/stack_Concat__1163:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/stack_Unsqueeze__1162:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/stack_Concat__1163:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Eliminating concatenation StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1159
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1158:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1159:0 because input does not support striding.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1158:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1159:0 because input does not support striding.
[04/18/2022-02:33:58] [V] [TRT] Eliminating concatenation StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1156
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1155:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1156:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1155:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1156:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Eliminating concatenation StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/stack_Concat__1147
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/stack_Unsqueeze__1145:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/stack_Concat__1147:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/stack_Unsqueeze__1146:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/stack_Concat__1147:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Eliminating concatenation StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1143
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1142:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1143:0 because input does not support striding.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1142:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1143:0 because input does not support striding.
[04/18/2022-02:33:58] [V] [TRT] Eliminating concatenation StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1140
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1139:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1140:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1139:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1140:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Eliminating concatenation StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/stack_Concat__1131
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/stack_Unsqueeze__1129:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/stack_Concat__1131:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/stack_Unsqueeze__1130:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/stack_Concat__1131:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Eliminating concatenation StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/stack_Concat__1120
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/stack_Unsqueeze__1034:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/stack_Concat__1120:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/stack_Unsqueeze__1118:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/stack_Concat__1120:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/stack_Unsqueeze__1119:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/stack_Concat__1120:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Eliminating concatenation StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Concat__1108
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1105:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Concat__1108:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1106:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Concat__1108:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1107:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Concat__1108:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Eliminating concatenation StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/stack_Concat__1096
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Unsqueeze__1066:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/stack_Concat__1096:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/stack_Unsqueeze__1094:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/stack_Concat__1096:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/stack_Unsqueeze__1095:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/stack_Concat__1096:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Eliminating concatenation StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/stack_Concat__1084
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/stack_Unsqueeze__1082:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/stack_Concat__1084:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/stack_Unsqueeze__1083:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/stack_Concat__1084:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Eliminating concatenation StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1080
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1079:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1080:0 because input does not support striding.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1079:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1080:0 because input does not support striding.
[04/18/2022-02:33:58] [V] [TRT] Eliminating concatenation StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1077
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1075:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1077:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1075:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1077:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Eliminating concatenation StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Concat__1068
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Unsqueeze__1066:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Concat__1068:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Unsqueeze__1067:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Concat__1068:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Eliminating concatenation StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1064
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1062:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1064:0 because input does not support striding.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1062:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1064:0 because input does not support striding.
[04/18/2022-02:33:58] [V] [TRT] Eliminating concatenation StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1061
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1060:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1061:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1060:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1061:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Eliminating concatenation StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/stack_Concat__1052
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1105:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/stack_Concat__1052:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/stack_Unsqueeze__1051:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/stack_Concat__1052:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Eliminating concatenation StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1048
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1046:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1048:0 because input does not support striding.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1046:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1048:0 because input does not support striding.
[04/18/2022-02:33:58] [V] [TRT] Eliminating concatenation StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1045
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1043:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1045:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1043:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1045:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Eliminating concatenation StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/stack_Concat__1036
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/stack_Unsqueeze__1034:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/stack_Concat__1036:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/stack_Unsqueeze__1035:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/stack_Concat__1036:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Eliminating concatenation StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1032
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1030:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1032:0 because input does not support striding.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1030:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__1032:0 because input does not support striding.
[04/18/2022-02:33:58] [V] [TRT] Eliminating concatenation StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1029
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1028:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1029:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1028:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__1029:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Eliminating concatenation StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/stack_Concat__1020
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/stack_Unsqueeze__1018:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/stack_Concat__1020:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/stack_Unsqueeze__1019:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/stack_Concat__1020:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Eliminating concatenation StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Concat__1009
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1006:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Concat__1009:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1007:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Concat__1009:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1008:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Concat__1009:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Eliminating concatenation StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Concat__997
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Unsqueeze__994:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Concat__997:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Unsqueeze__995:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Concat__997:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Unsqueeze__996:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Concat__997:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Eliminating concatenation StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Concat__985
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Unsqueeze__982:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Concat__985:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Unsqueeze__983:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Concat__985:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Unsqueeze__984:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Concat__985:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Eliminating concatenation StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/stack_Concat__973
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/stack_Unsqueeze__971:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/stack_Concat__973:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/stack_Unsqueeze__972:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/stack_Concat__973:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Eliminating concatenation StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__957
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__957:0 because input does not support striding.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__957:0 because input does not support striding.
[04/18/2022-02:33:58] [V] [TRT] Eliminating concatenation StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__954
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__954:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__954:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Eliminating concatenation StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/stack_Concat__945
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/stack_Unsqueeze__943:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/stack_Concat__945:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/stack_Unsqueeze__944:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/stack_Concat__945:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Eliminating concatenation StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__917
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__917:0 because input does not support striding.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__917:0 because input does not support striding.
[04/18/2022-02:33:58] [V] [TRT] Eliminating concatenation StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__914
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__914:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__914:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Eliminating concatenation StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/stack_Concat__905
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/stack_Unsqueeze__903:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/stack_Concat__905:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/stack_Unsqueeze__904:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/stack_Concat__905:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Eliminating concatenation StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__901
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__901:0 because input does not support striding.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__901:0 because input does not support striding.
[04/18/2022-02:33:58] [V] [TRT] Eliminating concatenation StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__898
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__898:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__898:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Eliminating concatenation StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/stack_Concat__889
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1006:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/stack_Concat__889:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/stack_Unsqueeze__888:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/stack_Concat__889:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Eliminating concatenation StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__885
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__885:0 because input does not support striding.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__885:0 because input does not support striding.
[04/18/2022-02:33:58] [V] [TRT] Eliminating concatenation StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__882
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__882:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] Generating copy for StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881:0 to StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__882:0 because copy elision not implemented for axis.
[04/18/2022-02:33:58] [V] [TRT] After concat removal: 669 layers
[04/18/2022-02:33:58] [V] [TRT] Graph construction and optimization completed in 0.632062 seconds.
[04/18/2022-02:33:59] [V] [TRT] Using cublasLt as a tactic source
[04/18/2022-02:33:59] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +809, GPU +350, now: CPU 1470, GPU 1651 (MiB)
[04/18/2022-02:33:59] [V] [TRT] Using cuDNN as a tactic source
[04/18/2022-02:33:59] [I] [TRT] [MemUsageChange] Init cuDNN: CPU +126, GPU +56, now: CPU 1596, GPU 1707 (MiB)
[04/18/2022-02:33:59] [I] [TRT] Local timing cache in use. Profiling results in this builder pass will not be stored.
[04/18/2022-02:33:59] [V] [TRT] Constructing optimization profile number 0 [1/1].
[04/18/2022-02:33:59] [V] [TRT] Reserving memory for activation tensors. Host: 0 bytes Device: 25185056 bytes
[04/18/2022-02:33:59] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:33:59] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:33:59] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:33:59] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:33:59] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:33:59] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:33:59] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:33:59] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:33:59] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:33:59] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:33:59] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:33:59] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:33:59] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:33:59] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:33:59] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:33:59] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:33:59] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:33:59] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:33:59] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:33:59] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:33:59] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:33:59] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:33:59] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:33:59] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:33:59] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:33:59] [V] [TRT]
*************** Autotuning Reformat: Float(786432,1536,3,1) -> Float(786432,1,1536,512) ***************
[04/18/2022-02:33:59] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(input -> ) (Reformat)
[04/18/2022-02:33:59] [V] [TRT] Tactic: 1002 Time: 3.41517
[04/18/2022-02:33:59] [V] [TRT] Tactic: 0 Time: 3.53971
[04/18/2022-02:33:59] [V] [TRT] Fastest Tactic: 1002 Time: 3.41517
[04/18/2022-02:33:59] [V] [TRT] *************** Autotuning Reformat: Float(786432,1536,3,1) -> Float(196608,1:4,384,128) ***************
[04/18/2022-02:33:59] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(input -> ) (Reformat)
[04/18/2022-02:34:00] [V] [TRT] Tactic: 1002 Time: 3.49133
[04/18/2022-02:34:00] [V] [TRT] Tactic: 0 Time: 3.54099
[04/18/2022-02:34:00] [V] [TRT] Fastest Tactic: 1002 Time: 3.49133
[04/18/2022-02:34:00] [V] [TRT] *************** Autotuning Reformat: Float(786432,1536,3,1) -> Float(24576,1536:32,3,1) ***************
[04/18/2022-02:34:00] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(input -> ) (Reformat)
[04/18/2022-02:34:00] [V] [TRT] Tactic: 1002 Time: 3.44
[04/18/2022-02:34:00] [V] [TRT] Tactic: 0 Time: 3.42285
[04/18/2022-02:34:00] [V] [TRT] Fastest Tactic: 0 Time: 3.42285
[04/18/2022-02:34:00] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:34:00] [V] [TRT] *************** Autotuning Reformat: Float(786432,262144,512,1) -> Float(786432,1,1536,3) ***************
[04/18/2022-02:34:00] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(preprocessor/transpose:0_0 -> ) (Reformat)
[04/18/2022-02:34:00] [V] [TRT] Tactic: 1002 Time: 3.424
[04/18/2022-02:34:00] [V] [TRT] Tactic: 0 Time: 3.54982
[04/18/2022-02:34:00] [V] [TRT] Fastest Tactic: 1002 Time: 3.424
[04/18/2022-02:34:00] [V] [TRT] *************** Autotuning Reformat: Float(786432,262144,512,1) -> Float(262144,1:4,512,1) ***************
[04/18/2022-02:34:00] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(preprocessor/transpose:0_0 -> ) (Reformat)
[04/18/2022-02:34:00] [V] [TRT] Tactic: 1002 Time: 3.93408
[04/18/2022-02:34:00] [V] [TRT] Tactic: 0 Time: 6.27494
[04/18/2022-02:34:00] [V] [TRT] Fastest Tactic: 1002 Time: 3.93408
[04/18/2022-02:34:00] [V] [TRT] *************** Autotuning Reformat: Float(786432,1,1536,3) -> Float(786432,262144,512,1) ***************
[04/18/2022-02:34:00] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(preprocessor/transpose:0_0 -> ) (Reformat)
[04/18/2022-02:34:00] [V] [TRT] Tactic: 1002 Time: 13.0747
[04/18/2022-02:34:00] [V] [TRT] Tactic: 0 Time: 5.25517
[04/18/2022-02:34:00] [V] [TRT] Fastest Tactic: 0 Time: 5.25517
[04/18/2022-02:34:00] [V] [TRT] *************** Autotuning Reformat: Float(786432,1,1536,3) -> Float(262144,1:4,512,1) ***************
[04/18/2022-02:34:00] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(preprocessor/transpose:0_0 -> ) (Reformat)
[04/18/2022-02:34:01] [V] [TRT] Tactic: 1002 Time: 13.9695
[04/18/2022-02:34:01] [V] [TRT] Tactic: 0 Time: 6.30106
[04/18/2022-02:34:01] [V] [TRT] Fastest Tactic: 0 Time: 6.30106
[04/18/2022-02:34:01] [V] [TRT] *************** Autotuning Reformat: Float(262144,1:4,512,1) -> Float(786432,262144,512,1) ***************
[04/18/2022-02:34:01] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(preprocessor/transpose:0_0 -> ) (Reformat)
[04/18/2022-02:34:01] [V] [TRT] Tactic: 1002 Time: 13.0876
[04/18/2022-02:34:01] [V] [TRT] Tactic: 0 Time: 8.54745
[04/18/2022-02:34:01] [V] [TRT] Fastest Tactic: 0 Time: 8.54745
[04/18/2022-02:34:01] [V] [TRT] *************** Autotuning Reformat: Float(262144,1:4,512,1) -> Float(786432,1,1536,3) ***************
[04/18/2022-02:34:01] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(preprocessor/transpose:0_0 -> ) (Reformat)
[04/18/2022-02:34:01] [V] [TRT] Tactic: 1002 Time: 12.8008
[04/18/2022-02:34:01] [V] [TRT] Tactic: 0 Time: 4.09997
[04/18/2022-02:34:01] [V] [TRT] Fastest Tactic: 0 Time: 4.09997
[04/18/2022-02:34:01] [V] [TRT] *************** Autotuning Reformat: Float(262144,262144:32,512,1) -> Float(786432,262144,512,1) ***************
[04/18/2022-02:34:01] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(preprocessor/transpose:0_0 -> ) (Reformat)
[04/18/2022-02:34:02] [V] [TRT] Tactic: 1002 Time: 13.1052
[04/18/2022-02:34:02] [V] [TRT] Tactic: 0 Time: 15.9525
[04/18/2022-02:34:02] [V] [TRT] Fastest Tactic: 1002 Time: 13.1052
[04/18/2022-02:34:02] [V] [TRT] *************** Autotuning Reformat: Float(262144,262144:32,512,1) -> Float(786432,1,1536,3) ***************
[04/18/2022-02:34:02] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(preprocessor/transpose:0_0 -> ) (Reformat)
[04/18/2022-02:34:02] [V] [TRT] Tactic: 1002 Time: 13.4301
[04/18/2022-02:34:02] [V] [TRT] Tactic: 0 Time: 6.75571
[04/18/2022-02:34:02] [V] [TRT] Fastest Tactic: 0 Time: 6.75571
[04/18/2022-02:34:02] [V] [TRT] *************** Autotuning Reformat: Float(262144,262144:32,512,1) -> Float(262144,1:4,512,1) ***************
[04/18/2022-02:34:02] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(preprocessor/transpose:0_0 -> ) (Reformat)
[04/18/2022-02:34:03] [V] [TRT] Tactic: 1002 Time: 13.663
[04/18/2022-02:34:03] [V] [TRT] Tactic: 0 Time: 9.55802
[04/18/2022-02:34:03] [V] [TRT] Fastest Tactic: 0 Time: 9.55802
[04/18/2022-02:34:03] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:34:03] [V] [TRT] *************** Autotuning Reformat: Float(786432,262144,512,1) -> Float(786432,1,1536,3) ***************
[04/18/2022-02:34:03] [V] [TRT] *************** Autotuning Reformat: Float(786432,262144,512,1) -> Float(262144,1:4,512,1) ***************
[04/18/2022-02:34:03] [V] [TRT] *************** Autotuning Reformat: Float(786432,1,1536,3) -> Float(786432,262144,512,1) ***************
[04/18/2022-02:34:03] [V] [TRT] *************** Autotuning Reformat: Float(786432,1,1536,3) -> Float(262144,1:4,512,1) ***************
[04/18/2022-02:34:03] [V] [TRT] *************** Autotuning Reformat: Float(262144,1:4,512,1) -> Float(786432,262144,512,1) ***************
[04/18/2022-02:34:03] [V] [TRT] *************** Autotuning Reformat: Float(262144,1:4,512,1) -> Float(786432,1,1536,3) ***************
[04/18/2022-02:34:03] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:34:03] [V] [TRT] *************** Autotuning Reformat: Float(2097152,65536,256,1) -> Float(2097152,1,8192,32) ***************
[04/18/2022-02:34:03] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:34:03] [V] [TRT] Tactic: 1002 Time: 8.86669
[04/18/2022-02:34:03] [V] [TRT] Tactic: 0 Time: 9.55046
[04/18/2022-02:34:03] [V] [TRT] Fastest Tactic: 1002 Time: 8.86669
[04/18/2022-02:34:03] [V] [TRT] *************** Autotuning Reformat: Float(2097152,65536,256,1) -> Float(524288,1:4,2048,8) ***************
[04/18/2022-02:34:03] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:34:03] [V] [TRT] Tactic: 1002 Time: 8.94643
[04/18/2022-02:34:04] [V] [TRT] Tactic: 0 Time: 9.55994
[04/18/2022-02:34:04] [V] [TRT] Fastest Tactic: 1002 Time: 8.94643
[04/18/2022-02:34:04] [V] [TRT] *************** Autotuning Reformat: Float(2097152,65536,256,1) -> Float(65536,65536:32,256,1) ***************
[04/18/2022-02:34:04] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:34:04] [V] [TRT] Tactic: 1002 Time: 8.87565
[04/18/2022-02:34:05] [V] [TRT] Tactic: 0 Time: 48.8168
[04/18/2022-02:34:05] [V] [TRT] Fastest Tactic: 1002 Time: 8.87565
[04/18/2022-02:34:05] [V] [TRT] *************** Autotuning Reformat: Float(2097152,65536,256,1) -> Float(1:4,131072,512,2) ***************
[04/18/2022-02:34:05] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:34:05] [V] [TRT] Tactic: 1002 Time: 10.1176
[04/18/2022-02:34:05] [V] [TRT] Tactic: 0 Time: 10.0567
[04/18/2022-02:34:05] [V] [TRT] Fastest Tactic: 0 Time: 10.0567
[04/18/2022-02:34:05] [V] [TRT] *************** Autotuning Reformat: Float(2097152,1,8192,32) -> Float(2097152,65536,256,1) ***************
[04/18/2022-02:34:05] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:34:05] [V] [TRT] Tactic: 1002 Time: 9.68179
[04/18/2022-02:34:06] [V] [TRT] Tactic: 0 Time: 42.39
[04/18/2022-02:34:06] [V] [TRT] Fastest Tactic: 1002 Time: 9.68179
[04/18/2022-02:34:06] [V] [TRT] *************** Autotuning Reformat: Float(2097152,1,8192,32) -> Float(524288,1:4,2048,8) ***************
[04/18/2022-02:34:06] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:34:06] [V] [TRT] Tactic: 1002 Time: 9.65504
[04/18/2022-02:34:06] [V] [TRT] Tactic: 0 Time: 9.70931
[04/18/2022-02:34:06] [V] [TRT] Fastest Tactic: 1002 Time: 9.65504
[04/18/2022-02:34:06] [V] [TRT] *************** Autotuning Reformat: Float(2097152,1,8192,32) -> Float(65536,65536:32,256,1) ***************
[04/18/2022-02:34:06] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:34:06] [V] [TRT] Tactic: 1002 Time: 9.64186
[04/18/2022-02:34:08] [V] [TRT] Tactic: 0 Time: 77.6543
[04/18/2022-02:34:08] [V] [TRT] Fastest Tactic: 1002 Time: 9.64186
[04/18/2022-02:34:08] [V] [TRT] *************** Autotuning Reformat: Float(2097152,1,8192,32) -> Float(1:4,131072,512,2) ***************
[04/18/2022-02:34:08] [V] [TRT] *************** Autotuning Reformat:
Float(524288,1:4,2048,8) -> Float(2097152,65536,256,1) *************** [04/18/2022-02:34:08] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:34:08] [V] [TRT] Tactic: 1002 Time: 9.07853 [04/18/2022-02:34:09] [V] [TRT] Tactic: 0 Time: 42.2295 [04/18/2022-02:34:09] [V] [TRT] Fastest Tactic: 1002 Time: 9.07853 [04/18/2022-02:34:09] [V] [TRT] *************** Autotuning Reformat: Float(524288,1:4,2048,8) -> Float(2097152,1,8192,32) *************** [04/18/2022-02:34:09] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:34:09] [V] [TRT] Tactic: 1002 Time: 8.992 [04/18/2022-02:34:09] [V] [TRT] Tactic: 0 Time: 9.40237 [04/18/2022-02:34:09] [V] [TRT] Fastest Tactic: 1002 Time: 8.992 [04/18/2022-02:34:09] [V] [TRT] *************** Autotuning Reformat: Float(524288,1:4,2048,8) -> Float(65536,65536:32,256,1) *************** [04/18/2022-02:34:09] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:34:09] [V] [TRT] Tactic: 1002 Time: 9.1465 [04/18/2022-02:34:10] [V] [TRT] Tactic: 0 Time: 77.2957 [04/18/2022-02:34:10] [V] [TRT] Fastest Tactic: 1002 Time: 9.1465 [04/18/2022-02:34:10] [V] [TRT] *************** Autotuning Reformat: Float(524288,1:4,2048,8) -> Float(1:4,131072,512,2) *************** [04/18/2022-02:34:10] [V] [TRT] *************** Autotuning Reformat: Float(65536,65536:32,256,1) -> Float(2097152,65536,256,1) *************** [04/18/2022-02:34:10] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:34:10] [V] [TRT] Tactic: 1002 Time: 9.64288 [04/18/2022-02:34:11] [V] [TRT] Tactic: 0 Time: 42.4384 
[04/18/2022-02:34:11] [V] [TRT] Fastest Tactic: 1002 Time: 9.64288 [04/18/2022-02:34:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,65536:32,256,1) -> Float(2097152,1,8192,32) *************** [04/18/2022-02:34:11] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:34:11] [V] [TRT] Tactic: 1002 Time: 9.06598 [04/18/2022-02:34:12] [V] [TRT] Tactic: 0 Time: 9.79098 [04/18/2022-02:34:12] [V] [TRT] Fastest Tactic: 1002 Time: 9.06598 [04/18/2022-02:34:12] [V] [TRT] *************** Autotuning Reformat: Float(65536,65536:32,256,1) -> Float(524288,1:4,2048,8) *************** [04/18/2022-02:34:12] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:34:12] [V] [TRT] Tactic: 1002 Time: 9.01837 [04/18/2022-02:34:12] [V] [TRT] Tactic: 0 Time: 9.81299 [04/18/2022-02:34:12] [V] [TRT] Fastest Tactic: 1002 Time: 9.01837 [04/18/2022-02:34:12] [V] [TRT] *************** Autotuning Reformat: Float(65536,65536:32,256,1) -> Float(1:4,131072,512,2) *************** [04/18/2022-02:34:12] [V] [TRT] *************** Autotuning Reformat: Float(1:4,131072,512,2) -> Float(2097152,65536,256,1) *************** [04/18/2022-02:34:12] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:34:12] [V] [TRT] Tactic: 1002 Time: 13.9862 [04/18/2022-02:34:13] [V] [TRT] Tactic: 0 Time: 41.1377 [04/18/2022-02:34:13] [V] [TRT] Fastest Tactic: 1002 Time: 13.9862 [04/18/2022-02:34:13] [V] [TRT] *************** Autotuning Reformat: Float(1:4,131072,512,2) -> Float(2097152,1,8192,32) *************** [04/18/2022-02:34:13] [V] [TRT] *************** Autotuning Reformat: Float(1:4,131072,512,2) -> Float(524288,1:4,2048,8) *************** 
[04/18/2022-02:34:13] [V] [TRT] *************** Autotuning Reformat: Float(1:4,131072,512,2) -> Float(65536,65536:32,256,1) *************** [04/18/2022-02:34:13] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:34:13] [V] [TRT] *************** Autotuning Reformat: Float(2097152,65536,256,1) -> Float(2097152,1,8192,32) *************** [04/18/2022-02:34:13] [V] [TRT] *************** Autotuning Reformat: Float(2097152,65536,256,1) -> Float(524288,1:4,2048,8) *************** [04/18/2022-02:34:13] [V] [TRT] *************** Autotuning Reformat: Float(2097152,1,8192,32) -> Float(2097152,65536,256,1) *************** [04/18/2022-02:34:13] [V] [TRT] *************** Autotuning Reformat: Float(2097152,1,8192,32) -> Float(524288,1:4,2048,8) *************** [04/18/2022-02:34:13] [V] [TRT] *************** Autotuning Reformat: Float(524288,1:4,2048,8) -> Float(2097152,65536,256,1) *************** [04/18/2022-02:34:13] [V] [TRT] *************** Autotuning Reformat: Float(524288,1:4,2048,8) -> Float(2097152,1,8192,32) *************** [04/18/2022-02:34:13] [V] [TRT] *************** Autotuning Reformat: Float(65536,65536:32,256,1) -> Float(2097152,65536,256,1) *************** [04/18/2022-02:34:13] [V] [TRT] *************** Autotuning Reformat: Float(65536,65536:32,256,1) -> Float(2097152,1,8192,32) *************** [04/18/2022-02:34:13] [V] [TRT] *************** Autotuning Reformat: Float(65536,65536:32,256,1) -> Float(524288,1:4,2048,8) *************** [04/18/2022-02:34:13] [V] [TRT] *************** Autotuning Reformat: Float(1:4,131072,512,2) -> Float(2097152,65536,256,1) *************** [04/18/2022-02:34:13] [V] [TRT] *************** Autotuning Reformat: Float(1:4,131072,512,2) -> Float(2097152,1,8192,32) *************** [04/18/2022-02:34:13] [V] [TRT] *************** Autotuning Reformat: Float(1:4,131072,512,2) -> Float(524288,1:4,2048,8) *************** [04/18/2022-02:34:13] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:34:13] [V] [TRT] 
*************** Autotuning Reformat: Float(2097152,65536,256,1) -> Float(2097152,1,8192,32) *************** [04/18/2022-02:34:13] [V] [TRT] *************** Autotuning Reformat: Float(2097152,65536,256,1) -> Float(524288,1:4,2048,8) *************** [04/18/2022-02:34:13] [V] [TRT] *************** Autotuning Reformat: Float(2097152,65536,256,1) -> Float(65536,65536:32,256,1) *************** [04/18/2022-02:34:13] [V] [TRT] *************** Autotuning Reformat: Float(2097152,65536,256,1) -> Float(1:4,131072,512,2) *************** [04/18/2022-02:34:13] [V] [TRT] *************** Autotuning Reformat: Float(2097152,1,8192,32) -> Float(2097152,65536,256,1) *************** [04/18/2022-02:34:13] [V] [TRT] *************** Autotuning Reformat: Float(2097152,1,8192,32) -> Float(524288,1:4,2048,8) *************** [04/18/2022-02:34:13] [V] [TRT] *************** Autotuning Reformat: Float(2097152,1,8192,32) -> Float(65536,65536:32,256,1) *************** [04/18/2022-02:34:13] [V] [TRT] *************** Autotuning Reformat: Float(2097152,1,8192,32) -> Float(1:4,131072,512,2) *************** [04/18/2022-02:34:13] [V] [TRT] *************** Autotuning Reformat: Float(524288,1:4,2048,8) -> Float(2097152,65536,256,1) *************** [04/18/2022-02:34:13] [V] [TRT] *************** Autotuning Reformat: Float(524288,1:4,2048,8) -> Float(2097152,1,8192,32) *************** [04/18/2022-02:34:13] [V] [TRT] *************** Autotuning Reformat: Float(524288,1:4,2048,8) -> Float(65536,65536:32,256,1) *************** [04/18/2022-02:34:13] [V] [TRT] *************** Autotuning Reformat: Float(524288,1:4,2048,8) -> Float(1:4,131072,512,2) *************** [04/18/2022-02:34:13] [V] [TRT] *************** Autotuning Reformat: Float(65536,65536:32,256,1) -> Float(2097152,65536,256,1) *************** [04/18/2022-02:34:13] [V] [TRT] *************** Autotuning Reformat: Float(65536,65536:32,256,1) -> Float(2097152,1,8192,32) *************** [04/18/2022-02:34:13] [V] [TRT] *************** Autotuning Reformat: 
Float(65536,65536:32,256,1) -> Float(524288,1:4,2048,8) *************** [04/18/2022-02:34:13] [V] [TRT] *************** Autotuning Reformat: Float(65536,65536:32,256,1) -> Float(1:4,131072,512,2) *************** [04/18/2022-02:34:13] [V] [TRT] *************** Autotuning Reformat: Float(1:4,131072,512,2) -> Float(2097152,65536,256,1) *************** [04/18/2022-02:34:13] [V] [TRT] *************** Autotuning Reformat: Float(1:4,131072,512,2) -> Float(2097152,1,8192,32) *************** [04/18/2022-02:34:13] [V] [TRT] *************** Autotuning Reformat: Float(1:4,131072,512,2) -> Float(524288,1:4,2048,8) *************** [04/18/2022-02:34:13] [V] [TRT] *************** Autotuning Reformat: Float(1:4,131072,512,2) -> Float(65536,65536:32,256,1) *************** [04/18/2022-02:34:13] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:34:13] [V] [TRT] *************** Autotuning Reformat: Float(2097152,65536,256,1) -> Float(2097152,1,8192,32) *************** [04/18/2022-02:34:13] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:34:13] [V] [TRT] Tactic: 1002 Time: 8.90419 [04/18/2022-02:34:13] [V] [TRT] Tactic: 0 Time: 9.6041 [04/18/2022-02:34:13] [V] [TRT] Fastest Tactic: 1002 Time: 8.90419 [04/18/2022-02:34:13] [V] [TRT] *************** Autotuning Reformat: Float(2097152,65536,256,1) -> Float(524288,1:4,2048,8) *************** [04/18/2022-02:34:13] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:34:13] [V] [TRT] Tactic: 1002 Time: 8.9705 [04/18/2022-02:34:14] [V] [TRT] Tactic: 0 Time: 9.52256 [04/18/2022-02:34:14] [V] [TRT] Fastest Tactic: 1002 Time: 8.9705 [04/18/2022-02:34:14] [V] [TRT] *************** Autotuning Reformat: Float(2097152,65536,256,1) -> Float(65536,65536:32,256,1) 
*************** [04/18/2022-02:34:14] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:34:14] [V] [TRT] Tactic: 1002 Time: 8.96128 [04/18/2022-02:34:15] [V] [TRT] Tactic: 0 Time: 48.7789 [04/18/2022-02:34:15] [V] [TRT] Fastest Tactic: 1002 Time: 8.96128 [04/18/2022-02:34:15] [V] [TRT] *************** Autotuning Reformat: Float(2097152,65536,256,1) -> Float(1:4,131072,512,2) *************** [04/18/2022-02:34:15] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:34:15] [V] [TRT] Tactic: 1002 Time: 9.66438 [04/18/2022-02:34:15] [V] [TRT] Tactic: 0 Time: 9.43565 [04/18/2022-02:34:15] [V] [TRT] Fastest Tactic: 0 Time: 9.43565 [04/18/2022-02:34:15] [V] [TRT] *************** Autotuning Reformat: Float(2097152,1,8192,32) -> Float(2097152,65536,256,1) *************** [04/18/2022-02:34:15] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:34:15] [V] [TRT] Tactic: 1002 Time: 9.00902 [04/18/2022-02:34:16] [V] [TRT] Tactic: 0 Time: 42.3151 [04/18/2022-02:34:16] [V] [TRT] Fastest Tactic: 1002 Time: 9.00902 [04/18/2022-02:34:16] [V] [TRT] *************** Autotuning Reformat: Float(2097152,1,8192,32) -> Float(524288,1:4,2048,8) *************** [04/18/2022-02:34:16] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:34:16] [V] [TRT] Tactic: 1002 Time: 8.98816 [04/18/2022-02:34:16] [V] [TRT] Tactic: 0 Time: 9.32928 [04/18/2022-02:34:16] [V] [TRT] Fastest Tactic: 1002 Time: 8.98816 [04/18/2022-02:34:16] [V] [TRT] *************** Autotuning 
Reformat: Float(2097152,1,8192,32) -> Float(65536,65536:32,256,1) *************** [04/18/2022-02:34:16] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:34:16] [V] [TRT] Tactic: 1002 Time: 9.01952 [04/18/2022-02:34:18] [V] [TRT] Tactic: 0 Time: 77.2329 [04/18/2022-02:34:18] [V] [TRT] Fastest Tactic: 1002 Time: 9.01952 [04/18/2022-02:34:18] [V] [TRT] *************** Autotuning Reformat: Float(2097152,1,8192,32) -> Float(1:4,131072,512,2) *************** [04/18/2022-02:34:18] [V] [TRT] *************** Autotuning Reformat: Float(524288,1:4,2048,8) -> Float(2097152,65536,256,1) *************** [04/18/2022-02:34:18] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:34:18] [V] [TRT] Tactic: 1002 Time: 9.64762 [04/18/2022-02:34:19] [V] [TRT] Tactic: 0 Time: 42.3688 [04/18/2022-02:34:19] [V] [TRT] Fastest Tactic: 1002 Time: 9.64762 [04/18/2022-02:34:19] [V] [TRT] *************** Autotuning Reformat: Float(524288,1:4,2048,8) -> Float(2097152,1,8192,32) *************** [04/18/2022-02:34:19] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:34:19] [V] [TRT] Tactic: 1002 Time: 9.66515 [04/18/2022-02:34:19] [V] [TRT] Tactic: 0 Time: 9.69344 [04/18/2022-02:34:19] [V] [TRT] Fastest Tactic: 1002 Time: 9.66515 [04/18/2022-02:34:19] [V] [TRT] *************** Autotuning Reformat: Float(524288,1:4,2048,8) -> Float(65536,65536:32,256,1) *************** [04/18/2022-02:34:19] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:34:19] [V] [TRT] Tactic: 
1002 Time: 9.56173 [04/18/2022-02:34:20] [V] [TRT] Tactic: 0 Time: 77.5355 [04/18/2022-02:34:20] [V] [TRT] Fastest Tactic: 1002 Time: 9.56173 [04/18/2022-02:34:20] [V] [TRT] *************** Autotuning Reformat: Float(524288,1:4,2048,8) -> Float(1:4,131072,512,2) *************** [04/18/2022-02:34:20] [V] [TRT] *************** Autotuning Reformat: Float(65536,65536:32,256,1) -> Float(2097152,65536,256,1) *************** [04/18/2022-02:34:20] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:34:21] [V] [TRT] Tactic: 1002 Time: 9.6576 [04/18/2022-02:34:21] [V] [TRT] Tactic: 0 Time: 42.4358 [04/18/2022-02:34:21] [V] [TRT] Fastest Tactic: 1002 Time: 9.6576 [04/18/2022-02:34:21] [V] [TRT] *************** Autotuning Reformat: Float(65536,65536:32,256,1) -> Float(2097152,1,8192,32) *************** [04/18/2022-02:34:21] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:34:21] [V] [TRT] Tactic: 1002 Time: 9.02336 [04/18/2022-02:34:22] [V] [TRT] Tactic: 0 Time: 9.39264 [04/18/2022-02:34:22] [V] [TRT] Fastest Tactic: 1002 Time: 9.02336 [04/18/2022-02:34:22] [V] [TRT] *************** Autotuning Reformat: Float(65536,65536:32,256,1) -> Float(524288,1:4,2048,8) *************** [04/18/2022-02:34:22] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:34:22] [V] [TRT] Tactic: 1002 Time: 9.05062 [04/18/2022-02:34:22] [V] [TRT] Tactic: 0 Time: 9.37408 [04/18/2022-02:34:22] [V] [TRT] Fastest Tactic: 1002 Time: 9.05062 [04/18/2022-02:34:22] [V] [TRT] *************** Autotuning Reformat: Float(65536,65536:32,256,1) -> Float(1:4,131072,512,2) *************** [04/18/2022-02:34:22] [V] 
[TRT] *************** Autotuning Reformat: Float(1:4,131072,512,2) -> Float(2097152,65536,256,1) *************** [04/18/2022-02:34:22] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:34:22] [V] [TRT] Tactic: 1002 Time: 13.4107 [04/18/2022-02:34:23] [V] [TRT] Tactic: 0 Time: 41.1217 [04/18/2022-02:34:23] [V] [TRT] Fastest Tactic: 1002 Time: 13.4107 [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,131072,512,2) -> Float(2097152,1,8192,32) *************** [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,131072,512,2) -> Float(524288,1:4,2048,8) *************** [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,131072,512,2) -> Float(65536,65536:32,256,1) *************** [04/18/2022-02:34:23] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(2097152,1,8192,32) -> Float(2097152,65536,256,1) *************** [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(524288,1:4,2048,8) -> Float(2097152,65536,256,1) *************** [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(65536,65536:32,256,1) -> Float(2097152,65536,256,1) *************** [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,131072,512,2) -> Float(2097152,65536,256,1) *************** [04/18/2022-02:34:23] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(32,1,1,1) -> Float(32,1,32,32) *************** [04/18/2022-02:34:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_squeeze/Mean:0) (Reformat) [04/18/2022-02:34:23] [V] [TRT] Tactic: 1002 Time: 0.044032 
[04/18/2022-02:34:23] [V] [TRT] Tactic: 0 Time: 0.017408 [04/18/2022-02:34:23] [V] [TRT] Fastest Tactic: 0 Time: 0.017408 [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(32,1,1,1) -> Float(8,1:4,8,8) *************** [04/18/2022-02:34:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_squeeze/Mean:0) (Reformat) [04/18/2022-02:34:23] [V] [TRT] Tactic: 1002 Time: 0.044288 [04/18/2022-02:34:23] [V] [TRT] Tactic: 0 Time: 0.017792 [04/18/2022-02:34:23] [V] [TRT] Fastest Tactic: 0 Time: 0.017792 [04/18/2022-02:34:23] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(32,1,1,1) -> Float(32,1,32,32) *************** [04/18/2022-02:34:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_squeeze/Mean:0 -> ) (Reformat) [04/18/2022-02:34:23] [V] [TRT] Tactic: 1002 Time: 0.044032 [04/18/2022-02:34:23] [V] [TRT] Tactic: 0 Time: 0.016896 [04/18/2022-02:34:23] [V] [TRT] Fastest Tactic: 0 Time: 0.016896 [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(32,1,1,1) -> Float(8,1:4,8,8) *************** [04/18/2022-02:34:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_squeeze/Mean:0 -> ) (Reformat) [04/18/2022-02:34:23] [V] [TRT] Tactic: 1002 Time: 0.044032 [04/18/2022-02:34:23] [V] [TRT] Tactic: 0 Time: 0.017152 [04/18/2022-02:34:23] [V] [TRT] Fastest Tactic: 0 Time: 0.017152 [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(32,1,32,32) -> Float(32,1,1,1) *************** [04/18/2022-02:34:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_squeeze/Mean:0 -> ) (Reformat) [04/18/2022-02:34:23] [V] 
[TRT] Tactic: 1002 Time: 0.044544 [04/18/2022-02:34:23] [V] [TRT] Tactic: 0 Time: 0.01728 [04/18/2022-02:34:23] [V] [TRT] Fastest Tactic: 0 Time: 0.01728 [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(32,1,32,32) -> Float(8,1:4,8,8) *************** [04/18/2022-02:34:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_squeeze/Mean:0 -> ) (Reformat) [04/18/2022-02:34:23] [V] [TRT] Tactic: 1002 Time: 0.045312 [04/18/2022-02:34:23] [V] [TRT] Tactic: 0 Time: 0.017536 [04/18/2022-02:34:23] [V] [TRT] Fastest Tactic: 0 Time: 0.017536 [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(8,1:4,8,8) -> Float(32,1,1,1) *************** [04/18/2022-02:34:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_squeeze/Mean:0 -> ) (Reformat) [04/18/2022-02:34:23] [V] [TRT] Tactic: 1002 Time: 0.044416 [04/18/2022-02:34:23] [V] [TRT] Tactic: 0 Time: 0.01792 [04/18/2022-02:34:23] [V] [TRT] Fastest Tactic: 0 Time: 0.01792 [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(8,1:4,8,8) -> Float(32,1,32,32) *************** [04/18/2022-02:34:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_squeeze/Mean:0 -> ) (Reformat) [04/18/2022-02:34:23] [V] [TRT] Tactic: 1002 Time: 0.04672 [04/18/2022-02:34:23] [V] [TRT] Tactic: 0 Time: 0.017408 [04/18/2022-02:34:23] [V] [TRT] Fastest Tactic: 0 Time: 0.017408 [04/18/2022-02:34:23] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(8,1,1,1) -> Float(8,1,8,8) *************** [04/18/2022-02:34:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd:0 -> ) 
(Reformat) [04/18/2022-02:34:23] [V] [TRT] Tactic: 1002 Time: 0.044032 [04/18/2022-02:34:23] [V] [TRT] Tactic: 0 Time: 0.016896 [04/18/2022-02:34:23] [V] [TRT] Fastest Tactic: 0 Time: 0.016896 [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(8,1,1,1) -> Float(2,1:4,2,2) *************** [04/18/2022-02:34:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:34:23] [V] [TRT] Tactic: 1002 Time: 0.04416 [04/18/2022-02:34:23] [V] [TRT] Tactic: 0 Time: 0.017024 [04/18/2022-02:34:23] [V] [TRT] Fastest Tactic: 0 Time: 0.017024 [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(8,1,1,1) -> Float(1,1:32,1,1) *************** [04/18/2022-02:34:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:34:23] [V] [TRT] Tactic: 1002 Time: 0.043904 [04/18/2022-02:34:23] [V] [TRT] Tactic: 0 Time: 0.017664 [04/18/2022-02:34:23] [V] [TRT] Fastest Tactic: 0 Time: 0.017664 [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(8,1,1,1) -> Float(1:4,2,2,2) *************** [04/18/2022-02:34:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:34:23] [V] [TRT] Tactic: 1002 Time: 0.029824 [04/18/2022-02:34:23] [V] [TRT] Tactic: 0 Time: 0.01728 [04/18/2022-02:34:23] [V] [TRT] Fastest Tactic: 0 Time: 0.01728 [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(8,1,8,8) -> Float(8,1,1,1) *************** [04/18/2022-02:34:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat) 
[04/18/2022-02:34:23] [V] [TRT] Tactic: 1002 Time: 0.044288 [04/18/2022-02:34:23] [V] [TRT] Tactic: 0 Time: 0.017024 [04/18/2022-02:34:23] [V] [TRT] Fastest Tactic: 0 Time: 0.017024 [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(8,1,8,8) -> Float(2,1:4,2,2) *************** [04/18/2022-02:34:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:34:23] [V] [TRT] Tactic: 1002 Time: 0.045312 [04/18/2022-02:34:23] [V] [TRT] Tactic: 0 Time: 0.017408 [04/18/2022-02:34:23] [V] [TRT] Fastest Tactic: 0 Time: 0.017408 [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(8,1,8,8) -> Float(1,1:32,1,1) *************** [04/18/2022-02:34:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:34:23] [V] [TRT] Tactic: 1002 Time: 0.04416 [04/18/2022-02:34:23] [V] [TRT] Tactic: 0 Time: 0.017664 [04/18/2022-02:34:23] [V] [TRT] Fastest Tactic: 0 Time: 0.017664 [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(8,1,8,8) -> Float(1:4,2,2,2) *************** [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(2,1:4,2,2) -> Float(8,1,1,1) *************** [04/18/2022-02:34:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:34:23] [V] [TRT] Tactic: 1002 Time: 0.045952 [04/18/2022-02:34:23] [V] [TRT] Tactic: 0 Time: 0.01728 [04/18/2022-02:34:23] [V] [TRT] Fastest Tactic: 0 Time: 0.01728 [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(2,1:4,2,2) -> Float(8,1,8,8) *************** [04/18/2022-02:34:23] [V] [TRT] --------------- Timing Runner: Optimizer 
Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:34:23] [V] [TRT] Tactic: 1002 Time: 0.0448 [04/18/2022-02:34:23] [V] [TRT] Tactic: 0 Time: 0.017152 [04/18/2022-02:34:23] [V] [TRT] Fastest Tactic: 0 Time: 0.017152 [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(2,1:4,2,2) -> Float(1,1:32,1,1) *************** [04/18/2022-02:34:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:34:23] [V] [TRT] Tactic: 1002 Time: 0.044928 [04/18/2022-02:34:23] [V] [TRT] Tactic: 0 Time: 0.017792 [04/18/2022-02:34:23] [V] [TRT] Fastest Tactic: 0 Time: 0.017792 [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(2,1:4,2,2) -> Float(1:4,2,2,2) *************** [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(8,1,1,1) *************** [04/18/2022-02:34:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:34:23] [V] [TRT] Tactic: 1002 Time: 0.04544 [04/18/2022-02:34:23] [V] [TRT] Tactic: 0 Time: 0.01728 [04/18/2022-02:34:23] [V] [TRT] Fastest Tactic: 0 Time: 0.01728 [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(8,1,8,8) *************** [04/18/2022-02:34:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:34:23] [V] [TRT] Tactic: 1002 Time: 0.04416 [04/18/2022-02:34:23] [V] [TRT] Tactic: 0 Time: 0.016896 [04/18/2022-02:34:23] [V] [TRT] Fastest Tactic: 0 Time: 0.016896 [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) 
-> Float(2,1:4,2,2) *************** [04/18/2022-02:34:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:34:23] [V] [TRT] Tactic: 1002 Time: 0.04416 [04/18/2022-02:34:23] [V] [TRT] Tactic: 0 Time: 0.017408 [04/18/2022-02:34:23] [V] [TRT] Fastest Tactic: 0 Time: 0.017408 [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(1:4,2,2,2) *************** [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(8,1,1,1) *************** [04/18/2022-02:34:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:34:23] [V] [TRT] Tactic: 1002 Time: 0.02944 [04/18/2022-02:34:23] [V] [TRT] Tactic: 0 Time: 0.017024 [04/18/2022-02:34:23] [V] [TRT] Fastest Tactic: 0 Time: 0.017024 [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(8,1,8,8) *************** [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(2,1:4,2,2) *************** [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(1,1:32,1,1) *************** [04/18/2022-02:34:23] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(8,1,1,1) -> Float(8,1,8,8) *************** [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(8,1,1,1) -> Float(2,1:4,2,2) *************** [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(8,1,8,8) -> Float(8,1,1,1) *************** [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(8,1,8,8) -> Float(2,1:4,2,2) *************** [04/18/2022-02:34:23] [V] [TRT] *************** 
Autotuning Reformat: Float(2,1:4,2,2) -> Float(8,1,1,1) *************** [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(2,1:4,2,2) -> Float(8,1,8,8) *************** [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(8,1,1,1) *************** [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(8,1,8,8) *************** [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(2,1:4,2,2) *************** [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(8,1,1,1) *************** [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(8,1,8,8) *************** [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(2,1:4,2,2) *************** [04/18/2022-02:34:23] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(32,1,1,1) -> Float(32,1,32,32) *************** [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(32,1,1,1) -> Float(8,1:4,8,8) *************** [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(32,1,1,1) -> Float(1,1:32,1,1) *************** [04/18/2022-02:34:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:34:23] [V] [TRT] Tactic: 1002 Time: 0.043904 [04/18/2022-02:34:23] [V] [TRT] Tactic: 0 Time: 0.017408 [04/18/2022-02:34:23] [V] [TRT] Fastest Tactic: 0 Time: 0.017408 [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(32,1,1,1) -> Float(1:4,2,2,2) *************** [04/18/2022-02:34:23] [V] [TRT] --------------- Timing Runner: Optimizer 
Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:34:23] [V] [TRT] Tactic: 1002 Time: 0.029952 [04/18/2022-02:34:23] [V] [TRT] Tactic: 0 Time: 0.017408 [04/18/2022-02:34:23] [V] [TRT] Fastest Tactic: 0 Time: 0.017408 [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(32,1,32,32) -> Float(32,1,1,1) *************** [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(32,1,32,32) -> Float(8,1:4,8,8) *************** [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(32,1,32,32) -> Float(1,1:32,1,1) *************** [04/18/2022-02:34:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:34:23] [V] [TRT] Tactic: 1002 Time: 0.045568 [04/18/2022-02:34:23] [V] [TRT] Tactic: 0 Time: 0.017664 [04/18/2022-02:34:23] [V] [TRT] Fastest Tactic: 0 Time: 0.017664 [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(32,1,32,32) -> Float(1:4,2,2,2) *************** [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(8,1:4,8,8) -> Float(32,1,1,1) *************** [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(8,1:4,8,8) -> Float(32,1,32,32) *************** [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(8,1:4,8,8) -> Float(1,1:32,1,1) *************** [04/18/2022-02:34:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:34:23] [V] [TRT] Tactic: 1002 Time: 0.045696 [04/18/2022-02:34:23] [V] [TRT] Tactic: 0 Time: 0.017536 [04/18/2022-02:34:23] [V] [TRT] Fastest Tactic: 0 Time: 0.017536 [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(8,1:4,8,8) 
-> Float(1:4,2,2,2) *************** [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(32,1,1,1) *************** [04/18/2022-02:34:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:34:23] [V] [TRT] Tactic: 1002 Time: 0.044288 [04/18/2022-02:34:23] [V] [TRT] Tactic: 0 Time: 0.017152 [04/18/2022-02:34:23] [V] [TRT] Fastest Tactic: 0 Time: 0.017152 [04/18/2022-02:34:23] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(32,1,32,32) *************** [04/18/2022-02:34:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:34:24] [V] [TRT] Tactic: 1002 Time: 0.0448 [04/18/2022-02:34:24] [V] [TRT] Tactic: 0 Time: 0.01728 [04/18/2022-02:34:24] [V] [TRT] Fastest Tactic: 0 Time: 0.01728 [04/18/2022-02:34:24] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(8,1:4,8,8) *************** [04/18/2022-02:34:24] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:34:24] [V] [TRT] Tactic: 1002 Time: 0.044416 [04/18/2022-02:34:24] [V] [TRT] Tactic: 0 Time: 0.017408 [04/18/2022-02:34:24] [V] [TRT] Fastest Tactic: 0 Time: 0.017408 [04/18/2022-02:34:24] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(1:4,2,2,2) *************** [04/18/2022-02:34:24] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(32,1,1,1) *************** [04/18/2022-02:34:24] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:34:24] [V] [TRT] 
Tactic: 1002 Time: 0.052992 [04/18/2022-02:34:24] [V] [TRT] Tactic: 0 Time: 0.01792 [04/18/2022-02:34:24] [V] [TRT] Fastest Tactic: 0 Time: 0.01792 [04/18/2022-02:34:24] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(32,1,32,32) *************** [04/18/2022-02:34:24] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(8,1:4,8,8) *************** [04/18/2022-02:34:24] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(1,1:32,1,1) *************** [04/18/2022-02:34:24] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:34:24] [V] [TRT] *************** Autotuning Reformat: Float(2097152,65536,256,1) -> Float(2097152,1,8192,32) *************** [04/18/2022-02:34:24] [V] [TRT] *************** Autotuning Reformat: Float(2097152,65536,256,1) -> Float(524288,1:4,2048,8) *************** [04/18/2022-02:34:24] [V] [TRT] *************** Autotuning Reformat: Float(2097152,65536,256,1) -> Float(65536,65536:32,256,1) *************** [04/18/2022-02:34:24] [V] [TRT] *************** Autotuning Reformat: Float(2097152,65536,256,1) -> Float(1:4,131072,512,2) *************** [04/18/2022-02:34:24] [V] [TRT] *************** Autotuning Reformat: Float(2097152,1,8192,32) -> Float(2097152,65536,256,1) *************** [04/18/2022-02:34:24] [V] [TRT] *************** Autotuning Reformat: Float(2097152,1,8192,32) -> Float(524288,1:4,2048,8) *************** [04/18/2022-02:34:24] [V] [TRT] *************** Autotuning Reformat: Float(2097152,1,8192,32) -> Float(65536,65536:32,256,1) *************** [04/18/2022-02:34:24] [V] [TRT] *************** Autotuning Reformat: Float(2097152,1,8192,32) -> Float(1:4,131072,512,2) *************** [04/18/2022-02:34:24] [V] [TRT] *************** Autotuning Reformat: Float(524288,1:4,2048,8) -> Float(2097152,65536,256,1) *************** [04/18/2022-02:34:24] [V] [TRT] *************** Autotuning Reformat: Float(524288,1:4,2048,8) -> Float(2097152,1,8192,32) *************** 
[04/18/2022-02:34:24] [V] [TRT] *************** Autotuning Reformat: Float(524288,1:4,2048,8) -> Float(65536,65536:32,256,1) *************** [04/18/2022-02:34:24] [V] [TRT] *************** Autotuning Reformat: Float(524288,1:4,2048,8) -> Float(1:4,131072,512,2) *************** [04/18/2022-02:34:24] [V] [TRT] *************** Autotuning Reformat: Float(65536,65536:32,256,1) -> Float(2097152,65536,256,1) *************** [04/18/2022-02:34:24] [V] [TRT] *************** Autotuning Reformat: Float(65536,65536:32,256,1) -> Float(2097152,1,8192,32) *************** [04/18/2022-02:34:24] [V] [TRT] *************** Autotuning Reformat: Float(65536,65536:32,256,1) -> Float(524288,1:4,2048,8) *************** [04/18/2022-02:34:24] [V] [TRT] *************** Autotuning Reformat: Float(65536,65536:32,256,1) -> Float(1:4,131072,512,2) *************** [04/18/2022-02:34:24] [V] [TRT] *************** Autotuning Reformat: Float(1:4,131072,512,2) -> Float(2097152,65536,256,1) *************** [04/18/2022-02:34:24] [V] [TRT] *************** Autotuning Reformat: Float(1:4,131072,512,2) -> Float(2097152,1,8192,32) *************** [04/18/2022-02:34:24] [V] [TRT] *************** Autotuning Reformat: Float(1:4,131072,512,2) -> Float(524288,1:4,2048,8) *************** [04/18/2022-02:34:24] [V] [TRT] *************** Autotuning Reformat: Float(1:4,131072,512,2) -> Float(65536,65536:32,256,1) *************** [04/18/2022-02:34:24] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:34:24] [V] [TRT] *************** Autotuning Reformat: Float(2097152,65536,256,1) -> Float(2097152,1,8192,32) *************** [04/18/2022-02:34:24] [V] [TRT] *************** Autotuning Reformat: Float(2097152,65536,256,1) -> Float(524288,1:4,2048,8) *************** [04/18/2022-02:34:24] [V] [TRT] *************** Autotuning Reformat: Float(2097152,1,8192,32) -> Float(2097152,65536,256,1) *************** [04/18/2022-02:34:24] [V] [TRT] *************** Autotuning Reformat: Float(2097152,1,8192,32) -> 
Float(524288,1:4,2048,8) *************** [04/18/2022-02:34:24] [V] [TRT] *************** Autotuning Reformat: Float(524288,1:4,2048,8) -> Float(2097152,65536,256,1) *************** [04/18/2022-02:34:24] [V] [TRT] *************** Autotuning Reformat: Float(524288,1:4,2048,8) -> Float(2097152,1,8192,32) *************** [04/18/2022-02:34:24] [V] [TRT] *************** Autotuning Reformat: Float(65536,65536:32,256,1) -> Float(2097152,65536,256,1) *************** [04/18/2022-02:34:24] [V] [TRT] *************** Autotuning Reformat: Float(65536,65536:32,256,1) -> Float(2097152,1,8192,32) *************** [04/18/2022-02:34:24] [V] [TRT] *************** Autotuning Reformat: Float(65536,65536:32,256,1) -> Float(524288,1:4,2048,8) *************** [04/18/2022-02:34:24] [V] [TRT] *************** Autotuning Reformat: Float(1:4,131072,512,2) -> Float(2097152,65536,256,1) *************** [04/18/2022-02:34:24] [V] [TRT] *************** Autotuning Reformat: Float(1:4,131072,512,2) -> Float(2097152,1,8192,32) *************** [04/18/2022-02:34:24] [V] [TRT] *************** Autotuning Reformat: Float(1:4,131072,512,2) -> Float(524288,1:4,2048,8) *************** [04/18/2022-02:34:24] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:34:24] [V] [TRT] *************** Autotuning Reformat: Float(1048576,65536,256,1) -> Float(1048576,1,4096,16) *************** [04/18/2022-02:34:24] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:34:24] [V] [TRT] Tactic: 1002 Time: 4.56077 [04/18/2022-02:34:24] [V] [TRT] Tactic: 0 Time: 4.91059 [04/18/2022-02:34:24] [V] [TRT] Fastest Tactic: 1002 Time: 4.56077 [04/18/2022-02:34:24] [V] [TRT] *************** Autotuning Reformat: Float(1048576,65536,256,1) -> Float(262144,1:4,1024,4) *************** [04/18/2022-02:34:24] [V] [TRT] --------------- Timing Runner: Optimizer 
Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:34:24] [V] [TRT] Tactic: 1002 Time: 4.46835 [04/18/2022-02:34:24] [V] [TRT] Tactic: 0 Time: 5.37702 [04/18/2022-02:34:24] [V] [TRT] Fastest Tactic: 1002 Time: 4.46835 [04/18/2022-02:34:24] [V] [TRT] *************** Autotuning Reformat: Float(1048576,1,4096,16) -> Float(1048576,65536,256,1) *************** [04/18/2022-02:34:24] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:34:24] [V] [TRT] Tactic: 1002 Time: 4.49216 [04/18/2022-02:34:24] [V] [TRT] Tactic: 0 Time: 11.7719 [04/18/2022-02:34:24] [V] [TRT] Fastest Tactic: 1002 Time: 4.49216 [04/18/2022-02:34:24] [V] [TRT] *************** Autotuning Reformat: Float(1048576,1,4096,16) -> Float(262144,1:4,1024,4) *************** [04/18/2022-02:34:24] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:34:24] [V] [TRT] Tactic: 1002 Time: 4.69043 [04/18/2022-02:34:24] [V] [TRT] Tactic: 0 Time: 5.03091 [04/18/2022-02:34:24] [V] [TRT] Fastest Tactic: 1002 Time: 4.69043 [04/18/2022-02:34:24] [V] [TRT] *************** Autotuning Reformat: Float(262144,1:4,1024,4) -> Float(1048576,65536,256,1) *************** [04/18/2022-02:34:24] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:34:25] [V] [TRT] Tactic: 1002 Time: 4.60045 [04/18/2022-02:34:25] [V] [TRT] Tactic: 0 Time: 11.7841 [04/18/2022-02:34:25] [V] [TRT] Fastest Tactic: 1002 Time: 4.60045 [04/18/2022-02:34:25] [V] [TRT] *************** Autotuning Reformat: Float(262144,1:4,1024,4) -> Float(1048576,1,4096,16) *************** 
[04/18/2022-02:34:25] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:34:25] [V] [TRT] Tactic: 1002 Time: 4.56154 [04/18/2022-02:34:25] [V] [TRT] Tactic: 0 Time: 4.56563 [04/18/2022-02:34:25] [V] [TRT] Fastest Tactic: 1002 Time: 4.56154 [04/18/2022-02:34:25] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:34:25] [V] [TRT] *************** Autotuning Reformat: Float(6291456,65536,256,1) -> Float(6291456,1,24576,96) *************** [04/18/2022-02:34:25] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:34:26] [V] [TRT] Tactic: 1002 Time: 27.5845 [04/18/2022-02:34:26] [V] [TRT] Tactic: 0 Time: 27.9685 [04/18/2022-02:34:26] [V] [TRT] Fastest Tactic: 1002 Time: 27.5845 [04/18/2022-02:34:26] [V] [TRT] *************** Autotuning Reformat: Float(6291456,65536,256,1) -> Float(1572864,1:4,6144,24) *************** [04/18/2022-02:34:26] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:34:26] [V] [TRT] Tactic: 1002 Time: 27.6626 [04/18/2022-02:34:27] [V] [TRT] Tactic: 0 Time: 28.459 [04/18/2022-02:34:27] [V] [TRT] Fastest Tactic: 1002 Time: 27.6626 [04/18/2022-02:34:27] [V] [TRT] *************** Autotuning Reformat: Float(6291456,65536,256,1) -> Float(196608,65536:32,256,1) *************** [04/18/2022-02:34:27] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:34:27] [V] [TRT] Tactic: 1002 Time: 27.2818 [04/18/2022-02:34:30] [V] [TRT] Tactic: 0 Time: 145.185 [04/18/2022-02:34:30] [V] [TRT] Fastest Tactic: 
1002 Time: 27.2818 [04/18/2022-02:34:30] [V] [TRT] *************** Autotuning Reformat: Float(6291456,65536,256,1) -> Float(1:4,131072,512,2) *************** [04/18/2022-02:34:30] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:34:30] [V] [TRT] Tactic: 1002 Time: 28.4157 [04/18/2022-02:34:31] [V] [TRT] Tactic: 0 Time: 28.3341 [04/18/2022-02:34:31] [V] [TRT] Fastest Tactic: 0 Time: 28.3341 [04/18/2022-02:34:31] [V] [TRT] *************** Autotuning Reformat: Float(6291456,1,24576,96) -> Float(6291456,65536,256,1) *************** [04/18/2022-02:34:31] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:34:31] [V] [TRT] Tactic: 1002 Time: 27.9212 [04/18/2022-02:34:33] [V] [TRT] Tactic: 0 Time: 125.133 [04/18/2022-02:34:33] [V] [TRT] Fastest Tactic: 1002 Time: 27.9212 [04/18/2022-02:34:33] [V] [TRT] *************** Autotuning Reformat: Float(6291456,1,24576,96) -> Float(1572864,1:4,6144,24) *************** [04/18/2022-02:34:33] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:34:34] [V] [TRT] Tactic: 1002 Time: 27.319 [04/18/2022-02:34:34] [V] [TRT] Tactic: 0 Time: 27.8607 [04/18/2022-02:34:34] [V] [TRT] Fastest Tactic: 1002 Time: 27.319 [04/18/2022-02:34:34] [V] [TRT] *************** Autotuning Reformat: Float(6291456,1,24576,96) -> Float(196608,65536:32,256,1) *************** [04/18/2022-02:34:34] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:34:35] [V] [TRT] Tactic: 1002 Time: 28.5795 [04/18/2022-02:34:39] [V] [TRT] 
Tactic: 0 Time: 252.713 [04/18/2022-02:34:39] [V] [TRT] Fastest Tactic: 1002 Time: 28.5795 [04/18/2022-02:34:39] [V] [TRT] *************** Autotuning Reformat: Float(6291456,1,24576,96) -> Float(1:4,131072,512,2) *************** [04/18/2022-02:34:39] [V] [TRT] *************** Autotuning Reformat: Float(1572864,1:4,6144,24) -> Float(6291456,65536,256,1) *************** [04/18/2022-02:34:39] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:34:40] [V] [TRT] Tactic: 1002 Time: 28.5403 [04/18/2022-02:34:42] [V] [TRT] Tactic: 0 Time: 134.074 [04/18/2022-02:34:42] [V] [TRT] Fastest Tactic: 1002 Time: 28.5403 [04/18/2022-02:34:42] [V] [TRT] *************** Autotuning Reformat: Float(1572864,1:4,6144,24) -> Float(6291456,1,24576,96) *************** [04/18/2022-02:34:42] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:34:42] [V] [TRT] Tactic: 1002 Time: 27.3938 [04/18/2022-02:34:43] [V] [TRT] Tactic: 0 Time: 28.3247 [04/18/2022-02:34:43] [V] [TRT] Fastest Tactic: 1002 Time: 27.3938 [04/18/2022-02:34:43] [V] [TRT] *************** Autotuning Reformat: Float(1572864,1:4,6144,24) -> Float(196608,65536:32,256,1) *************** [04/18/2022-02:34:43] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:34:43] [V] [TRT] Tactic: 1002 Time: 27.7912 [04/18/2022-02:34:47] [V] [TRT] Tactic: 0 Time: 239.279 [04/18/2022-02:34:47] [V] [TRT] Fastest Tactic: 1002 Time: 27.7912 [04/18/2022-02:34:47] [V] [TRT] *************** Autotuning Reformat: Float(1572864,1:4,6144,24) -> Float(1:4,131072,512,2) *************** [04/18/2022-02:34:47] [V] [TRT] *************** Autotuning Reformat: 
Float(196608,65536:32,256,1) -> Float(6291456,65536,256,1) *************** [04/18/2022-02:34:47] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:34:48] [V] [TRT] Tactic: 1002 Time: 26.7287 [04/18/2022-02:34:50] [V] [TRT] Tactic: 0 Time: 127.693 [04/18/2022-02:34:50] [V] [TRT] Fastest Tactic: 1002 Time: 26.7287 [04/18/2022-02:34:50] [V] [TRT] *************** Autotuning Reformat: Float(196608,65536:32,256,1) -> Float(6291456,1,24576,96) *************** [04/18/2022-02:34:50] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:34:50] [V] [TRT] Tactic: 1002 Time: 27.398 [04/18/2022-02:34:51] [V] [TRT] Tactic: 0 Time: 27.8967 [04/18/2022-02:34:51] [V] [TRT] Fastest Tactic: 1002 Time: 27.398 [04/18/2022-02:34:51] [V] [TRT] *************** Autotuning Reformat: Float(196608,65536:32,256,1) -> Float(1572864,1:4,6144,24) *************** [04/18/2022-02:34:51] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:34:51] [V] [TRT] Tactic: 1002 Time: 27.4529 [04/18/2022-02:34:52] [V] [TRT] Tactic: 0 Time: 28.3598 [04/18/2022-02:34:52] [V] [TRT] Fastest Tactic: 1002 Time: 27.4529 [04/18/2022-02:34:52] [V] [TRT] *************** Autotuning Reformat: Float(196608,65536:32,256,1) -> Float(1:4,131072,512,2) *************** [04/18/2022-02:34:52] [V] [TRT] *************** Autotuning Reformat: Float(1:4,131072,512,2) -> Float(6291456,65536,256,1) *************** [04/18/2022-02:34:52] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:34:52] [V] 
[TRT] Tactic: 1002 Time: 41.2404 [04/18/2022-02:34:54] [V] [TRT] Tactic: 0 Time: 124.658 [04/18/2022-02:34:54] [V] [TRT] Fastest Tactic: 1002 Time: 41.2404 [04/18/2022-02:34:54] [V] [TRT] *************** Autotuning Reformat: Float(1:4,131072,512,2) -> Float(6291456,1,24576,96) *************** [04/18/2022-02:34:54] [V] [TRT] *************** Autotuning Reformat: Float(1:4,131072,512,2) -> Float(1572864,1:4,6144,24) *************** [04/18/2022-02:34:54] [V] [TRT] *************** Autotuning Reformat: Float(1:4,131072,512,2) -> Float(196608,65536:32,256,1) *************** [04/18/2022-02:34:54] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:34:54] [V] [TRT] *************** Autotuning Reformat: Float(6291456,65536,256,1) -> Float(6291456,1,24576,96) *************** [04/18/2022-02:34:54] [V] [TRT] *************** Autotuning Reformat: Float(6291456,65536,256,1) -> Float(1572864,1:4,6144,24) *************** [04/18/2022-02:34:54] [V] [TRT] *************** Autotuning Reformat: Float(6291456,1,24576,96) -> Float(6291456,65536,256,1) *************** [04/18/2022-02:34:54] [V] [TRT] *************** Autotuning Reformat: Float(6291456,1,24576,96) -> Float(1572864,1:4,6144,24) *************** [04/18/2022-02:34:54] [V] [TRT] *************** Autotuning Reformat: Float(1572864,1:4,6144,24) -> Float(6291456,65536,256,1) *************** [04/18/2022-02:34:54] [V] [TRT] *************** Autotuning Reformat: Float(1572864,1:4,6144,24) -> Float(6291456,1,24576,96) *************** [04/18/2022-02:34:54] [V] [TRT] *************** Autotuning Reformat: Float(196608,65536:32,256,1) -> Float(6291456,65536,256,1) *************** [04/18/2022-02:34:54] [V] [TRT] *************** Autotuning Reformat: Float(196608,65536:32,256,1) -> Float(6291456,1,24576,96) *************** [04/18/2022-02:34:54] [V] [TRT] *************** Autotuning Reformat: Float(196608,65536:32,256,1) -> Float(1572864,1:4,6144,24) *************** [04/18/2022-02:34:54] [V] [TRT] *************** Autotuning Reformat: 
Float(1:4,131072,512,2) -> Float(6291456,65536,256,1) *************** [04/18/2022-02:34:54] [V] [TRT] *************** Autotuning Reformat: Float(1:4,131072,512,2) -> Float(6291456,1,24576,96) *************** [04/18/2022-02:34:54] [V] [TRT] *************** Autotuning Reformat: Float(1:4,131072,512,2) -> Float(1572864,1:4,6144,24) *************** [04/18/2022-02:34:54] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:34:54] [V] [TRT] *************** Autotuning Reformat: Float(1572864,16384,128,1) -> Float(1572864,1,12288,96) *************** [04/18/2022-02:34:54] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:34:55] [V] [TRT] Tactic: 1002 Time: 6.91136 [04/18/2022-02:34:55] [V] [TRT] Tactic: 0 Time: 6.92198 [04/18/2022-02:34:55] [V] [TRT] Fastest Tactic: 1002 Time: 6.91136 [04/18/2022-02:34:55] [V] [TRT] *************** Autotuning Reformat: Float(1572864,16384,128,1) -> Float(393216,1:4,3072,24) *************** [04/18/2022-02:34:55] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:34:55] [V] [TRT] Tactic: 1002 Time: 7.41542 [04/18/2022-02:34:55] [V] [TRT] Tactic: 0 Time: 7.27962 [04/18/2022-02:34:55] [V] [TRT] Fastest Tactic: 0 Time: 7.27962 [04/18/2022-02:34:55] [V] [TRT] *************** Autotuning Reformat: Float(1572864,16384,128,1) -> Float(49152,16384:32,128,1) *************** [04/18/2022-02:34:55] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:34:55] [V] [TRT] Tactic: 1002 Time: 6.72794 [04/18/2022-02:34:55] [V] [TRT] Tactic: 0 Time: 8.35712 [04/18/2022-02:34:55] [V] [TRT] Fastest Tactic: 1002 Time: 6.72794 
[04/18/2022-02:34:55] [V] [TRT] *************** Autotuning Reformat: Float(1572864,16384,128,1) -> Float(1:4,32768,256,2) *************** [04/18/2022-02:34:55] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:34:55] [V] [TRT] Tactic: 1002 Time: 7.03898 [04/18/2022-02:34:56] [V] [TRT] Tactic: 0 Time: 7.58336 [04/18/2022-02:34:56] [V] [TRT] Fastest Tactic: 1002 Time: 7.03898 [04/18/2022-02:34:56] [V] [TRT] *************** Autotuning Reformat: Float(1572864,1,12288,96) -> Float(1572864,16384,128,1) *************** [04/18/2022-02:34:56] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:34:56] [V] [TRT] Tactic: 1002 Time: 7.40954 [04/18/2022-02:34:56] [V] [TRT] Tactic: 0 Time: 7.33811 [04/18/2022-02:34:56] [V] [TRT] Fastest Tactic: 0 Time: 7.33811 [04/18/2022-02:34:56] [V] [TRT] *************** Autotuning Reformat: Float(1572864,1,12288,96) -> Float(393216,1:4,3072,24) *************** [04/18/2022-02:34:56] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:34:56] [V] [TRT] Tactic: 1002 Time: 6.70682 [04/18/2022-02:34:56] [V] [TRT] Tactic: 0 Time: 6.87155 [04/18/2022-02:34:56] [V] [TRT] Fastest Tactic: 1002 Time: 6.70682 [04/18/2022-02:34:56] [V] [TRT] *************** Autotuning Reformat: Float(1572864,1,12288,96) -> Float(49152,16384:32,128,1) *************** [04/18/2022-02:34:56] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:34:56] [V] [TRT] Tactic: 1002 Time: 7.31686 [04/18/2022-02:34:56] [V] [TRT] Tactic: 
0 Time: 15.612 [04/18/2022-02:34:56] [V] [TRT] Fastest Tactic: 1002 Time: 7.31686 [04/18/2022-02:34:56] [V] [TRT] *************** Autotuning Reformat: Float(1572864,1,12288,96) -> Float(1:4,32768,256,2) *************** [04/18/2022-02:34:56] [V] [TRT] *************** Autotuning Reformat: Float(393216,1:4,3072,24) -> Float(1572864,16384,128,1) *************** [04/18/2022-02:34:56] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:34:57] [V] [TRT] Tactic: 1002 Time: 6.89664 [04/18/2022-02:34:57] [V] [TRT] Tactic: 0 Time: 7.22739 [04/18/2022-02:34:57] [V] [TRT] Fastest Tactic: 1002 Time: 6.89664 [04/18/2022-02:34:57] [V] [TRT] *************** Autotuning Reformat: Float(393216,1:4,3072,24) -> Float(1572864,1,12288,96) *************** [04/18/2022-02:34:57] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:34:57] [V] [TRT] Tactic: 1002 Time: 7.30752 [04/18/2022-02:34:57] [V] [TRT] Tactic: 0 Time: 7.3911 [04/18/2022-02:34:57] [V] [TRT] Fastest Tactic: 1002 Time: 7.30752 [04/18/2022-02:34:57] [V] [TRT] *************** Autotuning Reformat: Float(393216,1:4,3072,24) -> Float(49152,16384:32,128,1) *************** [04/18/2022-02:34:57] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:34:57] [V] [TRT] Tactic: 1002 Time: 6.70106 [04/18/2022-02:34:57] [V] [TRT] Tactic: 0 Time: 15.8815 [04/18/2022-02:34:57] [V] [TRT] Fastest Tactic: 1002 Time: 6.70106 [04/18/2022-02:34:57] [V] [TRT] *************** Autotuning Reformat: Float(393216,1:4,3072,24) -> Float(1:4,32768,256,2) *************** [04/18/2022-02:34:57] [V] [TRT] *************** Autotuning Reformat: 
Float(49152,16384:32,128,1) -> Float(1572864,16384,128,1) *************** [04/18/2022-02:34:57] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:34:58] [V] [TRT] Tactic: 1002 Time: 7.376 [04/18/2022-02:34:58] [V] [TRT] Tactic: 0 Time: 8.05146 [04/18/2022-02:34:58] [V] [TRT] Fastest Tactic: 1002 Time: 7.376 [04/18/2022-02:34:58] [V] [TRT] *************** Autotuning Reformat: Float(49152,16384:32,128,1) -> Float(1572864,1,12288,96) *************** [04/18/2022-02:34:58] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:34:58] [V] [TRT] Tactic: 1002 Time: 6.88499 [04/18/2022-02:34:58] [V] [TRT] Tactic: 0 Time: 6.95219 [04/18/2022-02:34:58] [V] [TRT] Fastest Tactic: 1002 Time: 6.88499 [04/18/2022-02:34:58] [V] [TRT] *************** Autotuning Reformat: Float(49152,16384:32,128,1) -> Float(393216,1:4,3072,24) *************** [04/18/2022-02:34:58] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:34:58] [V] [TRT] Tactic: 1002 Time: 6.7543 [04/18/2022-02:34:58] [V] [TRT] Tactic: 0 Time: 7.19283 [04/18/2022-02:34:58] [V] [TRT] Fastest Tactic: 1002 Time: 6.7543 [04/18/2022-02:34:58] [V] [TRT] *************** Autotuning Reformat: Float(49152,16384:32,128,1) -> Float(1:4,32768,256,2) *************** [04/18/2022-02:34:58] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32768,256,2) -> Float(1572864,16384,128,1) *************** [04/18/2022-02:34:58] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:34:58] [V] 
[TRT] Tactic: 1002 Time: 10.6209 [04/18/2022-02:34:59] [V] [TRT] Tactic: 0 Time: 31.0548 [04/18/2022-02:34:59] [V] [TRT] Fastest Tactic: 1002 Time: 10.6209 [04/18/2022-02:34:59] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32768,256,2) -> Float(1572864,1,12288,96) *************** [04/18/2022-02:34:59] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32768,256,2) -> Float(393216,1:4,3072,24) *************** [04/18/2022-02:34:59] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32768,256,2) -> Float(49152,16384:32,128,1) *************** [04/18/2022-02:34:59] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:34:59] [V] [TRT] *************** Autotuning Reformat: Float(1572864,16384,128,1) -> Float(1572864,1,12288,96) *************** [04/18/2022-02:34:59] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:34:59] [V] [TRT] Tactic: 1002 Time: 6.70298 [04/18/2022-02:34:59] [V] [TRT] Tactic: 0 Time: 6.9024 [04/18/2022-02:34:59] [V] [TRT] Fastest Tactic: 1002 Time: 6.70298 [04/18/2022-02:34:59] [V] [TRT] *************** Autotuning Reformat: Float(1572864,16384,128,1) -> Float(393216,1:4,3072,24) *************** [04/18/2022-02:34:59] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:34:59] [V] [TRT] Tactic: 1002 Time: 6.87245 [04/18/2022-02:35:00] [V] [TRT] Tactic: 0 Time: 6.92992 [04/18/2022-02:35:00] [V] [TRT] Fastest Tactic: 1002 Time: 6.87245 [04/18/2022-02:35:00] [V] [TRT] *************** Autotuning Reformat: Float(1572864,16384,128,1) -> Float(49152,16384:32,128,1) *************** [04/18/2022-02:35:00] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:00] [V] [TRT] Tactic: 1002 Time: 7.344 [04/18/2022-02:35:00] [V] [TRT] Tactic: 0 Time: 8.76429 [04/18/2022-02:35:00] [V] [TRT] Fastest Tactic: 1002 Time: 7.344 [04/18/2022-02:35:00] [V] [TRT] *************** Autotuning Reformat: Float(1572864,16384,128,1) -> Float(1:4,32768,256,2) *************** [04/18/2022-02:35:00] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:00] [V] [TRT] Tactic: 1002 Time: 7.55661 [04/18/2022-02:35:00] [V] [TRT] Tactic: 0 Time: 7.08518 [04/18/2022-02:35:00] [V] [TRT] Fastest Tactic: 0 Time: 7.08518 [04/18/2022-02:35:00] [V] [TRT] *************** Autotuning Reformat: Float(1572864,1,12288,96) -> Float(1572864,16384,128,1) *************** [04/18/2022-02:35:00] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:00] [V] [TRT] Tactic: 1002 Time: 6.94029 [04/18/2022-02:35:00] [V] [TRT] Tactic: 0 Time: 7.2983 [04/18/2022-02:35:00] [V] [TRT] Fastest Tactic: 1002 Time: 6.94029 [04/18/2022-02:35:00] [V] [TRT] *************** Autotuning Reformat: Float(1572864,1,12288,96) -> Float(393216,1:4,3072,24) *************** [04/18/2022-02:35:00] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:01] [V] [TRT] Tactic: 1002 Time: 7.30829 [04/18/2022-02:35:01] [V] [TRT] Tactic: 0 Time: 7.44154 [04/18/2022-02:35:01] [V] [TRT] Fastest Tactic: 1002 Time: 7.30829 [04/18/2022-02:35:01] [V] [TRT] *************** Autotuning Reformat: Float(1572864,1,12288,96) -> Float(49152,16384:32,128,1) *************** [04/18/2022-02:35:01] [V] 
[TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:01] [V] [TRT] Tactic: 1002 Time: 6.7991 [04/18/2022-02:35:01] [V] [TRT] Tactic: 0 Time: 15.9625 [04/18/2022-02:35:01] [V] [TRT] Fastest Tactic: 1002 Time: 6.7991 [04/18/2022-02:35:01] [V] [TRT] *************** Autotuning Reformat: Float(1572864,1,12288,96) -> Float(1:4,32768,256,2) *************** [04/18/2022-02:35:01] [V] [TRT] *************** Autotuning Reformat: Float(393216,1:4,3072,24) -> Float(1572864,16384,128,1) *************** [04/18/2022-02:35:01] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:01] [V] [TRT] Tactic: 1002 Time: 6.88448 [04/18/2022-02:35:01] [V] [TRT] Tactic: 0 Time: 7.14931 [04/18/2022-02:35:01] [V] [TRT] Fastest Tactic: 1002 Time: 6.88448 [04/18/2022-02:35:01] [V] [TRT] *************** Autotuning Reformat: Float(393216,1:4,3072,24) -> Float(1572864,1,12288,96) *************** [04/18/2022-02:35:01] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:02] [V] [TRT] Tactic: 1002 Time: 6.80115 [04/18/2022-02:35:02] [V] [TRT] Tactic: 0 Time: 6.95488 [04/18/2022-02:35:02] [V] [TRT] Fastest Tactic: 1002 Time: 6.80115 [04/18/2022-02:35:02] [V] [TRT] *************** Autotuning Reformat: Float(393216,1:4,3072,24) -> Float(49152,16384:32,128,1) *************** [04/18/2022-02:35:02] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:02] [V] [TRT] Tactic: 1002 Time: 6.80346 [04/18/2022-02:35:02] [V] [TRT] Tactic: 0 Time: 15.9764 [04/18/2022-02:35:02] [V] 
[TRT] Fastest Tactic: 1002 Time: 6.80346 [04/18/2022-02:35:02] [V] [TRT] *************** Autotuning Reformat: Float(393216,1:4,3072,24) -> Float(1:4,32768,256,2) *************** [04/18/2022-02:35:02] [V] [TRT] *************** Autotuning Reformat: Float(49152,16384:32,128,1) -> Float(1572864,16384,128,1) *************** [04/18/2022-02:35:02] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:02] [V] [TRT] Tactic: 1002 Time: 6.84518 [04/18/2022-02:35:02] [V] [TRT] Tactic: 0 Time: 7.21882 [04/18/2022-02:35:02] [V] [TRT] Fastest Tactic: 1002 Time: 6.84518 [04/18/2022-02:35:02] [V] [TRT] *************** Autotuning Reformat: Float(49152,16384:32,128,1) -> Float(1572864,1,12288,96) *************** [04/18/2022-02:35:02] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:03] [V] [TRT] Tactic: 1002 Time: 6.75802 [04/18/2022-02:35:03] [V] [TRT] Tactic: 0 Time: 7.23251 [04/18/2022-02:35:03] [V] [TRT] Fastest Tactic: 1002 Time: 6.75802 [04/18/2022-02:35:03] [V] [TRT] *************** Autotuning Reformat: Float(49152,16384:32,128,1) -> Float(393216,1:4,3072,24) *************** [04/18/2022-02:35:03] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:03] [V] [TRT] Tactic: 1002 Time: 7.25158 [04/18/2022-02:35:03] [V] [TRT] Tactic: 0 Time: 6.89037 [04/18/2022-02:35:03] [V] [TRT] Fastest Tactic: 0 Time: 6.89037 [04/18/2022-02:35:03] [V] [TRT] *************** Autotuning Reformat: Float(49152,16384:32,128,1) -> Float(1:4,32768,256,2) *************** [04/18/2022-02:35:03] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32768,256,2) -> Float(1572864,16384,128,1) 
*************** [04/18/2022-02:35:03] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:03] [V] [TRT] Tactic: 1002 Time: 10.0241 [04/18/2022-02:35:04] [V] [TRT] Tactic: 0 Time: 30.6307 [04/18/2022-02:35:04] [V] [TRT] Fastest Tactic: 1002 Time: 10.0241 [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32768,256,2) -> Float(1572864,1,12288,96) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32768,256,2) -> Float(393216,1:4,3072,24) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32768,256,2) -> Float(49152,16384:32,128,1) *************** [04/18/2022-02:35:04] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(1572864,1,12288,96) -> Float(1572864,16384,128,1) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(393216,1:4,3072,24) -> Float(1572864,16384,128,1) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(49152,16384:32,128,1) -> Float(1572864,16384,128,1) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32768,256,2) -> Float(1572864,16384,128,1) *************** [04/18/2022-02:35:04] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(96,1,1,1) -> Float(96,1,96,96) *************** [04/18/2022-02:35:04] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_squeeze/Mean:0) (Reformat) [04/18/2022-02:35:04] [V] [TRT] Tactic: 1002 Time: 0.044544 [04/18/2022-02:35:04] [V] [TRT] Tactic: 0 Time: 0.017536 [04/18/2022-02:35:04] [V] [TRT] Fastest Tactic: 0 
Time: 0.017536 [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(96,1,1,1) -> Float(24,1:4,24,24) *************** [04/18/2022-02:35:04] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_squeeze/Mean:0) (Reformat) [04/18/2022-02:35:04] [V] [TRT] Tactic: 1002 Time: 0.044544 [04/18/2022-02:35:04] [V] [TRT] Tactic: 0 Time: 0.017536 [04/18/2022-02:35:04] [V] [TRT] Fastest Tactic: 0 Time: 0.017536 [04/18/2022-02:35:04] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(96,1,1,1) -> Float(96,1,96,96) *************** [04/18/2022-02:35:04] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_squeeze/Mean:0 -> ) (Reformat) [04/18/2022-02:35:04] [V] [TRT] Tactic: 1002 Time: 0.0448 [04/18/2022-02:35:04] [V] [TRT] Tactic: 0 Time: 0.017152 [04/18/2022-02:35:04] [V] [TRT] Fastest Tactic: 0 Time: 0.017152 [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(96,1,1,1) -> Float(24,1:4,24,24) *************** [04/18/2022-02:35:04] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_squeeze/Mean:0 -> ) (Reformat) [04/18/2022-02:35:04] [V] [TRT] Tactic: 1002 Time: 0.045184 [04/18/2022-02:35:04] [V] [TRT] Tactic: 0 Time: 0.017536 [04/18/2022-02:35:04] [V] [TRT] Fastest Tactic: 0 Time: 0.017536 [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(96,1,96,96) -> Float(96,1,1,1) *************** [04/18/2022-02:35:04] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_squeeze/Mean:0 -> ) (Reformat) [04/18/2022-02:35:04] [V] [TRT] Tactic: 1002 Time: 0.044032 [04/18/2022-02:35:04] [V] [TRT] Tactic: 0 Time: 0.01728 
[04/18/2022-02:35:04] [V] [TRT] Fastest Tactic: 0 Time: 0.01728 [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(96,1,96,96) -> Float(24,1:4,24,24) *************** [04/18/2022-02:35:04] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_squeeze/Mean:0 -> ) (Reformat) [04/18/2022-02:35:04] [V] [TRT] Tactic: 1002 Time: 0.045952 [04/18/2022-02:35:04] [V] [TRT] Tactic: 0 Time: 0.017408 [04/18/2022-02:35:04] [V] [TRT] Fastest Tactic: 0 Time: 0.017408 [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(24,1:4,24,24) -> Float(96,1,1,1) *************** [04/18/2022-02:35:04] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_squeeze/Mean:0 -> ) (Reformat) [04/18/2022-02:35:04] [V] [TRT] Tactic: 1002 Time: 0.045696 [04/18/2022-02:35:04] [V] [TRT] Tactic: 0 Time: 0.01728 [04/18/2022-02:35:04] [V] [TRT] Fastest Tactic: 0 Time: 0.01728 [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(24,1:4,24,24) -> Float(96,1,96,96) *************** [04/18/2022-02:35:04] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_squeeze/Mean:0 -> ) (Reformat) [04/18/2022-02:35:04] [V] [TRT] Tactic: 1002 Time: 0.045824 [04/18/2022-02:35:04] [V] [TRT] Tactic: 0 Time: 0.017408 [04/18/2022-02:35:04] [V] [TRT] Fastest Tactic: 0 Time: 0.017408 [04/18/2022-02:35:04] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(4,1,1,1) -> Float(4,1,4,4) *************** [04/18/2022-02:35:04] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:35:04] [V] [TRT] Tactic: 1002 Time: 0.045952 
[04/18/2022-02:35:04] [V] [TRT] Tactic: 0 Time: 0.016768 [04/18/2022-02:35:04] [V] [TRT] Fastest Tactic: 0 Time: 0.016768 [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(4,1,1,1) -> Float(1,1:4,1,1) *************** [04/18/2022-02:35:04] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:35:04] [V] [TRT] Tactic: 1002 Time: 0.044032 [04/18/2022-02:35:04] [V] [TRT] Tactic: 0 Time: 0.017152 [04/18/2022-02:35:04] [V] [TRT] Fastest Tactic: 0 Time: 0.017152 [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(4,1,1,1) -> Float(1,1:32,1,1) *************** [04/18/2022-02:35:04] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:35:04] [V] [TRT] Tactic: 1002 Time: 0.044032 [04/18/2022-02:35:04] [V] [TRT] Tactic: 0 Time: 0.017664 [04/18/2022-02:35:04] [V] [TRT] Fastest Tactic: 0 Time: 0.017664 [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(4,1,1,1) -> Float(1:4,2,2,2) *************** [04/18/2022-02:35:04] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:35:04] [V] [TRT] Tactic: 1002 Time: 0.029568 [04/18/2022-02:35:04] [V] [TRT] Tactic: 0 Time: 0.017152 [04/18/2022-02:35:04] [V] [TRT] Fastest Tactic: 0 Time: 0.017152 [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(4,1,4,4) -> Float(4,1,1,1) *************** [04/18/2022-02:35:04] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:35:04] [V] [TRT] Tactic: 1002 Time: 0.044416 
[04/18/2022-02:35:04] [V] [TRT] Tactic: 0 Time: 0.017024 [04/18/2022-02:35:04] [V] [TRT] Fastest Tactic: 0 Time: 0.017024 [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(4,1,4,4) -> Float(1,1:4,1,1) *************** [04/18/2022-02:35:04] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:35:04] [V] [TRT] Tactic: 1002 Time: 0.045312 [04/18/2022-02:35:04] [V] [TRT] Tactic: 0 Time: 0.017408 [04/18/2022-02:35:04] [V] [TRT] Fastest Tactic: 0 Time: 0.017408 [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(4,1,4,4) -> Float(1,1:32,1,1) *************** [04/18/2022-02:35:04] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:35:04] [V] [TRT] Tactic: 1002 Time: 0.04544 [04/18/2022-02:35:04] [V] [TRT] Tactic: 0 Time: 0.017664 [04/18/2022-02:35:04] [V] [TRT] Fastest Tactic: 0 Time: 0.017664 [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(4,1,4,4) -> Float(1:4,2,2,2) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(1,1:4,1,1) -> Float(4,1,1,1) *************** [04/18/2022-02:35:04] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:35:04] [V] [TRT] Tactic: 1002 Time: 0.04544 [04/18/2022-02:35:04] [V] [TRT] Tactic: 0 Time: 0.016768 [04/18/2022-02:35:04] [V] [TRT] Fastest Tactic: 0 Time: 0.016768 [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(1,1:4,1,1) -> Float(4,1,4,4) *************** [04/18/2022-02:35:04] [V] [TRT] --------------- Timing Runner: Optimizer 
Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:35:04] [V] [TRT] Tactic: 1002 Time: 0.044544 [04/18/2022-02:35:04] [V] [TRT] Tactic: 0 Time: 0.016768 [04/18/2022-02:35:04] [V] [TRT] Fastest Tactic: 0 Time: 0.016768 [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(1,1:4,1,1) -> Float(1,1:32,1,1) *************** [04/18/2022-02:35:04] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:35:04] [V] [TRT] Tactic: 1002 Time: 0.044672 [04/18/2022-02:35:04] [V] [TRT] Tactic: 0 Time: 0.017536 [04/18/2022-02:35:04] [V] [TRT] Fastest Tactic: 0 Time: 0.017536 [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(1,1:4,1,1) -> Float(1:4,2,2,2) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(4,1,1,1) *************** [04/18/2022-02:35:04] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:35:04] [V] [TRT] Tactic: 1002 Time: 0.045056 [04/18/2022-02:35:04] [V] [TRT] Tactic: 0 Time: 0.017024 [04/18/2022-02:35:04] [V] [TRT] Fastest Tactic: 0 Time: 0.017024 [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(4,1,4,4) *************** [04/18/2022-02:35:04] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:35:04] [V] [TRT] Tactic: 1002 Time: 0.045184 [04/18/2022-02:35:04] [V] [TRT] Tactic: 0 Time: 0.017792 [04/18/2022-02:35:04] [V] [TRT] Fastest Tactic: 0 Time: 0.017792 [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: 
Float(1,1:32,1,1) -> Float(1,1:4,1,1) *************** [04/18/2022-02:35:04] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:35:04] [V] [TRT] Tactic: 1002 Time: 0.045184 [04/18/2022-02:35:04] [V] [TRT] Tactic: 0 Time: 0.017536 [04/18/2022-02:35:04] [V] [TRT] Fastest Tactic: 0 Time: 0.017536 [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(1:4,2,2,2) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(4,1,1,1) *************** [04/18/2022-02:35:04] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:35:04] [V] [TRT] Tactic: 1002 Time: 0.029696 [04/18/2022-02:35:04] [V] [TRT] Tactic: 0 Time: 0.016768 [04/18/2022-02:35:04] [V] [TRT] Fastest Tactic: 0 Time: 0.016768 [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(4,1,4,4) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(1,1:4,1,1) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(1,1:32,1,1) *************** [04/18/2022-02:35:04] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(4,1,1,1) -> Float(4,1,4,4) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(4,1,1,1) -> Float(1,1:4,1,1) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(4,1,4,4) -> Float(4,1,1,1) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(4,1,4,4) -> Float(1,1:4,1,1) *************** [04/18/2022-02:35:04] [V] [TRT] 
*************** Autotuning Reformat: Float(1,1:4,1,1) -> Float(4,1,1,1) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(1,1:4,1,1) -> Float(4,1,4,4) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(4,1,1,1) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(4,1,4,4) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(1,1:4,1,1) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(4,1,1,1) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(4,1,4,4) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(1,1:4,1,1) *************** [04/18/2022-02:35:04] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(96,1,1,1) -> Float(96,1,96,96) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(96,1,1,1) -> Float(24,1:4,24,24) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(96,1,1,1) -> Float(3,1:32,1,1) *************** [04/18/2022-02:35:04] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_expand_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:35:04] [V] [TRT] Tactic: 1002 Time: 0.044032 [04/18/2022-02:35:04] [V] [TRT] Tactic: 0 Time: 0.017536 [04/18/2022-02:35:04] [V] [TRT] Fastest Tactic: 0 Time: 0.017536 [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(96,1,1,1) -> Float(1:4,2,2,2) *************** [04/18/2022-02:35:04] [V] [TRT] --------------- Timing Runner: Optimizer 
Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_expand_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:35:04] [V] [TRT] Tactic: 1002 Time: 0.029696 [04/18/2022-02:35:04] [V] [TRT] Tactic: 0 Time: 0.017792 [04/18/2022-02:35:04] [V] [TRT] Fastest Tactic: 0 Time: 0.017792 [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(96,1,96,96) -> Float(96,1,1,1) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(96,1,96,96) -> Float(24,1:4,24,24) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(96,1,96,96) -> Float(3,1:32,1,1) *************** [04/18/2022-02:35:04] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_expand_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:35:04] [V] [TRT] Tactic: 1002 Time: 0.044032 [04/18/2022-02:35:04] [V] [TRT] Tactic: 0 Time: 0.017792 [04/18/2022-02:35:04] [V] [TRT] Fastest Tactic: 0 Time: 0.017792 [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(96,1,96,96) -> Float(1:4,2,2,2) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(24,1:4,24,24) -> Float(96,1,1,1) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(24,1:4,24,24) -> Float(96,1,96,96) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(24,1:4,24,24) -> Float(3,1:32,1,1) *************** [04/18/2022-02:35:04] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_expand_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:35:04] [V] [TRT] Tactic: 1002 Time: 0.044032 [04/18/2022-02:35:04] [V] [TRT] Tactic: 0 Time: 0.017536 [04/18/2022-02:35:04] [V] [TRT] Fastest Tactic: 0 Time: 0.017536 [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: 
Float(24,1:4,24,24) -> Float(1:4,2,2,2) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(3,1:32,1,1) -> Float(96,1,1,1) *************** [04/18/2022-02:35:04] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_expand_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:35:04] [V] [TRT] Tactic: 1002 Time: 0.044416 [04/18/2022-02:35:04] [V] [TRT] Tactic: 0 Time: 0.017536 [04/18/2022-02:35:04] [V] [TRT] Fastest Tactic: 0 Time: 0.017536 [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(3,1:32,1,1) -> Float(96,1,96,96) *************** [04/18/2022-02:35:04] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_expand_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:35:04] [V] [TRT] Tactic: 1002 Time: 0.044928 [04/18/2022-02:35:04] [V] [TRT] Tactic: 0 Time: 0.017536 [04/18/2022-02:35:04] [V] [TRT] Fastest Tactic: 0 Time: 0.017536 [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(3,1:32,1,1) -> Float(24,1:4,24,24) *************** [04/18/2022-02:35:04] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_expand_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:35:04] [V] [TRT] Tactic: 1002 Time: 0.044544 [04/18/2022-02:35:04] [V] [TRT] Tactic: 0 Time: 0.017152 [04/18/2022-02:35:04] [V] [TRT] Fastest Tactic: 0 Time: 0.017152 [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(3,1:32,1,1) -> Float(1:4,2,2,2) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(96,1,1,1) *************** [04/18/2022-02:35:04] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/se_expand_conv2d/BiasAdd:0 -> ) (Reformat) 
[04/18/2022-02:35:04] [V] [TRT] Tactic: 1002 Time: 0.054144 [04/18/2022-02:35:04] [V] [TRT] Tactic: 0 Time: 0.018176 [04/18/2022-02:35:04] [V] [TRT] Fastest Tactic: 0 Time: 0.018176 [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(96,1,96,96) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(24,1:4,24,24) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(3,1:32,1,1) *************** [04/18/2022-02:35:04] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(1572864,16384,128,1) -> Float(1572864,1,12288,96) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(1572864,16384,128,1) -> Float(393216,1:4,3072,24) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(1572864,16384,128,1) -> Float(49152,16384:32,128,1) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(1572864,16384,128,1) -> Float(1:4,32768,256,2) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(1572864,1,12288,96) -> Float(1572864,16384,128,1) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(1572864,1,12288,96) -> Float(393216,1:4,3072,24) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(1572864,1,12288,96) -> Float(49152,16384:32,128,1) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(1572864,1,12288,96) -> Float(1:4,32768,256,2) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(393216,1:4,3072,24) -> Float(1572864,16384,128,1) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(393216,1:4,3072,24) -> 
Float(1572864,1,12288,96) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(393216,1:4,3072,24) -> Float(49152,16384:32,128,1) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(393216,1:4,3072,24) -> Float(1:4,32768,256,2) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(49152,16384:32,128,1) -> Float(1572864,16384,128,1) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(49152,16384:32,128,1) -> Float(1572864,1,12288,96) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(49152,16384:32,128,1) -> Float(393216,1:4,3072,24) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(49152,16384:32,128,1) -> Float(1:4,32768,256,2) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32768,256,2) -> Float(1572864,16384,128,1) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32768,256,2) -> Float(1572864,1,12288,96) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32768,256,2) -> Float(393216,1:4,3072,24) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32768,256,2) -> Float(49152,16384:32,128,1) *************** [04/18/2022-02:35:04] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(1572864,16384,128,1) -> Float(1572864,1,12288,96) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(1572864,16384,128,1) -> Float(393216,1:4,3072,24) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(1572864,1,12288,96) -> Float(1572864,16384,128,1) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning 
Reformat: Float(1572864,1,12288,96) -> Float(393216,1:4,3072,24) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(393216,1:4,3072,24) -> Float(1572864,16384,128,1) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(393216,1:4,3072,24) -> Float(1572864,1,12288,96) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(49152,16384:32,128,1) -> Float(1572864,16384,128,1) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(49152,16384:32,128,1) -> Float(1572864,1,12288,96) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(49152,16384:32,128,1) -> Float(393216,1:4,3072,24) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32768,256,2) -> Float(1572864,16384,128,1) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32768,256,2) -> Float(1572864,1,12288,96) *************** [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32768,256,2) -> Float(393216,1:4,3072,24) *************** [04/18/2022-02:35:04] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(393216,16384,128,1) -> Float(393216,1,3072,24) *************** [04/18/2022-02:35:04] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/project_bn/FusedBatchNormV3:0) (Reformat) [04/18/2022-02:35:04] [V] [TRT] Tactic: 1002 Time: 2.45376 [04/18/2022-02:35:04] [V] [TRT] Tactic: 0 Time: 1.8016 [04/18/2022-02:35:04] [V] [TRT] Fastest Tactic: 0 Time: 1.8016 [04/18/2022-02:35:04] [V] [TRT] *************** Autotuning Reformat: Float(393216,16384,128,1) -> Float(98304,1:4,768,6) *************** [04/18/2022-02:35:04] [V] [TRT] --------------- Timing Runner: Optimizer 
Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/project_bn/FusedBatchNormV3:0) (Reformat) [04/18/2022-02:35:05] [V] [TRT] Tactic: 1002 Time: 1.7504 [04/18/2022-02:35:05] [V] [TRT] Tactic: 0 Time: 2.27917 [04/18/2022-02:35:05] [V] [TRT] Fastest Tactic: 1002 Time: 1.7504 [04/18/2022-02:35:05] [V] [TRT] *************** Autotuning Reformat: Float(393216,1,3072,24) -> Float(393216,16384,128,1) *************** [04/18/2022-02:35:05] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/project_bn/FusedBatchNormV3:0) (Reformat) [04/18/2022-02:35:05] [V] [TRT] Tactic: 1002 Time: 1.71674 [04/18/2022-02:35:05] [V] [TRT] Tactic: 0 Time: 1.78496 [04/18/2022-02:35:05] [V] [TRT] Fastest Tactic: 1002 Time: 1.71674 [04/18/2022-02:35:05] [V] [TRT] *************** Autotuning Reformat: Float(393216,1,3072,24) -> Float(98304,1:4,768,6) *************** [04/18/2022-02:35:05] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/project_bn/FusedBatchNormV3:0) (Reformat) [04/18/2022-02:35:05] [V] [TRT] Tactic: 1002 Time: 1.79584 [04/18/2022-02:35:05] [V] [TRT] Tactic: 0 Time: 1.75949 [04/18/2022-02:35:05] [V] [TRT] Fastest Tactic: 0 Time: 1.75949 [04/18/2022-02:35:05] [V] [TRT] *************** Autotuning Reformat: Float(98304,1:4,768,6) -> Float(393216,16384,128,1) *************** [04/18/2022-02:35:05] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/project_bn/FusedBatchNormV3:0) (Reformat) [04/18/2022-02:35:05] [V] [TRT] Tactic: 1002 Time: 1.78227 [04/18/2022-02:35:05] [V] [TRT] Tactic: 0 Time: 1.78714 [04/18/2022-02:35:05] [V] [TRT] Fastest Tactic: 1002 Time: 1.78227 [04/18/2022-02:35:05] [V] [TRT] *************** Autotuning Reformat: Float(98304,1:4,768,6) -> Float(393216,1,3072,24) *************** 
[04/18/2022-02:35:05] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/project_bn/FusedBatchNormV3:0) (Reformat) [04/18/2022-02:35:05] [V] [TRT] Tactic: 1002 Time: 1.71046 [04/18/2022-02:35:05] [V] [TRT] Tactic: 0 Time: 2.37901 [04/18/2022-02:35:05] [V] [TRT] Fastest Tactic: 1002 Time: 1.71046 [04/18/2022-02:35:05] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:05] [V] [TRT] *************** Autotuning Reformat: Float(393216,16384,128,1) -> Float(393216,1,3072,24) *************** [04/18/2022-02:35:05] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/project_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:05] [V] [TRT] Tactic: 1002 Time: 1.75437 [04/18/2022-02:35:05] [V] [TRT] Tactic: 0 Time: 1.80288 [04/18/2022-02:35:05] [V] [TRT] Fastest Tactic: 1002 Time: 1.75437 [04/18/2022-02:35:05] [V] [TRT] *************** Autotuning Reformat: Float(393216,16384,128,1) -> Float(98304,1:4,768,6) *************** [04/18/2022-02:35:05] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/project_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:05] [V] [TRT] Tactic: 1002 Time: 1.73018 [04/18/2022-02:35:05] [V] [TRT] Tactic: 0 Time: 1.80262 [04/18/2022-02:35:05] [V] [TRT] Fastest Tactic: 1002 Time: 1.73018 [04/18/2022-02:35:05] [V] [TRT] *************** Autotuning Reformat: Float(393216,1,3072,24) -> Float(393216,16384,128,1) *************** [04/18/2022-02:35:05] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/project_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:05] [V] [TRT] Tactic: 1002 Time: 1.79098 [04/18/2022-02:35:05] [V] [TRT] Tactic: 0 Time: 2.22221 [04/18/2022-02:35:05] [V] [TRT] Fastest Tactic: 1002 Time: 
1.79098 [04/18/2022-02:35:05] [V] [TRT] *************** Autotuning Reformat: Float(393216,1,3072,24) -> Float(98304,1:4,768,6) *************** [04/18/2022-02:35:05] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/project_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:05] [V] [TRT] Tactic: 1002 Time: 2.02061 [04/18/2022-02:35:05] [V] [TRT] Tactic: 0 Time: 1.75987 [04/18/2022-02:35:05] [V] [TRT] Fastest Tactic: 0 Time: 1.75987 [04/18/2022-02:35:05] [V] [TRT] *************** Autotuning Reformat: Float(98304,1:4,768,6) -> Float(393216,16384,128,1) *************** [04/18/2022-02:35:05] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/project_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:05] [V] [TRT] Tactic: 1002 Time: 2.47488 [04/18/2022-02:35:05] [V] [TRT] Tactic: 0 Time: 1.78586 [04/18/2022-02:35:05] [V] [TRT] Fastest Tactic: 0 Time: 1.78586 [04/18/2022-02:35:05] [V] [TRT] *************** Autotuning Reformat: Float(98304,1:4,768,6) -> Float(393216,1,3072,24) *************** [04/18/2022-02:35:05] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/project_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:06] [V] [TRT] Tactic: 1002 Time: 2.46822 [04/18/2022-02:35:06] [V] [TRT] Tactic: 0 Time: 1.74899 [04/18/2022-02:35:06] [V] [TRT] Fastest Tactic: 0 Time: 1.74899 [04/18/2022-02:35:06] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:06] [V] [TRT] *************** Autotuning Reformat: Float(2359296,16384,128,1) -> Float(2359296,1,18432,144) *************** [04/18/2022-02:35:06] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:06] [V] [TRT] Tactic: 
1002 Time: 10.313 [04/18/2022-02:35:06] [V] [TRT] Tactic: 0 Time: 10.7863 [04/18/2022-02:35:06] [V] [TRT] Fastest Tactic: 1002 Time: 10.313 [04/18/2022-02:35:06] [V] [TRT] *************** Autotuning Reformat: Float(2359296,16384,128,1) -> Float(589824,1:4,4608,36) *************** [04/18/2022-02:35:06] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:06] [V] [TRT] Tactic: 1002 Time: 10.8676 [04/18/2022-02:35:06] [V] [TRT] Tactic: 0 Time: 10.6565 [04/18/2022-02:35:06] [V] [TRT] Fastest Tactic: 0 Time: 10.6565 [04/18/2022-02:35:06] [V] [TRT] *************** Autotuning Reformat: Float(2359296,16384,128,1) -> Float(81920,16384:32,128,1) *************** [04/18/2022-02:35:06] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:07] [V] [TRT] Tactic: 1002 Time: 10.6327 [04/18/2022-02:35:07] [V] [TRT] Tactic: 0 Time: 13.7668 [04/18/2022-02:35:07] [V] [TRT] Fastest Tactic: 1002 Time: 10.6327 [04/18/2022-02:35:07] [V] [TRT] *************** Autotuning Reformat: Float(2359296,16384,128,1) -> Float(1:4,32768,256,2) *************** [04/18/2022-02:35:07] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:07] [V] [TRT] Tactic: 1002 Time: 10.8993 [04/18/2022-02:35:07] [V] [TRT] Tactic: 0 Time: 10.8239 [04/18/2022-02:35:07] [V] [TRT] Fastest Tactic: 0 Time: 10.8239 [04/18/2022-02:35:07] [V] [TRT] *************** Autotuning Reformat: Float(2359296,1,18432,144) -> Float(2359296,16384,128,1) *************** [04/18/2022-02:35:07] [V] [TRT] --------------- Timing Runner: Optimizer 
Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:07] [V] [TRT] Tactic: 1002 Time: 11.1014 [04/18/2022-02:35:08] [V] [TRT] Tactic: 0 Time: 11.0684 [04/18/2022-02:35:08] [V] [TRT] Fastest Tactic: 0 Time: 11.0684 [04/18/2022-02:35:08] [V] [TRT] *************** Autotuning Reformat: Float(2359296,1,18432,144) -> Float(589824,1:4,4608,36) *************** [04/18/2022-02:35:08] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:08] [V] [TRT] Tactic: 1002 Time: 10.176 [04/18/2022-02:35:08] [V] [TRT] Tactic: 0 Time: 10.4449 [04/18/2022-02:35:08] [V] [TRT] Fastest Tactic: 1002 Time: 10.176 [04/18/2022-02:35:08] [V] [TRT] *************** Autotuning Reformat: Float(2359296,1,18432,144) -> Float(81920,16384:32,128,1) *************** [04/18/2022-02:35:08] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:08] [V] [TRT] Tactic: 1002 Time: 11.3748 [04/18/2022-02:35:09] [V] [TRT] Tactic: 0 Time: 27.1996 [04/18/2022-02:35:09] [V] [TRT] Fastest Tactic: 1002 Time: 11.3748 [04/18/2022-02:35:09] [V] [TRT] *************** Autotuning Reformat: Float(2359296,1,18432,144) -> Float(1:4,32768,256,2) *************** [04/18/2022-02:35:09] [V] [TRT] *************** Autotuning Reformat: Float(589824,1:4,4608,36) -> Float(2359296,16384,128,1) *************** [04/18/2022-02:35:09] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:09] [V] [TRT] Tactic: 1002 Time: 10.4877 [04/18/2022-02:35:09] [V] [TRT] Tactic: 0 Time: 11.4021 [04/18/2022-02:35:09] [V] [TRT] Fastest Tactic: 1002 Time: 10.4877 
[04/18/2022-02:35:09] [V] [TRT] *************** Autotuning Reformat: Float(589824,1:4,4608,36) -> Float(2359296,1,18432,144) *************** [04/18/2022-02:35:09] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:09] [V] [TRT] Tactic: 1002 Time: 10.683 [04/18/2022-02:35:10] [V] [TRT] Tactic: 0 Time: 10.9446 [04/18/2022-02:35:10] [V] [TRT] Fastest Tactic: 1002 Time: 10.683 [04/18/2022-02:35:10] [V] [TRT] *************** Autotuning Reformat: Float(589824,1:4,4608,36) -> Float(81920,16384:32,128,1) *************** [04/18/2022-02:35:10] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:10] [V] [TRT] Tactic: 1002 Time: 10.6898 [04/18/2022-02:35:10] [V] [TRT] Tactic: 0 Time: 27.5951 [04/18/2022-02:35:10] [V] [TRT] Fastest Tactic: 1002 Time: 10.6898 [04/18/2022-02:35:10] [V] [TRT] *************** Autotuning Reformat: Float(589824,1:4,4608,36) -> Float(1:4,32768,256,2) *************** [04/18/2022-02:35:10] [V] [TRT] *************** Autotuning Reformat: Float(81920,16384:32,128,1) -> Float(2359296,16384,128,1) *************** [04/18/2022-02:35:10] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:10] [V] [TRT] Tactic: 1002 Time: 10.7985 [04/18/2022-02:35:11] [V] [TRT] Tactic: 0 Time: 11.0514 [04/18/2022-02:35:11] [V] [TRT] Fastest Tactic: 1002 Time: 10.7985 [04/18/2022-02:35:11] [V] [TRT] *************** Autotuning Reformat: Float(81920,16384:32,128,1) -> Float(2359296,1,18432,144) *************** [04/18/2022-02:35:11] [V] [TRT] --------------- Timing Runner: Optimizer 
Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:11] [V] [TRT] Tactic: 1002 Time: 10.1426 [04/18/2022-02:35:11] [V] [TRT] Tactic: 0 Time: 10.5487 [04/18/2022-02:35:11] [V] [TRT] Fastest Tactic: 1002 Time: 10.1426 [04/18/2022-02:35:11] [V] [TRT] *************** Autotuning Reformat: Float(81920,16384:32,128,1) -> Float(589824,1:4,4608,36) *************** [04/18/2022-02:35:11] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:11] [V] [TRT] Tactic: 1002 Time: 10.1484 [04/18/2022-02:35:11] [V] [TRT] Tactic: 0 Time: 11.07 [04/18/2022-02:35:11] [V] [TRT] Fastest Tactic: 1002 Time: 10.1484 [04/18/2022-02:35:11] [V] [TRT] *************** Autotuning Reformat: Float(81920,16384:32,128,1) -> Float(1:4,32768,256,2) *************** [04/18/2022-02:35:11] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32768,256,2) -> Float(2359296,16384,128,1) *************** [04/18/2022-02:35:11] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:12] [V] [TRT] Tactic: 1002 Time: 15.6905 [04/18/2022-02:35:12] [V] [TRT] Tactic: 0 Time: 46.6561 [04/18/2022-02:35:12] [V] [TRT] Fastest Tactic: 1002 Time: 15.6905 [04/18/2022-02:35:12] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32768,256,2) -> Float(2359296,1,18432,144) *************** [04/18/2022-02:35:12] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32768,256,2) -> Float(589824,1:4,4608,36) *************** [04/18/2022-02:35:12] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32768,256,2) -> Float(81920,16384:32,128,1) *************** [04/18/2022-02:35:12] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:12] [V] 
[TRT] *************** Autotuning Reformat: Float(2359296,16384,128,1) -> Float(2359296,1,18432,144) *************** [04/18/2022-02:35:12] [V] [TRT] *************** Autotuning Reformat: Float(2359296,16384,128,1) -> Float(589824,1:4,4608,36) *************** [04/18/2022-02:35:12] [V] [TRT] *************** Autotuning Reformat: Float(2359296,1,18432,144) -> Float(2359296,16384,128,1) *************** [04/18/2022-02:35:12] [V] [TRT] *************** Autotuning Reformat: Float(2359296,1,18432,144) -> Float(589824,1:4,4608,36) *************** [04/18/2022-02:35:12] [V] [TRT] *************** Autotuning Reformat: Float(589824,1:4,4608,36) -> Float(2359296,16384,128,1) *************** [04/18/2022-02:35:12] [V] [TRT] *************** Autotuning Reformat: Float(589824,1:4,4608,36) -> Float(2359296,1,18432,144) *************** [04/18/2022-02:35:12] [V] [TRT] *************** Autotuning Reformat: Float(81920,16384:32,128,1) -> Float(2359296,16384,128,1) *************** [04/18/2022-02:35:12] [V] [TRT] *************** Autotuning Reformat: Float(81920,16384:32,128,1) -> Float(2359296,1,18432,144) *************** [04/18/2022-02:35:12] [V] [TRT] *************** Autotuning Reformat: Float(81920,16384:32,128,1) -> Float(589824,1:4,4608,36) *************** [04/18/2022-02:35:12] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32768,256,2) -> Float(2359296,16384,128,1) *************** [04/18/2022-02:35:12] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32768,256,2) -> Float(2359296,1,18432,144) *************** [04/18/2022-02:35:12] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32768,256,2) -> Float(589824,1:4,4608,36) *************** [04/18/2022-02:35:12] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:12] [V] [TRT] *************** Autotuning Reformat: Float(2359296,16384,128,1) -> Float(2359296,1,18432,144) *************** [04/18/2022-02:35:12] [V] [TRT] *************** Autotuning Reformat: Float(2359296,16384,128,1) -> 
Float(589824,1:4,4608,36) *************** [04/18/2022-02:35:12] [V] [TRT] *************** Autotuning Reformat: Float(2359296,16384,128,1) -> Float(81920,16384:32,128,1) *************** [04/18/2022-02:35:12] [V] [TRT] *************** Autotuning Reformat: Float(2359296,16384,128,1) -> Float(1:4,32768,256,2) *************** [04/18/2022-02:35:12] [V] [TRT] *************** Autotuning Reformat: Float(2359296,1,18432,144) -> Float(2359296,16384,128,1) *************** [04/18/2022-02:35:12] [V] [TRT] *************** Autotuning Reformat: Float(2359296,1,18432,144) -> Float(589824,1:4,4608,36) *************** [04/18/2022-02:35:12] [V] [TRT] *************** Autotuning Reformat: Float(2359296,1,18432,144) -> Float(81920,16384:32,128,1) *************** [04/18/2022-02:35:12] [V] [TRT] *************** Autotuning Reformat: Float(2359296,1,18432,144) -> Float(1:4,32768,256,2) *************** [04/18/2022-02:35:12] [V] [TRT] *************** Autotuning Reformat: Float(589824,1:4,4608,36) -> Float(2359296,16384,128,1) *************** [04/18/2022-02:35:12] [V] [TRT] *************** Autotuning Reformat: Float(589824,1:4,4608,36) -> Float(2359296,1,18432,144) *************** [04/18/2022-02:35:12] [V] [TRT] *************** Autotuning Reformat: Float(589824,1:4,4608,36) -> Float(81920,16384:32,128,1) *************** [04/18/2022-02:35:12] [V] [TRT] *************** Autotuning Reformat: Float(589824,1:4,4608,36) -> Float(1:4,32768,256,2) *************** [04/18/2022-02:35:12] [V] [TRT] *************** Autotuning Reformat: Float(81920,16384:32,128,1) -> Float(2359296,16384,128,1) *************** [04/18/2022-02:35:12] [V] [TRT] *************** Autotuning Reformat: Float(81920,16384:32,128,1) -> Float(2359296,1,18432,144) *************** [04/18/2022-02:35:12] [V] [TRT] *************** Autotuning Reformat: Float(81920,16384:32,128,1) -> Float(589824,1:4,4608,36) *************** [04/18/2022-02:35:12] [V] [TRT] *************** Autotuning Reformat: Float(81920,16384:32,128,1) -> Float(1:4,32768,256,2) 
*************** [04/18/2022-02:35:12] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32768,256,2) -> Float(2359296,16384,128,1) *************** [04/18/2022-02:35:12] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32768,256,2) -> Float(2359296,1,18432,144) *************** [04/18/2022-02:35:12] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32768,256,2) -> Float(589824,1:4,4608,36) *************** [04/18/2022-02:35:12] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32768,256,2) -> Float(81920,16384:32,128,1) *************** [04/18/2022-02:35:12] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:12] [V] [TRT] *************** Autotuning Reformat: Float(2359296,16384,128,1) -> Float(2359296,1,18432,144) *************** [04/18/2022-02:35:12] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:13] [V] [TRT] Tactic: 1002 Time: 10.9526 [04/18/2022-02:35:13] [V] [TRT] Tactic: 0 Time: 10.7348 [04/18/2022-02:35:13] [V] [TRT] Fastest Tactic: 0 Time: 10.7348 [04/18/2022-02:35:13] [V] [TRT] *************** Autotuning Reformat: Float(2359296,16384,128,1) -> Float(589824,1:4,4608,36) *************** [04/18/2022-02:35:13] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:13] [V] [TRT] Tactic: 1002 Time: 10.7645 [04/18/2022-02:35:13] [V] [TRT] Tactic: 0 Time: 10.2369 [04/18/2022-02:35:13] [V] [TRT] Fastest Tactic: 0 Time: 10.2369 [04/18/2022-02:35:13] [V] [TRT] *************** Autotuning Reformat: Float(2359296,16384,128,1) -> Float(81920,16384:32,128,1) *************** [04/18/2022-02:35:13] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_activation/mul:0) 
(Reformat) [04/18/2022-02:35:13] [V] [TRT] Tactic: 1002 Time: 10.6953 [04/18/2022-02:35:14] [V] [TRT] Tactic: 0 Time: 14.1784 [04/18/2022-02:35:14] [V] [TRT] Fastest Tactic: 1002 Time: 10.6953 [04/18/2022-02:35:14] [V] [TRT] *************** Autotuning Reformat: Float(2359296,16384,128,1) -> Float(1:4,32768,256,2) *************** [04/18/2022-02:35:14] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:14] [V] [TRT] Tactic: 1002 Time: 10.7912 [04/18/2022-02:35:14] [V] [TRT] Tactic: 0 Time: 10.2735 [04/18/2022-02:35:14] [V] [TRT] Fastest Tactic: 0 Time: 10.2735 [04/18/2022-02:35:14] [V] [TRT] *************** Autotuning Reformat: Float(2359296,1,18432,144) -> Float(2359296,16384,128,1) *************** [04/18/2022-02:35:14] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:14] [V] [TRT] Tactic: 1002 Time: 10.4805 [04/18/2022-02:35:15] [V] [TRT] Tactic: 0 Time: 11.0227 [04/18/2022-02:35:15] [V] [TRT] Fastest Tactic: 1002 Time: 10.4805 [04/18/2022-02:35:15] [V] [TRT] *************** Autotuning Reformat: Float(2359296,1,18432,144) -> Float(589824,1:4,4608,36) *************** [04/18/2022-02:35:15] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:15] [V] [TRT] Tactic: 1002 Time: 10.6967 [04/18/2022-02:35:15] [V] [TRT] Tactic: 0 Time: 10.9875 [04/18/2022-02:35:15] [V] [TRT] Fastest Tactic: 1002 Time: 10.6967 [04/18/2022-02:35:15] [V] [TRT] *************** Autotuning Reformat: Float(2359296,1,18432,144) -> Float(81920,16384:32,128,1) *************** [04/18/2022-02:35:15] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:15] [V] [TRT] Tactic: 1002 Time: 11.3435 [04/18/2022-02:35:16] [V] [TRT] Tactic: 0 Time: 27.7414 [04/18/2022-02:35:16] [V] [TRT] Fastest Tactic: 1002 Time: 11.3435 [04/18/2022-02:35:16] [V] [TRT] *************** Autotuning Reformat: Float(2359296,1,18432,144) -> Float(1:4,32768,256,2) *************** [04/18/2022-02:35:16] [V] [TRT] *************** Autotuning Reformat: Float(589824,1:4,4608,36) -> Float(2359296,16384,128,1) *************** [04/18/2022-02:35:16] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:16] [V] [TRT] Tactic: 1002 Time: 11.0024 [04/18/2022-02:35:16] [V] [TRT] Tactic: 0 Time: 11.3748 [04/18/2022-02:35:16] [V] [TRT] Fastest Tactic: 1002 Time: 11.0024 [04/18/2022-02:35:16] [V] [TRT] *************** Autotuning Reformat: Float(589824,1:4,4608,36) -> Float(2359296,1,18432,144) *************** [04/18/2022-02:35:16] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:16] [V] [TRT] Tactic: 1002 Time: 10.1926 [04/18/2022-02:35:16] [V] [TRT] Tactic: 0 Time: 10.3631 [04/18/2022-02:35:16] [V] [TRT] Fastest Tactic: 1002 Time: 10.1926 [04/18/2022-02:35:16] [V] [TRT] *************** Autotuning Reformat: Float(589824,1:4,4608,36) -> Float(81920,16384:32,128,1) *************** [04/18/2022-02:35:16] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:17] [V] [TRT] Tactic: 1002 Time: 10.7322 [04/18/2022-02:35:17] [V] [TRT] Tactic: 0 Time: 27.4506 [04/18/2022-02:35:17] [V] [TRT] Fastest Tactic: 1002 Time: 10.7322 
[04/18/2022-02:35:17] [V] [TRT] *************** Autotuning Reformat: Float(589824,1:4,4608,36) -> Float(1:4,32768,256,2) *************** [04/18/2022-02:35:17] [V] [TRT] *************** Autotuning Reformat: Float(81920,16384:32,128,1) -> Float(2359296,16384,128,1) *************** [04/18/2022-02:35:17] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:17] [V] [TRT] Tactic: 1002 Time: 10.2769 [04/18/2022-02:35:17] [V] [TRT] Tactic: 0 Time: 10.8357 [04/18/2022-02:35:17] [V] [TRT] Fastest Tactic: 1002 Time: 10.2769 [04/18/2022-02:35:17] [V] [TRT] *************** Autotuning Reformat: Float(81920,16384:32,128,1) -> Float(2359296,1,18432,144) *************** [04/18/2022-02:35:17] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:18] [V] [TRT] Tactic: 1002 Time: 10.0211 [04/18/2022-02:35:18] [V] [TRT] Tactic: 0 Time: 10.9893 [04/18/2022-02:35:18] [V] [TRT] Fastest Tactic: 1002 Time: 10.0211 [04/18/2022-02:35:18] [V] [TRT] *************** Autotuning Reformat: Float(81920,16384:32,128,1) -> Float(589824,1:4,4608,36) *************** [04/18/2022-02:35:18] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:18] [V] [TRT] Tactic: 1002 Time: 10.6856 [04/18/2022-02:35:18] [V] [TRT] Tactic: 0 Time: 11.1365 [04/18/2022-02:35:18] [V] [TRT] Fastest Tactic: 1002 Time: 10.6856 [04/18/2022-02:35:18] [V] [TRT] *************** Autotuning Reformat: Float(81920,16384:32,128,1) -> Float(1:4,32768,256,2) *************** [04/18/2022-02:35:18] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32768,256,2) -> Float(2359296,16384,128,1) *************** [04/18/2022-02:35:18] [V] 
[TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:19] [V] [TRT] Tactic: 1002 Time: 15.61 [04/18/2022-02:35:19] [V] [TRT] Tactic: 0 Time: 46.0933 [04/18/2022-02:35:19] [V] [TRT] Fastest Tactic: 1002 Time: 15.61 [04/18/2022-02:35:19] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32768,256,2) -> Float(2359296,1,18432,144) *************** [04/18/2022-02:35:19] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32768,256,2) -> Float(589824,1:4,4608,36) *************** [04/18/2022-02:35:19] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32768,256,2) -> Float(81920,16384:32,128,1) *************** [04/18/2022-02:35:19] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:19] [V] [TRT] *************** Autotuning Reformat: Float(2359296,1,18432,144) -> Float(2359296,16384,128,1) *************** [04/18/2022-02:35:19] [V] [TRT] *************** Autotuning Reformat: Float(589824,1:4,4608,36) -> Float(2359296,16384,128,1) *************** [04/18/2022-02:35:19] [V] [TRT] *************** Autotuning Reformat: Float(81920,16384:32,128,1) -> Float(2359296,16384,128,1) *************** [04/18/2022-02:35:19] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32768,256,2) -> Float(2359296,16384,128,1) *************** [04/18/2022-02:35:19] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:19] [V] [TRT] *************** Autotuning Reformat: Float(144,1,1,1) -> Float(144,1,144,144) *************** [04/18/2022-02:35:19] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_squeeze/Mean:0) (Reformat) [04/18/2022-02:35:19] [V] [TRT] Tactic: 1002 Time: 0.044032 [04/18/2022-02:35:19] [V] [TRT] Tactic: 0 Time: 0.017408 [04/18/2022-02:35:19] [V] [TRT] Fastest Tactic: 0 Time: 0.017408 [04/18/2022-02:35:19] [V] 
[TRT] *************** Autotuning Reformat: Float(144,1,1,1) -> Float(36,1:4,36,36) *************** [04/18/2022-02:35:19] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_squeeze/Mean:0) (Reformat) [04/18/2022-02:35:19] [V] [TRT] Tactic: 1002 Time: 0.04416 [04/18/2022-02:35:19] [V] [TRT] Tactic: 0 Time: 0.01792 [04/18/2022-02:35:19] [V] [TRT] Fastest Tactic: 0 Time: 0.01792 [04/18/2022-02:35:19] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:19] [V] [TRT] *************** Autotuning Reformat: Float(144,1,1,1) -> Float(144,1,144,144) *************** [04/18/2022-02:35:19] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_squeeze/Mean:0 -> ) (Reformat) [04/18/2022-02:35:19] [V] [TRT] Tactic: 1002 Time: 0.04416 [04/18/2022-02:35:19] [V] [TRT] Tactic: 0 Time: 0.017536 [04/18/2022-02:35:19] [V] [TRT] Fastest Tactic: 0 Time: 0.017536 [04/18/2022-02:35:19] [V] [TRT] *************** Autotuning Reformat: Float(144,1,1,1) -> Float(36,1:4,36,36) *************** [04/18/2022-02:35:19] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_squeeze/Mean:0 -> ) (Reformat) [04/18/2022-02:35:19] [V] [TRT] Tactic: 1002 Time: 0.04416 [04/18/2022-02:35:19] [V] [TRT] Tactic: 0 Time: 0.017664 [04/18/2022-02:35:19] [V] [TRT] Fastest Tactic: 0 Time: 0.017664 [04/18/2022-02:35:19] [V] [TRT] *************** Autotuning Reformat: Float(144,1,144,144) -> Float(144,1,1,1) *************** [04/18/2022-02:35:19] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_squeeze/Mean:0 -> ) (Reformat) [04/18/2022-02:35:19] [V] [TRT] Tactic: 1002 Time: 0.044288 [04/18/2022-02:35:19] [V] [TRT] Tactic: 0 Time: 0.017152 [04/18/2022-02:35:19] [V] [TRT] Fastest Tactic: 
0 Time: 0.017152 [04/18/2022-02:35:19] [V] [TRT] *************** Autotuning Reformat: Float(144,1,144,144) -> Float(36,1:4,36,36) *************** [04/18/2022-02:35:19] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_squeeze/Mean:0 -> ) (Reformat) [04/18/2022-02:35:19] [V] [TRT] Tactic: 1002 Time: 0.043392 [04/18/2022-02:35:19] [V] [TRT] Tactic: 0 Time: 0.017664 [04/18/2022-02:35:19] [V] [TRT] Fastest Tactic: 0 Time: 0.017664 [04/18/2022-02:35:19] [V] [TRT] *************** Autotuning Reformat: Float(36,1:4,36,36) -> Float(144,1,1,1) *************** [04/18/2022-02:35:19] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_squeeze/Mean:0 -> ) (Reformat) [04/18/2022-02:35:19] [V] [TRT] Tactic: 1002 Time: 0.044288 [04/18/2022-02:35:20] [V] [TRT] Tactic: 0 Time: 0.017664 [04/18/2022-02:35:20] [V] [TRT] Fastest Tactic: 0 Time: 0.017664 [04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(36,1:4,36,36) -> Float(144,1,144,144) *************** [04/18/2022-02:35:20] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_squeeze/Mean:0 -> ) (Reformat) [04/18/2022-02:35:20] [V] [TRT] Tactic: 1002 Time: 0.044416 [04/18/2022-02:35:20] [V] [TRT] Tactic: 0 Time: 0.017408 [04/18/2022-02:35:20] [V] [TRT] Fastest Tactic: 0 Time: 0.017408 [04/18/2022-02:35:20] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(6,1,1,1) -> Float(6,1,6,6) *************** [04/18/2022-02:35:20] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:35:20] [V] [TRT] Tactic: 1002 Time: 0.044288 [04/18/2022-02:35:20] [V] [TRT] Tactic: 0 Time: 
0.017152 [04/18/2022-02:35:20] [V] [TRT] Fastest Tactic: 0 Time: 0.017152 [04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(6,1,1,1) -> Float(2,1:4,2,2) *************** [04/18/2022-02:35:20] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:35:20] [V] [TRT] Tactic: 1002 Time: 0.045184 [04/18/2022-02:35:20] [V] [TRT] Tactic: 0 Time: 0.017792 [04/18/2022-02:35:20] [V] [TRT] Fastest Tactic: 0 Time: 0.017792 [04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(6,1,1,1) -> Float(1,1:32,1,1) *************** [04/18/2022-02:35:20] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:35:20] [V] [TRT] Tactic: 1002 Time: 0.045184 [04/18/2022-02:35:20] [V] [TRT] Tactic: 0 Time: 0.019456 [04/18/2022-02:35:20] [V] [TRT] Fastest Tactic: 0 Time: 0.019456 [04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(6,1,1,1) -> Float(1:4,2,2,2) *************** [04/18/2022-02:35:20] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:35:20] [V] [TRT] Tactic: 1002 Time: 0.030336 [04/18/2022-02:35:20] [V] [TRT] Tactic: 0 Time: 0.017152 [04/18/2022-02:35:20] [V] [TRT] Fastest Tactic: 0 Time: 0.017152 [04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(6,1,6,6) -> Float(6,1,1,1) *************** [04/18/2022-02:35:20] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:35:20] [V] [TRT] Tactic: 1002 Time: 0.045056 [04/18/2022-02:35:20] [V] [TRT] Tactic: 0 Time: 0.017152 
[04/18/2022-02:35:20] [V] [TRT] Fastest Tactic: 0 Time: 0.017152 [04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(6,1,6,6) -> Float(2,1:4,2,2) *************** [04/18/2022-02:35:20] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:35:20] [V] [TRT] Tactic: 1002 Time: 0.044032 [04/18/2022-02:35:20] [V] [TRT] Tactic: 0 Time: 0.018304 [04/18/2022-02:35:20] [V] [TRT] Fastest Tactic: 0 Time: 0.018304 [04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(6,1,6,6) -> Float(1,1:32,1,1) *************** [04/18/2022-02:35:20] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:35:20] [V] [TRT] Tactic: 1002 Time: 0.045952 [04/18/2022-02:35:20] [V] [TRT] Tactic: 0 Time: 0.018688 [04/18/2022-02:35:20] [V] [TRT] Fastest Tactic: 0 Time: 0.018688 [04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(6,1,6,6) -> Float(1:4,2,2,2) *************** [04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(2,1:4,2,2) -> Float(6,1,1,1) *************** [04/18/2022-02:35:20] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:35:20] [V] [TRT] Tactic: 1002 Time: 0.044416 [04/18/2022-02:35:20] [V] [TRT] Tactic: 0 Time: 0.017024 [04/18/2022-02:35:20] [V] [TRT] Fastest Tactic: 0 Time: 0.017024 [04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(2,1:4,2,2) -> Float(6,1,6,6) *************** [04/18/2022-02:35:20] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat) 
[04/18/2022-02:35:20] [V] [TRT] Tactic: 1002 Time: 0.044928
[04/18/2022-02:35:20] [V] [TRT] Tactic: 0 Time: 0.01728
[04/18/2022-02:35:20] [V] [TRT] Fastest Tactic: 0 Time: 0.01728
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(2,1:4,2,2) -> Float(1,1:32,1,1) ***************
[04/18/2022-02:35:20] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:20] [V] [TRT] Tactic: 1002 Time: 0.045312
[04/18/2022-02:35:20] [V] [TRT] Tactic: 0 Time: 0.018944
[04/18/2022-02:35:20] [V] [TRT] Fastest Tactic: 0 Time: 0.018944
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(2,1:4,2,2) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(6,1,1,1) ***************
[04/18/2022-02:35:20] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:20] [V] [TRT] Tactic: 1002 Time: 0.045184
[04/18/2022-02:35:20] [V] [TRT] Tactic: 0 Time: 0.01792
[04/18/2022-02:35:20] [V] [TRT] Fastest Tactic: 0 Time: 0.01792
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(6,1,6,6) ***************
[04/18/2022-02:35:20] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:20] [V] [TRT] Tactic: 1002 Time: 0.0448
[04/18/2022-02:35:20] [V] [TRT] Tactic: 0 Time: 0.017408
[04/18/2022-02:35:20] [V] [TRT] Fastest Tactic: 0 Time: 0.017408
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(2,1:4,2,2) ***************
[04/18/2022-02:35:20] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:20] [V] [TRT] Tactic: 1002 Time: 0.04416
[04/18/2022-02:35:20] [V] [TRT] Tactic: 0 Time: 0.018048
[04/18/2022-02:35:20] [V] [TRT] Fastest Tactic: 0 Time: 0.018048
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(6,1,1,1) ***************
[04/18/2022-02:35:20] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:20] [V] [TRT] Tactic: 1002 Time: 0.029696
[04/18/2022-02:35:20] [V] [TRT] Tactic: 0 Time: 0.017408
[04/18/2022-02:35:20] [V] [TRT] Fastest Tactic: 0 Time: 0.017408
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(6,1,6,6) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(2,1:4,2,2) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(1,1:32,1,1) ***************
[04/18/2022-02:35:20] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(6,1,1,1) -> Float(6,1,6,6) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(6,1,1,1) -> Float(2,1:4,2,2) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(6,1,6,6) -> Float(6,1,1,1) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(6,1,6,6) -> Float(2,1:4,2,2) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(2,1:4,2,2) -> Float(6,1,1,1) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(2,1:4,2,2) -> Float(6,1,6,6) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(6,1,1,1) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(6,1,6,6) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(2,1:4,2,2) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(6,1,1,1) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(6,1,6,6) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(2,1:4,2,2) ***************
[04/18/2022-02:35:20] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(144,1,1,1) -> Float(144,1,144,144) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(144,1,1,1) -> Float(36,1:4,36,36) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(144,1,1,1) -> Float(5,1:32,1,1) ***************
[04/18/2022-02:35:20] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_expand_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:20] [V] [TRT] Tactic: 1002 Time: 0.04416
[04/18/2022-02:35:20] [V] [TRT] Tactic: 0 Time: 0.017792
[04/18/2022-02:35:20] [V] [TRT] Fastest Tactic: 0 Time: 0.017792
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(144,1,1,1) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:20] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_expand_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:20] [V] [TRT] Tactic: 1002 Time: 0.029696
[04/18/2022-02:35:20] [V] [TRT] Tactic: 0 Time: 0.017664
[04/18/2022-02:35:20] [V] [TRT] Fastest Tactic: 0 Time: 0.017664
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(144,1,144,144) -> Float(144,1,1,1) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(144,1,144,144) -> Float(36,1:4,36,36) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(144,1,144,144) -> Float(5,1:32,1,1) ***************
[04/18/2022-02:35:20] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_expand_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:20] [V] [TRT] Tactic: 1002 Time: 0.044416
[04/18/2022-02:35:20] [V] [TRT] Tactic: 0 Time: 0.01792
[04/18/2022-02:35:20] [V] [TRT] Fastest Tactic: 0 Time: 0.01792
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(144,1,144,144) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(36,1:4,36,36) -> Float(144,1,1,1) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(36,1:4,36,36) -> Float(144,1,144,144) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(36,1:4,36,36) -> Float(5,1:32,1,1) ***************
[04/18/2022-02:35:20] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_expand_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:20] [V] [TRT] Tactic: 1002 Time: 0.044032
[04/18/2022-02:35:20] [V] [TRT] Tactic: 0 Time: 0.017408
[04/18/2022-02:35:20] [V] [TRT] Fastest Tactic: 0 Time: 0.017408
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(36,1:4,36,36) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(5,1:32,1,1) -> Float(144,1,1,1) ***************
[04/18/2022-02:35:20] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_expand_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:20] [V] [TRT] Tactic: 1002 Time: 0.044416
[04/18/2022-02:35:20] [V] [TRT] Tactic: 0 Time: 0.017664
[04/18/2022-02:35:20] [V] [TRT] Fastest Tactic: 0 Time: 0.017664
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(5,1:32,1,1) -> Float(144,1,144,144) ***************
[04/18/2022-02:35:20] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_expand_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:20] [V] [TRT] Tactic: 1002 Time: 0.043648
[04/18/2022-02:35:20] [V] [TRT] Tactic: 0 Time: 0.01728
[04/18/2022-02:35:20] [V] [TRT] Fastest Tactic: 0 Time: 0.01728
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(5,1:32,1,1) -> Float(36,1:4,36,36) ***************
[04/18/2022-02:35:20] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_expand_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:20] [V] [TRT] Tactic: 1002 Time: 0.043264
[04/18/2022-02:35:20] [V] [TRT] Tactic: 0 Time: 0.017664
[04/18/2022-02:35:20] [V] [TRT] Fastest Tactic: 0 Time: 0.017664
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(5,1:32,1,1) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(144,1,1,1) ***************
[04/18/2022-02:35:20] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_1/se_expand_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:20] [V] [TRT] Tactic: 1002 Time: 0.057344
[04/18/2022-02:35:20] [V] [TRT] Tactic: 0 Time: 0.018304
[04/18/2022-02:35:20] [V] [TRT] Fastest Tactic: 0 Time: 0.018304
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(144,1,144,144) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(36,1:4,36,36) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(5,1:32,1,1) ***************
[04/18/2022-02:35:20] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(2359296,16384,128,1) -> Float(2359296,1,18432,144) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(2359296,16384,128,1) -> Float(589824,1:4,4608,36) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(2359296,16384,128,1) -> Float(81920,16384:32,128,1) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(2359296,16384,128,1) -> Float(1:4,32768,256,2) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(2359296,1,18432,144) -> Float(2359296,16384,128,1) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(2359296,1,18432,144) -> Float(589824,1:4,4608,36) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(2359296,1,18432,144) -> Float(81920,16384:32,128,1) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(2359296,1,18432,144) -> Float(1:4,32768,256,2) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(589824,1:4,4608,36) -> Float(2359296,16384,128,1) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(589824,1:4,4608,36) -> Float(2359296,1,18432,144) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(589824,1:4,4608,36) -> Float(81920,16384:32,128,1) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(589824,1:4,4608,36) -> Float(1:4,32768,256,2) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(81920,16384:32,128,1) -> Float(2359296,16384,128,1) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(81920,16384:32,128,1) -> Float(2359296,1,18432,144) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(81920,16384:32,128,1) -> Float(589824,1:4,4608,36) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(81920,16384:32,128,1) -> Float(1:4,32768,256,2) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32768,256,2) -> Float(2359296,16384,128,1) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32768,256,2) -> Float(2359296,1,18432,144) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32768,256,2) -> Float(589824,1:4,4608,36) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32768,256,2) -> Float(81920,16384:32,128,1) ***************
[04/18/2022-02:35:20] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(2359296,16384,128,1) -> Float(2359296,1,18432,144) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(2359296,16384,128,1) -> Float(589824,1:4,4608,36) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(2359296,1,18432,144) -> Float(2359296,16384,128,1) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(2359296,1,18432,144) -> Float(589824,1:4,4608,36) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(589824,1:4,4608,36) -> Float(2359296,16384,128,1) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(589824,1:4,4608,36) -> Float(2359296,1,18432,144) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(81920,16384:32,128,1) -> Float(2359296,16384,128,1) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(81920,16384:32,128,1) -> Float(2359296,1,18432,144) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(81920,16384:32,128,1) -> Float(589824,1:4,4608,36) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32768,256,2) -> Float(2359296,16384,128,1) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32768,256,2) -> Float(2359296,1,18432,144) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32768,256,2) -> Float(589824,1:4,4608,36) ***************
[04/18/2022-02:35:20] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(393216,16384,128,1) -> Float(393216,1,3072,24) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(393216,16384,128,1) -> Float(98304,1:4,768,6) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(393216,1,3072,24) -> Float(393216,16384,128,1) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(393216,1,3072,24) -> Float(98304,1:4,768,6) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(98304,1:4,768,6) -> Float(393216,16384,128,1) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(98304,1:4,768,6) -> Float(393216,1,3072,24) ***************
[04/18/2022-02:35:20] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(393216,16384,128,1) -> Float(393216,1,3072,24) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(393216,16384,128,1) -> Float(98304,1:4,768,6) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(393216,1,3072,24) -> Float(393216,16384,128,1) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(393216,1,3072,24) -> Float(98304,1:4,768,6) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(98304,1:4,768,6) -> Float(393216,16384,128,1) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(98304,1:4,768,6) -> Float(393216,1,3072,24) ***************
[04/18/2022-02:35:20] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(2359296,16384,128,1) -> Float(2359296,1,18432,144) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(2359296,16384,128,1) -> Float(589824,1:4,4608,36) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(2359296,16384,128,1) -> Float(81920,16384:32,128,1) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(2359296,16384,128,1) -> Float(1:4,32768,256,2) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(2359296,1,18432,144) -> Float(2359296,16384,128,1) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(2359296,1,18432,144) -> Float(589824,1:4,4608,36) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(2359296,1,18432,144) -> Float(81920,16384:32,128,1) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(2359296,1,18432,144) -> Float(1:4,32768,256,2) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(589824,1:4,4608,36) -> Float(2359296,16384,128,1) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(589824,1:4,4608,36) -> Float(2359296,1,18432,144) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(589824,1:4,4608,36) -> Float(81920,16384:32,128,1) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(589824,1:4,4608,36) -> Float(1:4,32768,256,2) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(81920,16384:32,128,1) -> Float(2359296,16384,128,1) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(81920,16384:32,128,1) -> Float(2359296,1,18432,144) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(81920,16384:32,128,1) -> Float(589824,1:4,4608,36) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(81920,16384:32,128,1) -> Float(1:4,32768,256,2) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32768,256,2) -> Float(2359296,16384,128,1) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32768,256,2) -> Float(2359296,1,18432,144) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32768,256,2) -> Float(589824,1:4,4608,36) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32768,256,2) -> Float(81920,16384:32,128,1) ***************
[04/18/2022-02:35:20] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(2359296,16384,128,1) -> Float(2359296,1,18432,144) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(2359296,16384,128,1) -> Float(589824,1:4,4608,36) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(2359296,1,18432,144) -> Float(2359296,16384,128,1) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(2359296,1,18432,144) -> Float(589824,1:4,4608,36) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(589824,1:4,4608,36) -> Float(2359296,16384,128,1) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(589824,1:4,4608,36) -> Float(2359296,1,18432,144) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(81920,16384:32,128,1) -> Float(2359296,16384,128,1) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(81920,16384:32,128,1) -> Float(2359296,1,18432,144) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(81920,16384:32,128,1) -> Float(589824,1:4,4608,36) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32768,256,2) -> Float(2359296,16384,128,1) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32768,256,2) -> Float(2359296,1,18432,144) ***************
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32768,256,2) -> Float(589824,1:4,4608,36) ***************
[04/18/2022-02:35:20] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(589824,4096,64,1) -> Float(589824,1,9216,144) ***************
[04/18/2022-02:35:20] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:20] [V] [TRT] Tactic: 1002 Time: 2.57101
[04/18/2022-02:35:20] [V] [TRT] Tactic: 0 Time: 2.67712
[04/18/2022-02:35:20] [V] [TRT] Fastest Tactic: 1002 Time: 2.57101
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(589824,4096,64,1) -> Float(147456,1:4,2304,36) ***************
[04/18/2022-02:35:20] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:20] [V] [TRT] Tactic: 1002 Time: 2.60416
[04/18/2022-02:35:20] [V] [TRT] Tactic: 0 Time: 2.59264
[04/18/2022-02:35:20] [V] [TRT] Fastest Tactic: 0 Time: 2.59264
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(589824,4096,64,1) -> Float(20480,4096:32,64,1) ***************
[04/18/2022-02:35:20] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:20] [V] [TRT] Tactic: 1002 Time: 2.73011
[04/18/2022-02:35:20] [V] [TRT] Tactic: 0 Time: 3.1744
[04/18/2022-02:35:20] [V] [TRT] Fastest Tactic: 1002 Time: 2.73011
[04/18/2022-02:35:20] [V] [TRT] *************** Autotuning Reformat: Float(589824,4096,64,1) -> Float(1:4,8192,128,2) ***************
[04/18/2022-02:35:20] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:21] [V] [TRT] Tactic: 1002 Time: 2.62042
[04/18/2022-02:35:21] [V] [TRT] Tactic: 0 Time: 3.22906
[04/18/2022-02:35:21] [V] [TRT] Fastest Tactic: 1002 Time: 2.62042
[04/18/2022-02:35:21] [V] [TRT] *************** Autotuning Reformat: Float(589824,1,9216,144) -> Float(589824,4096,64,1) ***************
[04/18/2022-02:35:21] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:21] [V] [TRT] Tactic: 1002 Time: 2.63053
[04/18/2022-02:35:21] [V] [TRT] Tactic: 0 Time: 2.624
[04/18/2022-02:35:21] [V] [TRT] Fastest Tactic: 0 Time: 2.624
[04/18/2022-02:35:21] [V] [TRT] *************** Autotuning Reformat: Float(589824,1,9216,144) -> Float(147456,1:4,2304,36) ***************
[04/18/2022-02:35:21] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:21] [V] [TRT] Tactic: 1002 Time: 2.53901
[04/18/2022-02:35:21] [V] [TRT] Tactic: 0 Time: 2.6569
[04/18/2022-02:35:21] [V] [TRT] Fastest Tactic: 1002 Time: 2.53901
[04/18/2022-02:35:21] [V] [TRT] *************** Autotuning Reformat: Float(589824,1,9216,144) -> Float(20480,4096:32,64,1) ***************
[04/18/2022-02:35:21] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:21] [V] [TRT] Tactic: 1002 Time: 3.48352
[04/18/2022-02:35:21] [V] [TRT] Tactic: 0 Time: 3.71354
[04/18/2022-02:35:21] [V] [TRT] Fastest Tactic: 1002 Time: 3.48352
[04/18/2022-02:35:21] [V] [TRT] *************** Autotuning Reformat: Float(589824,1,9216,144) -> Float(1:4,8192,128,2) ***************
[04/18/2022-02:35:21] [V] [TRT] *************** Autotuning Reformat: Float(147456,1:4,2304,36) -> Float(589824,4096,64,1) ***************
[04/18/2022-02:35:21] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:21] [V] [TRT] Tactic: 1002 Time: 2.55194
[04/18/2022-02:35:21] [V] [TRT] Tactic: 0 Time: 2.72742
[04/18/2022-02:35:21] [V] [TRT] Fastest Tactic: 1002 Time: 2.55194
[04/18/2022-02:35:21] [V] [TRT] *************** Autotuning Reformat: Float(147456,1:4,2304,36) -> Float(589824,1,9216,144) ***************
[04/18/2022-02:35:21] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:21] [V] [TRT] Tactic: 1002 Time: 2.53965
[04/18/2022-02:35:21] [V] [TRT] Tactic: 0 Time: 2.65958
[04/18/2022-02:35:21] [V] [TRT] Fastest Tactic: 1002 Time: 2.53965
[04/18/2022-02:35:21] [V] [TRT] *************** Autotuning Reformat: Float(147456,1:4,2304,36) -> Float(20480,4096:32,64,1) ***************
[04/18/2022-02:35:21] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:21] [V] [TRT] Tactic: 1002 Time: 2.77568
[04/18/2022-02:35:21] [V] [TRT] Tactic: 0 Time: 4.11405
[04/18/2022-02:35:21] [V] [TRT] Fastest Tactic: 1002 Time: 2.77568
[04/18/2022-02:35:21] [V] [TRT] *************** Autotuning Reformat: Float(147456,1:4,2304,36) -> Float(1:4,8192,128,2) ***************
[04/18/2022-02:35:21] [V] [TRT] *************** Autotuning Reformat: Float(20480,4096:32,64,1) -> Float(589824,4096,64,1) ***************
[04/18/2022-02:35:21] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:21] [V] [TRT] Tactic: 1002 Time: 2.49536
[04/18/2022-02:35:21] [V] [TRT] Tactic: 0 Time: 2.57971
[04/18/2022-02:35:21] [V] [TRT] Fastest Tactic: 1002 Time: 2.49536
[04/18/2022-02:35:21] [V] [TRT] *************** Autotuning Reformat: Float(20480,4096:32,64,1) -> Float(589824,1,9216,144) ***************
[04/18/2022-02:35:21] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:22] [V] [TRT] Tactic: 1002 Time: 2.51443
[04/18/2022-02:35:22] [V] [TRT] Tactic: 0 Time: 3.14522
[04/18/2022-02:35:22] [V] [TRT] Fastest Tactic: 1002 Time: 2.51443
[04/18/2022-02:35:22] [V] [TRT] *************** Autotuning Reformat: Float(20480,4096:32,64,1) -> Float(147456,1:4,2304,36) ***************
[04/18/2022-02:35:22] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:22] [V] [TRT] Tactic: 1002 Time: 2.60826
[04/18/2022-02:35:22] [V] [TRT] Tactic: 0 Time: 2.63949
[04/18/2022-02:35:22] [V] [TRT] Fastest Tactic: 1002 Time: 2.60826
[04/18/2022-02:35:22] [V] [TRT] *************** Autotuning Reformat: Float(20480,4096:32,64,1) -> Float(1:4,8192,128,2) ***************
[04/18/2022-02:35:22] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(589824,4096,64,1) ***************
[04/18/2022-02:35:22] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:22] [V] [TRT] Tactic: 1002 Time: 3.80813
[04/18/2022-02:35:22] [V] [TRT] Tactic: 0 Time: 11.2842
[04/18/2022-02:35:22] [V] [TRT] Fastest Tactic: 1002 Time: 3.80813
[04/18/2022-02:35:22] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(589824,1,9216,144) ***************
[04/18/2022-02:35:22] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(147456,1:4,2304,36) ***************
[04/18/2022-02:35:22] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(20480,4096:32,64,1) ***************
[04/18/2022-02:35:22] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:22] [V] [TRT] *************** Autotuning Reformat: Float(589824,4096,64,1) -> Float(589824,1,9216,144) ***************
[04/18/2022-02:35:22] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:22] [V] [TRT] Tactic: 1002 Time: 2.65792
[04/18/2022-02:35:22] [V] [TRT] Tactic: 0 Time: 2.59456
[04/18/2022-02:35:22] [V] [TRT] Fastest Tactic: 0 Time: 2.59456
[04/18/2022-02:35:22] [V] [TRT] *************** Autotuning Reformat: Float(589824,4096,64,1) -> Float(147456,1:4,2304,36) ***************
[04/18/2022-02:35:22] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:22] [V] [TRT] Tactic: 1002 Time: 3.2
[04/18/2022-02:35:22] [V] [TRT] Tactic: 0 Time: 2.57459
[04/18/2022-02:35:22] [V] [TRT] Fastest Tactic: 0 Time: 2.57459
[04/18/2022-02:35:22] [V] [TRT] *************** Autotuning Reformat: Float(589824,4096,64,1) -> Float(20480,4096:32,64,1) ***************
[04/18/2022-02:35:22] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:22] [V] [TRT] Tactic: 1002 Time: 2.71821
[04/18/2022-02:35:22] [V] [TRT] Tactic: 0 Time: 3.10387
[04/18/2022-02:35:22] [V] [TRT] Fastest Tactic: 1002 Time: 2.71821
[04/18/2022-02:35:22] [V] [TRT] *************** Autotuning Reformat: Float(589824,4096,64,1) -> Float(1:4,8192,128,2) ***************
[04/18/2022-02:35:22] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:23] [V] [TRT] Tactic: 1002 Time: 3.34707
[04/18/2022-02:35:23] [V] [TRT] Tactic: 0 Time: 2.72205
[04/18/2022-02:35:23] [V] [TRT] Fastest Tactic: 0 Time: 2.72205
[04/18/2022-02:35:23] [V] [TRT] *************** Autotuning Reformat: Float(589824,1,9216,144) -> Float(589824,4096,64,1) ***************
[04/18/2022-02:35:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:23] [V] [TRT] Tactic: 1002 Time: 3.65875
[04/18/2022-02:35:23] [V] [TRT] Tactic: 0 Time: 2.7497
[04/18/2022-02:35:23] [V] [TRT] Fastest Tactic: 0 Time: 2.7497
[04/18/2022-02:35:23] [V] [TRT] *************** Autotuning Reformat: Float(589824,1,9216,144) -> Float(147456,1:4,2304,36) ***************
[04/18/2022-02:35:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:23] [V] [TRT] Tactic: 1002 Time: 2.54477
[04/18/2022-02:35:23] [V] [TRT] Tactic: 0 Time: 2.66534
[04/18/2022-02:35:23] [V] [TRT] Fastest Tactic: 1002 Time: 2.54477
[04/18/2022-02:35:23] [V] [TRT] *************** Autotuning Reformat: Float(589824,1,9216,144) -> Float(20480,4096:32,64,1) ***************
[04/18/2022-02:35:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:23] [V] [TRT] Tactic: 1002 Time: 2.78451
[04/18/2022-02:35:23] [V] [TRT] Tactic: 0 Time: 3.70816
[04/18/2022-02:35:23] [V] [TRT] Fastest Tactic: 1002 Time: 2.78451
[04/18/2022-02:35:23] [V] [TRT] *************** Autotuning Reformat: Float(589824,1,9216,144) -> Float(1:4,8192,128,2) ***************
[04/18/2022-02:35:23] [V] [TRT] *************** Autotuning Reformat: Float(147456,1:4,2304,36) -> Float(589824,4096,64,1) ***************
[04/18/2022-02:35:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:23] [V] [TRT] Tactic: 1002 Time: 3.36845
[04/18/2022-02:35:23] [V] [TRT] Tactic: 0 Time: 3.07968
[04/18/2022-02:35:23] [V] [TRT] Fastest Tactic: 0 Time: 3.07968
[04/18/2022-02:35:23] [V] [TRT] *************** Autotuning Reformat: Float(147456,1:4,2304,36) -> Float(589824,1,9216,144) ***************
[04/18/2022-02:35:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:23] [V] [TRT] Tactic: 1002 Time: 3.84384
[04/18/2022-02:35:23] [V] [TRT] Tactic: 0 Time: 2.62784
[04/18/2022-02:35:23] [V] [TRT] Fastest Tactic: 0 Time: 2.62784
[04/18/2022-02:35:23] [V] [TRT] *************** Autotuning Reformat: Float(147456,1:4,2304,36) -> Float(20480,4096:32,64,1) ***************
[04/18/2022-02:35:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:23] [V] [TRT] Tactic: 1002 Time: 2.9225
[04/18/2022-02:35:23] [V] [TRT] Tactic: 0 Time: 3.6896
[04/18/2022-02:35:23] [V] [TRT] Fastest Tactic: 1002 Time: 2.9225
[04/18/2022-02:35:23] [V] [TRT] *************** Autotuning Reformat: Float(147456,1:4,2304,36) -> Float(1:4,8192,128,2) ***************
[04/18/2022-02:35:23] [V] [TRT] *************** Autotuning Reformat: Float(20480,4096:32,64,1) -> Float(589824,4096,64,1) ***************
[04/18/2022-02:35:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:23] [V] [TRT] Tactic: 1002 Time: 3.10579
[04/18/2022-02:35:24] [V] [TRT] Tactic: 0 Time: 2.59776
[04/18/2022-02:35:24] [V] [TRT] Fastest Tactic: 0 Time: 2.59776
[04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(20480,4096:32,64,1) -> Float(589824,1,9216,144) ***************
[04/18/2022-02:35:24] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:24] [V] [TRT] Tactic: 1002 Time: 2.50458
[04/18/2022-02:35:24] [V] [TRT] Tactic: 0 Time: 3.19232
[04/18/2022-02:35:24] [V] [TRT] Fastest Tactic: 1002 Time: 2.50458
[04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(20480,4096:32,64,1) -> Float(147456,1:4,2304,36) ***************
[04/18/2022-02:35:24] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:24] [V] [TRT] Tactic: 1002 Time: 2.6048
[04/18/2022-02:35:24] [V] [TRT] Tactic: 0 Time: 2.64307
[04/18/2022-02:35:24] [V] [TRT] Fastest Tactic: 1002 Time: 2.6048
[04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(20480,4096:32,64,1) -> Float(1:4,8192,128,2) ***************
[04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(589824,4096,64,1) ***************
[04/18/2022-02:35:24] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:24] [V] [TRT] Tactic: 1002 Time: 3.81043
[04/18/2022-02:35:24] [V] [TRT] Tactic: 0 Time: 11.281
[04/18/2022-02:35:24] [V] [TRT] Fastest Tactic: 1002 Time: 3.81043
[04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(589824,1,9216,144) ***************
[04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(147456,1:4,2304,36) ***************
[04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(20480,4096:32,64,1) ***************
[04/18/2022-02:35:24] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:24] [V] [TRT] *************** Autotuning
Reformat: Float(589824,1,9216,144) -> Float(589824,4096,64,1) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(147456,1:4,2304,36) -> Float(589824,4096,64,1) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(20480,4096:32,64,1) -> Float(589824,4096,64,1) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(589824,4096,64,1) *************** [04/18/2022-02:35:24] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(144,1,1,1) -> Float(144,1,144,144) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(144,1,1,1) -> Float(36,1:4,36,36) *************** [04/18/2022-02:35:24] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(144,1,1,1) -> Float(144,1,144,144) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(144,1,1,1) -> Float(36,1:4,36,36) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(144,1,144,144) -> Float(144,1,1,1) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(144,1,144,144) -> Float(36,1:4,36,36) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(36,1:4,36,36) -> Float(144,1,1,1) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(36,1:4,36,36) -> Float(144,1,144,144) *************** [04/18/2022-02:35:24] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(6,1,1,1) -> Float(6,1,6,6) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(6,1,1,1) -> Float(2,1:4,2,2) *************** [04/18/2022-02:35:24] [V] [TRT] 
*************** Autotuning Reformat: Float(6,1,1,1) -> Float(1,1:32,1,1) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(6,1,1,1) -> Float(1:4,2,2,2) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(6,1,6,6) -> Float(6,1,1,1) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(6,1,6,6) -> Float(2,1:4,2,2) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(6,1,6,6) -> Float(1,1:32,1,1) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(6,1,6,6) -> Float(1:4,2,2,2) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(2,1:4,2,2) -> Float(6,1,1,1) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(2,1:4,2,2) -> Float(6,1,6,6) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(2,1:4,2,2) -> Float(1,1:32,1,1) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(2,1:4,2,2) -> Float(1:4,2,2,2) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(6,1,1,1) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(6,1,6,6) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(2,1:4,2,2) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(1:4,2,2,2) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(6,1,1,1) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(6,1,6,6) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> 
Float(2,1:4,2,2) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(1,1:32,1,1) *************** [04/18/2022-02:35:24] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(6,1,1,1) -> Float(6,1,6,6) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(6,1,1,1) -> Float(2,1:4,2,2) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(6,1,6,6) -> Float(6,1,1,1) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(6,1,6,6) -> Float(2,1:4,2,2) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(2,1:4,2,2) -> Float(6,1,1,1) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(2,1:4,2,2) -> Float(6,1,6,6) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(6,1,1,1) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(6,1,6,6) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(2,1:4,2,2) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(6,1,1,1) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(6,1,6,6) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(2,1:4,2,2) *************** [04/18/2022-02:35:24] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(144,1,1,1) -> Float(144,1,144,144) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(144,1,1,1) -> Float(36,1:4,36,36) 
*************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(144,1,1,1) -> Float(5,1:32,1,1) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(144,1,1,1) -> Float(1:4,2,2,2) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(144,1,144,144) -> Float(144,1,1,1) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(144,1,144,144) -> Float(36,1:4,36,36) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(144,1,144,144) -> Float(5,1:32,1,1) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(144,1,144,144) -> Float(1:4,2,2,2) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(36,1:4,36,36) -> Float(144,1,1,1) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(36,1:4,36,36) -> Float(144,1,144,144) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(36,1:4,36,36) -> Float(5,1:32,1,1) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(36,1:4,36,36) -> Float(1:4,2,2,2) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(5,1:32,1,1) -> Float(144,1,1,1) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(5,1:32,1,1) -> Float(144,1,144,144) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(5,1:32,1,1) -> Float(36,1:4,36,36) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(5,1:32,1,1) -> Float(1:4,2,2,2) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(144,1,1,1) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> 
Float(144,1,144,144) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(36,1:4,36,36) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(5,1:32,1,1) *************** [04/18/2022-02:35:24] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(589824,4096,64,1) -> Float(589824,1,9216,144) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(589824,4096,64,1) -> Float(147456,1:4,2304,36) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(589824,4096,64,1) -> Float(20480,4096:32,64,1) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(589824,4096,64,1) -> Float(1:4,8192,128,2) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(589824,1,9216,144) -> Float(589824,4096,64,1) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(589824,1,9216,144) -> Float(147456,1:4,2304,36) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(589824,1,9216,144) -> Float(20480,4096:32,64,1) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(589824,1,9216,144) -> Float(1:4,8192,128,2) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(147456,1:4,2304,36) -> Float(589824,4096,64,1) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(147456,1:4,2304,36) -> Float(589824,1,9216,144) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(147456,1:4,2304,36) -> Float(20480,4096:32,64,1) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(147456,1:4,2304,36) -> Float(1:4,8192,128,2) 
*************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(20480,4096:32,64,1) -> Float(589824,4096,64,1) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(20480,4096:32,64,1) -> Float(589824,1,9216,144) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(20480,4096:32,64,1) -> Float(147456,1:4,2304,36) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(20480,4096:32,64,1) -> Float(1:4,8192,128,2) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(589824,4096,64,1) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(589824,1,9216,144) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(147456,1:4,2304,36) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(20480,4096:32,64,1) *************** [04/18/2022-02:35:24] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(589824,4096,64,1) -> Float(589824,1,9216,144) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(589824,4096,64,1) -> Float(147456,1:4,2304,36) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(589824,1,9216,144) -> Float(589824,4096,64,1) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(589824,1,9216,144) -> Float(147456,1:4,2304,36) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(147456,1:4,2304,36) -> Float(589824,4096,64,1) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(147456,1:4,2304,36) -> Float(589824,1,9216,144) 
*************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(20480,4096:32,64,1) -> Float(589824,4096,64,1) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(20480,4096:32,64,1) -> Float(589824,1,9216,144) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(20480,4096:32,64,1) -> Float(147456,1:4,2304,36) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(589824,4096,64,1) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(589824,1,9216,144) *************** [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(147456,1:4,2304,36) *************** [04/18/2022-02:35:24] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(163840,4096,64,1) -> Float(163840,1,2560,40) *************** [04/18/2022-02:35:24] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/project_bn/FusedBatchNormV3:0) (Reformat) [04/18/2022-02:35:24] [V] [TRT] Tactic: 1002 Time: 0.69504 [04/18/2022-02:35:24] [V] [TRT] Tactic: 0 Time: 0.786048 [04/18/2022-02:35:24] [V] [TRT] Fastest Tactic: 1002 Time: 0.69504 [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(163840,4096,64,1) -> Float(40960,1:4,640,10) *************** [04/18/2022-02:35:24] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/project_bn/FusedBatchNormV3:0) (Reformat) [04/18/2022-02:35:24] [V] [TRT] Tactic: 1002 Time: 0.699264 [04/18/2022-02:35:24] [V] [TRT] Tactic: 0 Time: 0.709376 [04/18/2022-02:35:24] [V] [TRT] Fastest Tactic: 1002 Time: 0.699264 [04/18/2022-02:35:24] [V] [TRT] *************** 
Autotuning Reformat: Float(163840,1,2560,40) -> Float(163840,4096,64,1) *************** [04/18/2022-02:35:24] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/project_bn/FusedBatchNormV3:0) (Reformat) [04/18/2022-02:35:24] [V] [TRT] Tactic: 1002 Time: 2.04122 [04/18/2022-02:35:24] [V] [TRT] Tactic: 0 Time: 0.69696 [04/18/2022-02:35:24] [V] [TRT] Fastest Tactic: 0 Time: 0.69696 [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(163840,1,2560,40) -> Float(40960,1:4,640,10) *************** [04/18/2022-02:35:24] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/project_bn/FusedBatchNormV3:0) (Reformat) [04/18/2022-02:35:24] [V] [TRT] Tactic: 1002 Time: 0.764416 [04/18/2022-02:35:24] [V] [TRT] Tactic: 0 Time: 0.768896 [04/18/2022-02:35:24] [V] [TRT] Fastest Tactic: 1002 Time: 0.764416 [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(40960,1:4,640,10) -> Float(163840,4096,64,1) *************** [04/18/2022-02:35:24] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/project_bn/FusedBatchNormV3:0) (Reformat) [04/18/2022-02:35:24] [V] [TRT] Tactic: 1002 Time: 0.690432 [04/18/2022-02:35:24] [V] [TRT] Tactic: 0 Time: 0.71488 [04/18/2022-02:35:24] [V] [TRT] Fastest Tactic: 1002 Time: 0.690432 [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(40960,1:4,640,10) -> Float(163840,1,2560,40) *************** [04/18/2022-02:35:24] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/project_bn/FusedBatchNormV3:0) (Reformat) [04/18/2022-02:35:24] [V] [TRT] Tactic: 1002 Time: 0.699392 [04/18/2022-02:35:24] [V] [TRT] Tactic: 0 Time: 1.33261 [04/18/2022-02:35:24] [V] [TRT] Fastest Tactic: 
1002 Time: 0.699392 [04/18/2022-02:35:24] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:24] [V] [TRT] *************** Autotuning Reformat: Float(163840,4096,64,1) -> Float(163840,1,2560,40) *************** [04/18/2022-02:35:24] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/project_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:24] [V] [TRT] Tactic: 1002 Time: 0.72192 [04/18/2022-02:35:25] [V] [TRT] Tactic: 0 Time: 0.691328 [04/18/2022-02:35:25] [V] [TRT] Fastest Tactic: 0 Time: 0.691328 [04/18/2022-02:35:25] [V] [TRT] *************** Autotuning Reformat: Float(163840,4096,64,1) -> Float(40960,1:4,640,10) *************** [04/18/2022-02:35:25] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/project_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:25] [V] [TRT] Tactic: 1002 Time: 0.693248 [04/18/2022-02:35:25] [V] [TRT] Tactic: 0 Time: 0.709504 [04/18/2022-02:35:25] [V] [TRT] Fastest Tactic: 1002 Time: 0.693248 [04/18/2022-02:35:25] [V] [TRT] *************** Autotuning Reformat: Float(163840,1,2560,40) -> Float(163840,4096,64,1) *************** [04/18/2022-02:35:25] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/project_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:25] [V] [TRT] Tactic: 1002 Time: 0.715264 [04/18/2022-02:35:25] [V] [TRT] Tactic: 0 Time: 0.7008 [04/18/2022-02:35:25] [V] [TRT] Fastest Tactic: 0 Time: 0.7008 [04/18/2022-02:35:25] [V] [TRT] *************** Autotuning Reformat: Float(163840,1,2560,40) -> Float(40960,1:4,640,10) *************** [04/18/2022-02:35:25] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/project_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:25] [V] 
[TRT] Tactic: 1002 Time: 1.48173 [04/18/2022-02:35:25] [V] [TRT] Tactic: 0 Time: 0.689152 [04/18/2022-02:35:25] [V] [TRT] Fastest Tactic: 0 Time: 0.689152 [04/18/2022-02:35:25] [V] [TRT] *************** Autotuning Reformat: Float(40960,1:4,640,10) -> Float(163840,4096,64,1) *************** [04/18/2022-02:35:25] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/project_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:25] [V] [TRT] Tactic: 1002 Time: 0.707968 [04/18/2022-02:35:25] [V] [TRT] Tactic: 0 Time: 0.697216 [04/18/2022-02:35:25] [V] [TRT] Fastest Tactic: 0 Time: 0.697216 [04/18/2022-02:35:25] [V] [TRT] *************** Autotuning Reformat: Float(40960,1:4,640,10) -> Float(163840,1,2560,40) *************** [04/18/2022-02:35:25] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_0/project_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:25] [V] [TRT] Tactic: 1002 Time: 0.695168 [04/18/2022-02:35:25] [V] [TRT] Tactic: 0 Time: 0.774784 [04/18/2022-02:35:25] [V] [TRT] Fastest Tactic: 1002 Time: 0.695168 [04/18/2022-02:35:25] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:25] [V] [TRT] *************** Autotuning Reformat: Float(983040,4096,64,1) -> Float(983040,1,15360,240) *************** [04/18/2022-02:35:25] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:25] [V] [TRT] Tactic: 1002 Time: 4.88909 [04/18/2022-02:35:25] [V] [TRT] Tactic: 0 Time: 4.29248 [04/18/2022-02:35:25] [V] [TRT] Fastest Tactic: 0 Time: 4.29248 [04/18/2022-02:35:25] [V] [TRT] *************** Autotuning Reformat: Float(983040,4096,64,1) -> Float(245760,1:4,3840,60) *************** [04/18/2022-02:35:25] [V] [TRT] --------------- Timing Runner: Optimizer 
Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:25] [V] [TRT] Tactic: 1002 Time: 4.26714 [04/18/2022-02:35:25] [V] [TRT] Tactic: 0 Time: 4.28736 [04/18/2022-02:35:25] [V] [TRT] Fastest Tactic: 1002 Time: 4.26714 [04/18/2022-02:35:25] [V] [TRT] *************** Autotuning Reformat: Float(983040,4096,64,1) -> Float(32768,4096:32,64,1) *************** [04/18/2022-02:35:25] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:25] [V] [TRT] Tactic: 1002 Time: 4.88666 [04/18/2022-02:35:25] [V] [TRT] Tactic: 0 Time: 5.34989 [04/18/2022-02:35:25] [V] [TRT] Fastest Tactic: 1002 Time: 4.88666 [04/18/2022-02:35:25] [V] [TRT] *************** Autotuning Reformat: Float(983040,4096,64,1) -> Float(1:4,8192,128,2) *************** [04/18/2022-02:35:25] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:25] [V] [TRT] Tactic: 1002 Time: 4.39181 [04/18/2022-02:35:26] [V] [TRT] Tactic: 0 Time: 4.53414 [04/18/2022-02:35:26] [V] [TRT] Fastest Tactic: 1002 Time: 4.39181 [04/18/2022-02:35:26] [V] [TRT] *************** Autotuning Reformat: Float(983040,1,15360,240) -> Float(983040,4096,64,1) *************** [04/18/2022-02:35:26] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:26] [V] [TRT] Tactic: 1002 Time: 4.32013 [04/18/2022-02:35:26] [V] [TRT] Tactic: 0 Time: 4.4151 [04/18/2022-02:35:26] [V] [TRT] Fastest Tactic: 1002 Time: 4.32013 [04/18/2022-02:35:26] [V] [TRT] *************** Autotuning Reformat: Float(983040,1,15360,240) -> Float(245760,1:4,3840,60) *************** 
[04/18/2022-02:35:26] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:26] [V] [TRT] Tactic: 1002 Time: 4.22758 [04/18/2022-02:35:26] [V] [TRT] Tactic: 0 Time: 4.29184 [04/18/2022-02:35:26] [V] [TRT] Fastest Tactic: 1002 Time: 4.22758 [04/18/2022-02:35:26] [V] [TRT] *************** Autotuning Reformat: Float(983040,1,15360,240) -> Float(32768,4096:32,64,1) *************** [04/18/2022-02:35:26] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:26] [V] [TRT] Tactic: 1002 Time: 4.3776 [04/18/2022-02:35:26] [V] [TRT] Tactic: 0 Time: 6.47565 [04/18/2022-02:35:26] [V] [TRT] Fastest Tactic: 1002 Time: 4.3776 [04/18/2022-02:35:26] [V] [TRT] *************** Autotuning Reformat: Float(983040,1,15360,240) -> Float(1:4,8192,128,2) *************** [04/18/2022-02:35:26] [V] [TRT] *************** Autotuning Reformat: Float(245760,1:4,3840,60) -> Float(983040,4096,64,1) *************** [04/18/2022-02:35:26] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:26] [V] [TRT] Tactic: 1002 Time: 4.24525 [04/18/2022-02:35:26] [V] [TRT] Tactic: 0 Time: 4.4471 [04/18/2022-02:35:26] [V] [TRT] Fastest Tactic: 1002 Time: 4.24525 [04/18/2022-02:35:26] [V] [TRT] *************** Autotuning Reformat: Float(245760,1:4,3840,60) -> Float(983040,1,15360,240) *************** [04/18/2022-02:35:26] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:26] [V] [TRT] Tactic: 1002 Time: 4.28685 [04/18/2022-02:35:27] [V] [TRT] Tactic: 0 Time: 4.29645 
[04/18/2022-02:35:27] [V] [TRT] Fastest Tactic: 1002 Time: 4.28685 [04/18/2022-02:35:27] [V] [TRT] *************** Autotuning Reformat: Float(245760,1:4,3840,60) -> Float(32768,4096:32,64,1) *************** [04/18/2022-02:35:27] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:27] [V] [TRT] Tactic: 1002 Time: 4.38541 [04/18/2022-02:35:27] [V] [TRT] Tactic: 0 Time: 6.10304 [04/18/2022-02:35:27] [V] [TRT] Fastest Tactic: 1002 Time: 4.38541 [04/18/2022-02:35:27] [V] [TRT] *************** Autotuning Reformat: Float(245760,1:4,3840,60) -> Float(1:4,8192,128,2) *************** [04/18/2022-02:35:27] [V] [TRT] *************** Autotuning Reformat: Float(32768,4096:32,64,1) -> Float(983040,4096,64,1) *************** [04/18/2022-02:35:27] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:27] [V] [TRT] Tactic: 1002 Time: 4.17818 [04/18/2022-02:35:27] [V] [TRT] Tactic: 0 Time: 4.40371 [04/18/2022-02:35:27] [V] [TRT] Fastest Tactic: 1002 Time: 4.17818 [04/18/2022-02:35:27] [V] [TRT] *************** Autotuning Reformat: Float(32768,4096:32,64,1) -> Float(983040,1,15360,240) *************** [04/18/2022-02:35:27] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:27] [V] [TRT] Tactic: 1002 Time: 4.19827 [04/18/2022-02:35:27] [V] [TRT] Tactic: 0 Time: 4.38899 [04/18/2022-02:35:27] [V] [TRT] Fastest Tactic: 1002 Time: 4.19827 [04/18/2022-02:35:27] [V] [TRT] *************** Autotuning Reformat: Float(32768,4096:32,64,1) -> Float(245760,1:4,3840,60) *************** [04/18/2022-02:35:27] [V] [TRT] --------------- Timing Runner: Optimizer 
Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:27] [V] [TRT] Tactic: 1002 Time: 4.2199
[04/18/2022-02:35:27] [V] [TRT] Tactic: 0 Time: 4.49242
[04/18/2022-02:35:27] [V] [TRT] Fastest Tactic: 1002 Time: 4.2199
[04/18/2022-02:35:27] [V] [TRT] *************** Autotuning Reformat: Float(32768,4096:32,64,1) -> Float(1:4,8192,128,2) ***************
[04/18/2022-02:35:27] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(983040,4096,64,1) ***************
[04/18/2022-02:35:27] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:27] [V] [TRT] Tactic: 1002 Time: 6.29696
[04/18/2022-02:35:28] [V] [TRT] Tactic: 0 Time: 19.2678
[04/18/2022-02:35:28] [V] [TRT] Fastest Tactic: 1002 Time: 6.29696
[04/18/2022-02:35:28] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(983040,1,15360,240) ***************
[04/18/2022-02:35:28] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(245760,1:4,3840,60) ***************
[04/18/2022-02:35:28] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(32768,4096:32,64,1) ***************
[04/18/2022-02:35:28] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:28] [V] [TRT] *************** Autotuning Reformat: Float(983040,4096,64,1) -> Float(983040,1,15360,240) ***************
[04/18/2022-02:35:28] [V] [TRT] *************** Autotuning Reformat: Float(983040,4096,64,1) -> Float(245760,1:4,3840,60) ***************
[04/18/2022-02:35:28] [V] [TRT] *************** Autotuning Reformat: Float(983040,1,15360,240) -> Float(983040,4096,64,1) ***************
[04/18/2022-02:35:28] [V] [TRT] *************** Autotuning Reformat: Float(983040,1,15360,240) -> Float(245760,1:4,3840,60) ***************
[04/18/2022-02:35:28] [V] [TRT] *************** Autotuning Reformat: Float(245760,1:4,3840,60) -> Float(983040,4096,64,1) ***************
[04/18/2022-02:35:28] [V] [TRT] *************** Autotuning Reformat: Float(245760,1:4,3840,60) -> Float(983040,1,15360,240) ***************
[04/18/2022-02:35:28] [V] [TRT] *************** Autotuning Reformat: Float(32768,4096:32,64,1) -> Float(983040,4096,64,1) ***************
[04/18/2022-02:35:28] [V] [TRT] *************** Autotuning Reformat: Float(32768,4096:32,64,1) -> Float(983040,1,15360,240) ***************
[04/18/2022-02:35:28] [V] [TRT] *************** Autotuning Reformat: Float(32768,4096:32,64,1) -> Float(245760,1:4,3840,60) ***************
[04/18/2022-02:35:28] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(983040,4096,64,1) ***************
[04/18/2022-02:35:28] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(983040,1,15360,240) ***************
[04/18/2022-02:35:28] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(245760,1:4,3840,60) ***************
[04/18/2022-02:35:28] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:28] [V] [TRT] *************** Autotuning Reformat: Float(983040,4096,64,1) -> Float(983040,1,15360,240) ***************
[04/18/2022-02:35:28] [V] [TRT] *************** Autotuning Reformat: Float(983040,4096,64,1) -> Float(245760,1:4,3840,60) ***************
[04/18/2022-02:35:28] [V] [TRT] *************** Autotuning Reformat: Float(983040,4096,64,1) -> Float(32768,4096:32,64,1) ***************
[04/18/2022-02:35:28] [V] [TRT] *************** Autotuning Reformat: Float(983040,4096,64,1) -> Float(1:4,8192,128,2) ***************
[04/18/2022-02:35:28] [V] [TRT] *************** Autotuning Reformat: Float(983040,1,15360,240) -> Float(983040,4096,64,1) ***************
[04/18/2022-02:35:28] [V] [TRT] *************** Autotuning Reformat: Float(983040,1,15360,240) -> Float(245760,1:4,3840,60) ***************
[04/18/2022-02:35:28] [V] [TRT] *************** Autotuning Reformat: Float(983040,1,15360,240) -> Float(32768,4096:32,64,1) ***************
[04/18/2022-02:35:28] [V] [TRT] *************** Autotuning Reformat: Float(983040,1,15360,240) -> Float(1:4,8192,128,2) ***************
[04/18/2022-02:35:28] [V] [TRT] *************** Autotuning Reformat: Float(245760,1:4,3840,60) -> Float(983040,4096,64,1) ***************
[04/18/2022-02:35:28] [V] [TRT] *************** Autotuning Reformat: Float(245760,1:4,3840,60) -> Float(983040,1,15360,240) ***************
[04/18/2022-02:35:28] [V] [TRT] *************** Autotuning Reformat: Float(245760,1:4,3840,60) -> Float(32768,4096:32,64,1) ***************
[04/18/2022-02:35:28] [V] [TRT] *************** Autotuning Reformat: Float(245760,1:4,3840,60) -> Float(1:4,8192,128,2) ***************
[04/18/2022-02:35:28] [V] [TRT] *************** Autotuning Reformat: Float(32768,4096:32,64,1) -> Float(983040,4096,64,1) ***************
[04/18/2022-02:35:28] [V] [TRT] *************** Autotuning Reformat: Float(32768,4096:32,64,1) -> Float(983040,1,15360,240) ***************
[04/18/2022-02:35:28] [V] [TRT] *************** Autotuning Reformat: Float(32768,4096:32,64,1) -> Float(245760,1:4,3840,60) ***************
[04/18/2022-02:35:28] [V] [TRT] *************** Autotuning Reformat: Float(32768,4096:32,64,1) -> Float(1:4,8192,128,2) ***************
[04/18/2022-02:35:28] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(983040,4096,64,1) ***************
[04/18/2022-02:35:28] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(983040,1,15360,240) ***************
[04/18/2022-02:35:28] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(245760,1:4,3840,60) ***************
[04/18/2022-02:35:28] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(32768,4096:32,64,1) ***************
[04/18/2022-02:35:28] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:28] [V] [TRT] *************** Autotuning Reformat: Float(983040,4096,64,1) -> Float(983040,1,15360,240) ***************
[04/18/2022-02:35:28] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:28] [V] [TRT] Tactic: 1002 Time: 4.87155
[04/18/2022-02:35:28] [V] [TRT] Tactic: 0 Time: 4.40691
[04/18/2022-02:35:28] [V] [TRT] Fastest Tactic: 0 Time: 4.40691
[04/18/2022-02:35:28] [V] [TRT] *************** Autotuning Reformat: Float(983040,4096,64,1) -> Float(245760,1:4,3840,60) ***************
[04/18/2022-02:35:28] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:28] [V] [TRT] Tactic: 1002 Time: 4.26714
[04/18/2022-02:35:28] [V] [TRT] Tactic: 0 Time: 4.29939
[04/18/2022-02:35:28] [V] [TRT] Fastest Tactic: 1002 Time: 4.26714
[04/18/2022-02:35:28] [V] [TRT] *************** Autotuning Reformat: Float(983040,4096,64,1) -> Float(32768,4096:32,64,1) ***************
[04/18/2022-02:35:28] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:28] [V] [TRT] Tactic: 1002 Time: 4.37094
[04/18/2022-02:35:28] [V] [TRT] Tactic: 0 Time: 5.13216
[04/18/2022-02:35:28] [V] [TRT] Fastest Tactic: 1002 Time: 4.37094
[04/18/2022-02:35:28] [V] [TRT] *************** Autotuning Reformat: Float(983040,4096,64,1) -> Float(1:4,8192,128,2) ***************
[04/18/2022-02:35:28] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:28] [V] [TRT] Tactic: 1002 Time: 4.384
[04/18/2022-02:35:29] [V] [TRT] Tactic: 0 Time: 4.42278
[04/18/2022-02:35:29] [V] [TRT] Fastest Tactic: 1002 Time: 4.384
[04/18/2022-02:35:29] [V] [TRT] *************** Autotuning Reformat: Float(983040,1,15360,240) -> Float(983040,4096,64,1) ***************
[04/18/2022-02:35:29] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:29] [V] [TRT] Tactic: 1002 Time: 4.24653
[04/18/2022-02:35:29] [V] [TRT] Tactic: 0 Time: 4.90086
[04/18/2022-02:35:29] [V] [TRT] Fastest Tactic: 1002 Time: 4.24653
[04/18/2022-02:35:29] [V] [TRT] *************** Autotuning Reformat: Float(983040,1,15360,240) -> Float(245760,1:4,3840,60) ***************
[04/18/2022-02:35:29] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:29] [V] [TRT] Tactic: 1002 Time: 4.27571
[04/18/2022-02:35:29] [V] [TRT] Tactic: 0 Time: 4.71885
[04/18/2022-02:35:29] [V] [TRT] Fastest Tactic: 1002 Time: 4.27571
[04/18/2022-02:35:29] [V] [TRT] *************** Autotuning Reformat: Float(983040,1,15360,240) -> Float(32768,4096:32,64,1) ***************
[04/18/2022-02:35:29] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:29] [V] [TRT] Tactic: 1002 Time: 4.38592
[04/18/2022-02:35:29] [V] [TRT] Tactic: 0 Time: 6.07053
[04/18/2022-02:35:29] [V] [TRT] Fastest Tactic: 1002 Time: 4.38592
[04/18/2022-02:35:29] [V] [TRT] *************** Autotuning Reformat: Float(983040,1,15360,240) -> Float(1:4,8192,128,2) ***************
[04/18/2022-02:35:29] [V] [TRT] *************** Autotuning Reformat: Float(245760,1:4,3840,60) -> Float(983040,4096,64,1) ***************
[04/18/2022-02:35:29] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:29] [V] [TRT] Tactic: 1002 Time: 4.21248
[04/18/2022-02:35:29] [V] [TRT] Tactic: 0 Time: 4.40422
[04/18/2022-02:35:29] [V] [TRT] Fastest Tactic: 1002 Time: 4.21248
[04/18/2022-02:35:29] [V] [TRT] *************** Autotuning Reformat: Float(245760,1:4,3840,60) -> Float(983040,1,15360,240) ***************
[04/18/2022-02:35:29] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:29] [V] [TRT] Tactic: 1002 Time: 4.18419
[04/18/2022-02:35:30] [V] [TRT] Tactic: 0 Time: 4.3872
[04/18/2022-02:35:30] [V] [TRT] Fastest Tactic: 1002 Time: 4.18419
[04/18/2022-02:35:30] [V] [TRT] *************** Autotuning Reformat: Float(245760,1:4,3840,60) -> Float(32768,4096:32,64,1) ***************
[04/18/2022-02:35:30] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:30] [V] [TRT] Tactic: 1002 Time: 4.91648
[04/18/2022-02:35:30] [V] [TRT] Tactic: 0 Time: 6.08384
[04/18/2022-02:35:30] [V] [TRT] Fastest Tactic: 1002 Time: 4.91648
[04/18/2022-02:35:30] [V] [TRT] *************** Autotuning Reformat: Float(245760,1:4,3840,60) -> Float(1:4,8192,128,2) ***************
[04/18/2022-02:35:30] [V] [TRT] *************** Autotuning Reformat: Float(32768,4096:32,64,1) -> Float(983040,4096,64,1) ***************
[04/18/2022-02:35:30] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:30] [V] [TRT] Tactic: 1002 Time: 4.14784
[04/18/2022-02:35:30] [V] [TRT] Tactic: 0 Time: 4.3648
[04/18/2022-02:35:30] [V] [TRT] Fastest Tactic: 1002 Time: 4.14784
[04/18/2022-02:35:30] [V] [TRT] *************** Autotuning Reformat: Float(32768,4096:32,64,1) -> Float(983040,1,15360,240) ***************
[04/18/2022-02:35:30] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:30] [V] [TRT] Tactic: 1002 Time: 4.19034
[04/18/2022-02:35:30] [V] [TRT] Tactic: 0 Time: 4.33549
[04/18/2022-02:35:30] [V] [TRT] Fastest Tactic: 1002 Time: 4.19034
[04/18/2022-02:35:30] [V] [TRT] *************** Autotuning Reformat: Float(32768,4096:32,64,1) -> Float(245760,1:4,3840,60) ***************
[04/18/2022-02:35:30] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:30] [V] [TRT] Tactic: 1002 Time: 4.22899
[04/18/2022-02:35:30] [V] [TRT] Tactic: 0 Time: 4.83034
[04/18/2022-02:35:30] [V] [TRT] Fastest Tactic: 1002 Time: 4.22899
[04/18/2022-02:35:30] [V] [TRT] *************** Autotuning Reformat: Float(32768,4096:32,64,1) -> Float(1:4,8192,128,2) ***************
[04/18/2022-02:35:30] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(983040,4096,64,1) ***************
[04/18/2022-02:35:30] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:30] [V] [TRT] Tactic: 1002 Time: 6.29645
[04/18/2022-02:35:31] [V] [TRT] Tactic: 0 Time: 19.281
[04/18/2022-02:35:31] [V] [TRT] Fastest Tactic: 1002 Time: 6.29645
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(983040,1,15360,240) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(245760,1:4,3840,60) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(32768,4096:32,64,1) ***************
[04/18/2022-02:35:31] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(983040,1,15360,240) -> Float(983040,4096,64,1) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(245760,1:4,3840,60) -> Float(983040,4096,64,1) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(32768,4096:32,64,1) -> Float(983040,4096,64,1) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(983040,4096,64,1) ***************
[04/18/2022-02:35:31] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(240,1,1,1) -> Float(240,1,240,240) ***************
[04/18/2022-02:35:31] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_squeeze/Mean:0) (Reformat)
[04/18/2022-02:35:31] [V] [TRT] Tactic: 1002 Time: 0.044032
[04/18/2022-02:35:31] [V] [TRT] Tactic: 0 Time: 0.017152
[04/18/2022-02:35:31] [V] [TRT] Fastest Tactic: 0 Time: 0.017152
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(240,1,1,1) -> Float(60,1:4,60,60) ***************
[04/18/2022-02:35:31] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_squeeze/Mean:0) (Reformat)
[04/18/2022-02:35:31] [V] [TRT] Tactic: 1002 Time: 0.045952
[04/18/2022-02:35:31] [V] [TRT] Tactic: 0 Time: 0.017408
[04/18/2022-02:35:31] [V] [TRT] Fastest Tactic: 0 Time: 0.017408
[04/18/2022-02:35:31] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(240,1,1,1) -> Float(240,1,240,240) ***************
[04/18/2022-02:35:31] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_squeeze/Mean:0 -> ) (Reformat)
[04/18/2022-02:35:31] [V] [TRT] Tactic: 1002 Time: 0.044288
[04/18/2022-02:35:31] [V] [TRT] Tactic: 0 Time: 0.017152
[04/18/2022-02:35:31] [V] [TRT] Fastest Tactic: 0 Time: 0.017152
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(240,1,1,1) -> Float(60,1:4,60,60) ***************
[04/18/2022-02:35:31] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_squeeze/Mean:0 -> ) (Reformat)
[04/18/2022-02:35:31] [V] [TRT] Tactic: 1002 Time: 0.043904
[04/18/2022-02:35:31] [V] [TRT] Tactic: 0 Time: 0.017536
[04/18/2022-02:35:31] [V] [TRT] Fastest Tactic: 0 Time: 0.017536
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(240,1,240,240) -> Float(240,1,1,1) ***************
[04/18/2022-02:35:31] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_squeeze/Mean:0 -> ) (Reformat)
[04/18/2022-02:35:31] [V] [TRT] Tactic: 1002 Time: 0.044288
[04/18/2022-02:35:31] [V] [TRT] Tactic: 0 Time: 0.016896
[04/18/2022-02:35:31] [V] [TRT] Fastest Tactic: 0 Time: 0.016896
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(240,1,240,240) -> Float(60,1:4,60,60) ***************
[04/18/2022-02:35:31] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_squeeze/Mean:0 -> ) (Reformat)
[04/18/2022-02:35:31] [V] [TRT] Tactic: 1002 Time: 0.045184
[04/18/2022-02:35:31] [V] [TRT] Tactic: 0 Time: 0.017408
[04/18/2022-02:35:31] [V] [TRT] Fastest Tactic: 0 Time: 0.017408
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(60,1:4,60,60) -> Float(240,1,1,1) ***************
[04/18/2022-02:35:31] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_squeeze/Mean:0 -> ) (Reformat)
[04/18/2022-02:35:31] [V] [TRT] Tactic: 1002 Time: 0.044032
[04/18/2022-02:35:31] [V] [TRT] Tactic: 0 Time: 0.017152
[04/18/2022-02:35:31] [V] [TRT] Fastest Tactic: 0 Time: 0.017152
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(60,1:4,60,60) -> Float(240,1,240,240) ***************
[04/18/2022-02:35:31] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_squeeze/Mean:0 -> ) (Reformat)
[04/18/2022-02:35:31] [V] [TRT] Tactic: 1002 Time: 0.045056
[04/18/2022-02:35:31] [V] [TRT] Tactic: 0 Time: 0.017536
[04/18/2022-02:35:31] [V] [TRT] Fastest Tactic: 0 Time: 0.017536
[04/18/2022-02:35:31] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(10,1,1,1) -> Float(10,1,10,10) ***************
[04/18/2022-02:35:31] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:31] [V] [TRT] Tactic: 1002 Time: 0.044032
[04/18/2022-02:35:31] [V] [TRT] Tactic: 0 Time: 0.017792
[04/18/2022-02:35:31] [V] [TRT] Fastest Tactic: 0 Time: 0.017792
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(10,1,1,1) -> Float(3,1:4,3,3) ***************
[04/18/2022-02:35:31] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:31] [V] [TRT] Tactic: 1002 Time: 0.045184
[04/18/2022-02:35:31] [V] [TRT] Tactic: 0 Time: 0.018304
[04/18/2022-02:35:31] [V] [TRT] Fastest Tactic: 0 Time: 0.018304
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(10,1,1,1) -> Float(1,1:32,1,1) ***************
[04/18/2022-02:35:31] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:31] [V] [TRT] Tactic: 1002 Time: 0.04416
[04/18/2022-02:35:31] [V] [TRT] Tactic: 0 Time: 0.01856
[04/18/2022-02:35:31] [V] [TRT] Fastest Tactic: 0 Time: 0.01856
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(10,1,1,1) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:31] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:31] [V] [TRT] Tactic: 1002 Time: 0.029824
[04/18/2022-02:35:31] [V] [TRT] Tactic: 0 Time: 0.017664
[04/18/2022-02:35:31] [V] [TRT] Fastest Tactic: 0 Time: 0.017664
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(10,1,10,10) -> Float(10,1,1,1) ***************
[04/18/2022-02:35:31] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:31] [V] [TRT] Tactic: 1002 Time: 0.04544
[04/18/2022-02:35:31] [V] [TRT] Tactic: 0 Time: 0.016768
[04/18/2022-02:35:31] [V] [TRT] Fastest Tactic: 0 Time: 0.016768
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(10,1,10,10) -> Float(3,1:4,3,3) ***************
[04/18/2022-02:35:31] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:31] [V] [TRT] Tactic: 1002 Time: 0.04416
[04/18/2022-02:35:31] [V] [TRT] Tactic: 0 Time: 0.018048
[04/18/2022-02:35:31] [V] [TRT] Fastest Tactic: 0 Time: 0.018048
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(10,1,10,10) -> Float(1,1:32,1,1) ***************
[04/18/2022-02:35:31] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:31] [V] [TRT] Tactic: 1002 Time: 0.043776
[04/18/2022-02:35:31] [V] [TRT] Tactic: 0 Time: 0.019584
[04/18/2022-02:35:31] [V] [TRT] Fastest Tactic: 0 Time: 0.019584
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(10,1,10,10) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(3,1:4,3,3) -> Float(10,1,1,1) ***************
[04/18/2022-02:35:31] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:31] [V] [TRT] Tactic: 1002 Time: 0.045696
[04/18/2022-02:35:31] [V] [TRT] Tactic: 0 Time: 0.017152
[04/18/2022-02:35:31] [V] [TRT] Fastest Tactic: 0 Time: 0.017152
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(3,1:4,3,3) -> Float(10,1,10,10) ***************
[04/18/2022-02:35:31] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:31] [V] [TRT] Tactic: 1002 Time: 0.044672
[04/18/2022-02:35:31] [V] [TRT] Tactic: 0 Time: 0.017152
[04/18/2022-02:35:31] [V] [TRT] Fastest Tactic: 0 Time: 0.017152
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(3,1:4,3,3) -> Float(1,1:32,1,1) ***************
[04/18/2022-02:35:31] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:31] [V] [TRT] Tactic: 1002 Time: 0.044416
[04/18/2022-02:35:31] [V] [TRT] Tactic: 0 Time: 0.018816
[04/18/2022-02:35:31] [V] [TRT] Fastest Tactic: 0 Time: 0.018816
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(3,1:4,3,3) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(10,1,1,1) ***************
[04/18/2022-02:35:31] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:31] [V] [TRT] Tactic: 1002 Time: 0.045696
[04/18/2022-02:35:31] [V] [TRT] Tactic: 0 Time: 0.017408
[04/18/2022-02:35:31] [V] [TRT] Fastest Tactic: 0 Time: 0.017408
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(10,1,10,10) ***************
[04/18/2022-02:35:31] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:31] [V] [TRT] Tactic: 1002 Time: 0.047104
[04/18/2022-02:35:31] [V] [TRT] Tactic: 0 Time: 0.017408
[04/18/2022-02:35:31] [V] [TRT] Fastest Tactic: 0 Time: 0.017408
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(3,1:4,3,3) ***************
[04/18/2022-02:35:31] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:31] [V] [TRT] Tactic: 1002 Time: 0.045056
[04/18/2022-02:35:31] [V] [TRT] Tactic: 0 Time: 0.01856
[04/18/2022-02:35:31] [V] [TRT] Fastest Tactic: 0 Time: 0.01856
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(10,1,1,1) ***************
[04/18/2022-02:35:31] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:31] [V] [TRT] Tactic: 1002 Time: 0.029696
[04/18/2022-02:35:31] [V] [TRT] Tactic: 0 Time: 0.017408
[04/18/2022-02:35:31] [V] [TRT] Fastest Tactic: 0 Time: 0.017408
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(10,1,10,10) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(3,1:4,3,3) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(1,1:32,1,1) ***************
[04/18/2022-02:35:31] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(10,1,1,1) -> Float(10,1,10,10) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(10,1,1,1) -> Float(3,1:4,3,3) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(10,1,10,10) -> Float(10,1,1,1) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(10,1,10,10) -> Float(3,1:4,3,3) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(3,1:4,3,3) -> Float(10,1,1,1) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(3,1:4,3,3) -> Float(10,1,10,10) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(10,1,1,1) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(10,1,10,10) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(3,1:4,3,3) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(10,1,1,1) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(10,1,10,10) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(3,1:4,3,3) ***************
[04/18/2022-02:35:31] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(240,1,1,1) -> Float(240,1,240,240) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(240,1,1,1) -> Float(60,1:4,60,60) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(240,1,1,1) -> Float(8,1:32,1,1) ***************
[04/18/2022-02:35:31] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_expand_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:31] [V] [TRT] Tactic: 1002 Time: 0.045312
[04/18/2022-02:35:31] [V] [TRT] Tactic: 0 Time: 0.017152
[04/18/2022-02:35:31] [V] [TRT] Fastest Tactic: 0 Time: 0.017152
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(240,1,1,1) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:31] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_expand_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:31] [V] [TRT] Tactic: 1002 Time: 0.029312
[04/18/2022-02:35:31] [V] [TRT] Tactic: 0 Time: 0.017536
[04/18/2022-02:35:31] [V] [TRT] Fastest Tactic: 0 Time: 0.017536
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(240,1,240,240) -> Float(240,1,1,1) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(240,1,240,240) -> Float(60,1:4,60,60) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(240,1,240,240) -> Float(8,1:32,1,1) ***************
[04/18/2022-02:35:31] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_expand_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:31] [V] [TRT] Tactic: 1002 Time: 0.045056
[04/18/2022-02:35:31] [V] [TRT] Tactic: 0 Time: 0.01728
[04/18/2022-02:35:31] [V] [TRT] Fastest Tactic: 0 Time: 0.01728
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(240,1,240,240) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(60,1:4,60,60) -> Float(240,1,1,1) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(60,1:4,60,60) -> Float(240,1,240,240) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(60,1:4,60,60) -> Float(8,1:32,1,1) ***************
[04/18/2022-02:35:31] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_expand_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:31] [V] [TRT] Tactic: 1002 Time: 0.046208
[04/18/2022-02:35:31] [V] [TRT] Tactic: 0 Time: 0.017536
[04/18/2022-02:35:31] [V] [TRT] Fastest Tactic: 0 Time: 0.017536
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(60,1:4,60,60) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(8,1:32,1,1) -> Float(240,1,1,1) ***************
[04/18/2022-02:35:31] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_expand_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:31] [V] [TRT] Tactic: 1002 Time: 0.044032
[04/18/2022-02:35:31] [V] [TRT] Tactic: 0 Time: 0.01728
[04/18/2022-02:35:31] [V] [TRT] Fastest Tactic: 0 Time: 0.01728
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(8,1:32,1,1) -> Float(240,1,240,240) ***************
[04/18/2022-02:35:31] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_expand_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:31] [V] [TRT] Tactic: 1002 Time: 0.043648
[04/18/2022-02:35:31] [V] [TRT] Tactic: 0 Time: 0.017536
[04/18/2022-02:35:31] [V] [TRT] Fastest Tactic: 0 Time: 0.017536
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(8,1:32,1,1) -> Float(60,1:4,60,60) ***************
[04/18/2022-02:35:31] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_expand_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:31] [V] [TRT] Tactic: 1002 Time: 0.043392
[04/18/2022-02:35:31] [V] [TRT] Tactic: 0 Time: 0.017408
[04/18/2022-02:35:31] [V] [TRT] Fastest Tactic: 0 Time: 0.017408
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(8,1:32,1,1) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(240,1,1,1) ***************
[04/18/2022-02:35:31] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_2/block_1/se_expand_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:31] [V] [TRT] Tactic: 1002 Time: 0.055296
[04/18/2022-02:35:31] [V] [TRT] Tactic: 0 Time: 0.018432
[04/18/2022-02:35:31] [V] [TRT] Fastest Tactic: 0 Time: 0.018432
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(240,1,240,240) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(60,1:4,60,60) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(8,1:32,1,1) ***************
[04/18/2022-02:35:31] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(983040,4096,64,1) -> Float(983040,1,15360,240) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(983040,4096,64,1) -> Float(245760,1:4,3840,60) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(983040,4096,64,1) -> Float(32768,4096:32,64,1) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(983040,4096,64,1) -> Float(1:4,8192,128,2) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(983040,1,15360,240) -> Float(983040,4096,64,1) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(983040,1,15360,240) -> Float(245760,1:4,3840,60) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(983040,1,15360,240) -> Float(32768,4096:32,64,1) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(983040,1,15360,240) -> Float(1:4,8192,128,2) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(245760,1:4,3840,60) -> Float(983040,4096,64,1) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(245760,1:4,3840,60) -> Float(983040,1,15360,240) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(245760,1:4,3840,60) -> Float(32768,4096:32,64,1) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(245760,1:4,3840,60) -> Float(1:4,8192,128,2) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(32768,4096:32,64,1) -> Float(983040,4096,64,1) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(32768,4096:32,64,1) -> Float(983040,1,15360,240) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(32768,4096:32,64,1) -> Float(245760,1:4,3840,60) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(32768,4096:32,64,1) -> Float(1:4,8192,128,2) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(983040,4096,64,1) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(983040,1,15360,240) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(245760,1:4,3840,60) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(32768,4096:32,64,1) ***************
[04/18/2022-02:35:31] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(983040,4096,64,1) -> Float(983040,1,15360,240) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(983040,4096,64,1) -> Float(245760,1:4,3840,60) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(983040,1,15360,240) -> Float(983040,4096,64,1) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(983040,1,15360,240) -> Float(245760,1:4,3840,60) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(245760,1:4,3840,60) -> Float(983040,4096,64,1) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(245760,1:4,3840,60) -> Float(983040,1,15360,240) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(32768,4096:32,64,1) -> Float(983040,4096,64,1) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(32768,4096:32,64,1) -> Float(983040,1,15360,240) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(32768,4096:32,64,1) -> Float(245760,1:4,3840,60) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(983040,4096,64,1) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(983040,1,15360,240) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(245760,1:4,3840,60) ***************
[04/18/2022-02:35:31] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(163840,4096,64,1) -> Float(163840,1,2560,40) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(163840,4096,64,1) -> Float(40960,1:4,640,10) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(163840,1,2560,40) -> Float(163840,4096,64,1) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(163840,1,2560,40) -> Float(40960,1:4,640,10) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(40960,1:4,640,10) -> Float(163840,4096,64,1) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(40960,1:4,640,10) -> Float(163840,1,2560,40) ***************
[04/18/2022-02:35:31] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(163840,4096,64,1) -> Float(163840,1,2560,40) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(163840,4096,64,1) -> Float(40960,1:4,640,10) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(163840,1,2560,40) -> Float(163840,4096,64,1) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(163840,1,2560,40) -> Float(40960,1:4,640,10) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(40960,1:4,640,10) -> Float(163840,4096,64,1) ***************
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(40960,1:4,640,10) -> Float(163840,1,2560,40) ***************
[04/18/2022-02:35:31] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:31] [V] [TRT] *************** Autotuning Reformat: Float(1245184,4096,64,1) -> Float(1245184,1,19456,304) ***************
[04/18/2022-02:35:31] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/conv/BiasAdd || StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_conv2d/Conv2D) (Reformat)
[04/18/2022-02:35:32] [V] [TRT] Tactic: 1002 Time: 5.34426
[04/18/2022-02:35:32] [V] [TRT] Tactic: 0 Time: 5.49734
[04/18/2022-02:35:32] [V] [TRT] Fastest Tactic: 1002 Time: 5.34426
[04/18/2022-02:35:32] [V] [TRT] *************** Autotuning Reformat: Float(1245184,4096,64,1) -> Float(311296,1:4,4864,76) ***************
[04/18/2022-02:35:32] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/conv/BiasAdd || StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_conv2d/Conv2D) (Reformat)
[04/18/2022-02:35:32] [V] [TRT] Tactic: 1002 Time: 5.33696
[04/18/2022-02:35:32] [V] [TRT] Tactic: 0 Time: 5.51296
[04/18/2022-02:35:32] [V] [TRT] Fastest Tactic: 1002 Time: 5.33696
[04/18/2022-02:35:32] [V] [TRT] *************** Autotuning Reformat: Float(1245184,4096,64,1) -> Float(40960,4096:32,64,1) ***************
[04/18/2022-02:35:32] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/conv/BiasAdd || StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_conv2d/Conv2D) (Reformat)
[04/18/2022-02:35:32] [V] [TRT]
Tactic: 1002 Time: 5.51002 [04/18/2022-02:35:32] [V] [TRT] Tactic: 0 Time: 6.38016 [04/18/2022-02:35:32] [V] [TRT] Fastest Tactic: 1002 Time: 5.51002 [04/18/2022-02:35:32] [V] [TRT] *************** Autotuning Reformat: Float(1245184,4096,64,1) -> Float(1:4,8192,128,2) *************** [04/18/2022-02:35:32] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/conv/BiasAdd || StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_conv2d/Conv2D) (Reformat) [04/18/2022-02:35:32] [V] [TRT] Tactic: 1002 Time: 5.58054 [04/18/2022-02:35:32] [V] [TRT] Tactic: 0 Time: 5.71379 [04/18/2022-02:35:32] [V] [TRT] Fastest Tactic: 1002 Time: 5.58054 [04/18/2022-02:35:32] [V] [TRT] *************** Autotuning Reformat: Float(1245184,1,19456,304) -> Float(1245184,4096,64,1) *************** [04/18/2022-02:35:32] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/conv/BiasAdd || StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_conv2d/Conv2D) (Reformat) [04/18/2022-02:35:33] [V] [TRT] Tactic: 1002 Time: 5.40531 [04/18/2022-02:35:33] [V] [TRT] Tactic: 0 Time: 5.62931 [04/18/2022-02:35:33] [V] [TRT] Fastest Tactic: 1002 Time: 5.40531 [04/18/2022-02:35:33] [V] [TRT] *************** Autotuning Reformat: Float(1245184,1,19456,304) -> Float(311296,1:4,4864,76) *************** [04/18/2022-02:35:33] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/conv/BiasAdd || StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_conv2d/Conv2D) (Reformat) [04/18/2022-02:35:33] [V] [TRT] Tactic: 1002 Time: 5.4441 [04/18/2022-02:35:33] [V] [TRT] Tactic: 0 Time: 5.3769 [04/18/2022-02:35:33] [V] [TRT] Fastest Tactic: 0 
Time: 5.3769 [04/18/2022-02:35:33] [V] [TRT] *************** Autotuning Reformat: Float(1245184,1,19456,304) -> Float(40960,4096:32,64,1) *************** [04/18/2022-02:35:33] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/conv/BiasAdd || StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_conv2d/Conv2D) (Reformat) [04/18/2022-02:35:33] [V] [TRT] Tactic: 1002 Time: 5.56608 [04/18/2022-02:35:33] [V] [TRT] Tactic: 0 Time: 7.83194 [04/18/2022-02:35:33] [V] [TRT] Fastest Tactic: 1002 Time: 5.56608 [04/18/2022-02:35:33] [V] [TRT] *************** Autotuning Reformat: Float(1245184,1,19456,304) -> Float(1:4,8192,128,2) *************** [04/18/2022-02:35:33] [V] [TRT] *************** Autotuning Reformat: Float(311296,1:4,4864,76) -> Float(1245184,4096,64,1) *************** [04/18/2022-02:35:33] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/conv/BiasAdd || StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_conv2d/Conv2D) (Reformat) [04/18/2022-02:35:33] [V] [TRT] Tactic: 1002 Time: 5.4359 [04/18/2022-02:35:33] [V] [TRT] Tactic: 0 Time: 5.54112 [04/18/2022-02:35:33] [V] [TRT] Fastest Tactic: 1002 Time: 5.4359 [04/18/2022-02:35:33] [V] [TRT] *************** Autotuning Reformat: Float(311296,1:4,4864,76) -> Float(1245184,1,19456,304) *************** [04/18/2022-02:35:33] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/conv/BiasAdd || StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_conv2d/Conv2D) (Reformat) [04/18/2022-02:35:33] [V] [TRT] Tactic: 1002 Time: 5.91898 [04/18/2022-02:35:34] [V] [TRT] Tactic: 0 Time: 5.42106 [04/18/2022-02:35:34] [V] [TRT] Fastest Tactic: 
0 Time: 5.42106 [04/18/2022-02:35:34] [V] [TRT] *************** Autotuning Reformat: Float(311296,1:4,4864,76) -> Float(40960,4096:32,64,1) *************** [04/18/2022-02:35:34] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/conv/BiasAdd || StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_conv2d/Conv2D) (Reformat) [04/18/2022-02:35:34] [V] [TRT] Tactic: 1002 Time: 5.58003 [04/18/2022-02:35:34] [V] [TRT] Tactic: 0 Time: 7.81146 [04/18/2022-02:35:34] [V] [TRT] Fastest Tactic: 1002 Time: 5.58003 [04/18/2022-02:35:34] [V] [TRT] *************** Autotuning Reformat: Float(311296,1:4,4864,76) -> Float(1:4,8192,128,2) *************** [04/18/2022-02:35:34] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:34] [V] [TRT] *************** Autotuning Reformat: Float(1245184,4096,64,1) -> Float(1245184,1,19456,304) *************** [04/18/2022-02:35:34] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:34] [V] [TRT] Tactic: 1002 Time: 4.22771 [04/18/2022-02:35:34] [V] [TRT] Tactic: 0 Time: 4.41894 [04/18/2022-02:35:34] [V] [TRT] Fastest Tactic: 1002 Time: 4.22771 [04/18/2022-02:35:34] [V] [TRT] *************** Autotuning Reformat: Float(1245184,4096,64,1) -> Float(311296,1:4,4864,76) *************** [04/18/2022-02:35:34] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:34] [V] [TRT] Tactic: 1002 Time: 4.26445 [04/18/2022-02:35:34] [V] [TRT] Tactic: 0 Time: 4.3191 [04/18/2022-02:35:34] [V] [TRT] Fastest Tactic: 1002 Time: 4.26445 [04/18/2022-02:35:34] [V] [TRT] *************** Autotuning Reformat: Float(1245184,4096,64,1) -> 
Float(40960,4096:32,64,1) *************** [04/18/2022-02:35:34] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:34] [V] [TRT] Tactic: 1002 Time: 5.00454 [04/18/2022-02:35:34] [V] [TRT] Tactic: 0 Time: 5.13779 [04/18/2022-02:35:34] [V] [TRT] Fastest Tactic: 1002 Time: 5.00454 [04/18/2022-02:35:34] [V] [TRT] *************** Autotuning Reformat: Float(1245184,4096,64,1) -> Float(1:4,8192,128,2) *************** [04/18/2022-02:35:34] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:35] [V] [TRT] Tactic: 1002 Time: 4.86362 [04/18/2022-02:35:35] [V] [TRT] Tactic: 0 Time: 4.4905 [04/18/2022-02:35:35] [V] [TRT] Fastest Tactic: 0 Time: 4.4905 [04/18/2022-02:35:35] [V] [TRT] *************** Autotuning Reformat: Float(1245184,1,19456,304) -> Float(1245184,4096,64,1) *************** [04/18/2022-02:35:35] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:35] [V] [TRT] Tactic: 1002 Time: 4.29722 [04/18/2022-02:35:35] [V] [TRT] Tactic: 0 Time: 4.41216 [04/18/2022-02:35:35] [V] [TRT] Fastest Tactic: 1002 Time: 4.29722 [04/18/2022-02:35:35] [V] [TRT] *************** Autotuning Reformat: Float(1245184,1,19456,304) -> Float(311296,1:4,4864,76) *************** [04/18/2022-02:35:35] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:35] [V] [TRT] Tactic: 1002 Time: 4.26074 [04/18/2022-02:35:35] [V] [TRT] Tactic: 0 Time: 4.45286 [04/18/2022-02:35:35] [V] [TRT] Fastest Tactic: 1002 Time: 4.26074 [04/18/2022-02:35:35] [V] [TRT] 
*************** Autotuning Reformat: Float(1245184,1,19456,304) -> Float(40960,4096:32,64,1) *************** [04/18/2022-02:35:35] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:35] [V] [TRT] Tactic: 1002 Time: 4.4864 [04/18/2022-02:35:35] [V] [TRT] Tactic: 0 Time: 6.33869 [04/18/2022-02:35:35] [V] [TRT] Fastest Tactic: 1002 Time: 4.4864 [04/18/2022-02:35:35] [V] [TRT] *************** Autotuning Reformat: Float(1245184,1,19456,304) -> Float(1:4,8192,128,2) *************** [04/18/2022-02:35:35] [V] [TRT] *************** Autotuning Reformat: Float(311296,1:4,4864,76) -> Float(1245184,4096,64,1) *************** [04/18/2022-02:35:35] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:35] [V] [TRT] Tactic: 1002 Time: 4.26074 [04/18/2022-02:35:35] [V] [TRT] Tactic: 0 Time: 4.8768 [04/18/2022-02:35:35] [V] [TRT] Fastest Tactic: 1002 Time: 4.26074 [04/18/2022-02:35:35] [V] [TRT] *************** Autotuning Reformat: Float(311296,1:4,4864,76) -> Float(1245184,1,19456,304) *************** [04/18/2022-02:35:35] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:35] [V] [TRT] Tactic: 1002 Time: 4.37427 [04/18/2022-02:35:36] [V] [TRT] Tactic: 0 Time: 4.45453 [04/18/2022-02:35:36] [V] [TRT] Fastest Tactic: 1002 Time: 4.37427 [04/18/2022-02:35:36] [V] [TRT] *************** Autotuning Reformat: Float(311296,1:4,4864,76) -> Float(40960,4096:32,64,1) *************** [04/18/2022-02:35:36] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) 
[04/18/2022-02:35:36] [V] [TRT] Tactic: 1002 Time: 4.40512 [04/18/2022-02:35:36] [V] [TRT] Tactic: 0 Time: 6.02714 [04/18/2022-02:35:36] [V] [TRT] Fastest Tactic: 1002 Time: 4.40512 [04/18/2022-02:35:36] [V] [TRT] *************** Autotuning Reformat: Float(311296,1:4,4864,76) -> Float(1:4,8192,128,2) *************** [04/18/2022-02:35:36] [V] [TRT] *************** Autotuning Reformat: Float(40960,4096:32,64,1) -> Float(1245184,4096,64,1) *************** [04/18/2022-02:35:36] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:36] [V] [TRT] Tactic: 1002 Time: 4.72243 [04/18/2022-02:35:36] [V] [TRT] Tactic: 0 Time: 4.36058 [04/18/2022-02:35:36] [V] [TRT] Fastest Tactic: 0 Time: 4.36058 [04/18/2022-02:35:36] [V] [TRT] *************** Autotuning Reformat: Float(40960,4096:32,64,1) -> Float(1245184,1,19456,304) *************** [04/18/2022-02:35:36] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:36] [V] [TRT] Tactic: 1002 Time: 4.73011 [04/18/2022-02:35:36] [V] [TRT] Tactic: 0 Time: 4.41498 [04/18/2022-02:35:36] [V] [TRT] Fastest Tactic: 0 Time: 4.41498 [04/18/2022-02:35:36] [V] [TRT] *************** Autotuning Reformat: Float(40960,4096:32,64,1) -> Float(311296,1:4,4864,76) *************** [04/18/2022-02:35:36] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:36] [V] [TRT] Tactic: 1002 Time: 4.72832 [04/18/2022-02:35:36] [V] [TRT] Tactic: 0 Time: 4.51238 [04/18/2022-02:35:36] [V] [TRT] Fastest Tactic: 0 Time: 4.51238 [04/18/2022-02:35:36] [V] [TRT] *************** Autotuning Reformat: Float(40960,4096:32,64,1) -> Float(1:4,8192,128,2) *************** 
[04/18/2022-02:35:36] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(1245184,4096,64,1) *************** [04/18/2022-02:35:36] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:36] [V] [TRT] Tactic: 1002 Time: 6.29645 [04/18/2022-02:35:37] [V] [TRT] Tactic: 0 Time: 19.296 [04/18/2022-02:35:37] [V] [TRT] Fastest Tactic: 1002 Time: 6.29645 [04/18/2022-02:35:37] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(1245184,1,19456,304) *************** [04/18/2022-02:35:37] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(311296,1:4,4864,76) *************** [04/18/2022-02:35:37] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(40960,4096:32,64,1) *************** [04/18/2022-02:35:37] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:37] [V] [TRT] *************** Autotuning Reformat: Float(983040,4096,64,1) -> Float(983040,1,15360,240) *************** [04/18/2022-02:35:37] [V] [TRT] *************** Autotuning Reformat: Float(983040,4096,64,1) -> Float(245760,1:4,3840,60) *************** [04/18/2022-02:35:37] [V] [TRT] *************** Autotuning Reformat: Float(983040,1,15360,240) -> Float(983040,4096,64,1) *************** [04/18/2022-02:35:37] [V] [TRT] *************** Autotuning Reformat: Float(983040,1,15360,240) -> Float(245760,1:4,3840,60) *************** [04/18/2022-02:35:37] [V] [TRT] *************** Autotuning Reformat: Float(245760,1:4,3840,60) -> Float(983040,4096,64,1) *************** [04/18/2022-02:35:37] [V] [TRT] *************** Autotuning Reformat: Float(245760,1:4,3840,60) -> Float(983040,1,15360,240) *************** [04/18/2022-02:35:37] [V] [TRT] *************** Autotuning Reformat: Float(32768,4096:32,64,1) -> Float(983040,4096,64,1) *************** [04/18/2022-02:35:37] [V] [TRT] 
*************** Autotuning Reformat: Float(32768,4096:32,64,1) -> Float(983040,1,15360,240) *************** [04/18/2022-02:35:37] [V] [TRT] *************** Autotuning Reformat: Float(32768,4096:32,64,1) -> Float(245760,1:4,3840,60) *************** [04/18/2022-02:35:37] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(983040,4096,64,1) *************** [04/18/2022-02:35:37] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(983040,1,15360,240) *************** [04/18/2022-02:35:37] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(245760,1:4,3840,60) *************** [04/18/2022-02:35:37] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:37] [V] [TRT] *************** Autotuning Reformat: Float(245760,1024,32,1) -> Float(245760,1,7680,240) *************** [04/18/2022-02:35:37] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:37] [V] [TRT] Tactic: 1002 Time: 1.59616 [04/18/2022-02:35:37] [V] [TRT] Tactic: 0 Time: 1.12947 [04/18/2022-02:35:37] [V] [TRT] Fastest Tactic: 0 Time: 1.12947 [04/18/2022-02:35:37] [V] [TRT] *************** Autotuning Reformat: Float(245760,1024,32,1) -> Float(61440,1:4,1920,60) *************** [04/18/2022-02:35:37] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:37] [V] [TRT] Tactic: 1002 Time: 1.08518 [04/18/2022-02:35:37] [V] [TRT] Tactic: 0 Time: 1.73965 [04/18/2022-02:35:37] [V] [TRT] Fastest Tactic: 1002 Time: 1.08518 [04/18/2022-02:35:37] [V] [TRT] *************** Autotuning Reformat: Float(245760,1024,32,1) -> Float(8192,1024:32,32,1) *************** [04/18/2022-02:35:37] [V] [TRT] --------------- Timing Runner: Optimizer 
Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:37] [V] [TRT] Tactic: 1002 Time: 1.30189 [04/18/2022-02:35:37] [V] [TRT] Tactic: 0 Time: 1.80672 [04/18/2022-02:35:37] [V] [TRT] Fastest Tactic: 1002 Time: 1.30189 [04/18/2022-02:35:37] [V] [TRT] *************** Autotuning Reformat: Float(245760,1024,32,1) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:35:37] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:37] [V] [TRT] Tactic: 1002 Time: 1.14752 [04/18/2022-02:35:37] [V] [TRT] Tactic: 0 Time: 1.6343 [04/18/2022-02:35:37] [V] [TRT] Fastest Tactic: 1002 Time: 1.14752 [04/18/2022-02:35:37] [V] [TRT] *************** Autotuning Reformat: Float(245760,1,7680,240) -> Float(245760,1024,32,1) *************** [04/18/2022-02:35:37] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:37] [V] [TRT] Tactic: 1002 Time: 1.03603 [04/18/2022-02:35:37] [V] [TRT] Tactic: 0 Time: 1.69741 [04/18/2022-02:35:37] [V] [TRT] Fastest Tactic: 1002 Time: 1.03603 [04/18/2022-02:35:37] [V] [TRT] *************** Autotuning Reformat: Float(245760,1,7680,240) -> Float(61440,1:4,1920,60) *************** [04/18/2022-02:35:37] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:37] [V] [TRT] Tactic: 1002 Time: 1.76307 [04/18/2022-02:35:37] [V] [TRT] Tactic: 0 Time: 1.1447 [04/18/2022-02:35:37] [V] [TRT] Fastest Tactic: 0 Time: 1.1447 [04/18/2022-02:35:37] [V] [TRT] *************** Autotuning Reformat: Float(245760,1,7680,240) -> Float(8192,1024:32,32,1) *************** 
[04/18/2022-02:35:37] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:37] [V] [TRT] Tactic: 1002 Time: 1.07558 [04/18/2022-02:35:37] [V] [TRT] Tactic: 0 Time: 1.45856 [04/18/2022-02:35:37] [V] [TRT] Fastest Tactic: 1002 Time: 1.07558 [04/18/2022-02:35:37] [V] [TRT] *************** Autotuning Reformat: Float(245760,1,7680,240) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:35:37] [V] [TRT] *************** Autotuning Reformat: Float(61440,1:4,1920,60) -> Float(245760,1024,32,1) *************** [04/18/2022-02:35:37] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:37] [V] [TRT] Tactic: 1002 Time: 1.08518 [04/18/2022-02:35:37] [V] [TRT] Tactic: 0 Time: 1.16198 [04/18/2022-02:35:37] [V] [TRT] Fastest Tactic: 1002 Time: 1.08518 [04/18/2022-02:35:37] [V] [TRT] *************** Autotuning Reformat: Float(61440,1:4,1920,60) -> Float(245760,1,7680,240) *************** [04/18/2022-02:35:37] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:37] [V] [TRT] Tactic: 1002 Time: 1.03616 [04/18/2022-02:35:37] [V] [TRT] Tactic: 0 Time: 1.03936 [04/18/2022-02:35:37] [V] [TRT] Fastest Tactic: 1002 Time: 1.03616 [04/18/2022-02:35:37] [V] [TRT] *************** Autotuning Reformat: Float(61440,1:4,1920,60) -> Float(8192,1024:32,32,1) *************** [04/18/2022-02:35:37] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:37] [V] [TRT] Tactic: 1002 Time: 1.05472 [04/18/2022-02:35:38] [V] [TRT] Tactic: 0 Time: 
1.49875 [04/18/2022-02:35:38] [V] [TRT] Fastest Tactic: 1002 Time: 1.05472 [04/18/2022-02:35:38] [V] [TRT] *************** Autotuning Reformat: Float(61440,1:4,1920,60) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:35:38] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024:32,32,1) -> Float(245760,1024,32,1) *************** [04/18/2022-02:35:38] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:38] [V] [TRT] Tactic: 1002 Time: 1.11603 [04/18/2022-02:35:38] [V] [TRT] Tactic: 0 Time: 1.1625 [04/18/2022-02:35:38] [V] [TRT] Fastest Tactic: 1002 Time: 1.11603 [04/18/2022-02:35:38] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024:32,32,1) -> Float(245760,1,7680,240) *************** [04/18/2022-02:35:38] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:38] [V] [TRT] Tactic: 1002 Time: 1.11475 [04/18/2022-02:35:38] [V] [TRT] Tactic: 0 Time: 1.80211 [04/18/2022-02:35:38] [V] [TRT] Fastest Tactic: 1002 Time: 1.11475 [04/18/2022-02:35:38] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024:32,32,1) -> Float(61440,1:4,1920,60) *************** [04/18/2022-02:35:38] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:38] [V] [TRT] Tactic: 1002 Time: 1.03168 [04/18/2022-02:35:38] [V] [TRT] Tactic: 0 Time: 1.1465 [04/18/2022-02:35:38] [V] [TRT] Fastest Tactic: 1002 Time: 1.03168 [04/18/2022-02:35:38] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024:32,32,1) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:35:38] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> 
Float(245760,1024,32,1) *************** [04/18/2022-02:35:38] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:38] [V] [TRT] Tactic: 1002 Time: 1.5561 [04/18/2022-02:35:38] [V] [TRT] Tactic: 0 Time: 4.79744 [04/18/2022-02:35:38] [V] [TRT] Fastest Tactic: 1002 Time: 1.5561 [04/18/2022-02:35:38] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(245760,1,7680,240) *************** [04/18/2022-02:35:38] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(61440,1:4,1920,60) *************** [04/18/2022-02:35:38] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(8192,1024:32,32,1) *************** [04/18/2022-02:35:38] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:38] [V] [TRT] *************** Autotuning Reformat: Float(245760,1024,32,1) -> Float(245760,1,7680,240) *************** [04/18/2022-02:35:38] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:38] [V] [TRT] Tactic: 1002 Time: 1.12525 [04/18/2022-02:35:38] [V] [TRT] Tactic: 0 Time: 1.11834 [04/18/2022-02:35:38] [V] [TRT] Fastest Tactic: 0 Time: 1.11834 [04/18/2022-02:35:38] [V] [TRT] *************** Autotuning Reformat: Float(245760,1024,32,1) -> Float(61440,1:4,1920,60) *************** [04/18/2022-02:35:38] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:38] [V] [TRT] Tactic: 1002 Time: 1.03629 [04/18/2022-02:35:38] [V] [TRT] Tactic: 0 Time: 1.73286 [04/18/2022-02:35:38] [V] [TRT] Fastest Tactic: 1002 Time: 1.03629 [04/18/2022-02:35:38] [V] [TRT] *************** Autotuning Reformat: 
Float(245760,1024,32,1) -> Float(8192,1024:32,32,1) *************** [04/18/2022-02:35:38] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:38] [V] [TRT] Tactic: 1002 Time: 1.83002 [04/18/2022-02:35:38] [V] [TRT] Tactic: 0 Time: 1.09312 [04/18/2022-02:35:38] [V] [TRT] Fastest Tactic: 0 Time: 1.09312 [04/18/2022-02:35:38] [V] [TRT] *************** Autotuning Reformat: Float(245760,1024,32,1) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:35:38] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:38] [V] [TRT] Tactic: 1002 Time: 1.09274 [04/18/2022-02:35:38] [V] [TRT] Tactic: 0 Time: 1.17171 [04/18/2022-02:35:38] [V] [TRT] Fastest Tactic: 1002 Time: 1.09274 [04/18/2022-02:35:38] [V] [TRT] *************** Autotuning Reformat: Float(245760,1,7680,240) -> Float(245760,1024,32,1) *************** [04/18/2022-02:35:38] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:38] [V] [TRT] Tactic: 1002 Time: 1.29843 [04/18/2022-02:35:38] [V] [TRT] Tactic: 0 Time: 1.14419 [04/18/2022-02:35:38] [V] [TRT] Fastest Tactic: 0 Time: 1.14419 [04/18/2022-02:35:38] [V] [TRT] *************** Autotuning Reformat: Float(245760,1,7680,240) -> Float(61440,1:4,1920,60) *************** [04/18/2022-02:35:38] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:38] [V] [TRT] Tactic: 1002 Time: 1.0409 [04/18/2022-02:35:38] [V] [TRT] Tactic: 0 Time: 1.04755 [04/18/2022-02:35:38] [V] [TRT] Fastest Tactic: 1002 Time: 1.0409 [04/18/2022-02:35:38] [V] 
[TRT] *************** Autotuning Reformat: Float(245760,1,7680,240) -> Float(8192,1024:32,32,1) *************** [04/18/2022-02:35:38] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:38] [V] [TRT] Tactic: 1002 Time: 1.14368 [04/18/2022-02:35:38] [V] [TRT] Tactic: 0 Time: 2.29107 [04/18/2022-02:35:38] [V] [TRT] Fastest Tactic: 1002 Time: 1.14368 [04/18/2022-02:35:38] [V] [TRT] *************** Autotuning Reformat: Float(245760,1,7680,240) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:35:38] [V] [TRT] *************** Autotuning Reformat: Float(61440,1:4,1920,60) -> Float(245760,1024,32,1) *************** [04/18/2022-02:35:38] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:38] [V] [TRT] Tactic: 1002 Time: 1.83923 [04/18/2022-02:35:38] [V] [TRT] Tactic: 0 Time: 1.15738 [04/18/2022-02:35:38] [V] [TRT] Fastest Tactic: 0 Time: 1.15738 [04/18/2022-02:35:38] [V] [TRT] *************** Autotuning Reformat: Float(61440,1:4,1920,60) -> Float(245760,1,7680,240) *************** [04/18/2022-02:35:38] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:38] [V] [TRT] Tactic: 1002 Time: 1.45741 [04/18/2022-02:35:39] [V] [TRT] Tactic: 0 Time: 1.03782 [04/18/2022-02:35:39] [V] [TRT] Fastest Tactic: 0 Time: 1.03782 [04/18/2022-02:35:39] [V] [TRT] *************** Autotuning Reformat: Float(61440,1:4,1920,60) -> Float(8192,1024:32,32,1) *************** [04/18/2022-02:35:39] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:39] 
[V] [TRT] Tactic: 1002 Time: 1.12064
[04/18/2022-02:35:39] [V] [TRT] Tactic: 0 Time: 1.85574
[04/18/2022-02:35:39] [V] [TRT] Fastest Tactic: 1002 Time: 1.12064
[04/18/2022-02:35:39] [V] [TRT] *************** Autotuning Reformat: Float(61440,1:4,1920,60) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:35:39] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024:32,32,1) -> Float(245760,1024,32,1) ***************
[04/18/2022-02:35:39] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:39] [V] [TRT] Tactic: 1002 Time: 1.00442
[04/18/2022-02:35:39] [V] [TRT] Tactic: 0 Time: 1.15635
[04/18/2022-02:35:39] [V] [TRT] Fastest Tactic: 1002 Time: 1.00442
[04/18/2022-02:35:39] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024:32,32,1) -> Float(245760,1,7680,240) ***************
[04/18/2022-02:35:39] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:39] [V] [TRT] Tactic: 1002 Time: 1.09965
[04/18/2022-02:35:39] [V] [TRT] Tactic: 0 Time: 1.0999
[04/18/2022-02:35:39] [V] [TRT] Fastest Tactic: 1002 Time: 1.09965
[04/18/2022-02:35:39] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024:32,32,1) -> Float(61440,1:4,1920,60) ***************
[04/18/2022-02:35:39] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:39] [V] [TRT] Tactic: 1002 Time: 1.01786
[04/18/2022-02:35:39] [V] [TRT] Tactic: 0 Time: 1.79584
[04/18/2022-02:35:39] [V] [TRT] Fastest Tactic: 1002 Time: 1.01786
[04/18/2022-02:35:39] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024:32,32,1) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:35:39] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(245760,1024,32,1) ***************
[04/18/2022-02:35:39] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:39] [V] [TRT] Tactic: 1002 Time: 1.55482
[04/18/2022-02:35:39] [V] [TRT] Tactic: 0 Time: 4.7113
[04/18/2022-02:35:39] [V] [TRT] Fastest Tactic: 1002 Time: 1.55482
[04/18/2022-02:35:39] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(245760,1,7680,240) ***************
[04/18/2022-02:35:39] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(61440,1:4,1920,60) ***************
[04/18/2022-02:35:39] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(8192,1024:32,32,1) ***************
[04/18/2022-02:35:39] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:39] [V] [TRT] *************** Autotuning Reformat: Float(1245184,4096,64,1) -> Float(1245184,1,19456,304) ***************
[04/18/2022-02:35:39] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:39] [V] [TRT] Tactic: 1002 Time: 1.1639
[04/18/2022-02:35:39] [V] [TRT] Tactic: 0 Time: 1.22291
[04/18/2022-02:35:39] [V] [TRT] Fastest Tactic: 1002 Time: 1.1639
[04/18/2022-02:35:39] [V] [TRT] *************** Autotuning Reformat: Float(1245184,4096,64,1) -> Float(311296,1:4,4864,76) ***************
[04/18/2022-02:35:39] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:39] [V] [TRT] Tactic: 1002 Time: 1.17427
[04/18/2022-02:35:39] [V] [TRT] Tactic: 0 Time: 1.19808
[04/18/2022-02:35:39] [V] [TRT] Fastest Tactic: 1002 Time: 1.17427
[04/18/2022-02:35:39] [V] [TRT] *************** Autotuning Reformat: Float(1245184,4096,64,1) -> Float(40960,4096:32,64,1) ***************
[04/18/2022-02:35:39] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:39] [V] [TRT] Tactic: 1002 Time: 1.18387
[04/18/2022-02:35:39] [V] [TRT] Tactic: 0 Time: 1.28038
[04/18/2022-02:35:39] [V] [TRT] Fastest Tactic: 1002 Time: 1.18387
[04/18/2022-02:35:39] [V] [TRT] *************** Autotuning Reformat: Float(1245184,1,19456,304) -> Float(1245184,4096,64,1) ***************
[04/18/2022-02:35:39] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:39] [V] [TRT] Tactic: 1002 Time: 2.05581
[04/18/2022-02:35:39] [V] [TRT] Tactic: 0 Time: 1.14112
[04/18/2022-02:35:39] [V] [TRT] Fastest Tactic: 0 Time: 1.14112
[04/18/2022-02:35:39] [V] [TRT] *************** Autotuning Reformat: Float(1245184,1,19456,304) -> Float(311296,1:4,4864,76) ***************
[04/18/2022-02:35:39] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:39] [V] [TRT] Tactic: 1002 Time: 1.64672
[04/18/2022-02:35:39] [V] [TRT] Tactic: 0 Time: 1.17274
[04/18/2022-02:35:39] [V] [TRT] Fastest Tactic: 0 Time: 1.17274
[04/18/2022-02:35:39] [V] [TRT] *************** Autotuning Reformat: Float(1245184,1,19456,304) -> Float(40960,4096:32,64,1) ***************
[04/18/2022-02:35:39] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:39] [V] [TRT] Tactic: 1002 Time: 1.18656
[04/18/2022-02:35:39] [V] [TRT] Tactic: 0 Time: 1.59245
[04/18/2022-02:35:39] [V] [TRT] Fastest Tactic: 1002 Time: 1.18656
[04/18/2022-02:35:39] [V] [TRT] *************** Autotuning Reformat: Float(311296,1:4,4864,76) -> Float(1245184,4096,64,1) ***************
[04/18/2022-02:35:39] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:39] [V] [TRT] Tactic: 1002 Time: 1.10899
[04/18/2022-02:35:39] [V] [TRT] Tactic: 0 Time: 1.13677
[04/18/2022-02:35:39] [V] [TRT] Fastest Tactic: 1002 Time: 1.10899
[04/18/2022-02:35:39] [V] [TRT] *************** Autotuning Reformat: Float(311296,1:4,4864,76) -> Float(1245184,1,19456,304) ***************
[04/18/2022-02:35:39] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:40] [V] [TRT] Tactic: 1002 Time: 1.18362
[04/18/2022-02:35:40] [V] [TRT] Tactic: 0 Time: 1.17235
[04/18/2022-02:35:40] [V] [TRT] Fastest Tactic: 0 Time: 1.17235
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(311296,1:4,4864,76) -> Float(40960,4096:32,64,1) ***************
[04/18/2022-02:35:40] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:40] [V] [TRT] Tactic: 1002 Time: 1.10426
[04/18/2022-02:35:40] [V] [TRT] Tactic: 0 Time: 1.59014
[04/18/2022-02:35:40] [V] [TRT] Fastest Tactic: 1002 Time: 1.10426
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(40960,4096:32,64,1) -> Float(1245184,4096,64,1) ***************
[04/18/2022-02:35:40] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:40] [V] [TRT] Tactic: 1002 Time: 1.16314
[04/18/2022-02:35:40] [V] [TRT] Tactic: 0 Time: 1.23456
[04/18/2022-02:35:40] [V] [TRT] Fastest Tactic: 1002 Time: 1.16314
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(40960,4096:32,64,1) -> Float(1245184,1,19456,304) ***************
[04/18/2022-02:35:40] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:40] [V] [TRT] Tactic: 1002 Time: 1.08774
[04/18/2022-02:35:40] [V] [TRT] Tactic: 0 Time: 1.13869
[04/18/2022-02:35:40] [V] [TRT] Fastest Tactic: 1002 Time: 1.08774
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(40960,4096:32,64,1) -> Float(311296,1:4,4864,76) ***************
[04/18/2022-02:35:40] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:40] [V] [TRT] Tactic: 1002 Time: 1.56557
[04/18/2022-02:35:40] [V] [TRT] Tactic: 0 Time: 1.22022
[04/18/2022-02:35:40] [V] [TRT] Fastest Tactic: 0 Time: 1.22022
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(1245184,4096,64,1) ***************
[04/18/2022-02:35:40] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_0_up_lvl_3/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:40] [V] [TRT] Tactic: 1002 Time: 1.71994
[04/18/2022-02:35:40] [V] [TRT] Tactic: 0 Time: 4.98445
[04/18/2022-02:35:40] [V] [TRT] Fastest Tactic: 1002 Time: 1.71994
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(1245184,1,19456,304) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(311296,1:4,4864,76) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(40960,4096:32,64,1) ***************
[04/18/2022-02:35:40] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(245760,1,7680,240) -> Float(245760,1024,32,1) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(61440,1:4,1920,60) -> Float(245760,1024,32,1) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024:32,32,1) -> Float(245760,1024,32,1) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(245760,1024,32,1) ***************
[04/18/2022-02:35:40] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(240,1,1,1) -> Float(240,1,240,240) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(240,1,1,1) -> Float(60,1:4,60,60) ***************
[04/18/2022-02:35:40] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(240,1,1,1) -> Float(240,1,240,240) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(240,1,1,1) -> Float(60,1:4,60,60) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(240,1,240,240) -> Float(240,1,1,1) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(240,1,240,240) -> Float(60,1:4,60,60) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(60,1:4,60,60) -> Float(240,1,1,1) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(60,1:4,60,60) -> Float(240,1,240,240) ***************
[04/18/2022-02:35:40] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(10,1,1,1) -> Float(10,1,10,10) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(10,1,1,1) -> Float(3,1:4,3,3) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(10,1,1,1) -> Float(1,1:32,1,1) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(10,1,1,1) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(10,1,10,10) -> Float(10,1,1,1) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(10,1,10,10) -> Float(3,1:4,3,3) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(10,1,10,10) -> Float(1,1:32,1,1) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(10,1,10,10) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(3,1:4,3,3) -> Float(10,1,1,1) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(3,1:4,3,3) -> Float(10,1,10,10) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(3,1:4,3,3) -> Float(1,1:32,1,1) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(3,1:4,3,3) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(10,1,1,1) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(10,1,10,10) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(3,1:4,3,3) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(10,1,1,1) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(10,1,10,10) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(3,1:4,3,3) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(1,1:32,1,1) ***************
[04/18/2022-02:35:40] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(10,1,1,1) -> Float(10,1,10,10) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(10,1,1,1) -> Float(3,1:4,3,3) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(10,1,10,10) -> Float(10,1,1,1) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(10,1,10,10) -> Float(3,1:4,3,3) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(3,1:4,3,3) -> Float(10,1,1,1) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(3,1:4,3,3) -> Float(10,1,10,10) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(10,1,1,1) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(10,1,10,10) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(3,1:4,3,3) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(10,1,1,1) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(10,1,10,10) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(3,1:4,3,3) ***************
[04/18/2022-02:35:40] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(240,1,1,1) -> Float(240,1,240,240) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(240,1,1,1) -> Float(60,1:4,60,60) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(240,1,1,1) -> Float(8,1:32,1,1) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(240,1,1,1) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(240,1,240,240) -> Float(240,1,1,1) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(240,1,240,240) -> Float(60,1:4,60,60) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(240,1,240,240) -> Float(8,1:32,1,1) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(240,1,240,240) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(60,1:4,60,60) -> Float(240,1,1,1) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(60,1:4,60,60) -> Float(240,1,240,240) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(60,1:4,60,60) -> Float(8,1:32,1,1) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(60,1:4,60,60) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(8,1:32,1,1) -> Float(240,1,1,1) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(8,1:32,1,1) -> Float(240,1,240,240) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(8,1:32,1,1) -> Float(60,1:4,60,60) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(8,1:32,1,1) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(240,1,1,1) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(240,1,240,240) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(60,1:4,60,60) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(8,1:32,1,1) ***************
[04/18/2022-02:35:40] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(245760,1024,32,1) -> Float(245760,1,7680,240) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(245760,1024,32,1) -> Float(61440,1:4,1920,60) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(245760,1024,32,1) -> Float(8192,1024:32,32,1) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(245760,1024,32,1) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(245760,1,7680,240) -> Float(245760,1024,32,1) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(245760,1,7680,240) -> Float(61440,1:4,1920,60) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(245760,1,7680,240) -> Float(8192,1024:32,32,1) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(245760,1,7680,240) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(61440,1:4,1920,60) -> Float(245760,1024,32,1) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(61440,1:4,1920,60) -> Float(245760,1,7680,240) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(61440,1:4,1920,60) -> Float(8192,1024:32,32,1) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(61440,1:4,1920,60) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024:32,32,1) -> Float(245760,1024,32,1) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024:32,32,1) -> Float(245760,1,7680,240) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024:32,32,1) -> Float(61440,1:4,1920,60) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024:32,32,1) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(245760,1024,32,1) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(245760,1,7680,240) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(61440,1:4,1920,60) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(8192,1024:32,32,1) ***************
[04/18/2022-02:35:40] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(245760,1024,32,1) -> Float(245760,1,7680,240) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(245760,1024,32,1) -> Float(61440,1:4,1920,60) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(245760,1,7680,240) -> Float(245760,1024,32,1) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(245760,1,7680,240) -> Float(61440,1:4,1920,60) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(61440,1:4,1920,60) -> Float(245760,1024,32,1) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(61440,1:4,1920,60) -> Float(245760,1,7680,240) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024:32,32,1) -> Float(245760,1024,32,1) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024:32,32,1) -> Float(245760,1,7680,240) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024:32,32,1) -> Float(61440,1:4,1920,60) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(245760,1024,32,1) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(245760,1,7680,240) ***************
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(61440,1:4,1920,60) ***************
[04/18/2022-02:35:40] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(81920,1024,32,1) -> Float(81920,1,2560,80) ***************
[04/18/2022-02:35:40] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/project_bn/FusedBatchNormV3:0) (Reformat)
[04/18/2022-02:35:40] [V] [TRT] Tactic: 1002 Time: 0.3072
[04/18/2022-02:35:40] [V] [TRT] Tactic: 0 Time: 0.364416
[04/18/2022-02:35:40] [V] [TRT] Fastest Tactic: 1002 Time: 0.3072
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(81920,1024,32,1) -> Float(20480,1:4,640,20) ***************
[04/18/2022-02:35:40] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/project_bn/FusedBatchNormV3:0) (Reformat)
[04/18/2022-02:35:40] [V] [TRT] Tactic: 1002 Time: 0.321792
[04/18/2022-02:35:40] [V] [TRT] Tactic: 0 Time: 0.363264
[04/18/2022-02:35:40] [V] [TRT] Fastest Tactic: 1002 Time: 0.321792
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(81920,1,2560,80) -> Float(81920,1024,32,1) ***************
[04/18/2022-02:35:40] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/project_bn/FusedBatchNormV3:0) (Reformat)
[04/18/2022-02:35:40] [V] [TRT] Tactic: 1002 Time: 0.345216
[04/18/2022-02:35:40] [V] [TRT] Tactic: 0 Time: 0.359168
[04/18/2022-02:35:40] [V] [TRT] Fastest Tactic: 1002 Time: 0.345216
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(81920,1,2560,80) -> Float(20480,1:4,640,20) ***************
[04/18/2022-02:35:40] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/project_bn/FusedBatchNormV3:0) (Reformat)
[04/18/2022-02:35:40] [V] [TRT] Tactic: 1002 Time: 0.342144
[04/18/2022-02:35:40] [V] [TRT] Tactic: 0 Time: 0.339456
[04/18/2022-02:35:40] [V] [TRT] Fastest Tactic: 0 Time: 0.339456
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(20480,1:4,640,20) -> Float(81920,1024,32,1) ***************
[04/18/2022-02:35:40] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/project_bn/FusedBatchNormV3:0) (Reformat)
[04/18/2022-02:35:40] [V] [TRT] Tactic: 1002 Time: 0.343424
[04/18/2022-02:35:40] [V] [TRT] Tactic: 0 Time: 0.360064
[04/18/2022-02:35:40] [V] [TRT] Fastest Tactic: 1002 Time: 0.343424
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(20480,1:4,640,20) -> Float(81920,1,2560,80) ***************
[04/18/2022-02:35:40] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/project_bn/FusedBatchNormV3:0) (Reformat)
[04/18/2022-02:35:40] [V] [TRT] Tactic: 1002 Time: 0.342784
[04/18/2022-02:35:40] [V] [TRT] Tactic: 0 Time: 0.340736
[04/18/2022-02:35:40] [V] [TRT] Fastest Tactic: 0 Time: 0.340736
[04/18/2022-02:35:40] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(81920,1024,32,1) -> Float(81920,1,2560,80) ***************
[04/18/2022-02:35:40] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/project_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:40] [V] [TRT] Tactic: 1002 Time: 0.32256
[04/18/2022-02:35:40] [V] [TRT] Tactic: 0 Time: 0.357632
[04/18/2022-02:35:40] [V] [TRT] Fastest Tactic: 1002 Time: 0.32256
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(81920,1024,32,1) -> Float(20480,1:4,640,20) ***************
[04/18/2022-02:35:40] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/project_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:40] [V] [TRT] Tactic: 1002 Time: 0.320768
[04/18/2022-02:35:40] [V] [TRT] Tactic: 0 Time: 0.362496
[04/18/2022-02:35:40] [V] [TRT] Fastest Tactic: 1002 Time: 0.320768
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(81920,1,2560,80) -> Float(81920,1024,32,1) ***************
[04/18/2022-02:35:40] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/project_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:40] [V] [TRT] Tactic: 1002 Time: 0.345984
[04/18/2022-02:35:40] [V] [TRT] Tactic: 0 Time: 0.359552
[04/18/2022-02:35:40] [V] [TRT] Fastest Tactic: 1002 Time: 0.345984
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(81920,1,2560,80) -> Float(20480,1:4,640,20) ***************
[04/18/2022-02:35:40] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/project_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:40] [V] [TRT] Tactic: 1002 Time: 0.342528
[04/18/2022-02:35:40] [V] [TRT] Tactic: 0 Time: 0.34304
[04/18/2022-02:35:40] [V] [TRT] Fastest Tactic: 1002 Time: 0.342528
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(20480,1:4,640,20) -> Float(81920,1024,32,1) ***************
[04/18/2022-02:35:40] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/project_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:40] [V] [TRT] Tactic: 1002 Time: 0.345088
[04/18/2022-02:35:40] [V] [TRT] Tactic: 0 Time: 0.362624
[04/18/2022-02:35:40] [V] [TRT] Fastest Tactic: 1002 Time: 0.345088
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(20480,1:4,640,20) -> Float(81920,1,2560,80) ***************
[04/18/2022-02:35:40] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_0/project_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:40] [V] [TRT] Tactic: 1002 Time: 0.33984
[04/18/2022-02:35:40] [V] [TRT] Tactic: 0 Time: 0.341504
[04/18/2022-02:35:40] [V] [TRT] Fastest Tactic: 1002 Time: 0.33984
[04/18/2022-02:35:40] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:40] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(491520,1,15360,480) ***************
[04/18/2022-02:35:40] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:40] [V] [TRT] Tactic: 1002 Time: 2.66253
[04/18/2022-02:35:41] [V] [TRT] Tactic: 0 Time: 2.61542
[04/18/2022-02:35:41] [V] [TRT] Fastest Tactic: 0 Time: 2.61542
[04/18/2022-02:35:41] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(122880,1:4,3840,120) ***************
[04/18/2022-02:35:41] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:41] [V] [TRT] Tactic: 1002 Time: 3.26861
[04/18/2022-02:35:41] [V] [TRT] Tactic: 0 Time: 2.66675
[04/18/2022-02:35:41] [V] [TRT] Fastest Tactic: 0 Time: 2.66675
[04/18/2022-02:35:41] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(15360,1024:32,32,1) ***************
[04/18/2022-02:35:41] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:41] [V] [TRT] Tactic: 1002 Time: 2.11136
[04/18/2022-02:35:41] [V] [TRT] Tactic: 0 Time: 2.11213
[04/18/2022-02:35:41] [V] [TRT] Fastest Tactic: 1002 Time: 2.11136
[04/18/2022-02:35:41] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:35:41] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:41] [V] [TRT] Tactic: 1002 Time: 3.08902
[04/18/2022-02:35:41] [V] [TRT] Tactic: 0 Time: 2.2016
[04/18/2022-02:35:41] [V] [TRT] Fastest Tactic: 0 Time: 2.2016
[04/18/2022-02:35:41] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(491520,1024,32,1) ***************
[04/18/2022-02:35:41] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:41] [V] [TRT] Tactic: 1002 Time: 2.10458
[04/18/2022-02:35:41] [V] [TRT] Tactic: 0 Time: 2.30592
[04/18/2022-02:35:41] [V] [TRT] Fastest Tactic: 1002 Time: 2.10458
[04/18/2022-02:35:41] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(122880,1:4,3840,120) ***************
[04/18/2022-02:35:41] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:41] [V] [TRT] Tactic: 1002 Time: 2.11264
[04/18/2022-02:35:41] [V] [TRT] Tactic: 0 Time: 2.14861
[04/18/2022-02:35:41] [V] [TRT] Fastest Tactic: 1002 Time: 2.11264
[04/18/2022-02:35:41] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(15360,1024:32,32,1) ***************
[04/18/2022-02:35:41] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:41] [V] [TRT] Tactic: 1002 Time: 2.64166
[04/18/2022-02:35:41] [V] [TRT] Tactic: 0 Time: 3.17376
[04/18/2022-02:35:41] [V] [TRT] Fastest Tactic: 1002 Time: 2.64166
[04/18/2022-02:35:41] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:35:41] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(491520,1024,32,1) ***************
[04/18/2022-02:35:41] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:41] [V] [TRT] Tactic: 1002 Time: 3.21664
[04/18/2022-02:35:41] [V] [TRT] Tactic: 0 Time: 2.20186
[04/18/2022-02:35:41] [V] [TRT] Fastest Tactic: 0 Time: 2.20186
[04/18/2022-02:35:41] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(491520,1,15360,480) ***************
[04/18/2022-02:35:41] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:41] [V] [TRT] Tactic: 1002 Time: 2.13414
[04/18/2022-02:35:41] [V] [TRT] Tactic: 0 Time: 2.19443
[04/18/2022-02:35:41] [V] [TRT] Fastest Tactic: 1002 Time: 2.13414
[04/18/2022-02:35:41] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(15360,1024:32,32,1) ***************
[04/18/2022-02:35:41] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:42] [V] [TRT] Tactic: 1002 Time: 3.2416
[04/18/2022-02:35:42] [V] [TRT] Tactic: 0 Time: 2.81894
[04/18/2022-02:35:42] [V] [TRT] Fastest Tactic: 0 Time: 2.81894
[04/18/2022-02:35:42] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:35:42] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(491520,1024,32,1) ***************
[04/18/2022-02:35:42] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:42] [V] [TRT] Tactic: 1002 Time: 2.07488
[04/18/2022-02:35:42] [V] [TRT] Tactic: 0 Time: 2.23078
[04/18/2022-02:35:42] [V] [TRT] Fastest Tactic: 1002 Time: 2.07488
[04/18/2022-02:35:42] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(491520,1,15360,480) ***************
[04/18/2022-02:35:42] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:42] [V] [TRT] Tactic: 1002 Time: 2.12429
[04/18/2022-02:35:42] [V] [TRT] Tactic: 0 Time: 2.16384
[04/18/2022-02:35:42] [V] [TRT] Fastest Tactic: 1002 Time: 2.12429
[04/18/2022-02:35:42] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(122880,1:4,3840,120) ***************
[04/18/2022-02:35:42] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:42] [V] [TRT] Tactic: 1002 Time: 2.03507
[04/18/2022-02:35:42] [V] [TRT] Tactic: 0 Time: 2.17702
[04/18/2022-02:35:42] [V] [TRT] Fastest Tactic: 1002 Time: 2.03507
[04/18/2022-02:35:42] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:35:42] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(491520,1024,32,1) ***************
[04/18/2022-02:35:42] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:42] [V] [TRT] Tactic: 1002 Time: 3.13626
[04/18/2022-02:35:42] [V] [TRT] Tactic: 0 Time: 9.44474
[04/18/2022-02:35:42] [V] [TRT] Fastest Tactic: 1002 Time: 3.13626
[04/18/2022-02:35:42] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(491520,1,15360,480) ***************
[04/18/2022-02:35:42] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(122880,1:4,3840,120) ***************
[04/18/2022-02:35:42] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(15360,1024:32,32,1) ***************
[04/18/2022-02:35:42] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:42] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(491520,1,15360,480) ***************
[04/18/2022-02:35:42] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(122880,1:4,3840,120) ***************
[04/18/2022-02:35:42] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(491520,1024,32,1) ***************
[04/18/2022-02:35:42] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(122880,1:4,3840,120) ***************
[04/18/2022-02:35:42] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(491520,1024,32,1) ***************
[04/18/2022-02:35:42] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(491520,1,15360,480) ***************
[04/18/2022-02:35:42] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(491520,1024,32,1) ***************
[04/18/2022-02:35:42] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(491520,1,15360,480) ***************
[04/18/2022-02:35:42] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(122880,1:4,3840,120) ***************
[04/18/2022-02:35:42] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(491520,1024,32,1) ***************
[04/18/2022-02:35:42] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(491520,1,15360,480) ***************
[04/18/2022-02:35:42] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(122880,1:4,3840,120) ***************
[04/18/2022-02:35:42] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:42] [V] [TRT] ***************
Autotuning Reformat: Float(491520,1024,32,1) -> Float(491520,1,15360,480) *************** [04/18/2022-02:35:42] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(122880,1:4,3840,120) *************** [04/18/2022-02:35:42] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(15360,1024:32,32,1) *************** [04/18/2022-02:35:42] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:35:42] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(491520,1024,32,1) *************** [04/18/2022-02:35:42] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(122880,1:4,3840,120) *************** [04/18/2022-02:35:42] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(15360,1024:32,32,1) *************** [04/18/2022-02:35:42] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:35:42] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(491520,1024,32,1) *************** [04/18/2022-02:35:42] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(491520,1,15360,480) *************** [04/18/2022-02:35:42] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(15360,1024:32,32,1) *************** [04/18/2022-02:35:42] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:35:42] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(491520,1024,32,1) *************** [04/18/2022-02:35:42] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(491520,1,15360,480) *************** [04/18/2022-02:35:42] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> 
Float(122880,1:4,3840,120) *************** [04/18/2022-02:35:42] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:35:42] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(491520,1024,32,1) *************** [04/18/2022-02:35:42] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(491520,1,15360,480) *************** [04/18/2022-02:35:42] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(122880,1:4,3840,120) *************** [04/18/2022-02:35:42] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(15360,1024:32,32,1) *************** [04/18/2022-02:35:42] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:42] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(491520,1,15360,480) *************** [04/18/2022-02:35:42] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:42] [V] [TRT] Tactic: 1002 Time: 2.13581 [04/18/2022-02:35:42] [V] [TRT] Tactic: 0 Time: 2.14656 [04/18/2022-02:35:42] [V] [TRT] Fastest Tactic: 1002 Time: 2.13581 [04/18/2022-02:35:42] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(122880,1:4,3840,120) *************** [04/18/2022-02:35:42] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:42] [V] [TRT] Tactic: 1002 Time: 2.11059 [04/18/2022-02:35:42] [V] [TRT] Tactic: 0 Time: 2.52659 [04/18/2022-02:35:42] [V] [TRT] Fastest Tactic: 1002 Time: 2.11059 [04/18/2022-02:35:42] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(15360,1024:32,32,1) *************** [04/18/2022-02:35:42] [V] [TRT] 
--------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:42] [V] [TRT] Tactic: 1002 Time: 3.19002 [04/18/2022-02:35:42] [V] [TRT] Tactic: 0 Time: 2.11994 [04/18/2022-02:35:42] [V] [TRT] Fastest Tactic: 0 Time: 2.11994 [04/18/2022-02:35:42] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:35:42] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:43] [V] [TRT] Tactic: 1002 Time: 2.37056 [04/18/2022-02:35:43] [V] [TRT] Tactic: 0 Time: 2.22554 [04/18/2022-02:35:43] [V] [TRT] Fastest Tactic: 0 Time: 2.22554 [04/18/2022-02:35:43] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(491520,1024,32,1) *************** [04/18/2022-02:35:43] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:43] [V] [TRT] Tactic: 1002 Time: 2.38067 [04/18/2022-02:35:43] [V] [TRT] Tactic: 0 Time: 2.29837 [04/18/2022-02:35:43] [V] [TRT] Fastest Tactic: 0 Time: 2.29837 [04/18/2022-02:35:43] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(122880,1:4,3840,120) *************** [04/18/2022-02:35:43] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:43] [V] [TRT] Tactic: 1002 Time: 2.11328 [04/18/2022-02:35:43] [V] [TRT] Tactic: 0 Time: 2.14758 [04/18/2022-02:35:43] [V] [TRT] Fastest Tactic: 1002 Time: 2.11328 [04/18/2022-02:35:43] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(15360,1024:32,32,1) 
*************** [04/18/2022-02:35:43] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:43] [V] [TRT] Tactic: 1002 Time: 3.21037 [04/18/2022-02:35:43] [V] [TRT] Tactic: 0 Time: 3.17248 [04/18/2022-02:35:43] [V] [TRT] Fastest Tactic: 0 Time: 3.17248 [04/18/2022-02:35:43] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:35:43] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(491520,1024,32,1) *************** [04/18/2022-02:35:43] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:43] [V] [TRT] Tactic: 1002 Time: 2.07974 [04/18/2022-02:35:43] [V] [TRT] Tactic: 0 Time: 2.19546 [04/18/2022-02:35:43] [V] [TRT] Fastest Tactic: 1002 Time: 2.07974 [04/18/2022-02:35:43] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(491520,1,15360,480) *************** [04/18/2022-02:35:43] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:43] [V] [TRT] Tactic: 1002 Time: 2.79194 [04/18/2022-02:35:43] [V] [TRT] Tactic: 0 Time: 2.15603 [04/18/2022-02:35:43] [V] [TRT] Fastest Tactic: 0 Time: 2.15603 [04/18/2022-02:35:43] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(15360,1024:32,32,1) *************** [04/18/2022-02:35:43] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:43] [V] [TRT] Tactic: 1002 Time: 2.68685 [04/18/2022-02:35:43] [V] [TRT] Tactic: 0 Time: 
2.82048 [04/18/2022-02:35:43] [V] [TRT] Fastest Tactic: 1002 Time: 2.68685 [04/18/2022-02:35:43] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:35:43] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(491520,1024,32,1) *************** [04/18/2022-02:35:43] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:43] [V] [TRT] Tactic: 1002 Time: 2.05568 [04/18/2022-02:35:43] [V] [TRT] Tactic: 0 Time: 2.22682 [04/18/2022-02:35:43] [V] [TRT] Fastest Tactic: 1002 Time: 2.05568 [04/18/2022-02:35:43] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(491520,1,15360,480) *************** [04/18/2022-02:35:43] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:43] [V] [TRT] Tactic: 1002 Time: 2.07949 [04/18/2022-02:35:44] [V] [TRT] Tactic: 0 Time: 2.17882 [04/18/2022-02:35:44] [V] [TRT] Fastest Tactic: 1002 Time: 2.07949 [04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(122880,1:4,3840,120) *************** [04/18/2022-02:35:44] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:44] [V] [TRT] Tactic: 1002 Time: 3.3335 [04/18/2022-02:35:44] [V] [TRT] Tactic: 0 Time: 2.18189 [04/18/2022-02:35:44] [V] [TRT] Fastest Tactic: 0 Time: 2.18189 [04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> 
Float(491520,1024,32,1) *************** [04/18/2022-02:35:44] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:44] [V] [TRT] Tactic: 1002 Time: 3.19744 [04/18/2022-02:35:44] [V] [TRT] Tactic: 0 Time: 9.45715 [04/18/2022-02:35:44] [V] [TRT] Fastest Tactic: 1002 Time: 3.19744 [04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(491520,1,15360,480) *************** [04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(122880,1:4,3840,120) *************** [04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(15360,1024:32,32,1) *************** [04/18/2022-02:35:44] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(491520,1024,32,1) *************** [04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(491520,1024,32,1) *************** [04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(491520,1024,32,1) *************** [04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(491520,1024,32,1) *************** [04/18/2022-02:35:44] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(480,1,1,1) -> Float(480,1,480,480) *************** [04/18/2022-02:35:44] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_squeeze/Mean:0) (Reformat) [04/18/2022-02:35:44] [V] [TRT] Tactic: 1002 Time: 0.026496 [04/18/2022-02:35:44] [V] [TRT] Tactic: 0 Time: 0.017664 [04/18/2022-02:35:44] [V] [TRT] Fastest Tactic: 
0 Time: 0.017664 [04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(480,1,1,1) -> Float(120,1:4,120,120) *************** [04/18/2022-02:35:44] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_squeeze/Mean:0) (Reformat) [04/18/2022-02:35:44] [V] [TRT] Tactic: 1002 Time: 0.044032 [04/18/2022-02:35:44] [V] [TRT] Tactic: 0 Time: 0.017664 [04/18/2022-02:35:44] [V] [TRT] Fastest Tactic: 0 Time: 0.017664 [04/18/2022-02:35:44] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(480,1,1,1) -> Float(480,1,480,480) *************** [04/18/2022-02:35:44] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_squeeze/Mean:0 -> ) (Reformat) [04/18/2022-02:35:44] [V] [TRT] Tactic: 1002 Time: 0.025344 [04/18/2022-02:35:44] [V] [TRT] Tactic: 0 Time: 0.017408 [04/18/2022-02:35:44] [V] [TRT] Fastest Tactic: 0 Time: 0.017408 [04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(480,1,1,1) -> Float(120,1:4,120,120) *************** [04/18/2022-02:35:44] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_squeeze/Mean:0 -> ) (Reformat) [04/18/2022-02:35:44] [V] [TRT] Tactic: 1002 Time: 0.044032 [04/18/2022-02:35:44] [V] [TRT] Tactic: 0 Time: 0.017536 [04/18/2022-02:35:44] [V] [TRT] Fastest Tactic: 0 Time: 0.017536 [04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(480,1,480,480) -> Float(480,1,1,1) *************** [04/18/2022-02:35:44] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_squeeze/Mean:0 -> ) (Reformat) [04/18/2022-02:35:44] [V] [TRT] Tactic: 1002 Time: 0.025472 [04/18/2022-02:35:44] [V] [TRT] Tactic: 0 Time: 
0.017664 [04/18/2022-02:35:44] [V] [TRT] Fastest Tactic: 0 Time: 0.017664 [04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(480,1,480,480) -> Float(120,1:4,120,120) *************** [04/18/2022-02:35:44] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_squeeze/Mean:0 -> ) (Reformat) [04/18/2022-02:35:44] [V] [TRT] Tactic: 1002 Time: 0.043648 [04/18/2022-02:35:44] [V] [TRT] Tactic: 0 Time: 0.01728 [04/18/2022-02:35:44] [V] [TRT] Fastest Tactic: 0 Time: 0.01728 [04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(120,1:4,120,120) -> Float(480,1,1,1) *************** [04/18/2022-02:35:44] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_squeeze/Mean:0 -> ) (Reformat) [04/18/2022-02:35:44] [V] [TRT] Tactic: 1002 Time: 0.054272 [04/18/2022-02:35:44] [V] [TRT] Tactic: 0 Time: 0.018176 [04/18/2022-02:35:44] [V] [TRT] Fastest Tactic: 0 Time: 0.018176 [04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(120,1:4,120,120) -> Float(480,1,480,480) *************** [04/18/2022-02:35:44] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_squeeze/Mean:0 -> ) (Reformat) [04/18/2022-02:35:44] [V] [TRT] Tactic: 1002 Time: 0.044672 [04/18/2022-02:35:44] [V] [TRT] Tactic: 0 Time: 0.01728 [04/18/2022-02:35:44] [V] [TRT] Fastest Tactic: 0 Time: 0.01728 [04/18/2022-02:35:44] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(20,1,1,1) -> Float(20,1,20,20) *************** [04/18/2022-02:35:44] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:35:44] [V] [TRT] Tactic: 
1002 Time: 0.044032 [04/18/2022-02:35:44] [V] [TRT] Tactic: 0 Time: 0.017152 [04/18/2022-02:35:44] [V] [TRT] Fastest Tactic: 0 Time: 0.017152 [04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(20,1,1,1) -> Float(5,1:4,5,5) *************** [04/18/2022-02:35:44] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:35:44] [V] [TRT] Tactic: 1002 Time: 0.044032 [04/18/2022-02:35:44] [V] [TRT] Tactic: 0 Time: 0.01728 [04/18/2022-02:35:44] [V] [TRT] Fastest Tactic: 0 Time: 0.01728 [04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(20,1,1,1) -> Float(1,1:32,1,1) *************** [04/18/2022-02:35:44] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:35:44] [V] [TRT] Tactic: 1002 Time: 0.044544 [04/18/2022-02:35:44] [V] [TRT] Tactic: 0 Time: 0.017536 [04/18/2022-02:35:44] [V] [TRT] Fastest Tactic: 0 Time: 0.017536 [04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(20,1,1,1) -> Float(1:4,2,2,2) *************** [04/18/2022-02:35:44] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:35:44] [V] [TRT] Tactic: 1002 Time: 0.030208 [04/18/2022-02:35:44] [V] [TRT] Tactic: 0 Time: 0.017536 [04/18/2022-02:35:44] [V] [TRT] Fastest Tactic: 0 Time: 0.017536 [04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(20,1,20,20) -> Float(20,1,1,1) *************** [04/18/2022-02:35:44] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:35:44] [V] [TRT] Tactic: 1002 Time: 
0.044416 [04/18/2022-02:35:44] [V] [TRT] Tactic: 0 Time: 0.017152 [04/18/2022-02:35:44] [V] [TRT] Fastest Tactic: 0 Time: 0.017152 [04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(20,1,20,20) -> Float(5,1:4,5,5) *************** [04/18/2022-02:35:44] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:35:44] [V] [TRT] Tactic: 1002 Time: 0.0448 [04/18/2022-02:35:44] [V] [TRT] Tactic: 0 Time: 0.017408 [04/18/2022-02:35:44] [V] [TRT] Fastest Tactic: 0 Time: 0.017408 [04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(20,1,20,20) -> Float(1,1:32,1,1) *************** [04/18/2022-02:35:44] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:35:44] [V] [TRT] Tactic: 1002 Time: 0.0448 [04/18/2022-02:35:44] [V] [TRT] Tactic: 0 Time: 0.017536 [04/18/2022-02:35:44] [V] [TRT] Fastest Tactic: 0 Time: 0.017536 [04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(20,1,20,20) -> Float(1:4,2,2,2) *************** [04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(5,1:4,5,5) -> Float(20,1,1,1) *************** [04/18/2022-02:35:44] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:35:44] [V] [TRT] Tactic: 1002 Time: 0.044928 [04/18/2022-02:35:44] [V] [TRT] Tactic: 0 Time: 0.017536 [04/18/2022-02:35:44] [V] [TRT] Fastest Tactic: 0 Time: 0.017536 [04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(5,1:4,5,5) -> Float(20,1,20,20) *************** [04/18/2022-02:35:44] [V] [TRT] --------------- Timing Runner: Optimizer 
Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:35:44] [V] [TRT] Tactic: 1002 Time: 0.044544 [04/18/2022-02:35:44] [V] [TRT] Tactic: 0 Time: 0.01728 [04/18/2022-02:35:44] [V] [TRT] Fastest Tactic: 0 Time: 0.01728 [04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(5,1:4,5,5) -> Float(1,1:32,1,1) *************** [04/18/2022-02:35:44] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:35:44] [V] [TRT] Tactic: 1002 Time: 0.046464 [04/18/2022-02:35:44] [V] [TRT] Tactic: 0 Time: 0.017536 [04/18/2022-02:35:44] [V] [TRT] Fastest Tactic: 0 Time: 0.017536 [04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(5,1:4,5,5) -> Float(1:4,2,2,2) *************** [04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(20,1,1,1) *************** [04/18/2022-02:35:44] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:35:44] [V] [TRT] Tactic: 1002 Time: 0.044288 [04/18/2022-02:35:44] [V] [TRT] Tactic: 0 Time: 0.017024 [04/18/2022-02:35:44] [V] [TRT] Fastest Tactic: 0 Time: 0.017024 [04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(20,1,20,20) *************** [04/18/2022-02:35:44] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:35:44] [V] [TRT] Tactic: 1002 Time: 0.044288 [04/18/2022-02:35:44] [V] [TRT] Tactic: 0 Time: 0.017408 [04/18/2022-02:35:44] [V] [TRT] Fastest Tactic: 0 Time: 0.017408 [04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: 
Float(1,1:32,1,1) -> Float(5,1:4,5,5) *************** [04/18/2022-02:35:44] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:35:44] [V] [TRT] Tactic: 1002 Time: 0.04608 [04/18/2022-02:35:44] [V] [TRT] Tactic: 0 Time: 0.018176 [04/18/2022-02:35:44] [V] [TRT] Fastest Tactic: 0 Time: 0.018176 [04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(1:4,2,2,2) *************** [04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(20,1,1,1) *************** [04/18/2022-02:35:44] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:35:44] [V] [TRT] Tactic: 1002 Time: 0.055424 [04/18/2022-02:35:44] [V] [TRT] Tactic: 0 Time: 0.017536 [04/18/2022-02:35:44] [V] [TRT] Fastest Tactic: 0 Time: 0.017536 [04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(20,1,20,20) *************** [04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(5,1:4,5,5) *************** [04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(1,1:32,1,1) *************** [04/18/2022-02:35:44] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(20,1,1,1) -> Float(20,1,20,20) *************** [04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(20,1,1,1) -> Float(5,1:4,5,5) *************** [04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(20,1,20,20) -> Float(20,1,1,1) *************** [04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(20,1,20,20) -> Float(5,1:4,5,5) *************** 
[04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(5,1:4,5,5) -> Float(20,1,1,1) *************** [04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(5,1:4,5,5) -> Float(20,1,20,20) *************** [04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(20,1,1,1) *************** [04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(20,1,20,20) *************** [04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(5,1:4,5,5) *************** [04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(20,1,1,1) *************** [04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(20,1,20,20) *************** [04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(5,1:4,5,5) *************** [04/18/2022-02:35:44] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(480,1,1,1) -> Float(480,1,480,480) *************** [04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(480,1,1,1) -> Float(120,1:4,120,120) *************** [04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(480,1,1,1) -> Float(15,1:32,1,1) *************** [04/18/2022-02:35:44] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_expand_conv2d/BiasAdd:0 -> ) (Reformat) [04/18/2022-02:35:44] [V] [TRT] Tactic: 1002 Time: 0.054528 [04/18/2022-02:35:44] [V] [TRT] Tactic: 0 Time: 0.017536 [04/18/2022-02:35:44] [V] [TRT] Fastest Tactic: 0 Time: 0.017536 [04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(480,1,1,1) -> Float(1:4,2,2,2) *************** [04/18/2022-02:35:44] [V] [TRT] --------------- Timing Runner: 
Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_expand_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:44] [V] [TRT] Tactic: 1002 Time: 0.029952
[04/18/2022-02:35:44] [V] [TRT] Tactic: 0 Time: 0.017792
[04/18/2022-02:35:44] [V] [TRT] Fastest Tactic: 0 Time: 0.017792
[04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(480,1,480,480) -> Float(480,1,1,1) ***************
[04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(480,1,480,480) -> Float(120,1:4,120,120) ***************
[04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(480,1,480,480) -> Float(15,1:32,1,1) ***************
[04/18/2022-02:35:44] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_expand_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:44] [V] [TRT] Tactic: 1002 Time: 0.054016
[04/18/2022-02:35:44] [V] [TRT] Tactic: 0 Time: 0.017664
[04/18/2022-02:35:44] [V] [TRT] Fastest Tactic: 0 Time: 0.017664
[04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(480,1,480,480) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(120,1:4,120,120) -> Float(480,1,1,1) ***************
[04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(120,1:4,120,120) -> Float(480,1,480,480) ***************
[04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(120,1:4,120,120) -> Float(15,1:32,1,1) ***************
[04/18/2022-02:35:44] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_expand_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:44] [V] [TRT] Tactic: 1002 Time: 0.05312
[04/18/2022-02:35:44] [V] [TRT] Tactic: 0 Time: 0.01792
[04/18/2022-02:35:44] [V] [TRT] Fastest Tactic: 0 Time: 0.01792
[04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(120,1:4,120,120) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:44] [V] [TRT] *************** Autotuning Reformat: Float(15,1:32,1,1) -> Float(480,1,1,1) ***************
[04/18/2022-02:35:44] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_expand_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:44] [V] [TRT] Tactic: 1002 Time: 0.053376
[04/18/2022-02:35:45] [V] [TRT] Tactic: 0 Time: 0.017536
[04/18/2022-02:35:45] [V] [TRT] Fastest Tactic: 0 Time: 0.017536
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15,1:32,1,1) -> Float(480,1,480,480) ***************
[04/18/2022-02:35:45] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_expand_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:45] [V] [TRT] Tactic: 1002 Time: 0.0448
[04/18/2022-02:35:45] [V] [TRT] Tactic: 0 Time: 0.017408
[04/18/2022-02:35:45] [V] [TRT] Fastest Tactic: 0 Time: 0.017408
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15,1:32,1,1) -> Float(120,1:4,120,120) ***************
[04/18/2022-02:35:45] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_expand_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:45] [V] [TRT] Tactic: 1002 Time: 0.043904
[04/18/2022-02:35:45] [V] [TRT] Tactic: 0 Time: 0.01856
[04/18/2022-02:35:45] [V] [TRT] Fastest Tactic: 0 Time: 0.01856
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15,1:32,1,1) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(480,1,1,1) ***************
[04/18/2022-02:35:45] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_3/block_1/se_expand_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:45] [V] [TRT] Tactic: 1002 Time: 0.054656
[04/18/2022-02:35:45] [V] [TRT] Tactic: 0 Time: 0.018176
[04/18/2022-02:35:45] [V] [TRT] Fastest Tactic: 0 Time: 0.018176
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(480,1,480,480) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(120,1:4,120,120) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(15,1:32,1,1) ***************
[04/18/2022-02:35:45] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(491520,1,15360,480) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(122880,1:4,3840,120) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(15360,1024:32,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(491520,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(122880,1:4,3840,120) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(15360,1024:32,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(491520,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(491520,1,15360,480) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(15360,1024:32,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(491520,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(491520,1,15360,480) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(122880,1:4,3840,120) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(491520,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(491520,1,15360,480) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(122880,1:4,3840,120) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(15360,1024:32,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(491520,1,15360,480) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(122880,1:4,3840,120) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(491520,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(122880,1:4,3840,120) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(491520,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(491520,1,15360,480) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(491520,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(491520,1,15360,480) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(122880,1:4,3840,120) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(491520,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(491520,1,15360,480) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(122880,1:4,3840,120) ***************
[04/18/2022-02:35:45] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(81920,1024,32,1) -> Float(81920,1,2560,80) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(81920,1024,32,1) -> Float(20480,1:4,640,20) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(81920,1,2560,80) -> Float(81920,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(81920,1,2560,80) -> Float(20480,1:4,640,20) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(20480,1:4,640,20) -> Float(81920,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(20480,1:4,640,20) -> Float(81920,1,2560,80) ***************
[04/18/2022-02:35:45] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(81920,1024,32,1) -> Float(81920,1,2560,80) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(81920,1024,32,1) -> Float(20480,1:4,640,20) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(81920,1,2560,80) -> Float(81920,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(81920,1,2560,80) -> Float(20480,1:4,640,20) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(20480,1:4,640,20) -> Float(81920,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(20480,1:4,640,20) -> Float(81920,1,2560,80) ***************
[04/18/2022-02:35:45] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(81920,1024,32,1) -> Float(81920,1,2560,80) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(81920,1024,32,1) -> Float(20480,1:4,640,20) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(81920,1,2560,80) -> Float(81920,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(81920,1,2560,80) -> Float(20480,1:4,640,20) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(20480,1:4,640,20) -> Float(81920,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(20480,1:4,640,20) -> Float(81920,1,2560,80) ***************
[04/18/2022-02:35:45] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(491520,1,15360,480) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(122880,1:4,3840,120) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(15360,1024:32,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(491520,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(122880,1:4,3840,120) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(15360,1024:32,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(491520,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(491520,1,15360,480) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(15360,1024:32,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(491520,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(491520,1,15360,480) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(122880,1:4,3840,120) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(491520,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(491520,1,15360,480) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(122880,1:4,3840,120) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(15360,1024:32,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(491520,1,15360,480) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(122880,1:4,3840,120) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(491520,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(122880,1:4,3840,120) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(491520,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(491520,1,15360,480) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(491520,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(491520,1,15360,480) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(122880,1:4,3840,120) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(491520,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(491520,1,15360,480) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(122880,1:4,3840,120) ***************
[04/18/2022-02:35:45] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(491520,1,15360,480) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(122880,1:4,3840,120) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(15360,1024:32,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(491520,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(122880,1:4,3840,120) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(15360,1024:32,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(491520,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(491520,1,15360,480) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(15360,1024:32,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(491520,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(491520,1,15360,480) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(122880,1:4,3840,120) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(491520,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(491520,1,15360,480) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(122880,1:4,3840,120) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(15360,1024:32,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(491520,1,15360,480) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(122880,1:4,3840,120) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(15360,1024:32,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(491520,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(122880,1:4,3840,120) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(15360,1024:32,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(491520,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(491520,1,15360,480) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(15360,1024:32,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(491520,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(491520,1,15360,480) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(122880,1:4,3840,120) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(491520,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(491520,1,15360,480) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(122880,1:4,3840,120) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(15360,1024:32,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(491520,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(491520,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(491520,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(491520,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(480,1,1,1) -> Float(480,1,480,480) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(480,1,1,1) -> Float(120,1:4,120,120) ***************
[04/18/2022-02:35:45] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(480,1,1,1) -> Float(480,1,480,480) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(480,1,1,1) -> Float(120,1:4,120,120) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(480,1,480,480) -> Float(480,1,1,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(480,1,480,480) -> Float(120,1:4,120,120) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(120,1:4,120,120) -> Float(480,1,1,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(120,1:4,120,120) -> Float(480,1,480,480) ***************
[04/18/2022-02:35:45] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(20,1,1,1) -> Float(20,1,20,20) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(20,1,1,1) -> Float(5,1:4,5,5) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(20,1,1,1) -> Float(1,1:32,1,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(20,1,1,1) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(20,1,20,20) -> Float(20,1,1,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(20,1,20,20) -> Float(5,1:4,5,5) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(20,1,20,20) -> Float(1,1:32,1,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(20,1,20,20) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(5,1:4,5,5) -> Float(20,1,1,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(5,1:4,5,5) -> Float(20,1,20,20) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(5,1:4,5,5) -> Float(1,1:32,1,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(5,1:4,5,5) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(20,1,1,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(20,1,20,20) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(5,1:4,5,5) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(20,1,1,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(20,1,20,20) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(5,1:4,5,5) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(1,1:32,1,1) ***************
[04/18/2022-02:35:45] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(20,1,1,1) -> Float(20,1,20,20) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(20,1,1,1) -> Float(5,1:4,5,5) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(20,1,20,20) -> Float(20,1,1,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(20,1,20,20) -> Float(5,1:4,5,5) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(5,1:4,5,5) -> Float(20,1,1,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(5,1:4,5,5) -> Float(20,1,20,20) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(20,1,1,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(20,1,20,20) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(5,1:4,5,5) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(20,1,1,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(20,1,20,20) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(5,1:4,5,5) ***************
[04/18/2022-02:35:45] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(480,1,1,1) -> Float(480,1,480,480) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(480,1,1,1) -> Float(120,1:4,120,120) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(480,1,1,1) -> Float(15,1:32,1,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(480,1,1,1) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(480,1,480,480) -> Float(480,1,1,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(480,1,480,480) -> Float(120,1:4,120,120) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(480,1,480,480) -> Float(15,1:32,1,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(480,1,480,480) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(120,1:4,120,120) -> Float(480,1,1,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(120,1:4,120,120) -> Float(480,1,480,480) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(120,1:4,120,120) -> Float(15,1:32,1,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(120,1:4,120,120) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15,1:32,1,1) -> Float(480,1,1,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15,1:32,1,1) -> Float(480,1,480,480) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15,1:32,1,1) -> Float(120,1:4,120,120) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15,1:32,1,1) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(480,1,1,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(480,1,480,480) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(120,1:4,120,120) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(15,1:32,1,1) ***************
[04/18/2022-02:35:45] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(491520,1,15360,480) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(122880,1:4,3840,120) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(15360,1024:32,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(491520,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(122880,1:4,3840,120) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(15360,1024:32,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(491520,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(491520,1,15360,480) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(15360,1024:32,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(491520,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(491520,1,15360,480) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(122880,1:4,3840,120) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(491520,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(491520,1,15360,480) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(122880,1:4,3840,120) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(15360,1024:32,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(491520,1,15360,480) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(122880,1:4,3840,120) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(491520,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(122880,1:4,3840,120) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(491520,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(491520,1,15360,480) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(491520,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(491520,1,15360,480) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(122880,1:4,3840,120) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(491520,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(491520,1,15360,480) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(122880,1:4,3840,120) ***************
[04/18/2022-02:35:45] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(81920,1024,32,1) -> Float(81920,1,2560,80) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(81920,1024,32,1) -> Float(20480,1:4,640,20) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(81920,1,2560,80) -> Float(81920,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(81920,1,2560,80) -> Float(20480,1:4,640,20) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(20480,1:4,640,20) -> Float(81920,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(20480,1:4,640,20) -> Float(81920,1,2560,80) ***************
[04/18/2022-02:35:45] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(81920,1024,32,1) -> Float(81920,1,2560,80) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(81920,1024,32,1) -> Float(20480,1:4,640,20) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(81920,1,2560,80) -> Float(81920,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(81920,1,2560,80) -> Float(20480,1:4,640,20) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(20480,1:4,640,20) -> Float(81920,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(20480,1:4,640,20) -> Float(81920,1,2560,80) ***************
[04/18/2022-02:35:45] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(491520,1,15360,480) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(122880,1:4,3840,120) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(15360,1024:32,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(491520,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(122880,1:4,3840,120) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(15360,1024:32,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(491520,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(491520,1,15360,480) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(15360,1024:32,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(491520,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(491520,1,15360,480) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(122880,1:4,3840,120) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(491520,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(491520,1,15360,480) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(122880,1:4,3840,120) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(15360,1024:32,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(491520,1,15360,480) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(122880,1:4,3840,120) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(491520,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(122880,1:4,3840,120) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(491520,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(491520,1,15360,480) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(491520,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(491520,1,15360,480) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(122880,1:4,3840,120) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(491520,1024,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(491520,1,15360,480) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(122880,1:4,3840,120) ***************
[04/18/2022-02:35:45] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(491520,1,15360,480) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(122880,1:4,3840,120) ***************
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(15360,1024:32,32,1) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(491520,1024,32,1) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(122880,1:4,3840,120) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(15360,1024:32,32,1) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(491520,1024,32,1) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(491520,1,15360,480) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(15360,1024:32,32,1) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(491520,1024,32,1) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(491520,1,15360,480) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(122880,1:4,3840,120) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning 
Reformat: Float(1:4,2048,64,2) -> Float(491520,1024,32,1) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(491520,1,15360,480) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(122880,1:4,3840,120) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(15360,1024:32,32,1) *************** [04/18/2022-02:35:45] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(491520,1,15360,480) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(122880,1:4,3840,120) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(15360,1024:32,32,1) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(491520,1024,32,1) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(122880,1:4,3840,120) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(15360,1024:32,32,1) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(491520,1024,32,1) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(491520,1,15360,480) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning 
Reformat: Float(122880,1:4,3840,120) -> Float(15360,1024:32,32,1) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(491520,1024,32,1) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(491520,1,15360,480) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(122880,1:4,3840,120) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(491520,1024,32,1) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(491520,1,15360,480) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(122880,1:4,3840,120) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(15360,1024:32,32,1) *************** [04/18/2022-02:35:45] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(491520,1024,32,1) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(491520,1024,32,1) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(491520,1024,32,1) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(491520,1024,32,1) *************** [04/18/2022-02:35:45] [V] [TRT] =============== Computing 
reformatting costs [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(480,1,1,1) -> Float(480,1,480,480) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(480,1,1,1) -> Float(120,1:4,120,120) *************** [04/18/2022-02:35:45] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(480,1,1,1) -> Float(480,1,480,480) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(480,1,1,1) -> Float(120,1:4,120,120) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(480,1,480,480) -> Float(480,1,1,1) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(480,1,480,480) -> Float(120,1:4,120,120) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(120,1:4,120,120) -> Float(480,1,1,1) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(120,1:4,120,120) -> Float(480,1,480,480) *************** [04/18/2022-02:35:45] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(20,1,1,1) -> Float(20,1,20,20) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(20,1,1,1) -> Float(5,1:4,5,5) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(20,1,1,1) -> Float(1,1:32,1,1) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(20,1,1,1) -> Float(1:4,2,2,2) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(20,1,20,20) -> Float(20,1,1,1) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(20,1,20,20) -> Float(5,1:4,5,5) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: 
Float(20,1,20,20) -> Float(1,1:32,1,1) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(20,1,20,20) -> Float(1:4,2,2,2) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(5,1:4,5,5) -> Float(20,1,1,1) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(5,1:4,5,5) -> Float(20,1,20,20) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(5,1:4,5,5) -> Float(1,1:32,1,1) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(5,1:4,5,5) -> Float(1:4,2,2,2) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(20,1,1,1) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(20,1,20,20) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(5,1:4,5,5) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(1:4,2,2,2) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(20,1,1,1) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(20,1,20,20) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(5,1:4,5,5) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(1,1:32,1,1) *************** [04/18/2022-02:35:45] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(20,1,1,1) -> Float(20,1,20,20) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(20,1,1,1) -> Float(5,1:4,5,5) *************** [04/18/2022-02:35:45] [V] [TRT] 
*************** Autotuning Reformat: Float(20,1,20,20) -> Float(20,1,1,1) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(20,1,20,20) -> Float(5,1:4,5,5) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(5,1:4,5,5) -> Float(20,1,1,1) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(5,1:4,5,5) -> Float(20,1,20,20) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(20,1,1,1) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(20,1,20,20) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(5,1:4,5,5) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(20,1,1,1) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(20,1,20,20) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(5,1:4,5,5) *************** [04/18/2022-02:35:45] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(480,1,1,1) -> Float(480,1,480,480) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(480,1,1,1) -> Float(120,1:4,120,120) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(480,1,1,1) -> Float(15,1:32,1,1) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(480,1,1,1) -> Float(1:4,2,2,2) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(480,1,480,480) -> Float(480,1,1,1) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(480,1,480,480) -> 
Float(120,1:4,120,120) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(480,1,480,480) -> Float(15,1:32,1,1) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(480,1,480,480) -> Float(1:4,2,2,2) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(120,1:4,120,120) -> Float(480,1,1,1) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(120,1:4,120,120) -> Float(480,1,480,480) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(120,1:4,120,120) -> Float(15,1:32,1,1) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(120,1:4,120,120) -> Float(1:4,2,2,2) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15,1:32,1,1) -> Float(480,1,1,1) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15,1:32,1,1) -> Float(480,1,480,480) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15,1:32,1,1) -> Float(120,1:4,120,120) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15,1:32,1,1) -> Float(1:4,2,2,2) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(480,1,1,1) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(480,1,480,480) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(120,1:4,120,120) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(15,1:32,1,1) *************** [04/18/2022-02:35:45] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> 
Float(491520,1,15360,480) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(122880,1:4,3840,120) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(15360,1024:32,32,1) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(491520,1024,32,1) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(122880,1:4,3840,120) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(15360,1024:32,32,1) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(491520,1024,32,1) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(491520,1,15360,480) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(15360,1024:32,32,1) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(491520,1024,32,1) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(491520,1,15360,480) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(122880,1:4,3840,120) *************** 
[04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(491520,1024,32,1) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(491520,1,15360,480) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(122880,1:4,3840,120) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(15360,1024:32,32,1) *************** [04/18/2022-02:35:45] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(491520,1,15360,480) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1024,32,1) -> Float(122880,1:4,3840,120) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(491520,1024,32,1) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(491520,1,15360,480) -> Float(122880,1:4,3840,120) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(491520,1024,32,1) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(122880,1:4,3840,120) -> Float(491520,1,15360,480) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(491520,1024,32,1) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(491520,1,15360,480) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(15360,1024:32,32,1) -> Float(122880,1:4,3840,120) 
*************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(491520,1024,32,1) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(491520,1,15360,480) *************** [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(122880,1:4,3840,120) *************** [04/18/2022-02:35:45] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(114688,1024,32,1) -> Float(114688,1,3584,112) *************** [04/18/2022-02:35:45] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/project_bn/FusedBatchNormV3:0) (Reformat) [04/18/2022-02:35:45] [V] [TRT] Tactic: 1002 Time: 0.482816 [04/18/2022-02:35:45] [V] [TRT] Tactic: 0 Time: 0.50304 [04/18/2022-02:35:45] [V] [TRT] Fastest Tactic: 1002 Time: 0.482816 [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(114688,1024,32,1) -> Float(28672,1:4,896,28) *************** [04/18/2022-02:35:45] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/project_bn/FusedBatchNormV3:0) (Reformat) [04/18/2022-02:35:45] [V] [TRT] Tactic: 1002 Time: 1.41363 [04/18/2022-02:35:45] [V] [TRT] Tactic: 0 Time: 0.494208 [04/18/2022-02:35:45] [V] [TRT] Fastest Tactic: 0 Time: 0.494208 [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(114688,1,3584,112) -> Float(114688,1024,32,1) *************** [04/18/2022-02:35:45] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/project_bn/FusedBatchNormV3:0) (Reformat) [04/18/2022-02:35:45] [V] [TRT] Tactic: 1002 Time: 1.29702 [04/18/2022-02:35:45] [V] [TRT] Tactic: 0 Time: 1.59053 [04/18/2022-02:35:45] 
[V] [TRT] Fastest Tactic: 1002 Time: 1.29702 [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(114688,1,3584,112) -> Float(28672,1:4,896,28) *************** [04/18/2022-02:35:45] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/project_bn/FusedBatchNormV3:0) (Reformat) [04/18/2022-02:35:45] [V] [TRT] Tactic: 1002 Time: 0.492672 [04/18/2022-02:35:45] [V] [TRT] Tactic: 0 Time: 0.485888 [04/18/2022-02:35:45] [V] [TRT] Fastest Tactic: 0 Time: 0.485888 [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(28672,1:4,896,28) -> Float(114688,1024,32,1) *************** [04/18/2022-02:35:45] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/project_bn/FusedBatchNormV3:0) (Reformat) [04/18/2022-02:35:45] [V] [TRT] Tactic: 1002 Time: 0.510208 [04/18/2022-02:35:45] [V] [TRT] Tactic: 0 Time: 1.1721 [04/18/2022-02:35:45] [V] [TRT] Fastest Tactic: 1002 Time: 0.510208 [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(28672,1:4,896,28) -> Float(114688,1,3584,112) *************** [04/18/2022-02:35:45] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/project_bn/FusedBatchNormV3:0) (Reformat) [04/18/2022-02:35:45] [V] [TRT] Tactic: 1002 Time: 0.47744 [04/18/2022-02:35:45] [V] [TRT] Tactic: 0 Time: 0.495104 [04/18/2022-02:35:45] [V] [TRT] Fastest Tactic: 1002 Time: 0.47744 [04/18/2022-02:35:45] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(114688,1024,32,1) -> Float(114688,1,3584,112) *************** [04/18/2022-02:35:45] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/project_bn/FusedBatchNormV3:0 -> ) 
(Reformat) [04/18/2022-02:35:45] [V] [TRT] Tactic: 1002 Time: 0.499968 [04/18/2022-02:35:45] [V] [TRT] Tactic: 0 Time: 1.68947 [04/18/2022-02:35:45] [V] [TRT] Fastest Tactic: 1002 Time: 0.499968 [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(114688,1024,32,1) -> Float(28672,1:4,896,28) *************** [04/18/2022-02:35:45] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/project_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:45] [V] [TRT] Tactic: 1002 Time: 0.490112 [04/18/2022-02:35:45] [V] [TRT] Tactic: 0 Time: 0.506368 [04/18/2022-02:35:45] [V] [TRT] Fastest Tactic: 1002 Time: 0.490112 [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(114688,1,3584,112) -> Float(114688,1024,32,1) *************** [04/18/2022-02:35:45] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/project_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:45] [V] [TRT] Tactic: 1002 Time: 0.505344 [04/18/2022-02:35:45] [V] [TRT] Tactic: 0 Time: 0.503552 [04/18/2022-02:35:45] [V] [TRT] Fastest Tactic: 0 Time: 0.503552 [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(114688,1,3584,112) -> Float(28672,1:4,896,28) *************** [04/18/2022-02:35:45] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/project_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:45] [V] [TRT] Tactic: 1002 Time: 0.493184 [04/18/2022-02:35:45] [V] [TRT] Tactic: 0 Time: 0.487296 [04/18/2022-02:35:45] [V] [TRT] Fastest Tactic: 0 Time: 0.487296 [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(28672,1:4,896,28) -> Float(114688,1024,32,1) *************** [04/18/2022-02:35:45] [V] [TRT] --------------- Timing Runner: Optimizer 
Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/project_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:45] [V] [TRT] Tactic: 1002 Time: 0.99264 [04/18/2022-02:35:45] [V] [TRT] Tactic: 0 Time: 0.500608 [04/18/2022-02:35:45] [V] [TRT] Fastest Tactic: 0 Time: 0.500608 [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(28672,1:4,896,28) -> Float(114688,1,3584,112) *************** [04/18/2022-02:35:45] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_0/project_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:45] [V] [TRT] Tactic: 1002 Time: 0.812032 [04/18/2022-02:35:45] [V] [TRT] Tactic: 0 Time: 0.487552 [04/18/2022-02:35:45] [V] [TRT] Fastest Tactic: 0 Time: 0.487552 [04/18/2022-02:35:45] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(688128,1024,32,1) -> Float(688128,1,21504,672) *************** [04/18/2022-02:35:45] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:45] [V] [TRT] Tactic: 1002 Time: 3.01811 [04/18/2022-02:35:45] [V] [TRT] Tactic: 0 Time: 3.04448 [04/18/2022-02:35:45] [V] [TRT] Fastest Tactic: 1002 Time: 3.01811 [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: Float(688128,1024,32,1) -> Float(172032,1:4,5376,168) *************** [04/18/2022-02:35:45] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:45] [V] [TRT] Tactic: 1002 Time: 3.57069 [04/18/2022-02:35:45] [V] [TRT] Tactic: 0 Time: 2.99725 [04/18/2022-02:35:45] [V] [TRT] Fastest Tactic: 0 Time: 2.99725 [04/18/2022-02:35:45] [V] [TRT] *************** Autotuning Reformat: 
Float(688128,1024,32,1) -> Float(21504,1024:32,32,1) ***************
[04/18/2022-02:35:45] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:45] [V] [TRT] Tactic: 1002 Time: 3.01594
[04/18/2022-02:35:46] [V] [TRT] Tactic: 0 Time: 2.97907
[04/18/2022-02:35:46] [V] [TRT] Fastest Tactic: 0 Time: 2.97907
[04/18/2022-02:35:46] [V] [TRT] *************** Autotuning Reformat: Float(688128,1024,32,1) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:35:46] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:46] [V] [TRT] Tactic: 1002 Time: 3.08595
[04/18/2022-02:35:46] [V] [TRT] Tactic: 0 Time: 3.64454
[04/18/2022-02:35:46] [V] [TRT] Fastest Tactic: 1002 Time: 3.08595
[04/18/2022-02:35:46] [V] [TRT] *************** Autotuning Reformat: Float(688128,1,21504,672) -> Float(688128,1024,32,1) ***************
[04/18/2022-02:35:46] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:46] [V] [TRT] Tactic: 1002 Time: 3.00621
[04/18/2022-02:35:46] [V] [TRT] Tactic: 0 Time: 3.11834
[04/18/2022-02:35:46] [V] [TRT] Fastest Tactic: 1002 Time: 3.00621
[04/18/2022-02:35:46] [V] [TRT] *************** Autotuning Reformat: Float(688128,1,21504,672) -> Float(172032,1:4,5376,168) ***************
[04/18/2022-02:35:46] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:46] [V] [TRT] Tactic: 1002 Time: 3.02682
[04/18/2022-02:35:46] [V] [TRT] Tactic: 0 Time: 3.75782
[04/18/2022-02:35:46] [V] [TRT] Fastest Tactic: 1002 Time: 3.02682
[04/18/2022-02:35:46] [V] [TRT] *************** Autotuning Reformat: Float(688128,1,21504,672) -> Float(21504,1024:32,32,1) ***************
[04/18/2022-02:35:46] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:46] [V] [TRT] Tactic: 1002 Time: 3.53446
[04/18/2022-02:35:46] [V] [TRT] Tactic: 0 Time: 3.99731
[04/18/2022-02:35:46] [V] [TRT] Fastest Tactic: 1002 Time: 3.53446
[04/18/2022-02:35:46] [V] [TRT] *************** Autotuning Reformat: Float(688128,1,21504,672) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:35:46] [V] [TRT] *************** Autotuning Reformat: Float(172032,1:4,5376,168) -> Float(688128,1024,32,1) ***************
[04/18/2022-02:35:46] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:46] [V] [TRT] Tactic: 1002 Time: 3.55187
[04/18/2022-02:35:46] [V] [TRT] Tactic: 0 Time: 3.17171
[04/18/2022-02:35:46] [V] [TRT] Fastest Tactic: 0 Time: 3.17171
[04/18/2022-02:35:46] [V] [TRT] *************** Autotuning Reformat: Float(172032,1:4,5376,168) -> Float(688128,1,21504,672) ***************
[04/18/2022-02:35:46] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:46] [V] [TRT] Tactic: 1002 Time: 3.10822
[04/18/2022-02:35:46] [V] [TRT] Tactic: 0 Time: 2.99187
[04/18/2022-02:35:46] [V] [TRT] Fastest Tactic: 0 Time: 2.99187
[04/18/2022-02:35:46] [V] [TRT] *************** Autotuning Reformat: Float(172032,1:4,5376,168) -> Float(21504,1024:32,32,1) ***************
[04/18/2022-02:35:46] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:47] [V] [TRT] Tactic: 1002 Time: 3.04026
[04/18/2022-02:35:47] [V] [TRT] Tactic: 0 Time: 3.99104
[04/18/2022-02:35:47] [V] [TRT] Fastest Tactic: 1002 Time: 3.04026
[04/18/2022-02:35:47] [V] [TRT] *************** Autotuning Reformat: Float(172032,1:4,5376,168) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:35:47] [V] [TRT] *************** Autotuning Reformat: Float(21504,1024:32,32,1) -> Float(688128,1024,32,1) ***************
[04/18/2022-02:35:47] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:47] [V] [TRT] Tactic: 1002 Time: 3.40122
[04/18/2022-02:35:47] [V] [TRT] Tactic: 0 Time: 3.11296
[04/18/2022-02:35:47] [V] [TRT] Fastest Tactic: 0 Time: 3.11296
[04/18/2022-02:35:47] [V] [TRT] *************** Autotuning Reformat: Float(21504,1024:32,32,1) -> Float(688128,1,21504,672) ***************
[04/18/2022-02:35:47] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:47] [V] [TRT] Tactic: 1002 Time: 2.9015
[04/18/2022-02:35:47] [V] [TRT] Tactic: 0 Time: 3.02925
[04/18/2022-02:35:47] [V] [TRT] Fastest Tactic: 1002 Time: 2.9015
[04/18/2022-02:35:47] [V] [TRT] *************** Autotuning Reformat: Float(21504,1024:32,32,1) -> Float(172032,1:4,5376,168) ***************
[04/18/2022-02:35:47] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:47] [V] [TRT] Tactic: 1002 Time: 4.21658
[04/18/2022-02:35:47] [V] [TRT] Tactic: 0 Time: 3.12742
[04/18/2022-02:35:47] [V] [TRT] Fastest Tactic: 0 Time: 3.12742
[04/18/2022-02:35:47] [V] [TRT] *************** Autotuning Reformat: Float(21504,1024:32,32,1) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:35:47] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(688128,1024,32,1) ***************
[04/18/2022-02:35:47] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:47] [V] [TRT] Tactic: 1002 Time: 4.40678
[04/18/2022-02:35:47] [V] [TRT] Tactic: 0 Time: 13.1523
[04/18/2022-02:35:47] [V] [TRT] Fastest Tactic: 1002 Time: 4.40678
[04/18/2022-02:35:47] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(688128,1,21504,672) ***************
[04/18/2022-02:35:47] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(172032,1:4,5376,168) ***************
[04/18/2022-02:35:47] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(21504,1024:32,32,1) ***************
[04/18/2022-02:35:47] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:47] [V] [TRT] *************** Autotuning Reformat: Float(688128,1024,32,1) -> Float(688128,1,21504,672) ***************
[04/18/2022-02:35:47] [V] [TRT] *************** Autotuning Reformat: Float(688128,1024,32,1) -> Float(172032,1:4,5376,168) ***************
[04/18/2022-02:35:47] [V] [TRT] *************** Autotuning Reformat: Float(688128,1,21504,672) -> Float(688128,1024,32,1) ***************
[04/18/2022-02:35:47] [V] [TRT] *************** Autotuning Reformat: Float(688128,1,21504,672) -> Float(172032,1:4,5376,168) ***************
[04/18/2022-02:35:47] [V] [TRT] *************** Autotuning Reformat: Float(172032,1:4,5376,168) -> Float(688128,1024,32,1) ***************
[04/18/2022-02:35:47] [V] [TRT] *************** Autotuning Reformat: Float(172032,1:4,5376,168) -> Float(688128,1,21504,672) ***************
[04/18/2022-02:35:47] [V] [TRT] *************** Autotuning Reformat: Float(21504,1024:32,32,1) -> Float(688128,1024,32,1) ***************
[04/18/2022-02:35:47] [V] [TRT] *************** Autotuning Reformat: Float(21504,1024:32,32,1) -> Float(688128,1,21504,672) ***************
[04/18/2022-02:35:47] [V] [TRT] *************** Autotuning Reformat: Float(21504,1024:32,32,1) -> Float(172032,1:4,5376,168) ***************
[04/18/2022-02:35:47] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(688128,1024,32,1) ***************
[04/18/2022-02:35:47] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(688128,1,21504,672) ***************
[04/18/2022-02:35:47] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(172032,1:4,5376,168) ***************
[04/18/2022-02:35:47] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:47] [V] [TRT] *************** Autotuning Reformat: Float(688128,1024,32,1) -> Float(688128,1,21504,672) ***************
[04/18/2022-02:35:47] [V] [TRT] *************** Autotuning Reformat: Float(688128,1024,32,1) -> Float(172032,1:4,5376,168) ***************
[04/18/2022-02:35:47] [V] [TRT] *************** Autotuning Reformat: Float(688128,1024,32,1) -> Float(21504,1024:32,32,1) ***************
[04/18/2022-02:35:47] [V] [TRT] *************** Autotuning Reformat: Float(688128,1024,32,1) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:35:47] [V] [TRT] *************** Autotuning Reformat: Float(688128,1,21504,672) -> Float(688128,1024,32,1) ***************
[04/18/2022-02:35:47] [V] [TRT] *************** Autotuning Reformat: Float(688128,1,21504,672) -> Float(172032,1:4,5376,168) ***************
[04/18/2022-02:35:47] [V] [TRT] *************** Autotuning Reformat: Float(688128,1,21504,672) -> Float(21504,1024:32,32,1) ***************
[04/18/2022-02:35:47] [V] [TRT] *************** Autotuning Reformat: Float(688128,1,21504,672) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:35:47] [V] [TRT] *************** Autotuning Reformat: Float(172032,1:4,5376,168) -> Float(688128,1024,32,1) ***************
[04/18/2022-02:35:47] [V] [TRT] *************** Autotuning Reformat: Float(172032,1:4,5376,168) -> Float(688128,1,21504,672) ***************
[04/18/2022-02:35:47] [V] [TRT] *************** Autotuning Reformat: Float(172032,1:4,5376,168) -> Float(21504,1024:32,32,1) ***************
[04/18/2022-02:35:47] [V] [TRT] *************** Autotuning Reformat: Float(172032,1:4,5376,168) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:35:47] [V] [TRT] *************** Autotuning Reformat: Float(21504,1024:32,32,1) -> Float(688128,1024,32,1) ***************
[04/18/2022-02:35:47] [V] [TRT] *************** Autotuning Reformat: Float(21504,1024:32,32,1) -> Float(688128,1,21504,672) ***************
[04/18/2022-02:35:47] [V] [TRT] *************** Autotuning Reformat: Float(21504,1024:32,32,1) -> Float(172032,1:4,5376,168) ***************
[04/18/2022-02:35:47] [V] [TRT] *************** Autotuning Reformat: Float(21504,1024:32,32,1) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:35:47] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(688128,1024,32,1) ***************
[04/18/2022-02:35:47] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(688128,1,21504,672) ***************
[04/18/2022-02:35:47] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(172032,1:4,5376,168) ***************
[04/18/2022-02:35:47] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(21504,1024:32,32,1) ***************
[04/18/2022-02:35:47] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:47] [V] [TRT] *************** Autotuning Reformat: Float(688128,1024,32,1) -> Float(688128,1,21504,672) ***************
[04/18/2022-02:35:47] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:47] [V] [TRT] Tactic: 1002 Time: 4.3639
[04/18/2022-02:35:48] [V] [TRT] Tactic: 0 Time: 2.9591
[04/18/2022-02:35:48] [V] [TRT] Fastest Tactic: 0 Time: 2.9591
[04/18/2022-02:35:48] [V] [TRT] *************** Autotuning Reformat: Float(688128,1024,32,1) -> Float(172032,1:4,5376,168) ***************
[04/18/2022-02:35:48] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:48] [V] [TRT] Tactic: 1002 Time: 2.94336
[04/18/2022-02:35:48] [V] [TRT] Tactic: 0 Time: 2.96896
[04/18/2022-02:35:48] [V] [TRT] Fastest Tactic: 1002 Time: 2.94336
[04/18/2022-02:35:48] [V] [TRT] *************** Autotuning Reformat: Float(688128,1024,32,1) -> Float(21504,1024:32,32,1) ***************
[04/18/2022-02:35:48] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:48] [V] [TRT] Tactic: 1002 Time: 2.91277
[04/18/2022-02:35:48] [V] [TRT] Tactic: 0 Time: 3.4921
[04/18/2022-02:35:48] [V] [TRT] Fastest Tactic: 1002 Time: 2.91277
[04/18/2022-02:35:48] [V] [TRT] *************** Autotuning Reformat: Float(688128,1024,32,1) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:35:48] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:48] [V] [TRT] Tactic: 1002 Time: 3.16352
[04/18/2022-02:35:48] [V] [TRT] Tactic: 0 Time: 3.28333
[04/18/2022-02:35:48] [V] [TRT] Fastest Tactic: 1002 Time: 3.16352
[04/18/2022-02:35:48] [V] [TRT] *************** Autotuning Reformat: Float(688128,1,21504,672) -> Float(688128,1024,32,1) ***************
[04/18/2022-02:35:48] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:48] [V] [TRT] Tactic: 1002 Time: 3.62598
[04/18/2022-02:35:48] [V] [TRT] Tactic: 0 Time: 3.16685
[04/18/2022-02:35:48] [V] [TRT] Fastest Tactic: 0 Time: 3.16685
[04/18/2022-02:35:48] [V] [TRT] *************** Autotuning Reformat: Float(688128,1,21504,672) -> Float(172032,1:4,5376,168) ***************
[04/18/2022-02:35:48] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:48] [V] [TRT] Tactic: 1002 Time: 3.05114
[04/18/2022-02:35:48] [V] [TRT] Tactic: 0 Time: 3.08198
[04/18/2022-02:35:48] [V] [TRT] Fastest Tactic: 1002 Time: 3.05114
[04/18/2022-02:35:48] [V] [TRT] *************** Autotuning Reformat: Float(688128,1,21504,672) -> Float(21504,1024:32,32,1) ***************
[04/18/2022-02:35:48] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:48] [V] [TRT] Tactic: 1002 Time: 2.91238
[04/18/2022-02:35:48] [V] [TRT] Tactic: 0 Time: 3.9241
[04/18/2022-02:35:48] [V] [TRT] Fastest Tactic: 1002 Time: 2.91238
[04/18/2022-02:35:48] [V] [TRT] *************** Autotuning Reformat: Float(688128,1,21504,672) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:35:48] [V] [TRT] *************** Autotuning Reformat: Float(172032,1:4,5376,168) -> Float(688128,1024,32,1) ***************
[04/18/2022-02:35:48] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:48] [V] [TRT] Tactic: 1002 Time: 4.10163
[04/18/2022-02:35:49] [V] [TRT] Tactic: 0 Time: 3.05805
[04/18/2022-02:35:49] [V] [TRT] Fastest Tactic: 0 Time: 3.05805
[04/18/2022-02:35:49] [V] [TRT] *************** Autotuning Reformat: Float(172032,1:4,5376,168) -> Float(688128,1,21504,672) ***************
[04/18/2022-02:35:49] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:49] [V] [TRT] Tactic: 1002 Time: 2.95347
[04/18/2022-02:35:49] [V] [TRT] Tactic: 0 Time: 3.0647
[04/18/2022-02:35:49] [V] [TRT] Fastest Tactic: 1002 Time: 2.95347
[04/18/2022-02:35:49] [V] [TRT] *************** Autotuning Reformat: Float(172032,1:4,5376,168) -> Float(21504,1024:32,32,1) ***************
[04/18/2022-02:35:49] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:49] [V] [TRT] Tactic: 1002 Time: 3.02336
[04/18/2022-02:35:49] [V] [TRT] Tactic: 0 Time: 4.31475
[04/18/2022-02:35:49] [V] [TRT] Fastest Tactic: 1002 Time: 3.02336
[04/18/2022-02:35:49] [V] [TRT] *************** Autotuning Reformat: Float(172032,1:4,5376,168) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:35:49] [V] [TRT] *************** Autotuning Reformat: Float(21504,1024:32,32,1) -> Float(688128,1024,32,1) ***************
[04/18/2022-02:35:49] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:49] [V] [TRT] Tactic: 1002 Time: 2.87194
[04/18/2022-02:35:49] [V] [TRT] Tactic: 0 Time: 3.06253
[04/18/2022-02:35:49] [V] [TRT] Fastest Tactic: 1002 Time: 2.87194
[04/18/2022-02:35:49] [V] [TRT] *************** Autotuning Reformat: Float(21504,1024:32,32,1) -> Float(688128,1,21504,672) ***************
[04/18/2022-02:35:49] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:49] [V] [TRT] Tactic: 1002 Time: 2.95654
[04/18/2022-02:35:49] [V] [TRT] Tactic: 0 Time: 3.00838
[04/18/2022-02:35:49] [V] [TRT] Fastest Tactic: 1002 Time: 2.95654
[04/18/2022-02:35:49] [V] [TRT] *************** Autotuning Reformat: Float(21504,1024:32,32,1) -> Float(172032,1:4,5376,168) ***************
[04/18/2022-02:35:49] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:49] [V] [TRT] Tactic: 1002 Time: 2.92941
[04/18/2022-02:35:49] [V] [TRT] Tactic: 0 Time: 3.01773
[04/18/2022-02:35:49] [V] [TRT] Fastest Tactic: 1002 Time: 2.92941
[04/18/2022-02:35:49] [V] [TRT] *************** Autotuning Reformat: Float(21504,1024:32,32,1) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:35:49] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(688128,1024,32,1) ***************
[04/18/2022-02:35:49] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:49] [V] [TRT] Tactic: 1002 Time: 5.04051
[04/18/2022-02:35:50] [V] [TRT] Tactic: 0 Time: 13.7146
[04/18/2022-02:35:50] [V] [TRT] Fastest Tactic: 1002 Time: 5.04051
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(688128,1,21504,672) ***************
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(172032,1:4,5376,168) ***************
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(21504,1024:32,32,1) ***************
[04/18/2022-02:35:50] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(688128,1,21504,672) -> Float(688128,1024,32,1) ***************
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(172032,1:4,5376,168) -> Float(688128,1024,32,1) ***************
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(21504,1024:32,32,1) -> Float(688128,1024,32,1) ***************
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(688128,1024,32,1) ***************
[04/18/2022-02:35:50] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(672,1,1,1) -> Float(672,1,672,672) ***************
[04/18/2022-02:35:50] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_squeeze/Mean:0) (Reformat)
[04/18/2022-02:35:50] [V] [TRT] Tactic: 1002 Time: 0.025216
[04/18/2022-02:35:50] [V] [TRT] Tactic: 0 Time: 0.018048
[04/18/2022-02:35:50] [V] [TRT] Fastest Tactic: 0 Time: 0.018048
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(672,1,1,1) -> Float(168,1:4,168,168) ***************
[04/18/2022-02:35:50] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_squeeze/Mean:0) (Reformat)
[04/18/2022-02:35:50] [V] [TRT] Tactic: 1002 Time: 0.04544
[04/18/2022-02:35:50] [V] [TRT] Tactic: 0 Time: 0.01792
[04/18/2022-02:35:50] [V] [TRT] Fastest Tactic: 0 Time: 0.01792
[04/18/2022-02:35:50] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(672,1,1,1) -> Float(672,1,672,672) ***************
[04/18/2022-02:35:50] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_squeeze/Mean:0 -> ) (Reformat)
[04/18/2022-02:35:50] [V] [TRT] Tactic: 1002 Time: 0.0256
[04/18/2022-02:35:50] [V] [TRT] Tactic: 0 Time: 0.017664
[04/18/2022-02:35:50] [V] [TRT] Fastest Tactic: 0 Time: 0.017664
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(672,1,1,1) -> Float(168,1:4,168,168) ***************
[04/18/2022-02:35:50] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_squeeze/Mean:0 -> ) (Reformat)
[04/18/2022-02:35:50] [V] [TRT] Tactic: 1002 Time: 0.044416
[04/18/2022-02:35:50] [V] [TRT] Tactic: 0 Time: 0.018176
[04/18/2022-02:35:50] [V] [TRT] Fastest Tactic: 0 Time: 0.018176
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(672,1,672,672) -> Float(672,1,1,1) ***************
[04/18/2022-02:35:50] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_squeeze/Mean:0 -> ) (Reformat)
[04/18/2022-02:35:50] [V] [TRT] Tactic: 1002 Time: 0.025088
[04/18/2022-02:35:50] [V] [TRT] Tactic: 0 Time: 0.017792
[04/18/2022-02:35:50] [V] [TRT] Fastest Tactic: 0 Time: 0.017792
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(672,1,672,672) -> Float(168,1:4,168,168) ***************
[04/18/2022-02:35:50] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_squeeze/Mean:0 -> ) (Reformat)
[04/18/2022-02:35:50] [V] [TRT] Tactic: 1002 Time: 0.044032
[04/18/2022-02:35:50] [V] [TRT] Tactic: 0 Time: 0.018048
[04/18/2022-02:35:50] [V] [TRT] Fastest Tactic: 0 Time: 0.018048
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(168,1:4,168,168) -> Float(672,1,1,1) ***************
[04/18/2022-02:35:50] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_squeeze/Mean:0 -> ) (Reformat)
[04/18/2022-02:35:50] [V] [TRT] Tactic: 1002 Time: 0.064128
[04/18/2022-02:35:50] [V] [TRT] Tactic: 0 Time: 0.017792
[04/18/2022-02:35:50] [V] [TRT] Fastest Tactic: 0 Time: 0.017792
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(168,1:4,168,168) -> Float(672,1,672,672) ***************
[04/18/2022-02:35:50] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_squeeze/Mean:0 -> ) (Reformat)
[04/18/2022-02:35:50] [V] [TRT] Tactic: 1002 Time: 0.043904
[04/18/2022-02:35:50] [V] [TRT] Tactic: 0 Time: 0.018048
[04/18/2022-02:35:50] [V] [TRT] Fastest Tactic: 0 Time: 0.018048
[04/18/2022-02:35:50] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(28,1,1,1) -> Float(28,1,28,28) ***************
[04/18/2022-02:35:50] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:50] [V] [TRT] Tactic: 1002 Time: 0.043904
[04/18/2022-02:35:50] [V] [TRT] Tactic: 0 Time: 0.017408
[04/18/2022-02:35:50] [V] [TRT] Fastest Tactic: 0 Time: 0.017408
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(28,1,1,1) -> Float(7,1:4,7,7) ***************
[04/18/2022-02:35:50] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:50] [V] [TRT] Tactic: 1002 Time: 0.045184
[04/18/2022-02:35:50] [V] [TRT] Tactic: 0 Time: 0.017664
[04/18/2022-02:35:50] [V] [TRT] Fastest Tactic: 0 Time: 0.017664
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(28,1,1,1) -> Float(1,1:32,1,1) ***************
[04/18/2022-02:35:50] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:50] [V] [TRT] Tactic: 1002 Time: 0.045952
[04/18/2022-02:35:50] [V] [TRT] Tactic: 0 Time: 0.017664
[04/18/2022-02:35:50] [V] [TRT] Fastest Tactic: 0 Time: 0.017664
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(28,1,1,1) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:50] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:50] [V] [TRT] Tactic: 1002 Time: 0.030336
[04/18/2022-02:35:50] [V] [TRT] Tactic: 0 Time: 0.017536
[04/18/2022-02:35:50] [V] [TRT] Fastest Tactic: 0 Time: 0.017536
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(28,1,28,28) -> Float(28,1,1,1) ***************
[04/18/2022-02:35:50] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:50] [V] [TRT] Tactic: 1002 Time: 0.044288
[04/18/2022-02:35:50] [V] [TRT] Tactic: 0 Time: 0.01728
[04/18/2022-02:35:50] [V] [TRT] Fastest Tactic: 0 Time: 0.01728
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(28,1,28,28) -> Float(7,1:4,7,7) ***************
[04/18/2022-02:35:50] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:50] [V] [TRT] Tactic: 1002 Time: 0.045696
[04/18/2022-02:35:50] [V] [TRT] Tactic: 0 Time: 0.017536
[04/18/2022-02:35:50] [V] [TRT] Fastest Tactic: 0 Time: 0.017536
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(28,1,28,28) -> Float(1,1:32,1,1) ***************
[04/18/2022-02:35:50] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:50] [V] [TRT] Tactic: 1002 Time: 0.04416
[04/18/2022-02:35:50] [V] [TRT] Tactic: 0 Time: 0.017664
[04/18/2022-02:35:50] [V] [TRT] Fastest Tactic: 0 Time: 0.017664
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(28,1,28,28) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(7,1:4,7,7) -> Float(28,1,1,1) ***************
[04/18/2022-02:35:50] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:50] [V] [TRT] Tactic: 1002 Time: 0.046592
[04/18/2022-02:35:50] [V] [TRT] Tactic: 0 Time: 0.01728
[04/18/2022-02:35:50] [V] [TRT] Fastest Tactic: 0 Time: 0.01728
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(7,1:4,7,7) -> Float(28,1,28,28) ***************
[04/18/2022-02:35:50] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:50] [V] [TRT] Tactic: 1002 Time: 0.044416
[04/18/2022-02:35:50] [V] [TRT] Tactic: 0 Time: 0.017536
[04/18/2022-02:35:50] [V] [TRT] Fastest Tactic: 0 Time: 0.017536
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(7,1:4,7,7) -> Float(1,1:32,1,1) ***************
[04/18/2022-02:35:50] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:50] [V] [TRT] Tactic: 1002 Time: 0.046336
[04/18/2022-02:35:50] [V] [TRT] Tactic: 0 Time: 0.017536
[04/18/2022-02:35:50] [V] [TRT] Fastest Tactic: 0 Time: 0.017536
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(7,1:4,7,7) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(28,1,1,1) ***************
[04/18/2022-02:35:50] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:50] [V] [TRT] Tactic: 1002 Time: 0.044416
[04/18/2022-02:35:50] [V] [TRT] Tactic: 0 Time: 0.017408
[04/18/2022-02:35:50] [V] [TRT] Fastest Tactic: 0 Time: 0.017408
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(28,1,28,28) ***************
[04/18/2022-02:35:50] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:50] [V] [TRT] Tactic: 1002 Time: 0.044416
[04/18/2022-02:35:50] [V] [TRT] Tactic: 0 Time: 0.017408
[04/18/2022-02:35:50] [V] [TRT] Fastest Tactic: 0 Time: 0.017408
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(7,1:4,7,7) ***************
[04/18/2022-02:35:50] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:50] [V] [TRT] Tactic: 1002 Time: 0.044416
[04/18/2022-02:35:50] [V] [TRT] Tactic: 0 Time: 0.018176
[04/18/2022-02:35:50] [V] [TRT] Fastest Tactic: 0 Time: 0.018176
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(28,1,1,1) ***************
[04/18/2022-02:35:50] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:50] [V] [TRT] Tactic: 1002 Time: 0.055936
[04/18/2022-02:35:50] [V] [TRT] Tactic: 0 Time: 0.017792
[04/18/2022-02:35:50] [V] [TRT] Fastest Tactic: 0 Time: 0.017792
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(28,1,28,28) ***************
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(7,1:4,7,7) ***************
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(1,1:32,1,1) ***************
[04/18/2022-02:35:50] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(28,1,1,1) -> Float(28,1,28,28) ***************
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(28,1,1,1) -> Float(7,1:4,7,7) ***************
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(28,1,28,28) -> Float(28,1,1,1) ***************
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(28,1,28,28) -> Float(7,1:4,7,7) ***************
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(7,1:4,7,7) -> Float(28,1,1,1) ***************
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(7,1:4,7,7) -> Float(28,1,28,28) ***************
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(28,1,1,1) ***************
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(28,1,28,28) ***************
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(7,1:4,7,7) ***************
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(28,1,1,1) ***************
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(28,1,28,28) ***************
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(7,1:4,7,7) ***************
[04/18/2022-02:35:50] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(672,1,1,1) -> Float(672,1,672,672) ***************
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(672,1,1,1) -> Float(168,1:4,168,168) ***************
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(672,1,1,1) -> Float(21,1:32,1,1) ***************
[04/18/2022-02:35:50] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_expand_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:50] [V] [TRT] Tactic: 1002 Time: 0.063232
[04/18/2022-02:35:50] [V] [TRT] Tactic: 0 Time: 0.01792
[04/18/2022-02:35:50] [V] [TRT] Fastest Tactic: 0 Time: 0.01792
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(672,1,1,1) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:50] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_expand_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:50] [V] [TRT] Tactic: 1002 Time: 0.030208
[04/18/2022-02:35:50] [V] [TRT] Tactic: 0 Time: 0.018176
[04/18/2022-02:35:50] [V] [TRT] Fastest Tactic: 0 Time: 0.018176
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(672,1,672,672) -> Float(672,1,1,1) ***************
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(672,1,672,672) -> Float(168,1:4,168,168) ***************
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(672,1,672,672) -> Float(21,1:32,1,1) ***************
[04/18/2022-02:35:50] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_expand_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:50] [V] [TRT] Tactic: 1002 Time: 0.062976
[04/18/2022-02:35:50] [V] [TRT] Tactic: 0 Time: 0.018176
[04/18/2022-02:35:50] [V] [TRT] Fastest Tactic: 0 Time: 0.018176
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(672,1,672,672) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(168,1:4,168,168) -> Float(672,1,1,1) ***************
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(168,1:4,168,168) -> Float(672,1,672,672) ***************
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(168,1:4,168,168) -> Float(21,1:32,1,1) ***************
[04/18/2022-02:35:50] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_expand_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:50] [V] [TRT] Tactic: 1002 Time: 0.063488
[04/18/2022-02:35:50] [V] [TRT] Tactic: 0 Time: 0.01792
[04/18/2022-02:35:50] [V] [TRT] Fastest Tactic: 0 Time: 0.01792
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(168,1:4,168,168) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(21,1:32,1,1) -> Float(672,1,1,1) ***************
[04/18/2022-02:35:50] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_expand_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:50] [V] [TRT] Tactic: 1002 Time: 0.06528
[04/18/2022-02:35:50] [V] [TRT] Tactic: 0 Time: 0.01792
[04/18/2022-02:35:50] [V] [TRT] Fastest Tactic: 0 Time: 0.01792
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(21,1:32,1,1) -> Float(672,1,672,672) ***************
[04/18/2022-02:35:50] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_expand_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:50] [V] [TRT] Tactic: 1002 Time: 0.04416
[04/18/2022-02:35:50] [V] [TRT] Tactic: 0 Time: 0.018304
[04/18/2022-02:35:50] [V] [TRT] Fastest Tactic: 0 Time: 0.018304
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(21,1:32,1,1) -> Float(168,1:4,168,168) ***************
[04/18/2022-02:35:50] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_expand_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:50] [V] [TRT] Tactic: 1002 Time: 0.043776
[04/18/2022-02:35:50] [V] [TRT] Tactic: 0 Time: 0.018432
[04/18/2022-02:35:50] [V] [TRT] Fastest Tactic: 0 Time: 0.018432
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(21,1:32,1,1) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(672,1,1,1) ***************
[04/18/2022-02:35:50] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_4/block_1/se_expand_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:50] [V] [TRT] Tactic: 1002 Time: 0.056448
[04/18/2022-02:35:50] [V] [TRT] Tactic: 0 Time: 0.0192
[04/18/2022-02:35:50] [V] [TRT] Fastest Tactic: 0 Time: 0.0192
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(672,1,672,672) ***************
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(168,1:4,168,168) ***************
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(21,1:32,1,1) ***************
[04/18/2022-02:35:50] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(688128,1024,32,1) -> Float(688128,1,21504,672) ***************
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(688128,1024,32,1) -> Float(172032,1:4,5376,168) ***************
[04/18/2022-02:35:50] [V] [TRT] ***************
Autotuning Reformat: Float(688128,1024,32,1) -> Float(21504,1024:32,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(688128,1024,32,1) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(688128,1,21504,672) -> Float(688128,1024,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(688128,1,21504,672) -> Float(172032,1:4,5376,168) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(688128,1,21504,672) -> Float(21504,1024:32,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(688128,1,21504,672) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(172032,1:4,5376,168) -> Float(688128,1024,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(172032,1:4,5376,168) -> Float(688128,1,21504,672) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(172032,1:4,5376,168) -> Float(21504,1024:32,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(172032,1:4,5376,168) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(21504,1024:32,32,1) -> Float(688128,1024,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(21504,1024:32,32,1) -> Float(688128,1,21504,672) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(21504,1024:32,32,1) -> Float(172032,1:4,5376,168) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(21504,1024:32,32,1) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> 
Float(688128,1024,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(688128,1,21504,672) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(172032,1:4,5376,168) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(21504,1024:32,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(688128,1024,32,1) -> Float(688128,1,21504,672) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(688128,1024,32,1) -> Float(172032,1:4,5376,168) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(688128,1,21504,672) -> Float(688128,1024,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(688128,1,21504,672) -> Float(172032,1:4,5376,168) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(172032,1:4,5376,168) -> Float(688128,1024,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(172032,1:4,5376,168) -> Float(688128,1,21504,672) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(21504,1024:32,32,1) -> Float(688128,1024,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(21504,1024:32,32,1) -> Float(688128,1,21504,672) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(21504,1024:32,32,1) -> Float(172032,1:4,5376,168) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(688128,1024,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: 
Float(1:4,2048,64,2) -> Float(688128,1,21504,672) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(172032,1:4,5376,168) *************** [04/18/2022-02:35:50] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(114688,1024,32,1) -> Float(114688,1,3584,112) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(114688,1024,32,1) -> Float(28672,1:4,896,28) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(114688,1,3584,112) -> Float(114688,1024,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(114688,1,3584,112) -> Float(28672,1:4,896,28) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(28672,1:4,896,28) -> Float(114688,1024,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(28672,1:4,896,28) -> Float(114688,1,3584,112) *************** [04/18/2022-02:35:50] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(114688,1024,32,1) -> Float(114688,1,3584,112) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(114688,1024,32,1) -> Float(28672,1:4,896,28) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(114688,1,3584,112) -> Float(114688,1024,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(114688,1,3584,112) -> Float(28672,1:4,896,28) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(28672,1:4,896,28) -> Float(114688,1024,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(28672,1:4,896,28) -> Float(114688,1,3584,112) *************** 
[04/18/2022-02:35:50] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(114688,1024,32,1) -> Float(114688,1,3584,112) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(114688,1024,32,1) -> Float(28672,1:4,896,28) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(114688,1,3584,112) -> Float(114688,1024,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(114688,1,3584,112) -> Float(28672,1:4,896,28) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(28672,1:4,896,28) -> Float(114688,1024,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(28672,1:4,896,28) -> Float(114688,1,3584,112) *************** [04/18/2022-02:35:50] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(688128,1024,32,1) -> Float(688128,1,21504,672) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(688128,1024,32,1) -> Float(172032,1:4,5376,168) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(688128,1024,32,1) -> Float(21504,1024:32,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(688128,1024,32,1) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(688128,1,21504,672) -> Float(688128,1024,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(688128,1,21504,672) -> Float(172032,1:4,5376,168) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(688128,1,21504,672) -> Float(21504,1024:32,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning 
Reformat: Float(688128,1,21504,672) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(172032,1:4,5376,168) -> Float(688128,1024,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(172032,1:4,5376,168) -> Float(688128,1,21504,672) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(172032,1:4,5376,168) -> Float(21504,1024:32,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(172032,1:4,5376,168) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(21504,1024:32,32,1) -> Float(688128,1024,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(21504,1024:32,32,1) -> Float(688128,1,21504,672) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(21504,1024:32,32,1) -> Float(172032,1:4,5376,168) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(21504,1024:32,32,1) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(688128,1024,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(688128,1,21504,672) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(172032,1:4,5376,168) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(21504,1024:32,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(688128,1024,32,1) -> Float(688128,1,21504,672) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning 
Reformat: Float(688128,1024,32,1) -> Float(172032,1:4,5376,168) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(688128,1,21504,672) -> Float(688128,1024,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(688128,1,21504,672) -> Float(172032,1:4,5376,168) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(172032,1:4,5376,168) -> Float(688128,1024,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(172032,1:4,5376,168) -> Float(688128,1,21504,672) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(21504,1024:32,32,1) -> Float(688128,1024,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(21504,1024:32,32,1) -> Float(688128,1,21504,672) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(21504,1024:32,32,1) -> Float(172032,1:4,5376,168) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(688128,1024,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(688128,1,21504,672) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(172032,1:4,5376,168) *************** [04/18/2022-02:35:50] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(688128,1024,32,1) -> Float(688128,1,21504,672) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(688128,1024,32,1) -> Float(172032,1:4,5376,168) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(688128,1024,32,1) -> Float(21504,1024:32,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] 
*************** Autotuning Reformat: Float(688128,1024,32,1) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(688128,1,21504,672) -> Float(688128,1024,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(688128,1,21504,672) -> Float(172032,1:4,5376,168) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(688128,1,21504,672) -> Float(21504,1024:32,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(688128,1,21504,672) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(172032,1:4,5376,168) -> Float(688128,1024,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(172032,1:4,5376,168) -> Float(688128,1,21504,672) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(172032,1:4,5376,168) -> Float(21504,1024:32,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(172032,1:4,5376,168) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(21504,1024:32,32,1) -> Float(688128,1024,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(21504,1024:32,32,1) -> Float(688128,1,21504,672) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(21504,1024:32,32,1) -> Float(172032,1:4,5376,168) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(21504,1024:32,32,1) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(688128,1024,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> 
Float(688128,1,21504,672) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(172032,1:4,5376,168) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(21504,1024:32,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(688128,1024,32,1) -> Float(688128,1,21504,672) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(688128,1024,32,1) -> Float(172032,1:4,5376,168) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(688128,1024,32,1) -> Float(21504,1024:32,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(688128,1024,32,1) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(688128,1,21504,672) -> Float(688128,1024,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(688128,1,21504,672) -> Float(172032,1:4,5376,168) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(688128,1,21504,672) -> Float(21504,1024:32,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(688128,1,21504,672) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(172032,1:4,5376,168) -> Float(688128,1024,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(172032,1:4,5376,168) -> Float(688128,1,21504,672) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(172032,1:4,5376,168) -> Float(21504,1024:32,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: 
Float(172032,1:4,5376,168) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(21504,1024:32,32,1) -> Float(688128,1024,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(21504,1024:32,32,1) -> Float(688128,1,21504,672) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(21504,1024:32,32,1) -> Float(172032,1:4,5376,168) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(21504,1024:32,32,1) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(688128,1024,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(688128,1,21504,672) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(172032,1:4,5376,168) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(21504,1024:32,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(688128,1,21504,672) -> Float(688128,1024,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(172032,1:4,5376,168) -> Float(688128,1024,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(21504,1024:32,32,1) -> Float(688128,1024,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(688128,1024,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(672,1,1,1) -> Float(672,1,672,672) *************** 
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(672,1,1,1) -> Float(168,1:4,168,168) *************** [04/18/2022-02:35:50] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(672,1,1,1) -> Float(672,1,672,672) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(672,1,1,1) -> Float(168,1:4,168,168) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(672,1,672,672) -> Float(672,1,1,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(672,1,672,672) -> Float(168,1:4,168,168) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(168,1:4,168,168) -> Float(672,1,1,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(168,1:4,168,168) -> Float(672,1,672,672) *************** [04/18/2022-02:35:50] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(28,1,1,1) -> Float(28,1,28,28) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(28,1,1,1) -> Float(7,1:4,7,7) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(28,1,1,1) -> Float(1,1:32,1,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(28,1,1,1) -> Float(1:4,2,2,2) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(28,1,28,28) -> Float(28,1,1,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(28,1,28,28) -> Float(7,1:4,7,7) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(28,1,28,28) -> Float(1,1:32,1,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(28,1,28,28) -> 
Float(1:4,2,2,2) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(7,1:4,7,7) -> Float(28,1,1,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(7,1:4,7,7) -> Float(28,1,28,28) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(7,1:4,7,7) -> Float(1,1:32,1,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(7,1:4,7,7) -> Float(1:4,2,2,2) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(28,1,1,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(28,1,28,28) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(7,1:4,7,7) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(1:4,2,2,2) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(28,1,1,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(28,1,28,28) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(7,1:4,7,7) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(1,1:32,1,1) *************** [04/18/2022-02:35:50] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(28,1,1,1) -> Float(28,1,28,28) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(28,1,1,1) -> Float(7,1:4,7,7) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(28,1,28,28) -> Float(28,1,1,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** 
Autotuning Reformat: Float(28,1,28,28) -> Float(7,1:4,7,7) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(7,1:4,7,7) -> Float(28,1,1,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(7,1:4,7,7) -> Float(28,1,28,28) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(28,1,1,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(28,1,28,28) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(7,1:4,7,7) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(28,1,1,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(28,1,28,28) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(7,1:4,7,7) *************** [04/18/2022-02:35:50] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(672,1,1,1) -> Float(672,1,672,672) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(672,1,1,1) -> Float(168,1:4,168,168) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(672,1,1,1) -> Float(21,1:32,1,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(672,1,1,1) -> Float(1:4,2,2,2) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(672,1,672,672) -> Float(672,1,1,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(672,1,672,672) -> Float(168,1:4,168,168) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(672,1,672,672) -> 
Float(21,1:32,1,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(672,1,672,672) -> Float(1:4,2,2,2) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(168,1:4,168,168) -> Float(672,1,1,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(168,1:4,168,168) -> Float(672,1,672,672) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(168,1:4,168,168) -> Float(21,1:32,1,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(168,1:4,168,168) -> Float(1:4,2,2,2) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(21,1:32,1,1) -> Float(672,1,1,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(21,1:32,1,1) -> Float(672,1,672,672) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(21,1:32,1,1) -> Float(168,1:4,168,168) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(21,1:32,1,1) -> Float(1:4,2,2,2) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(672,1,1,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(672,1,672,672) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(168,1:4,168,168) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(21,1:32,1,1) *************** [04/18/2022-02:35:50] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(688128,1024,32,1) -> Float(688128,1,21504,672) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(688128,1024,32,1) -> 
Float(172032,1:4,5376,168) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(688128,1024,32,1) -> Float(21504,1024:32,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(688128,1024,32,1) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(688128,1,21504,672) -> Float(688128,1024,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(688128,1,21504,672) -> Float(172032,1:4,5376,168) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(688128,1,21504,672) -> Float(21504,1024:32,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(688128,1,21504,672) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(172032,1:4,5376,168) -> Float(688128,1024,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(172032,1:4,5376,168) -> Float(688128,1,21504,672) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(172032,1:4,5376,168) -> Float(21504,1024:32,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(172032,1:4,5376,168) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(21504,1024:32,32,1) -> Float(688128,1024,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(21504,1024:32,32,1) -> Float(688128,1,21504,672) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(21504,1024:32,32,1) -> Float(172032,1:4,5376,168) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(21504,1024:32,32,1) -> Float(1:4,2048,64,2) *************** 
[04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(688128,1024,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(688128,1,21504,672) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(172032,1:4,5376,168) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(21504,1024:32,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(688128,1024,32,1) -> Float(688128,1,21504,672) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(688128,1024,32,1) -> Float(172032,1:4,5376,168) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(688128,1,21504,672) -> Float(688128,1024,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(688128,1,21504,672) -> Float(172032,1:4,5376,168) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(172032,1:4,5376,168) -> Float(688128,1024,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(172032,1:4,5376,168) -> Float(688128,1,21504,672) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(21504,1024:32,32,1) -> Float(688128,1024,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(21504,1024:32,32,1) -> Float(688128,1,21504,672) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(21504,1024:32,32,1) -> Float(172032,1:4,5376,168) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(688128,1024,32,1) 
*************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(688128,1,21504,672) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(172032,1:4,5376,168) *************** [04/18/2022-02:35:50] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(114688,1024,32,1) -> Float(114688,1,3584,112) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(114688,1024,32,1) -> Float(28672,1:4,896,28) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(114688,1,3584,112) -> Float(114688,1024,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(114688,1,3584,112) -> Float(28672,1:4,896,28) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(28672,1:4,896,28) -> Float(114688,1024,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(28672,1:4,896,28) -> Float(114688,1,3584,112) *************** [04/18/2022-02:35:50] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(114688,1024,32,1) -> Float(114688,1,3584,112) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(114688,1024,32,1) -> Float(28672,1:4,896,28) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(114688,1,3584,112) -> Float(114688,1024,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(114688,1,3584,112) -> Float(28672,1:4,896,28) *************** [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(28672,1:4,896,28) -> Float(114688,1024,32,1) *************** [04/18/2022-02:35:50] [V] [TRT] *************** 
Autotuning Reformat: Float(28672,1:4,896,28) -> Float(114688,1,3584,112) *************** [04/18/2022-02:35:50] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(819200,1024,32,1) -> Float(819200,1,25600,800) *************** [04/18/2022-02:35:50] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/conv/BiasAdd || StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_0_up_lvl_4/1x1_pre_sample/conv/BiasAdd || StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_conv2d/Conv2D) (Reformat) [04/18/2022-02:35:50] [V] [TRT] Tactic: 1002 Time: 4.09638 [04/18/2022-02:35:50] [V] [TRT] Tactic: 0 Time: 3.60474 [04/18/2022-02:35:50] [V] [TRT] Fastest Tactic: 0 Time: 3.60474 [04/18/2022-02:35:50] [V] [TRT] *************** Autotuning Reformat: Float(819200,1024,32,1) -> Float(204800,1:4,6400,200) *************** [04/18/2022-02:35:50] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/conv/BiasAdd || StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_0_up_lvl_4/1x1_pre_sample/conv/BiasAdd || StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_conv2d/Conv2D) (Reformat) [04/18/2022-02:35:51] [V] [TRT] Tactic: 1002 Time: 3.56147 [04/18/2022-02:35:51] [V] [TRT] Tactic: 0 Time: 3.6041 [04/18/2022-02:35:51] [V] [TRT] Fastest Tactic: 1002 Time: 3.56147 [04/18/2022-02:35:51] [V] [TRT] *************** Autotuning Reformat: Float(819200,1024,32,1) -> Float(25600,1024:32,32,1) *************** [04/18/2022-02:35:51] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/conv/BiasAdd || 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_0_up_lvl_4/1x1_pre_sample/conv/BiasAdd || StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_conv2d/Conv2D) (Reformat) [04/18/2022-02:35:51] [V] [TRT] Tactic: 1002 Time: 3.54266 [04/18/2022-02:35:51] [V] [TRT] Tactic: 0 Time: 3.54163 [04/18/2022-02:35:51] [V] [TRT] Fastest Tactic: 0 Time: 3.54163 [04/18/2022-02:35:51] [V] [TRT] *************** Autotuning Reformat: Float(819200,1024,32,1) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:35:51] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/conv/BiasAdd || StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_0_up_lvl_4/1x1_pre_sample/conv/BiasAdd || StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_conv2d/Conv2D) (Reformat) [04/18/2022-02:35:51] [V] [TRT] Tactic: 1002 Time: 3.75757 [04/18/2022-02:35:51] [V] [TRT] Tactic: 0 Time: 3.7321 [04/18/2022-02:35:51] [V] [TRT] Fastest Tactic: 0 Time: 3.7321 [04/18/2022-02:35:51] [V] [TRT] *************** Autotuning Reformat: Float(819200,1,25600,800) -> Float(819200,1024,32,1) *************** [04/18/2022-02:35:51] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/conv/BiasAdd || StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_0_up_lvl_4/1x1_pre_sample/conv/BiasAdd || StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_conv2d/Conv2D) (Reformat) [04/18/2022-02:35:51] [V] [TRT] Tactic: 1002 Time: 3.50656 [04/18/2022-02:35:51] [V] [TRT] Tactic: 0 Time: 3.85434 [04/18/2022-02:35:51] [V] [TRT] Fastest Tactic: 1002 Time: 3.50656 [04/18/2022-02:35:51] [V] [TRT] *************** Autotuning Reformat: Float(819200,1,25600,800) -> Float(204800,1:4,6400,200) *************** 
[04/18/2022-02:35:51] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/conv/BiasAdd || StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_0_up_lvl_4/1x1_pre_sample/conv/BiasAdd || StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_conv2d/Conv2D) (Reformat) [04/18/2022-02:35:51] [V] [TRT] Tactic: 1002 Time: 3.47866 [04/18/2022-02:35:51] [V] [TRT] Tactic: 0 Time: 4.06362 [04/18/2022-02:35:51] [V] [TRT] Fastest Tactic: 1002 Time: 3.47866 [04/18/2022-02:35:51] [V] [TRT] *************** Autotuning Reformat: Float(819200,1,25600,800) -> Float(25600,1024:32,32,1) *************** [04/18/2022-02:35:51] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/conv/BiasAdd || StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_0_up_lvl_4/1x1_pre_sample/conv/BiasAdd || StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_conv2d/Conv2D) (Reformat) [04/18/2022-02:35:51] [V] [TRT] Tactic: 1002 Time: 3.57926 [04/18/2022-02:35:52] [V] [TRT] Tactic: 0 Time: 4.85312 [04/18/2022-02:35:52] [V] [TRT] Fastest Tactic: 1002 Time: 3.57926 [04/18/2022-02:35:52] [V] [TRT] *************** Autotuning Reformat: Float(819200,1,25600,800) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:35:52] [V] [TRT] *************** Autotuning Reformat: Float(204800,1:4,6400,200) -> Float(819200,1024,32,1) *************** [04/18/2022-02:35:52] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/conv/BiasAdd || StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_0_up_lvl_4/1x1_pre_sample/conv/BiasAdd || 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_conv2d/Conv2D) (Reformat) [04/18/2022-02:35:52] [V] [TRT] Tactic: 1002 Time: 3.53766 [04/18/2022-02:35:52] [V] [TRT] Tactic: 0 Time: 3.87558 [04/18/2022-02:35:52] [V] [TRT] Fastest Tactic: 1002 Time: 3.53766 [04/18/2022-02:35:52] [V] [TRT] *************** Autotuning Reformat: Float(204800,1:4,6400,200) -> Float(819200,1,25600,800) *************** [04/18/2022-02:35:52] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/conv/BiasAdd || StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_0_up_lvl_4/1x1_pre_sample/conv/BiasAdd || StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_conv2d/Conv2D) (Reformat) [04/18/2022-02:35:52] [V] [TRT] Tactic: 1002 Time: 3.58106 [04/18/2022-02:35:52] [V] [TRT] Tactic: 0 Time: 3.59757 [04/18/2022-02:35:52] [V] [TRT] Fastest Tactic: 1002 Time: 3.58106 [04/18/2022-02:35:52] [V] [TRT] *************** Autotuning Reformat: Float(204800,1:4,6400,200) -> Float(25600,1024:32,32,1) *************** [04/18/2022-02:35:52] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/conv/BiasAdd || StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_0_up_lvl_4/1x1_pre_sample/conv/BiasAdd || StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_conv2d/Conv2D) (Reformat) [04/18/2022-02:35:52] [V] [TRT] Tactic: 1002 Time: 3.5767 [04/18/2022-02:35:52] [V] [TRT] Tactic: 0 Time: 4.74406 [04/18/2022-02:35:52] [V] [TRT] Fastest Tactic: 1002 Time: 3.5767 [04/18/2022-02:35:52] [V] [TRT] *************** Autotuning Reformat: Float(204800,1:4,6400,200) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:35:52] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:52] [V] 
[TRT] *************** Autotuning Reformat: Float(819200,1024,32,1) -> Float(819200,1,25600,800) *************** [04/18/2022-02:35:52] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:52] [V] [TRT] Tactic: 1002 Time: 3.70048 [04/18/2022-02:35:52] [V] [TRT] Tactic: 0 Time: 3.40864 [04/18/2022-02:35:52] [V] [TRT] Fastest Tactic: 0 Time: 3.40864 [04/18/2022-02:35:52] [V] [TRT] *************** Autotuning Reformat: Float(819200,1024,32,1) -> Float(204800,1:4,6400,200) *************** [04/18/2022-02:35:52] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:52] [V] [TRT] Tactic: 1002 Time: 3.73786 [04/18/2022-02:35:52] [V] [TRT] Tactic: 0 Time: 2.97638 [04/18/2022-02:35:52] [V] [TRT] Fastest Tactic: 0 Time: 2.97638 [04/18/2022-02:35:52] [V] [TRT] *************** Autotuning Reformat: Float(819200,1024,32,1) -> Float(25600,1024:32,32,1) *************** [04/18/2022-02:35:52] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:52] [V] [TRT] Tactic: 1002 Time: 3.02502 [04/18/2022-02:35:52] [V] [TRT] Tactic: 0 Time: 2.96102 [04/18/2022-02:35:52] [V] [TRT] Fastest Tactic: 0 Time: 2.96102 [04/18/2022-02:35:52] [V] [TRT] *************** Autotuning Reformat: Float(819200,1024,32,1) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:35:52] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:53] [V] [TRT] Tactic: 1002 Time: 3.15046 [04/18/2022-02:35:53] [V] [TRT] Tactic: 0 Time: 3.14675 [04/18/2022-02:35:53] [V] [TRT] Fastest 
Tactic: 0 Time: 3.14675 [04/18/2022-02:35:53] [V] [TRT] *************** Autotuning Reformat: Float(819200,1,25600,800) -> Float(819200,1024,32,1) *************** [04/18/2022-02:35:53] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:53] [V] [TRT] Tactic: 1002 Time: 2.92723 [04/18/2022-02:35:53] [V] [TRT] Tactic: 0 Time: 3.2471 [04/18/2022-02:35:53] [V] [TRT] Fastest Tactic: 1002 Time: 2.92723 [04/18/2022-02:35:53] [V] [TRT] *************** Autotuning Reformat: Float(819200,1,25600,800) -> Float(204800,1:4,6400,200) *************** [04/18/2022-02:35:53] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:53] [V] [TRT] Tactic: 1002 Time: 3.04154 [04/18/2022-02:35:53] [V] [TRT] Tactic: 0 Time: 3.10758 [04/18/2022-02:35:53] [V] [TRT] Fastest Tactic: 1002 Time: 3.04154 [04/18/2022-02:35:53] [V] [TRT] *************** Autotuning Reformat: Float(819200,1,25600,800) -> Float(25600,1024:32,32,1) *************** [04/18/2022-02:35:53] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:53] [V] [TRT] Tactic: 1002 Time: 2.90931 [04/18/2022-02:35:53] [V] [TRT] Tactic: 0 Time: 4.37376 [04/18/2022-02:35:53] [V] [TRT] Fastest Tactic: 1002 Time: 2.90931 [04/18/2022-02:35:53] [V] [TRT] *************** Autotuning Reformat: Float(819200,1,25600,800) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:35:53] [V] [TRT] *************** Autotuning Reformat: Float(204800,1:4,6400,200) -> Float(819200,1024,32,1) *************** [04/18/2022-02:35:53] [V] [TRT] --------------- Timing Runner: Optimizer 
Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:53] [V] [TRT] Tactic: 1002 Time: 2.92237 [04/18/2022-02:35:53] [V] [TRT] Tactic: 0 Time: 3.23814 [04/18/2022-02:35:53] [V] [TRT] Fastest Tactic: 1002 Time: 2.92237 [04/18/2022-02:35:53] [V] [TRT] *************** Autotuning Reformat: Float(204800,1:4,6400,200) -> Float(819200,1,25600,800) *************** [04/18/2022-02:35:53] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:53] [V] [TRT] Tactic: 1002 Time: 3.4473 [04/18/2022-02:35:53] [V] [TRT] Tactic: 0 Time: 3.00966 [04/18/2022-02:35:53] [V] [TRT] Fastest Tactic: 0 Time: 3.00966 [04/18/2022-02:35:53] [V] [TRT] *************** Autotuning Reformat: Float(204800,1:4,6400,200) -> Float(25600,1024:32,32,1) *************** [04/18/2022-02:35:53] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:53] [V] [TRT] Tactic: 1002 Time: 3.56083 [04/18/2022-02:35:53] [V] [TRT] Tactic: 0 Time: 4.04915 [04/18/2022-02:35:53] [V] [TRT] Fastest Tactic: 1002 Time: 3.56083 [04/18/2022-02:35:53] [V] [TRT] *************** Autotuning Reformat: Float(204800,1:4,6400,200) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:35:53] [V] [TRT] *************** Autotuning Reformat: Float(25600,1024:32,32,1) -> Float(819200,1024,32,1) *************** [04/18/2022-02:35:53] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:54] [V] [TRT] Tactic: 1002 Time: 3.52742 [04/18/2022-02:35:54] [V] [TRT] Tactic: 0 Time: 3.15098 [04/18/2022-02:35:54] [V] [TRT] Fastest Tactic: 0 Time: 3.15098 
[04/18/2022-02:35:54] [V] [TRT] *************** Autotuning Reformat: Float(25600,1024:32,32,1) -> Float(819200,1,25600,800) *************** [04/18/2022-02:35:54] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:54] [V] [TRT] Tactic: 1002 Time: 3.02797 [04/18/2022-02:35:54] [V] [TRT] Tactic: 0 Time: 3.0409 [04/18/2022-02:35:54] [V] [TRT] Fastest Tactic: 1002 Time: 3.02797 [04/18/2022-02:35:54] [V] [TRT] *************** Autotuning Reformat: Float(25600,1024:32,32,1) -> Float(204800,1:4,6400,200) *************** [04/18/2022-02:35:54] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:54] [V] [TRT] Tactic: 1002 Time: 3.04512 [04/18/2022-02:35:54] [V] [TRT] Tactic: 0 Time: 3.01427 [04/18/2022-02:35:54] [V] [TRT] Fastest Tactic: 0 Time: 3.01427 [04/18/2022-02:35:54] [V] [TRT] *************** Autotuning Reformat: Float(25600,1024:32,32,1) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:35:54] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(819200,1024,32,1) *************** [04/18/2022-02:35:54] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/expand_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:54] [V] [TRT] Tactic: 1002 Time: 4.38208 [04/18/2022-02:35:54] [V] [TRT] Tactic: 0 Time: 13.2037 [04/18/2022-02:35:54] [V] [TRT] Fastest Tactic: 1002 Time: 4.38208 [04/18/2022-02:35:54] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(819200,1,25600,800) *************** [04/18/2022-02:35:54] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(204800,1:4,6400,200) *************** [04/18/2022-02:35:54] [V] [TRT] *************** 
Autotuning Reformat: Float(1:4,2048,64,2) -> Float(25600,1024:32,32,1) *************** [04/18/2022-02:35:54] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:54] [V] [TRT] *************** Autotuning Reformat: Float(688128,1024,32,1) -> Float(688128,1,21504,672) *************** [04/18/2022-02:35:54] [V] [TRT] *************** Autotuning Reformat: Float(688128,1024,32,1) -> Float(172032,1:4,5376,168) *************** [04/18/2022-02:35:54] [V] [TRT] *************** Autotuning Reformat: Float(688128,1,21504,672) -> Float(688128,1024,32,1) *************** [04/18/2022-02:35:54] [V] [TRT] *************** Autotuning Reformat: Float(688128,1,21504,672) -> Float(172032,1:4,5376,168) *************** [04/18/2022-02:35:54] [V] [TRT] *************** Autotuning Reformat: Float(172032,1:4,5376,168) -> Float(688128,1024,32,1) *************** [04/18/2022-02:35:54] [V] [TRT] *************** Autotuning Reformat: Float(172032,1:4,5376,168) -> Float(688128,1,21504,672) *************** [04/18/2022-02:35:54] [V] [TRT] *************** Autotuning Reformat: Float(21504,1024:32,32,1) -> Float(688128,1024,32,1) *************** [04/18/2022-02:35:54] [V] [TRT] *************** Autotuning Reformat: Float(21504,1024:32,32,1) -> Float(688128,1,21504,672) *************** [04/18/2022-02:35:54] [V] [TRT] *************** Autotuning Reformat: Float(21504,1024:32,32,1) -> Float(172032,1:4,5376,168) *************** [04/18/2022-02:35:54] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(688128,1024,32,1) *************** [04/18/2022-02:35:54] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(688128,1,21504,672) *************** [04/18/2022-02:35:54] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(172032,1:4,5376,168) *************** [04/18/2022-02:35:54] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:54] [V] [TRT] *************** Autotuning Reformat: Float(172032,256,16,1) -> 
Float(172032,1,10752,672) *************** [04/18/2022-02:35:54] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:54] [V] [TRT] Tactic: 1002 Time: 0.733696 [04/18/2022-02:35:54] [V] [TRT] Tactic: 0 Time: 1.40083 [04/18/2022-02:35:54] [V] [TRT] Fastest Tactic: 1002 Time: 0.733696 [04/18/2022-02:35:54] [V] [TRT] *************** Autotuning Reformat: Float(172032,256,16,1) -> Float(43008,1:4,2688,168) *************** [04/18/2022-02:35:54] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:54] [V] [TRT] Tactic: 1002 Time: 0.717824 [04/18/2022-02:35:54] [V] [TRT] Tactic: 0 Time: 0.761472 [04/18/2022-02:35:54] [V] [TRT] Fastest Tactic: 1002 Time: 0.717824 [04/18/2022-02:35:54] [V] [TRT] *************** Autotuning Reformat: Float(172032,256,16,1) -> Float(5376,256:32,16,1) *************** [04/18/2022-02:35:54] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:54] [V] [TRT] Tactic: 1002 Time: 0.717056 [04/18/2022-02:35:54] [V] [TRT] Tactic: 0 Time: 0.719616 [04/18/2022-02:35:54] [V] [TRT] Fastest Tactic: 1002 Time: 0.717056 [04/18/2022-02:35:54] [V] [TRT] *************** Autotuning Reformat: Float(172032,256,16,1) -> Float(1:4,512,32,2) *************** [04/18/2022-02:35:54] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:54] [V] [TRT] Tactic: 1002 Time: 1.43731 [04/18/2022-02:35:54] [V] [TRT] Tactic: 0 Time: 0.756352 [04/18/2022-02:35:54] [V] [TRT] Fastest Tactic: 0 Time: 0.756352 [04/18/2022-02:35:54] 
[V] [TRT] *************** Autotuning Reformat: Float(172032,1,10752,672) -> Float(172032,256,16,1) *************** [04/18/2022-02:35:54] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:54] [V] [TRT] Tactic: 1002 Time: 0.743552 [04/18/2022-02:35:55] [V] [TRT] Tactic: 0 Time: 0.738688 [04/18/2022-02:35:55] [V] [TRT] Fastest Tactic: 0 Time: 0.738688 [04/18/2022-02:35:55] [V] [TRT] *************** Autotuning Reformat: Float(172032,1,10752,672) -> Float(43008,1:4,2688,168) *************** [04/18/2022-02:35:55] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:55] [V] [TRT] Tactic: 1002 Time: 0.753536 [04/18/2022-02:35:55] [V] [TRT] Tactic: 0 Time: 0.752256 [04/18/2022-02:35:55] [V] [TRT] Fastest Tactic: 0 Time: 0.752256 [04/18/2022-02:35:55] [V] [TRT] *************** Autotuning Reformat: Float(172032,1,10752,672) -> Float(5376,256:32,16,1) *************** [04/18/2022-02:35:55] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:55] [V] [TRT] Tactic: 1002 Time: 0.730112 [04/18/2022-02:35:55] [V] [TRT] Tactic: 0 Time: 1.05702 [04/18/2022-02:35:55] [V] [TRT] Fastest Tactic: 1002 Time: 0.730112 [04/18/2022-02:35:55] [V] [TRT] *************** Autotuning Reformat: Float(172032,1,10752,672) -> Float(1:4,512,32,2) *************** [04/18/2022-02:35:55] [V] [TRT] *************** Autotuning Reformat: Float(43008,1:4,2688,168) -> Float(172032,256,16,1) *************** [04/18/2022-02:35:55] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) 
(Reformat) [04/18/2022-02:35:55] [V] [TRT] Tactic: 1002 Time: 0.704256 [04/18/2022-02:35:55] [V] [TRT] Tactic: 0 Time: 0.742528 [04/18/2022-02:35:55] [V] [TRT] Fastest Tactic: 1002 Time: 0.704256 [04/18/2022-02:35:55] [V] [TRT] *************** Autotuning Reformat: Float(43008,1:4,2688,168) -> Float(172032,1,10752,672) *************** [04/18/2022-02:35:55] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:55] [V] [TRT] Tactic: 1002 Time: 1.18989 [04/18/2022-02:35:55] [V] [TRT] Tactic: 0 Time: 0.726144 [04/18/2022-02:35:55] [V] [TRT] Fastest Tactic: 0 Time: 0.726144 [04/18/2022-02:35:55] [V] [TRT] *************** Autotuning Reformat: Float(43008,1:4,2688,168) -> Float(5376,256:32,16,1) *************** [04/18/2022-02:35:55] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:55] [V] [TRT] Tactic: 1002 Time: 0.711424 [04/18/2022-02:35:55] [V] [TRT] Tactic: 0 Time: 1.6169 [04/18/2022-02:35:55] [V] [TRT] Fastest Tactic: 1002 Time: 0.711424 [04/18/2022-02:35:55] [V] [TRT] *************** Autotuning Reformat: Float(43008,1:4,2688,168) -> Float(1:4,512,32,2) *************** [04/18/2022-02:35:55] [V] [TRT] *************** Autotuning Reformat: Float(5376,256:32,16,1) -> Float(172032,256,16,1) *************** [04/18/2022-02:35:55] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:55] [V] [TRT] Tactic: 1002 Time: 0.711552 [04/18/2022-02:35:55] [V] [TRT] Tactic: 0 Time: 1.45434 [04/18/2022-02:35:55] [V] [TRT] Fastest Tactic: 1002 Time: 0.711552 [04/18/2022-02:35:55] [V] [TRT] *************** Autotuning Reformat: Float(5376,256:32,16,1) -> 
Float(172032,1,10752,672) *************** [04/18/2022-02:35:55] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:55] [V] [TRT] Tactic: 1002 Time: 0.7424 [04/18/2022-02:35:55] [V] [TRT] Tactic: 0 Time: 1.21664 [04/18/2022-02:35:55] [V] [TRT] Fastest Tactic: 1002 Time: 0.7424 [04/18/2022-02:35:55] [V] [TRT] *************** Autotuning Reformat: Float(5376,256:32,16,1) -> Float(43008,1:4,2688,168) *************** [04/18/2022-02:35:55] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:55] [V] [TRT] Tactic: 1002 Time: 0.737792 [04/18/2022-02:35:55] [V] [TRT] Tactic: 0 Time: 0.821632 [04/18/2022-02:35:55] [V] [TRT] Fastest Tactic: 1002 Time: 0.737792 [04/18/2022-02:35:55] [V] [TRT] *************** Autotuning Reformat: Float(5376,256:32,16,1) -> Float(1:4,512,32,2) *************** [04/18/2022-02:35:55] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(172032,256,16,1) *************** [04/18/2022-02:35:55] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:35:55] [V] [TRT] Tactic: 1002 Time: 1.1689 [04/18/2022-02:35:55] [V] [TRT] Tactic: 0 Time: 3.3175 [04/18/2022-02:35:55] [V] [TRT] Fastest Tactic: 1002 Time: 1.1689 [04/18/2022-02:35:55] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(172032,1,10752,672) *************** [04/18/2022-02:35:55] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(43008,1:4,2688,168) *************** [04/18/2022-02:35:55] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(5376,256:32,16,1) *************** 
[04/18/2022-02:35:55] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:55] [V] [TRT] *************** Autotuning Reformat: Float(172032,256,16,1) -> Float(172032,1,10752,672) *************** [04/18/2022-02:35:55] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:55] [V] [TRT] Tactic: 1002 Time: 1.38765 [04/18/2022-02:35:55] [V] [TRT] Tactic: 0 Time: 0.736512 [04/18/2022-02:35:55] [V] [TRT] Fastest Tactic: 0 Time: 0.736512 [04/18/2022-02:35:55] [V] [TRT] *************** Autotuning Reformat: Float(172032,256,16,1) -> Float(43008,1:4,2688,168) *************** [04/18/2022-02:35:55] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:55] [V] [TRT] Tactic: 1002 Time: 1.49376 [04/18/2022-02:35:55] [V] [TRT] Tactic: 0 Time: 0.746368 [04/18/2022-02:35:55] [V] [TRT] Fastest Tactic: 0 Time: 0.746368 [04/18/2022-02:35:55] [V] [TRT] *************** Autotuning Reformat: Float(172032,256,16,1) -> Float(5376,256:32,16,1) *************** [04/18/2022-02:35:55] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:55] [V] [TRT] Tactic: 1002 Time: 0.70848 [04/18/2022-02:35:55] [V] [TRT] Tactic: 0 Time: 1.47635 [04/18/2022-02:35:55] [V] [TRT] Fastest Tactic: 1002 Time: 0.70848 [04/18/2022-02:35:55] [V] [TRT] *************** Autotuning Reformat: Float(172032,256,16,1) -> Float(1:4,512,32,2) *************** [04/18/2022-02:35:55] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:55] [V] [TRT] Tactic: 1002 Time: 1.33658 
[04/18/2022-02:35:55] [V] [TRT] Tactic: 0 Time: 0.749184 [04/18/2022-02:35:55] [V] [TRT] Fastest Tactic: 0 Time: 0.749184 [04/18/2022-02:35:55] [V] [TRT] *************** Autotuning Reformat: Float(172032,1,10752,672) -> Float(172032,256,16,1) *************** [04/18/2022-02:35:55] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:55] [V] [TRT] Tactic: 1002 Time: 0.706944 [04/18/2022-02:35:55] [V] [TRT] Tactic: 0 Time: 1.45395 [04/18/2022-02:35:55] [V] [TRT] Fastest Tactic: 1002 Time: 0.706944 [04/18/2022-02:35:55] [V] [TRT] *************** Autotuning Reformat: Float(172032,1,10752,672) -> Float(43008,1:4,2688,168) *************** [04/18/2022-02:35:55] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:55] [V] [TRT] Tactic: 1002 Time: 0.731904 [04/18/2022-02:35:55] [V] [TRT] Tactic: 0 Time: 0.731136 [04/18/2022-02:35:55] [V] [TRT] Fastest Tactic: 0 Time: 0.731136 [04/18/2022-02:35:55] [V] [TRT] *************** Autotuning Reformat: Float(172032,1,10752,672) -> Float(5376,256:32,16,1) *************** [04/18/2022-02:35:55] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:55] [V] [TRT] Tactic: 1002 Time: 0.72832 [04/18/2022-02:35:55] [V] [TRT] Tactic: 0 Time: 0.982016 [04/18/2022-02:35:55] [V] [TRT] Fastest Tactic: 1002 Time: 0.72832 [04/18/2022-02:35:55] [V] [TRT] *************** Autotuning Reformat: Float(172032,1,10752,672) -> Float(1:4,512,32,2) *************** [04/18/2022-02:35:55] [V] [TRT] *************** Autotuning Reformat: Float(43008,1:4,2688,168) -> Float(172032,256,16,1) *************** [04/18/2022-02:35:55] [V] [TRT] --------------- Timing 
Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:55] [V] [TRT] Tactic: 1002 Time: 0.797568 [04/18/2022-02:35:56] [V] [TRT] Tactic: 0 Time: 0.75008 [04/18/2022-02:35:56] [V] [TRT] Fastest Tactic: 0 Time: 0.75008 [04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(43008,1:4,2688,168) -> Float(172032,1,10752,672) *************** [04/18/2022-02:35:56] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:56] [V] [TRT] Tactic: 1002 Time: 0.733312 [04/18/2022-02:35:56] [V] [TRT] Tactic: 0 Time: 0.818048 [04/18/2022-02:35:56] [V] [TRT] Fastest Tactic: 1002 Time: 0.733312 [04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(43008,1:4,2688,168) -> Float(5376,256:32,16,1) *************** [04/18/2022-02:35:56] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:56] [V] [TRT] Tactic: 1002 Time: 0.805632 [04/18/2022-02:35:56] [V] [TRT] Tactic: 0 Time: 1.94394 [04/18/2022-02:35:56] [V] [TRT] Fastest Tactic: 1002 Time: 0.805632 [04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(43008,1:4,2688,168) -> Float(1:4,512,32,2) *************** [04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(5376,256:32,16,1) -> Float(172032,256,16,1) *************** [04/18/2022-02:35:56] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_activation/mul:0) (Reformat) [04/18/2022-02:35:56] [V] [TRT] Tactic: 1002 Time: 0.778368 [04/18/2022-02:35:56] [V] [TRT] Tactic: 0 Time: 0.757888 [04/18/2022-02:35:56] [V] [TRT] Fastest Tactic: 0 Time: 0.757888 
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(5376,256:32,16,1) -> Float(172032,1,10752,672) ***************
[04/18/2022-02:35:56] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:56] [V] [TRT] Tactic: 1002 Time: 0.72896
[04/18/2022-02:35:56] [V] [TRT] Tactic: 0 Time: 0.749568
[04/18/2022-02:35:56] [V] [TRT] Fastest Tactic: 1002 Time: 0.72896
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(5376,256:32,16,1) -> Float(43008,1:4,2688,168) ***************
[04/18/2022-02:35:56] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:56] [V] [TRT] Tactic: 1002 Time: 1.67923
[04/18/2022-02:35:56] [V] [TRT] Tactic: 0 Time: 0.811776
[04/18/2022-02:35:56] [V] [TRT] Fastest Tactic: 0 Time: 0.811776
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(5376,256:32,16,1) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(172032,256,16,1) ***************
[04/18/2022-02:35:56] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:56] [V] [TRT] Tactic: 1002 Time: 1.75091
[04/18/2022-02:35:56] [V] [TRT] Tactic: 0 Time: 3.30496
[04/18/2022-02:35:56] [V] [TRT] Fastest Tactic: 1002 Time: 1.75091
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(172032,1,10752,672) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(43008,1:4,2688,168) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(5376,256:32,16,1) ***************
[04/18/2022-02:35:56] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(819200,1024,32,1) -> Float(819200,1,25600,800) ***************
[04/18/2022-02:35:56] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:56] [V] [TRT] Tactic: 1002 Time: 0.15424
[04/18/2022-02:35:56] [V] [TRT] Tactic: 0 Time: 0.1888
[04/18/2022-02:35:56] [V] [TRT] Fastest Tactic: 1002 Time: 0.15424
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(819200,1024,32,1) -> Float(204800,1:4,6400,200) ***************
[04/18/2022-02:35:56] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:56] [V] [TRT] Tactic: 1002 Time: 0.170752
[04/18/2022-02:35:56] [V] [TRT] Tactic: 0 Time: 0.174976
[04/18/2022-02:35:56] [V] [TRT] Fastest Tactic: 1002 Time: 0.170752
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(819200,1024,32,1) -> Float(25600,1024:32,32,1) ***************
[04/18/2022-02:35:56] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:56] [V] [TRT] Tactic: 1002 Time: 0.151936
[04/18/2022-02:35:56] [V] [TRT] Tactic: 0 Time: 0.28672
[04/18/2022-02:35:56] [V] [TRT] Fastest Tactic: 1002 Time: 0.151936
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(819200,1,25600,800) -> Float(819200,1024,32,1) ***************
[04/18/2022-02:35:56] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:56] [V] [TRT] Tactic: 1002 Time: 0.166656
[04/18/2022-02:35:56] [V] [TRT] Tactic: 0 Time: 0.189952
[04/18/2022-02:35:56] [V] [TRT] Fastest Tactic: 1002 Time: 0.166656
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(819200,1,25600,800) -> Float(204800,1:4,6400,200) ***************
[04/18/2022-02:35:56] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:56] [V] [TRT] Tactic: 1002 Time: 0.155136
[04/18/2022-02:35:56] [V] [TRT] Tactic: 0 Time: 0.191488
[04/18/2022-02:35:56] [V] [TRT] Fastest Tactic: 1002 Time: 0.155136
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(819200,1,25600,800) -> Float(25600,1024:32,32,1) ***************
[04/18/2022-02:35:56] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:56] [V] [TRT] Tactic: 1002 Time: 0.163968
[04/18/2022-02:35:56] [V] [TRT] Tactic: 0 Time: 0.400384
[04/18/2022-02:35:56] [V] [TRT] Fastest Tactic: 1002 Time: 0.163968
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(204800,1:4,6400,200) -> Float(819200,1024,32,1) ***************
[04/18/2022-02:35:56] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:56] [V] [TRT] Tactic: 1002 Time: 0.17664
[04/18/2022-02:35:56] [V] [TRT] Tactic: 0 Time: 0.204416
[04/18/2022-02:35:56] [V] [TRT] Fastest Tactic: 1002 Time: 0.17664
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(204800,1:4,6400,200) -> Float(819200,1,25600,800) ***************
[04/18/2022-02:35:56] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:56] [V] [TRT] Tactic: 1002 Time: 0.152064
[04/18/2022-02:35:56] [V] [TRT] Tactic: 0 Time: 0.178688
[04/18/2022-02:35:56] [V] [TRT] Fastest Tactic: 1002 Time: 0.152064
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(204800,1:4,6400,200) -> Float(25600,1024:32,32,1) ***************
[04/18/2022-02:35:56] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:56] [V] [TRT] Tactic: 1002 Time: 0.166016
[04/18/2022-02:35:56] [V] [TRT] Tactic: 0 Time: 0.397824
[04/18/2022-02:35:56] [V] [TRT] Fastest Tactic: 1002 Time: 0.166016
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(25600,1024:32,32,1) -> Float(819200,1024,32,1) ***************
[04/18/2022-02:35:56] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:56] [V] [TRT] Tactic: 1002 Time: 0.135936
[04/18/2022-02:35:56] [V] [TRT] Tactic: 0 Time: 0.194816
[04/18/2022-02:35:56] [V] [TRT] Fastest Tactic: 1002 Time: 0.135936
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(25600,1024:32,32,1) -> Float(819200,1,25600,800) ***************
[04/18/2022-02:35:56] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:56] [V] [TRT] Tactic: 1002 Time: 0.142464
[04/18/2022-02:35:56] [V] [TRT] Tactic: 0 Time: 0.157056
[04/18/2022-02:35:56] [V] [TRT] Fastest Tactic: 1002 Time: 0.142464
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(25600,1024:32,32,1) -> Float(204800,1:4,6400,200) ***************
[04/18/2022-02:35:56] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:56] [V] [TRT] Tactic: 1002 Time: 0.133376
[04/18/2022-02:35:56] [V] [TRT] Tactic: 0 Time: 0.187392
[04/18/2022-02:35:56] [V] [TRT] Fastest Tactic: 1002 Time: 0.133376
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(819200,1024,32,1) ***************
[04/18/2022-02:35:56] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_0_up_lvl_4/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:56] [V] [TRT] Tactic: 1002 Time: 0.44608
[04/18/2022-02:35:56] [V] [TRT] Tactic: 0 Time: 0.132224
[04/18/2022-02:35:56] [V] [TRT] Fastest Tactic: 0 Time: 0.132224
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(819200,1,25600,800) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(204800,1:4,6400,200) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(25600,1024:32,32,1) ***************
[04/18/2022-02:35:56] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(819200,1024,32,1) -> Float(819200,1,25600,800) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(819200,1024,32,1) -> Float(204800,1:4,6400,200) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(819200,1024,32,1) -> Float(25600,1024:32,32,1) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(819200,1,25600,800) -> Float(819200,1024,32,1) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(819200,1,25600,800) -> Float(204800,1:4,6400,200) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(819200,1,25600,800) -> Float(25600,1024:32,32,1) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(204800,1:4,6400,200) -> Float(819200,1024,32,1) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(204800,1:4,6400,200) -> Float(819200,1,25600,800) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(204800,1:4,6400,200) -> Float(25600,1024:32,32,1) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(25600,1024:32,32,1) -> Float(819200,1024,32,1) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(25600,1024:32,32,1) -> Float(819200,1,25600,800) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(25600,1024:32,32,1) -> Float(204800,1:4,6400,200) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(819200,1024,32,1) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(819200,1,25600,800) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(204800,1:4,6400,200) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(25600,1024:32,32,1) ***************
[04/18/2022-02:35:56] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(172032,1,10752,672) -> Float(172032,256,16,1) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(43008,1:4,2688,168) -> Float(172032,256,16,1) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(5376,256:32,16,1) -> Float(172032,256,16,1) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(172032,256,16,1) ***************
[04/18/2022-02:35:56] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(672,1,1,1) -> Float(672,1,672,672) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(672,1,1,1) -> Float(168,1:4,168,168) ***************
[04/18/2022-02:35:56] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(672,1,1,1) -> Float(672,1,672,672) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(672,1,1,1) -> Float(168,1:4,168,168) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(672,1,672,672) -> Float(672,1,1,1) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(672,1,672,672) -> Float(168,1:4,168,168) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(168,1:4,168,168) -> Float(672,1,1,1) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(168,1:4,168,168) -> Float(672,1,672,672) ***************
[04/18/2022-02:35:56] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(28,1,1,1) -> Float(28,1,28,28) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(28,1,1,1) -> Float(7,1:4,7,7) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(28,1,1,1) -> Float(1,1:32,1,1) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(28,1,1,1) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(28,1,28,28) -> Float(28,1,1,1) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(28,1,28,28) -> Float(7,1:4,7,7) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(28,1,28,28) -> Float(1,1:32,1,1) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(28,1,28,28) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(7,1:4,7,7) -> Float(28,1,1,1) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(7,1:4,7,7) -> Float(28,1,28,28) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(7,1:4,7,7) -> Float(1,1:32,1,1) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(7,1:4,7,7) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(28,1,1,1) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(28,1,28,28) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(7,1:4,7,7) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(28,1,1,1) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(28,1,28,28) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(7,1:4,7,7) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(1,1:32,1,1) ***************
[04/18/2022-02:35:56] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(28,1,1,1) -> Float(28,1,28,28) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(28,1,1,1) -> Float(7,1:4,7,7) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(28,1,28,28) -> Float(28,1,1,1) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(28,1,28,28) -> Float(7,1:4,7,7) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(7,1:4,7,7) -> Float(28,1,1,1) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(7,1:4,7,7) -> Float(28,1,28,28) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(28,1,1,1) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(28,1,28,28) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(1,1:32,1,1) -> Float(7,1:4,7,7) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(28,1,1,1) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(28,1,28,28) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(7,1:4,7,7) ***************
[04/18/2022-02:35:56] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(672,1,1,1) -> Float(672,1,672,672) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(672,1,1,1) -> Float(168,1:4,168,168) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(672,1,1,1) -> Float(21,1:32,1,1) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(672,1,1,1) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(672,1,672,672) -> Float(672,1,1,1) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(672,1,672,672) -> Float(168,1:4,168,168) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(672,1,672,672) -> Float(21,1:32,1,1) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(672,1,672,672) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(168,1:4,168,168) -> Float(672,1,1,1) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(168,1:4,168,168) -> Float(672,1,672,672) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(168,1:4,168,168) -> Float(21,1:32,1,1) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(168,1:4,168,168) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(21,1:32,1,1) -> Float(672,1,1,1) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(21,1:32,1,1) -> Float(672,1,672,672) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(21,1:32,1,1) -> Float(168,1:4,168,168) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(21,1:32,1,1) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(672,1,1,1) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(672,1,672,672) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(168,1:4,168,168) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(21,1:32,1,1) ***************
[04/18/2022-02:35:56] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(172032,256,16,1) -> Float(172032,1,10752,672) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(172032,256,16,1) -> Float(43008,1:4,2688,168) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(172032,256,16,1) -> Float(5376,256:32,16,1) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(172032,256,16,1) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(172032,1,10752,672) -> Float(172032,256,16,1) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(172032,1,10752,672) -> Float(43008,1:4,2688,168) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(172032,1,10752,672) -> Float(5376,256:32,16,1) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(172032,1,10752,672) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(43008,1:4,2688,168) -> Float(172032,256,16,1) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(43008,1:4,2688,168) -> Float(172032,1,10752,672) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(43008,1:4,2688,168) -> Float(5376,256:32,16,1) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(43008,1:4,2688,168) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(5376,256:32,16,1) -> Float(172032,256,16,1) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(5376,256:32,16,1) -> Float(172032,1,10752,672) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(5376,256:32,16,1) -> Float(43008,1:4,2688,168) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(5376,256:32,16,1) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(172032,256,16,1) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(172032,1,10752,672) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(43008,1:4,2688,168) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(5376,256:32,16,1) ***************
[04/18/2022-02:35:56] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(172032,256,16,1) -> Float(172032,1,10752,672) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(172032,256,16,1) -> Float(43008,1:4,2688,168) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(172032,1,10752,672) -> Float(172032,256,16,1) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(172032,1,10752,672) -> Float(43008,1:4,2688,168) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(43008,1:4,2688,168) -> Float(172032,256,16,1) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(43008,1:4,2688,168) -> Float(172032,1,10752,672) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(5376,256:32,16,1) -> Float(172032,256,16,1) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(5376,256:32,16,1) -> Float(172032,1,10752,672) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(5376,256:32,16,1) -> Float(43008,1:4,2688,168) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(172032,256,16,1) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(172032,1,10752,672) ***************
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(43008,1:4,2688,168) ***************
[04/18/2022-02:35:56] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(49152,256,16,1) -> Float(49152,1,3072,192) ***************
[04/18/2022-02:35:56] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/project_bn/FusedBatchNormV3:0) (Reformat)
[04/18/2022-02:35:56] [V] [TRT] Tactic: 1002 Time: 0.068608
[04/18/2022-02:35:56] [V] [TRT] Tactic: 0 Time: 0.09344
[04/18/2022-02:35:56] [V] [TRT] Fastest Tactic: 1002 Time: 0.068608
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(49152,256,16,1) -> Float(12288,1:4,768,48) ***************
[04/18/2022-02:35:56] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/project_bn/FusedBatchNormV3:0) (Reformat)
[04/18/2022-02:35:56] [V] [TRT] Tactic: 1002 Time: 0.069632
[04/18/2022-02:35:56] [V] [TRT] Tactic: 0 Time: 0.094848
[04/18/2022-02:35:56] [V] [TRT] Fastest Tactic: 1002 Time: 0.069632
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(49152,1,3072,192) -> Float(49152,256,16,1) ***************
[04/18/2022-02:35:56] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/project_bn/FusedBatchNormV3:0) (Reformat)
[04/18/2022-02:35:56] [V] [TRT] Tactic: 1002 Time: 0.073088
[04/18/2022-02:35:56] [V] [TRT] Tactic: 0 Time: 0.109568
[04/18/2022-02:35:56] [V] [TRT] Fastest Tactic: 1002 Time: 0.073088
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(49152,1,3072,192) -> Float(12288,1:4,768,48) ***************
[04/18/2022-02:35:56] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/project_bn/FusedBatchNormV3:0) (Reformat)
[04/18/2022-02:35:56] [V] [TRT] Tactic: 1002 Time: 0.059264
[04/18/2022-02:35:56] [V] [TRT] Tactic: 0 Time: 0.07424
[04/18/2022-02:35:56] [V] [TRT] Fastest Tactic: 1002 Time: 0.059264
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(12288,1:4,768,48) -> Float(49152,256,16,1) ***************
[04/18/2022-02:35:56] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/project_bn/FusedBatchNormV3:0) (Reformat)
[04/18/2022-02:35:56] [V] [TRT] Tactic: 1002 Time: 0.076032
[04/18/2022-02:35:56] [V] [TRT] Tactic: 0 Time: 0.11008
[04/18/2022-02:35:56] [V] [TRT] Fastest Tactic: 1002 Time: 0.076032
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(12288,1:4,768,48) -> Float(49152,1,3072,192) ***************
[04/18/2022-02:35:56] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/project_bn/FusedBatchNormV3:0) (Reformat)
[04/18/2022-02:35:56] [V] [TRT] Tactic: 1002 Time: 0.058112
[04/18/2022-02:35:56] [V] [TRT] Tactic: 0 Time: 0.075392
[04/18/2022-02:35:56] [V] [TRT] Fastest Tactic: 1002 Time: 0.058112
[04/18/2022-02:35:56] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(49152,256,16,1) -> Float(49152,1,3072,192) ***************
[04/18/2022-02:35:56] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/project_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:56] [V] [TRT] Tactic: 1002 Time: 0.068224
[04/18/2022-02:35:56] [V] [TRT] Tactic: 0 Time: 0.095232
[04/18/2022-02:35:56] [V] [TRT] Fastest Tactic: 1002 Time: 0.068224
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(49152,256,16,1) -> Float(12288,1:4,768,48) ***************
[04/18/2022-02:35:56] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/project_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:56] [V] [TRT] Tactic: 1002 Time: 0.068736
[04/18/2022-02:35:56] [V] [TRT] Tactic: 0 Time: 0.095232
[04/18/2022-02:35:56] [V] [TRT] Fastest Tactic: 1002 Time: 0.068736
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(49152,1,3072,192) -> Float(49152,256,16,1) ***************
[04/18/2022-02:35:56] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/project_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:56] [V] [TRT] Tactic: 1002 Time: 0.084608
[04/18/2022-02:35:56] [V] [TRT] Tactic: 0 Time: 0.109824
[04/18/2022-02:35:56] [V] [TRT] Fastest Tactic: 1002 Time: 0.084608
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(49152,1,3072,192) -> Float(12288,1:4,768,48) ***************
[04/18/2022-02:35:56] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/project_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:56] [V] [TRT] Tactic: 1002 Time: 0.074112
[04/18/2022-02:35:56] [V] [TRT] Tactic: 0 Time: 0.07488
[04/18/2022-02:35:56] [V] [TRT] Fastest Tactic: 1002 Time: 0.074112
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(12288,1:4,768,48) -> Float(49152,256,16,1) ***************
[04/18/2022-02:35:56] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/project_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:56] [V] [TRT] Tactic: 1002 Time: 0.085248
[04/18/2022-02:35:56] [V] [TRT] Tactic: 0 Time: 0.110208
[04/18/2022-02:35:56] [V] [TRT] Fastest Tactic: 1002 Time: 0.085248
[04/18/2022-02:35:56] [V] [TRT] *************** Autotuning Reformat: Float(12288,1:4,768,48) -> Float(49152,1,3072,192) ***************
[04/18/2022-02:35:56] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_0/project_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:56] [V] [TRT] Tactic: 1002 Time: 0.057472
[04/18/2022-02:35:57] [V] [TRT] Tactic: 0 Time: 0.073728
[04/18/2022-02:35:57] [V] [TRT] Fastest Tactic: 1002 Time: 0.057472
[04/18/2022-02:35:57] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:57] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:35:57] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:57] [V] [TRT] Tactic: 1002 Time: 1.31162
[04/18/2022-02:35:57] [V] [TRT] Tactic: 0 Time: 1.28461
[04/18/2022-02:35:57] [V] [TRT] Fastest Tactic: 0 Time: 1.28461
[04/18/2022-02:35:57] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:35:57] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:57] [V] [TRT] Tactic: 1002 Time: 1.30752
[04/18/2022-02:35:57] [V] [TRT] Tactic: 0 Time: 1.38522
[04/18/2022-02:35:57] [V] [TRT] Fastest Tactic: 1002 Time: 1.30752
[04/18/2022-02:35:57] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(9216,256:32,16,1) ***************
[04/18/2022-02:35:57] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:57] [V] [TRT] Tactic: 1002 Time: 1.30061
[04/18/2022-02:35:57] [V] [TRT] Tactic: 0 Time: 1.21958
[04/18/2022-02:35:57] [V] [TRT] Fastest Tactic: 0 Time: 1.21958
[04/18/2022-02:35:57] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:35:57] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:57] [V] [TRT] Tactic: 1002 Time: 2.00576
[04/18/2022-02:35:57] [V] [TRT] Tactic: 0 Time: 1.36218
[04/18/2022-02:35:57] [V] [TRT] Fastest Tactic: 0 Time: 1.36218
[04/18/2022-02:35:57] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(294912,256,16,1) ***************
[04/18/2022-02:35:57] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:57] [V] [TRT] Tactic: 1002 Time: 1.30854
[04/18/2022-02:35:57] [V] [TRT] Tactic: 0 Time: 1.36883
[04/18/2022-02:35:57] [V] [TRT] Fastest Tactic: 1002 Time: 1.30854
[04/18/2022-02:35:57] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:35:57] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:57] [V] [TRT] Tactic: 1002 Time: 1.32224
[04/18/2022-02:35:57] [V] [TRT] Tactic: 0 Time: 1.26733
[04/18/2022-02:35:57] [V] [TRT] Fastest Tactic: 0 Time: 1.26733
[04/18/2022-02:35:57] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(9216,256:32,16,1) ***************
[04/18/2022-02:35:57] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:57] [V] [TRT] Tactic: 1002 Time: 1.31891
[04/18/2022-02:35:57] [V] [TRT] Tactic: 0 Time: 1.76819
[04/18/2022-02:35:57] [V] [TRT] Fastest Tactic: 1002 Time: 1.31891
[04/18/2022-02:35:57] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:35:57] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(294912,256,16,1) ***************
[04/18/2022-02:35:57] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:57] [V] [TRT] Tactic: 1002 Time: 2.48294
[04/18/2022-02:35:57] [V] [TRT] Tactic: 0 Time: 1.31776
[04/18/2022-02:35:57] [V] [TRT] Fastest Tactic: 0 Time: 1.31776
[04/18/2022-02:35:57] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:35:57] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:57] [V] [TRT] Tactic: 1002 Time: 1.82758
[04/18/2022-02:35:57] [V] [TRT] Tactic: 0 Time: 1.35066
[04/18/2022-02:35:57] [V] [TRT] Fastest Tactic: 0 Time: 1.35066
[04/18/2022-02:35:57] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(9216,256:32,16,1) ***************
[04/18/2022-02:35:57] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:57] [V] [TRT] Tactic: 1002 Time: 1.3271
[04/18/2022-02:35:57] [V] [TRT] Tactic: 0 Time: 1.75309
[04/18/2022-02:35:57] [V] [TRT] Fastest Tactic: 1002 Time: 1.3271
[04/18/2022-02:35:57] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:35:57] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(294912,256,16,1) ***************
[04/18/2022-02:35:57] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:57] [V] [TRT] Tactic: 1002 Time: 1.52678
[04/18/2022-02:35:57] [V] [TRT] Tactic: 0 Time: 1.28307
[04/18/2022-02:35:57] [V] [TRT] Fastest Tactic: 0 Time: 1.28307
[04/18/2022-02:35:57] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:35:57] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:57] [V] [TRT] Tactic: 1002 Time: 1.28794
[04/18/2022-02:35:57] [V] [TRT] Tactic: 0 Time: 1.78317
[04/18/2022-02:35:57] [V] [TRT] Fastest Tactic: 1002 Time: 1.28794
[04/18/2022-02:35:57] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:35:57] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:57] [V] [TRT] Tactic: 1002 Time: 1.99718
[04/18/2022-02:35:57] [V] [TRT] Tactic: 0 Time: 1.36742
[04/18/2022-02:35:57] [V] [TRT] Fastest Tactic: 0 Time: 1.36742
[04/18/2022-02:35:57] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:35:57] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(294912,256,16,1) ***************
[04/18/2022-02:35:57] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/expand_bn/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:35:58] [V] [TRT] Tactic: 1002 Time: 1.93037
[04/18/2022-02:35:58] [V] [TRT] Tactic: 0 Time: 5.69728
[04/18/2022-02:35:58] [V] [TRT] Fastest Tactic: 1002 Time: 1.93037
[04/18/2022-02:35:58] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:35:58] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:35:58] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(9216,256:32,16,1) ***************
[04/18/2022-02:35:58] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:58] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:35:58] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:35:58] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(294912,256,16,1) ***************
[04/18/2022-02:35:58] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:35:58] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(294912,256,16,1) ***************
[04/18/2022-02:35:58] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:35:58] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(294912,256,16,1) ***************
[04/18/2022-02:35:58] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:35:58] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:35:58] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(294912,256,16,1) ***************
[04/18/2022-02:35:58] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:35:58] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:35:58] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:58] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:35:58] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:35:58] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(9216,256:32,16,1) ***************
[04/18/2022-02:35:58] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:35:58] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(294912,256,16,1) ***************
[04/18/2022-02:35:58] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:35:58] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(9216,256:32,16,1) ***************
[04/18/2022-02:35:58] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:35:58] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(294912,256,16,1) ***************
[04/18/2022-02:35:58] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:35:58] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(9216,256:32,16,1) ***************
[04/18/2022-02:35:58] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:35:58] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(294912,256,16,1) ***************
[04/18/2022-02:35:58] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:35:58] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:35:58] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:35:58] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(294912,256,16,1) ***************
[04/18/2022-02:35:58] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:35:58] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:35:58] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(9216,256:32,16,1) ***************
[04/18/2022-02:35:58] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:58] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:35:58] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:58] [V] [TRT] Tactic: 1002 Time: 2.73254
[04/18/2022-02:35:58] [V] [TRT] Tactic: 0 Time: 1.3751
[04/18/2022-02:35:58] [V] [TRT] Fastest Tactic: 0 Time: 1.3751
[04/18/2022-02:35:58] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:35:58] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:58] [V] [TRT] Tactic: 1002 Time: 1.3097
[04/18/2022-02:35:58] [V] [TRT] Tactic: 0 Time: 1.37498
[04/18/2022-02:35:58] [V] [TRT] Fastest Tactic: 1002 Time: 1.3097
[04/18/2022-02:35:58] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(9216,256:32,16,1) ***************
[04/18/2022-02:35:58] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:58] [V] [TRT] Tactic: 1002 Time: 1.30893
[04/18/2022-02:35:58] [V] [TRT] Tactic: 0 Time: 1.30522
[04/18/2022-02:35:58] [V] [TRT] Fastest Tactic: 0 Time: 1.30522
[04/18/2022-02:35:58] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:35:58] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:58] [V] [TRT] Tactic: 1002 Time: 1.38394
[04/18/2022-02:35:58] [V] [TRT] Tactic: 0 Time: 1.35987
[04/18/2022-02:35:58] [V] [TRT] Fastest Tactic: 0 Time: 1.35987
[04/18/2022-02:35:58] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(294912,256,16,1) ***************
[04/18/2022-02:35:58] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:58] [V] [TRT] Tactic: 1002 Time: 2.01587
[04/18/2022-02:35:58] [V] [TRT] Tactic: 0 Time: 1.37037
[04/18/2022-02:35:58] [V] [TRT] Fastest Tactic: 0 Time: 1.37037
[04/18/2022-02:35:58] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:35:58] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:58] [V] [TRT] Tactic: 1002 Time: 1.23571
[04/18/2022-02:35:58] [V] [TRT] Tactic: 0 Time: 1.81248
[04/18/2022-02:35:58] [V] [TRT] Fastest Tactic: 1002 Time: 1.23571
[04/18/2022-02:35:58] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(9216,256:32,16,1) ***************
[04/18/2022-02:35:58] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:58] [V] [TRT] Tactic: 1002 Time: 1.3161
[04/18/2022-02:35:58] [V] [TRT] Tactic: 0 Time: 1.76896
[04/18/2022-02:35:58] [V] [TRT] Fastest Tactic: 1002 Time: 1.3161
[04/18/2022-02:35:58] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:35:58] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(294912,256,16,1) ***************
[04/18/2022-02:35:58] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:58] [V] [TRT] Tactic: 1002 Time: 1.21856
[04/18/2022-02:35:58] [V] [TRT] Tactic: 0 Time: 1.32403
[04/18/2022-02:35:58] [V] [TRT] Fastest Tactic: 1002 Time: 1.21856
[04/18/2022-02:35:58] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:35:58] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:58] [V] [TRT] Tactic: 1002 Time: 2.01741
[04/18/2022-02:35:58] [V] [TRT] Tactic: 0 Time: 1.3536
[04/18/2022-02:35:58] [V] [TRT] Fastest Tactic: 0 Time: 1.3536
[04/18/2022-02:35:58] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(9216,256:32,16,1) ***************
[04/18/2022-02:35:58] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:58] [V] [TRT] Tactic: 1002 Time: 1.22906
[04/18/2022-02:35:58] [V] [TRT] Tactic: 0 Time: 1.76794
[04/18/2022-02:35:58] [V] [TRT] Fastest Tactic: 1002 Time: 1.22906
[04/18/2022-02:35:58] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:35:58] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(294912,256,16,1) ***************
[04/18/2022-02:35:58] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:58] [V] [TRT] Tactic: 1002 Time: 1.81645
[04/18/2022-02:35:59] [V] [TRT] Tactic: 0 Time: 1.35578
[04/18/2022-02:35:59] [V] [TRT] Fastest Tactic: 0 Time: 1.35578
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:35:59] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:59] [V] [TRT] Tactic: 1002 Time: 1.23277
[04/18/2022-02:35:59] [V] [TRT] Tactic: 0 Time: 1.37382
[04/18/2022-02:35:59] [V] [TRT] Fastest Tactic: 1002 Time: 1.23277
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:35:59] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:59] [V] [TRT] Tactic: 1002 Time: 1.82477
[04/18/2022-02:35:59] [V] [TRT] Tactic: 0 Time: 1.34259
[04/18/2022-02:35:59] [V] [TRT] Fastest Tactic: 0 Time: 1.34259
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(294912,256,16,1) ***************
[04/18/2022-02:35:59] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/depthwise_activation/mul:0) (Reformat)
[04/18/2022-02:35:59] [V] [TRT] Tactic: 1002 Time: 3.24864
[04/18/2022-02:35:59] [V] [TRT] Tactic: 0 Time: 6.13146
[04/18/2022-02:35:59] [V] [TRT] Fastest Tactic: 1002 Time: 3.24864
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(9216,256:32,16,1) ***************
[04/18/2022-02:35:59] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(294912,256,16,1) ***************
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(294912,256,16,1) ***************
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(294912,256,16,1) ***************
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(294912,256,16,1) ***************
[04/18/2022-02:35:59] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1,1) -> Float(1152,1,1152,1152) ***************
[04/18/2022-02:35:59] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_squeeze/Mean:0) (Reformat)
[04/18/2022-02:35:59] [V] [TRT] Tactic: 1002 Time: 0.025088
[04/18/2022-02:35:59] [V] [TRT] Tactic: 0 Time: 0.01792
[04/18/2022-02:35:59] [V] [TRT] Fastest Tactic: 0 Time: 0.01792
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1,1) -> Float(288,1:4,288,288) ***************
[04/18/2022-02:35:59] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_squeeze/Mean:0) (Reformat)
[04/18/2022-02:35:59] [V] [TRT] Tactic: 1002 Time: 0.04352
[04/18/2022-02:35:59] [V] [TRT] Tactic: 0 Time: 0.01856
[04/18/2022-02:35:59] [V] [TRT] Fastest Tactic: 0 Time: 0.01856
[04/18/2022-02:35:59] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1,1) -> Float(1152,1,1152,1152) ***************
[04/18/2022-02:35:59] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_squeeze/Mean:0 -> ) (Reformat)
[04/18/2022-02:35:59] [V] [TRT] Tactic: 1002 Time: 0.025088
[04/18/2022-02:35:59] [V] [TRT] Tactic: 0 Time: 0.018048
[04/18/2022-02:35:59] [V] [TRT] Fastest Tactic: 0 Time: 0.018048
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1,1) -> Float(288,1:4,288,288) ***************
[04/18/2022-02:35:59] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_squeeze/Mean:0 -> ) (Reformat)
[04/18/2022-02:35:59] [V] [TRT] Tactic: 1002 Time: 0.044416
[04/18/2022-02:35:59] [V] [TRT] Tactic: 0 Time: 0.018304
[04/18/2022-02:35:59] [V] [TRT] Fastest Tactic: 0 Time: 0.018304
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1152,1152) -> Float(1152,1,1,1) ***************
[04/18/2022-02:35:59] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_squeeze/Mean:0 -> ) (Reformat)
[04/18/2022-02:35:59] [V] [TRT] Tactic: 1002 Time: 0.0256
[04/18/2022-02:35:59] [V] [TRT] Tactic: 0 Time: 0.017792
[04/18/2022-02:35:59] [V] [TRT] Fastest Tactic: 0 Time: 0.017792
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1152,1152) -> Float(288,1:4,288,288) ***************
[04/18/2022-02:35:59] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_squeeze/Mean:0 -> ) (Reformat)
[04/18/2022-02:35:59] [V] [TRT] Tactic: 1002 Time: 0.043136
[04/18/2022-02:35:59] [V] [TRT] Tactic: 0 Time: 0.018048
[04/18/2022-02:35:59] [V] [TRT] Fastest Tactic: 0 Time: 0.018048
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(288,1:4,288,288) -> Float(1152,1,1,1) ***************
[04/18/2022-02:35:59] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_squeeze/Mean:0 -> ) (Reformat)
[04/18/2022-02:35:59] [V] [TRT] Tactic: 1002 Time: 0.08192
[04/18/2022-02:35:59] [V] [TRT] Tactic: 0 Time: 0.01792
[04/18/2022-02:35:59] [V] [TRT] Fastest Tactic: 0 Time: 0.01792
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(288,1:4,288,288) -> Float(1152,1,1152,1152) ***************
[04/18/2022-02:35:59] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_squeeze/Mean:0 -> ) (Reformat)
[04/18/2022-02:35:59] [V] [TRT] Tactic: 1002 Time: 0.043136
[04/18/2022-02:35:59] [V] [TRT] Tactic: 0 Time: 0.018432
[04/18/2022-02:35:59] [V] [TRT] Fastest Tactic: 0 Time: 0.018432
[04/18/2022-02:35:59] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(48,1,1,1) -> Float(48,1,48,48) ***************
[04/18/2022-02:35:59] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:59] [V] [TRT] Tactic: 1002 Time: 0.044416
[04/18/2022-02:35:59] [V] [TRT] Tactic: 0 Time: 0.017408
[04/18/2022-02:35:59] [V] [TRT] Fastest Tactic: 0 Time: 0.017408
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(48,1,1,1) -> Float(12,1:4,12,12) ***************
[04/18/2022-02:35:59] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:59] [V] [TRT] Tactic: 1002 Time: 0.044672
[04/18/2022-02:35:59] [V] [TRT] Tactic: 0 Time: 0.017536
[04/18/2022-02:35:59] [V] [TRT] Fastest Tactic: 0 Time: 0.017536
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(48,1,1,1) -> Float(2,1:32,1,1) ***************
[04/18/2022-02:35:59] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:59] [V] [TRT] Tactic: 1002 Time: 0.045312
[04/18/2022-02:35:59] [V] [TRT] Tactic: 0 Time: 0.017792
[04/18/2022-02:35:59] [V] [TRT] Fastest Tactic: 0 Time: 0.017792
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(48,1,1,1) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:59] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:59] [V] [TRT] Tactic: 1002 Time: 0.029568
[04/18/2022-02:35:59] [V] [TRT] Tactic: 0 Time: 0.017408
[04/18/2022-02:35:59] [V] [TRT] Fastest Tactic: 0 Time: 0.017408
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(48,1,48,48) -> Float(48,1,1,1) ***************
[04/18/2022-02:35:59] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:59] [V] [TRT] Tactic: 1002 Time: 0.045696
[04/18/2022-02:35:59] [V] [TRT] Tactic: 0 Time: 0.017536
[04/18/2022-02:35:59] [V] [TRT] Fastest Tactic: 0 Time: 0.017536
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(48,1,48,48) -> Float(12,1:4,12,12) ***************
[04/18/2022-02:35:59] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:59] [V] [TRT] Tactic: 1002 Time: 0.044416
[04/18/2022-02:35:59] [V] [TRT] Tactic: 0 Time: 0.017664
[04/18/2022-02:35:59] [V] [TRT] Fastest Tactic: 0 Time: 0.017664
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(48,1,48,48) -> Float(2,1:32,1,1) ***************
[04/18/2022-02:35:59] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:59] [V] [TRT] Tactic: 1002 Time: 0.047104
[04/18/2022-02:35:59] [V] [TRT] Tactic: 0 Time: 0.01792
[04/18/2022-02:35:59] [V] [TRT] Fastest Tactic: 0 Time: 0.01792
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(48,1,48,48) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(12,1:4,12,12) -> Float(48,1,1,1) ***************
[04/18/2022-02:35:59] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:59] [V] [TRT] Tactic: 1002 Time: 0.044416
[04/18/2022-02:35:59] [V] [TRT] Tactic: 0 Time: 0.017408
[04/18/2022-02:35:59] [V] [TRT] Fastest Tactic: 0 Time: 0.017408
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(12,1:4,12,12) -> Float(48,1,48,48) ***************
[04/18/2022-02:35:59] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:59] [V] [TRT] Tactic: 1002 Time: 0.044672
[04/18/2022-02:35:59] [V] [TRT] Tactic: 0 Time: 0.017408
[04/18/2022-02:35:59] [V] [TRT] Fastest Tactic: 0 Time: 0.017408
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(12,1:4,12,12) -> Float(2,1:32,1,1) ***************
[04/18/2022-02:35:59] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:59] [V] [TRT] Tactic: 1002 Time: 0.045184
[04/18/2022-02:35:59] [V] [TRT] Tactic: 0 Time: 0.017792
[04/18/2022-02:35:59] [V] [TRT] Fastest Tactic: 0 Time: 0.017792
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(12,1:4,12,12) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(2,1:32,1,1) -> Float(48,1,1,1) ***************
[04/18/2022-02:35:59] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:59] [V] [TRT] Tactic: 1002 Time: 0.044416
[04/18/2022-02:35:59] [V] [TRT] Tactic: 0 Time: 0.018048
[04/18/2022-02:35:59] [V] [TRT] Fastest Tactic: 0 Time: 0.018048
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(2,1:32,1,1) -> Float(48,1,48,48) ***************
[04/18/2022-02:35:59] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:59] [V] [TRT] Tactic: 1002 Time: 0.046336
[04/18/2022-02:35:59] [V] [TRT] Tactic: 0 Time: 0.017408
[04/18/2022-02:35:59] [V] [TRT] Fastest Tactic: 0 Time: 0.017408
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(2,1:32,1,1) -> Float(12,1:4,12,12) ***************
[04/18/2022-02:35:59] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:59] [V] [TRT] Tactic: 1002 Time: 0.045568
[04/18/2022-02:35:59] [V] [TRT] Tactic: 0 Time: 0.017536
[04/18/2022-02:35:59] [V] [TRT] Fastest Tactic: 0 Time: 0.017536
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(2,1:32,1,1) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(48,1,1,1) ***************
[04/18/2022-02:35:59] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_reduce_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:59] [V] [TRT] Tactic: 1002 Time: 0.053376
[04/18/2022-02:35:59] [V] [TRT] Tactic: 0 Time: 0.017664
[04/18/2022-02:35:59] [V] [TRT] Fastest Tactic: 0 Time: 0.017664
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(48,1,48,48) ***************
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(12,1:4,12,12) ***************
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(2,1:32,1,1) ***************
[04/18/2022-02:35:59] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(48,1,1,1) -> Float(48,1,48,48) ***************
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(48,1,1,1) -> Float(12,1:4,12,12) ***************
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(48,1,48,48) -> Float(48,1,1,1) ***************
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(48,1,48,48) -> Float(12,1:4,12,12) ***************
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(12,1:4,12,12) -> Float(48,1,1,1) ***************
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(12,1:4,12,12) -> Float(48,1,48,48) ***************
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(2,1:32,1,1) -> Float(48,1,1,1) ***************
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(2,1:32,1,1) -> Float(48,1,48,48) ***************
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(2,1:32,1,1) -> Float(12,1:4,12,12) ***************
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(48,1,1,1) ***************
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(48,1,48,48) ***************
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(12,1:4,12,12) ***************
[04/18/2022-02:35:59] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1,1) -> Float(1152,1,1152,1152) ***************
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1,1) -> Float(288,1:4,288,288) ***************
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1,1) -> Float(36,1:32,1,1) ***************
[04/18/2022-02:35:59] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_expand_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:59] [V] [TRT] Tactic: 1002 Time: 0.081536
[04/18/2022-02:35:59] [V] [TRT] Tactic: 0 Time: 0.018048
[04/18/2022-02:35:59] [V] [TRT] Fastest Tactic: 0 Time: 0.018048
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1,1) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:59] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_expand_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:59] [V] [TRT] Tactic: 1002 Time: 0.029952
[04/18/2022-02:35:59] [V] [TRT] Tactic: 0 Time: 0.018304
[04/18/2022-02:35:59] [V] [TRT] Fastest Tactic: 0 Time: 0.018304
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1152,1152) -> Float(1152,1,1,1) ***************
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1152,1152) -> Float(288,1:4,288,288) ***************
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1152,1152) -> Float(36,1:32,1,1) ***************
[04/18/2022-02:35:59] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_expand_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:59] [V] [TRT] Tactic: 1002 Time: 0.082688
[04/18/2022-02:35:59] [V] [TRT] Tactic: 0 Time: 0.018048
[04/18/2022-02:35:59] [V] [TRT] Fastest Tactic: 0 Time: 0.018048
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1152,1152) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(288,1:4,288,288) -> Float(1152,1,1,1) ***************
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(288,1:4,288,288) -> Float(1152,1,1152,1152) ***************
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(288,1:4,288,288) -> Float(36,1:32,1,1) ***************
[04/18/2022-02:35:59] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_expand_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:59] [V] [TRT] Tactic: 1002 Time: 0.083328
[04/18/2022-02:35:59] [V] [TRT] Tactic: 0 Time: 0.018304
[04/18/2022-02:35:59] [V] [TRT] Fastest Tactic: 0 Time: 0.018304
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(288,1:4,288,288) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(36,1:32,1,1) -> Float(1152,1,1,1) ***************
[04/18/2022-02:35:59] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_expand_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:59] [V] [TRT] Tactic: 1002 Time: 0.082432
[04/18/2022-02:35:59] [V] [TRT] Tactic: 0 Time: 0.018304
[04/18/2022-02:35:59] [V] [TRT] Fastest Tactic: 0 Time: 0.018304
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(36,1:32,1,1) -> Float(1152,1,1152,1152) ***************
[04/18/2022-02:35:59] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_expand_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:59] [V] [TRT] Tactic: 1002 Time: 0.044288
[04/18/2022-02:35:59] [V] [TRT] Tactic: 0 Time: 0.018432
[04/18/2022-02:35:59] [V] [TRT] Fastest Tactic: 0 Time: 0.018432
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(36,1:32,1,1) -> Float(288,1:4,288,288) ***************
[04/18/2022-02:35:59] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_expand_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:59] [V] [TRT] Tactic: 1002 Time: 0.043008
[04/18/2022-02:35:59] [V] [TRT] Tactic: 0 Time: 0.018432
[04/18/2022-02:35:59] [V] [TRT] Fastest Tactic: 0 Time: 0.018432
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(36,1:32,1,1) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(1152,1,1,1) ***************
[04/18/2022-02:35:59] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_5/block_1/se_expand_conv2d/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:35:59] [V] [TRT] Tactic: 1002 Time: 0.05312
[04/18/2022-02:35:59] [V] [TRT] Tactic: 0 Time: 0.019456
[04/18/2022-02:35:59] [V] [TRT] Fastest Tactic: 0 Time: 0.019456
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(1152,1,1152,1152) ***************
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(288,1:4,288,288) ***************
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(36,1:32,1,1) ***************
[04/18/2022-02:35:59] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(9216,256:32,16,1) ***************
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(294912,256,16,1) ***************
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(9216,256:32,16,1) ***************
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(294912,256,16,1) ***************
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(9216,256:32,16,1) ***************
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> 
Float(294912,256,16,1) *************** [04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(294912,1,18432,1152) *************** [04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(73728,1:4,4608,288) *************** [04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(1:4,512,32,2) *************** [04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(294912,256,16,1) *************** [04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(294912,1,18432,1152) *************** [04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(73728,1:4,4608,288) *************** [04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(9216,256:32,16,1) *************** [04/18/2022-02:35:59] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(294912,1,18432,1152) *************** [04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(73728,1:4,4608,288) *************** [04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(294912,256,16,1) *************** [04/18/2022-02:35:59] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(73728,1:4,4608,288) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(294912,256,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(294912,1,18432,1152) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> 
Float(294912,256,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(294912,1,18432,1152) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(73728,1:4,4608,288) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(294912,256,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(294912,1,18432,1152) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(73728,1:4,4608,288) *************** [04/18/2022-02:36:00] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(49152,256,16,1) -> Float(49152,1,3072,192) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(49152,256,16,1) -> Float(12288,1:4,768,48) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(49152,1,3072,192) -> Float(49152,256,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(49152,1,3072,192) -> Float(12288,1:4,768,48) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(12288,1:4,768,48) -> Float(49152,256,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(12288,1:4,768,48) -> Float(49152,1,3072,192) *************** [04/18/2022-02:36:00] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(49152,256,16,1) -> Float(49152,1,3072,192) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(49152,256,16,1) -> Float(12288,1:4,768,48) *************** [04/18/2022-02:36:00] [V] [TRT] *************** 
Autotuning Reformat: Float(49152,1,3072,192) -> Float(49152,256,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(49152,1,3072,192) -> Float(12288,1:4,768,48) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(12288,1:4,768,48) -> Float(49152,256,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(12288,1:4,768,48) -> Float(49152,1,3072,192) *************** [04/18/2022-02:36:00] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(49152,256,16,1) -> Float(49152,1,3072,192) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(49152,256,16,1) -> Float(12288,1:4,768,48) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(49152,1,3072,192) -> Float(49152,256,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(49152,1,3072,192) -> Float(12288,1:4,768,48) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(12288,1:4,768,48) -> Float(49152,256,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(12288,1:4,768,48) -> Float(49152,1,3072,192) *************** [04/18/2022-02:36:00] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(294912,1,18432,1152) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(73728,1:4,4608,288) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(9216,256:32,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(1:4,512,32,2) *************** 
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(294912,256,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(73728,1:4,4608,288) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(9216,256:32,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(1:4,512,32,2) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(294912,256,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(294912,1,18432,1152) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(9216,256:32,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(1:4,512,32,2) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(294912,256,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(294912,1,18432,1152) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(73728,1:4,4608,288) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(1:4,512,32,2) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(294912,256,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(294912,1,18432,1152) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: 
Float(1:4,512,32,2) -> Float(73728,1:4,4608,288) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(9216,256:32,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(294912,1,18432,1152) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(73728,1:4,4608,288) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(294912,256,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(73728,1:4,4608,288) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(294912,256,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(294912,1,18432,1152) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(294912,256,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(294912,1,18432,1152) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(73728,1:4,4608,288) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(294912,256,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(294912,1,18432,1152) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(73728,1:4,4608,288) *************** [04/18/2022-02:36:00] [V] [TRT] =============== Computing reformatting costs 
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(294912,1,18432,1152) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(73728,1:4,4608,288) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(9216,256:32,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(1:4,512,32,2) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(294912,256,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(73728,1:4,4608,288) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(9216,256:32,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(1:4,512,32,2) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(294912,256,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(294912,1,18432,1152) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(9216,256:32,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(1:4,512,32,2) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(294912,256,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(294912,1,18432,1152) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: 
Float(9216,256:32,16,1) -> Float(73728,1:4,4608,288) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(1:4,512,32,2) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(294912,256,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(294912,1,18432,1152) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(73728,1:4,4608,288) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(9216,256:32,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(294912,1,18432,1152) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(73728,1:4,4608,288) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(9216,256:32,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(1:4,512,32,2) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(294912,256,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(73728,1:4,4608,288) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(9216,256:32,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(1:4,512,32,2) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) 
-> Float(294912,256,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(294912,1,18432,1152) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(9216,256:32,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(1:4,512,32,2) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(294912,256,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(294912,1,18432,1152) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(73728,1:4,4608,288) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(1:4,512,32,2) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(294912,256,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(294912,1,18432,1152) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(73728,1:4,4608,288) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(9216,256:32,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(294912,256,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(294912,256,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> 
Float(294912,256,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(294912,256,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1,1) -> Float(1152,1,1152,1152) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1,1) -> Float(288,1:4,288,288) *************** [04/18/2022-02:36:00] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1,1) -> Float(1152,1,1152,1152) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1,1) -> Float(288,1:4,288,288) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1152,1152) -> Float(1152,1,1,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1152,1152) -> Float(288,1:4,288,288) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(288,1:4,288,288) -> Float(1152,1,1,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(288,1:4,288,288) -> Float(1152,1,1152,1152) *************** [04/18/2022-02:36:00] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(48,1,1,1) -> Float(48,1,48,48) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(48,1,1,1) -> Float(12,1:4,12,12) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(48,1,1,1) -> Float(2,1:32,1,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(48,1,1,1) -> Float(1:4,2,2,2) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning 
Reformat: Float(48,1,48,48) -> Float(48,1,1,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(48,1,48,48) -> Float(12,1:4,12,12) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(48,1,48,48) -> Float(2,1:32,1,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(48,1,48,48) -> Float(1:4,2,2,2) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(12,1:4,12,12) -> Float(48,1,1,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(12,1:4,12,12) -> Float(48,1,48,48) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(12,1:4,12,12) -> Float(2,1:32,1,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(12,1:4,12,12) -> Float(1:4,2,2,2) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(2,1:32,1,1) -> Float(48,1,1,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(2,1:32,1,1) -> Float(48,1,48,48) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(2,1:32,1,1) -> Float(12,1:4,12,12) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(2,1:32,1,1) -> Float(1:4,2,2,2) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(48,1,1,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(48,1,48,48) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(12,1:4,12,12) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(2,1:32,1,1) *************** [04/18/2022-02:36:00] [V] [TRT] =============== Computing reformatting 
costs [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(48,1,1,1) -> Float(48,1,48,48) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(48,1,1,1) -> Float(12,1:4,12,12) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(48,1,48,48) -> Float(48,1,1,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(48,1,48,48) -> Float(12,1:4,12,12) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(12,1:4,12,12) -> Float(48,1,1,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(12,1:4,12,12) -> Float(48,1,48,48) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(2,1:32,1,1) -> Float(48,1,1,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(2,1:32,1,1) -> Float(48,1,48,48) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(2,1:32,1,1) -> Float(12,1:4,12,12) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(48,1,1,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(48,1,48,48) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(12,1:4,12,12) *************** [04/18/2022-02:36:00] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1,1) -> Float(1152,1,1152,1152) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1,1) -> Float(288,1:4,288,288) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1,1) -> Float(36,1:32,1,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** 
Autotuning Reformat: Float(1152,1,1,1) -> Float(1:4,2,2,2) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1152,1152) -> Float(1152,1,1,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1152,1152) -> Float(288,1:4,288,288) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1152,1152) -> Float(36,1:32,1,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1152,1152) -> Float(1:4,2,2,2) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(288,1:4,288,288) -> Float(1152,1,1,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(288,1:4,288,288) -> Float(1152,1,1152,1152) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(288,1:4,288,288) -> Float(36,1:32,1,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(288,1:4,288,288) -> Float(1:4,2,2,2) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(36,1:32,1,1) -> Float(1152,1,1,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(36,1:32,1,1) -> Float(1152,1,1152,1152) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(36,1:32,1,1) -> Float(288,1:4,288,288) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(36,1:32,1,1) -> Float(1:4,2,2,2) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(1152,1,1,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(1152,1,1152,1152) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(288,1:4,288,288) 
*************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(36,1:32,1,1) *************** [04/18/2022-02:36:00] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(294912,1,18432,1152) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(73728,1:4,4608,288) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(9216,256:32,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(1:4,512,32,2) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(294912,256,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(73728,1:4,4608,288) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(9216,256:32,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(1:4,512,32,2) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(294912,256,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(294912,1,18432,1152) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(9216,256:32,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(1:4,512,32,2) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(294912,256,16,1) *************** 
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(294912,1,18432,1152) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(73728,1:4,4608,288) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(1:4,512,32,2) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(294912,256,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(294912,1,18432,1152) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(73728,1:4,4608,288) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(9216,256:32,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(294912,1,18432,1152) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(73728,1:4,4608,288) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(294912,256,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(73728,1:4,4608,288) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(294912,256,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(294912,1,18432,1152) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(294912,256,16,1) *************** 
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(294912,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:36:00] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(49152,256,16,1) -> Float(49152,1,3072,192) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(49152,256,16,1) -> Float(12288,1:4,768,48) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(49152,1,3072,192) -> Float(49152,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(49152,1,3072,192) -> Float(12288,1:4,768,48) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(12288,1:4,768,48) -> Float(49152,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(12288,1:4,768,48) -> Float(49152,1,3072,192) ***************
[04/18/2022-02:36:00] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(49152,256,16,1) -> Float(49152,1,3072,192) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(49152,256,16,1) -> Float(12288,1:4,768,48) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(49152,1,3072,192) -> Float(49152,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(49152,1,3072,192) -> Float(12288,1:4,768,48) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(12288,1:4,768,48) -> Float(49152,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(12288,1:4,768,48) -> Float(49152,1,3072,192) ***************
[04/18/2022-02:36:00] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(49152,256,16,1) -> Float(49152,1,3072,192) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(49152,256,16,1) -> Float(12288,1:4,768,48) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(49152,1,3072,192) -> Float(49152,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(49152,1,3072,192) -> Float(12288,1:4,768,48) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(12288,1:4,768,48) -> Float(49152,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(12288,1:4,768,48) -> Float(49152,1,3072,192) ***************
[04/18/2022-02:36:00] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(9216,256:32,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(294912,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(9216,256:32,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(294912,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(9216,256:32,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(294912,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(294912,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(9216,256:32,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(294912,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(294912,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(294912,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(294912,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:36:00] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(9216,256:32,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(294912,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(9216,256:32,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(294912,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(9216,256:32,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(294912,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(294912,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(9216,256:32,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(9216,256:32,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(294912,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(9216,256:32,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(294912,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(9216,256:32,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(294912,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(294912,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(9216,256:32,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(294912,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(294912,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(294912,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(294912,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1,1) -> Float(1152,1,1152,1152) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1,1) -> Float(288,1:4,288,288) ***************
[04/18/2022-02:36:00] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1,1) -> Float(1152,1,1152,1152) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1,1) -> Float(288,1:4,288,288) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1152,1152) -> Float(1152,1,1,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1152,1152) -> Float(288,1:4,288,288) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(288,1:4,288,288) -> Float(1152,1,1,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(288,1:4,288,288) -> Float(1152,1,1152,1152) ***************
[04/18/2022-02:36:00] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(48,1,1,1) -> Float(48,1,48,48) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(48,1,1,1) -> Float(12,1:4,12,12) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(48,1,1,1) -> Float(2,1:32,1,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(48,1,1,1) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(48,1,48,48) -> Float(48,1,1,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(48,1,48,48) -> Float(12,1:4,12,12) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(48,1,48,48) -> Float(2,1:32,1,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(48,1,48,48) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(12,1:4,12,12) -> Float(48,1,1,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(12,1:4,12,12) -> Float(48,1,48,48) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(12,1:4,12,12) -> Float(2,1:32,1,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(12,1:4,12,12) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(2,1:32,1,1) -> Float(48,1,1,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(2,1:32,1,1) -> Float(48,1,48,48) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(2,1:32,1,1) -> Float(12,1:4,12,12) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(2,1:32,1,1) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(48,1,1,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(48,1,48,48) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(12,1:4,12,12) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(2,1:32,1,1) ***************
[04/18/2022-02:36:00] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(48,1,1,1) -> Float(48,1,48,48) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(48,1,1,1) -> Float(12,1:4,12,12) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(48,1,48,48) -> Float(48,1,1,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(48,1,48,48) -> Float(12,1:4,12,12) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(12,1:4,12,12) -> Float(48,1,1,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(12,1:4,12,12) -> Float(48,1,48,48) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(2,1:32,1,1) -> Float(48,1,1,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(2,1:32,1,1) -> Float(48,1,48,48) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(2,1:32,1,1) -> Float(12,1:4,12,12) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(48,1,1,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(48,1,48,48) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(12,1:4,12,12) ***************
[04/18/2022-02:36:00] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1,1) -> Float(1152,1,1152,1152) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1,1) -> Float(288,1:4,288,288) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1,1) -> Float(36,1:32,1,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1,1) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1152,1152) -> Float(1152,1,1,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1152,1152) -> Float(288,1:4,288,288) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1152,1152) -> Float(36,1:32,1,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1152,1152) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(288,1:4,288,288) -> Float(1152,1,1,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(288,1:4,288,288) -> Float(1152,1,1152,1152) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(288,1:4,288,288) -> Float(36,1:32,1,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(288,1:4,288,288) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(36,1:32,1,1) -> Float(1152,1,1,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(36,1:32,1,1) -> Float(1152,1,1152,1152) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(36,1:32,1,1) -> Float(288,1:4,288,288) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(36,1:32,1,1) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(1152,1,1,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(1152,1,1152,1152) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(288,1:4,288,288) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(36,1:32,1,1) ***************
[04/18/2022-02:36:00] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(9216,256:32,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(294912,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(9216,256:32,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(294912,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(9216,256:32,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(294912,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(294912,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(9216,256:32,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(294912,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(294912,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(294912,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(294912,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:36:00] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(49152,256,16,1) -> Float(49152,1,3072,192) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(49152,256,16,1) -> Float(12288,1:4,768,48) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(49152,1,3072,192) -> Float(49152,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(49152,1,3072,192) -> Float(12288,1:4,768,48) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(12288,1:4,768,48) -> Float(49152,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(12288,1:4,768,48) -> Float(49152,1,3072,192) ***************
[04/18/2022-02:36:00] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(49152,256,16,1) -> Float(49152,1,3072,192) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(49152,256,16,1) -> Float(12288,1:4,768,48) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(49152,1,3072,192) -> Float(49152,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(49152,1,3072,192) -> Float(12288,1:4,768,48) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(12288,1:4,768,48) -> Float(49152,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(12288,1:4,768,48) -> Float(49152,1,3072,192) ***************
[04/18/2022-02:36:00] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(9216,256:32,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(294912,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(9216,256:32,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(294912,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(9216,256:32,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(294912,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(294912,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(9216,256:32,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(294912,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(294912,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(294912,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(294912,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:36:00] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(9216,256:32,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(294912,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(9216,256:32,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(294912,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(9216,256:32,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(294912,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(294912,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(9216,256:32,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(9216,256:32,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(294912,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(9216,256:32,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(294912,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(9216,256:32,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(294912,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(294912,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(294912,1,18432,1152) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(73728,1:4,4608,288) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(9216,256:32,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(294912,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(294912,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(294912,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(294912,256,16,1) ***************
[04/18/2022-02:36:00] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1,1) -> Float(1152,1,1152,1152) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1,1) -> Float(288,1:4,288,288) ***************
[04/18/2022-02:36:00] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1,1) -> Float(1152,1,1152,1152) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1,1) -> Float(288,1:4,288,288) ***************
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1152,1152) -> Float(1152,1,1,1) ***************
[04/18/2022-02:36:00] [V] [TRT] ***************
Autotuning Reformat: Float(1152,1,1152,1152) -> Float(288,1:4,288,288) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(288,1:4,288,288) -> Float(1152,1,1,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(288,1:4,288,288) -> Float(1152,1,1152,1152) *************** [04/18/2022-02:36:00] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(48,1,1,1) -> Float(48,1,48,48) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(48,1,1,1) -> Float(12,1:4,12,12) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(48,1,1,1) -> Float(2,1:32,1,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(48,1,1,1) -> Float(1:4,2,2,2) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(48,1,48,48) -> Float(48,1,1,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(48,1,48,48) -> Float(12,1:4,12,12) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(48,1,48,48) -> Float(2,1:32,1,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(48,1,48,48) -> Float(1:4,2,2,2) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(12,1:4,12,12) -> Float(48,1,1,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(12,1:4,12,12) -> Float(48,1,48,48) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(12,1:4,12,12) -> Float(2,1:32,1,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(12,1:4,12,12) -> Float(1:4,2,2,2) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(2,1:32,1,1) 
-> Float(48,1,1,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(2,1:32,1,1) -> Float(48,1,48,48) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(2,1:32,1,1) -> Float(12,1:4,12,12) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(2,1:32,1,1) -> Float(1:4,2,2,2) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(48,1,1,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(48,1,48,48) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(12,1:4,12,12) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(2,1:32,1,1) *************** [04/18/2022-02:36:00] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(48,1,1,1) -> Float(48,1,48,48) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(48,1,1,1) -> Float(12,1:4,12,12) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(48,1,48,48) -> Float(48,1,1,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(48,1,48,48) -> Float(12,1:4,12,12) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(12,1:4,12,12) -> Float(48,1,1,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(12,1:4,12,12) -> Float(48,1,48,48) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(2,1:32,1,1) -> Float(48,1,1,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(2,1:32,1,1) -> Float(48,1,48,48) *************** [04/18/2022-02:36:00] [V] [TRT] 
*************** Autotuning Reformat: Float(2,1:32,1,1) -> Float(12,1:4,12,12) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(48,1,1,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(48,1,48,48) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(12,1:4,12,12) *************** [04/18/2022-02:36:00] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1,1) -> Float(1152,1,1152,1152) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1,1) -> Float(288,1:4,288,288) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1,1) -> Float(36,1:32,1,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1,1) -> Float(1:4,2,2,2) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1152,1152) -> Float(1152,1,1,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1152,1152) -> Float(288,1:4,288,288) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1152,1152) -> Float(36,1:32,1,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1152,1,1152,1152) -> Float(1:4,2,2,2) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(288,1:4,288,288) -> Float(1152,1,1,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(288,1:4,288,288) -> Float(1152,1,1152,1152) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(288,1:4,288,288) -> Float(36,1:32,1,1) *************** [04/18/2022-02:36:00] [V] [TRT] 
*************** Autotuning Reformat: Float(288,1:4,288,288) -> Float(1:4,2,2,2) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(36,1:32,1,1) -> Float(1152,1,1,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(36,1:32,1,1) -> Float(1152,1,1152,1152) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(36,1:32,1,1) -> Float(288,1:4,288,288) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(36,1:32,1,1) -> Float(1:4,2,2,2) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(1152,1,1,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(1152,1,1152,1152) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(288,1:4,288,288) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2,2,2) -> Float(36,1:32,1,1) *************** [04/18/2022-02:36:00] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(294912,1,18432,1152) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(73728,1:4,4608,288) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(9216,256:32,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(1:4,512,32,2) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(294912,256,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(73728,1:4,4608,288) 
*************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(9216,256:32,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(1:4,512,32,2) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(294912,256,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(294912,1,18432,1152) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(9216,256:32,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(1:4,512,32,2) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(294912,256,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(294912,1,18432,1152) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(73728,1:4,4608,288) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(1:4,512,32,2) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(294912,256,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(294912,1,18432,1152) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(73728,1:4,4608,288) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(9216,256:32,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] =============== Computing reformatting costs 
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(294912,1,18432,1152) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,256,16,1) -> Float(73728,1:4,4608,288) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(294912,256,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(294912,1,18432,1152) -> Float(73728,1:4,4608,288) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(294912,256,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(73728,1:4,4608,288) -> Float(294912,1,18432,1152) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(294912,256,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(294912,1,18432,1152) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(9216,256:32,16,1) -> Float(73728,1:4,4608,288) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(294912,256,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(294912,1,18432,1152) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(73728,1:4,4608,288) *************** [04/18/2022-02:36:00] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(81920,256,16,1) -> Float(81920,1,5120,320) *************** [04/18/2022-02:36:00] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/project_bn/FusedBatchNormV3:0) (Reformat) [04/18/2022-02:36:00] [V] [TRT] Tactic: 1002 Time: 0.275456 [04/18/2022-02:36:00] [V] [TRT] Tactic: 0 Time: 0.354432 [04/18/2022-02:36:00] [V] [TRT] Fastest Tactic: 1002 Time: 0.275456 [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(81920,256,16,1) -> Float(20480,1:4,1280,80) *************** [04/18/2022-02:36:00] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/project_bn/FusedBatchNormV3:0) (Reformat) [04/18/2022-02:36:00] [V] [TRT] Tactic: 1002 Time: 0.263936 [04/18/2022-02:36:00] [V] [TRT] Tactic: 0 Time: 0.36416 [04/18/2022-02:36:00] [V] [TRT] Fastest Tactic: 1002 Time: 0.263936 [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(81920,1,5120,320) -> Float(81920,256,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/project_bn/FusedBatchNormV3:0) (Reformat) [04/18/2022-02:36:00] [V] [TRT] Tactic: 1002 Time: 0.270592 [04/18/2022-02:36:00] [V] [TRT] Tactic: 0 Time: 0.369536 [04/18/2022-02:36:00] [V] [TRT] Fastest Tactic: 1002 Time: 0.270592 [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(81920,1,5120,320) -> Float(20480,1:4,1280,80) *************** [04/18/2022-02:36:00] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/project_bn/FusedBatchNormV3:0) (Reformat) [04/18/2022-02:36:00] [V] [TRT] Tactic: 1002 Time: 0.270464 [04/18/2022-02:36:00] [V] [TRT] Tactic: 0 Time: 0.348672 [04/18/2022-02:36:00] [V] [TRT] Fastest Tactic: 1002 Time: 0.270464 [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(20480,1:4,1280,80) -> Float(81920,256,16,1) *************** 
[04/18/2022-02:36:00] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/project_bn/FusedBatchNormV3:0) (Reformat) [04/18/2022-02:36:00] [V] [TRT] Tactic: 1002 Time: 0.278656 [04/18/2022-02:36:00] [V] [TRT] Tactic: 0 Time: 0.382976 [04/18/2022-02:36:00] [V] [TRT] Fastest Tactic: 1002 Time: 0.278656 [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(20480,1:4,1280,80) -> Float(81920,1,5120,320) *************** [04/18/2022-02:36:00] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/project_bn/FusedBatchNormV3:0) (Reformat) [04/18/2022-02:36:00] [V] [TRT] Tactic: 1002 Time: 0.26816 [04/18/2022-02:36:00] [V] [TRT] Tactic: 0 Time: 0.33984 [04/18/2022-02:36:00] [V] [TRT] Fastest Tactic: 1002 Time: 0.26816 [04/18/2022-02:36:00] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(81920,256,16,1) -> Float(81920,1,5120,320) *************** [04/18/2022-02:36:00] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/project_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:36:00] [V] [TRT] Tactic: 1002 Time: 0.269312 [04/18/2022-02:36:00] [V] [TRT] Tactic: 0 Time: 0.354688 [04/18/2022-02:36:00] [V] [TRT] Fastest Tactic: 1002 Time: 0.269312 [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(81920,256,16,1) -> Float(20480,1:4,1280,80) *************** [04/18/2022-02:36:00] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/project_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:36:00] [V] [TRT] Tactic: 1002 Time: 0.26432 [04/18/2022-02:36:00] [V] [TRT] Tactic: 0 Time: 0.363264 [04/18/2022-02:36:00] [V] [TRT] Fastest Tactic: 1002 Time: 
0.26432 [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(81920,1,5120,320) -> Float(81920,256,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/project_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:36:00] [V] [TRT] Tactic: 1002 Time: 0.278144 [04/18/2022-02:36:00] [V] [TRT] Tactic: 0 Time: 0.367744 [04/18/2022-02:36:00] [V] [TRT] Fastest Tactic: 1002 Time: 0.278144 [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(81920,1,5120,320) -> Float(20480,1:4,1280,80) *************** [04/18/2022-02:36:00] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/project_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:36:00] [V] [TRT] Tactic: 1002 Time: 0.267392 [04/18/2022-02:36:00] [V] [TRT] Tactic: 0 Time: 0.34048 [04/18/2022-02:36:00] [V] [TRT] Fastest Tactic: 1002 Time: 0.267392 [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(20480,1:4,1280,80) -> Float(81920,256,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/project_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:36:00] [V] [TRT] Tactic: 1002 Time: 0.265088 [04/18/2022-02:36:00] [V] [TRT] Tactic: 0 Time: 0.357248 [04/18/2022-02:36:00] [V] [TRT] Fastest Tactic: 1002 Time: 0.265088 [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(20480,1:4,1280,80) -> Float(81920,1,5120,320) *************** [04/18/2022-02:36:00] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_6/block_0/project_bn/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:36:00] [V] [TRT] Tactic: 1002 Time: 0.253952 [04/18/2022-02:36:00] [V] [TRT] Tactic: 0 
Time: 0.337536 [04/18/2022-02:36:00] [V] [TRT] Fastest Tactic: 1002 Time: 0.253952 [04/18/2022-02:36:00] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(32768,256,16,1) -> Float(32768,1,2048,128) *************** [04/18/2022-02:36:00] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd + StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3 || StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd + StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3) (Reformat) [04/18/2022-02:36:00] [V] [TRT] Tactic: 1002 Time: 0.05568 [04/18/2022-02:36:00] [V] [TRT] Tactic: 0 Time: 0.06656 [04/18/2022-02:36:00] [V] [TRT] Fastest Tactic: 1002 Time: 0.05568 [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(32768,256,16,1) -> Float(8192,1:4,512,32) *************** [04/18/2022-02:36:00] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd + StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3 || StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd + StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3) (Reformat) [04/18/2022-02:36:00] [V] [TRT] Tactic: 1002 Time: 0.055936 [04/18/2022-02:36:00] [V] [TRT] Tactic: 0 Time: 0.067328 [04/18/2022-02:36:00] [V] [TRT] Fastest Tactic: 1002 Time: 0.055936 [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning 
Reformat: Float(32768,256,16,1) -> Float(1024,256:32,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd + StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3 || StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd + StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3) (Reformat) [04/18/2022-02:36:00] [V] [TRT] Tactic: 1002 Time: 0.05376 [04/18/2022-02:36:00] [V] [TRT] Tactic: 0 Time: 0.15488 [04/18/2022-02:36:00] [V] [TRT] Fastest Tactic: 1002 Time: 0.05376 [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(32768,1,2048,128) -> Float(32768,256,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd + StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3 || StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd + StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3) (Reformat) [04/18/2022-02:36:00] [V] [TRT] Tactic: 1002 Time: 0.072192 [04/18/2022-02:36:00] [V] [TRT] Tactic: 0 Time: 0.066304 [04/18/2022-02:36:00] [V] [TRT] Fastest Tactic: 0 Time: 0.066304 [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(32768,1,2048,128) -> Float(8192,1:4,512,32) *************** [04/18/2022-02:36:00] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd + StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3 || StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd + StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3) (Reformat) [04/18/2022-02:36:00] [V] [TRT] Tactic: 1002 Time: 0.054528 [04/18/2022-02:36:00] [V] [TRT] Tactic: 0 Time: 0.056576 [04/18/2022-02:36:00] [V] [TRT] Fastest Tactic: 1002 Time: 0.054528 [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(32768,1,2048,128) -> Float(1024,256:32,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd + StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3 || StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd + StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3) (Reformat) [04/18/2022-02:36:00] [V] [TRT] Tactic: 1002 Time: 0.054016 [04/18/2022-02:36:00] [V] [TRT] Tactic: 0 Time: 0.19648 [04/18/2022-02:36:00] [V] [TRT] Fastest Tactic: 1002 Time: 0.054016 [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(8192,1:4,512,32) -> Float(32768,256,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd + 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3 || StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd + StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3) (Reformat) [04/18/2022-02:36:00] [V] [TRT] Tactic: 1002 Time: 0.057984 [04/18/2022-02:36:00] [V] [TRT] Tactic: 0 Time: 0.065792 [04/18/2022-02:36:00] [V] [TRT] Fastest Tactic: 1002 Time: 0.057984 [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(8192,1:4,512,32) -> Float(32768,1,2048,128) *************** [04/18/2022-02:36:00] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd + StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3 || StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd + StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3) (Reformat) [04/18/2022-02:36:00] [V] [TRT] Tactic: 1002 Time: 0.054016 [04/18/2022-02:36:00] [V] [TRT] Tactic: 0 Time: 0.052864 [04/18/2022-02:36:00] [V] [TRT] Fastest Tactic: 0 Time: 0.052864 [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(8192,1:4,512,32) -> Float(1024,256:32,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd + StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3 || 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/conv/BiasAdd + StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3) (Reformat) [04/18/2022-02:36:00] [V] [TRT] Tactic: 1002 Time: 0.060288 [04/18/2022-02:36:00] [V] [TRT] Tactic: 0 Time: 0.196736 [04/18/2022-02:36:00] [V] [TRT] Fastest Tactic: 1002 Time: 0.060288 [04/18/2022-02:36:00] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(81920,256,16,1) -> Float(81920,1,5120,320) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(81920,256,16,1) -> Float(20480,1:4,1280,80) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(81920,1,5120,320) -> Float(81920,256,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(81920,1,5120,320) -> Float(20480,1:4,1280,80) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(20480,1:4,1280,80) -> Float(81920,256,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(20480,1:4,1280,80) -> Float(81920,1,5120,320) *************** [04/18/2022-02:36:00] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(16384,256,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:36:00] [V] [TRT] Tactic: 1002 Time: 0.054656 [04/18/2022-02:36:00] [V] [TRT] Tactic: 0 Time: 0.040448 [04/18/2022-02:36:00] [V] [TRT] Fastest Tactic: 0 Time: 0.040448 [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning 
Reformat: Float(4096,1:4,256,16) -> Float(16384,256,16,1) *************** [04/18/2022-02:36:00] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:36:00] [V] [TRT] Tactic: 1002 Time: 0.056448 [04/18/2022-02:36:00] [V] [TRT] Tactic: 0 Time: 0.041088 [04/18/2022-02:36:00] [V] [TRT] Fastest Tactic: 0 Time: 0.041088 [04/18/2022-02:36:00] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(4096,1,512,64) *************** [04/18/2022-02:36:00] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/downsample_max_x2/MaxPool:0) (Reformat) [04/18/2022-02:36:00] [V] [TRT] Tactic: 1002 Time: 0.051968 [04/18/2022-02:36:00] [V] [TRT] Tactic: 0 Time: 0.024448 [04/18/2022-02:36:00] [V] [TRT] Fastest Tactic: 0 Time: 0.024448 [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1024,1:4,128,16) *************** [04/18/2022-02:36:00] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/downsample_max_x2/MaxPool:0) (Reformat) [04/18/2022-02:36:00] [V] [TRT] Tactic: 1002 Time: 0.051328 [04/18/2022-02:36:00] [V] [TRT] Tactic: 0 Time: 0.023936 [04/18/2022-02:36:00] [V] [TRT] Fastest Tactic: 0 Time: 0.023936 [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(128,64:32,8,1) *************** [04/18/2022-02:36:00] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/downsample_max_x2/MaxPool:0) (Reformat) [04/18/2022-02:36:00] [V] [TRT] Tactic: 1002 Time: 0.048512 
[04/18/2022-02:36:00] [V] [TRT] Tactic: 0 Time: 0.03456 [04/18/2022-02:36:00] [V] [TRT] Fastest Tactic: 0 Time: 0.03456 [04/18/2022-02:36:00] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(4096,64,8,1) *************** [04/18/2022-02:36:00] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/downsample_max_x2/MaxPool:0 -> ) (Reformat) [04/18/2022-02:36:00] [V] [TRT] Tactic: 1002 Time: 0.05248 [04/18/2022-02:36:00] [V] [TRT] Tactic: 0 Time: 0.02432 [04/18/2022-02:36:00] [V] [TRT] Fastest Tactic: 0 Time: 0.02432 [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,64,8,1) *************** [04/18/2022-02:36:00] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/downsample_max_x2/MaxPool:0 -> ) (Reformat) [04/18/2022-02:36:00] [V] [TRT] Tactic: 1002 Time: 0.053632 [04/18/2022-02:36:00] [V] [TRT] Tactic: 0 Time: 0.024448 [04/18/2022-02:36:00] [V] [TRT] Fastest Tactic: 0 Time: 0.024448 [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,64,8,1) *************** [04/18/2022-02:36:00] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/downsample_max_x2/MaxPool:0 -> ) (Reformat) [04/18/2022-02:36:00] [V] [TRT] Tactic: 1002 Time: 0.0544 [04/18/2022-02:36:00] [V] [TRT] Tactic: 0 Time: 0.024192 [04/18/2022-02:36:00] [V] [TRT] Fastest Tactic: 0 Time: 0.024192 [04/18/2022-02:36:00] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(1024,1,256,64) *************** [04/18/2022-02:36:00] [V] [TRT] 
--------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_01/0_up_lvl_7/input_0_up_lvl_6/downsample_max_x2/MaxPool:0) (Reformat) [04/18/2022-02:36:00] [V] [TRT] Tactic: 1002 Time: 0.054656 [04/18/2022-02:36:00] [V] [TRT] Tactic: 0 Time: 0.019328 [04/18/2022-02:36:00] [V] [TRT] Fastest Tactic: 0 Time: 0.019328 [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(256,1:4,64,16) *************** [04/18/2022-02:36:00] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_01/0_up_lvl_7/input_0_up_lvl_6/downsample_max_x2/MaxPool:0) (Reformat) [04/18/2022-02:36:00] [V] [TRT] Tactic: 1002 Time: 0.051712 [04/18/2022-02:36:00] [V] [TRT] Tactic: 0 Time: 0.019072 [04/18/2022-02:36:00] [V] [TRT] Fastest Tactic: 0 Time: 0.019072 [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(32,16:32,4,1) *************** [04/18/2022-02:36:00] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_01/0_up_lvl_7/input_0_up_lvl_6/downsample_max_x2/MaxPool:0) (Reformat) [04/18/2022-02:36:00] [V] [TRT] Tactic: 1002 Time: 0.054784 [04/18/2022-02:36:00] [V] [TRT] Tactic: 0 Time: 0.01984 [04/18/2022-02:36:00] [V] [TRT] Fastest Tactic: 0 Time: 0.01984 [04/18/2022-02:36:00] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(1024,1,256,64) *************** [04/18/2022-02:36:00] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_01/0_up_lvl_7/input_0_up_lvl_6/downsample_max_x2/MaxPool:0 -> ) (Reformat) [04/18/2022-02:36:00] [V] [TRT] Tactic: 1002 Time: 0.052736 [04/18/2022-02:36:00] [V] [TRT] Tactic: 0 Time: 0.018432 [04/18/2022-02:36:00] [V] [TRT] Fastest Tactic: 0 Time: 0.018432 
[04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(256,1:4,64,16) *************** [04/18/2022-02:36:00] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_01/0_up_lvl_7/input_0_up_lvl_6/downsample_max_x2/MaxPool:0 -> ) (Reformat) [04/18/2022-02:36:00] [V] [TRT] Tactic: 1002 Time: 0.052096 [04/18/2022-02:36:00] [V] [TRT] Tactic: 0 Time: 0.019456 [04/18/2022-02:36:00] [V] [TRT] Fastest Tactic: 0 Time: 0.019456 [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(32,16:32,4,1) *************** [04/18/2022-02:36:00] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_01/0_up_lvl_7/input_0_up_lvl_6/downsample_max_x2/MaxPool:0 -> ) (Reformat) [04/18/2022-02:36:00] [V] [TRT] Tactic: 1002 Time: 0.055296 [04/18/2022-02:36:00] [V] [TRT] Tactic: 0 Time: 0.020608 [04/18/2022-02:36:00] [V] [TRT] Fastest Tactic: 0 Time: 0.020608 [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(1024,16,4,1) *************** [04/18/2022-02:36:00] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_01/0_up_lvl_7/input_0_up_lvl_6/downsample_max_x2/MaxPool:0 -> ) (Reformat) [04/18/2022-02:36:00] [V] [TRT] Tactic: 1002 Time: 0.028672 [04/18/2022-02:36:00] [V] [TRT] Tactic: 0 Time: 0.018688 [04/18/2022-02:36:00] [V] [TRT] Fastest Tactic: 0 Time: 0.018688 [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(256,1:4,64,16) *************** [04/18/2022-02:36:00] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_01/0_up_lvl_7/input_0_up_lvl_6/downsample_max_x2/MaxPool:0 -> ) (Reformat) [04/18/2022-02:36:00] [V] [TRT] Tactic: 1002 Time: 0.048512 [04/18/2022-02:36:00] [V] [TRT] Tactic: 0 Time: 
0.018304 [04/18/2022-02:36:00] [V] [TRT] Fastest Tactic: 0 Time: 0.018304 [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(32,16:32,4,1) *************** [04/18/2022-02:36:00] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_01/0_up_lvl_7/input_0_up_lvl_6/downsample_max_x2/MaxPool:0 -> ) (Reformat) [04/18/2022-02:36:00] [V] [TRT] Tactic: 1002 Time: 0.047488 [04/18/2022-02:36:00] [V] [TRT] Tactic: 0 Time: 0.020352 [04/18/2022-02:36:00] [V] [TRT] Fastest Tactic: 0 Time: 0.020352 [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,16,4,1) *************** [04/18/2022-02:36:00] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_01/0_up_lvl_7/input_0_up_lvl_6/downsample_max_x2/MaxPool:0 -> ) (Reformat) [04/18/2022-02:36:00] [V] [TRT] Tactic: 1002 Time: 0.028928 [04/18/2022-02:36:00] [V] [TRT] Tactic: 0 Time: 0.01984 [04/18/2022-02:36:00] [V] [TRT] Fastest Tactic: 0 Time: 0.01984 [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,1,256,64) *************** [04/18/2022-02:36:00] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_01/0_up_lvl_7/input_0_up_lvl_6/downsample_max_x2/MaxPool:0 -> ) (Reformat) [04/18/2022-02:36:00] [V] [TRT] Tactic: 1002 Time: 0.049024 [04/18/2022-02:36:00] [V] [TRT] Tactic: 0 Time: 0.018304 [04/18/2022-02:36:00] [V] [TRT] Fastest Tactic: 0 Time: 0.018304 [04/18/2022-02:36:00] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(32,16:32,4,1) *************** [04/18/2022-02:36:00] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_01/0_up_lvl_7/input_0_up_lvl_6/downsample_max_x2/MaxPool:0 -> ) (Reformat) [04/18/2022-02:36:01] [V] [TRT] 
Tactic: 1002 Time: 0.046336 [04/18/2022-02:36:01] [V] [TRT] Tactic: 0 Time: 0.020864 [04/18/2022-02:36:01] [V] [TRT] Fastest Tactic: 0 Time: 0.020864 [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(1024,16,4,1) *************** [04/18/2022-02:36:01] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_01/0_up_lvl_7/input_0_up_lvl_6/downsample_max_x2/MaxPool:0 -> ) (Reformat) [04/18/2022-02:36:01] [V] [TRT] Tactic: 1002 Time: 0.028928 [04/18/2022-02:36:01] [V] [TRT] Tactic: 0 Time: 0.019712 [04/18/2022-02:36:01] [V] [TRT] Fastest Tactic: 0 Time: 0.019712 [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(1024,1,256,64) *************** [04/18/2022-02:36:01] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_01/0_up_lvl_7/input_0_up_lvl_6/downsample_max_x2/MaxPool:0 -> ) (Reformat) [04/18/2022-02:36:01] [V] [TRT] Tactic: 1002 Time: 0.048 [04/18/2022-02:36:01] [V] [TRT] Tactic: 0 Time: 0.018176 [04/18/2022-02:36:01] [V] [TRT] Fastest Tactic: 0 Time: 0.018176 [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(256,1:4,64,16) *************** [04/18/2022-02:36:01] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_01/0_up_lvl_7/input_0_up_lvl_6/downsample_max_x2/MaxPool:0 -> ) (Reformat) [04/18/2022-02:36:01] [V] [TRT] Tactic: 1002 Time: 0.04672 [04/18/2022-02:36:01] [V] [TRT] Tactic: 0 Time: 0.018432 [04/18/2022-02:36:01] [V] [TRT] Fastest Tactic: 0 Time: 0.018432 [04/18/2022-02:36:01] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1) -> Float(1024,1,256,4) *************** [04/18/2022-02:36:01] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> 
Transpose__6299:0) (Reformat) [04/18/2022-02:36:01] [V] [TRT] Tactic: 1002 Time: 0.028928 [04/18/2022-02:36:01] [V] [TRT] Tactic: 0 Time: 0.018176 [04/18/2022-02:36:01] [V] [TRT] Fastest Tactic: 0 Time: 0.018176 [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1) -> Float(256,1:4,64,1) *************** [04/18/2022-02:36:01] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> Transpose__6299:0) (Reformat) [04/18/2022-02:36:01] [V] [TRT] Tactic: 1002 Time: 0.029184 [04/18/2022-02:36:01] [V] [TRT] Tactic: 0 Time: 0.018688 [04/18/2022-02:36:01] [V] [TRT] Fastest Tactic: 0 Time: 0.018688 [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1) -> Float(256,256:32,64,1) *************** [04/18/2022-02:36:01] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> Transpose__6299:0) (Reformat) [04/18/2022-02:36:01] [V] [TRT] Tactic: 1002 Time: 0.053376 [04/18/2022-02:36:01] [V] [TRT] Tactic: 0 Time: 0.050688 [04/18/2022-02:36:01] [V] [TRT] Fastest Tactic: 0 Time: 0.050688 [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,4) -> Float(1024,256,64,1) *************** [04/18/2022-02:36:01] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> Transpose__6299:0) (Reformat) [04/18/2022-02:36:01] [V] [TRT] Tactic: 1002 Time: 0.05312 [04/18/2022-02:36:01] [V] [TRT] Tactic: 0 Time: 0.018432 [04/18/2022-02:36:01] [V] [TRT] Fastest Tactic: 0 Time: 0.018432 [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,4) -> Float(256,1:4,64,1) *************** [04/18/2022-02:36:01] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> Transpose__6299:0) (Reformat) [04/18/2022-02:36:01] [V] [TRT] Tactic: 1002 Time: 0.05632 [04/18/2022-02:36:01] [V] [TRT] Tactic: 0 Time: 0.018432 [04/18/2022-02:36:01] [V] [TRT] Fastest Tactic: 0 Time: 0.018432 [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: 
Float(1024,1,256,4) -> Float(256,256:32,64,1) *************** [04/18/2022-02:36:01] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> Transpose__6299:0) (Reformat) [04/18/2022-02:36:01] [V] [TRT] Tactic: 1002 Time: 0.054272 [04/18/2022-02:36:01] [V] [TRT] Tactic: 0 Time: 0.051456 [04/18/2022-02:36:01] [V] [TRT] Fastest Tactic: 0 Time: 0.051456 [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,1) -> Float(1024,256,64,1) *************** [04/18/2022-02:36:01] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> Transpose__6299:0) (Reformat) [04/18/2022-02:36:01] [V] [TRT] Tactic: 1002 Time: 0.052352 [04/18/2022-02:36:01] [V] [TRT] Tactic: 0 Time: 0.018432 [04/18/2022-02:36:01] [V] [TRT] Fastest Tactic: 0 Time: 0.018432 [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,1) -> Float(1024,1,256,4) *************** [04/18/2022-02:36:01] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> Transpose__6299:0) (Reformat) [04/18/2022-02:36:01] [V] [TRT] Tactic: 1002 Time: 0.056064 [04/18/2022-02:36:01] [V] [TRT] Tactic: 0 Time: 0.018304 [04/18/2022-02:36:01] [V] [TRT] Fastest Tactic: 0 Time: 0.018304 [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,1) -> Float(256,256:32,64,1) *************** [04/18/2022-02:36:01] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> Transpose__6299:0) (Reformat) [04/18/2022-02:36:01] [V] [TRT] Tactic: 1002 Time: 0.054272 [04/18/2022-02:36:01] [V] [TRT] Tactic: 0 Time: 0.051968 [04/18/2022-02:36:01] [V] [TRT] Fastest Tactic: 0 Time: 0.051968 [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(256,256:32,64,1) -> Float(1024,256,64,1) *************** [04/18/2022-02:36:01] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> Transpose__6299:0) (Reformat) [04/18/2022-02:36:01] [V] [TRT] Tactic: 1002 Time: 0.053888 [04/18/2022-02:36:01] [V] [TRT] Tactic: 0 Time: 
0.020736 [04/18/2022-02:36:01] [V] [TRT] Fastest Tactic: 0 Time: 0.020736 [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(256,256:32,64,1) -> Float(1024,1,256,4) *************** [04/18/2022-02:36:01] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> Transpose__6299:0) (Reformat) [04/18/2022-02:36:01] [V] [TRT] Tactic: 1002 Time: 0.05376 [04/18/2022-02:36:01] [V] [TRT] Tactic: 0 Time: 0.019584 [04/18/2022-02:36:01] [V] [TRT] Fastest Tactic: 0 Time: 0.019584 [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(256,256:32,64,1) -> Float(256,1:4,64,1) *************** [04/18/2022-02:36:01] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> Transpose__6299:0) (Reformat) [04/18/2022-02:36:01] [V] [TRT] Tactic: 1002 Time: 0.055168 [04/18/2022-02:36:01] [V] [TRT] Tactic: 0 Time: 0.019072 [04/18/2022-02:36:01] [V] [TRT] Fastest Tactic: 0 Time: 0.019072 [04/18/2022-02:36:01] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(32768,256,16,1) -> Float(32768,1,2048,128) *************** [04/18/2022-02:36:01] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:36:01] [V] [TRT] Tactic: 1002 Time: 0.049792 [04/18/2022-02:36:01] [V] [TRT] Tactic: 0 Time: 0.03648 [04/18/2022-02:36:01] [V] [TRT] Fastest Tactic: 0 Time: 0.03648 [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(32768,256,16,1) -> Float(8192,1:4,512,32) *************** [04/18/2022-02:36:01] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:36:01] [V] [TRT] Tactic: 1002 Time: 0.050816 [04/18/2022-02:36:01] [V] [TRT] 
Tactic: 0 Time: 0.036992 [04/18/2022-02:36:01] [V] [TRT] Fastest Tactic: 0 Time: 0.036992 [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(32768,256,16,1) -> Float(1024,256:32,16,1) *************** [04/18/2022-02:36:01] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:36:01] [V] [TRT] Tactic: 1002 Time: 0.048 [04/18/2022-02:36:01] [V] [TRT] Tactic: 0 Time: 0.089856 [04/18/2022-02:36:01] [V] [TRT] Fastest Tactic: 1002 Time: 0.048 [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(32768,1,2048,128) -> Float(32768,256,16,1) *************** [04/18/2022-02:36:01] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:36:01] [V] [TRT] Tactic: 1002 Time: 0.054144 [04/18/2022-02:36:01] [V] [TRT] Tactic: 0 Time: 0.040832 [04/18/2022-02:36:01] [V] [TRT] Fastest Tactic: 0 Time: 0.040832 [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(32768,1,2048,128) -> Float(8192,1:4,512,32) *************** [04/18/2022-02:36:01] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:36:01] [V] [TRT] Tactic: 1002 Time: 0.046464 [04/18/2022-02:36:01] [V] [TRT] Tactic: 0 Time: 0.03392 [04/18/2022-02:36:01] [V] [TRT] Fastest Tactic: 0 Time: 0.03392 [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(32768,1,2048,128) -> Float(1024,256:32,16,1) *************** [04/18/2022-02:36:01] [V] [TRT] --------------- Timing Runner: Optimizer 
Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:36:01] [V] [TRT] Tactic: 1002 Time: 0.04608 [04/18/2022-02:36:01] [V] [TRT] Tactic: 0 Time: 0.110592 [04/18/2022-02:36:01] [V] [TRT] Fastest Tactic: 1002 Time: 0.04608 [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(8192,1:4,512,32) -> Float(32768,256,16,1) *************** [04/18/2022-02:36:01] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:36:01] [V] [TRT] Tactic: 1002 Time: 0.055808 [04/18/2022-02:36:01] [V] [TRT] Tactic: 0 Time: 0.041472 [04/18/2022-02:36:01] [V] [TRT] Fastest Tactic: 0 Time: 0.041472 [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(8192,1:4,512,32) -> Float(32768,1,2048,128) *************** [04/18/2022-02:36:01] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:36:01] [V] [TRT] Tactic: 1002 Time: 0.046976 [04/18/2022-02:36:01] [V] [TRT] Tactic: 0 Time: 0.034048 [04/18/2022-02:36:01] [V] [TRT] Fastest Tactic: 0 Time: 0.034048 [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(8192,1:4,512,32) -> Float(1024,256:32,16,1) *************** [04/18/2022-02:36:01] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:36:01] [V] [TRT] Tactic: 1002 Time: 0.047744 [04/18/2022-02:36:01] [V] [TRT] Tactic: 0 Time: 0.112512 [04/18/2022-02:36:01] [V] [TRT] Fastest Tactic: 1002 Time: 0.047744 [04/18/2022-02:36:01] [V] [TRT] 
*************** Autotuning Reformat: Float(1024,256:32,16,1) -> Float(32768,256,16,1) *************** [04/18/2022-02:36:01] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:36:01] [V] [TRT] Tactic: 1002 Time: 0.053504 [04/18/2022-02:36:01] [V] [TRT] Tactic: 0 Time: 0.040704 [04/18/2022-02:36:01] [V] [TRT] Fastest Tactic: 0 Time: 0.040704 [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(1024,256:32,16,1) -> Float(32768,1,2048,128) *************** [04/18/2022-02:36:01] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:36:01] [V] [TRT] Tactic: 1002 Time: 0.047488 [04/18/2022-02:36:01] [V] [TRT] Tactic: 0 Time: 0.034048 [04/18/2022-02:36:01] [V] [TRT] Fastest Tactic: 0 Time: 0.034048 [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(1024,256:32,16,1) -> Float(8192,1:4,512,32) *************** [04/18/2022-02:36:01] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_0_up_lvl_5/1x1_pre_sample/batchnorm/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:36:01] [V] [TRT] Tactic: 1002 Time: 0.045568 [04/18/2022-02:36:01] [V] [TRT] Tactic: 0 Time: 0.03776 [04/18/2022-02:36:01] [V] [TRT] Fastest Tactic: 0 Time: 0.03776 [04/18/2022-02:36:01] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(32768,256,16,1) -> Float(32768,1,2048,128) *************** [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(32768,256,16,1) -> Float(8192,1:4,512,32) *************** [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning 
Reformat: Float(32768,256,16,1) -> Float(1024,256:32,16,1) *************** [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(32768,1,2048,128) -> Float(32768,256,16,1) *************** [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(32768,1,2048,128) -> Float(8192,1:4,512,32) *************** [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(32768,1,2048,128) -> Float(1024,256:32,16,1) *************** [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(8192,1:4,512,32) -> Float(32768,256,16,1) *************** [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(8192,1:4,512,32) -> Float(32768,1,2048,128) *************** [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(8192,1:4,512,32) -> Float(1024,256:32,16,1) *************** [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(1024,256:32,16,1) -> Float(32768,256,16,1) *************** [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(1024,256:32,16,1) -> Float(32768,1,2048,128) *************** [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(1024,256:32,16,1) -> Float(8192,1:4,512,32) *************** [04/18/2022-02:36:01] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(4096,1,512,64) *************** [04/18/2022-02:36:01] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/downsample_max_x2/MaxPool:0 -> ) (Reformat) [04/18/2022-02:36:01] [V] [TRT] Tactic: 1002 Time: 0.050304 [04/18/2022-02:36:01] [V] [TRT] Tactic: 0 Time: 0.023424 [04/18/2022-02:36:01] [V] [TRT] Fastest Tactic: 0 Time: 0.023424 [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1024,1:4,128,16) 
*************** [04/18/2022-02:36:01] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/downsample_max_x2/MaxPool:0 -> ) (Reformat) [04/18/2022-02:36:01] [V] [TRT] Tactic: 1002 Time: 0.050944 [04/18/2022-02:36:01] [V] [TRT] Tactic: 0 Time: 0.023936 [04/18/2022-02:36:01] [V] [TRT] Fastest Tactic: 0 Time: 0.023936 [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(128,64:32,8,1) *************** [04/18/2022-02:36:01] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/downsample_max_x2/MaxPool:0 -> ) (Reformat) [04/18/2022-02:36:01] [V] [TRT] Tactic: 1002 Time: 0.051456 [04/18/2022-02:36:01] [V] [TRT] Tactic: 0 Time: 0.034176 [04/18/2022-02:36:01] [V] [TRT] Fastest Tactic: 0 Time: 0.034176 [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(4096,64,8,1) *************** [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(1024,1:4,128,16) *************** [04/18/2022-02:36:01] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/downsample_max_x2/MaxPool:0 -> ) (Reformat) [04/18/2022-02:36:01] [V] [TRT] Tactic: 1002 Time: 0.045952 [04/18/2022-02:36:01] [V] [TRT] Tactic: 0 Time: 0.021504 [04/18/2022-02:36:01] [V] [TRT] Fastest Tactic: 0 Time: 0.021504 [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(128,64:32,8,1) *************** [04/18/2022-02:36:01] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/downsample_max_x2/MaxPool:0 -> ) (Reformat) [04/18/2022-02:36:01] [V] [TRT] Tactic: 1002 Time: 0.046592 [04/18/2022-02:36:01] [V] 
[TRT] Tactic: 0 Time: 0.03584 [04/18/2022-02:36:01] [V] [TRT] Fastest Tactic: 0 Time: 0.03584 [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,64,8,1) *************** [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,1,512,64) *************** [04/18/2022-02:36:01] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/downsample_max_x2/MaxPool:0 -> ) (Reformat) [04/18/2022-02:36:01] [V] [TRT] Tactic: 1002 Time: 0.04736 [04/18/2022-02:36:01] [V] [TRT] Tactic: 0 Time: 0.021248 [04/18/2022-02:36:01] [V] [TRT] Fastest Tactic: 0 Time: 0.021248 [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(128,64:32,8,1) *************** [04/18/2022-02:36:01] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/downsample_max_x2/MaxPool:0 -> ) (Reformat) [04/18/2022-02:36:01] [V] [TRT] Tactic: 1002 Time: 0.04608 [04/18/2022-02:36:01] [V] [TRT] Tactic: 0 Time: 0.036608 [04/18/2022-02:36:01] [V] [TRT] Fastest Tactic: 0 Time: 0.036608 [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,64,8,1) *************** [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,1,512,64) *************** [04/18/2022-02:36:01] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/downsample_max_x2/MaxPool:0 -> ) (Reformat) [04/18/2022-02:36:01] [V] [TRT] Tactic: 1002 Time: 0.047744 [04/18/2022-02:36:01] [V] [TRT] Tactic: 0 Time: 0.021632 [04/18/2022-02:36:01] [V] [TRT] Fastest Tactic: 0 Time: 0.021632 [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: 
Float(128,64:32,8,1) -> Float(1024,1:4,128,16) *************** [04/18/2022-02:36:01] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_00/0_up_lvl_6/input_0_up_lvl_5/downsample_max_x2/MaxPool:0 -> ) (Reformat) [04/18/2022-02:36:01] [V] [TRT] Tactic: 1002 Time: 0.05056 [04/18/2022-02:36:01] [V] [TRT] Tactic: 0 Time: 0.022784 [04/18/2022-02:36:01] [V] [TRT] Fastest Tactic: 0 Time: 0.022784 [04/18/2022-02:36:01] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(4096,512,1,8,8) *************** [04/18/2022-02:36:01] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1006:0) (Reformat) [04/18/2022-02:36:01] [V] [TRT] Tactic: 1002 Time: 0.03264 [04/18/2022-02:36:01] [V] [TRT] Tactic: 0 Time: 0.020864 [04/18/2022-02:36:01] [V] [TRT] Fastest Tactic: 0 Time: 0.020864 [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(1024,128,1:4,2,2) *************** [04/18/2022-02:36:01] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1006:0) (Reformat) [04/18/2022-02:36:01] [V] [TRT] Tactic: 1002 Time: 0.032896 [04/18/2022-02:36:01] [V] [TRT] Tactic: 0 Time: 0.021632 [04/18/2022-02:36:01] [V] [TRT] Fastest Tactic: 0 Time: 0.021632 [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(512,64,64:32,1,1) *************** [04/18/2022-02:36:01] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1006:0) (Reformat) [04/18/2022-02:36:01] [V] [TRT] Tactic: 1002 Time: 0.059904 [04/18/2022-02:36:01] [V] [TRT] Tactic: 0 Time: 0.089856 
[04/18/2022-02:36:01] [V] [TRT] Fastest Tactic: 1002 Time: 0.059904 [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,8,8) -> Float(4096,512,64,1,1) *************** [04/18/2022-02:36:01] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1006:0) (Reformat) [04/18/2022-02:36:01] [V] [TRT] Tactic: 1002 Time: 0.059136 [04/18/2022-02:36:01] [V] [TRT] Tactic: 0 Time: 0.020992 [04/18/2022-02:36:01] [V] [TRT] Fastest Tactic: 0 Time: 0.020992 [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,8,8) -> Float(1024,128,1:4,2,2) *************** [04/18/2022-02:36:01] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1006:0) (Reformat) [04/18/2022-02:36:01] [V] [TRT] Tactic: 1002 Time: 0.060416 [04/18/2022-02:36:01] [V] [TRT] Tactic: 0 Time: 0.021504 [04/18/2022-02:36:01] [V] [TRT] Fastest Tactic: 0 Time: 0.021504 [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,8,8) -> Float(512,64,64:32,1,1) *************** [04/18/2022-02:36:01] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1006:0) (Reformat) [04/18/2022-02:36:01] [V] [TRT] Tactic: 1002 Time: 0.062336 [04/18/2022-02:36:01] [V] [TRT] Tactic: 0 Time: 0.090368 [04/18/2022-02:36:01] [V] [TRT] Fastest Tactic: 1002 Time: 0.062336 [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,2,2) -> Float(4096,512,64,1,1) *************** [04/18/2022-02:36:01] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1006:0) (Reformat) [04/18/2022-02:36:01] [V] [TRT] Tactic: 1002 Time: 0.057984 
[04/18/2022-02:36:01] [V] [TRT] Tactic: 0 Time: 0.022016 [04/18/2022-02:36:01] [V] [TRT] Fastest Tactic: 0 Time: 0.022016 [04/18/2022-02:36:01] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,2,2) -> Float(4096,512,1,8,8) *************** [04/18/2022-02:36:01] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1006:0) (Reformat) [04/18/2022-02:36:02] [V] [TRT] Tactic: 1002 Time: 0.059776 [04/18/2022-02:36:02] [V] [TRT] Tactic: 0 Time: 0.022272 [04/18/2022-02:36:02] [V] [TRT] Fastest Tactic: 0 Time: 0.022272 [04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,2,2) -> Float(512,64,64:32,1,1) *************** [04/18/2022-02:36:02] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1006:0) (Reformat) [04/18/2022-02:36:02] [V] [TRT] Tactic: 1002 Time: 0.060032 [04/18/2022-02:36:02] [V] [TRT] Tactic: 0 Time: 0.090496 [04/18/2022-02:36:02] [V] [TRT] Fastest Tactic: 1002 Time: 0.060032 [04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,1,1) -> Float(4096,512,64,1,1) *************** [04/18/2022-02:36:02] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1006:0) (Reformat) [04/18/2022-02:36:02] [V] [TRT] Tactic: 1002 Time: 0.057984 [04/18/2022-02:36:02] [V] [TRT] Tactic: 0 Time: 0.024192 [04/18/2022-02:36:02] [V] [TRT] Fastest Tactic: 0 Time: 0.024192 [04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,1,1) -> Float(4096,512,1,8,8) *************** [04/18/2022-02:36:02] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1006:0) (Reformat) 
[04/18/2022-02:36:02] [V] [TRT] Tactic: 1002 Time: 0.060672
[04/18/2022-02:36:02] [V] [TRT] Tactic: 0 Time: 0.020992
[04/18/2022-02:36:02] [V] [TRT] Fastest Tactic: 0 Time: 0.020992
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,1,1) -> Float(1024,128,1:4,2,2) ***************
[04/18/2022-02:36:02] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1006:0) (Reformat)
[04/18/2022-02:36:02] [V] [TRT] Tactic: 1002 Time: 0.05888
[04/18/2022-02:36:02] [V] [TRT] Tactic: 0 Time: 0.022656
[04/18/2022-02:36:02] [V] [TRT] Fastest Tactic: 0 Time: 0.022656
[04/18/2022-02:36:02] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1) -> Float(1024,1,256,4) ***************
[04/18/2022-02:36:02] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(Transpose__6299:0 -> ) (Reformat)
[04/18/2022-02:36:02] [V] [TRT] Tactic: 1002 Time: 0.029056
[04/18/2022-02:36:02] [V] [TRT] Tactic: 0 Time: 0.018304
[04/18/2022-02:36:02] [V] [TRT] Fastest Tactic: 0 Time: 0.018304
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1) -> Float(256,1:4,64,1) ***************
[04/18/2022-02:36:02] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(Transpose__6299:0 -> ) (Reformat)
[04/18/2022-02:36:02] [V] [TRT] Tactic: 1002 Time: 0.029184
[04/18/2022-02:36:02] [V] [TRT] Tactic: 0 Time: 0.0192
[04/18/2022-02:36:02] [V] [TRT] Fastest Tactic: 0 Time: 0.0192
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1) -> Float(256,256:32,64,1) ***************
[04/18/2022-02:36:02] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(Transpose__6299:0 -> ) (Reformat)
[04/18/2022-02:36:02] [V] [TRT] Tactic: 1002 Time: 0.055936
[04/18/2022-02:36:02] [V] [TRT] Tactic: 0 Time: 0.0512
[04/18/2022-02:36:02] [V] [TRT] Fastest Tactic: 0 Time: 0.0512
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,4) -> Float(1024,256,64,1) ***************
[04/18/2022-02:36:02] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(Transpose__6299:0 -> ) (Reformat)
[04/18/2022-02:36:02] [V] [TRT] Tactic: 1002 Time: 0.053248
[04/18/2022-02:36:02] [V] [TRT] Tactic: 0 Time: 0.018304
[04/18/2022-02:36:02] [V] [TRT] Fastest Tactic: 0 Time: 0.018304
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,4) -> Float(256,1:4,64,1) ***************
[04/18/2022-02:36:02] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(Transpose__6299:0 -> ) (Reformat)
[04/18/2022-02:36:02] [V] [TRT] Tactic: 1002 Time: 0.054272
[04/18/2022-02:36:02] [V] [TRT] Tactic: 0 Time: 0.018304
[04/18/2022-02:36:02] [V] [TRT] Fastest Tactic: 0 Time: 0.018304
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,4) -> Float(256,256:32,64,1) ***************
[04/18/2022-02:36:02] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(Transpose__6299:0 -> ) (Reformat)
[04/18/2022-02:36:02] [V] [TRT] Tactic: 1002 Time: 0.053248
[04/18/2022-02:36:02] [V] [TRT] Tactic: 0 Time: 0.049664
[04/18/2022-02:36:02] [V] [TRT] Fastest Tactic: 0 Time: 0.049664
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,1) -> Float(1024,256,64,1) ***************
[04/18/2022-02:36:02] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(Transpose__6299:0 -> ) (Reformat)
[04/18/2022-02:36:02] [V] [TRT] Tactic: 1002 Time: 0.05312
[04/18/2022-02:36:02] [V] [TRT] Tactic: 0 Time: 0.018688
[04/18/2022-02:36:02] [V] [TRT] Fastest Tactic: 0 Time: 0.018688
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,1) -> Float(1024,1,256,4) ***************
[04/18/2022-02:36:02] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(Transpose__6299:0 -> ) (Reformat)
[04/18/2022-02:36:02] [V] [TRT] Tactic: 1002 Time: 0.055424
[04/18/2022-02:36:02] [V] [TRT] Tactic: 0 Time: 0.018176
[04/18/2022-02:36:02] [V] [TRT] Fastest Tactic: 0 Time: 0.018176
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,1) -> Float(256,256:32,64,1) ***************
[04/18/2022-02:36:02] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(Transpose__6299:0 -> ) (Reformat)
[04/18/2022-02:36:02] [V] [TRT] Tactic: 1002 Time: 0.054144
[04/18/2022-02:36:02] [V] [TRT] Tactic: 0 Time: 0.050304
[04/18/2022-02:36:02] [V] [TRT] Fastest Tactic: 0 Time: 0.050304
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(256,256:32,64,1) -> Float(1024,256,64,1) ***************
[04/18/2022-02:36:02] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(Transpose__6299:0 -> ) (Reformat)
[04/18/2022-02:36:02] [V] [TRT] Tactic: 1002 Time: 0.054912
[04/18/2022-02:36:02] [V] [TRT] Tactic: 0 Time: 0.021504
[04/18/2022-02:36:02] [V] [TRT] Fastest Tactic: 0 Time: 0.021504
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(256,256:32,64,1) -> Float(1024,1,256,4) ***************
[04/18/2022-02:36:02] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(Transpose__6299:0 -> ) (Reformat)
[04/18/2022-02:36:02] [V] [TRT] Tactic: 1002 Time: 0.055168
[04/18/2022-02:36:02] [V] [TRT] Tactic: 0 Time: 0.018432
[04/18/2022-02:36:02] [V] [TRT] Fastest Tactic: 0 Time: 0.018432
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(256,256:32,64,1) -> Float(256,1:4,64,1) ***************
[04/18/2022-02:36:02] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(Transpose__6299:0 -> ) (Reformat)
[04/18/2022-02:36:02] [V] [TRT] Tactic: 1002 Time: 0.054656
[04/18/2022-02:36:02] [V] [TRT] Tactic: 0 Time: 0.019072
[04/18/2022-02:36:02] [V] [TRT] Fastest Tactic: 0 Time: 0.019072
[04/18/2022-02:36:02] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1) -> Float(1024,1,256,4) ***************
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1) -> Float(256,1:4,64,1) ***************
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1) -> Float(256,256:32,64,1) ***************
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,4) -> Float(1024,256,64,1) ***************
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,4) -> Float(256,1:4,64,1) ***************
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,4) -> Float(256,256:32,64,1) ***************
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,1) -> Float(1024,256,64,1) ***************
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,1) -> Float(1024,1,256,4) ***************
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,1) -> Float(256,256:32,64,1) ***************
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(256,256:32,64,1) -> Float(1024,256,64,1) ***************
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(256,256:32,64,1) -> Float(1024,1,256,4) ***************
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(256,256:32,64,1) -> Float(256,1:4,64,1) ***************
[04/18/2022-02:36:02] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,64,1) -> Float(1024,256,1,256,4) ***************
[04/18/2022-02:36:02] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881:0) (Reformat)
[04/18/2022-02:36:02] [V] [TRT] Tactic: 1002 Time: 0.028928
[04/18/2022-02:36:02] [V] [TRT] Tactic: 0 Time: 0.018304
[04/18/2022-02:36:02] [V] [TRT] Fastest Tactic: 0 Time: 0.018304
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,64,1) -> Float(256,64,1:4,64,1) ***************
[04/18/2022-02:36:02] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881:0) (Reformat)
[04/18/2022-02:36:02] [V] [TRT] Tactic: 1002 Time: 0.0288
[04/18/2022-02:36:02] [V] [TRT] Tactic: 0 Time: 0.01856
[04/18/2022-02:36:02] [V] [TRT] Fastest Tactic: 0 Time: 0.01856
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,64,1) -> Float(256,64,64:32,64,1) ***************
[04/18/2022-02:36:02] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881:0) (Reformat)
[04/18/2022-02:36:02] [V] [TRT] Tactic: 1002 Time: 0.054784
[04/18/2022-02:36:02] [V] [TRT] Tactic: 0 Time: 0.0512
[04/18/2022-02:36:02] [V] [TRT] Fastest Tactic: 0 Time: 0.0512
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,1,256,4) -> Float(1024,256,64,64,1) ***************
[04/18/2022-02:36:02] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881:0) (Reformat)
[04/18/2022-02:36:02] [V] [TRT] Tactic: 1002 Time: 0.054144
[04/18/2022-02:36:02] [V] [TRT] Tactic: 0 Time: 0.018176
[04/18/2022-02:36:02] [V] [TRT] Fastest Tactic: 0 Time: 0.018176
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,1,256,4) -> Float(256,64,1:4,64,1) ***************
[04/18/2022-02:36:02] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881:0) (Reformat)
[04/18/2022-02:36:02] [V] [TRT] Tactic: 1002 Time: 0.0544
[04/18/2022-02:36:02] [V] [TRT] Tactic: 0 Time: 0.018304
[04/18/2022-02:36:02] [V] [TRT] Fastest Tactic: 0 Time: 0.018304
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,1,256,4) -> Float(256,64,64:32,64,1) ***************
[04/18/2022-02:36:02] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881:0) (Reformat)
[04/18/2022-02:36:02] [V] [TRT] Tactic: 1002 Time: 0.052864
[04/18/2022-02:36:02] [V] [TRT] Tactic: 0 Time: 0.051328
[04/18/2022-02:36:02] [V] [TRT] Fastest Tactic: 0 Time: 0.051328
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(256,64,1:4,64,1) -> Float(1024,256,64,64,1) ***************
[04/18/2022-02:36:02] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881:0) (Reformat)
[04/18/2022-02:36:02] [V] [TRT] Tactic: 1002 Time: 0.053632
[04/18/2022-02:36:02] [V] [TRT] Tactic: 0 Time: 0.018304
[04/18/2022-02:36:02] [V] [TRT] Fastest Tactic: 0 Time: 0.018304
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(256,64,1:4,64,1) -> Float(1024,256,1,256,4) ***************
[04/18/2022-02:36:02] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881:0) (Reformat)
[04/18/2022-02:36:02] [V] [TRT] Tactic: 1002 Time: 0.053632
[04/18/2022-02:36:02] [V] [TRT] Tactic: 0 Time: 0.018304
[04/18/2022-02:36:02] [V] [TRT] Fastest Tactic: 0 Time: 0.018304
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(256,64,1:4,64,1) -> Float(256,64,64:32,64,1) ***************
[04/18/2022-02:36:02] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881:0) (Reformat)
[04/18/2022-02:36:02] [V] [TRT] Tactic: 1002 Time: 0.055168
[04/18/2022-02:36:02] [V] [TRT] Tactic: 0 Time: 0.050048
[04/18/2022-02:36:02] [V] [TRT] Fastest Tactic: 0 Time: 0.050048
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(256,64,64:32,64,1) -> Float(1024,256,64,64,1) ***************
[04/18/2022-02:36:02] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881:0) (Reformat)
[04/18/2022-02:36:02] [V] [TRT] Tactic: 1002 Time: 0.055552
[04/18/2022-02:36:02] [V] [TRT] Tactic: 0 Time: 0.020352
[04/18/2022-02:36:02] [V] [TRT] Fastest Tactic: 0 Time: 0.020352
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(256,64,64:32,64,1) -> Float(1024,256,1,256,4) ***************
[04/18/2022-02:36:02] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881:0) (Reformat)
[04/18/2022-02:36:02] [V] [TRT] Tactic: 1002 Time: 0.055424
[04/18/2022-02:36:02] [V] [TRT] Tactic: 0 Time: 0.018816
[04/18/2022-02:36:02] [V] [TRT] Fastest Tactic: 0 Time: 0.018816
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(256,64,64:32,64,1) -> Float(256,64,1:4,64,1) ***************
[04/18/2022-02:36:02] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881:0) (Reformat)
[04/18/2022-02:36:02] [V] [TRT] Tactic: 1002 Time: 0.052992
[04/18/2022-02:36:02] [V] [TRT] Tactic: 0 Time: 0.019584
[04/18/2022-02:36:02] [V] [TRT] Fastest Tactic: 0 Time: 0.019584
[04/18/2022-02:36:02] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,64,1) -> Float(2048,512,128,64,1) ***************
[04/18/2022-02:36:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881:0 copy (Reformat)
[04/18/2022-02:36:02] [V] [TRT] Tactic: 1002 Time: 0.082304
[04/18/2022-02:36:02] [V] [TRT] Tactic: 0 Time: 0.017792
[04/18/2022-02:36:02] [V] [TRT] Fastest Tactic: 0 Time: 0.017792
[04/18/2022-02:36:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,64,1) -> Float(2048,512,1,256,4) ***************
[04/18/2022-02:36:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881:0 copy (Reformat)
[04/18/2022-02:36:02] [V] [TRT] Tactic: 1002 Time: 0.029056
[04/18/2022-02:36:02] [V] [TRT] Tactic: 0 Time: 0.018048
[04/18/2022-02:36:02] [V] [TRT] Fastest Tactic: 0 Time: 0.018048
[04/18/2022-02:36:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,64,1) -> Float(512,128,1:4,64,1) ***************
[04/18/2022-02:36:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881:0 copy (Reformat)
[04/18/2022-02:36:02] [V] [TRT] Tactic: 1002 Time: 0.029312
[04/18/2022-02:36:02] [V] [TRT] Tactic: 0 Time: 0.018816
[04/18/2022-02:36:02] [V] [TRT] Fastest Tactic: 0 Time: 0.018816
[04/18/2022-02:36:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,64,1) -> Float(512,128,128:32,64,1) ***************
[04/18/2022-02:36:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881:0 copy (Reformat)
[04/18/2022-02:36:02] [V] [TRT] Tactic: 1002 Time: 0.029312
[04/18/2022-02:36:02] [V] [TRT] Tactic: 0 Time: 0.023296
[04/18/2022-02:36:02] [V] [TRT] Fastest Tactic: 0 Time: 0.023296
[04/18/2022-02:36:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,1,256,4) -> Float(2048,512,128,64,1) ***************
[04/18/2022-02:36:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881:0 copy (Reformat)
[04/18/2022-02:36:02] [V] [TRT] Tactic: 1002 Time: 0.054144
[04/18/2022-02:36:02] [V] [TRT] Tactic: 0 Time: 0.018176
[04/18/2022-02:36:02] [V] [TRT] Fastest Tactic: 0 Time: 0.018176
[04/18/2022-02:36:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,1,256,4) -> Float(2048,512,1,256,4) ***************
[04/18/2022-02:36:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881:0 copy (Reformat)
[04/18/2022-02:36:02] [V] [TRT] Tactic: 1002 Time: 0.024832
[04/18/2022-02:36:02] [V] [TRT] Tactic: 0 Time: 0.018176
[04/18/2022-02:36:02] [V] [TRT] Fastest Tactic: 0 Time: 0.018176
[04/18/2022-02:36:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,1,256,4) -> Float(512,128,1:4,64,1) ***************
[04/18/2022-02:36:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881:0 copy (Reformat)
[04/18/2022-02:36:02] [V] [TRT] Tactic: 1002 Time: 0.0544
[04/18/2022-02:36:02] [V] [TRT] Tactic: 0 Time: 0.018304
[04/18/2022-02:36:02] [V] [TRT] Fastest Tactic: 0 Time: 0.018304
[04/18/2022-02:36:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,1,256,4) -> Float(512,128,128:32,64,1) ***************
[04/18/2022-02:36:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881:0 copy (Reformat)
[04/18/2022-02:36:02] [V] [TRT] Tactic: 1002 Time: 0.053888
[04/18/2022-02:36:02] [V] [TRT] Tactic: 0 Time: 0.022784
[04/18/2022-02:36:02] [V] [TRT] Fastest Tactic: 0 Time: 0.022784
[04/18/2022-02:36:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(256,64,1:4,64,1) -> Float(2048,512,128,64,1) ***************
[04/18/2022-02:36:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881:0 copy (Reformat)
[04/18/2022-02:36:02] [V] [TRT] Tactic: 1002 Time: 0.05504
[04/18/2022-02:36:02] [V] [TRT] Tactic: 0 Time: 0.017792
[04/18/2022-02:36:02] [V] [TRT] Fastest Tactic: 0 Time: 0.017792
[04/18/2022-02:36:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(256,64,1:4,64,1) -> Float(2048,512,1,256,4) ***************
[04/18/2022-02:36:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881:0 copy (Reformat)
[04/18/2022-02:36:02] [V] [TRT] Tactic: 1002 Time: 0.05312
[04/18/2022-02:36:02] [V] [TRT] Tactic: 0 Time: 0.018304
[04/18/2022-02:36:02] [V] [TRT] Fastest Tactic: 0 Time: 0.018304
[04/18/2022-02:36:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(256,64,1:4,64,1) -> Float(512,128,1:4,64,1) ***************
[04/18/2022-02:36:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881:0 copy (Reformat)
[04/18/2022-02:36:02] [V] [TRT] Tactic: 1002 Time: 0.053504
[04/18/2022-02:36:02] [V] [TRT] Tactic: 0 Time: 0.01856
[04/18/2022-02:36:02] [V] [TRT] Fastest Tactic: 0 Time: 0.01856
[04/18/2022-02:36:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(256,64,1:4,64,1) -> Float(512,128,128:32,64,1) ***************
[04/18/2022-02:36:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881:0 copy (Reformat)
[04/18/2022-02:36:02] [V] [TRT] Tactic: 1002 Time: 0.055296
[04/18/2022-02:36:02] [V] [TRT] Tactic: 0 Time: 0.023808
[04/18/2022-02:36:02] [V] [TRT] Fastest Tactic: 0 Time: 0.023808
[04/18/2022-02:36:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(256,64,64:32,64,1) -> Float(2048,512,128,64,1) ***************
[04/18/2022-02:36:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881:0 copy (Reformat)
[04/18/2022-02:36:02] [V] [TRT] Tactic: 1002 Time: 0.054528
[04/18/2022-02:36:02] [V] [TRT] Tactic: 0 Time: 0.020608
[04/18/2022-02:36:02] [V] [TRT] Fastest Tactic: 0 Time: 0.020608
[04/18/2022-02:36:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(256,64,64:32,64,1) -> Float(2048,512,1,256,4) ***************
[04/18/2022-02:36:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881:0 copy (Reformat)
[04/18/2022-02:36:02] [V] [TRT] Tactic: 1002 Time: 0.052608
[04/18/2022-02:36:02] [V] [TRT] Tactic: 0 Time: 0.01856
[04/18/2022-02:36:02] [V] [TRT] Fastest Tactic: 0 Time: 0.01856
[04/18/2022-02:36:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(256,64,64:32,64,1) -> Float(512,128,1:4,64,1) ***************
[04/18/2022-02:36:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881:0 copy (Reformat)
[04/18/2022-02:36:02] [V] [TRT] Tactic: 1002 Time: 0.054656
[04/18/2022-02:36:02] [V] [TRT] Tactic: 0 Time: 0.018944
[04/18/2022-02:36:02] [V] [TRT] Fastest Tactic: 0 Time: 0.018944
[04/18/2022-02:36:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(256,64,64:32,64,1) -> Float(512,128,128:32,64,1) ***************
[04/18/2022-02:36:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881:0 copy (Reformat)
[04/18/2022-02:36:02] [V] [TRT] Tactic: 1002 Time: 0.056064
[04/18/2022-02:36:02] [V] [TRT] Tactic: 0 Time: 0.023552
[04/18/2022-02:36:02] [V] [TRT] Fastest Tactic: 0 Time: 0.023552
[04/18/2022-02:36:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:02] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,64,1) -> Float(2048,512,128,64,1) ***************
[04/18/2022-02:36:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881:0 copy (Reformat)
[04/18/2022-02:36:02] [V] [TRT] Tactic: 1002 Time: 0.08128
[04/18/2022-02:36:02] [V] [TRT] Tactic: 0 Time: 0.017792
[04/18/2022-02:36:02] [V] [TRT] Fastest Tactic: 0 Time: 0.017792
[04/18/2022-02:36:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:02] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,64,1) -> Float(2048,512,1,256,4) ***************
[04/18/2022-02:36:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881:0 copy (Reformat)
[04/18/2022-02:36:03] [V] [TRT] Tactic: 1002 Time: 0.0288
[04/18/2022-02:36:03] [V] [TRT] Tactic: 0 Time: 0.017664
[04/18/2022-02:36:03] [V] [TRT] Fastest Tactic: 0 Time: 0.017664
[04/18/2022-02:36:03] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:03] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,64,1) -> Float(512,128,1:4,64,1) ***************
[04/18/2022-02:36:03] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881:0 copy (Reformat)
[04/18/2022-02:36:03] [V] [TRT] Tactic: 1002 Time: 0.0288
[04/18/2022-02:36:03] [V] [TRT] Tactic: 0 Time: 0.018432
[04/18/2022-02:36:03] [V] [TRT] Fastest Tactic: 0 Time: 0.018432
[04/18/2022-02:36:03] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:03] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,64,1) -> Float(512,128,128:32,64,1) ***************
[04/18/2022-02:36:03] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881:0 copy (Reformat)
[04/18/2022-02:36:03] [V] [TRT] Tactic: 1002 Time: 0.055296
[04/18/2022-02:36:03] [V] [TRT] Tactic: 0 Time: 0.051456
[04/18/2022-02:36:03] [V] [TRT] Fastest Tactic: 0 Time: 0.051456
[04/18/2022-02:36:03] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:03] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,1,256,4) -> Float(2048,512,128,64,1) ***************
[04/18/2022-02:36:03] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881:0 copy (Reformat)
[04/18/2022-02:36:03] [V] [TRT] Tactic: 1002 Time: 0.053632
[04/18/2022-02:36:03] [V] [TRT] Tactic: 0 Time: 0.018048
[04/18/2022-02:36:03] [V] [TRT] Fastest Tactic: 0 Time: 0.018048
[04/18/2022-02:36:03] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:03] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,1,256,4) -> Float(2048,512,1,256,4) ***************
[04/18/2022-02:36:03] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881:0 copy (Reformat)
[04/18/2022-02:36:03] [V] [TRT] Tactic: 1002 Time: 0.024576
[04/18/2022-02:36:03] [V] [TRT] Tactic: 0 Time: 0.018304
[04/18/2022-02:36:03] [V] [TRT] Fastest Tactic: 0 Time: 0.018304
[04/18/2022-02:36:03] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:03] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,1,256,4) -> Float(512,128,1:4,64,1) ***************
[04/18/2022-02:36:03] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881:0 copy (Reformat)
[04/18/2022-02:36:03] [V] [TRT] Tactic: 1002 Time: 0.053504
[04/18/2022-02:36:03] [V] [TRT] Tactic: 0 Time: 0.018432
[04/18/2022-02:36:03] [V] [TRT] Fastest Tactic: 0 Time: 0.018432
[04/18/2022-02:36:03] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:03] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,1,256,4) -> Float(512,128,128:32,64,1) ***************
[04/18/2022-02:36:03] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881:0 copy (Reformat)
[04/18/2022-02:36:03] [V] [TRT] Tactic: 1002 Time: 0.054272
[04/18/2022-02:36:03] [V] [TRT] Tactic: 0 Time: 0.050816
[04/18/2022-02:36:03] [V] [TRT] Fastest Tactic: 0 Time: 0.050816
[04/18/2022-02:36:03] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:03] [V] [TRT] *************** Autotuning Reformat: Float(256,64,1:4,64,1) -> Float(2048,512,128,64,1) ***************
[04/18/2022-02:36:03] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881:0 copy (Reformat)
[04/18/2022-02:36:03] [V] [TRT] Tactic: 1002 Time: 0.054016
[04/18/2022-02:36:03] [V] [TRT] Tactic: 0 Time: 0.018176
[04/18/2022-02:36:03] [V] [TRT] Fastest Tactic: 0 Time: 0.018176
[04/18/2022-02:36:03] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:03] [V] [TRT] *************** Autotuning Reformat: Float(256,64,1:4,64,1) -> Float(2048,512,1,256,4) ***************
[04/18/2022-02:36:03] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881:0 copy (Reformat)
[04/18/2022-02:36:03] [V] [TRT] Tactic: 1002 Time: 0.054016
[04/18/2022-02:36:03] [V] [TRT] Tactic: 0 Time: 0.018304
[04/18/2022-02:36:03] [V] [TRT] Fastest Tactic: 0 Time: 0.018304
[04/18/2022-02:36:03] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:03] [V] [TRT] *************** Autotuning Reformat: Float(256,64,1:4,64,1) -> Float(512,128,1:4,64,1) ***************
[04/18/2022-02:36:03] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881:0 copy (Reformat)
[04/18/2022-02:36:03] [V] [TRT] Tactic: 1002 Time: 0.05696
[04/18/2022-02:36:03] [V] [TRT] Tactic: 0 Time: 0.018944
[04/18/2022-02:36:03] [V] [TRT] Fastest Tactic: 0 Time: 0.018944
[04/18/2022-02:36:03] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:03] [V] [TRT] *************** Autotuning Reformat: Float(256,64,1:4,64,1) -> Float(512,128,128:32,64,1) ***************
[04/18/2022-02:36:03] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881:0 copy (Reformat)
[04/18/2022-02:36:03] [V] [TRT] Tactic: 1002 Time: 0.053376
[04/18/2022-02:36:03] [V] [TRT] Tactic: 0 Time: 0.050432
[04/18/2022-02:36:03] [V] [TRT] Fastest Tactic: 0 Time: 0.050432
[04/18/2022-02:36:03] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:03] [V] [TRT] *************** Autotuning Reformat: Float(256,64,64:32,64,1) -> Float(2048,512,128,64,1) ***************
[04/18/2022-02:36:03] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881:0 copy (Reformat)
[04/18/2022-02:36:03] [V] [TRT] Tactic: 1002 Time: 0.053632
[04/18/2022-02:36:03] [V] [TRT] Tactic: 0 Time: 0.020992
[04/18/2022-02:36:03] [V] [TRT] Fastest Tactic: 0 Time: 0.020992
[04/18/2022-02:36:03] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:03] [V] [TRT] *************** Autotuning Reformat: Float(256,64,64:32,64,1) -> Float(2048,512,1,256,4) ***************
[04/18/2022-02:36:03] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881:0 copy (Reformat)
[04/18/2022-02:36:03] [V] [TRT] Tactic: 1002 Time: 0.055168
[04/18/2022-02:36:03] [V] [TRT] Tactic: 0 Time: 0.018688
[04/18/2022-02:36:03] [V] [TRT] Fastest Tactic: 0 Time: 0.018688
[04/18/2022-02:36:03] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:03] [V] [TRT] *************** Autotuning Reformat: Float(256,64,64:32,64,1) -> Float(512,128,1:4,64,1) ***************
[04/18/2022-02:36:03] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881:0 copy (Reformat)
[04/18/2022-02:36:03] [V] [TRT] Tactic: 1002 Time: 0.054784
[04/18/2022-02:36:03] [V] [TRT] Tactic: 0 Time: 0.01856
[04/18/2022-02:36:03] [V] [TRT] Fastest Tactic: 0 Time: 0.01856
[04/18/2022-02:36:03] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:03] [V] [TRT] *************** Autotuning Reformat: Float(256,64,64:32,64,1) -> Float(512,128,128:32,64,1) ***************
[04/18/2022-02:36:03] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__881:0 copy (Reformat)
[04/18/2022-02:36:03] [V] [TRT] Tactic: 1002 Time: 0.055936
[04/18/2022-02:36:03] [V] [TRT] Tactic: 0 Time: 0.052352
[04/18/2022-02:36:03] [V] [TRT] Fastest Tactic: 0 Time: 0.052352
[04/18/2022-02:36:03] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:03] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:03] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,128,64,1) -> Float(2048,512,1,256,4) ***************
[04/18/2022-02:36:03] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__882:0 -> ) (Reformat)
[04/18/2022-02:36:03] [V] [TRT] Tactic: 1002 Time: 0.029568
[04/18/2022-02:36:03] [V] [TRT] Tactic: 0 Time: 0.019712
[04/18/2022-02:36:03] [V] [TRT] Fastest Tactic: 0 Time: 0.019712
[04/18/2022-02:36:03] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,128,64,1) -> Float(512,128,1:4,64,1) ***************
[04/18/2022-02:36:03] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__882:0 -> ) (Reformat)
[04/18/2022-02:36:03] [V] [TRT] Tactic: 1002 Time: 0.02944
[04/18/2022-02:36:03] [V] [TRT] Tactic: 0 Time: 0.019712
[04/18/2022-02:36:03] [V] [TRT] Fastest Tactic: 0 Time: 0.019712
[04/18/2022-02:36:03] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,128,64,1) -> Float(512,128,128:32,64,1) ***************
[04/18/2022-02:36:03] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__882:0 -> ) (Reformat)
[04/18/2022-02:36:03] [V] [TRT] Tactic: 1002 Time: 0.059008
[04/18/2022-02:36:03] [V] [TRT] Tactic: 0 Time: 0.08896
[04/18/2022-02:36:03] [V] [TRT] Fastest Tactic: 1002 Time: 0.059008
[04/18/2022-02:36:03] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,1,256,4) -> Float(2048,512,128,64,1) ***************
[04/18/2022-02:36:03] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__882:0 -> ) (Reformat)
[04/18/2022-02:36:03] [V] [TRT]
Tactic: 1002 Time: 0.05824 [04/18/2022-02:36:03] [V] [TRT] Tactic: 0 Time: 0.019584 [04/18/2022-02:36:03] [V] [TRT] Fastest Tactic: 0 Time: 0.019584 [04/18/2022-02:36:03] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,1,256,4) -> Float(512,128,1:4,64,1) *************** [04/18/2022-02:36:03] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__882:0 -> ) (Reformat) [04/18/2022-02:36:03] [V] [TRT] Tactic: 1002 Time: 0.059648 [04/18/2022-02:36:03] [V] [TRT] Tactic: 0 Time: 0.020224 [04/18/2022-02:36:03] [V] [TRT] Fastest Tactic: 0 Time: 0.020224 [04/18/2022-02:36:03] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,1,256,4) -> Float(512,128,128:32,64,1) *************** [04/18/2022-02:36:03] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__882:0 -> ) (Reformat) [04/18/2022-02:36:03] [V] [TRT] Tactic: 1002 Time: 0.061568 [04/18/2022-02:36:03] [V] [TRT] Tactic: 0 Time: 0.088576 [04/18/2022-02:36:03] [V] [TRT] Fastest Tactic: 1002 Time: 0.061568 [04/18/2022-02:36:03] [V] [TRT] *************** Autotuning Reformat: Float(512,128,1:4,64,1) -> Float(2048,512,128,64,1) *************** [04/18/2022-02:36:03] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__882:0 -> ) (Reformat) [04/18/2022-02:36:03] [V] [TRT] Tactic: 1002 Time: 0.058368 [04/18/2022-02:36:03] [V] [TRT] Tactic: 0 Time: 0.01984 [04/18/2022-02:36:03] [V] [TRT] Fastest Tactic: 0 Time: 0.01984 [04/18/2022-02:36:03] [V] [TRT] *************** Autotuning Reformat: Float(512,128,1:4,64,1) -> 
Float(2048,512,1,256,4) *************** [04/18/2022-02:36:03] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__882:0 -> ) (Reformat) [04/18/2022-02:36:03] [V] [TRT] Tactic: 1002 Time: 0.05952 [04/18/2022-02:36:03] [V] [TRT] Tactic: 0 Time: 0.020608 [04/18/2022-02:36:03] [V] [TRT] Fastest Tactic: 0 Time: 0.020608 [04/18/2022-02:36:03] [V] [TRT] *************** Autotuning Reformat: Float(512,128,1:4,64,1) -> Float(512,128,128:32,64,1) *************** [04/18/2022-02:36:03] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__882:0 -> ) (Reformat) [04/18/2022-02:36:03] [V] [TRT] Tactic: 1002 Time: 0.060288 [04/18/2022-02:36:03] [V] [TRT] Tactic: 0 Time: 0.088064 [04/18/2022-02:36:03] [V] [TRT] Fastest Tactic: 1002 Time: 0.060288 [04/18/2022-02:36:03] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128:32,64,1) -> Float(2048,512,128,64,1) *************** [04/18/2022-02:36:03] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__882:0 -> ) (Reformat) [04/18/2022-02:36:03] [V] [TRT] Tactic: 1002 Time: 0.059136 [04/18/2022-02:36:03] [V] [TRT] Tactic: 0 Time: 0.0224 [04/18/2022-02:36:03] [V] [TRT] Fastest Tactic: 0 Time: 0.0224 [04/18/2022-02:36:03] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128:32,64,1) -> Float(2048,512,1,256,4) *************** [04/18/2022-02:36:03] [V] [TRT] --------------- Timing Runner: Optimizer 
Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__882:0 -> ) (Reformat) [04/18/2022-02:36:03] [V] [TRT] Tactic: 1002 Time: 0.05952 [04/18/2022-02:36:03] [V] [TRT] Tactic: 0 Time: 0.022528 [04/18/2022-02:36:03] [V] [TRT] Fastest Tactic: 0 Time: 0.022528 [04/18/2022-02:36:03] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128:32,64,1) -> Float(512,128,1:4,64,1) *************** [04/18/2022-02:36:03] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__882:0 -> ) (Reformat) [04/18/2022-02:36:03] [V] [TRT] Tactic: 1002 Time: 0.060288 [04/18/2022-02:36:03] [V] [TRT] Tactic: 0 Time: 0.022528 [04/18/2022-02:36:03] [V] [TRT] Fastest Tactic: 0 Time: 0.022528 [04/18/2022-02:36:03] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:03] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,128,64,1) -> Float(2048,512,512,1,256,4) *************** [04/18/2022-02:36:03] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883:0) (Reformat) [04/18/2022-02:36:03] [V] [TRT] Tactic: 1002 Time: 0.029312 [04/18/2022-02:36:03] [V] [TRT] Tactic: 0 Time: 0.0192 [04/18/2022-02:36:03] [V] [TRT] Fastest Tactic: 0 Time: 0.0192 [04/18/2022-02:36:03] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,128,64,1) -> Float(512,128,128,1:4,64,1) *************** [04/18/2022-02:36:03] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883:0) (Reformat) [04/18/2022-02:36:03] [V] [TRT] Tactic: 1002 Time: 0.029568 [04/18/2022-02:36:03] [V] [TRT] Tactic: 0 Time: 0.021248 [04/18/2022-02:36:03] [V] [TRT] Fastest Tactic: 0 Time: 0.021248 [04/18/2022-02:36:03] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,128,64,1) -> Float(512,128,128,128:32,64,1) *************** [04/18/2022-02:36:03] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883:0) (Reformat) [04/18/2022-02:36:03] [V] [TRT] Tactic: 1002 Time: 0.05888 [04/18/2022-02:36:03] [V] [TRT] Tactic: 0 Time: 0.089088 [04/18/2022-02:36:03] [V] [TRT] Fastest Tactic: 1002 Time: 0.05888 [04/18/2022-02:36:03] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,1,256,4) -> Float(2048,512,512,128,64,1) *************** [04/18/2022-02:36:03] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883:0) (Reformat) [04/18/2022-02:36:03] [V] [TRT] Tactic: 1002 Time: 0.057984 [04/18/2022-02:36:03] [V] [TRT] Tactic: 0 Time: 0.020224 [04/18/2022-02:36:03] [V] [TRT] Fastest Tactic: 0 Time: 0.020224 [04/18/2022-02:36:03] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,1,256,4) -> Float(512,128,128,1:4,64,1) *************** [04/18/2022-02:36:03] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883:0) (Reformat) [04/18/2022-02:36:03] [V] [TRT] Tactic: 1002 
Time: 0.062464 [04/18/2022-02:36:03] [V] [TRT] Tactic: 0 Time: 0.0192 [04/18/2022-02:36:03] [V] [TRT] Fastest Tactic: 0 Time: 0.0192 [04/18/2022-02:36:03] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,1,256,4) -> Float(512,128,128,128:32,64,1) *************** [04/18/2022-02:36:03] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883:0) (Reformat) [04/18/2022-02:36:03] [V] [TRT] Tactic: 1002 Time: 0.060288 [04/18/2022-02:36:03] [V] [TRT] Tactic: 0 Time: 0.08704 [04/18/2022-02:36:03] [V] [TRT] Fastest Tactic: 1002 Time: 0.060288 [04/18/2022-02:36:03] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,1:4,64,1) -> Float(2048,512,512,128,64,1) *************** [04/18/2022-02:36:03] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883:0) (Reformat) [04/18/2022-02:36:03] [V] [TRT] Tactic: 1002 Time: 0.060288 [04/18/2022-02:36:03] [V] [TRT] Tactic: 0 Time: 0.019584 [04/18/2022-02:36:03] [V] [TRT] Fastest Tactic: 0 Time: 0.019584 [04/18/2022-02:36:03] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,1:4,64,1) -> Float(2048,512,512,1,256,4) *************** [04/18/2022-02:36:03] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883:0) (Reformat) [04/18/2022-02:36:03] [V] [TRT] Tactic: 1002 Time: 0.06272 [04/18/2022-02:36:03] [V] [TRT] Tactic: 0 Time: 0.020608 [04/18/2022-02:36:03] [V] [TRT] Fastest Tactic: 0 Time: 0.020608 [04/18/2022-02:36:03] [V] [TRT] *************** Autotuning Reformat: 
Float(512,128,128,1:4,64,1) -> Float(512,128,128,128:32,64,1) *************** [04/18/2022-02:36:03] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883:0) (Reformat) [04/18/2022-02:36:03] [V] [TRT] Tactic: 1002 Time: 0.062208 [04/18/2022-02:36:03] [V] [TRT] Tactic: 0 Time: 0.087808 [04/18/2022-02:36:03] [V] [TRT] Fastest Tactic: 1002 Time: 0.062208 [04/18/2022-02:36:03] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,128:32,64,1) -> Float(2048,512,512,128,64,1) *************** [04/18/2022-02:36:03] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883:0) (Reformat) [04/18/2022-02:36:03] [V] [TRT] Tactic: 1002 Time: 0.058752 [04/18/2022-02:36:03] [V] [TRT] Tactic: 0 Time: 0.02304 [04/18/2022-02:36:03] [V] [TRT] Fastest Tactic: 0 Time: 0.02304 [04/18/2022-02:36:03] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,128:32,64,1) -> Float(2048,512,512,1,256,4) *************** [04/18/2022-02:36:03] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883:0) (Reformat) [04/18/2022-02:36:03] [V] [TRT] Tactic: 1002 Time: 0.059776 [04/18/2022-02:36:03] [V] [TRT] Tactic: 0 Time: 0.019456 [04/18/2022-02:36:03] [V] [TRT] Fastest Tactic: 0 Time: 0.019456 [04/18/2022-02:36:03] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,128:32,64,1) -> Float(512,128,128,1:4,64,1) *************** [04/18/2022-02:36:03] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883:0) (Reformat) [04/18/2022-02:36:03] [V] [TRT] Tactic: 1002 Time: 0.059008 [04/18/2022-02:36:03] [V] [TRT] Tactic: 0 Time: 0.019712 [04/18/2022-02:36:03] [V] [TRT] Fastest Tactic: 0 Time: 0.019712 [04/18/2022-02:36:03] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:03] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,128,64,1) -> Float(4096,1024,512,128,64,1) *************** [04/18/2022-02:36:03] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883:0 copy (Reformat) [04/18/2022-02:36:03] [V] [TRT] Tactic: 1002 Time: 0.119168 [04/18/2022-02:36:03] [V] [TRT] Tactic: 0 Time: 0.019712 [04/18/2022-02:36:03] [V] [TRT] Fastest Tactic: 0 Time: 0.019712 [04/18/2022-02:36:03] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:03] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,128,64,1) -> Float(4096,1024,512,1,256,4) *************** [04/18/2022-02:36:03] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883:0 copy (Reformat) [04/18/2022-02:36:03] [V] [TRT] Tactic: 1002 Time: 0.211584 [04/18/2022-02:36:03] [V] [TRT] Tactic: 0 Time: 0.019968 [04/18/2022-02:36:03] [V] [TRT] Fastest Tactic: 0 Time: 0.019968 [04/18/2022-02:36:03] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:03] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,128,64,1) -> Float(1024,256,128,1:4,64,1) *************** [04/18/2022-02:36:03] [V] [TRT] --------------- Timing Runner: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883:0 copy (Reformat) [04/18/2022-02:36:03] [V] [TRT] Tactic: 1002 Time: 0.214656 [04/18/2022-02:36:03] [V] [TRT] Tactic: 0 Time: 0.020992 [04/18/2022-02:36:03] [V] [TRT] Fastest Tactic: 0 Time: 0.020992 [04/18/2022-02:36:03] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:03] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,128,64,1) -> Float(1024,256,128,128:32,64,1) *************** [04/18/2022-02:36:03] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883:0 copy (Reformat) [04/18/2022-02:36:03] [V] [TRT] Tactic: 1002 Time: 0.213888 [04/18/2022-02:36:03] [V] [TRT] Tactic: 0 Time: 0.027264 [04/18/2022-02:36:03] [V] [TRT] Fastest Tactic: 0 Time: 0.027264 [04/18/2022-02:36:03] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:03] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,1,256,4) -> Float(4096,1024,512,128,64,1) *************** [04/18/2022-02:36:03] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883:0 copy (Reformat) [04/18/2022-02:36:03] [V] [TRT] Tactic: 1002 Time: 0.207232 [04/18/2022-02:36:03] [V] [TRT] Tactic: 0 Time: 0.021248 [04/18/2022-02:36:03] [V] [TRT] Fastest Tactic: 0 Time: 0.021248 [04/18/2022-02:36:03] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:03] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,1,256,4) -> Float(4096,1024,512,1,256,4) *************** [04/18/2022-02:36:03] [V] [TRT] --------------- Timing Runner: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883:0 copy (Reformat) [04/18/2022-02:36:04] [V] [TRT] Tactic: 1002 Time: 2.26176 [04/18/2022-02:36:04] [V] [TRT] Tactic: 0 Time: 0.019456 [04/18/2022-02:36:04] [V] [TRT] Fastest Tactic: 0 Time: 0.019456 [04/18/2022-02:36:04] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:04] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,1,256,4) -> Float(1024,256,128,1:4,64,1) *************** [04/18/2022-02:36:04] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883:0 copy (Reformat) [04/18/2022-02:36:04] [V] [TRT] Tactic: 1002 Time: 0.062464 [04/18/2022-02:36:04] [V] [TRT] Tactic: 0 Time: 0.020736 [04/18/2022-02:36:04] [V] [TRT] Fastest Tactic: 0 Time: 0.020736 [04/18/2022-02:36:04] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:04] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,1,256,4) -> Float(1024,256,128,128:32,64,1) *************** [04/18/2022-02:36:04] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883:0 copy (Reformat) [04/18/2022-02:36:04] [V] [TRT] Tactic: 1002 Time: 1.59066 [04/18/2022-02:36:04] [V] [TRT] Tactic: 0 Time: 0.027776 [04/18/2022-02:36:04] [V] [TRT] Fastest Tactic: 0 Time: 0.027776 [04/18/2022-02:36:04] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:04] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,1:4,64,1) -> Float(4096,1024,512,128,64,1) *************** [04/18/2022-02:36:04] [V] [TRT] --------------- Timing Runner: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883:0 copy (Reformat) [04/18/2022-02:36:04] [V] [TRT] Tactic: 1002 Time: 0.2112 [04/18/2022-02:36:04] [V] [TRT] Tactic: 0 Time: 0.020736 [04/18/2022-02:36:04] [V] [TRT] Fastest Tactic: 0 Time: 0.020736 [04/18/2022-02:36:04] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:04] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,1:4,64,1) -> Float(4096,1024,512,1,256,4) *************** [04/18/2022-02:36:04] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883:0 copy (Reformat) [04/18/2022-02:36:04] [V] [TRT] Tactic: 1002 Time: 2.42445 [04/18/2022-02:36:04] [V] [TRT] Tactic: 0 Time: 0.019712 [04/18/2022-02:36:04] [V] [TRT] Fastest Tactic: 0 Time: 0.019712 [04/18/2022-02:36:04] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:04] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,1:4,64,1) -> Float(1024,256,128,1:4,64,1) *************** [04/18/2022-02:36:04] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883:0 copy (Reformat) [04/18/2022-02:36:04] [V] [TRT] Tactic: 1002 Time: 0.06336 [04/18/2022-02:36:04] [V] [TRT] Tactic: 0 Time: 0.021504 [04/18/2022-02:36:04] [V] [TRT] Fastest Tactic: 0 Time: 0.021504 [04/18/2022-02:36:04] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:04] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,1:4,64,1) -> Float(1024,256,128,128:32,64,1) *************** [04/18/2022-02:36:04] [V] [TRT] --------------- Timing Runner: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883:0 copy (Reformat) [04/18/2022-02:36:04] [V] [TRT] Tactic: 1002 Time: 1.59386 [04/18/2022-02:36:04] [V] [TRT] Tactic: 0 Time: 0.028032 [04/18/2022-02:36:04] [V] [TRT] Fastest Tactic: 0 Time: 0.028032 [04/18/2022-02:36:04] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:04] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,128:32,64,1) -> Float(4096,1024,512,128,64,1) *************** [04/18/2022-02:36:04] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883:0 copy (Reformat) [04/18/2022-02:36:04] [V] [TRT] Tactic: 1002 Time: 0.209664 [04/18/2022-02:36:04] [V] [TRT] Tactic: 0 Time: 0.022784 [04/18/2022-02:36:04] [V] [TRT] Fastest Tactic: 0 Time: 0.022784 [04/18/2022-02:36:04] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:04] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,128:32,64,1) -> Float(4096,1024,512,1,256,4) *************** [04/18/2022-02:36:04] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883:0 copy (Reformat) [04/18/2022-02:36:04] [V] [TRT] Tactic: 1002 Time: 1.59322 [04/18/2022-02:36:04] [V] [TRT] Tactic: 0 Time: 0.021376 [04/18/2022-02:36:04] [V] [TRT] Fastest Tactic: 0 Time: 0.021376 [04/18/2022-02:36:04] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:04] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,128:32,64,1) -> Float(1024,256,128,1:4,64,1) *************** [04/18/2022-02:36:04] [V] [TRT] --------------- Timing Runner: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883:0 copy (Reformat) [04/18/2022-02:36:04] [V] [TRT] Tactic: 1002 Time: 0.061568 [04/18/2022-02:36:04] [V] [TRT] Tactic: 0 Time: 0.020864 [04/18/2022-02:36:04] [V] [TRT] Fastest Tactic: 0 Time: 0.020864 [04/18/2022-02:36:04] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:04] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,128:32,64,1) -> Float(1024,256,128,128:32,64,1) *************** [04/18/2022-02:36:04] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883:0 copy (Reformat) [04/18/2022-02:36:04] [V] [TRT] Tactic: 1002 Time: 1.91462 [04/18/2022-02:36:04] [V] [TRT] Tactic: 0 Time: 0.029312 [04/18/2022-02:36:04] [V] [TRT] Fastest Tactic: 0 Time: 0.029312 [04/18/2022-02:36:04] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:04] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:04] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,128,64,1) -> Float(4096,1024,512,128,64,1) *************** [04/18/2022-02:36:04] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883:0 copy (Reformat) [04/18/2022-02:36:04] [V] [TRT] Tactic: 1002 Time: 0.117632 [04/18/2022-02:36:04] [V] [TRT] Tactic: 0 Time: 0.019584 [04/18/2022-02:36:04] [V] [TRT] Fastest Tactic: 0 Time: 0.019584 [04/18/2022-02:36:04] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:04] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,128,64,1) -> Float(4096,1024,512,1,256,4) *************** 
[04/18/2022-02:36:04] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883:0 copy (Reformat) [04/18/2022-02:36:04] [V] [TRT] Tactic: 1002 Time: 0.212992 [04/18/2022-02:36:04] [V] [TRT] Tactic: 0 Time: 0.02176 [04/18/2022-02:36:04] [V] [TRT] Fastest Tactic: 0 Time: 0.02176 [04/18/2022-02:36:04] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:04] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,128,64,1) -> Float(1024,256,128,1:4,64,1) *************** [04/18/2022-02:36:04] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883:0 copy (Reformat) [04/18/2022-02:36:04] [V] [TRT] Tactic: 1002 Time: 0.215424 [04/18/2022-02:36:04] [V] [TRT] Tactic: 0 Time: 0.020224 [04/18/2022-02:36:04] [V] [TRT] Fastest Tactic: 0 Time: 0.020224 [04/18/2022-02:36:04] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:04] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,128,64,1) -> Float(1024,256,128,128:32,64,1) *************** [04/18/2022-02:36:04] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883:0 copy (Reformat) [04/18/2022-02:36:04] [V] [TRT] Tactic: 1002 Time: 1.66554 [04/18/2022-02:36:04] [V] [TRT] Tactic: 0 Time: 0.086656 [04/18/2022-02:36:04] [V] [TRT] Fastest Tactic: 0 Time: 0.086656 [04/18/2022-02:36:04] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:04] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,1,256,4) -> Float(4096,1024,512,128,64,1) *************** [04/18/2022-02:36:04] 
[V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883:0 copy (Reformat)
[04/18/2022-02:36:04] [V] [TRT] Tactic: 1002 Time: 0.208384
[04/18/2022-02:36:04] [V] [TRT] Tactic: 0 Time: 0.01984
[04/18/2022-02:36:04] [V] [TRT] Fastest Tactic: 0 Time: 0.01984
[04/18/2022-02:36:04] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:04] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,1,256,4) -> Float(4096,1024,512,1,256,4) ***************
[04/18/2022-02:36:04] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883:0 copy (Reformat)
[04/18/2022-02:36:04] [V] [TRT] Tactic: 1002 Time: 1.58925
[04/18/2022-02:36:04] [V] [TRT] Tactic: 0 Time: 0.021248
[04/18/2022-02:36:04] [V] [TRT] Fastest Tactic: 0 Time: 0.021248
[04/18/2022-02:36:04] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:04] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,1,256,4) -> Float(1024,256,128,1:4,64,1) ***************
[04/18/2022-02:36:04] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883:0 copy (Reformat)
[04/18/2022-02:36:04] [V] [TRT] Tactic: 1002 Time: 0.062464
[04/18/2022-02:36:04] [V] [TRT] Tactic: 0 Time: 0.01984
[04/18/2022-02:36:04] [V] [TRT] Fastest Tactic: 0 Time: 0.01984
[04/18/2022-02:36:04] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:04] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,1,256,4) -> Float(1024,256,128,128:32,64,1) ***************
[04/18/2022-02:36:04] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883:0 copy (Reformat)
[04/18/2022-02:36:04] [V] [TRT] Tactic: 1002 Time: 1.58976
[04/18/2022-02:36:04] [V] [TRT] Tactic: 0 Time: 0.08896
[04/18/2022-02:36:04] [V] [TRT] Fastest Tactic: 0 Time: 0.08896
[04/18/2022-02:36:04] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:04] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,1:4,64,1) -> Float(4096,1024,512,128,64,1) ***************
[04/18/2022-02:36:04] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883:0 copy (Reformat)
[04/18/2022-02:36:04] [V] [TRT] Tactic: 1002 Time: 0.210688
[04/18/2022-02:36:04] [V] [TRT] Tactic: 0 Time: 0.02048
[04/18/2022-02:36:04] [V] [TRT] Fastest Tactic: 0 Time: 0.02048
[04/18/2022-02:36:04] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:04] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,1:4,64,1) -> Float(4096,1024,512,1,256,4) ***************
[04/18/2022-02:36:04] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883:0 copy (Reformat)
[04/18/2022-02:36:04] [V] [TRT] Tactic: 1002 Time: 2.9161
[04/18/2022-02:36:04] [V] [TRT] Tactic: 0 Time: 0.02048
[04/18/2022-02:36:04] [V] [TRT] Fastest Tactic: 0 Time: 0.02048
[04/18/2022-02:36:04] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:04] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,1:4,64,1) -> Float(1024,256,128,1:4,64,1) ***************
[04/18/2022-02:36:04] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883:0 copy (Reformat)
[04/18/2022-02:36:04] [V] [TRT] Tactic: 1002 Time: 0.061568
[04/18/2022-02:36:04] [V] [TRT] Tactic: 0 Time: 0.021504
[04/18/2022-02:36:04] [V] [TRT] Fastest Tactic: 0 Time: 0.021504
[04/18/2022-02:36:04] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:04] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,1:4,64,1) -> Float(1024,256,128,128:32,64,1) ***************
[04/18/2022-02:36:04] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883:0 copy (Reformat)
[04/18/2022-02:36:04] [V] [TRT] Tactic: 1002 Time: 1.59373
[04/18/2022-02:36:04] [V] [TRT] Tactic: 0 Time: 0.087424
[04/18/2022-02:36:04] [V] [TRT] Fastest Tactic: 0 Time: 0.087424
[04/18/2022-02:36:04] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:04] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,128:32,64,1) -> Float(4096,1024,512,128,64,1) ***************
[04/18/2022-02:36:04] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883:0 copy (Reformat)
[04/18/2022-02:36:04] [V] [TRT] Tactic: 1002 Time: 0.210816
[04/18/2022-02:36:04] [V] [TRT] Tactic: 0 Time: 0.024448
[04/18/2022-02:36:04] [V] [TRT] Fastest Tactic: 0 Time: 0.024448
[04/18/2022-02:36:04] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:04] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,128:32,64,1) -> Float(4096,1024,512,1,256,4) ***************
[04/18/2022-02:36:04] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883:0 copy (Reformat)
[04/18/2022-02:36:04] [V] [TRT] Tactic: 1002 Time: 1.59526
[04/18/2022-02:36:04] [V] [TRT] Tactic: 0 Time: 0.022528
[04/18/2022-02:36:04] [V] [TRT] Fastest Tactic: 0 Time: 0.022528
[04/18/2022-02:36:04] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:04] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,128:32,64,1) -> Float(1024,256,128,1:4,64,1) ***************
[04/18/2022-02:36:04] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883:0 copy (Reformat)
[04/18/2022-02:36:04] [V] [TRT] Tactic: 1002 Time: 0.063616
[04/18/2022-02:36:04] [V] [TRT] Tactic: 0 Time: 0.021376
[04/18/2022-02:36:04] [V] [TRT] Fastest Tactic: 0 Time: 0.021376
[04/18/2022-02:36:04] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:04] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,128:32,64,1) -> Float(1024,256,128,128:32,64,1) ***************
[04/18/2022-02:36:04] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__883:0 copy (Reformat)
[04/18/2022-02:36:05] [V] [TRT] Tactic: 1002 Time: 1.59334
[04/18/2022-02:36:05] [V] [TRT] Tactic: 0 Time: 0.090752
[04/18/2022-02:36:05] [V] [TRT] Fastest Tactic: 0 Time: 0.090752
[04/18/2022-02:36:05] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:05] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:05] [V] [TRT] *************** Autotuning Reformat: Float(4096,1024,512,128,64,1) -> Float(4096,1024,512,1,256,4) ***************
[04/18/2022-02:36:05] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__885:0 -> ) (Reformat)
[04/18/2022-02:36:05] [V] [TRT] Tactic: 1002 Time: 0.032
[04/18/2022-02:36:05] [V] [TRT] Tactic: 0 Time: 0.02048
[04/18/2022-02:36:05] [V] [TRT] Fastest Tactic: 0 Time: 0.02048
[04/18/2022-02:36:05] [V] [TRT] *************** Autotuning Reformat: Float(4096,1024,512,128,64,1) -> Float(1024,256,128,1:4,64,1) ***************
[04/18/2022-02:36:05] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__885:0 -> ) (Reformat)
[04/18/2022-02:36:05] [V] [TRT] Tactic: 1002 Time: 0.031744
[04/18/2022-02:36:05] [V] [TRT] Tactic: 0 Time: 0.021888
[04/18/2022-02:36:05] [V] [TRT] Fastest Tactic: 0 Time: 0.021888
[04/18/2022-02:36:05] [V] [TRT] *************** Autotuning Reformat: Float(4096,1024,512,128,64,1) -> Float(1024,256,128,128:32,64,1) ***************
[04/18/2022-02:36:05] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__885:0 -> ) (Reformat)
[04/18/2022-02:36:05] [V] [TRT] Tactic: 1002 Time: 0.095232
[04/18/2022-02:36:05] [V] [TRT] Tactic: 0 Time: 0.15424
[04/18/2022-02:36:05] [V] [TRT] Fastest Tactic: 1002 Time: 0.095232
[04/18/2022-02:36:05] [V] [TRT] *************** Autotuning Reformat: Float(4096,1024,512,1,256,4) -> Float(4096,1024,512,128,64,1) ***************
[04/18/2022-02:36:05] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__885:0 -> ) (Reformat)
[04/18/2022-02:36:05] [V] [TRT] Tactic: 1002 Time: 0.094848
[04/18/2022-02:36:05] [V] [TRT] Tactic: 0 Time: 0.021632
[04/18/2022-02:36:05] [V] [TRT] Fastest Tactic: 0 Time: 0.021632
[04/18/2022-02:36:05] [V] [TRT] *************** Autotuning Reformat: Float(4096,1024,512,1,256,4) -> Float(1024,256,128,1:4,64,1) ***************
[04/18/2022-02:36:05] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__885:0 -> ) (Reformat)
[04/18/2022-02:36:05] [V] [TRT] Tactic: 1002 Time: 0.092544
[04/18/2022-02:36:05] [V] [TRT] Tactic: 0 Time: 0.023168
[04/18/2022-02:36:05] [V] [TRT] Fastest Tactic: 0 Time: 0.023168
[04/18/2022-02:36:05] [V] [TRT] *************** Autotuning Reformat: Float(4096,1024,512,1,256,4) -> Float(1024,256,128,128:32,64,1) ***************
[04/18/2022-02:36:05] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__885:0 -> ) (Reformat)
[04/18/2022-02:36:05] [V] [TRT] Tactic: 1002 Time: 0.093952
[04/18/2022-02:36:05] [V] [TRT] Tactic: 0 Time: 0.155264
[04/18/2022-02:36:05] [V] [TRT] Fastest Tactic: 1002 Time: 0.093952
[04/18/2022-02:36:05] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,128,1:4,64,1) -> Float(4096,1024,512,128,64,1) ***************
[04/18/2022-02:36:05] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__885:0 -> ) (Reformat)
[04/18/2022-02:36:05] [V] [TRT] Tactic: 1002 Time: 0.09472
[04/18/2022-02:36:05] [V] [TRT] Tactic: 0 Time: 0.023424
[04/18/2022-02:36:05] [V] [TRT] Fastest Tactic: 0 Time: 0.023424
[04/18/2022-02:36:05] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,128,1:4,64,1) -> Float(4096,1024,512,1,256,4) ***************
[04/18/2022-02:36:05] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__885:0 -> ) (Reformat)
[04/18/2022-02:36:05] [V] [TRT] Tactic: 1002 Time: 0.09344
[04/18/2022-02:36:05] [V] [TRT] Tactic: 0 Time: 0.023168
[04/18/2022-02:36:05] [V] [TRT] Fastest Tactic: 0 Time: 0.023168
[04/18/2022-02:36:05] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,128,1:4,64,1) -> Float(1024,256,128,128:32,64,1) ***************
[04/18/2022-02:36:05] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__885:0 -> ) (Reformat)
[04/18/2022-02:36:05] [V] [TRT] Tactic: 1002 Time: 0.093184
[04/18/2022-02:36:05] [V] [TRT] Tactic: 0 Time: 0.155392
[04/18/2022-02:36:05] [V] [TRT] Fastest Tactic: 1002 Time: 0.093184
[04/18/2022-02:36:05] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,128,128:32,64,1) -> Float(4096,1024,512,128,64,1) ***************
[04/18/2022-02:36:05] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__885:0 -> ) (Reformat)
[04/18/2022-02:36:05] [V] [TRT] Tactic: 1002 Time: 0.094592
[04/18/2022-02:36:05] [V] [TRT] Tactic: 0 Time: 0.024832
[04/18/2022-02:36:05] [V] [TRT] Fastest Tactic: 0 Time: 0.024832
[04/18/2022-02:36:05] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,128,128:32,64,1) -> Float(4096,1024,512,1,256,4) ***************
[04/18/2022-02:36:05] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__885:0 -> ) (Reformat)
[04/18/2022-02:36:05] [V] [TRT] Tactic: 1002 Time: 0.093184
[04/18/2022-02:36:05] [V] [TRT] Tactic: 0 Time: 0.021888
[04/18/2022-02:36:05] [V] [TRT] Fastest Tactic: 0 Time: 0.021888
[04/18/2022-02:36:05] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,128,128:32,64,1) -> Float(1024,256,128,1:4,64,1) ***************
[04/18/2022-02:36:05] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/input_0_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__885:0 -> ) (Reformat)
[04/18/2022-02:36:05] [V] [TRT] Tactic: 1002 Time: 0.092032
[04/18/2022-02:36:05] [V] [TRT] Tactic: 0 Time: 0.022912
[04/18/2022-02:36:05] [V] [TRT] Fastest Tactic: 0 Time: 0.022912
[04/18/2022-02:36:05] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:05] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(8192,1024,128,2,1) ***************
[04/18/2022-02:36:05] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1006:0 copy (Reformat)
[04/18/2022-02:36:05] [V] [TRT] Tactic: 1002 Time: 0.20864
[04/18/2022-02:36:05] [V] [TRT] Tactic: 0 Time: 0.020992
[04/18/2022-02:36:05] [V] [TRT] Fastest Tactic: 0 Time: 0.020992
[04/18/2022-02:36:05] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:05] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,8,8) -> Float(8192,1024,128,2,1) ***************
[04/18/2022-02:36:05] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1006:0 copy (Reformat)
[04/18/2022-02:36:05] [V] [TRT] Tactic: 1002 Time: 0.19328
[04/18/2022-02:36:05] [V] [TRT] Tactic: 0 Time: 0.021248
[04/18/2022-02:36:05] [V] [TRT] Fastest Tactic: 0 Time: 0.021248
[04/18/2022-02:36:05] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:05] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,2,2) -> Float(8192,1024,128,2,1) ***************
[04/18/2022-02:36:05] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1006:0 copy (Reformat)
[04/18/2022-02:36:05] [V] [TRT] Tactic: 1002 Time: 0.19392
[04/18/2022-02:36:05] [V] [TRT] Tactic: 0 Time: 0.0224
[04/18/2022-02:36:05] [V] [TRT] Fastest Tactic: 0 Time: 0.0224
[04/18/2022-02:36:05] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:05] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,1,1) -> Float(8192,1024,128,2,1) ***************
[04/18/2022-02:36:05] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1006:0 copy (Reformat)
[04/18/2022-02:36:05] [V] [TRT] Tactic: 1002 Time: 0.193536
[04/18/2022-02:36:05] [V] [TRT] Tactic: 0 Time: 0.025472
[04/18/2022-02:36:05] [V] [TRT] Fastest Tactic: 0 Time: 0.025472
[04/18/2022-02:36:05] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:05] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:05] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(8192,1024,128,2,1) ***************
[04/18/2022-02:36:05] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/stack_Unsqueeze__888:0 copy (Reformat)
[04/18/2022-02:36:05] [V] [TRT] Tactic: 1002 Time: 0.208768
[04/18/2022-02:36:05] [V] [TRT] Tactic: 0 Time: 0.020608
[04/18/2022-02:36:05] [V] [TRT] Fastest Tactic: 0 Time: 0.020608
[04/18/2022-02:36:05] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:05] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,8,8) -> Float(8192,1024,128,2,1) ***************
[04/18/2022-02:36:05] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/stack_Unsqueeze__888:0 copy (Reformat)
[04/18/2022-02:36:05] [V] [TRT] Tactic: 1002 Time: 0.192512
[04/18/2022-02:36:05] [V] [TRT] Tactic: 0 Time: 0.02176
[04/18/2022-02:36:05] [V] [TRT] Fastest Tactic: 0 Time: 0.02176
[04/18/2022-02:36:05] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:05] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,2,2) -> Float(8192,1024,128,2,1) ***************
[04/18/2022-02:36:05] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/stack_Unsqueeze__888:0 copy (Reformat)
[04/18/2022-02:36:05] [V] [TRT] Tactic: 1002 Time: 0.193024
[04/18/2022-02:36:05] [V] [TRT] Tactic: 0 Time: 0.022144
[04/18/2022-02:36:05] [V] [TRT] Fastest Tactic: 0 Time: 0.022144
[04/18/2022-02:36:05] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:05] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,1,1) -> Float(8192,1024,128,2,1) ***************
[04/18/2022-02:36:05] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/stack_Unsqueeze__888:0 copy (Reformat)
[04/18/2022-02:36:05] [V] [TRT] Tactic: 1002 Time: 0.194944
[04/18/2022-02:36:05] [V] [TRT] Tactic: 0 Time: 0.02496
[04/18/2022-02:36:05] [V] [TRT] Fastest Tactic: 0 Time: 0.02496
[04/18/2022-02:36:05] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:05] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:05] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:05] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:05] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(4096,512,1,8,8) ***************
[04/18/2022-02:36:05] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(1024,128,1:4,2,2) ***************
[04/18/2022-02:36:05] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(512,64,64:32,1,1) ***************
[04/18/2022-02:36:05] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:05] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(4096,512,1,8,8) ***************
[04/18/2022-02:36:05] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul:0 -> ) (Reformat)
[04/18/2022-02:36:05] [V] [TRT] Tactic: 1002 Time: 0.032768
[04/18/2022-02:36:05] [V] [TRT] Tactic: 0 Time: 0.020224
[04/18/2022-02:36:05] [V] [TRT] Fastest Tactic: 0 Time: 0.020224
[04/18/2022-02:36:05] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(1024,128,1:4,2,2) ***************
[04/18/2022-02:36:05] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul:0 -> ) (Reformat)
[04/18/2022-02:36:05] [V] [TRT] Tactic: 1002 Time: 0.032768
[04/18/2022-02:36:05] [V] [TRT] Tactic: 0 Time: 0.021632
[04/18/2022-02:36:05] [V] [TRT] Fastest Tactic: 0 Time: 0.021632
[04/18/2022-02:36:05] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(512,64,64:32,1,1) ***************
[04/18/2022-02:36:05] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul:0 -> ) (Reformat)
[04/18/2022-02:36:05] [V] [TRT] Tactic: 1002 Time: 0.058496
[04/18/2022-02:36:05] [V] [TRT] Tactic: 0 Time: 0.088448
[04/18/2022-02:36:05] [V] [TRT] Fastest Tactic: 1002 Time: 0.058496
[04/18/2022-02:36:05] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,8,8) -> Float(4096,512,64,1,1) ***************
[04/18/2022-02:36:05] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul:0 -> ) (Reformat)
[04/18/2022-02:36:05] [V] [TRT] Tactic: 1002 Time: 0.06016
[04/18/2022-02:36:05] [V] [TRT] Tactic: 0 Time: 0.020608
[04/18/2022-02:36:05] [V] [TRT] Fastest Tactic: 0 Time: 0.020608
[04/18/2022-02:36:05] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,8,8) -> Float(1024,128,1:4,2,2) ***************
[04/18/2022-02:36:05] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul:0 -> ) (Reformat)
[04/18/2022-02:36:05] [V] [TRT] Tactic: 1002 Time: 0.059648
[04/18/2022-02:36:05] [V] [TRT] Tactic: 0 Time: 0.021376
[04/18/2022-02:36:05] [V] [TRT] Fastest Tactic: 0 Time: 0.021376
[04/18/2022-02:36:05] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,8,8) -> Float(512,64,64:32,1,1) ***************
[04/18/2022-02:36:05] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul:0 -> ) (Reformat)
[04/18/2022-02:36:05] [V] [TRT] Tactic: 1002 Time: 0.06016
[04/18/2022-02:36:05] [V] [TRT] Tactic: 0 Time: 0.089472
[04/18/2022-02:36:05] [V] [TRT] Fastest Tactic: 1002 Time: 0.06016
[04/18/2022-02:36:05] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,2,2) -> Float(4096,512,64,1,1) ***************
[04/18/2022-02:36:05] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul:0 -> ) (Reformat)
[04/18/2022-02:36:05] [V] [TRT] Tactic: 1002 Time: 0.061056
[04/18/2022-02:36:05] [V] [TRT] Tactic: 0 Time: 0.02176
[04/18/2022-02:36:05] [V] [TRT] Fastest Tactic: 0 Time: 0.02176
[04/18/2022-02:36:05] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,2,2) -> Float(4096,512,1,8,8) ***************
[04/18/2022-02:36:05] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul:0 -> ) (Reformat)
[04/18/2022-02:36:05] [V] [TRT] Tactic: 1002 Time: 0.060288
[04/18/2022-02:36:05] [V] [TRT] Tactic: 0 Time: 0.022016
[04/18/2022-02:36:05] [V] [TRT] Fastest Tactic: 0 Time: 0.022016
[04/18/2022-02:36:05] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,2,2) -> Float(512,64,64:32,1,1) ***************
[04/18/2022-02:36:05] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul:0 -> ) (Reformat)
[04/18/2022-02:36:05] [V] [TRT] Tactic: 1002 Time: 0.059904
[04/18/2022-02:36:05] [V] [TRT] Tactic: 0 Time: 0.089216
[04/18/2022-02:36:05] [V] [TRT] Fastest Tactic: 1002 Time: 0.059904
[04/18/2022-02:36:05] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,1,1) -> Float(4096,512,64,1,1) ***************
[04/18/2022-02:36:05] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul:0 -> ) (Reformat)
[04/18/2022-02:36:05] [V] [TRT] Tactic: 1002 Time: 0.05952
[04/18/2022-02:36:05] [V] [TRT] Tactic: 0 Time: 0.024192
[04/18/2022-02:36:05] [V] [TRT] Fastest Tactic: 0 Time: 0.024192
[04/18/2022-02:36:05] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,1,1) -> Float(4096,512,1,8,8) ***************
[04/18/2022-02:36:05] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul:0 -> ) (Reformat)
[04/18/2022-02:36:05] [V] [TRT] Tactic: 1002 Time: 0.060544
[04/18/2022-02:36:05] [V] [TRT] Tactic: 0 Time: 0.021504
[04/18/2022-02:36:05] [V] [TRT] Fastest Tactic: 0 Time: 0.021504
[04/18/2022-02:36:05] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,1,1) -> Float(1024,128,1:4,2,2) ***************
[04/18/2022-02:36:05] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/MatMul:0 -> ) (Reformat)
[04/18/2022-02:36:05] [V] [TRT] Tactic: 1002 Time: 0.062976
[04/18/2022-02:36:05] [V] [TRT] Tactic: 0 Time: 0.022656
[04/18/2022-02:36:05] [V] [TRT] Fastest Tactic: 0 Time: 0.022656
[04/18/2022-02:36:05] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:05] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(4096,1,512,8) ***************
[04/18/2022-02:36:05] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/Squeeze:0 -> ) (Reformat)
[04/18/2022-02:36:05] [V] [TRT] Tactic: 1002 Time: 0.032896
[04/18/2022-02:36:05] [V] [TRT] Tactic: 0 Time: 0.020224
[04/18/2022-02:36:05] [V] [TRT] Fastest Tactic: 0 Time: 0.020224
[04/18/2022-02:36:05] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:36:05] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/Squeeze:0 -> ) (Reformat)
[04/18/2022-02:36:05] [V] [TRT] Tactic: 1002 Time: 0.03264
[04/18/2022-02:36:05] [V] [TRT] Tactic: 0 Time: 0.020992
[04/18/2022-02:36:05] [V] [TRT] Fastest Tactic: 0 Time: 0.020992
[04/18/2022-02:36:05] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:36:05] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/Squeeze:0 -> ) (Reformat)
[04/18/2022-02:36:05] [V] [TRT] Tactic: 1002 Time: 0.057856
[04/18/2022-02:36:05] [V] [TRT] Tactic: 0 Time: 0.089856
[04/18/2022-02:36:05] [V] [TRT] Fastest Tactic: 1002 Time: 0.057856
[04/18/2022-02:36:05] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(1:4,1024,128,2) ***************
[04/18/2022-02:36:05] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/Squeeze:0 -> ) (Reformat)
[04/18/2022-02:36:05] [V] [TRT] Tactic: 1002 Time: 0.032768
[04/18/2022-02:36:05] [V] [TRT] Tactic: 0 Time: 0.021248
[04/18/2022-02:36:05] [V] [TRT] Fastest Tactic: 0 Time: 0.021248
[04/18/2022-02:36:05] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(4096,512,64,1) ***************
[04/18/2022-02:36:05] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/Squeeze:0 -> ) (Reformat)
[04/18/2022-02:36:05] [V] [TRT] Tactic: 1002 Time: 0.058752
[04/18/2022-02:36:05] [V] [TRT] Tactic: 0 Time: 0.023424
[04/18/2022-02:36:05] [V] [TRT] Fastest Tactic: 0 Time: 0.023424
[04/18/2022-02:36:05] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:36:05] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/Squeeze:0 -> ) (Reformat)
[04/18/2022-02:36:05] [V] [TRT] Tactic: 1002 Time: 0.063232
[04/18/2022-02:36:05] [V] [TRT] Tactic: 0 Time: 0.02048
[04/18/2022-02:36:05] [V] [TRT] Fastest Tactic: 0 Time: 0.02048
[04/18/2022-02:36:05] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:36:05] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/Squeeze:0 -> ) (Reformat)
[04/18/2022-02:36:05] [V] [TRT] Tactic: 1002 Time: 0.059392
[04/18/2022-02:36:05] [V] [TRT] Tactic: 0 Time: 0.089472
[04/18/2022-02:36:05] [V] [TRT] Fastest Tactic: 1002 Time: 0.059392
[04/18/2022-02:36:05] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(1:4,1024,128,2) ***************
[04/18/2022-02:36:05] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(4096,512,64,1) ***************
[04/18/2022-02:36:05] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/Squeeze:0 -> ) (Reformat)
[04/18/2022-02:36:05] [V] [TRT] Tactic: 1002 Time: 0.058624
[04/18/2022-02:36:05] [V] [TRT] Tactic: 0 Time: 0.024448
[04/18/2022-02:36:05] [V] [TRT] Fastest Tactic: 0 Time: 0.024448
[04/18/2022-02:36:05] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(4096,1,512,8) ***************
[04/18/2022-02:36:05] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/Squeeze:0 -> ) (Reformat)
[04/18/2022-02:36:05] [V] [TRT] Tactic: 1002 Time: 0.059648
[04/18/2022-02:36:05] [V] [TRT] Tactic: 0 Time: 0.020736
[04/18/2022-02:36:05] [V] [TRT] Fastest Tactic: 0 Time: 0.020736
[04/18/2022-02:36:05] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:36:05] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/Squeeze:0 -> ) (Reformat)
[04/18/2022-02:36:05] [V] [TRT] Tactic: 1002 Time: 0.060672
[04/18/2022-02:36:05] [V] [TRT] Tactic: 0 Time: 0.091264
[04/18/2022-02:36:05] [V] [TRT] Fastest Tactic: 1002 Time: 0.060672
[04/18/2022-02:36:05] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(1:4,1024,128,2) ***************
[04/18/2022-02:36:05] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(4096,512,64,1) ***************
[04/18/2022-02:36:05] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/Squeeze:0 -> ) (Reformat)
[04/18/2022-02:36:05] [V] [TRT] Tactic: 1002 Time: 0.058752
[04/18/2022-02:36:05] [V] [TRT] Tactic: 0 Time: 0.02496
[04/18/2022-02:36:05] [V] [TRT] Fastest Tactic: 0 Time: 0.02496
[04/18/2022-02:36:05] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(4096,1,512,8) ***************
[04/18/2022-02:36:05] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/Squeeze:0 -> ) (Reformat)
[04/18/2022-02:36:05] [V] [TRT] Tactic: 1002 Time: 0.060416
[04/18/2022-02:36:05] [V] [TRT] Tactic: 0 Time: 0.020992
[04/18/2022-02:36:05] [V] [TRT] Fastest Tactic: 0 Time: 0.020992
[04/18/2022-02:36:05] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:36:05] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/Squeeze:0 -> ) (Reformat)
[04/18/2022-02:36:05] [V] [TRT] Tactic: 1002 Time: 0.059904
[04/18/2022-02:36:06] [V] [TRT] Tactic: 0 Time: 0.022144
[04/18/2022-02:36:06] [V] [TRT] Fastest Tactic: 0 Time: 0.022144
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(1:4,1024,128,2) ***************
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(1:4,1024,128,2) -> Float(4096,512,64,1) ***************
[04/18/2022-02:36:06] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_02/1_dn_lvl_6/combine/Squeeze:0 -> ) (Reformat)
[04/18/2022-02:36:06] [V] [TRT] Tactic: 1002 Time: 0.05824
[04/18/2022-02:36:06] [V] [TRT] Tactic: 0 Time: 0.023808
[04/18/2022-02:36:06] [V] [TRT] Fastest Tactic: 0 Time: 0.023808
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(1:4,1024,128,2) -> Float(4096,1,512,8) ***************
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(1:4,1024,128,2) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(1:4,1024,128,2) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:36:06] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(4096,1,512,8) ***************
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(4096,512,64,1) ***************
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(4096,512,64,1) ***************
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(4096,1,512,8) ***************
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(4096,512,64,1) ***************
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(4096,1,512,8) ***************
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(1:4,1024,128,2) -> Float(4096,512,64,1) ***************
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(1:4,1024,128,2) -> Float(4096,1,512,8) ***************
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(1:4,1024,128,2) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(1:4,1024,128,2) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:36:06] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(4096,64,8,1) ***************
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,64,8,1) ***************
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,1,512,64) ***************
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,64,8,1) ***************
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:36:06] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(4096,64,8,1) ***************
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,64,8,1) ***************
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,1,512,64) ***************
[04/18/2022-02:36:06] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(128,64:32,8,1) ***************
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(4096,64,8,1) ***************
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(128,64:32,8,1) ***************
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,64,8,1) ***************
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,1,512,64) ***************
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(128,64:32,8,1) ***************
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,64,8,1) ***************
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:36:06] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(4096,1,512,8) ***************
[04/18/2022-02:36:06] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> Transpose__6311:0) (Reformat)
[04/18/2022-02:36:06] [V] [TRT] Tactic: 1002 Time: 0.032896
[04/18/2022-02:36:06] [V] [TRT] Tactic: 0 Time: 0.020352
[04/18/2022-02:36:06] [V] [TRT] Fastest Tactic: 0 Time: 0.020352
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:36:06] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> Transpose__6311:0) (Reformat)
[04/18/2022-02:36:06] [V] [TRT] Tactic: 1002 Time: 0.032768
[04/18/2022-02:36:06] [V] [TRT] Tactic: 0 Time: 0.020608
[04/18/2022-02:36:06] [V] [TRT] Fastest Tactic: 0 Time: 0.020608
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:36:06] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> Transpose__6311:0) (Reformat)
[04/18/2022-02:36:06] [V] [TRT] Tactic: 1002 Time: 0.059392
[04/18/2022-02:36:06] [V] [TRT] Tactic: 0 Time: 0.089472
[04/18/2022-02:36:06] [V] [TRT] Fastest Tactic: 1002 Time: 0.059392
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(4096,512,64,1) ***************
[04/18/2022-02:36:06] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> Transpose__6311:0) (Reformat)
[04/18/2022-02:36:06] [V]
[TRT] Tactic: 1002 Time: 0.061184 [04/18/2022-02:36:06] [V] [TRT] Tactic: 0 Time: 0.023936 [04/18/2022-02:36:06] [V] [TRT] Fastest Tactic: 0 Time: 0.023936 [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(1024,1:4,128,2) *************** [04/18/2022-02:36:06] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> Transpose__6311:0) (Reformat) [04/18/2022-02:36:06] [V] [TRT] Tactic: 1002 Time: 0.060928 [04/18/2022-02:36:06] [V] [TRT] Tactic: 0 Time: 0.02176 [04/18/2022-02:36:06] [V] [TRT] Fastest Tactic: 0 Time: 0.02176 [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(512,512:32,64,1) *************** [04/18/2022-02:36:06] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> Transpose__6311:0) (Reformat) [04/18/2022-02:36:06] [V] [TRT] Tactic: 1002 Time: 0.06016 [04/18/2022-02:36:06] [V] [TRT] Tactic: 0 Time: 0.089088 [04/18/2022-02:36:06] [V] [TRT] Fastest Tactic: 1002 Time: 0.06016 [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(4096,512,64,1) *************** [04/18/2022-02:36:06] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> Transpose__6311:0) (Reformat) [04/18/2022-02:36:06] [V] [TRT] Tactic: 1002 Time: 0.05824 [04/18/2022-02:36:06] [V] [TRT] Tactic: 0 Time: 0.024704 [04/18/2022-02:36:06] [V] [TRT] Fastest Tactic: 0 Time: 0.024704 [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(4096,1,512,8) *************** [04/18/2022-02:36:06] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> Transpose__6311:0) (Reformat) [04/18/2022-02:36:06] [V] [TRT] Tactic: 1002 Time: 0.059648 [04/18/2022-02:36:06] [V] [TRT] Tactic: 0 Time: 0.020864 [04/18/2022-02:36:06] [V] [TRT] Fastest Tactic: 0 Time: 0.020864 [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(512,512:32,64,1) 
*************** [04/18/2022-02:36:06] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> Transpose__6311:0) (Reformat) [04/18/2022-02:36:06] [V] [TRT] Tactic: 1002 Time: 0.060672 [04/18/2022-02:36:06] [V] [TRT] Tactic: 0 Time: 0.091008 [04/18/2022-02:36:06] [V] [TRT] Fastest Tactic: 1002 Time: 0.060672 [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(4096,512,64,1) *************** [04/18/2022-02:36:06] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> Transpose__6311:0) (Reformat) [04/18/2022-02:36:06] [V] [TRT] Tactic: 1002 Time: 0.059264 [04/18/2022-02:36:06] [V] [TRT] Tactic: 0 Time: 0.024576 [04/18/2022-02:36:06] [V] [TRT] Fastest Tactic: 0 Time: 0.024576 [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(4096,1,512,8) *************** [04/18/2022-02:36:06] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> Transpose__6311:0) (Reformat) [04/18/2022-02:36:06] [V] [TRT] Tactic: 1002 Time: 0.062592 [04/18/2022-02:36:06] [V] [TRT] Tactic: 0 Time: 0.020992 [04/18/2022-02:36:06] [V] [TRT] Fastest Tactic: 0 Time: 0.020992 [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(1024,1:4,128,2) *************** [04/18/2022-02:36:06] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> Transpose__6311:0) (Reformat) [04/18/2022-02:36:06] [V] [TRT] Tactic: 1002 Time: 0.059776 [04/18/2022-02:36:06] [V] [TRT] Tactic: 0 Time: 0.02304 [04/18/2022-02:36:06] [V] [TRT] Fastest Tactic: 0 Time: 0.02304 [04/18/2022-02:36:06] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(4096,1,512,8) *************** [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(1024,1:4,128,2) *************** [04/18/2022-02:36:06] [V] [TRT] *************** 
Autotuning Reformat: Float(4096,512,64,1) -> Float(512,512:32,64,1) *************** [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(4096,512,64,1) *************** [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(1024,1:4,128,2) *************** [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(512,512:32,64,1) *************** [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(4096,512,64,1) *************** [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(4096,1,512,8) *************** [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(512,512:32,64,1) *************** [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(4096,512,64,1) *************** [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(4096,1,512,8) *************** [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(1024,1:4,128,2) *************** [04/18/2022-02:36:06] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(4096,1,512,8) *************** [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(1024,1:4,128,2) *************** [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(512,512:32,64,1) *************** [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(4096,512,64,1) *************** [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(1024,1:4,128,2) *************** 
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(512,512:32,64,1) *************** [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(4096,512,64,1) *************** [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(4096,1,512,8) *************** [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(512,512:32,64,1) *************** [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(4096,512,64,1) *************** [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(4096,1,512,8) *************** [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(1024,1:4,128,2) *************** [04/18/2022-02:36:06] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,64,1) -> Float(4096,512,1,512,8) *************** [04/18/2022-02:36:06] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897:0) (Reformat) [04/18/2022-02:36:06] [V] [TRT] Tactic: 1002 Time: 0.032512 [04/18/2022-02:36:06] [V] [TRT] Tactic: 0 Time: 0.020608 [04/18/2022-02:36:06] [V] [TRT] Fastest Tactic: 0 Time: 0.020608 [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,64,1) -> Float(1024,128,1:4,128,2) *************** [04/18/2022-02:36:06] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897:0) (Reformat) [04/18/2022-02:36:06] 
[V] [TRT] Tactic: 1002 Time: 0.033024 [04/18/2022-02:36:06] [V] [TRT] Tactic: 0 Time: 0.022144 [04/18/2022-02:36:06] [V] [TRT] Fastest Tactic: 0 Time: 0.022144 [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,64,1) -> Float(512,64,64:32,64,1) *************** [04/18/2022-02:36:06] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897:0) (Reformat) [04/18/2022-02:36:06] [V] [TRT] Tactic: 1002 Time: 0.057984 [04/18/2022-02:36:06] [V] [TRT] Tactic: 0 Time: 0.088832 [04/18/2022-02:36:06] [V] [TRT] Fastest Tactic: 1002 Time: 0.057984 [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,512,8) -> Float(4096,512,64,64,1) *************** [04/18/2022-02:36:06] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897:0) (Reformat) [04/18/2022-02:36:06] [V] [TRT] Tactic: 1002 Time: 0.060416 [04/18/2022-02:36:06] [V] [TRT] Tactic: 0 Time: 0.022272 [04/18/2022-02:36:06] [V] [TRT] Fastest Tactic: 0 Time: 0.022272 [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,512,8) -> Float(1024,128,1:4,128,2) *************** [04/18/2022-02:36:06] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897:0) (Reformat) [04/18/2022-02:36:06] [V] [TRT] Tactic: 1002 Time: 0.063232 [04/18/2022-02:36:06] [V] [TRT] Tactic: 0 Time: 0.020864 [04/18/2022-02:36:06] [V] [TRT] Fastest Tactic: 0 Time: 0.020864 [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,512,8) 
-> Float(512,64,64:32,64,1) *************** [04/18/2022-02:36:06] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897:0) (Reformat) [04/18/2022-02:36:06] [V] [TRT] Tactic: 1002 Time: 0.059776 [04/18/2022-02:36:06] [V] [TRT] Tactic: 0 Time: 0.088576 [04/18/2022-02:36:06] [V] [TRT] Fastest Tactic: 1002 Time: 0.059776 [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,128,2) -> Float(4096,512,64,64,1) *************** [04/18/2022-02:36:06] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897:0) (Reformat) [04/18/2022-02:36:06] [V] [TRT] Tactic: 1002 Time: 0.059008 [04/18/2022-02:36:06] [V] [TRT] Tactic: 0 Time: 0.022272 [04/18/2022-02:36:06] [V] [TRT] Fastest Tactic: 0 Time: 0.022272 [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,128,2) -> Float(4096,512,1,512,8) *************** [04/18/2022-02:36:06] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897:0) (Reformat) [04/18/2022-02:36:06] [V] [TRT] Tactic: 1002 Time: 0.059648 [04/18/2022-02:36:06] [V] [TRT] Tactic: 0 Time: 0.020864 [04/18/2022-02:36:06] [V] [TRT] Fastest Tactic: 0 Time: 0.020864 [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,128,2) -> Float(512,64,64:32,64,1) *************** [04/18/2022-02:36:06] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897:0) (Reformat) [04/18/2022-02:36:06] [V] [TRT] Tactic: 1002 Time: 0.059648 [04/18/2022-02:36:06] [V] [TRT] Tactic: 0 Time: 0.090624 [04/18/2022-02:36:06] [V] [TRT] Fastest Tactic: 1002 Time: 0.059648 [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,64,1) -> Float(4096,512,64,64,1) *************** [04/18/2022-02:36:06] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897:0) (Reformat) [04/18/2022-02:36:06] [V] [TRT] Tactic: 1002 Time: 0.058496 [04/18/2022-02:36:06] [V] [TRT] Tactic: 0 Time: 0.024448 [04/18/2022-02:36:06] [V] [TRT] Fastest Tactic: 0 Time: 0.024448 [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,64,1) -> Float(4096,512,1,512,8) *************** [04/18/2022-02:36:06] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897:0) (Reformat) [04/18/2022-02:36:06] [V] [TRT] Tactic: 1002 Time: 0.061056 [04/18/2022-02:36:06] [V] [TRT] Tactic: 0 Time: 0.022016 [04/18/2022-02:36:06] [V] [TRT] Fastest Tactic: 0 Time: 0.022016 [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,64,1) -> Float(1024,128,1:4,128,2) *************** [04/18/2022-02:36:06] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897:0) (Reformat) [04/18/2022-02:36:06] [V] [TRT] Tactic: 1002 Time: 0.059136 
[04/18/2022-02:36:06] [V] [TRT] Tactic: 0 Time: 0.0224 [04/18/2022-02:36:06] [V] [TRT] Fastest Tactic: 0 Time: 0.0224 [04/18/2022-02:36:06] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,64,1) -> Float(8192,1024,128,64,1) *************** [04/18/2022-02:36:06] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897:0 copy (Reformat) [04/18/2022-02:36:06] [V] [TRT] Tactic: 1002 Time: 0.205952 [04/18/2022-02:36:06] [V] [TRT] Tactic: 0 Time: 0.020608 [04/18/2022-02:36:06] [V] [TRT] Fastest Tactic: 0 Time: 0.020608 [04/18/2022-02:36:06] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,64,1) -> Float(8192,1024,1,512,8) *************** [04/18/2022-02:36:06] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897:0 copy (Reformat) [04/18/2022-02:36:06] [V] [TRT] Tactic: 1002 Time: 0.03264 [04/18/2022-02:36:06] [V] [TRT] Tactic: 0 Time: 0.021248 [04/18/2022-02:36:06] [V] [TRT] Fastest Tactic: 0 Time: 0.021248 [04/18/2022-02:36:06] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,64,1) -> Float(2048,256,1:4,128,2) *************** [04/18/2022-02:36:06] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897:0 copy (Reformat) [04/18/2022-02:36:06] [V] [TRT] Tactic: 1002 Time: 0.03264 [04/18/2022-02:36:06] [V] [TRT] Tactic: 0 Time: 0.020736 
[04/18/2022-02:36:06] [V] [TRT] Fastest Tactic: 0 Time: 0.020736 [04/18/2022-02:36:06] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,64,1) -> Float(1024,128,128:32,64,1) *************** [04/18/2022-02:36:06] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897:0 copy (Reformat) [04/18/2022-02:36:06] [V] [TRT] Tactic: 1002 Time: 0.032896 [04/18/2022-02:36:06] [V] [TRT] Tactic: 0 Time: 0.034432 [04/18/2022-02:36:06] [V] [TRT] Fastest Tactic: 1002 Time: 0.032896 [04/18/2022-02:36:06] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002 [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,512,8) -> Float(8192,1024,128,64,1) *************** [04/18/2022-02:36:06] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897:0 copy (Reformat) [04/18/2022-02:36:06] [V] [TRT] Tactic: 1002 Time: 0.058368 [04/18/2022-02:36:06] [V] [TRT] Tactic: 0 Time: 0.021888 [04/18/2022-02:36:06] [V] [TRT] Fastest Tactic: 0 Time: 0.021888 [04/18/2022-02:36:06] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,512,8) -> Float(8192,1024,1,512,8) *************** [04/18/2022-02:36:06] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897:0 copy (Reformat) [04/18/2022-02:36:06] [V] [TRT] Tactic: 1002 Time: 0.025472 [04/18/2022-02:36:06] [V] [TRT] Tactic: 0 Time: 0.020736 [04/18/2022-02:36:06] [V] [TRT] Fastest 
Tactic: 0 Time: 0.020736 [04/18/2022-02:36:06] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,512,8) -> Float(2048,256,1:4,128,2) *************** [04/18/2022-02:36:06] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897:0 copy (Reformat) [04/18/2022-02:36:06] [V] [TRT] Tactic: 1002 Time: 0.061056 [04/18/2022-02:36:06] [V] [TRT] Tactic: 0 Time: 0.02176 [04/18/2022-02:36:06] [V] [TRT] Fastest Tactic: 0 Time: 0.02176 [04/18/2022-02:36:06] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,512,8) -> Float(1024,128,128:32,64,1) *************** [04/18/2022-02:36:06] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897:0 copy (Reformat) [04/18/2022-02:36:06] [V] [TRT] Tactic: 1002 Time: 0.059904 [04/18/2022-02:36:06] [V] [TRT] Tactic: 0 Time: 0.035584 [04/18/2022-02:36:06] [V] [TRT] Fastest Tactic: 0 Time: 0.035584 [04/18/2022-02:36:06] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,128,2) -> Float(8192,1024,128,64,1) *************** [04/18/2022-02:36:06] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897:0 copy (Reformat) [04/18/2022-02:36:06] [V] [TRT] Tactic: 1002 Time: 0.05824 [04/18/2022-02:36:06] [V] [TRT] Tactic: 0 Time: 0.021504 [04/18/2022-02:36:06] [V] [TRT] Fastest Tactic: 0 Time: 0.021504 [04/18/2022-02:36:06] [V] 
[TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,128,2) -> Float(8192,1024,1,512,8) *************** [04/18/2022-02:36:06] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897:0 copy (Reformat) [04/18/2022-02:36:06] [V] [TRT] Tactic: 1002 Time: 0.060544 [04/18/2022-02:36:06] [V] [TRT] Tactic: 0 Time: 0.022016 [04/18/2022-02:36:06] [V] [TRT] Fastest Tactic: 0 Time: 0.022016 [04/18/2022-02:36:06] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,128,2) -> Float(2048,256,1:4,128,2) *************** [04/18/2022-02:36:06] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897:0 copy (Reformat) [04/18/2022-02:36:06] [V] [TRT] Tactic: 1002 Time: 0.060928 [04/18/2022-02:36:06] [V] [TRT] Tactic: 0 Time: 0.022656 [04/18/2022-02:36:06] [V] [TRT] Fastest Tactic: 0 Time: 0.022656 [04/18/2022-02:36:06] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,128,2) -> Float(1024,128,128:32,64,1) *************** [04/18/2022-02:36:06] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897:0 copy (Reformat) [04/18/2022-02:36:06] [V] [TRT] Tactic: 1002 Time: 0.05952 [04/18/2022-02:36:06] [V] [TRT] Tactic: 0 Time: 0.0352 [04/18/2022-02:36:06] [V] [TRT] Fastest Tactic: 0 Time: 0.0352 [04/18/2022-02:36:06] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat 
Tactic: 0 [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,64,1) -> Float(8192,1024,128,64,1) *************** [04/18/2022-02:36:06] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897:0 copy (Reformat) [04/18/2022-02:36:06] [V] [TRT] Tactic: 1002 Time: 0.05888 [04/18/2022-02:36:06] [V] [TRT] Tactic: 0 Time: 0.024192 [04/18/2022-02:36:06] [V] [TRT] Fastest Tactic: 0 Time: 0.024192 [04/18/2022-02:36:06] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,64,1) -> Float(8192,1024,1,512,8) *************** [04/18/2022-02:36:06] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897:0 copy (Reformat) [04/18/2022-02:36:06] [V] [TRT] Tactic: 1002 Time: 0.06016 [04/18/2022-02:36:06] [V] [TRT] Tactic: 0 Time: 0.020864 [04/18/2022-02:36:06] [V] [TRT] Fastest Tactic: 0 Time: 0.020864 [04/18/2022-02:36:06] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,64,1) -> Float(2048,256,1:4,128,2) *************** [04/18/2022-02:36:06] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897:0 copy (Reformat) [04/18/2022-02:36:06] [V] [TRT] Tactic: 1002 Time: 0.059648 [04/18/2022-02:36:06] [V] [TRT] Tactic: 0 Time: 0.0224 [04/18/2022-02:36:06] [V] [TRT] Fastest Tactic: 0 Time: 0.0224 [04/18/2022-02:36:06] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:06] [V] [TRT] 
*************** Autotuning Reformat: Float(512,64,64:32,64,1) -> Float(1024,128,128:32,64,1) *************** [04/18/2022-02:36:06] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897:0 copy (Reformat) [04/18/2022-02:36:06] [V] [TRT] Tactic: 1002 Time: 0.062208 [04/18/2022-02:36:06] [V] [TRT] Tactic: 0 Time: 0.036736 [04/18/2022-02:36:06] [V] [TRT] Fastest Tactic: 0 Time: 0.036736 [04/18/2022-02:36:06] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:06] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,64,1) -> Float(8192,1024,128,64,1) *************** [04/18/2022-02:36:06] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897:0 copy (Reformat) [04/18/2022-02:36:06] [V] [TRT] Tactic: 1002 Time: 0.205312 [04/18/2022-02:36:06] [V] [TRT] Tactic: 0 Time: 0.020096 [04/18/2022-02:36:06] [V] [TRT] Fastest Tactic: 0 Time: 0.020096 [04/18/2022-02:36:06] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,64,1) -> Float(8192,1024,1,512,8) *************** [04/18/2022-02:36:06] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897:0 copy (Reformat) [04/18/2022-02:36:06] [V] [TRT] Tactic: 1002 Time: 0.032768 [04/18/2022-02:36:06] [V] [TRT] Tactic: 0 Time: 0.020352 [04/18/2022-02:36:06] [V] [TRT] Fastest Tactic: 0 Time: 0.020352 [04/18/2022-02:36:06] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 
[04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,64,1) -> Float(2048,256,1:4,128,2) *************** [04/18/2022-02:36:06] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897:0 copy (Reformat) [04/18/2022-02:36:06] [V] [TRT] Tactic: 1002 Time: 0.03264 [04/18/2022-02:36:06] [V] [TRT] Tactic: 0 Time: 0.020992 [04/18/2022-02:36:06] [V] [TRT] Fastest Tactic: 0 Time: 0.020992 [04/18/2022-02:36:06] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,64,1) -> Float(1024,128,128:32,64,1) *************** [04/18/2022-02:36:06] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897:0 copy (Reformat) [04/18/2022-02:36:07] [V] [TRT] Tactic: 1002 Time: 0.058752 [04/18/2022-02:36:07] [V] [TRT] Tactic: 0 Time: 0.09024 [04/18/2022-02:36:07] [V] [TRT] Fastest Tactic: 1002 Time: 0.058752 [04/18/2022-02:36:07] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002 [04/18/2022-02:36:07] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,512,8) -> Float(8192,1024,128,64,1) *************** [04/18/2022-02:36:07] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897:0 copy (Reformat) [04/18/2022-02:36:07] [V] [TRT] Tactic: 1002 Time: 0.059008 [04/18/2022-02:36:07] [V] [TRT] Tactic: 0 Time: 0.02176 [04/18/2022-02:36:07] [V] [TRT] Fastest Tactic: 0 Time: 0.02176 [04/18/2022-02:36:07] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:07] [V] [TRT] *************** 
Autotuning Reformat: Float(4096,512,1,512,8) -> Float(8192,1024,1,512,8) ***************
[04/18/2022-02:36:07] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897:0 copy (Reformat)
[04/18/2022-02:36:07] [V] [TRT] Tactic: 1002 Time: 0.0256
[04/18/2022-02:36:07] [V] [TRT] Tactic: 0 Time: 0.02112
[04/18/2022-02:36:07] [V] [TRT] Fastest Tactic: 0 Time: 0.02112
[04/18/2022-02:36:07] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:07] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,512,8) -> Float(2048,256,1:4,128,2) ***************
[04/18/2022-02:36:07] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897:0 copy (Reformat)
[04/18/2022-02:36:07] [V] [TRT] Tactic: 1002 Time: 0.060288
[04/18/2022-02:36:07] [V] [TRT] Tactic: 0 Time: 0.022144
[04/18/2022-02:36:07] [V] [TRT] Fastest Tactic: 0 Time: 0.022144
[04/18/2022-02:36:07] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:07] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,512,8) -> Float(1024,128,128:32,64,1) ***************
[04/18/2022-02:36:07] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897:0 copy (Reformat)
[04/18/2022-02:36:07] [V] [TRT] Tactic: 1002 Time: 0.061568
[04/18/2022-02:36:07] [V] [TRT] Tactic: 0 Time: 0.088448
[04/18/2022-02:36:07] [V] [TRT] Fastest Tactic: 1002 Time: 0.061568
[04/18/2022-02:36:07] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002
[04/18/2022-02:36:07] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,128,2) -> Float(8192,1024,128,64,1) ***************
[04/18/2022-02:36:07] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897:0 copy (Reformat)
[04/18/2022-02:36:07] [V] [TRT] Tactic: 1002 Time: 0.059008
[04/18/2022-02:36:07] [V] [TRT] Tactic: 0 Time: 0.02176
[04/18/2022-02:36:07] [V] [TRT] Fastest Tactic: 0 Time: 0.02176
[04/18/2022-02:36:07] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:07] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,128,2) -> Float(8192,1024,1,512,8) ***************
[04/18/2022-02:36:07] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897:0 copy (Reformat)
[04/18/2022-02:36:07] [V] [TRT] Tactic: 1002 Time: 0.059648
[04/18/2022-02:36:07] [V] [TRT] Tactic: 0 Time: 0.021632
[04/18/2022-02:36:07] [V] [TRT] Fastest Tactic: 0 Time: 0.021632
[04/18/2022-02:36:07] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:07] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,128,2) -> Float(2048,256,1:4,128,2) ***************
[04/18/2022-02:36:07] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897:0 copy (Reformat)
[04/18/2022-02:36:07] [V] [TRT] Tactic: 1002 Time: 0.060416
[04/18/2022-02:36:07] [V] [TRT] Tactic: 0 Time: 0.0224
[04/18/2022-02:36:07] [V] [TRT] Fastest Tactic: 0 Time: 0.0224
[04/18/2022-02:36:07] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:07] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,128,2) -> Float(1024,128,128:32,64,1) ***************
[04/18/2022-02:36:07] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897:0 copy (Reformat)
[04/18/2022-02:36:07] [V] [TRT] Tactic: 1002 Time: 0.060416
[04/18/2022-02:36:07] [V] [TRT] Tactic: 0 Time: 0.09024
[04/18/2022-02:36:07] [V] [TRT] Fastest Tactic: 1002 Time: 0.060416
[04/18/2022-02:36:07] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002
[04/18/2022-02:36:07] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,64,1) -> Float(8192,1024,128,64,1) ***************
[04/18/2022-02:36:07] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897:0 copy (Reformat)
[04/18/2022-02:36:07] [V] [TRT] Tactic: 1002 Time: 0.059392
[04/18/2022-02:36:07] [V] [TRT] Tactic: 0 Time: 0.024704
[04/18/2022-02:36:07] [V] [TRT] Fastest Tactic: 0 Time: 0.024704
[04/18/2022-02:36:07] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:07] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,64,1) -> Float(8192,1024,1,512,8) ***************
[04/18/2022-02:36:07] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897:0 copy (Reformat)
[04/18/2022-02:36:07] [V] [TRT] Tactic: 1002 Time: 0.059392
[04/18/2022-02:36:07] [V] [TRT] Tactic: 0 Time: 0.020864
[04/18/2022-02:36:07] [V] [TRT] Fastest Tactic: 0 Time: 0.020864
[04/18/2022-02:36:07] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:07] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,64,1) -> Float(2048,256,1:4,128,2) ***************
[04/18/2022-02:36:07] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897:0 copy (Reformat)
[04/18/2022-02:36:07] [V] [TRT] Tactic: 1002 Time: 0.060288
[04/18/2022-02:36:07] [V] [TRT] Tactic: 0 Time: 0.022784
[04/18/2022-02:36:07] [V] [TRT] Fastest Tactic: 0 Time: 0.022784
[04/18/2022-02:36:07] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:07] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,64,1) -> Float(1024,128,128:32,64,1) ***************
[04/18/2022-02:36:07] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__897:0 copy (Reformat)
[04/18/2022-02:36:07] [V] [TRT] Tactic: 1002 Time: 0.059648
[04/18/2022-02:36:07] [V] [TRT] Tactic: 0 Time: 0.089984
[04/18/2022-02:36:07] [V] [TRT] Fastest Tactic: 1002 Time: 0.059648
[04/18/2022-02:36:07] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002
[04/18/2022-02:36:07] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:07] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,128,64,1) -> Float(8192,1024,1,512,8) ***************
[04/18/2022-02:36:07] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__898:0 -> ) (Reformat)
[04/18/2022-02:36:07] [V] [TRT] Tactic: 1002 Time: 0.040704
[04/18/2022-02:36:07] [V] [TRT] Tactic: 0 Time: 0.025344
[04/18/2022-02:36:07] [V] [TRT] Fastest Tactic: 0 Time: 0.025344
[04/18/2022-02:36:07] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,128,64,1) -> Float(2048,256,1:4,128,2) ***************
[04/18/2022-02:36:07] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__898:0 -> ) (Reformat)
[04/18/2022-02:36:07] [V] [TRT] Tactic: 1002 Time: 0.041344
[04/18/2022-02:36:07] [V] [TRT] Tactic: 0 Time: 0.025216
[04/18/2022-02:36:07] [V] [TRT] Fastest Tactic: 0 Time: 0.025216
[04/18/2022-02:36:07] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,128,64,1) -> Float(1024,128,128:32,64,1) ***************
[04/18/2022-02:36:07] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__898:0 -> ) (Reformat)
[04/18/2022-02:36:07] [V] [TRT] Tactic: 1002 Time: 0.094976
[04/18/2022-02:36:07] [V] [TRT] Tactic: 0 Time: 0.155136
[04/18/2022-02:36:07] [V] [TRT] Fastest Tactic: 1002 Time: 0.094976
[04/18/2022-02:36:07] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1,512,8) -> Float(8192,1024,128,64,1) ***************
[04/18/2022-02:36:07] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__898:0 -> ) (Reformat)
[04/18/2022-02:36:07] [V] [TRT] Tactic: 1002 Time: 0.09344
[04/18/2022-02:36:07] [V] [TRT] Tactic: 0 Time: 0.026496
[04/18/2022-02:36:07] [V] [TRT] Fastest Tactic: 0 Time: 0.026496
[04/18/2022-02:36:07] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1,512,8) -> Float(2048,256,1:4,128,2) ***************
[04/18/2022-02:36:07] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__898:0 -> ) (Reformat)
[04/18/2022-02:36:07] [V] [TRT] Tactic: 1002 Time: 0.09344
[04/18/2022-02:36:07] [V] [TRT] Tactic: 0 Time: 0.025728
[04/18/2022-02:36:07] [V] [TRT] Fastest Tactic: 0 Time: 0.025728
[04/18/2022-02:36:07] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1,512,8) -> Float(1024,128,128:32,64,1) ***************
[04/18/2022-02:36:07] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__898:0 -> ) (Reformat)
[04/18/2022-02:36:07] [V] [TRT] Tactic: 1002 Time: 0.093568
[04/18/2022-02:36:07] [V] [TRT] Tactic: 0 Time: 0.155904
[04/18/2022-02:36:07] [V] [TRT] Fastest Tactic: 1002 Time: 0.093568
[04/18/2022-02:36:07] [V] [TRT] *************** Autotuning Reformat: Float(2048,256,1:4,128,2) -> Float(8192,1024,128,64,1) ***************
[04/18/2022-02:36:07] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__898:0 -> ) (Reformat)
[04/18/2022-02:36:07] [V] [TRT] Tactic: 1002 Time: 0.092416
[04/18/2022-02:36:07] [V] [TRT] Tactic: 0 Time: 0.027392
[04/18/2022-02:36:07] [V] [TRT] Fastest Tactic: 0 Time: 0.027392
[04/18/2022-02:36:07] [V] [TRT] *************** Autotuning Reformat: Float(2048,256,1:4,128,2) -> Float(8192,1024,1,512,8) ***************
[04/18/2022-02:36:07] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__898:0 -> ) (Reformat)
[04/18/2022-02:36:07] [V] [TRT] Tactic: 1002 Time: 0.093056
[04/18/2022-02:36:07] [V] [TRT] Tactic: 0 Time: 0.025344
[04/18/2022-02:36:07] [V] [TRT] Fastest Tactic: 0 Time: 0.025344
[04/18/2022-02:36:07] [V] [TRT] *************** Autotuning Reformat: Float(2048,256,1:4,128,2) -> Float(1024,128,128:32,64,1) ***************
[04/18/2022-02:36:07] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__898:0 -> ) (Reformat)
[04/18/2022-02:36:07] [V] [TRT] Tactic: 1002 Time: 0.09344
[04/18/2022-02:36:07] [V] [TRT] Tactic: 0 Time: 0.156544
[04/18/2022-02:36:07] [V] [TRT] Fastest Tactic: 1002 Time: 0.09344
[04/18/2022-02:36:07] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,128:32,64,1) -> Float(8192,1024,128,64,1) ***************
[04/18/2022-02:36:07] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__898:0 -> ) (Reformat)
[04/18/2022-02:36:07] [V] [TRT] Tactic: 1002 Time: 0.094208
[04/18/2022-02:36:07] [V] [TRT] Tactic: 0 Time: 0.029312
[04/18/2022-02:36:07] [V] [TRT] Fastest Tactic: 0 Time: 0.029312
[04/18/2022-02:36:07] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,128:32,64,1) -> Float(8192,1024,1,512,8) ***************
[04/18/2022-02:36:07] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__898:0 -> ) (Reformat)
[04/18/2022-02:36:07] [V] [TRT] Tactic: 1002 Time: 0.09216
[04/18/2022-02:36:07] [V] [TRT] Tactic: 0 Time: 0.025472
[04/18/2022-02:36:07] [V] [TRT] Fastest Tactic: 0 Time: 0.025472
[04/18/2022-02:36:07] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,128:32,64,1) -> Float(2048,256,1:4,128,2) ***************
[04/18/2022-02:36:07] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__898:0 -> ) (Reformat)
[04/18/2022-02:36:07] [V] [TRT] Tactic: 1002 Time: 0.094208
[04/18/2022-02:36:07] [V] [TRT] Tactic: 0 Time: 0.027008
[04/18/2022-02:36:07] [V] [TRT] Fastest Tactic: 0 Time: 0.027008
[04/18/2022-02:36:07] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:07] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,128,64,1) -> Float(8192,1024,1024,1,512,8) ***************
[04/18/2022-02:36:07] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900:0) (Reformat)
[04/18/2022-02:36:07] [V] [TRT] Tactic: 1002 Time: 0.041472
[04/18/2022-02:36:07] [V] [TRT] Tactic: 0 Time: 0.024192
[04/18/2022-02:36:07] [V] [TRT] Fastest Tactic: 0 Time: 0.024192
[04/18/2022-02:36:07] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,128,64,1) -> Float(2048,256,256,1:4,128,2) ***************
[04/18/2022-02:36:07] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900:0) (Reformat)
[04/18/2022-02:36:07] [V] [TRT] Tactic: 1002 Time: 0.040832
[04/18/2022-02:36:07] [V] [TRT] Tactic: 0 Time: 0.02496
[04/18/2022-02:36:07] [V] [TRT] Fastest Tactic: 0 Time: 0.02496
[04/18/2022-02:36:07] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,128,64,1) -> Float(1024,128,128,128:32,64,1) ***************
[04/18/2022-02:36:07] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900:0) (Reformat)
[04/18/2022-02:36:07] [V] [TRT] Tactic: 1002 Time: 0.093952
[04/18/2022-02:36:07] [V] [TRT] Tactic: 0 Time: 0.154112
[04/18/2022-02:36:07] [V] [TRT] Fastest Tactic: 1002 Time: 0.093952
[04/18/2022-02:36:07] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,1,512,8) -> Float(8192,1024,1024,128,64,1) ***************
[04/18/2022-02:36:07] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900:0) (Reformat)
[04/18/2022-02:36:07] [V] [TRT] Tactic: 1002 Time: 0.093952
[04/18/2022-02:36:07] [V] [TRT] Tactic: 0 Time: 0.02688
[04/18/2022-02:36:07] [V] [TRT] Fastest Tactic: 0 Time: 0.02688
[04/18/2022-02:36:07] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,1,512,8) -> Float(2048,256,256,1:4,128,2) ***************
[04/18/2022-02:36:07] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900:0) (Reformat)
[04/18/2022-02:36:07] [V] [TRT] Tactic: 1002 Time: 0.093056
[04/18/2022-02:36:07] [V] [TRT] Tactic: 0 Time: 0.025728
[04/18/2022-02:36:07] [V] [TRT] Fastest Tactic: 0 Time: 0.025728
[04/18/2022-02:36:07] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,1,512,8) -> Float(1024,128,128,128:32,64,1) ***************
[04/18/2022-02:36:07] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900:0) (Reformat)
[04/18/2022-02:36:07] [V] [TRT] Tactic: 1002 Time: 0.092672
[04/18/2022-02:36:07] [V] [TRT] Tactic: 0 Time: 0.155904
[04/18/2022-02:36:07] [V] [TRT] Fastest Tactic: 1002 Time: 0.092672
[04/18/2022-02:36:07] [V] [TRT] *************** Autotuning Reformat: Float(2048,256,256,1:4,128,2) -> Float(8192,1024,1024,128,64,1) ***************
[04/18/2022-02:36:07] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900:0) (Reformat)
[04/18/2022-02:36:07] [V] [TRT] Tactic: 1002 Time: 0.092288
[04/18/2022-02:36:07] [V] [TRT] Tactic: 0 Time: 0.027648
[04/18/2022-02:36:07] [V] [TRT] Fastest Tactic: 0 Time: 0.027648
[04/18/2022-02:36:07] [V] [TRT] *************** Autotuning Reformat: Float(2048,256,256,1:4,128,2) -> Float(8192,1024,1024,1,512,8) ***************
[04/18/2022-02:36:07] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900:0) (Reformat)
[04/18/2022-02:36:07] [V] [TRT] Tactic: 1002 Time: 0.093184
[04/18/2022-02:36:07] [V] [TRT] Tactic: 0 Time: 0.02496
[04/18/2022-02:36:07] [V] [TRT] Fastest Tactic: 0 Time: 0.02496
[04/18/2022-02:36:07] [V] [TRT] *************** Autotuning Reformat: Float(2048,256,256,1:4,128,2) -> Float(1024,128,128,128:32,64,1) ***************
[04/18/2022-02:36:07] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900:0) (Reformat)
[04/18/2022-02:36:07] [V] [TRT] Tactic: 1002 Time: 0.092032
[04/18/2022-02:36:07] [V] [TRT] Tactic: 0 Time: 0.156928
[04/18/2022-02:36:07] [V] [TRT] Fastest Tactic: 1002 Time: 0.092032
[04/18/2022-02:36:07] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,128,128:32,64,1) -> Float(8192,1024,1024,128,64,1) ***************
[04/18/2022-02:36:07] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900:0) (Reformat)
[04/18/2022-02:36:07] [V] [TRT] Tactic: 1002 Time: 0.093696
[04/18/2022-02:36:07] [V] [TRT] Tactic: 0 Time: 0.02944
[04/18/2022-02:36:07] [V] [TRT] Fastest Tactic: 0 Time: 0.02944
[04/18/2022-02:36:07] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,128,128:32,64,1) -> Float(8192,1024,1024,1,512,8) ***************
[04/18/2022-02:36:07] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900:0) (Reformat)
[04/18/2022-02:36:07] [V] [TRT] Tactic: 1002 Time: 0.093056
[04/18/2022-02:36:07] [V] [TRT] Tactic: 0 Time: 0.0256
[04/18/2022-02:36:07] [V] [TRT] Fastest Tactic: 0 Time: 0.0256
[04/18/2022-02:36:07] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,128,128:32,64,1) -> Float(2048,256,256,1:4,128,2) ***************
[04/18/2022-02:36:07] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900:0) (Reformat)
[04/18/2022-02:36:07] [V] [TRT] Tactic: 1002 Time: 0.0928
[04/18/2022-02:36:07] [V] [TRT] Tactic: 0 Time: 0.027136
[04/18/2022-02:36:07] [V] [TRT] Fastest Tactic: 0 Time: 0.027136
[04/18/2022-02:36:07] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:07] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,128,64,1) -> Float(16384,2048,1024,128,64,1) ***************
[04/18/2022-02:36:07] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900:0 copy (Reformat)
[04/18/2022-02:36:07] [V] [TRT] Tactic: 1002 Time: 0.365952
[04/18/2022-02:36:07] [V] [TRT] Tactic: 0 Time: 0.025216
[04/18/2022-02:36:07] [V] [TRT] Fastest Tactic: 0 Time: 0.025216
[04/18/2022-02:36:07] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:07] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,128,64,1) -> Float(16384,2048,1024,1,512,8) ***************
[04/18/2022-02:36:07] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900:0 copy (Reformat)
[04/18/2022-02:36:07] [V] [TRT] Tactic: 1002 Time: 0.393088
[04/18/2022-02:36:07] [V] [TRT] Tactic: 0 Time: 0.025984
[04/18/2022-02:36:07] [V] [TRT] Fastest Tactic: 0 Time: 0.025984
[04/18/2022-02:36:07] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:07] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,128,64,1) -> Float(4096,512,256,1:4,128,2) ***************
[04/18/2022-02:36:07] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900:0 copy (Reformat)
[04/18/2022-02:36:07] [V] [TRT] Tactic: 1002 Time: 0.397184
[04/18/2022-02:36:07] [V] [TRT] Tactic: 0 Time: 0.027648
[04/18/2022-02:36:07] [V] [TRT] Fastest Tactic: 0 Time: 0.027648
[04/18/2022-02:36:07] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:07] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,128,64,1) -> Float(2048,256,128,128:32,64,1) ***************
[04/18/2022-02:36:07] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900:0 copy (Reformat)
[04/18/2022-02:36:07] [V] [TRT] Tactic: 1002 Time: 0.396288
[04/18/2022-02:36:07] [V] [TRT] Tactic: 0 Time: 0.05248
[04/18/2022-02:36:07] [V] [TRT] Fastest Tactic: 0 Time: 0.05248
[04/18/2022-02:36:07] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:07] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,1,512,8) -> Float(16384,2048,1024,128,64,1) ***************
[04/18/2022-02:36:07] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900:0 copy (Reformat)
[04/18/2022-02:36:07] [V] [TRT] Tactic: 1002 Time: 0.387456
[04/18/2022-02:36:07] [V] [TRT] Tactic: 0 Time: 0.027264
[04/18/2022-02:36:07] [V] [TRT] Fastest Tactic: 0 Time: 0.027264
[04/18/2022-02:36:07] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:07] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,1,512,8) -> Float(16384,2048,1024,1,512,8) ***************
[04/18/2022-02:36:07] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900:0 copy (Reformat)
[04/18/2022-02:36:07] [V] [TRT] Tactic: 1002 Time: 3.13638
[04/18/2022-02:36:08] [V] [TRT] Tactic: 0 Time: 0.02624
[04/18/2022-02:36:08] [V] [TRT] Fastest Tactic: 0 Time: 0.02624
[04/18/2022-02:36:08] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:08] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,1,512,8) -> Float(4096,512,256,1:4,128,2) ***************
[04/18/2022-02:36:08] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900:0 copy (Reformat)
[04/18/2022-02:36:08] [V] [TRT] Tactic: 1002 Time: 3.72544
[04/18/2022-02:36:08] [V] [TRT] Tactic: 0 Time: 0.029312
[04/18/2022-02:36:08] [V] [TRT] Fastest Tactic: 0 Time: 0.029312
[04/18/2022-02:36:08] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:08] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,1,512,8) -> Float(2048,256,128,128:32,64,1) ***************
[04/18/2022-02:36:08] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900:0 copy (Reformat)
[04/18/2022-02:36:08] [V] [TRT] Tactic: 1002 Time: 3.13523
[04/18/2022-02:36:08] [V] [TRT] Tactic: 0 Time: 0.057984
[04/18/2022-02:36:08] [V] [TRT] Fastest Tactic: 0 Time: 0.057984
[04/18/2022-02:36:08] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:08] [V] [TRT] *************** Autotuning Reformat: Float(2048,256,256,1:4,128,2) -> Float(16384,2048,1024,128,64,1) ***************
[04/18/2022-02:36:08] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900:0 copy (Reformat)
[04/18/2022-02:36:08] [V] [TRT] Tactic: 1002 Time: 0.393728
[04/18/2022-02:36:08] [V] [TRT] Tactic: 0 Time: 0.03008
[04/18/2022-02:36:08] [V] [TRT] Fastest Tactic: 0 Time: 0.03008
[04/18/2022-02:36:08] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:08] [V] [TRT] *************** Autotuning Reformat: Float(2048,256,256,1:4,128,2) -> Float(16384,2048,1024,1,512,8) ***************
[04/18/2022-02:36:08] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900:0 copy (Reformat)
[04/18/2022-02:36:08] [V] [TRT] Tactic: 1002 Time: 3.14214
[04/18/2022-02:36:08] [V] [TRT] Tactic: 0 Time: 0.028672
[04/18/2022-02:36:08] [V] [TRT] Fastest Tactic: 0 Time: 0.028672
[04/18/2022-02:36:08] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:08] [V] [TRT] *************** Autotuning Reformat: Float(2048,256,256,1:4,128,2) -> Float(4096,512,256,1:4,128,2) ***************
[04/18/2022-02:36:08] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900:0 copy (Reformat)
[04/18/2022-02:36:08] [V] [TRT] Tactic: 1002 Time: 3.14189
[04/18/2022-02:36:08] [V] [TRT] Tactic: 0 Time: 0.031616
[04/18/2022-02:36:08] [V] [TRT] Fastest Tactic: 0 Time: 0.031616
[04/18/2022-02:36:08] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:08] [V] [TRT] *************** Autotuning Reformat: Float(2048,256,256,1:4,128,2) -> Float(2048,256,128,128:32,64,1) ***************
[04/18/2022-02:36:08] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900:0 copy (Reformat)
[04/18/2022-02:36:08] [V] [TRT] Tactic: 1002 Time: 3.81261
[04/18/2022-02:36:08] [V] [TRT] Tactic: 0 Time: 0.059264
[04/18/2022-02:36:08] [V] [TRT] Fastest Tactic: 0 Time: 0.059264
[04/18/2022-02:36:08] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:08] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,128,128:32,64,1) -> Float(16384,2048,1024,128,64,1) ***************
[04/18/2022-02:36:08] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900:0 copy (Reformat)
[04/18/2022-02:36:08] [V] [TRT] Tactic: 1002 Time: 0.392704
[04/18/2022-02:36:08] [V] [TRT] Tactic: 0 Time: 0.032
[04/18/2022-02:36:08] [V] [TRT] Fastest Tactic: 0 Time: 0.032
[04/18/2022-02:36:08] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:08] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,128,128:32,64,1) -> Float(16384,2048,1024,1,512,8) ***************
[04/18/2022-02:36:08] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900:0 copy (Reformat)
[04/18/2022-02:36:08] [V] [TRT] Tactic: 1002 Time: 3.1424
[04/18/2022-02:36:08] [V] [TRT] Tactic: 0 Time: 0.029312
[04/18/2022-02:36:08] [V] [TRT] Fastest Tactic: 0 Time: 0.029312
[04/18/2022-02:36:08] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:08] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,128,128:32,64,1) -> Float(4096,512,256,1:4,128,2) ***************
[04/18/2022-02:36:08] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900:0 copy (Reformat)
[04/18/2022-02:36:08] [V] [TRT] Tactic: 1002 Time: 4.46605
[04/18/2022-02:36:08] [V] [TRT] Tactic: 0 Time: 0.030976
[04/18/2022-02:36:08] [V] [TRT] Fastest Tactic: 0 Time: 0.030976
[04/18/2022-02:36:08] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:08] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,128,128:32,64,1) -> Float(2048,256,128,128:32,64,1) ***************
[04/18/2022-02:36:08] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900:0 copy (Reformat)
[04/18/2022-02:36:08] [V] [TRT] Tactic: 1002 Time: 3.14163
[04/18/2022-02:36:08] [V] [TRT] Tactic: 0 Time: 0.064512
[04/18/2022-02:36:08] [V] [TRT] Fastest Tactic: 0 Time: 0.064512
[04/18/2022-02:36:08] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:08] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:08] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,128,64,1) -> Float(16384,2048,1024,128,64,1) ***************
[04/18/2022-02:36:08] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900:0 copy (Reformat)
[04/18/2022-02:36:08] [V] [TRT] Tactic: 1002 Time: 0.365568
[04/18/2022-02:36:08] [V] [TRT] Tactic: 0 Time: 0.026496
[04/18/2022-02:36:08] [V] [TRT] Fastest Tactic: 0 Time: 0.026496
[04/18/2022-02:36:08] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:08] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,128,64,1) -> Float(16384,2048,1024,1,512,8) ***************
[04/18/2022-02:36:08] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900:0 copy (Reformat)
[04/18/2022-02:36:08] [V] [TRT] Tactic: 1002 Time: 0.393984
[04/18/2022-02:36:08] [V] [TRT] Tactic: 0 Time: 0.028032
[04/18/2022-02:36:08] [V] [TRT] Fastest Tactic: 0 Time: 0.028032
[04/18/2022-02:36:08] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:08] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,128,64,1) -> Float(4096,512,256,1:4,128,2) ***************
[04/18/2022-02:36:08] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900:0 copy (Reformat)
[04/18/2022-02:36:08] [V] [TRT] Tactic: 1002 Time: 0.397312
[04/18/2022-02:36:08] [V] [TRT] Tactic: 0 Time: 0.028928
[04/18/2022-02:36:08] [V] [TRT] Fastest Tactic: 0 Time: 0.028928
[04/18/2022-02:36:08] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:08] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,128,64,1) -> Float(2048,256,128,128:32,64,1) ***************
[04/18/2022-02:36:08] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900:0 copy (Reformat)
[04/18/2022-02:36:08] [V] [TRT] Tactic: 1002 Time: 3.28538
[04/18/2022-02:36:08] [V] [TRT] Tactic: 0 Time: 0.155904
[04/18/2022-02:36:08] [V] [TRT] Fastest Tactic: 0 Time: 0.155904
[04/18/2022-02:36:08] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:08] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,1,512,8) -> Float(16384,2048,1024,128,64,1) ***************
[04/18/2022-02:36:08] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900:0 copy (Reformat)
[04/18/2022-02:36:08] [V] [TRT] Tactic: 1002 Time: 0.387968
[04/18/2022-02:36:08] [V] [TRT] Tactic: 0 Time: 0.029056
[04/18/2022-02:36:08] [V] [TRT] Fastest Tactic: 0 Time: 0.029056
[04/18/2022-02:36:08] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:08] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,1,512,8) -> Float(16384,2048,1024,1,512,8) ***************
[04/18/2022-02:36:08] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900:0 copy (Reformat)
[04/18/2022-02:36:09] [V] [TRT] Tactic: 1002 Time: 3.13626
[04/18/2022-02:36:09] [V] [TRT] Tactic: 0 Time: 0.027776
[04/18/2022-02:36:09] [V] [TRT] Fastest Tactic: 0 Time: 0.027776
[04/18/2022-02:36:09] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:09] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,1,512,8) -> Float(4096,512,256,1:4,128,2) ***************
[04/18/2022-02:36:09] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900:0 copy (Reformat)
[04/18/2022-02:36:09] [V] [TRT] Tactic: 1002 Time: 3.13498
[04/18/2022-02:36:09] [V] [TRT] Tactic: 0 Time: 0.029312
[04/18/2022-02:36:09] [V] [TRT] Fastest Tactic: 0 Time: 0.029312
[04/18/2022-02:36:09] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:09] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,1,512,8) -> Float(2048,256,128,128:32,64,1) ***************
[04/18/2022-02:36:09] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900:0 copy (Reformat)
[04/18/2022-02:36:09] [V] [TRT] Tactic: 1002 Time: 3.136
[04/18/2022-02:36:09] [V] [TRT] Tactic: 0 Time: 0.158464
[04/18/2022-02:36:09] [V] [TRT] Fastest Tactic: 0 Time: 0.158464
[04/18/2022-02:36:09] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:09] [V] [TRT] *************** Autotuning Reformat: Float(2048,256,256,1:4,128,2) -> Float(16384,2048,1024,128,64,1) ***************
[04/18/2022-02:36:09] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900:0 copy (Reformat)
[04/18/2022-02:36:09] [V] [TRT] Tactic: 1002 Time: 0.392704
[04/18/2022-02:36:09] [V] [TRT] Tactic: 0 Time: 0.029952
[04/18/2022-02:36:09] [V] [TRT] Fastest Tactic: 0 Time: 0.029952
[04/18/2022-02:36:09] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:09] [V] [TRT] *************** Autotuning Reformat: Float(2048,256,256,1:4,128,2) -> Float(16384,2048,1024,1,512,8) ***************
[04/18/2022-02:36:09] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900:0 copy (Reformat)
[04/18/2022-02:36:09] [V] [TRT] Tactic: 1002 Time: 3.72826
[04/18/2022-02:36:09] [V] [TRT] Tactic: 0 Time: 0.029056
[04/18/2022-02:36:09] [V] [TRT] Fastest Tactic: 0 Time: 0.029056
[04/18/2022-02:36:09] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:09] [V] [TRT] *************** Autotuning Reformat: Float(2048,256,256,1:4,128,2) -> Float(4096,512,256,1:4,128,2) ***************
[04/18/2022-02:36:09] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900:0 copy (Reformat)
[04/18/2022-02:36:09] [V] [TRT] Tactic: 1002 Time: 3.14214
[04/18/2022-02:36:09] [V] [TRT] Tactic: 0 Time: 0.031744
[04/18/2022-02:36:09] [V] [TRT] Fastest Tactic: 0 Time: 0.031744
[04/18/2022-02:36:09] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:09] [V] [TRT] *************** Autotuning Reformat: Float(2048,256,256,1:4,128,2) -> Float(2048,256,128,128:32,64,1) ***************
[04/18/2022-02:36:09] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900:0 copy (Reformat) [04/18/2022-02:36:09] [V] [TRT] Tactic: 1002 Time: 3.93011 [04/18/2022-02:36:09] [V] [TRT] Tactic: 0 Time: 0.158208 [04/18/2022-02:36:09] [V] [TRT] Fastest Tactic: 0 Time: 0.158208 [04/18/2022-02:36:09] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:09] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,128,128:32,64,1) -> Float(16384,2048,1024,128,64,1) *************** [04/18/2022-02:36:09] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900:0 copy (Reformat) [04/18/2022-02:36:09] [V] [TRT] Tactic: 1002 Time: 0.392704 [04/18/2022-02:36:09] [V] [TRT] Tactic: 0 Time: 0.031616 [04/18/2022-02:36:09] [V] [TRT] Fastest Tactic: 0 Time: 0.031616 [04/18/2022-02:36:09] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:09] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,128,128:32,64,1) -> Float(16384,2048,1024,1,512,8) *************** [04/18/2022-02:36:09] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900:0 copy (Reformat) [04/18/2022-02:36:09] [V] [TRT] Tactic: 1002 Time: 3.14253 [04/18/2022-02:36:09] [V] [TRT] Tactic: 0 Time: 0.029312 [04/18/2022-02:36:09] [V] [TRT] Fastest Tactic: 0 Time: 0.029312 [04/18/2022-02:36:09] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:09] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,128,128:32,64,1) -> Float(4096,512,256,1:4,128,2) *************** 
[04/18/2022-02:36:09] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900:0 copy (Reformat) [04/18/2022-02:36:09] [V] [TRT] Tactic: 1002 Time: 3.14202 [04/18/2022-02:36:09] [V] [TRT] Tactic: 0 Time: 0.03136 [04/18/2022-02:36:09] [V] [TRT] Fastest Tactic: 0 Time: 0.03136 [04/18/2022-02:36:09] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:09] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,128,128:32,64,1) -> Float(2048,256,128,128:32,64,1) *************** [04/18/2022-02:36:09] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__900:0 copy (Reformat) [04/18/2022-02:36:09] [V] [TRT] Tactic: 1002 Time: 3.14163 [04/18/2022-02:36:09] [V] [TRT] Tactic: 0 Time: 0.164352 [04/18/2022-02:36:09] [V] [TRT] Fastest Tactic: 0 Time: 0.164352 [04/18/2022-02:36:09] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:09] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:09] [V] [TRT] *************** Autotuning Reformat: Float(16384,2048,1024,128,64,1) -> Float(16384,2048,1024,1,512,8) *************** [04/18/2022-02:36:09] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__901:0 -> ) (Reformat) [04/18/2022-02:36:09] [V] [TRT] Tactic: 1002 Time: 0.063616 [04/18/2022-02:36:09] [V] [TRT] Tactic: 0 Time: 0.032768 [04/18/2022-02:36:09] [V] [TRT] Fastest Tactic: 0 Time: 0.032768 [04/18/2022-02:36:09] [V] [TRT] *************** Autotuning Reformat: Float(16384,2048,1024,128,64,1) -> Float(4096,512,256,1:4,128,2) *************** 
[04/18/2022-02:36:09] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__901:0 -> ) (Reformat) [04/18/2022-02:36:09] [V] [TRT] Tactic: 1002 Time: 0.063104 [04/18/2022-02:36:09] [V] [TRT] Tactic: 0 Time: 0.035456 [04/18/2022-02:36:09] [V] [TRT] Fastest Tactic: 0 Time: 0.035456 [04/18/2022-02:36:09] [V] [TRT] *************** Autotuning Reformat: Float(16384,2048,1024,128,64,1) -> Float(2048,256,128,128:32,64,1) *************** [04/18/2022-02:36:09] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__901:0 -> ) (Reformat) [04/18/2022-02:36:09] [V] [TRT] Tactic: 1002 Time: 0.143616 [04/18/2022-02:36:09] [V] [TRT] Tactic: 0 Time: 0.284416 [04/18/2022-02:36:09] [V] [TRT] Fastest Tactic: 1002 Time: 0.143616 [04/18/2022-02:36:09] [V] [TRT] *************** Autotuning Reformat: Float(16384,2048,1024,1,512,8) -> Float(16384,2048,1024,128,64,1) *************** [04/18/2022-02:36:09] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__901:0 -> ) (Reformat) [04/18/2022-02:36:09] [V] [TRT] Tactic: 1002 Time: 0.143104 [04/18/2022-02:36:09] [V] [TRT] Tactic: 0 Time: 0.0384 [04/18/2022-02:36:09] [V] [TRT] Fastest Tactic: 0 Time: 0.0384 [04/18/2022-02:36:09] [V] [TRT] *************** Autotuning Reformat: Float(16384,2048,1024,1,512,8) -> Float(4096,512,256,1:4,128,2) *************** [04/18/2022-02:36:09] [V] [TRT] --------------- Timing Runner: Optimizer 
Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__901:0 -> ) (Reformat) [04/18/2022-02:36:09] [V] [TRT] Tactic: 1002 Time: 0.142336 [04/18/2022-02:36:09] [V] [TRT] Tactic: 0 Time: 0.03584 [04/18/2022-02:36:09] [V] [TRT] Fastest Tactic: 0 Time: 0.03584 [04/18/2022-02:36:09] [V] [TRT] *************** Autotuning Reformat: Float(16384,2048,1024,1,512,8) -> Float(2048,256,128,128:32,64,1) *************** [04/18/2022-02:36:09] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__901:0 -> ) (Reformat) [04/18/2022-02:36:09] [V] [TRT] Tactic: 1002 Time: 0.141568 [04/18/2022-02:36:09] [V] [TRT] Tactic: 0 Time: 0.289792 [04/18/2022-02:36:09] [V] [TRT] Fastest Tactic: 1002 Time: 0.141568 [04/18/2022-02:36:09] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,256,1:4,128,2) -> Float(16384,2048,1024,128,64,1) *************** [04/18/2022-02:36:09] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__901:0 -> ) (Reformat) [04/18/2022-02:36:09] [V] [TRT] Tactic: 1002 Time: 0.144128 [04/18/2022-02:36:09] [V] [TRT] Tactic: 0 Time: 0.037888 [04/18/2022-02:36:09] [V] [TRT] Fastest Tactic: 0 Time: 0.037888 [04/18/2022-02:36:09] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,256,1:4,128,2) -> Float(16384,2048,1024,1,512,8) *************** [04/18/2022-02:36:09] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__901:0 -> ) (Reformat) [04/18/2022-02:36:09] [V] [TRT] 
Tactic: 1002 Time: 0.141056 [04/18/2022-02:36:09] [V] [TRT] Tactic: 0 Time: 0.036096 [04/18/2022-02:36:09] [V] [TRT] Fastest Tactic: 0 Time: 0.036096 [04/18/2022-02:36:09] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,256,1:4,128,2) -> Float(2048,256,128,128:32,64,1) *************** [04/18/2022-02:36:09] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__901:0 -> ) (Reformat) [04/18/2022-02:36:09] [V] [TRT] Tactic: 1002 Time: 0.14208 [04/18/2022-02:36:09] [V] [TRT] Tactic: 0 Time: 0.286848 [04/18/2022-02:36:09] [V] [TRT] Fastest Tactic: 1002 Time: 0.14208 [04/18/2022-02:36:09] [V] [TRT] *************** Autotuning Reformat: Float(2048,256,128,128:32,64,1) -> Float(16384,2048,1024,128,64,1) *************** [04/18/2022-02:36:09] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__901:0 -> ) (Reformat) [04/18/2022-02:36:09] [V] [TRT] Tactic: 1002 Time: 0.143744 [04/18/2022-02:36:09] [V] [TRT] Tactic: 0 Time: 0.0416 [04/18/2022-02:36:09] [V] [TRT] Fastest Tactic: 0 Time: 0.0416 [04/18/2022-02:36:09] [V] [TRT] *************** Autotuning Reformat: Float(2048,256,128,128:32,64,1) -> Float(16384,2048,1024,1,512,8) *************** [04/18/2022-02:36:09] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__901:0 -> ) (Reformat) [04/18/2022-02:36:10] [V] [TRT] Tactic: 1002 Time: 0.139776 [04/18/2022-02:36:10] [V] [TRT] Tactic: 0 Time: 0.035968 [04/18/2022-02:36:10] [V] [TRT] Fastest Tactic: 0 Time: 0.035968 [04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: 
Float(2048,256,128,128:32,64,1) -> Float(4096,512,256,1:4,128,2) *************** [04/18/2022-02:36:10] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/input_1_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__901:0 -> ) (Reformat) [04/18/2022-02:36:10] [V] [TRT] Tactic: 1002 Time: 0.14144 [04/18/2022-02:36:10] [V] [TRT] Tactic: 0 Time: 0.038912 [04/18/2022-02:36:10] [V] [TRT] Fastest Tactic: 0 Time: 0.038912 [04/18/2022-02:36:10] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(32768,2048,128,2,1) *************** [04/18/2022-02:36:10] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/stack_Unsqueeze__903:0 copy (Reformat) [04/18/2022-02:36:10] [V] [TRT] Tactic: 1002 Time: 0.743936 [04/18/2022-02:36:10] [V] [TRT] Tactic: 0 Time: 0.032256 [04/18/2022-02:36:10] [V] [TRT] Fastest Tactic: 0 Time: 0.032256 [04/18/2022-02:36:10] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,16,16) -> Float(32768,2048,128,2,1) *************** [04/18/2022-02:36:10] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/stack_Unsqueeze__903:0 copy (Reformat) [04/18/2022-02:36:10] [V] [TRT] Tactic: 1002 Time: 0.361216 [04/18/2022-02:36:10] [V] [TRT] Tactic: 0 Time: 0.033792 [04/18/2022-02:36:10] [V] [TRT] Fastest Tactic: 0 Time: 0.033792 [04/18/2022-02:36:10] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,4,4) -> Float(32768,2048,128,2,1) *************** [04/18/2022-02:36:10] [V] [TRT] --------------- Timing Runner: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/stack_Unsqueeze__903:0 copy (Reformat) [04/18/2022-02:36:10] [V] [TRT] Tactic: 1002 Time: 0.361856 [04/18/2022-02:36:10] [V] [TRT] Tactic: 0 Time: 0.035712 [04/18/2022-02:36:10] [V] [TRT] Fastest Tactic: 0 Time: 0.035712 [04/18/2022-02:36:10] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,1,1) -> Float(32768,2048,128,2,1) *************** [04/18/2022-02:36:10] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/stack_Unsqueeze__903:0 copy (Reformat) [04/18/2022-02:36:10] [V] [TRT] Tactic: 1002 Time: 0.361984 [04/18/2022-02:36:10] [V] [TRT] Tactic: 0 Time: 0.039808 [04/18/2022-02:36:10] [V] [TRT] Fastest Tactic: 0 Time: 0.039808 [04/18/2022-02:36:10] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:10] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(32768,2048,128,2,1) *************** [04/18/2022-02:36:10] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/stack_Unsqueeze__904:0 copy (Reformat) [04/18/2022-02:36:10] [V] [TRT] Tactic: 1002 Time: 0.743424 [04/18/2022-02:36:10] [V] [TRT] Tactic: 0 Time: 0.032256 [04/18/2022-02:36:10] [V] [TRT] Fastest Tactic: 0 Time: 0.032256 [04/18/2022-02:36:10] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,16,16) -> Float(32768,2048,128,2,1) *************** [04/18/2022-02:36:10] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/stack_Unsqueeze__904:0 copy (Reformat) [04/18/2022-02:36:10] [V] [TRT] Tactic: 1002 Time: 0.360704 
[04/18/2022-02:36:10] [V] [TRT] Tactic: 0 Time: 0.03392 [04/18/2022-02:36:10] [V] [TRT] Fastest Tactic: 0 Time: 0.03392 [04/18/2022-02:36:10] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,4,4) -> Float(32768,2048,128,2,1) *************** [04/18/2022-02:36:10] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/stack_Unsqueeze__904:0 copy (Reformat) [04/18/2022-02:36:10] [V] [TRT] Tactic: 1002 Time: 0.361216 [04/18/2022-02:36:10] [V] [TRT] Tactic: 0 Time: 0.035712 [04/18/2022-02:36:10] [V] [TRT] Fastest Tactic: 0 Time: 0.035712 [04/18/2022-02:36:10] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,1,1) -> Float(32768,2048,128,2,1) *************** [04/18/2022-02:36:10] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/stack_Unsqueeze__904:0 copy (Reformat) [04/18/2022-02:36:10] [V] [TRT] Tactic: 1002 Time: 0.361088 [04/18/2022-02:36:10] [V] [TRT] Tactic: 0 Time: 0.039424 [04/18/2022-02:36:10] [V] [TRT] Fastest Tactic: 0 Time: 0.039424 [04/18/2022-02:36:10] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:10] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:10] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:10] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(16384,1024,1,16,16) *************** [04/18/2022-02:36:10] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul:0) (Reformat) [04/18/2022-02:36:10] [V] [TRT] Tactic: 1002 Time: 0.041472 
[04/18/2022-02:36:10] [V] [TRT] Tactic: 0 Time: 0.033152 [04/18/2022-02:36:10] [V] [TRT] Fastest Tactic: 0 Time: 0.033152 [04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(4096,256,1:4,4,4) *************** [04/18/2022-02:36:10] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul:0) (Reformat) [04/18/2022-02:36:10] [V] [TRT] Tactic: 1002 Time: 0.040704 [04/18/2022-02:36:10] [V] [TRT] Tactic: 0 Time: 0.035584 [04/18/2022-02:36:10] [V] [TRT] Fastest Tactic: 0 Time: 0.035584 [04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(1024,64,64:32,1,1) *************** [04/18/2022-02:36:10] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul:0) (Reformat) [04/18/2022-02:36:10] [V] [TRT] Tactic: 1002 Time: 0.09856 [04/18/2022-02:36:10] [V] [TRT] Tactic: 0 Time: 0.154112 [04/18/2022-02:36:10] [V] [TRT] Fastest Tactic: 1002 Time: 0.09856 [04/18/2022-02:36:10] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(16384,1024,1,16,16) *************** [04/18/2022-02:36:10] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul:0 -> ) (Reformat) [04/18/2022-02:36:10] [V] [TRT] Tactic: 1002 Time: 0.041216 [04/18/2022-02:36:10] [V] [TRT] Tactic: 0 Time: 0.032768 [04/18/2022-02:36:10] [V] [TRT] Fastest Tactic: 0 Time: 0.032768 [04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(4096,256,1:4,4,4) *************** [04/18/2022-02:36:10] [V] [TRT] --------------- Timing Runner: Optimizer 
Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul:0 -> ) (Reformat) [04/18/2022-02:36:10] [V] [TRT] Tactic: 1002 Time: 0.041088 [04/18/2022-02:36:10] [V] [TRT] Tactic: 0 Time: 0.035712 [04/18/2022-02:36:10] [V] [TRT] Fastest Tactic: 0 Time: 0.035712 [04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(1024,64,64:32,1,1) *************** [04/18/2022-02:36:10] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul:0 -> ) (Reformat) [04/18/2022-02:36:10] [V] [TRT] Tactic: 1002 Time: 0.096512 [04/18/2022-02:36:10] [V] [TRT] Tactic: 0 Time: 0.155392 [04/18/2022-02:36:10] [V] [TRT] Fastest Tactic: 1002 Time: 0.096512 [04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,16,16) -> Float(16384,1024,64,1,1) *************** [04/18/2022-02:36:10] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul:0 -> ) (Reformat) [04/18/2022-02:36:10] [V] [TRT] Tactic: 1002 Time: 0.096768 [04/18/2022-02:36:10] [V] [TRT] Tactic: 0 Time: 0.032256 [04/18/2022-02:36:10] [V] [TRT] Fastest Tactic: 0 Time: 0.032256 [04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,16,16) -> Float(4096,256,1:4,4,4) *************** [04/18/2022-02:36:10] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul:0 -> ) (Reformat) [04/18/2022-02:36:10] [V] [TRT] Tactic: 1002 Time: 0.093312 [04/18/2022-02:36:10] [V] [TRT] Tactic: 0 Time: 0.035456 [04/18/2022-02:36:10] [V] [TRT] Fastest Tactic: 0 Time: 0.035456 [04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,16,16) -> Float(1024,64,64:32,1,1) *************** [04/18/2022-02:36:10] [V] [TRT] --------------- Timing Runner: 
Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul:0 -> ) (Reformat) [04/18/2022-02:36:10] [V] [TRT] Tactic: 1002 Time: 0.092544 [04/18/2022-02:36:10] [V] [TRT] Tactic: 0 Time: 0.160512 [04/18/2022-02:36:10] [V] [TRT] Fastest Tactic: 1002 Time: 0.092544 [04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,4,4) -> Float(16384,1024,64,1,1) *************** [04/18/2022-02:36:10] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul:0 -> ) (Reformat) [04/18/2022-02:36:10] [V] [TRT] Tactic: 1002 Time: 0.094848 [04/18/2022-02:36:10] [V] [TRT] Tactic: 0 Time: 0.0352 [04/18/2022-02:36:10] [V] [TRT] Fastest Tactic: 0 Time: 0.0352 [04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,4,4) -> Float(16384,1024,1,16,16) *************** [04/18/2022-02:36:10] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul:0 -> ) (Reformat) [04/18/2022-02:36:10] [V] [TRT] Tactic: 1002 Time: 0.093696 [04/18/2022-02:36:10] [V] [TRT] Tactic: 0 Time: 0.03584 [04/18/2022-02:36:10] [V] [TRT] Fastest Tactic: 0 Time: 0.03584 [04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,4,4) -> Float(1024,64,64:32,1,1) *************** [04/18/2022-02:36:10] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul:0 -> ) (Reformat) [04/18/2022-02:36:10] [V] [TRT] Tactic: 1002 Time: 0.0928 [04/18/2022-02:36:10] [V] [TRT] Tactic: 0 Time: 0.16128 [04/18/2022-02:36:10] [V] [TRT] Fastest Tactic: 1002 Time: 0.0928 [04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,1,1) -> Float(16384,1024,64,1,1) *************** [04/18/2022-02:36:10] [V] [TRT] --------------- Timing Runner: 
Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul:0 -> ) (Reformat) [04/18/2022-02:36:10] [V] [TRT] Tactic: 1002 Time: 0.095104 [04/18/2022-02:36:10] [V] [TRT] Tactic: 0 Time: 0.0384 [04/18/2022-02:36:10] [V] [TRT] Fastest Tactic: 0 Time: 0.0384 [04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,1,1) -> Float(16384,1024,1,16,16) *************** [04/18/2022-02:36:10] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul:0 -> ) (Reformat) [04/18/2022-02:36:10] [V] [TRT] Tactic: 1002 Time: 0.093184 [04/18/2022-02:36:10] [V] [TRT] Tactic: 0 Time: 0.035584 [04/18/2022-02:36:10] [V] [TRT] Fastest Tactic: 0 Time: 0.035584 [04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,1,1) -> Float(4096,256,1:4,4,4) *************** [04/18/2022-02:36:10] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/MatMul:0 -> ) (Reformat) [04/18/2022-02:36:10] [V] [TRT] Tactic: 1002 Time: 0.093696 [04/18/2022-02:36:10] [V] [TRT] Tactic: 0 Time: 0.039552 [04/18/2022-02:36:10] [V] [TRT] Fastest Tactic: 0 Time: 0.039552 [04/18/2022-02:36:10] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(16384,1,1024,16) *************** [04/18/2022-02:36:10] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/Squeeze:0 -> ) (Reformat) [04/18/2022-02:36:10] [V] [TRT] Tactic: 1002 Time: 0.041216 [04/18/2022-02:36:10] [V] [TRT] Tactic: 0 Time: 0.033152 [04/18/2022-02:36:10] [V] [TRT] Fastest Tactic: 0 Time: 0.033152 [04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(4096,1:4,256,4) 
*************** [04/18/2022-02:36:10] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/Squeeze:0 -> ) (Reformat) [04/18/2022-02:36:10] [V] [TRT] Tactic: 1002 Time: 0.0416 [04/18/2022-02:36:10] [V] [TRT] Tactic: 0 Time: 0.035328 [04/18/2022-02:36:10] [V] [TRT] Fastest Tactic: 0 Time: 0.035328 [04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(1024,1024:32,64,1) *************** [04/18/2022-02:36:10] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/Squeeze:0 -> ) (Reformat) [04/18/2022-02:36:10] [V] [TRT] Tactic: 1002 Time: 0.095616 [04/18/2022-02:36:10] [V] [TRT] Tactic: 0 Time: 0.157056 [04/18/2022-02:36:10] [V] [TRT] Fastest Tactic: 1002 Time: 0.095616 [04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(1:4,2048,128,2) *************** [04/18/2022-02:36:10] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/Squeeze:0 -> ) (Reformat) [04/18/2022-02:36:10] [V] [TRT] Tactic: 1002 Time: 0.064896 [04/18/2022-02:36:10] [V] [TRT] Tactic: 0 Time: 0.035328 [04/18/2022-02:36:10] [V] [TRT] Fastest Tactic: 0 Time: 0.035328 [04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(16384,1024,64,1) *************** [04/18/2022-02:36:10] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/Squeeze:0 -> ) (Reformat) [04/18/2022-02:36:10] [V] [TRT] Tactic: 1002 Time: 0.096 [04/18/2022-02:36:10] [V] [TRT] Tactic: 0 Time: 0.03584 [04/18/2022-02:36:10] [V] [TRT] Fastest Tactic: 0 Time: 0.03584 [04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(4096,1:4,256,4) *************** 
[04/18/2022-02:36:10] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/Squeeze:0 -> ) (Reformat)
[04/18/2022-02:36:10] [V] [TRT] Tactic: 1002 Time: 0.097792
[04/18/2022-02:36:10] [V] [TRT] Tactic: 0 Time: 0.035328
[04/18/2022-02:36:10] [V] [TRT] Fastest Tactic: 0 Time: 0.035328
[04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(1024,1024:32,64,1) ***************
[04/18/2022-02:36:10] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/Squeeze:0 -> ) (Reformat)
[04/18/2022-02:36:10] [V] [TRT] Tactic: 1002 Time: 0.093696
[04/18/2022-02:36:10] [V] [TRT] Tactic: 0 Time: 0.162432
[04/18/2022-02:36:10] [V] [TRT] Fastest Tactic: 1002 Time: 0.093696
[04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(1:4,2048,128,2) ***************
[04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(16384,1024,64,1) ***************
[04/18/2022-02:36:10] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/Squeeze:0 -> ) (Reformat)
[04/18/2022-02:36:10] [V] [TRT] Tactic: 1002 Time: 0.09536
[04/18/2022-02:36:10] [V] [TRT] Tactic: 0 Time: 0.036608
[04/18/2022-02:36:10] [V] [TRT] Fastest Tactic: 0 Time: 0.036608
[04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(16384,1,1024,16) ***************
[04/18/2022-02:36:10] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/Squeeze:0 -> ) (Reformat)
[04/18/2022-02:36:10] [V] [TRT] Tactic: 1002 Time: 0.094208
[04/18/2022-02:36:10] [V] [TRT] Tactic: 0 Time: 0.035968
[04/18/2022-02:36:10] [V] [TRT] Fastest Tactic: 0 Time: 0.035968
[04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(1024,1024:32,64,1) ***************
[04/18/2022-02:36:10] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/Squeeze:0 -> ) (Reformat)
[04/18/2022-02:36:10] [V] [TRT] Tactic: 1002 Time: 0.093952
[04/18/2022-02:36:10] [V] [TRT] Tactic: 0 Time: 0.162688
[04/18/2022-02:36:10] [V] [TRT] Fastest Tactic: 1002 Time: 0.093952
[04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(1:4,2048,128,2) ***************
[04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(16384,1024,64,1) ***************
[04/18/2022-02:36:10] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/Squeeze:0 -> ) (Reformat)
[04/18/2022-02:36:10] [V] [TRT] Tactic: 1002 Time: 0.09472
[04/18/2022-02:36:10] [V] [TRT] Tactic: 0 Time: 0.0416
[04/18/2022-02:36:10] [V] [TRT] Fastest Tactic: 0 Time: 0.0416
[04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(16384,1,1024,16) ***************
[04/18/2022-02:36:10] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/Squeeze:0 -> ) (Reformat)
[04/18/2022-02:36:10] [V] [TRT] Tactic: 1002 Time: 0.093696
[04/18/2022-02:36:10] [V] [TRT] Tactic: 0 Time: 0.035456
[04/18/2022-02:36:10] [V] [TRT] Fastest Tactic: 0 Time: 0.035456
[04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(4096,1:4,256,4) ***************
[04/18/2022-02:36:10] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/Squeeze:0 -> ) (Reformat)
[04/18/2022-02:36:10] [V] [TRT] Tactic: 1002 Time: 0.093824
[04/18/2022-02:36:10] [V] [TRT] Tactic: 0 Time: 0.038912
[04/18/2022-02:36:10] [V] [TRT] Fastest Tactic: 0 Time: 0.038912
[04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(1:4,2048,128,2) ***************
[04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,128,2) -> Float(16384,1024,64,1) ***************
[04/18/2022-02:36:10] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/combine/Squeeze:0 -> ) (Reformat)
[04/18/2022-02:36:10] [V] [TRT] Tactic: 1002 Time: 0.143488
[04/18/2022-02:36:10] [V] [TRT] Tactic: 0 Time: 0.037632
[04/18/2022-02:36:10] [V] [TRT] Fastest Tactic: 0 Time: 0.037632
[04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,128,2) -> Float(16384,1,1024,16) ***************
[04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,128,2) -> Float(4096,1:4,256,4) ***************
[04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,128,2) -> Float(1024,1024:32,64,1) ***************
[04/18/2022-02:36:10] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(16384,1,1024,16) ***************
[04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(4096,1:4,256,4) ***************
[04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(1024,1024:32,64,1) ***************
[04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(16384,1024,64,1) ***************
[04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(4096,1:4,256,4) ***************
[04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(1024,1024:32,64,1) ***************
[04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(16384,1024,64,1) ***************
[04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(16384,1,1024,16) ***************
[04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(1024,1024:32,64,1) ***************
[04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(16384,1024,64,1) ***************
[04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(16384,1,1024,16) ***************
[04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(4096,1:4,256,4) ***************
[04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,128,2) -> Float(16384,1024,64,1) ***************
[04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,128,2) -> Float(16384,1,1024,16) ***************
[04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,128,2) -> Float(4096,1:4,256,4) ***************
[04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,128,2) -> Float(1024,1024:32,64,1) ***************
[04/18/2022-02:36:10] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:36:10] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__906:0 -> ) (Reformat)
[04/18/2022-02:36:10] [V] [TRT] Tactic: 1002 Time: 0.051456
[04/18/2022-02:36:10] [V] [TRT] Tactic: 0 Time: 0.037632
[04/18/2022-02:36:10] [V] [TRT] Fastest Tactic: 0 Time: 0.037632
[04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:36:10] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__906:0 -> ) (Reformat)
[04/18/2022-02:36:10] [V] [TRT] Tactic: 1002 Time: 0.049408
[04/18/2022-02:36:10] [V] [TRT] Tactic: 0 Time: 0.038656
[04/18/2022-02:36:10] [V] [TRT] Fastest Tactic: 0 Time: 0.038656
[04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(16384,256,16,1) ***************
[04/18/2022-02:36:10] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:36:10] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__906:0 -> ) (Reformat)
[04/18/2022-02:36:11] [V] [TRT] Tactic: 1002 Time: 0.046976
[04/18/2022-02:36:11] [V] [TRT] Tactic: 0 Time: 0.035968
[04/18/2022-02:36:11] [V] [TRT] Fastest Tactic: 0 Time: 0.035968
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(16384,256,16,1) ***************
[04/18/2022-02:36:11] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__906:0 -> ) (Reformat)
[04/18/2022-02:36:11] [V] [TRT] Tactic: 1002 Time: 0.062848
[04/18/2022-02:36:11] [V] [TRT] Tactic: 0 Time: 0.041856
[04/18/2022-02:36:11] [V] [TRT] Fastest Tactic: 0 Time: 0.041856
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:36:11] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__906:0 -> ) (Reformat)
[04/18/2022-02:36:11] [V] [TRT] Tactic: 1002 Time: 0.048256
[04/18/2022-02:36:11] [V] [TRT] Tactic: 0 Time: 0.035968
[04/18/2022-02:36:11] [V] [TRT] Fastest Tactic: 0 Time: 0.035968
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:36:11] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/separable_conv/separable_conv2d/depthwise__906:0 -> ) (Reformat)
[04/18/2022-02:36:11] [V] [TRT] Tactic: 1002 Time: 0.049408
[04/18/2022-02:36:11] [V] [TRT] Tactic: 0 Time: 0.039168
[04/18/2022-02:36:11] [V] [TRT] Fastest Tactic: 0 Time: 0.039168
[04/18/2022-02:36:11] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(16384,256,16,1) ***************
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,256,16,1) ***************
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:36:11] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(512,256:32,16,1) ***************
[04/18/2022-02:36:11] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/batchnorm/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:36:11] [V] [TRT] Tactic: 1002 Time: 0.04992
[04/18/2022-02:36:11] [V] [TRT] Tactic: 0 Time: 0.091648
[04/18/2022-02:36:11] [V] [TRT] Fastest Tactic: 1002 Time: 0.04992
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(16384,256,16,1) ***************
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(512,256:32,16,1) ***************
[04/18/2022-02:36:11] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/batchnorm/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:36:11] [V] [TRT] Tactic: 1002 Time: 0.047104
[04/18/2022-02:36:11] [V] [TRT] Tactic: 0 Time: 0.112384
[04/18/2022-02:36:11] [V] [TRT] Fastest Tactic: 1002 Time: 0.047104
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,256,16,1) ***************
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(512,256:32,16,1) ***************
[04/18/2022-02:36:11] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_03/1_dn_lvl_5/post_combine/batchnorm/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:36:11] [V] [TRT] Tactic: 1002 Time: 0.047104
[04/18/2022-02:36:11] [V] [TRT] Tactic: 0 Time: 0.112768
[04/18/2022-02:36:11] [V] [TRT] Fastest Tactic: 1002 Time: 0.047104
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(16384,256,16,1) ***************
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:36:11] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(16384,1,1024,16) ***************
[04/18/2022-02:36:11] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> Transpose__6317:0) (Reformat)
[04/18/2022-02:36:11] [V] [TRT] Tactic: 1002 Time: 0.04096
[04/18/2022-02:36:11] [V] [TRT] Tactic: 0 Time: 0.033152
[04/18/2022-02:36:11] [V] [TRT] Fastest Tactic: 0 Time: 0.033152
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(4096,1:4,256,4) ***************
[04/18/2022-02:36:11] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> Transpose__6317:0) (Reformat)
[04/18/2022-02:36:11] [V] [TRT] Tactic: 1002 Time: 0.04096
[04/18/2022-02:36:11] [V] [TRT] Tactic: 0 Time: 0.035328
[04/18/2022-02:36:11] [V] [TRT] Fastest Tactic: 0 Time: 0.035328
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(1024,1024:32,64,1) ***************
[04/18/2022-02:36:11] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> Transpose__6317:0) (Reformat)
[04/18/2022-02:36:11] [V] [TRT] Tactic: 1002 Time: 0.095872
[04/18/2022-02:36:11] [V] [TRT] Tactic: 0 Time: 0.15552
[04/18/2022-02:36:11] [V] [TRT] Fastest Tactic: 1002 Time: 0.095872
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(16384,1024,64,1) ***************
[04/18/2022-02:36:11] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> Transpose__6317:0) (Reformat)
[04/18/2022-02:36:11] [V] [TRT] Tactic: 1002 Time: 0.093952
[04/18/2022-02:36:11] [V] [TRT] Tactic: 0 Time: 0.036224
[04/18/2022-02:36:11] [V] [TRT] Fastest Tactic: 0 Time: 0.036224
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(4096,1:4,256,4) ***************
[04/18/2022-02:36:11] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> Transpose__6317:0) (Reformat)
[04/18/2022-02:36:11] [V] [TRT] Tactic: 1002 Time: 0.093824
[04/18/2022-02:36:11] [V] [TRT] Tactic: 0 Time: 0.034816
[04/18/2022-02:36:11] [V] [TRT] Fastest Tactic: 0 Time: 0.034816
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(1024,1024:32,64,1) ***************
[04/18/2022-02:36:11] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> Transpose__6317:0) (Reformat)
[04/18/2022-02:36:11] [V] [TRT] Tactic: 1002 Time: 0.093696
[04/18/2022-02:36:11] [V] [TRT] Tactic: 0 Time: 0.159616
[04/18/2022-02:36:11] [V] [TRT] Fastest Tactic: 1002 Time: 0.093696
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(16384,1024,64,1) ***************
[04/18/2022-02:36:11] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> Transpose__6317:0) (Reformat)
[04/18/2022-02:36:11] [V] [TRT] Tactic: 1002 Time: 0.092544
[04/18/2022-02:36:11] [V] [TRT] Tactic: 0 Time: 0.036608
[04/18/2022-02:36:11] [V] [TRT] Fastest Tactic: 0 Time: 0.036608
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(16384,1,1024,16) ***************
[04/18/2022-02:36:11] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> Transpose__6317:0) (Reformat)
[04/18/2022-02:36:11] [V] [TRT] Tactic: 1002 Time: 0.0928
[04/18/2022-02:36:11] [V] [TRT] Tactic: 0 Time: 0.034688
[04/18/2022-02:36:11] [V] [TRT] Fastest Tactic: 0 Time: 0.034688
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(1024,1024:32,64,1) ***************
[04/18/2022-02:36:11] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> Transpose__6317:0) (Reformat)
[04/18/2022-02:36:11] [V] [TRT] Tactic: 1002 Time: 0.092416
[04/18/2022-02:36:11] [V] [TRT] Tactic: 0 Time: 0.162048
[04/18/2022-02:36:11] [V] [TRT] Fastest Tactic: 1002 Time: 0.092416
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(16384,1024,64,1) ***************
[04/18/2022-02:36:11] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> Transpose__6317:0) (Reformat)
[04/18/2022-02:36:11] [V] [TRT] Tactic: 1002 Time: 0.094592
[04/18/2022-02:36:11] [V] [TRT] Tactic: 0 Time: 0.040192
[04/18/2022-02:36:11] [V] [TRT] Fastest Tactic: 0 Time: 0.040192
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(16384,1,1024,16) ***************
[04/18/2022-02:36:11] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> Transpose__6317:0) (Reformat)
[04/18/2022-02:36:11] [V] [TRT] Tactic: 1002 Time: 0.092544
[04/18/2022-02:36:11] [V] [TRT] Tactic: 0 Time: 0.03392
[04/18/2022-02:36:11] [V] [TRT] Fastest Tactic: 0 Time: 0.03392
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(4096,1:4,256,4) ***************
[04/18/2022-02:36:11] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> Transpose__6317:0) (Reformat)
[04/18/2022-02:36:11] [V] [TRT] Tactic: 1002 Time: 0.092544
[04/18/2022-02:36:11] [V] [TRT] Tactic: 0 Time: 0.037888
[04/18/2022-02:36:11] [V] [TRT] Fastest Tactic: 0 Time: 0.037888
[04/18/2022-02:36:11] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(16384,1,1024,16) ***************
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(4096,1:4,256,4) ***************
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(1024,1024:32,64,1) ***************
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(16384,1024,64,1) ***************
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(4096,1:4,256,4) ***************
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(1024,1024:32,64,1) ***************
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(16384,1024,64,1) ***************
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(16384,1,1024,16) ***************
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(1024,1024:32,64,1) ***************
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(16384,1024,64,1) ***************
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(16384,1,1024,16) ***************
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(4096,1:4,256,4) ***************
[04/18/2022-02:36:11] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(16384,1,1024,16) ***************
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(4096,1:4,256,4) ***************
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(1024,1024:32,64,1) ***************
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(16384,1024,64,1) ***************
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(4096,1:4,256,4) ***************
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(1024,1024:32,64,1) ***************
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(16384,1024,64,1) ***************
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(16384,1,1024,16) ***************
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(1024,1024:32,64,1) ***************
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(16384,1024,64,1) ***************
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(16384,1,1024,16) ***************
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(4096,1:4,256,4) ***************
[04/18/2022-02:36:11] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,64,1) -> Float(16384,1024,1,1024,16) ***************
[04/18/2022-02:36:11] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912:0) (Reformat)
[04/18/2022-02:36:11] [V] [TRT] Tactic: 1002 Time: 0.039168
[04/18/2022-02:36:11] [V] [TRT] Tactic: 0 Time: 0.031488
[04/18/2022-02:36:11] [V] [TRT] Fastest Tactic: 0 Time: 0.031488
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,64,1) -> Float(4096,256,1:4,256,4) ***************
[04/18/2022-02:36:11] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912:0) (Reformat)
[04/18/2022-02:36:11] [V] [TRT] Tactic: 1002 Time: 0.039424
[04/18/2022-02:36:11] [V] [TRT] Tactic: 0 Time: 0.03392
[04/18/2022-02:36:11] [V] [TRT] Fastest Tactic: 0 Time: 0.03392
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,64,1) -> Float(1024,64,64:32,64,1) ***************
[04/18/2022-02:36:11] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912:0) (Reformat)
[04/18/2022-02:36:11] [V] [TRT] Tactic: 1002 Time: 0.095104
[04/18/2022-02:36:11] [V] [TRT] Tactic: 0 Time: 0.155008
[04/18/2022-02:36:11] [V] [TRT] Fastest Tactic: 1002 Time: 0.095104
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,1024,16) -> Float(16384,1024,64,64,1) ***************
[04/18/2022-02:36:11] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912:0) (Reformat)
[04/18/2022-02:36:11] [V] [TRT] Tactic: 1002 Time: 0.09408
[04/18/2022-02:36:11] [V] [TRT] Tactic: 0 Time: 0.030848
[04/18/2022-02:36:11] [V] [TRT] Fastest Tactic: 0 Time: 0.030848
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,1024,16) -> Float(4096,256,1:4,256,4) ***************
[04/18/2022-02:36:11] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912:0) (Reformat)
[04/18/2022-02:36:11] [V] [TRT] Tactic: 1002 Time: 0.093824
[04/18/2022-02:36:11] [V] [TRT] Tactic: 0 Time: 0.033664
[04/18/2022-02:36:11] [V] [TRT] Fastest Tactic: 0 Time: 0.033664
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,1024,16) -> Float(1024,64,64:32,64,1) ***************
[04/18/2022-02:36:11] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912:0) (Reformat)
[04/18/2022-02:36:11] [V] [TRT] Tactic: 1002 Time: 0.093056
[04/18/2022-02:36:11] [V] [TRT] Tactic: 0 Time: 0.159488
[04/18/2022-02:36:11] [V] [TRT] Fastest Tactic: 1002 Time: 0.093056
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,256,4) -> Float(16384,1024,64,64,1) ***************
[04/18/2022-02:36:11] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912:0) (Reformat)
[04/18/2022-02:36:11] [V] [TRT] Tactic: 1002 Time: 0.093696
[04/18/2022-02:36:11] [V] [TRT] Tactic: 0 Time: 0.033152
[04/18/2022-02:36:11] [V] [TRT] Fastest Tactic: 0 Time: 0.033152
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,256,4) -> Float(16384,1024,1,1024,16) ***************
[04/18/2022-02:36:11] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912:0) (Reformat)
[04/18/2022-02:36:11] [V] [TRT] Tactic: 1002 Time: 0.093056
[04/18/2022-02:36:11] [V] [TRT] Tactic: 0 Time: 0.034048
[04/18/2022-02:36:11] [V] [TRT] Fastest Tactic: 0 Time: 0.034048
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,256,4) -> Float(1024,64,64:32,64,1) ***************
[04/18/2022-02:36:11] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912:0) (Reformat)
[04/18/2022-02:36:11] [V] [TRT] Tactic: 1002 Time: 0.092288
[04/18/2022-02:36:11] [V] [TRT] Tactic: 0 Time: 0.160384
[04/18/2022-02:36:11] [V] [TRT] Fastest Tactic: 1002 Time: 0.092288
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,64,1) -> Float(16384,1024,64,64,1) ***************
[04/18/2022-02:36:11] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912:0) (Reformat)
[04/18/2022-02:36:11] [V] [TRT] Tactic: 1002 Time: 0.09472
[04/18/2022-02:36:11] [V] [TRT] Tactic: 0 Time: 0.036864
[04/18/2022-02:36:11] [V] [TRT] Fastest Tactic: 0 Time: 0.036864
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,64,1) -> Float(16384,1024,1,1024,16) ***************
[04/18/2022-02:36:11] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912:0) (Reformat)
[04/18/2022-02:36:11] [V] [TRT] Tactic: 1002 Time: 0.09024
[04/18/2022-02:36:11] [V] [TRT] Tactic: 0 Time: 0.03392
[04/18/2022-02:36:11] [V] [TRT] Fastest Tactic: 0 Time: 0.03392
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,64,1) -> Float(4096,256,1:4,256,4) ***************
[04/18/2022-02:36:11] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912:0) (Reformat)
[04/18/2022-02:36:11] [V] [TRT] Tactic: 1002 Time: 0.091264
[04/18/2022-02:36:11] [V] [TRT] Tactic: 0 Time: 0.03776
[04/18/2022-02:36:11] [V] [TRT] Fastest Tactic: 0 Time: 0.03776
[04/18/2022-02:36:11] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,64,1) -> Float(32768,2048,128,64,1) ***************
[04/18/2022-02:36:11] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912:0 copy (Reformat)
[04/18/2022-02:36:11] [V] [TRT] Tactic: 1002 Time: 0.732288
[04/18/2022-02:36:11] [V] [TRT] Tactic: 0 Time: 0.029696
[04/18/2022-02:36:11] [V] [TRT] Fastest Tactic: 0 Time: 0.029696
[04/18/2022-02:36:11] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,64,1) -> Float(32768,2048,1,1024,16) ***************
[04/18/2022-02:36:11] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912:0 copy (Reformat)
[04/18/2022-02:36:11] [V] [TRT] Tactic: 1002 Time: 0.03904
[04/18/2022-02:36:11] [V] [TRT] Tactic: 0 Time: 0.031488
[04/18/2022-02:36:11] [V] [TRT] Fastest Tactic: 0 Time: 0.031488
[04/18/2022-02:36:11] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,64,1) -> Float(8192,512,1:4,256,4) ***************
[04/18/2022-02:36:11] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912:0 copy (Reformat)
[04/18/2022-02:36:11] [V] [TRT] Tactic: 1002 Time: 0.03904
[04/18/2022-02:36:11] [V] [TRT] Tactic: 0 Time: 0.03392
[04/18/2022-02:36:11] [V] [TRT] Fastest Tactic: 0 Time: 0.03392
[04/18/2022-02:36:11] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,64,1) -> Float(2048,128,128:32,64,1) ***************
[04/18/2022-02:36:11] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912:0 copy (Reformat)
[04/18/2022-02:36:11] [V] [TRT] Tactic: 1002 Time: 0.038912
[04/18/2022-02:36:11] [V] [TRT] Tactic: 0 Time: 0.090752
[04/18/2022-02:36:11] [V] [TRT] Fastest Tactic: 1002 Time: 0.038912
[04/18/2022-02:36:11] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,1024,16) -> Float(32768,2048,128,64,1) ***************
[04/18/2022-02:36:11] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912:0 copy (Reformat)
[04/18/2022-02:36:11] [V] [TRT] Tactic: 1002 Time: 0.094464
[04/18/2022-02:36:11] [V] [TRT] Tactic: 0 Time: 0.030592
[04/18/2022-02:36:11] [V] [TRT] Fastest Tactic: 0 Time: 0.030592
[04/18/2022-02:36:11] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,1024,16) -> Float(32768,2048,1,1024,16) ***************
[04/18/2022-02:36:11] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912:0 copy (Reformat)
[04/18/2022-02:36:11] [V] [TRT] Tactic: 1002 Time: 0.032256
[04/18/2022-02:36:11] [V] [TRT] Tactic: 0 Time: 0.031488
[04/18/2022-02:36:11] [V] [TRT] Fastest Tactic: 0 Time: 0.031488
[04/18/2022-02:36:11] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,1024,16) -> Float(8192,512,1:4,256,4) ***************
[04/18/2022-02:36:11] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912:0 copy (Reformat)
[04/18/2022-02:36:11] [V] [TRT] Tactic: 1002 Time: 0.092032
[04/18/2022-02:36:11] [V] [TRT] Tactic: 0 Time: 0.034304
[04/18/2022-02:36:11] [V] [TRT] Fastest Tactic: 0 Time: 0.034304
[04/18/2022-02:36:11] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,1024,16) -> Float(2048,128,128:32,64,1) ***************
[04/18/2022-02:36:11] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912:0 copy (Reformat)
[04/18/2022-02:36:11] [V] [TRT] Tactic: 1002 Time: 0.092928
[04/18/2022-02:36:11] [V] [TRT] Tactic: 0 Time: 0.096512
[04/18/2022-02:36:11] [V] [TRT] Fastest Tactic: 1002 Time: 0.092928
[04/18/2022-02:36:11] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,256,4) -> Float(32768,2048,128,64,1) ***************
[04/18/2022-02:36:11] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912:0 copy (Reformat)
[04/18/2022-02:36:11] [V] [TRT] Tactic: 1002 Time: 0.094208
[04/18/2022-02:36:11] [V] [TRT] Tactic: 0 Time: 0.03328
[04/18/2022-02:36:11] [V] [TRT] Fastest Tactic: 0 Time: 0.03328
[04/18/2022-02:36:11] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,256,4) -> Float(32768,2048,1,1024,16) ***************
[04/18/2022-02:36:11] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912:0 copy (Reformat)
[04/18/2022-02:36:11] [V] [TRT] Tactic: 1002 Time: 0.093696
[04/18/2022-02:36:11] [V] [TRT] Tactic: 0 Time: 0.034048
[04/18/2022-02:36:11] [V] [TRT] Fastest Tactic: 0 Time: 0.034048
[04/18/2022-02:36:11] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,256,4) -> Float(8192,512,1:4,256,4) ***************
[04/18/2022-02:36:11] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912:0 copy (Reformat)
[04/18/2022-02:36:11] [V] [TRT] Tactic: 1002 Time: 0.092032
[04/18/2022-02:36:11] [V] [TRT] Tactic: 0 Time: 0.037504
[04/18/2022-02:36:11] [V] [TRT] Fastest Tactic: 0 Time: 0.037504
[04/18/2022-02:36:11] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,256,4) -> Float(2048,128,128:32,64,1) ***************
[04/18/2022-02:36:11] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912:0 copy (Reformat)
[04/18/2022-02:36:11] [V] [TRT] Tactic: 1002 Time: 0.093312
[04/18/2022-02:36:11] [V] [TRT] Tactic: 0 Time: 0.095232
[04/18/2022-02:36:11] [V] [TRT] Fastest Tactic: 1002 Time: 0.093312
[04/18/2022-02:36:11] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,64,1) -> Float(32768,2048,128,64,1) ***************
[04/18/2022-02:36:11] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912:0 copy (Reformat)
[04/18/2022-02:36:11] [V] [TRT] Tactic: 1002 Time: 0.094592
[04/18/2022-02:36:11] [V] [TRT] Tactic: 0 Time: 0.03712
[04/18/2022-02:36:11] [V] [TRT] Fastest Tactic: 0 Time: 0.03712
[04/18/2022-02:36:11] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,64,1) -> Float(32768,2048,1,1024,16) ***************
[04/18/2022-02:36:11] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912:0 copy (Reformat)
[04/18/2022-02:36:11] [V] [TRT] Tactic: 1002 Time: 0.09344
[04/18/2022-02:36:11] [V] [TRT] Tactic: 0 Time: 0.034048
[04/18/2022-02:36:11] [V] [TRT] Fastest Tactic: 0 Time: 0.034048
[04/18/2022-02:36:11] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,64,1) -> Float(8192,512,1:4,256,4) ***************
[04/18/2022-02:36:11] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912:0 copy (Reformat)
[04/18/2022-02:36:11] [V] [TRT] Tactic: 1002 Time: 0.0928
[04/18/2022-02:36:11] [V] [TRT] Tactic: 0 Time: 0.037632
[04/18/2022-02:36:11] [V] [TRT] Fastest Tactic: 0 Time: 0.037632
[04/18/2022-02:36:11] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:11] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,64,1) -> Float(2048,128,128:32,64,1) ***************
[04/18/2022-02:36:11] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912:0 copy (Reformat)
[04/18/2022-02:36:11] [V] [TRT] Tactic: 1002 Time: 0.0928
[04/18/2022-02:36:12] [V] [TRT] Tactic: 0 Time: 0.105088
[04/18/2022-02:36:12] [V] [TRT] Fastest Tactic: 1002 Time: 0.0928
[04/18/2022-02:36:12] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002
[04/18/2022-02:36:12] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:12] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,64,1) -> Float(32768,2048,128,64,1) ***************
[04/18/2022-02:36:12] [V] 
[TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912:0 copy (Reformat) [04/18/2022-02:36:12] [V] [TRT] Tactic: 1002 Time: 0.732928 [04/18/2022-02:36:12] [V] [TRT] Tactic: 0 Time: 0.03008 [04/18/2022-02:36:12] [V] [TRT] Fastest Tactic: 0 Time: 0.03008 [04/18/2022-02:36:12] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:12] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,64,1) -> Float(32768,2048,1,1024,16) *************** [04/18/2022-02:36:12] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912:0 copy (Reformat) [04/18/2022-02:36:12] [V] [TRT] Tactic: 1002 Time: 0.039168 [04/18/2022-02:36:12] [V] [TRT] Tactic: 0 Time: 0.031744 [04/18/2022-02:36:12] [V] [TRT] Fastest Tactic: 0 Time: 0.031744 [04/18/2022-02:36:12] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:12] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,64,1) -> Float(8192,512,1:4,256,4) *************** [04/18/2022-02:36:12] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912:0 copy (Reformat) [04/18/2022-02:36:12] [V] [TRT] Tactic: 1002 Time: 0.039296 [04/18/2022-02:36:12] [V] [TRT] Tactic: 0 Time: 0.03392 [04/18/2022-02:36:12] [V] [TRT] Fastest Tactic: 0 Time: 0.03392 [04/18/2022-02:36:12] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:12] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,64,1) -> Float(2048,128,128:32,64,1) *************** [04/18/2022-02:36:12] [V] [TRT] --------------- Timing Runner: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912:0 copy (Reformat) [04/18/2022-02:36:12] [V] [TRT] Tactic: 1002 Time: 0.094464 [04/18/2022-02:36:12] [V] [TRT] Tactic: 0 Time: 0.154496 [04/18/2022-02:36:12] [V] [TRT] Fastest Tactic: 1002 Time: 0.094464 [04/18/2022-02:36:12] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002 [04/18/2022-02:36:12] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,1024,16) -> Float(32768,2048,128,64,1) *************** [04/18/2022-02:36:12] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912:0 copy (Reformat) [04/18/2022-02:36:12] [V] [TRT] Tactic: 1002 Time: 0.09472 [04/18/2022-02:36:12] [V] [TRT] Tactic: 0 Time: 0.03072 [04/18/2022-02:36:12] [V] [TRT] Fastest Tactic: 0 Time: 0.03072 [04/18/2022-02:36:12] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:12] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,1024,16) -> Float(32768,2048,1,1024,16) *************** [04/18/2022-02:36:12] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912:0 copy (Reformat) [04/18/2022-02:36:12] [V] [TRT] Tactic: 1002 Time: 0.031616 [04/18/2022-02:36:12] [V] [TRT] Tactic: 0 Time: 0.031488 [04/18/2022-02:36:12] [V] [TRT] Fastest Tactic: 0 Time: 0.031488 [04/18/2022-02:36:12] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:12] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,1024,16) -> Float(8192,512,1:4,256,4) *************** [04/18/2022-02:36:12] [V] [TRT] --------------- Timing Runner: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912:0 copy (Reformat) [04/18/2022-02:36:12] [V] [TRT] Tactic: 1002 Time: 0.094592 [04/18/2022-02:36:12] [V] [TRT] Tactic: 0 Time: 0.033792 [04/18/2022-02:36:12] [V] [TRT] Fastest Tactic: 0 Time: 0.033792 [04/18/2022-02:36:12] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:12] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,1024,16) -> Float(2048,128,128:32,64,1) *************** [04/18/2022-02:36:12] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912:0 copy (Reformat) [04/18/2022-02:36:12] [V] [TRT] Tactic: 1002 Time: 0.0928 [04/18/2022-02:36:12] [V] [TRT] Tactic: 0 Time: 0.15872 [04/18/2022-02:36:12] [V] [TRT] Fastest Tactic: 1002 Time: 0.0928 [04/18/2022-02:36:12] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002 [04/18/2022-02:36:12] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,256,4) -> Float(32768,2048,128,64,1) *************** [04/18/2022-02:36:12] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912:0 copy (Reformat) [04/18/2022-02:36:12] [V] [TRT] Tactic: 1002 Time: 0.093824 [04/18/2022-02:36:12] [V] [TRT] Tactic: 0 Time: 0.033536 [04/18/2022-02:36:12] [V] [TRT] Fastest Tactic: 0 Time: 0.033536 [04/18/2022-02:36:12] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:12] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,256,4) -> Float(32768,2048,1,1024,16) *************** [04/18/2022-02:36:12] [V] [TRT] --------------- Timing Runner: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912:0 copy (Reformat) [04/18/2022-02:36:12] [V] [TRT] Tactic: 1002 Time: 0.092544 [04/18/2022-02:36:12] [V] [TRT] Tactic: 0 Time: 0.033792 [04/18/2022-02:36:12] [V] [TRT] Fastest Tactic: 0 Time: 0.033792 [04/18/2022-02:36:12] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:12] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,256,4) -> Float(8192,512,1:4,256,4) *************** [04/18/2022-02:36:12] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912:0 copy (Reformat) [04/18/2022-02:36:12] [V] [TRT] Tactic: 1002 Time: 0.090752 [04/18/2022-02:36:12] [V] [TRT] Tactic: 0 Time: 0.03776 [04/18/2022-02:36:12] [V] [TRT] Fastest Tactic: 0 Time: 0.03776 [04/18/2022-02:36:12] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:12] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,256,4) -> Float(2048,128,128:32,64,1) *************** [04/18/2022-02:36:12] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912:0 copy (Reformat) [04/18/2022-02:36:12] [V] [TRT] Tactic: 1002 Time: 0.093056 [04/18/2022-02:36:12] [V] [TRT] Tactic: 0 Time: 0.15872 [04/18/2022-02:36:12] [V] [TRT] Fastest Tactic: 1002 Time: 0.093056 [04/18/2022-02:36:12] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002 [04/18/2022-02:36:12] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,64,1) -> Float(32768,2048,128,64,1) *************** [04/18/2022-02:36:12] [V] [TRT] --------------- Timing Runner: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912:0 copy (Reformat) [04/18/2022-02:36:12] [V] [TRT] Tactic: 1002 Time: 0.094976 [04/18/2022-02:36:12] [V] [TRT] Tactic: 0 Time: 0.036608 [04/18/2022-02:36:12] [V] [TRT] Fastest Tactic: 0 Time: 0.036608 [04/18/2022-02:36:12] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:12] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,64,1) -> Float(32768,2048,1,1024,16) *************** [04/18/2022-02:36:12] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912:0 copy (Reformat) [04/18/2022-02:36:12] [V] [TRT] Tactic: 1002 Time: 0.092928 [04/18/2022-02:36:12] [V] [TRT] Tactic: 0 Time: 0.034176 [04/18/2022-02:36:12] [V] [TRT] Fastest Tactic: 0 Time: 0.034176 [04/18/2022-02:36:12] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:12] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,64,1) -> Float(8192,512,1:4,256,4) *************** [04/18/2022-02:36:12] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912:0 copy (Reformat) [04/18/2022-02:36:12] [V] [TRT] Tactic: 1002 Time: 0.092032 [04/18/2022-02:36:12] [V] [TRT] Tactic: 0 Time: 0.037632 [04/18/2022-02:36:12] [V] [TRT] Fastest Tactic: 0 Time: 0.037632 [04/18/2022-02:36:12] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:12] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,64,1) -> Float(2048,128,128:32,64,1) *************** [04/18/2022-02:36:12] [V] [TRT] --------------- Timing Runner: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__912:0 copy (Reformat) [04/18/2022-02:36:12] [V] [TRT] Tactic: 1002 Time: 0.09344 [04/18/2022-02:36:12] [V] [TRT] Tactic: 0 Time: 0.166016 [04/18/2022-02:36:12] [V] [TRT] Fastest Tactic: 1002 Time: 0.09344 [04/18/2022-02:36:12] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002 [04/18/2022-02:36:12] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:12] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,128,64,1) -> Float(32768,2048,1,1024,16) *************** [04/18/2022-02:36:12] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__914:0 -> ) (Reformat) [04/18/2022-02:36:12] [V] [TRT] Tactic: 1002 Time: 0.057856 [04/18/2022-02:36:12] [V] [TRT] Tactic: 0 Time: 0.048 [04/18/2022-02:36:12] [V] [TRT] Fastest Tactic: 0 Time: 0.048 [04/18/2022-02:36:12] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,128,64,1) -> Float(8192,512,1:4,256,4) *************** [04/18/2022-02:36:12] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__914:0 -> ) (Reformat) [04/18/2022-02:36:12] [V] [TRT] Tactic: 1002 Time: 0.058624 [04/18/2022-02:36:12] [V] [TRT] Tactic: 0 Time: 0.055424 [04/18/2022-02:36:12] [V] [TRT] Fastest Tactic: 0 Time: 0.055424 [04/18/2022-02:36:12] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,128,64,1) -> Float(2048,128,128:32,64,1) *************** [04/18/2022-02:36:12] [V] [TRT] --------------- Timing Runner: Optimizer 
Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__914:0 -> ) (Reformat) [04/18/2022-02:36:12] [V] [TRT] Tactic: 1002 Time: 0.142976 [04/18/2022-02:36:12] [V] [TRT] Tactic: 0 Time: 0.283008 [04/18/2022-02:36:12] [V] [TRT] Fastest Tactic: 1002 Time: 0.142976 [04/18/2022-02:36:12] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,1,1024,16) -> Float(32768,2048,128,64,1) *************** [04/18/2022-02:36:12] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__914:0 -> ) (Reformat) [04/18/2022-02:36:12] [V] [TRT] Tactic: 1002 Time: 0.147712 [04/18/2022-02:36:12] [V] [TRT] Tactic: 0 Time: 0.05568 [04/18/2022-02:36:12] [V] [TRT] Fastest Tactic: 0 Time: 0.05568 [04/18/2022-02:36:12] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,1,1024,16) -> Float(8192,512,1:4,256,4) *************** [04/18/2022-02:36:12] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__914:0 -> ) (Reformat) [04/18/2022-02:36:12] [V] [TRT] Tactic: 1002 Time: 0.140288 [04/18/2022-02:36:12] [V] [TRT] Tactic: 0 Time: 0.052864 [04/18/2022-02:36:12] [V] [TRT] Fastest Tactic: 0 Time: 0.052864 [04/18/2022-02:36:12] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,1,1024,16) -> Float(2048,128,128:32,64,1) *************** [04/18/2022-02:36:12] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__914:0 -> ) (Reformat) [04/18/2022-02:36:12] [V] [TRT] Tactic: 1002 Time: 0.141056 
[04/18/2022-02:36:12] [V] [TRT] Tactic: 0 Time: 0.299648 [04/18/2022-02:36:12] [V] [TRT] Fastest Tactic: 1002 Time: 0.141056 [04/18/2022-02:36:12] [V] [TRT] *************** Autotuning Reformat: Float(8192,512,1:4,256,4) -> Float(32768,2048,128,64,1) *************** [04/18/2022-02:36:12] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__914:0 -> ) (Reformat) [04/18/2022-02:36:12] [V] [TRT] Tactic: 1002 Time: 0.147456 [04/18/2022-02:36:12] [V] [TRT] Tactic: 0 Time: 0.053504 [04/18/2022-02:36:12] [V] [TRT] Fastest Tactic: 0 Time: 0.053504 [04/18/2022-02:36:12] [V] [TRT] *************** Autotuning Reformat: Float(8192,512,1:4,256,4) -> Float(32768,2048,1,1024,16) *************** [04/18/2022-02:36:12] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__914:0 -> ) (Reformat) [04/18/2022-02:36:12] [V] [TRT] Tactic: 1002 Time: 0.140928 [04/18/2022-02:36:12] [V] [TRT] Tactic: 0 Time: 0.056704 [04/18/2022-02:36:12] [V] [TRT] Fastest Tactic: 0 Time: 0.056704 [04/18/2022-02:36:12] [V] [TRT] *************** Autotuning Reformat: Float(8192,512,1:4,256,4) -> Float(2048,128,128:32,64,1) *************** [04/18/2022-02:36:12] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__914:0 -> ) (Reformat) [04/18/2022-02:36:12] [V] [TRT] Tactic: 1002 Time: 0.14208 [04/18/2022-02:36:12] [V] [TRT] Tactic: 0 Time: 0.300032 [04/18/2022-02:36:12] [V] [TRT] Fastest Tactic: 1002 Time: 0.14208 [04/18/2022-02:36:12] [V] [TRT] *************** Autotuning Reformat: Float(2048,128,128:32,64,1) -> 
Float(32768,2048,128,64,1) *************** [04/18/2022-02:36:12] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__914:0 -> ) (Reformat) [04/18/2022-02:36:12] [V] [TRT] Tactic: 1002 Time: 0.146944 [04/18/2022-02:36:12] [V] [TRT] Tactic: 0 Time: 0.064128 [04/18/2022-02:36:12] [V] [TRT] Fastest Tactic: 0 Time: 0.064128 [04/18/2022-02:36:12] [V] [TRT] *************** Autotuning Reformat: Float(2048,128,128:32,64,1) -> Float(32768,2048,1,1024,16) *************** [04/18/2022-02:36:12] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__914:0 -> ) (Reformat) [04/18/2022-02:36:12] [V] [TRT] Tactic: 1002 Time: 0.14016 [04/18/2022-02:36:12] [V] [TRT] Tactic: 0 Time: 0.0544 [04/18/2022-02:36:12] [V] [TRT] Fastest Tactic: 0 Time: 0.0544 [04/18/2022-02:36:12] [V] [TRT] *************** Autotuning Reformat: Float(2048,128,128:32,64,1) -> Float(8192,512,1:4,256,4) *************** [04/18/2022-02:36:12] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__914:0 -> ) (Reformat) [04/18/2022-02:36:12] [V] [TRT] Tactic: 1002 Time: 0.140672 [04/18/2022-02:36:12] [V] [TRT] Tactic: 0 Time: 0.060928 [04/18/2022-02:36:12] [V] [TRT] Fastest Tactic: 0 Time: 0.060928 [04/18/2022-02:36:12] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:12] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,128,64,1) -> Float(32768,2048,2048,1,1024,16) *************** [04/18/2022-02:36:12] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916:0) (Reformat) [04/18/2022-02:36:12] [V] [TRT] Tactic: 1002 Time: 0.05824 [04/18/2022-02:36:12] [V] [TRT] Tactic: 0 Time: 0.050304 [04/18/2022-02:36:12] [V] [TRT] Fastest Tactic: 0 Time: 0.050304 [04/18/2022-02:36:12] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,128,64,1) -> Float(8192,512,512,1:4,256,4) *************** [04/18/2022-02:36:12] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916:0) (Reformat) [04/18/2022-02:36:12] [V] [TRT] Tactic: 1002 Time: 0.06016 [04/18/2022-02:36:12] [V] [TRT] Tactic: 0 Time: 0.054784 [04/18/2022-02:36:12] [V] [TRT] Fastest Tactic: 0 Time: 0.054784 [04/18/2022-02:36:12] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,128,64,1) -> Float(2048,128,128,128:32,64,1) *************** [04/18/2022-02:36:12] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916:0) (Reformat) [04/18/2022-02:36:12] [V] [TRT] Tactic: 1002 Time: 0.148608 [04/18/2022-02:36:12] [V] [TRT] Tactic: 0 Time: 0.281856 [04/18/2022-02:36:12] [V] [TRT] Fastest Tactic: 1002 Time: 0.148608 [04/18/2022-02:36:12] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,1,1024,16) -> Float(32768,2048,2048,128,64,1) *************** [04/18/2022-02:36:12] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916:0) (Reformat) [04/18/2022-02:36:12] [V] 
[TRT] Tactic: 1002 Time: 0.14272 [04/18/2022-02:36:12] [V] [TRT] Tactic: 0 Time: 0.055296 [04/18/2022-02:36:12] [V] [TRT] Fastest Tactic: 0 Time: 0.055296 [04/18/2022-02:36:12] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,1,1024,16) -> Float(8192,512,512,1:4,256,4) *************** [04/18/2022-02:36:12] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916:0) (Reformat) [04/18/2022-02:36:12] [V] [TRT] Tactic: 1002 Time: 0.140928 [04/18/2022-02:36:12] [V] [TRT] Tactic: 0 Time: 0.06464 [04/18/2022-02:36:12] [V] [TRT] Fastest Tactic: 0 Time: 0.06464 [04/18/2022-02:36:12] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,1,1024,16) -> Float(2048,128,128,128:32,64,1) *************** [04/18/2022-02:36:12] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916:0) (Reformat) [04/18/2022-02:36:12] [V] [TRT] Tactic: 1002 Time: 0.14208 [04/18/2022-02:36:12] [V] [TRT] Tactic: 0 Time: 0.299904 [04/18/2022-02:36:12] [V] [TRT] Fastest Tactic: 1002 Time: 0.14208 [04/18/2022-02:36:12] [V] [TRT] *************** Autotuning Reformat: Float(8192,512,512,1:4,256,4) -> Float(32768,2048,2048,128,64,1) *************** [04/18/2022-02:36:12] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916:0) (Reformat) [04/18/2022-02:36:12] [V] [TRT] Tactic: 1002 Time: 0.142848 [04/18/2022-02:36:12] [V] [TRT] Tactic: 0 Time: 0.054016 [04/18/2022-02:36:12] [V] [TRT] Fastest Tactic: 0 Time: 0.054016 [04/18/2022-02:36:12] [V] [TRT] *************** 
Autotuning Reformat: Float(8192,512,512,1:4,256,4) -> Float(32768,2048,2048,1,1024,16) *************** [04/18/2022-02:36:12] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916:0) (Reformat) [04/18/2022-02:36:12] [V] [TRT] Tactic: 1002 Time: 0.140288 [04/18/2022-02:36:12] [V] [TRT] Tactic: 0 Time: 0.052992 [04/18/2022-02:36:12] [V] [TRT] Fastest Tactic: 0 Time: 0.052992 [04/18/2022-02:36:12] [V] [TRT] *************** Autotuning Reformat: Float(8192,512,512,1:4,256,4) -> Float(2048,128,128,128:32,64,1) *************** [04/18/2022-02:36:12] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916:0) (Reformat) [04/18/2022-02:36:12] [V] [TRT] Tactic: 1002 Time: 0.14144 [04/18/2022-02:36:12] [V] [TRT] Tactic: 0 Time: 0.300672 [04/18/2022-02:36:12] [V] [TRT] Fastest Tactic: 1002 Time: 0.14144 [04/18/2022-02:36:12] [V] [TRT] *************** Autotuning Reformat: Float(2048,128,128,128:32,64,1) -> Float(32768,2048,2048,128,64,1) *************** [04/18/2022-02:36:12] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916:0) (Reformat) [04/18/2022-02:36:12] [V] [TRT] Tactic: 1002 Time: 0.143616 [04/18/2022-02:36:12] [V] [TRT] Tactic: 0 Time: 0.063488 [04/18/2022-02:36:12] [V] [TRT] Fastest Tactic: 0 Time: 0.063488 [04/18/2022-02:36:12] [V] [TRT] *************** Autotuning Reformat: Float(2048,128,128,128:32,64,1) -> Float(32768,2048,2048,1,1024,16) *************** [04/18/2022-02:36:12] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916:0) (Reformat) [04/18/2022-02:36:12] [V] [TRT] Tactic: 1002 Time: 0.140416 [04/18/2022-02:36:12] [V] [TRT] Tactic: 0 Time: 0.055808 [04/18/2022-02:36:12] [V] [TRT] Fastest Tactic: 0 Time: 0.055808 [04/18/2022-02:36:12] [V] [TRT] *************** Autotuning Reformat: Float(2048,128,128,128:32,64,1) -> Float(8192,512,512,1:4,256,4) *************** [04/18/2022-02:36:12] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916:0) (Reformat) [04/18/2022-02:36:12] [V] [TRT] Tactic: 1002 Time: 0.14016 [04/18/2022-02:36:12] [V] [TRT] Tactic: 0 Time: 0.06016 [04/18/2022-02:36:12] [V] [TRT] Fastest Tactic: 0 Time: 0.06016 [04/18/2022-02:36:12] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:12] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,128,64,1) -> Float(65536,4096,2048,128,64,1) *************** [04/18/2022-02:36:12] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916:0 copy (Reformat) [04/18/2022-02:36:12] [V] [TRT] Tactic: 1002 Time: 2.59213 [04/18/2022-02:36:12] [V] [TRT] Tactic: 0 Time: 0.053632 [04/18/2022-02:36:12] [V] [TRT] Fastest Tactic: 0 Time: 0.053632 [04/18/2022-02:36:12] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:12] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,128,64,1) -> Float(65536,4096,2048,1,1024,16) *************** [04/18/2022-02:36:12] [V] [TRT] --------------- Timing Runner: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916:0 copy (Reformat)
[04/18/2022-02:36:12] [V] [TRT] Tactic: 1002 Time: 1.264
[04/18/2022-02:36:12] [V] [TRT] Tactic: 0 Time: 0.056832
[04/18/2022-02:36:12] [V] [TRT] Fastest Tactic: 0 Time: 0.056832
[04/18/2022-02:36:12] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:12] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,128,64,1) -> Float(16384,1024,512,1:4,256,4) ***************
[04/18/2022-02:36:12] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916:0 copy (Reformat)
[04/18/2022-02:36:12] [V] [TRT] Tactic: 1002 Time: 0.762368
[04/18/2022-02:36:12] [V] [TRT] Tactic: 0 Time: 0.059648
[04/18/2022-02:36:12] [V] [TRT] Fastest Tactic: 0 Time: 0.059648
[04/18/2022-02:36:12] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:12] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,128,64,1) -> Float(4096,256,128,128:32,64,1) ***************
[04/18/2022-02:36:12] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916:0 copy (Reformat)
[04/18/2022-02:36:13] [V] [TRT] Tactic: 1002 Time: 0.763264
[04/18/2022-02:36:13] [V] [TRT] Tactic: 0 Time: 0.155776
[04/18/2022-02:36:13] [V] [TRT] Fastest Tactic: 0 Time: 0.155776
[04/18/2022-02:36:13] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:13] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,1,1024,16) -> Float(65536,4096,2048,128,64,1) ***************
[04/18/2022-02:36:13] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916:0 copy (Reformat)
[04/18/2022-02:36:13] [V] [TRT] Tactic: 1002 Time: 0.7456
[04/18/2022-02:36:13] [V] [TRT] Tactic: 0 Time: 0.065792
[04/18/2022-02:36:13] [V] [TRT] Fastest Tactic: 0 Time: 0.065792
[04/18/2022-02:36:13] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:13] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,1,1024,16) -> Float(65536,4096,2048,1,1024,16) ***************
[04/18/2022-02:36:13] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916:0 copy (Reformat)
[04/18/2022-02:36:13] [V] [TRT] Tactic: 1002 Time: 6.82291
[04/18/2022-02:36:13] [V] [TRT] Tactic: 0 Time: 0.058112
[04/18/2022-02:36:13] [V] [TRT] Fastest Tactic: 0 Time: 0.058112
[04/18/2022-02:36:13] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:13] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,1,1024,16) -> Float(16384,1024,512,1:4,256,4) ***************
[04/18/2022-02:36:13] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916:0 copy (Reformat)
[04/18/2022-02:36:13] [V] [TRT] Tactic: 1002 Time: 6.23885
[04/18/2022-02:36:13] [V] [TRT] Tactic: 0 Time: 0.06272
[04/18/2022-02:36:13] [V] [TRT] Fastest Tactic: 0 Time: 0.06272
[04/18/2022-02:36:13] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:13] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,1,1024,16) -> Float(4096,256,128,128:32,64,1) ***************
[04/18/2022-02:36:13] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916:0 copy (Reformat)
[04/18/2022-02:36:13] [V] [TRT] Tactic: 1002 Time: 6.23834
[04/18/2022-02:36:13] [V] [TRT] Tactic: 0 Time: 0.17536
[04/18/2022-02:36:13] [V] [TRT] Fastest Tactic: 0 Time: 0.17536
[04/18/2022-02:36:13] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:13] [V] [TRT] *************** Autotuning Reformat: Float(8192,512,512,1:4,256,4) -> Float(65536,4096,2048,128,64,1) ***************
[04/18/2022-02:36:13] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916:0 copy (Reformat)
[04/18/2022-02:36:13] [V] [TRT] Tactic: 1002 Time: 0.755968
[04/18/2022-02:36:13] [V] [TRT] Tactic: 0 Time: 0.061184
[04/18/2022-02:36:13] [V] [TRT] Fastest Tactic: 0 Time: 0.061184
[04/18/2022-02:36:13] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:13] [V] [TRT] *************** Autotuning Reformat: Float(8192,512,512,1:4,256,4) -> Float(65536,4096,2048,1,1024,16) ***************
[04/18/2022-02:36:13] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916:0 copy (Reformat)
[04/18/2022-02:36:13] [V] [TRT] Tactic: 1002 Time: 6.25216
[04/18/2022-02:36:13] [V] [TRT] Tactic: 0 Time: 0.06336
[04/18/2022-02:36:13] [V] [TRT] Fastest Tactic: 0 Time: 0.06336
[04/18/2022-02:36:13] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:13] [V] [TRT] *************** Autotuning Reformat: Float(8192,512,512,1:4,256,4) -> Float(16384,1024,512,1:4,256,4) ***************
[04/18/2022-02:36:13] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916:0 copy (Reformat)
[04/18/2022-02:36:13] [V] [TRT] Tactic: 1002 Time: 6.24998
[04/18/2022-02:36:13] [V] [TRT] Tactic: 0 Time: 0.070144
[04/18/2022-02:36:13] [V] [TRT] Fastest Tactic: 0 Time: 0.070144
[04/18/2022-02:36:13] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:13] [V] [TRT] *************** Autotuning Reformat: Float(8192,512,512,1:4,256,4) -> Float(4096,256,128,128:32,64,1) ***************
[04/18/2022-02:36:13] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916:0 copy (Reformat)
[04/18/2022-02:36:13] [V] [TRT] Tactic: 1002 Time: 6.85619
[04/18/2022-02:36:13] [V] [TRT] Tactic: 0 Time: 0.175232
[04/18/2022-02:36:13] [V] [TRT] Fastest Tactic: 0 Time: 0.175232
[04/18/2022-02:36:13] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:13] [V] [TRT] *************** Autotuning Reformat: Float(2048,128,128,128:32,64,1) -> Float(65536,4096,2048,128,64,1) ***************
[04/18/2022-02:36:13] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916:0 copy (Reformat)
[04/18/2022-02:36:13] [V] [TRT] Tactic: 1002 Time: 0.757376
[04/18/2022-02:36:13] [V] [TRT] Tactic: 0 Time: 0.068352
[04/18/2022-02:36:13] [V] [TRT] Fastest Tactic: 0 Time: 0.068352
[04/18/2022-02:36:13] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:13] [V] [TRT] *************** Autotuning Reformat: Float(2048,128,128,128:32,64,1) -> Float(65536,4096,2048,1,1024,16) ***************
[04/18/2022-02:36:13] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916:0 copy (Reformat)
[04/18/2022-02:36:14] [V] [TRT] Tactic: 1002 Time: 6.85299
[04/18/2022-02:36:14] [V] [TRT] Tactic: 0 Time: 0.06208
[04/18/2022-02:36:14] [V] [TRT] Fastest Tactic: 0 Time: 0.06208
[04/18/2022-02:36:14] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:14] [V] [TRT] *************** Autotuning Reformat: Float(2048,128,128,128:32,64,1) -> Float(16384,1024,512,1:4,256,4) ***************
[04/18/2022-02:36:14] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916:0 copy (Reformat)
[04/18/2022-02:36:14] [V] [TRT] Tactic: 1002 Time: 6.25037
[04/18/2022-02:36:14] [V] [TRT] Tactic: 0 Time: 0.068992
[04/18/2022-02:36:14] [V] [TRT] Fastest Tactic: 0 Time: 0.068992
[04/18/2022-02:36:14] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:14] [V] [TRT] *************** Autotuning Reformat: Float(2048,128,128,128:32,64,1) -> Float(4096,256,128,128:32,64,1) ***************
[04/18/2022-02:36:14] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916:0 copy (Reformat)
[04/18/2022-02:36:14] [V] [TRT] Tactic: 1002 Time: 6.25459
[04/18/2022-02:36:14] [V] [TRT] Tactic: 0 Time: 0.197504
[04/18/2022-02:36:14] [V] [TRT] Fastest Tactic: 0 Time: 0.197504
[04/18/2022-02:36:14] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:14] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:14] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,128,64,1) -> Float(65536,4096,2048,128,64,1) ***************
[04/18/2022-02:36:14] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916:0 copy (Reformat)
[04/18/2022-02:36:14] [V] [TRT] Tactic: 1002 Time: 1.3568
[04/18/2022-02:36:14] [V] [TRT] Tactic: 0 Time: 0.05312
[04/18/2022-02:36:14] [V] [TRT] Fastest Tactic: 0 Time: 0.05312
[04/18/2022-02:36:14] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:14] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,128,64,1) -> Float(65536,4096,2048,1,1024,16) ***************
[04/18/2022-02:36:14] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916:0 copy (Reformat)
[04/18/2022-02:36:14] [V] [TRT] Tactic: 1002 Time: 0.75776
[04/18/2022-02:36:14] [V] [TRT] Tactic: 0 Time: 0.056704
[04/18/2022-02:36:14] [V] [TRT] Fastest Tactic: 0 Time: 0.056704
[04/18/2022-02:36:14] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:14] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,128,64,1) -> Float(16384,1024,512,1:4,256,4) ***************
[04/18/2022-02:36:14] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916:0 copy (Reformat)
[04/18/2022-02:36:14] [V] [TRT] Tactic: 1002 Time: 0.795648
[04/18/2022-02:36:14] [V] [TRT] Tactic: 0 Time: 0.061824
[04/18/2022-02:36:14] [V] [TRT] Fastest Tactic: 0 Time: 0.061824
[04/18/2022-02:36:14] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:14] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,128,64,1) -> Float(4096,256,128,128:32,64,1) ***************
[04/18/2022-02:36:14] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916:0 copy (Reformat)
[04/18/2022-02:36:14] [V] [TRT] Tactic: 1002 Time: 7.15264
[04/18/2022-02:36:14] [V] [TRT] Tactic: 0 Time: 0.2848
[04/18/2022-02:36:14] [V] [TRT] Fastest Tactic: 0 Time: 0.2848
[04/18/2022-02:36:14] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:14] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,1,1024,16) -> Float(65536,4096,2048,128,64,1) ***************
[04/18/2022-02:36:14] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916:0 copy (Reformat)
[04/18/2022-02:36:14] [V] [TRT] Tactic: 1002 Time: 0.746496
[04/18/2022-02:36:14] [V] [TRT] Tactic: 0 Time: 0.056192
[04/18/2022-02:36:14] [V] [TRT] Fastest Tactic: 0 Time: 0.056192
[04/18/2022-02:36:14] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:14] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,1,1024,16) -> Float(65536,4096,2048,1,1024,16) ***************
[04/18/2022-02:36:14] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916:0 copy (Reformat)
[04/18/2022-02:36:14] [V] [TRT] Tactic: 1002 Time: 6.8297
[04/18/2022-02:36:14] [V] [TRT] Tactic: 0 Time: 0.057472
[04/18/2022-02:36:14] [V] [TRT] Fastest Tactic: 0 Time: 0.057472
[04/18/2022-02:36:14] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:14] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,1,1024,16) -> Float(16384,1024,512,1:4,256,4) ***************
[04/18/2022-02:36:14] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916:0 copy (Reformat)
[04/18/2022-02:36:14] [V] [TRT] Tactic: 1002 Time: 6.23821
[04/18/2022-02:36:14] [V] [TRT] Tactic: 0 Time: 0.060416
[04/18/2022-02:36:14] [V] [TRT] Fastest Tactic: 0 Time: 0.060416
[04/18/2022-02:36:14] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:14] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,1,1024,16) -> Float(4096,256,128,128:32,64,1) ***************
[04/18/2022-02:36:14] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916:0 copy (Reformat)
[04/18/2022-02:36:14] [V] [TRT] Tactic: 1002 Time: 6.23898
[04/18/2022-02:36:15] [V] [TRT] Tactic: 0 Time: 0.301568
[04/18/2022-02:36:15] [V] [TRT] Fastest Tactic: 0 Time: 0.301568
[04/18/2022-02:36:15] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:15] [V] [TRT] *************** Autotuning Reformat: Float(8192,512,512,1:4,256,4) -> Float(65536,4096,2048,128,64,1) ***************
[04/18/2022-02:36:15] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916:0 copy (Reformat)
[04/18/2022-02:36:15] [V] [TRT] Tactic: 1002 Time: 0.859264
[04/18/2022-02:36:15] [V] [TRT] Tactic: 0 Time: 0.063104
[04/18/2022-02:36:15] [V] [TRT] Fastest Tactic: 0 Time: 0.063104
[04/18/2022-02:36:15] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:15] [V] [TRT] *************** Autotuning Reformat: Float(8192,512,512,1:4,256,4) -> Float(65536,4096,2048,1,1024,16) ***************
[04/18/2022-02:36:15] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916:0 copy (Reformat)
[04/18/2022-02:36:15] [V] [TRT] Tactic: 1002 Time: 6.25126
[04/18/2022-02:36:15] [V] [TRT] Tactic: 0 Time: 0.06144
[04/18/2022-02:36:15] [V] [TRT] Fastest Tactic: 0 Time: 0.06144
[04/18/2022-02:36:15] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:15] [V] [TRT] *************** Autotuning Reformat: Float(8192,512,512,1:4,256,4) -> Float(16384,1024,512,1:4,256,4) ***************
[04/18/2022-02:36:15] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916:0 copy (Reformat)
[04/18/2022-02:36:15] [V] [TRT] Tactic: 1002 Time: 6.25088
[04/18/2022-02:36:15] [V] [TRT] Tactic: 0 Time: 0.070528
[04/18/2022-02:36:15] [V] [TRT] Fastest Tactic: 0 Time: 0.070528
[04/18/2022-02:36:15] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:15] [V] [TRT] *************** Autotuning Reformat: Float(8192,512,512,1:4,256,4) -> Float(4096,256,128,128:32,64,1) ***************
[04/18/2022-02:36:15] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916:0 copy (Reformat)
[04/18/2022-02:36:15] [V] [TRT] Tactic: 1002 Time: 6.81523
[04/18/2022-02:36:15] [V] [TRT] Tactic: 0 Time: 0.305152
[04/18/2022-02:36:15] [V] [TRT] Fastest Tactic: 0 Time: 0.305152
[04/18/2022-02:36:15] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:15] [V] [TRT] *************** Autotuning Reformat: Float(2048,128,128,128:32,64,1) -> Float(65536,4096,2048,128,64,1) ***************
[04/18/2022-02:36:15] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916:0 copy (Reformat)
[04/18/2022-02:36:15] [V] [TRT] Tactic: 1002 Time: 0.756608
[04/18/2022-02:36:15] [V] [TRT] Tactic: 0 Time: 0.06656
[04/18/2022-02:36:15] [V] [TRT] Fastest Tactic: 0 Time: 0.06656
[04/18/2022-02:36:15] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:15] [V] [TRT] *************** Autotuning Reformat: Float(2048,128,128,128:32,64,1) -> Float(65536,4096,2048,1,1024,16) ***************
[04/18/2022-02:36:15] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916:0 copy (Reformat)
[04/18/2022-02:36:15] [V] [TRT] Tactic: 1002 Time: 6.83738
[04/18/2022-02:36:15] [V] [TRT] Tactic: 0 Time: 0.06208
[04/18/2022-02:36:15] [V] [TRT] Fastest Tactic: 0 Time: 0.06208
[04/18/2022-02:36:15] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:15] [V] [TRT] *************** Autotuning Reformat: Float(2048,128,128,128:32,64,1) -> Float(16384,1024,512,1:4,256,4) ***************
[04/18/2022-02:36:15] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916:0 copy (Reformat)
[04/18/2022-02:36:15] [V] [TRT] Tactic: 1002 Time: 6.78682
[04/18/2022-02:36:15] [V] [TRT] Tactic: 0 Time: 0.06912
[04/18/2022-02:36:15] [V] [TRT] Fastest Tactic: 0 Time: 0.06912
[04/18/2022-02:36:15] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:15] [V] [TRT] *************** Autotuning Reformat: Float(2048,128,128,128:32,64,1) -> Float(4096,256,128,128:32,64,1) ***************
[04/18/2022-02:36:15] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__916:0 copy (Reformat)
[04/18/2022-02:36:15] [V] [TRT] Tactic: 1002 Time: 6.2528
[04/18/2022-02:36:15] [V] [TRT] Tactic: 0 Time: 0.322944
[04/18/2022-02:36:15] [V] [TRT] Fastest Tactic: 0 Time: 0.322944
[04/18/2022-02:36:15] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:15] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:15] [V] [TRT] *************** Autotuning Reformat: Float(65536,4096,2048,128,64,1) -> Float(65536,4096,2048,1,1024,16) ***************
[04/18/2022-02:36:15] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__917:0 -> ) (Reformat)
[04/18/2022-02:36:15] [V] [TRT] Tactic: 1002 Time: 0.109824
[04/18/2022-02:36:15] [V] [TRT] Tactic: 0 Time: 0.094848
[04/18/2022-02:36:15] [V] [TRT] Fastest Tactic: 0 Time: 0.094848
[04/18/2022-02:36:15] [V] [TRT] *************** Autotuning Reformat: Float(65536,4096,2048,128,64,1) -> Float(16384,1024,512,1:4,256,4) ***************
[04/18/2022-02:36:15] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__917:0 -> ) (Reformat)
[04/18/2022-02:36:15] [V] [TRT] Tactic: 1002 Time: 0.102528
[04/18/2022-02:36:15] [V] [TRT] Tactic: 0 Time: 0.097664
[04/18/2022-02:36:15] [V] [TRT] Fastest Tactic: 0 Time: 0.097664
[04/18/2022-02:36:15] [V] [TRT] *************** Autotuning Reformat: Float(65536,4096,2048,128,64,1) -> Float(4096,256,128,128:32,64,1) ***************
[04/18/2022-02:36:15] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__917:0 -> ) (Reformat)
[04/18/2022-02:36:15] [V] [TRT] Tactic: 1002 Time: 0.413184
[04/18/2022-02:36:15] [V] [TRT] Tactic: 0 Time: 0.949632
[04/18/2022-02:36:15] [V] [TRT] Fastest Tactic: 1002 Time: 0.413184
[04/18/2022-02:36:15] [V] [TRT] *************** Autotuning Reformat: Float(65536,4096,2048,1,1024,16) -> Float(65536,4096,2048,128,64,1) ***************
[04/18/2022-02:36:15] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__917:0 -> ) (Reformat)
[04/18/2022-02:36:15] [V] [TRT] Tactic: 1002 Time: 0.242432
[04/18/2022-02:36:15] [V] [TRT] Tactic: 0 Time: 0.088064
[04/18/2022-02:36:15] [V] [TRT] Fastest Tactic: 0 Time: 0.088064
[04/18/2022-02:36:15] [V] [TRT] *************** Autotuning Reformat: Float(65536,4096,2048,1,1024,16) -> Float(16384,1024,512,1:4,256,4) ***************
[04/18/2022-02:36:15] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__917:0 -> ) (Reformat)
[04/18/2022-02:36:16] [V] [TRT] Tactic: 1002 Time: 0.237696
[04/18/2022-02:36:16] [V] [TRT] Tactic: 0 Time: 0.091648
[04/18/2022-02:36:16] [V] [TRT] Fastest Tactic: 0 Time: 0.091648
[04/18/2022-02:36:16] [V] [TRT] *************** Autotuning Reformat: Float(65536,4096,2048,1,1024,16) -> Float(4096,256,128,128:32,64,1) ***************
[04/18/2022-02:36:16] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__917:0 -> ) (Reformat)
[04/18/2022-02:36:16] [V] [TRT] Tactic: 1002 Time: 0.511232
[04/18/2022-02:36:16] [V] [TRT] Tactic: 0 Time: 0.580608
[04/18/2022-02:36:16] [V] [TRT] Fastest Tactic: 1002 Time: 0.511232
[04/18/2022-02:36:16] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,512,1:4,256,4) -> Float(65536,4096,2048,128,64,1) ***************
[04/18/2022-02:36:16] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__917:0 -> ) (Reformat)
[04/18/2022-02:36:16] [V] [TRT] Tactic: 1002 Time: 0.24256
[04/18/2022-02:36:16] [V] [TRT] Tactic: 0 Time: 0.09024
[04/18/2022-02:36:16] [V] [TRT] Fastest Tactic: 0 Time: 0.09024
[04/18/2022-02:36:16] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,512,1:4,256,4) -> Float(65536,4096,2048,1,1024,16) ***************
[04/18/2022-02:36:16] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__917:0 -> ) (Reformat)
[04/18/2022-02:36:16] [V] [TRT] Tactic: 1002 Time: 0.237696
[04/18/2022-02:36:16] [V] [TRT] Tactic: 0 Time: 0.095488
[04/18/2022-02:36:16] [V] [TRT] Fastest Tactic: 0 Time: 0.095488
[04/18/2022-02:36:16] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,512,1:4,256,4) -> Float(4096,256,128,128:32,64,1) ***************
[04/18/2022-02:36:16] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__917:0 -> ) (Reformat)
[04/18/2022-02:36:16] [V] [TRT] Tactic: 1002 Time: 0.423168
[04/18/2022-02:36:16] [V] [TRT] Tactic: 0 Time: 0.585856
[04/18/2022-02:36:16] [V] [TRT] Fastest Tactic: 1002 Time: 0.423168
[04/18/2022-02:36:16] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,128,128:32,64,1) -> Float(65536,4096,2048,128,64,1) ***************
[04/18/2022-02:36:16] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__917:0 -> ) (Reformat)
[04/18/2022-02:36:16] [V] [TRT] Tactic: 1002 Time: 0.330752
[04/18/2022-02:36:16] [V] [TRT] Tactic: 0 Time: 0.300288
[04/18/2022-02:36:16] [V] [TRT] Fastest Tactic: 0 Time: 0.300288
[04/18/2022-02:36:16] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,128,128:32,64,1) -> Float(65536,4096,2048,1,1024,16) ***************
[04/18/2022-02:36:16] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__917:0 -> ) (Reformat)
[04/18/2022-02:36:16] [V] [TRT] Tactic: 1002 Time: 0.337408
[04/18/2022-02:36:16] [V] [TRT] Tactic: 0 Time: 0.281984
[04/18/2022-02:36:16] [V] [TRT] Fastest Tactic: 0 Time: 0.281984
[04/18/2022-02:36:16] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,128,128:32,64,1) -> Float(16384,1024,512,1:4,256,4) ***************
[04/18/2022-02:36:16] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/input_1_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__917:0 -> ) (Reformat)
[04/18/2022-02:36:16] [V] [TRT] Tactic: 1002 Time: 0.348544
[04/18/2022-02:36:16] [V] [TRT] Tactic: 0 Time: 0.281216
[04/18/2022-02:36:16] [V] [TRT] Fastest Tactic: 0 Time: 0.281216
[04/18/2022-02:36:16] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:16] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(131072,4096,128,2,1) ***************
[04/18/2022-02:36:16] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/stack_Unsqueeze__943:0 copy (Reformat)
[04/18/2022-02:36:16] [V] [TRT] Tactic: 1002 Time: 2.8457
[04/18/2022-02:36:16] [V] [TRT] Tactic: 0 Time: 0.470144
[04/18/2022-02:36:16] [V] [TRT] Fastest Tactic: 0 Time: 0.470144
[04/18/2022-02:36:16] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:16] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,32,32) -> Float(131072,4096,128,2,1) ***************
[04/18/2022-02:36:16] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/stack_Unsqueeze__943:0 copy (Reformat)
[04/18/2022-02:36:16] [V] [TRT] Tactic: 1002 Time: 0.698112
[04/18/2022-02:36:16] [V] [TRT] Tactic: 0 Time: 0.718208
[04/18/2022-02:36:16] [V] [TRT] Fastest Tactic: 1002 Time: 0.698112
[04/18/2022-02:36:16] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002
[04/18/2022-02:36:16] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,8,8) -> Float(131072,4096,128,2,1) ***************
[04/18/2022-02:36:16] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/stack_Unsqueeze__943:0 copy (Reformat)
[04/18/2022-02:36:16] [V] [TRT] Tactic: 1002 Time: 0.698624
[04/18/2022-02:36:16] [V] [TRT] Tactic: 0 Time: 0.424192
[04/18/2022-02:36:16] [V] [TRT] Fastest Tactic: 0 Time: 0.424192
[04/18/2022-02:36:16] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:16] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,1,1) -> Float(131072,4096,128,2,1) ***************
[04/18/2022-02:36:16] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/stack_Unsqueeze__943:0 copy (Reformat)
[04/18/2022-02:36:16] [V] [TRT] Tactic: 1002 Time: 1.22752
[04/18/2022-02:36:16] [V] [TRT] Tactic: 0 Time: 0.483328
[04/18/2022-02:36:16] [V] [TRT] Fastest Tactic: 0 Time: 0.483328
[04/18/2022-02:36:16] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:16] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:16] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(131072,4096,128,2,1) ***************
[04/18/2022-02:36:16] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/stack_Unsqueeze__944:0 copy (Reformat)
[04/18/2022-02:36:16] [V] [TRT] Tactic: 1002 Time: 2.848
[04/18/2022-02:36:16] [V] [TRT] Tactic: 0 Time: 0.422912
[04/18/2022-02:36:16] [V] [TRT] Fastest Tactic: 0 Time: 0.422912
[04/18/2022-02:36:16] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:16] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,32,32) -> Float(131072,4096,128,2,1) ***************
[04/18/2022-02:36:16] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/stack_Unsqueeze__944:0 copy (Reformat)
[04/18/2022-02:36:16] [V] [TRT] Tactic: 1002 Time: 0.697728
[04/18/2022-02:36:16] [V] [TRT] Tactic: 0 Time: 1.1447
[04/18/2022-02:36:16] [V] [TRT] Fastest Tactic: 1002 Time: 0.697728
[04/18/2022-02:36:16] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002
[04/18/2022-02:36:16] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,8,8) -> Float(131072,4096,128,2,1) ***************
[04/18/2022-02:36:16] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/stack_Unsqueeze__944:0 copy (Reformat)
[04/18/2022-02:36:16] [V] [TRT] Tactic: 1002 Time: 0.698112
[04/18/2022-02:36:16] [V] [TRT] Tactic: 0 Time: 0.434304
[04/18/2022-02:36:16] [V] [TRT] Fastest Tactic: 0 Time: 0.434304
[04/18/2022-02:36:16] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:16] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,1,1) -> Float(131072,4096,128,2,1) ***************
[04/18/2022-02:36:16] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/stack_Unsqueeze__944:0 copy (Reformat)
[04/18/2022-02:36:16] [V] [TRT] Tactic: 1002 Time: 0.698496
[04/18/2022-02:36:16] [V] [TRT] Tactic: 0 Time: 0.43008
[04/18/2022-02:36:16] [V] [TRT] Fastest Tactic: 0 Time: 0.43008
[04/18/2022-02:36:16] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:16] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:16] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:16] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:16] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(65536,2048,1,32,32) ***************
[04/18/2022-02:36:16] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul:0) (Reformat)
[04/18/2022-02:36:16] [V] [TRT] Tactic: 1002 Time: 0.145408
[04/18/2022-02:36:16] [V] [TRT] Tactic: 0 Time: 0.092416
[04/18/2022-02:36:16] [V] [TRT] Fastest Tactic: 0 Time: 0.092416
[04/18/2022-02:36:16] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(16384,512,1:4,8,8) ***************
[04/18/2022-02:36:16] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul:0) (Reformat)
[04/18/2022-02:36:16] [V] [TRT] Tactic: 1002 Time: 0.145792
[04/18/2022-02:36:16] [V] [TRT] Tactic: 0 Time: 0.095488
[04/18/2022-02:36:16] [V] [TRT] Fastest Tactic: 0 Time: 0.095488
[04/18/2022-02:36:16] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(2048,64,64:32,1,1) ***************
[04/18/2022-02:36:16] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul:0) (Reformat)
[04/18/2022-02:36:16] [V] [TRT] Tactic: 1002 Time: 0.167168
[04/18/2022-02:36:16] [V] [TRT] Tactic: 0 Time: 0.287488
[04/18/2022-02:36:16] [V] [TRT] Fastest Tactic: 1002 Time: 0.167168
[04/18/2022-02:36:16] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:16] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(65536,2048,1,32,32) ***************
[04/18/2022-02:36:16] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul:0 -> ) (Reformat)
[04/18/2022-02:36:16] [V] [TRT] Tactic: 1002 Time: 0.146048
[04/18/2022-02:36:16] [V] [TRT] Tactic: 0 Time: 0.092928
[04/18/2022-02:36:16] [V] [TRT] Fastest Tactic: 0 Time: 0.092928
[04/18/2022-02:36:16] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(16384,512,1:4,8,8) ***************
[04/18/2022-02:36:16] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul:0 -> ) (Reformat)
[04/18/2022-02:36:16] [V] [TRT] Tactic: 1002 Time: 0.149376
[04/18/2022-02:36:16] [V] [TRT] Tactic: 0 Time: 0.09792
[04/18/2022-02:36:16] [V] [TRT] Fastest Tactic: 0 Time: 0.09792
[04/18/2022-02:36:16] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(2048,64,64:32,1,1) ***************
[04/18/2022-02:36:16] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul:0 -> ) (Reformat)
[04/18/2022-02:36:16] [V] [TRT] Tactic: 1002 Time: 0.168576
[04/18/2022-02:36:16] [V] [TRT] Tactic: 0 Time: 0.28736
[04/18/2022-02:36:16] [V] [TRT] Fastest Tactic: 1002 Time: 0.168576
[04/18/2022-02:36:16] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,32,32) -> Float(65536,2048,64,1,1) ***************
[04/18/2022-02:36:16] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul:0 -> ) (Reformat)
[04/18/2022-02:36:16] [V] [TRT] Tactic: 1002 Time: 0.145536
[04/18/2022-02:36:16] [V] [TRT] Tactic: 0 Time: 0.117248
[04/18/2022-02:36:16] [V] [TRT] Fastest Tactic: 0 Time: 0.117248
[04/18/2022-02:36:16] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,32,32) -> Float(16384,512,1:4,8,8) ***************
[04/18/2022-02:36:16] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul:0 -> ) (Reformat)
[04/18/2022-02:36:16] [V] [TRT] Tactic: 1002 Time: 0.146688
[04/18/2022-02:36:16] [V] [TRT] Tactic: 0 Time: 0.092672
[04/18/2022-02:36:16] [V] [TRT] Fastest Tactic: 0 Time: 0.092672
[04/18/2022-02:36:16] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,32,32) -> Float(2048,64,64:32,1,1) ***************
[04/18/2022-02:36:16] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul:0 -> ) (Reformat)
[04/18/2022-02:36:16] [V] [TRT] Tactic: 1002 Time: 0.144384
[04/18/2022-02:36:16] [V] [TRT] Tactic: 0 Time: 0.337792
[04/18/2022-02:36:16] [V] [TRT] Fastest Tactic: 1002 Time: 0.144384
[04/18/2022-02:36:16] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,8,8) -> Float(65536,2048,64,1,1) ***************
[04/18/2022-02:36:16] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul:0 -> ) (Reformat)
[04/18/2022-02:36:16] [V] [TRT] Tactic: 1002 Time: 0.145664
[04/18/2022-02:36:16] [V] [TRT] Tactic: 0 Time: 0.093824
[04/18/2022-02:36:16] [V] [TRT] Fastest Tactic: 0 Time: 0.093824
[04/18/2022-02:36:16] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,8,8) -> Float(65536,2048,1,32,32) ***************
[04/18/2022-02:36:16] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul:0 -> ) (Reformat)
[04/18/2022-02:36:16] [V] [TRT] Tactic: 1002 Time: 0.147456
[04/18/2022-02:36:16] [V] [TRT] Tactic: 0 Time: 0.116608
[04/18/2022-02:36:16] [V] [TRT] Fastest Tactic: 0 Time: 0.116608
[04/18/2022-02:36:16] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,8,8) -> Float(2048,64,64:32,1,1) ***************
[04/18/2022-02:36:16] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul:0 -> ) (Reformat)
[04/18/2022-02:36:16] [V] [TRT] Tactic: 1002 Time: 0.143744
[04/18/2022-02:36:16] [V] [TRT] Tactic: 0 Time: 0.340736
[04/18/2022-02:36:16] [V] [TRT] Fastest Tactic: 1002 Time: 0.143744
[04/18/2022-02:36:16] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,1,1) -> Float(65536,2048,64,1,1) ***************
[04/18/2022-02:36:16] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul:0 -> ) (Reformat)
[04/18/2022-02:36:16] [V] [TRT] Tactic: 1002 Time: 0.144768
[04/18/2022-02:36:16] [V] [TRT] Tactic: 0 Time: 0.094464
[04/18/2022-02:36:16] [V] [TRT] Fastest Tactic: 0 Time: 0.094464
[04/18/2022-02:36:16] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,1,1) -> Float(65536,2048,1,32,32) ***************
[04/18/2022-02:36:16] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul:0 -> ) (Reformat)
[04/18/2022-02:36:16] [V] [TRT] Tactic: 1002 Time: 0.144256
[04/18/2022-02:36:16] [V] [TRT] Tactic: 0 Time: 0.091392
[04/18/2022-02:36:16] [V] [TRT] Fastest Tactic: 0 Time: 0.091392
[04/18/2022-02:36:16] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,1,1) -> Float(16384,512,1:4,8,8) ***************
[04/18/2022-02:36:16] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/MatMul:0 -> ) (Reformat)
[04/18/2022-02:36:16] [V] [TRT] Tactic: 1002 Time: 0.145664
[04/18/2022-02:36:16] [V] [TRT] Tactic: 0 Time: 0.108288
[04/18/2022-02:36:16] [V] [TRT] Fastest Tactic: 0 Time: 0.108288
[04/18/2022-02:36:16] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:16] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(65536,1,2048,32) ***************
[04/18/2022-02:36:16] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/Squeeze:0 -> ) (Reformat)
[04/18/2022-02:36:16] [V] [TRT] Tactic: 1002 Time: 0.146816
[04/18/2022-02:36:16] [V] [TRT] Tactic: 0 Time: 0.094464
[04/18/2022-02:36:16] [V] [TRT] Fastest Tactic: 0 Time: 0.094464
[04/18/2022-02:36:16] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(16384,1:4,512,8) ***************
[04/18/2022-02:36:16] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/Squeeze:0 -> ) (Reformat)
[04/18/2022-02:36:16] [V] [TRT] Tactic: 1002 Time: 0.147968
[04/18/2022-02:36:16] [V] [TRT] Tactic: 0 Time: 0.124672
[04/18/2022-02:36:16] [V] [TRT] Fastest Tactic: 0 Time: 0.124672
[04/18/2022-02:36:16] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(2048,2048:32,64,1) ***************
[04/18/2022-02:36:16] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/Squeeze:0 -> ) (Reformat)
[04/18/2022-02:36:16] [V] [TRT] Tactic: 1002 Time: 0.147328
[04/18/2022-02:36:16] [V] [TRT] Tactic: 0 Time:
0.28736 [04/18/2022-02:36:16] [V] [TRT] Fastest Tactic: 1002 Time: 0.147328 [04/18/2022-02:36:16] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(1:4,4096,128,2) *************** [04/18/2022-02:36:16] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/Squeeze:0 -> ) (Reformat) [04/18/2022-02:36:16] [V] [TRT] Tactic: 1002 Time: 0.196096 [04/18/2022-02:36:17] [V] [TRT] Tactic: 0 Time: 0.116864 [04/18/2022-02:36:17] [V] [TRT] Fastest Tactic: 0 Time: 0.116864 [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(65536,2048,64,1) *************** [04/18/2022-02:36:17] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/Squeeze:0 -> ) (Reformat) [04/18/2022-02:36:17] [V] [TRT] Tactic: 1002 Time: 0.146304 [04/18/2022-02:36:17] [V] [TRT] Tactic: 0 Time: 0.107648 [04/18/2022-02:36:17] [V] [TRT] Fastest Tactic: 0 Time: 0.107648 [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(16384,1:4,512,8) *************** [04/18/2022-02:36:17] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/Squeeze:0 -> ) (Reformat) [04/18/2022-02:36:17] [V] [TRT] Tactic: 1002 Time: 0.144128 [04/18/2022-02:36:17] [V] [TRT] Tactic: 0 Time: 0.09536 [04/18/2022-02:36:17] [V] [TRT] Fastest Tactic: 0 Time: 0.09536 [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(2048,2048:32,64,1) *************** [04/18/2022-02:36:17] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/Squeeze:0 -> ) (Reformat) [04/18/2022-02:36:17] [V] [TRT] Tactic: 1002 Time: 0.143104 [04/18/2022-02:36:17] [V] [TRT] Tactic: 0 Time: 0.360704 
[04/18/2022-02:36:17] [V] [TRT] Fastest Tactic: 1002 Time: 0.143104 [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(1:4,4096,128,2) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(65536,2048,64,1) *************** [04/18/2022-02:36:17] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/Squeeze:0 -> ) (Reformat) [04/18/2022-02:36:17] [V] [TRT] Tactic: 1002 Time: 0.146944 [04/18/2022-02:36:17] [V] [TRT] Tactic: 0 Time: 0.11136 [04/18/2022-02:36:17] [V] [TRT] Fastest Tactic: 0 Time: 0.11136 [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(65536,1,2048,32) *************** [04/18/2022-02:36:17] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/Squeeze:0 -> ) (Reformat) [04/18/2022-02:36:17] [V] [TRT] Tactic: 1002 Time: 0.142976 [04/18/2022-02:36:17] [V] [TRT] Tactic: 0 Time: 0.121984 [04/18/2022-02:36:17] [V] [TRT] Fastest Tactic: 0 Time: 0.121984 [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(2048,2048:32,64,1) *************** [04/18/2022-02:36:17] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/Squeeze:0 -> ) (Reformat) [04/18/2022-02:36:17] [V] [TRT] Tactic: 1002 Time: 0.144256 [04/18/2022-02:36:17] [V] [TRT] Tactic: 0 Time: 0.360192 [04/18/2022-02:36:17] [V] [TRT] Fastest Tactic: 1002 Time: 0.144256 [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(1:4,4096,128,2) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(65536,2048,64,1) *************** [04/18/2022-02:36:17] [V] [TRT] 
--------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/Squeeze:0 -> ) (Reformat) [04/18/2022-02:36:17] [V] [TRT] Tactic: 1002 Time: 0.146816 [04/18/2022-02:36:17] [V] [TRT] Tactic: 0 Time: 0.109184 [04/18/2022-02:36:17] [V] [TRT] Fastest Tactic: 0 Time: 0.109184 [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(65536,1,2048,32) *************** [04/18/2022-02:36:17] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/Squeeze:0 -> ) (Reformat) [04/18/2022-02:36:17] [V] [TRT] Tactic: 1002 Time: 0.16192 [04/18/2022-02:36:17] [V] [TRT] Tactic: 0 Time: 0.094464 [04/18/2022-02:36:17] [V] [TRT] Fastest Tactic: 0 Time: 0.094464 [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(16384,1:4,512,8) *************** [04/18/2022-02:36:17] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/Squeeze:0 -> ) (Reformat) [04/18/2022-02:36:17] [V] [TRT] Tactic: 1002 Time: 0.142976 [04/18/2022-02:36:17] [V] [TRT] Tactic: 0 Time: 0.126976 [04/18/2022-02:36:17] [V] [TRT] Fastest Tactic: 0 Time: 0.126976 [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(1:4,4096,128,2) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(1:4,4096,128,2) -> Float(65536,2048,64,1) *************** [04/18/2022-02:36:17] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/combine/Squeeze:0 -> ) (Reformat) [04/18/2022-02:36:17] [V] [TRT] Tactic: 1002 Time: 0.446848 [04/18/2022-02:36:17] [V] [TRT] Tactic: 0 Time: 0.127616 [04/18/2022-02:36:17] [V] [TRT] Fastest Tactic: 0 Time: 0.127616 [04/18/2022-02:36:17] [V] [TRT] 
*************** Autotuning Reformat: Float(1:4,4096,128,2) -> Float(65536,1,2048,32) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(1:4,4096,128,2) -> Float(16384,1:4,512,8) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(1:4,4096,128,2) -> Float(2048,2048:32,64,1) *************** [04/18/2022-02:36:17] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(65536,1,2048,32) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(16384,1:4,512,8) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(2048,2048:32,64,1) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(65536,2048,64,1) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(16384,1:4,512,8) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(2048,2048:32,64,1) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(65536,2048,64,1) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(65536,1,2048,32) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(2048,2048:32,64,1) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(65536,2048,64,1) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(65536,1,2048,32) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: 
Float(2048,2048:32,64,1) -> Float(16384,1:4,512,8) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(1:4,4096,128,2) -> Float(65536,2048,64,1) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(1:4,4096,128,2) -> Float(65536,1,2048,32) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(1:4,4096,128,2) -> Float(16384,1:4,512,8) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(1:4,4096,128,2) -> Float(2048,2048:32,64,1) *************** [04/18/2022-02:36:17] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(65536,1,2048,64) *************** [04/18/2022-02:36:17] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__946:0 -> ) (Reformat) [04/18/2022-02:36:17] [V] [TRT] Tactic: 1002 Time: 0.089856 [04/18/2022-02:36:17] [V] [TRT] Tactic: 0 Time: 0.101888 [04/18/2022-02:36:17] [V] [TRT] Fastest Tactic: 1002 Time: 0.089856 [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:36:17] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__946:0 -> ) (Reformat) [04/18/2022-02:36:17] [V] [TRT] Tactic: 1002 Time: 0.086784 [04/18/2022-02:36:17] [V] [TRT] Tactic: 0 Time: 0.103936 [04/18/2022-02:36:17] [V] [TRT] Fastest Tactic: 1002 Time: 0.086784 [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(65536,1024,32,1) *************** [04/18/2022-02:36:17] [V] [TRT] --------------- Timing Runner: Optimizer 
Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__946:0 -> ) (Reformat) [04/18/2022-02:36:17] [V] [TRT] Tactic: 1002 Time: 0.095744 [04/18/2022-02:36:17] [V] [TRT] Tactic: 0 Time: 0.11072 [04/18/2022-02:36:17] [V] [TRT] Fastest Tactic: 1002 Time: 0.095744 [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:36:17] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__946:0 -> ) (Reformat) [04/18/2022-02:36:17] [V] [TRT] Tactic: 1002 Time: 0.0736 [04/18/2022-02:36:17] [V] [TRT] Tactic: 0 Time: 0.093952 [04/18/2022-02:36:17] [V] [TRT] Fastest Tactic: 1002 Time: 0.0736 [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1024,32,1) *************** [04/18/2022-02:36:17] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__946:0 -> ) (Reformat) [04/18/2022-02:36:17] [V] [TRT] Tactic: 1002 Time: 0.093824 [04/18/2022-02:36:17] [V] [TRT] Tactic: 0 Time: 0.133504 [04/18/2022-02:36:17] [V] [TRT] Fastest Tactic: 1002 Time: 0.093824 [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1,2048,64) *************** [04/18/2022-02:36:17] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__946:0 -> ) (Reformat) [04/18/2022-02:36:17] [V] [TRT] Tactic: 1002 Time: 0.075136 [04/18/2022-02:36:17] [V] [TRT] Tactic: 0 Time: 0.137344 [04/18/2022-02:36:17] [V] [TRT] Fastest Tactic: 1002 Time: 0.075136 [04/18/2022-02:36:17] [V] [TRT] 
*************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1024,32,1) *************** [04/18/2022-02:36:17] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__946:0 -> ) (Reformat) [04/18/2022-02:36:17] [V] [TRT] Tactic: 1002 Time: 0.09216 [04/18/2022-02:36:17] [V] [TRT] Tactic: 0 Time: 0.109184 [04/18/2022-02:36:17] [V] [TRT] Fastest Tactic: 1002 Time: 0.09216 [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1,2048,64) *************** [04/18/2022-02:36:17] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__946:0 -> ) (Reformat) [04/18/2022-02:36:17] [V] [TRT] Tactic: 1002 Time: 0.073984 [04/18/2022-02:36:17] [V] [TRT] Tactic: 0 Time: 0.099712 [04/18/2022-02:36:17] [V] [TRT] Fastest Tactic: 1002 Time: 0.073984 [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:36:17] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/separable_conv/separable_conv2d/depthwise__946:0 -> ) (Reformat) [04/18/2022-02:36:17] [V] [TRT] Tactic: 1002 Time: 0.0736 [04/18/2022-02:36:17] [V] [TRT] Tactic: 0 Time: 0.130176 [04/18/2022-02:36:17] [V] [TRT] Fastest Tactic: 1002 Time: 0.0736 [04/18/2022-02:36:17] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(65536,1,2048,64) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:36:17] [V] [TRT] *************** 
Autotuning Reformat: Float(65536,1,2048,64) -> Float(65536,1024,32,1) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1024,32,1) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1,2048,64) *************** [04/18/2022-02:36:17] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(65536,1,2048,64) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(2048,1024:32,32,1) *************** [04/18/2022-02:36:17] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/batchnorm/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:36:17] [V] [TRT] Tactic: 1002 Time: 0.08896 [04/18/2022-02:36:17] [V] [TRT] Tactic: 0 Time: 0.287488 [04/18/2022-02:36:17] [V] [TRT] Fastest Tactic: 1002 Time: 0.08896 [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(65536,1024,32,1) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(2048,1024:32,32,1) *************** [04/18/2022-02:36:17] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/batchnorm/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:36:17] [V] 
[TRT] Tactic: 1002 Time: 0.07296 [04/18/2022-02:36:17] [V] [TRT] Tactic: 0 Time: 0.3648 [04/18/2022-02:36:17] [V] [TRT] Fastest Tactic: 1002 Time: 0.07296 [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1024,32,1) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1,2048,64) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(2048,1024:32,32,1) *************** [04/18/2022-02:36:17] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_04/1_dn_lvl_4/post_combine/batchnorm/FusedBatchNormV3:0 -> ) (Reformat) [04/18/2022-02:36:17] [V] [TRT] Tactic: 1002 Time: 0.075264 [04/18/2022-02:36:17] [V] [TRT] Tactic: 0 Time: 0.364416 [04/18/2022-02:36:17] [V] [TRT] Fastest Tactic: 1002 Time: 0.075264 [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1024,32,1) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1,2048,64) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:36:17] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(65536,1,2048,32) *************** [04/18/2022-02:36:17] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> Transpose__6293:0) (Reformat) [04/18/2022-02:36:17] [V] [TRT] Tactic: 1002 Time: 0.145792 [04/18/2022-02:36:17] [V] [TRT] Tactic: 0 Time: 0.102016 [04/18/2022-02:36:17] [V] [TRT] Fastest Tactic: 0 Time: 0.102016 [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(16384,1:4,512,8) 
*************** [04/18/2022-02:36:17] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> Transpose__6293:0) (Reformat) [04/18/2022-02:36:17] [V] [TRT] Tactic: 1002 Time: 0.146432 [04/18/2022-02:36:17] [V] [TRT] Tactic: 0 Time: 0.125312 [04/18/2022-02:36:17] [V] [TRT] Fastest Tactic: 0 Time: 0.125312 [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(2048,2048:32,64,1) *************** [04/18/2022-02:36:17] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> Transpose__6293:0) (Reformat) [04/18/2022-02:36:17] [V] [TRT] Tactic: 1002 Time: 0.145664 [04/18/2022-02:36:17] [V] [TRT] Tactic: 0 Time: 0.287616 [04/18/2022-02:36:17] [V] [TRT] Fastest Tactic: 1002 Time: 0.145664 [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(65536,2048,64,1) *************** [04/18/2022-02:36:17] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> Transpose__6293:0) (Reformat) [04/18/2022-02:36:17] [V] [TRT] Tactic: 1002 Time: 0.161536 [04/18/2022-02:36:17] [V] [TRT] Tactic: 0 Time: 0.1088 [04/18/2022-02:36:17] [V] [TRT] Fastest Tactic: 0 Time: 0.1088 [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(16384,1:4,512,8) *************** [04/18/2022-02:36:17] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> Transpose__6293:0) (Reformat) [04/18/2022-02:36:17] [V] [TRT] Tactic: 1002 Time: 0.144 [04/18/2022-02:36:17] [V] [TRT] Tactic: 0 Time: 0.091648 [04/18/2022-02:36:17] [V] [TRT] Fastest Tactic: 0 Time: 0.091648 [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(2048,2048:32,64,1) *************** [04/18/2022-02:36:17] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> Transpose__6293:0) (Reformat) [04/18/2022-02:36:17] [V] [TRT] Tactic: 1002 Time: 0.142848 [04/18/2022-02:36:17] [V] [TRT] Tactic: 0 Time: 0.360832 [04/18/2022-02:36:17] 
[V] [TRT] Fastest Tactic: 1002 Time: 0.142848 [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(65536,2048,64,1) *************** [04/18/2022-02:36:17] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> Transpose__6293:0) (Reformat) [04/18/2022-02:36:17] [V] [TRT] Tactic: 1002 Time: 0.146688 [04/18/2022-02:36:17] [V] [TRT] Tactic: 0 Time: 0.131328 [04/18/2022-02:36:17] [V] [TRT] Fastest Tactic: 0 Time: 0.131328 [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(65536,1,2048,32) *************** [04/18/2022-02:36:17] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> Transpose__6293:0) (Reformat) [04/18/2022-02:36:17] [V] [TRT] Tactic: 1002 Time: 0.161536 [04/18/2022-02:36:17] [V] [TRT] Tactic: 0 Time: 0.098688 [04/18/2022-02:36:17] [V] [TRT] Fastest Tactic: 0 Time: 0.098688 [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(2048,2048:32,64,1) *************** [04/18/2022-02:36:17] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> Transpose__6293:0) (Reformat) [04/18/2022-02:36:17] [V] [TRT] Tactic: 1002 Time: 0.144384 [04/18/2022-02:36:17] [V] [TRT] Tactic: 0 Time: 0.35968 [04/18/2022-02:36:17] [V] [TRT] Fastest Tactic: 1002 Time: 0.144384 [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(65536,2048,64,1) *************** [04/18/2022-02:36:17] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> Transpose__6293:0) (Reformat) [04/18/2022-02:36:17] [V] [TRT] Tactic: 1002 Time: 0.164608 [04/18/2022-02:36:17] [V] [TRT] Tactic: 0 Time: 0.113664 [04/18/2022-02:36:17] [V] [TRT] Fastest Tactic: 0 Time: 0.113664 [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(65536,1,2048,32) *************** [04/18/2022-02:36:17] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( 
-> Transpose__6293:0) (Reformat) [04/18/2022-02:36:17] [V] [TRT] Tactic: 1002 Time: 0.143104 [04/18/2022-02:36:17] [V] [TRT] Tactic: 0 Time: 0.106752 [04/18/2022-02:36:17] [V] [TRT] Fastest Tactic: 0 Time: 0.106752 [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(16384,1:4,512,8) *************** [04/18/2022-02:36:17] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> Transpose__6293:0) (Reformat) [04/18/2022-02:36:17] [V] [TRT] Tactic: 1002 Time: 0.14528 [04/18/2022-02:36:17] [V] [TRT] Tactic: 0 Time: 0.106752 [04/18/2022-02:36:17] [V] [TRT] Fastest Tactic: 0 Time: 0.106752 [04/18/2022-02:36:17] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(65536,1,2048,32) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(16384,1:4,512,8) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(2048,2048:32,64,1) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(65536,2048,64,1) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(16384,1:4,512,8) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(2048,2048:32,64,1) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(65536,2048,64,1) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(65536,1,2048,32) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(2048,2048:32,64,1) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning 
Reformat: Float(2048,2048:32,64,1) -> Float(65536,2048,64,1) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(65536,1,2048,32) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(16384,1:4,512,8) *************** [04/18/2022-02:36:17] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(65536,1,2048,32) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(16384,1:4,512,8) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(2048,2048:32,64,1) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(65536,2048,64,1) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(16384,1:4,512,8) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(2048,2048:32,64,1) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(65536,2048,64,1) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(65536,1,2048,32) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(2048,2048:32,64,1) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(65536,2048,64,1) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(65536,1,2048,32) *************** [04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: 
Float(2048,2048:32,64,1) -> Float(16384,1:4,512,8) ***************
[04/18/2022-02:36:17] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,64,1) -> Float(65536,2048,1,2048,32) ***************
[04/18/2022-02:36:17] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953:0) (Reformat)
[04/18/2022-02:36:17] [V] [TRT] Tactic: 1002 Time: 0.147456
[04/18/2022-02:36:17] [V] [TRT] Tactic: 0 Time: 0.091648
[04/18/2022-02:36:17] [V] [TRT] Fastest Tactic: 0 Time: 0.091648
[04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,64,1) -> Float(16384,512,1:4,512,8) ***************
[04/18/2022-02:36:17] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953:0) (Reformat)
[04/18/2022-02:36:17] [V] [TRT] Tactic: 1002 Time: 0.145664
[04/18/2022-02:36:17] [V] [TRT] Tactic: 0 Time: 0.09536
[04/18/2022-02:36:17] [V] [TRT] Fastest Tactic: 0 Time: 0.09536
[04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,64,1) -> Float(2048,64,64:32,64,1) ***************
[04/18/2022-02:36:17] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953:0) (Reformat)
[04/18/2022-02:36:17] [V] [TRT] Tactic: 1002 Time: 0.146176
[04/18/2022-02:36:17] [V] [TRT] Tactic: 0 Time: 0.28736
[04/18/2022-02:36:17] [V] [TRT] Fastest Tactic: 1002 Time: 0.146176
[04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,2048,32) -> Float(65536,2048,64,64,1) ***************
[04/18/2022-02:36:17] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953:0) (Reformat)
[04/18/2022-02:36:17] [V] [TRT] Tactic: 1002 Time: 0.146176
[04/18/2022-02:36:17] [V] [TRT] Tactic: 0 Time: 0.092672
[04/18/2022-02:36:17] [V] [TRT] Fastest Tactic: 0 Time: 0.092672
[04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,2048,32) -> Float(16384,512,1:4,512,8) ***************
[04/18/2022-02:36:17] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953:0) (Reformat)
[04/18/2022-02:36:17] [V] [TRT] Tactic: 1002 Time: 0.144768
[04/18/2022-02:36:17] [V] [TRT] Tactic: 0 Time: 0.09344
[04/18/2022-02:36:17] [V] [TRT] Fastest Tactic: 0 Time: 0.09344
[04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,2048,32) -> Float(2048,64,64:32,64,1) ***************
[04/18/2022-02:36:17] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953:0) (Reformat)
[04/18/2022-02:36:17] [V] [TRT] Tactic: 1002 Time: 0.143616
[04/18/2022-02:36:17] [V] [TRT] Tactic: 0 Time: 0.337792
[04/18/2022-02:36:17] [V] [TRT] Fastest Tactic: 1002 Time: 0.143616
[04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,512,8) -> Float(65536,2048,64,64,1) ***************
[04/18/2022-02:36:17] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953:0) (Reformat)
[04/18/2022-02:36:17] [V] [TRT] Tactic: 1002 Time: 0.164352
[04/18/2022-02:36:17] [V] [TRT] Tactic: 0 Time: 0.095104
[04/18/2022-02:36:17] [V] [TRT] Fastest Tactic: 0 Time: 0.095104
[04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,512,8) -> Float(65536,2048,1,2048,32) ***************
[04/18/2022-02:36:17] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953:0) (Reformat)
[04/18/2022-02:36:17] [V] [TRT] Tactic: 1002 Time: 0.144768
[04/18/2022-02:36:17] [V] [TRT] Tactic: 0 Time: 0.09472
[04/18/2022-02:36:17] [V] [TRT] Fastest Tactic: 0 Time: 0.09472
[04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,512,8) -> Float(2048,64,64:32,64,1) ***************
[04/18/2022-02:36:17] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953:0) (Reformat)
[04/18/2022-02:36:17] [V] [TRT] Tactic: 1002 Time: 0.142848
[04/18/2022-02:36:17] [V] [TRT] Tactic: 0 Time: 0.33856
[04/18/2022-02:36:17] [V] [TRT] Fastest Tactic: 1002 Time: 0.142848
[04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,64,1) -> Float(65536,2048,64,64,1) ***************
[04/18/2022-02:36:17] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953:0) (Reformat)
[04/18/2022-02:36:17] [V] [TRT] Tactic: 1002 Time: 0.146304
[04/18/2022-02:36:17] [V] [TRT] Tactic: 0 Time: 0.11264
[04/18/2022-02:36:17] [V] [TRT] Fastest Tactic: 0 Time: 0.11264
[04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,64,1) -> Float(65536,2048,1,2048,32) ***************
[04/18/2022-02:36:17] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953:0) (Reformat)
[04/18/2022-02:36:17] [V] [TRT] Tactic: 1002 Time: 0.14272
[04/18/2022-02:36:17] [V] [TRT] Tactic: 0 Time: 0.091392
[04/18/2022-02:36:17] [V] [TRT] Fastest Tactic: 0 Time: 0.091392
[04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,64,1) -> Float(16384,512,1:4,512,8) ***************
[04/18/2022-02:36:17] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953:0) (Reformat)
[04/18/2022-02:36:17] [V] [TRT] Tactic: 1002 Time: 0.144128
[04/18/2022-02:36:17] [V] [TRT] Tactic: 0 Time: 0.105728
[04/18/2022-02:36:17] [V] [TRT] Fastest Tactic: 0 Time: 0.105728
[04/18/2022-02:36:17] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:17] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,64,1) -> Float(131072,4096,128,64,1) ***************
[04/18/2022-02:36:17] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953:0 copy (Reformat)
[04/18/2022-02:36:18] [V] [TRT] Tactic: 1002 Time: 2.80358
[04/18/2022-02:36:18] [V] [TRT] Tactic: 0 Time: 0.077312
[04/18/2022-02:36:18] [V] [TRT] Fastest Tactic: 0 Time: 0.077312
[04/18/2022-02:36:18] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,64,1) -> Float(131072,4096,1,2048,32) ***************
[04/18/2022-02:36:18] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953:0 copy (Reformat)
[04/18/2022-02:36:18] [V] [TRT] Tactic: 1002 Time: 0.144768
[04/18/2022-02:36:18] [V] [TRT] Tactic: 0 Time: 0.094592
[04/18/2022-02:36:18] [V] [TRT] Fastest Tactic: 0 Time: 0.094592
[04/18/2022-02:36:18] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,64,1) -> Float(32768,1024,1:4,512,8) ***************
[04/18/2022-02:36:18] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953:0 copy (Reformat)
[04/18/2022-02:36:18] [V] [TRT] Tactic: 1002 Time: 0.15616
[04/18/2022-02:36:18] [V] [TRT] Tactic: 0 Time: 0.102656
[04/18/2022-02:36:18] [V] [TRT] Fastest Tactic: 0 Time: 0.102656
[04/18/2022-02:36:18] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,64,1) -> Float(4096,128,128:32,64,1) ***************
[04/18/2022-02:36:18] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953:0 copy (Reformat)
[04/18/2022-02:36:18] [V] [TRT] Tactic: 1002 Time: 0.160384
[04/18/2022-02:36:18] [V] [TRT] Tactic: 0 Time: 0.288
[04/18/2022-02:36:18] [V] [TRT] Fastest Tactic: 1002 Time: 0.160384
[04/18/2022-02:36:18] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002
[04/18/2022-02:36:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,2048,32) -> Float(131072,4096,128,64,1) ***************
[04/18/2022-02:36:18] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953:0 copy (Reformat)
[04/18/2022-02:36:18] [V] [TRT] Tactic: 1002 Time: 0.144896
[04/18/2022-02:36:18] [V] [TRT] Tactic: 0 Time: 0.135936
[04/18/2022-02:36:18] [V] [TRT] Fastest Tactic: 0 Time: 0.135936
[04/18/2022-02:36:18] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,2048,32) -> Float(131072,4096,1,2048,32) ***************
[04/18/2022-02:36:18] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953:0 copy (Reformat)
[04/18/2022-02:36:18] [V] [TRT] Tactic: 1002 Time: 0.079872
[04/18/2022-02:36:18] [V] [TRT] Tactic: 0 Time: 0.087296
[04/18/2022-02:36:18] [V] [TRT] Fastest Tactic: 1002 Time: 0.079872
[04/18/2022-02:36:18] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002
[04/18/2022-02:36:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,2048,32) -> Float(32768,1024,1:4,512,8) ***************
[04/18/2022-02:36:18] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953:0 copy (Reformat)
[04/18/2022-02:36:18] [V] [TRT] Tactic: 1002 Time: 0.153344
[04/18/2022-02:36:18] [V] [TRT] Tactic: 0 Time: 0.091136
[04/18/2022-02:36:18] [V] [TRT] Fastest Tactic: 0 Time: 0.091136
[04/18/2022-02:36:18] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,2048,32) -> Float(4096,128,128:32,64,1) ***************
[04/18/2022-02:36:18] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953:0 copy (Reformat)
[04/18/2022-02:36:18] [V] [TRT] Tactic: 1002 Time: 0.16768
[04/18/2022-02:36:18] [V] [TRT] Tactic: 0 Time: 0.337536
[04/18/2022-02:36:18] [V] [TRT] Fastest Tactic: 1002 Time: 0.16768
[04/18/2022-02:36:18] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002
[04/18/2022-02:36:18] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,512,8) -> Float(131072,4096,128,64,1) ***************
[04/18/2022-02:36:18] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953:0 copy (Reformat)
[04/18/2022-02:36:18] [V] [TRT] Tactic: 1002 Time: 0.145408
[04/18/2022-02:36:18] [V] [TRT] Tactic: 0 Time: 0.119936
[04/18/2022-02:36:18] [V] [TRT] Fastest Tactic: 0 Time: 0.119936
[04/18/2022-02:36:18] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:18] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,512,8) -> Float(131072,4096,1,2048,32) ***************
[04/18/2022-02:36:18] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953:0 copy (Reformat)
[04/18/2022-02:36:18] [V] [TRT] Tactic: 1002 Time: 0.14336
[04/18/2022-02:36:18] [V] [TRT] Tactic: 0 Time: 0.110208
[04/18/2022-02:36:18] [V] [TRT] Fastest Tactic: 0 Time: 0.110208
[04/18/2022-02:36:18] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:18] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,512,8) -> Float(32768,1024,1:4,512,8) ***************
[04/18/2022-02:36:18] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953:0 copy (Reformat)
[04/18/2022-02:36:18] [V] [TRT] Tactic: 1002 Time: 0.150656
[04/18/2022-02:36:18] [V] [TRT] Tactic: 0 Time: 0.105472
[04/18/2022-02:36:18] [V] [TRT] Fastest Tactic: 0 Time: 0.105472
[04/18/2022-02:36:18] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:18] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,512,8) -> Float(4096,128,128:32,64,1) ***************
[04/18/2022-02:36:18] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953:0 copy (Reformat)
[04/18/2022-02:36:18] [V] [TRT] Tactic: 1002 Time: 0.152448
[04/18/2022-02:36:18] [V] [TRT] Tactic: 0 Time: 0.338176
[04/18/2022-02:36:18] [V] [TRT] Fastest Tactic: 1002 Time: 0.152448
[04/18/2022-02:36:18] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002
[04/18/2022-02:36:18] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,64,1) -> Float(131072,4096,128,64,1) ***************
[04/18/2022-02:36:18] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953:0 copy (Reformat)
[04/18/2022-02:36:18] [V] [TRT] Tactic: 1002 Time: 0.144128
[04/18/2022-02:36:18] [V] [TRT] Tactic: 0 Time: 0.0992
[04/18/2022-02:36:18] [V] [TRT] Fastest Tactic: 0 Time: 0.0992
[04/18/2022-02:36:18] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:18] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,64,1) -> Float(131072,4096,1,2048,32) ***************
[04/18/2022-02:36:18] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953:0 copy (Reformat)
[04/18/2022-02:36:18] [V] [TRT] Tactic: 1002 Time: 0.143616
[04/18/2022-02:36:18] [V] [TRT] Tactic: 0 Time: 0.110336
[04/18/2022-02:36:18] [V] [TRT] Fastest Tactic: 0 Time: 0.110336
[04/18/2022-02:36:18] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:18] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,64,1) -> Float(32768,1024,1:4,512,8) ***************
[04/18/2022-02:36:18] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953:0 copy (Reformat)
[04/18/2022-02:36:18] [V] [TRT] Tactic: 1002 Time: 0.14336
[04/18/2022-02:36:18] [V] [TRT] Tactic: 0 Time: 0.111616
[04/18/2022-02:36:18] [V] [TRT] Fastest Tactic: 0 Time: 0.111616
[04/18/2022-02:36:18] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:18] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,64,1) -> Float(4096,128,128:32,64,1) ***************
[04/18/2022-02:36:18] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953:0 copy (Reformat)
[04/18/2022-02:36:18] [V] [TRT] Tactic: 1002 Time: 0.142208
[04/18/2022-02:36:18] [V] [TRT] Tactic: 0 Time: 0.339328
[04/18/2022-02:36:18] [V] [TRT] Fastest Tactic: 1002 Time: 0.142208
[04/18/2022-02:36:18] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002
[04/18/2022-02:36:18] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,64,1) -> Float(131072,4096,128,64,1) ***************
[04/18/2022-02:36:18] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953:0 copy (Reformat)
[04/18/2022-02:36:18] [V] [TRT] Tactic: 1002 Time: 3.39597
[04/18/2022-02:36:18] [V] [TRT] Tactic: 0 Time: 0.0768
[04/18/2022-02:36:18] [V] [TRT] Fastest Tactic: 0 Time: 0.0768
[04/18/2022-02:36:18] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,64,1) -> Float(131072,4096,1,2048,32) ***************
[04/18/2022-02:36:18] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953:0 copy (Reformat)
[04/18/2022-02:36:18] [V] [TRT] Tactic: 1002 Time: 0.141952
[04/18/2022-02:36:18] [V] [TRT] Tactic: 0 Time: 0.093568
[04/18/2022-02:36:18] [V] [TRT] Fastest Tactic: 0 Time: 0.093568
[04/18/2022-02:36:18] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,64,1) -> Float(32768,1024,1:4,512,8) ***************
[04/18/2022-02:36:18] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953:0 copy (Reformat)
[04/18/2022-02:36:18] [V] [TRT] Tactic: 1002 Time: 0.14144
[04/18/2022-02:36:18] [V] [TRT] Tactic: 0 Time: 0.096256
[04/18/2022-02:36:18] [V] [TRT] Fastest Tactic: 0 Time: 0.096256
[04/18/2022-02:36:18] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,64,1) -> Float(4096,128,128:32,64,1) ***************
[04/18/2022-02:36:18] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953:0 copy (Reformat)
[04/18/2022-02:36:18] [V] [TRT] Tactic: 1002 Time: 0.14272
[04/18/2022-02:36:18] [V] [TRT] Tactic: 0 Time: 0.289152
[04/18/2022-02:36:18] [V] [TRT] Fastest Tactic: 1002 Time: 0.14272
[04/18/2022-02:36:18] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002
[04/18/2022-02:36:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,2048,32) -> Float(131072,4096,128,64,1) ***************
[04/18/2022-02:36:18] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953:0 copy (Reformat)
[04/18/2022-02:36:18] [V] [TRT] Tactic: 1002 Time: 0.156288
[04/18/2022-02:36:18] [V] [TRT] Tactic: 0 Time: 0.098176
[04/18/2022-02:36:18] [V] [TRT] Fastest Tactic: 0 Time: 0.098176
[04/18/2022-02:36:18] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,2048,32) -> Float(131072,4096,1,2048,32) ***************
[04/18/2022-02:36:18] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953:0 copy (Reformat)
[04/18/2022-02:36:18] [V] [TRT] Tactic: 1002 Time: 0.11456
[04/18/2022-02:36:18] [V] [TRT] Tactic: 0 Time: 0.082048
[04/18/2022-02:36:18] [V] [TRT] Fastest Tactic: 0 Time: 0.082048
[04/18/2022-02:36:18] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,2048,32) -> Float(32768,1024,1:4,512,8) ***************
[04/18/2022-02:36:18] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953:0 copy (Reformat)
[04/18/2022-02:36:18] [V] [TRT] Tactic: 1002 Time: 0.149376
[04/18/2022-02:36:18] [V] [TRT] Tactic: 0 Time: 0.089344
[04/18/2022-02:36:18] [V] [TRT] Fastest Tactic: 0 Time: 0.089344
[04/18/2022-02:36:18] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,2048,32) -> Float(4096,128,128:32,64,1) ***************
[04/18/2022-02:36:18] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953:0 copy (Reformat)
[04/18/2022-02:36:18] [V] [TRT] Tactic: 1002 Time: 0.141696
[04/18/2022-02:36:18] [V] [TRT] Tactic: 0 Time: 0.338688
[04/18/2022-02:36:18] [V] [TRT] Fastest Tactic: 1002 Time: 0.141696
[04/18/2022-02:36:18] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002
[04/18/2022-02:36:18] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,512,8) -> Float(131072,4096,128,64,1) ***************
[04/18/2022-02:36:18] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953:0 copy (Reformat)
[04/18/2022-02:36:18] [V] [TRT] Tactic: 1002 Time: 0.147584
[04/18/2022-02:36:18] [V] [TRT] Tactic: 0 Time: 0.092672
[04/18/2022-02:36:18] [V] [TRT] Fastest Tactic: 0 Time: 0.092672
[04/18/2022-02:36:18] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:18] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,512,8) -> Float(131072,4096,1,2048,32) ***************
[04/18/2022-02:36:18] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953:0 copy (Reformat)
[04/18/2022-02:36:18] [V] [TRT] Tactic: 1002 Time: 0.139264
[04/18/2022-02:36:18] [V] [TRT] Tactic: 0 Time: 0.089472
[04/18/2022-02:36:18] [V] [TRT] Fastest Tactic: 0 Time: 0.089472
[04/18/2022-02:36:18] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:18] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,512,8) -> Float(32768,1024,1:4,512,8) ***************
[04/18/2022-02:36:18] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953:0 copy (Reformat)
[04/18/2022-02:36:18] [V] [TRT] Tactic: 1002 Time: 0.139776
[04/18/2022-02:36:18] [V] [TRT] Tactic: 0 Time: 0.104064
[04/18/2022-02:36:18] [V] [TRT] Fastest Tactic: 0 Time: 0.104064
[04/18/2022-02:36:18] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:18] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,512,8) -> Float(4096,128,128:32,64,1) ***************
[04/18/2022-02:36:18] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953:0 copy (Reformat)
[04/18/2022-02:36:18] [V] [TRT] Tactic: 1002 Time: 0.14912
[04/18/2022-02:36:18] [V] [TRT] Tactic: 0 Time: 0.337152
[04/18/2022-02:36:18] [V] [TRT] Fastest Tactic: 1002 Time: 0.14912
[04/18/2022-02:36:18] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002
[04/18/2022-02:36:18] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,64,1) -> Float(131072,4096,128,64,1) ***************
[04/18/2022-02:36:18] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953:0 copy (Reformat)
[04/18/2022-02:36:18] [V] [TRT] Tactic: 1002 Time: 0.14592
[04/18/2022-02:36:18] [V] [TRT] Tactic: 0 Time: 0.11712
[04/18/2022-02:36:18] [V] [TRT] Fastest Tactic: 0 Time: 0.11712
[04/18/2022-02:36:18] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:18] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,64,1) -> Float(131072,4096,1,2048,32) ***************
[04/18/2022-02:36:18] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953:0 copy (Reformat)
[04/18/2022-02:36:18] [V] [TRT] Tactic: 1002 Time: 0.148352
[04/18/2022-02:36:18] [V] [TRT] Tactic: 0 Time: 0.09152
[04/18/2022-02:36:18] [V] [TRT] Fastest Tactic: 0 Time: 0.09152
[04/18/2022-02:36:18] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:18] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,64,1) -> Float(32768,1024,1:4,512,8) ***************
[04/18/2022-02:36:18] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953:0 copy (Reformat)
[04/18/2022-02:36:18] [V] [TRT] Tactic: 1002 Time: 0.159488
[04/18/2022-02:36:18] [V] [TRT] Tactic: 0 Time: 0.11904
[04/18/2022-02:36:18] [V] [TRT] Fastest Tactic: 0 Time: 0.11904
[04/18/2022-02:36:18] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:18] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,64,1) -> Float(4096,128,128:32,64,1) ***************
[04/18/2022-02:36:18] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__953:0 copy (Reformat)
[04/18/2022-02:36:18] [V] [TRT] Tactic: 1002 Time: 0.14272
[04/18/2022-02:36:18] [V] [TRT] Tactic: 0 Time: 0.33792
[04/18/2022-02:36:18] [V] [TRT] Fastest Tactic: 1002 Time: 0.14272
[04/18/2022-02:36:18] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002
[04/18/2022-02:36:18] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:18] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,128,64,1) -> Float(131072,4096,1,2048,32) ***************
[04/18/2022-02:36:18] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__954:0 -> ) (Reformat)
[04/18/2022-02:36:18] [V] [TRT] Tactic: 1002 Time: 0.58688
[04/18/2022-02:36:18] [V] [TRT] Tactic: 0 Time: 0.569856
[04/18/2022-02:36:18] [V] [TRT] Fastest Tactic: 0 Time: 0.569856
[04/18/2022-02:36:18] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,128,64,1) -> Float(32768,1024,1:4,512,8) ***************
[04/18/2022-02:36:18] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__954:0 -> ) (Reformat)
[04/18/2022-02:36:18] [V] [TRT] Tactic: 1002 Time: 1.42618
[04/18/2022-02:36:18] [V] [TRT] Tactic: 0 Time: 0.582016
[04/18/2022-02:36:18] [V] [TRT] Fastest Tactic: 0 Time: 0.582016
[04/18/2022-02:36:18] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,128,64,1) -> Float(4096,128,128:32,64,1) ***************
[04/18/2022-02:36:18] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__954:0 -> ) (Reformat)
[04/18/2022-02:36:18] [V] [TRT] Tactic: 1002 Time: 1.37357
[04/18/2022-02:36:18] [V] [TRT] Tactic: 0 Time: 0.5568
[04/18/2022-02:36:18] [V] [TRT] Fastest Tactic: 0 Time: 0.5568
[04/18/2022-02:36:18] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,1,2048,32) -> Float(131072,4096,128,64,1) ***************
[04/18/2022-02:36:18] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__954:0 -> ) (Reformat)
[04/18/2022-02:36:18] [V] [TRT] Tactic: 1002 Time: 0.58176
[04/18/2022-02:36:18] [V] [TRT] Tactic: 0 Time: 0.577024
[04/18/2022-02:36:18] [V] [TRT] Fastest Tactic: 0 Time: 0.577024
[04/18/2022-02:36:18] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,1,2048,32) -> Float(32768,1024,1:4,512,8) ***************
[04/18/2022-02:36:18] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__954:0 -> ) (Reformat)
[04/18/2022-02:36:18] [V] [TRT] Tactic: 1002 Time: 0.573568
[04/18/2022-02:36:18] [V] [TRT] Tactic: 0 Time: 0.566656
[04/18/2022-02:36:18] [V] [TRT] Fastest Tactic: 0 Time: 0.566656
[04/18/2022-02:36:18] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,1,2048,32) -> Float(4096,128,128:32,64,1) ***************
[04/18/2022-02:36:18] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__954:0 -> ) (Reformat)
[04/18/2022-02:36:18] [V] [TRT] Tactic: 1002 Time: 1.10976
[04/18/2022-02:36:18] [V] [TRT] Tactic: 0 Time: 0.773504
[04/18/2022-02:36:18] [V] [TRT] Fastest Tactic: 0 Time: 0.773504
[04/18/2022-02:36:18] [V] [TRT] *************** Autotuning Reformat: Float(32768,1024,1:4,512,8) -> Float(131072,4096,128,64,1) ***************
[04/18/2022-02:36:18] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__954:0 -> ) (Reformat)
[04/18/2022-02:36:19] [V] [TRT] Tactic: 1002 Time: 0.559744
[04/18/2022-02:36:19] [V] [TRT] Tactic: 0 Time: 0.571264
[04/18/2022-02:36:19] [V] [TRT] Fastest Tactic: 1002 Time: 0.559744
[04/18/2022-02:36:19] [V] [TRT] *************** Autotuning Reformat: Float(32768,1024,1:4,512,8) -> Float(131072,4096,1,2048,32) ***************
[04/18/2022-02:36:19] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__954:0 -> ) (Reformat)
[04/18/2022-02:36:19] [V] [TRT] Tactic: 1002 Time: 0.57024
[04/18/2022-02:36:19] [V] [TRT] Tactic: 0 Time: 0.5696
[04/18/2022-02:36:19] [V] [TRT] Fastest Tactic: 0 Time: 0.5696
[04/18/2022-02:36:19] [V] [TRT] *************** Autotuning Reformat: Float(32768,1024,1:4,512,8) -> Float(4096,128,128:32,64,1) ***************
[04/18/2022-02:36:19] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__954:0 -> ) (Reformat)
[04/18/2022-02:36:19] [V] [TRT] Tactic: 1002 Time: 0.557824
[04/18/2022-02:36:19] [V] [TRT] Tactic: 0 Time: 0.701568
[04/18/2022-02:36:19] [V] [TRT] Fastest Tactic: 1002 Time: 0.557824
[04/18/2022-02:36:19] [V] [TRT] *************** Autotuning Reformat: Float(4096,128,128:32,64,1) -> Float(131072,4096,128,64,1) ***************
[04/18/2022-02:36:19] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__954:0 -> ) (Reformat)
[04/18/2022-02:36:19] [V] [TRT] Tactic: 1002 Time: 0.559744
[04/18/2022-02:36:19] [V] [TRT] Tactic: 0 Time: 0.571904
[04/18/2022-02:36:19] [V] [TRT] Fastest Tactic: 1002 Time: 0.559744
[04/18/2022-02:36:19] [V] [TRT] *************** Autotuning Reformat: Float(4096,128,128:32,64,1) -> Float(131072,4096,1,2048,32) ***************
[04/18/2022-02:36:19] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__954:0 -> ) (Reformat)
[04/18/2022-02:36:19] [V] [TRT] Tactic: 1002 Time: 1.25606
[04/18/2022-02:36:19] [V] [TRT] Tactic: 0 Time: 0.558976
[04/18/2022-02:36:19] [V] [TRT] Fastest Tactic: 0 Time: 0.558976
[04/18/2022-02:36:19] [V] [TRT] *************** Autotuning Reformat: Float(4096,128,128:32,64,1) -> Float(32768,1024,1:4,512,8) ***************
[04/18/2022-02:36:19] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Concat__954:0 -> ) (Reformat)
[04/18/2022-02:36:19] [V] [TRT] Tactic: 1002 Time: 0.568576
[04/18/2022-02:36:19] [V] [TRT] Tactic: 0 Time: 0.587008
[04/18/2022-02:36:19] [V] [TRT] Fastest Tactic: 1002 Time: 0.568576
[04/18/2022-02:36:19] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:19] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,128,64,1) -> Float(131072,4096,4096,1,2048,32) ***************
[04/18/2022-02:36:19] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956:0) (Reformat)
[04/18/2022-02:36:19] [V] [TRT] Tactic: 1002 Time: 0.575232
[04/18/2022-02:36:19] [V] [TRT] Tactic: 0 Time: 0.6592
[04/18/2022-02:36:19] [V] [TRT] Fastest Tactic: 1002 Time: 0.575232
[04/18/2022-02:36:19] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,128,64,1) -> Float(32768,1024,1024,1:4,512,8) ***************
[04/18/2022-02:36:19] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956:0) (Reformat)
[04/18/2022-02:36:19] [V] [TRT] Tactic: 1002 Time: 0.568192
[04/18/2022-02:36:19] [V] [TRT] Tactic: 0 Time: 0.592512
[04/18/2022-02:36:19] [V] [TRT] Fastest Tactic: 1002 Time: 0.568192
[04/18/2022-02:36:19] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,128,64,1) -> Float(4096,128,128,128:32,64,1) ***************
[04/18/2022-02:36:19] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956:0) (Reformat)
[04/18/2022-02:36:19] [V] [TRT] Tactic: 1002 Time: 1.46342
[04/18/2022-02:36:19] [V] [TRT] Tactic: 0 Time: 0.557824
[04/18/2022-02:36:19] [V] [TRT] Fastest Tactic: 0 Time: 0.557824
[04/18/2022-02:36:19] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,1,2048,32) -> Float(131072,4096,4096,128,64,1) ***************
[04/18/2022-02:36:19] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956:0) (Reformat)
[04/18/2022-02:36:19] [V] [TRT] Tactic: 1002 Time: 0.579712
[04/18/2022-02:36:19] [V] [TRT] Tactic: 0 Time: 0.591616
[04/18/2022-02:36:19] [V] [TRT] Fastest Tactic: 1002 Time: 0.579712
[04/18/2022-02:36:19] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,1,2048,32) -> Float(32768,1024,1024,1:4,512,8) ***************
[04/18/2022-02:36:19] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956:0) (Reformat)
[04/18/2022-02:36:19] [V] [TRT] Tactic: 1002 Time: 1.03309
[04/18/2022-02:36:19] [V] [TRT] Tactic: 0 Time: 0.56128
[04/18/2022-02:36:19] [V] [TRT] Fastest Tactic: 0 Time: 0.56128
[04/18/2022-02:36:19] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,1,2048,32) -> Float(4096,128,128,128:32,64,1) ***************
[04/18/2022-02:36:19] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956:0) (Reformat)
[04/18/2022-02:36:19] [V] [TRT] Tactic: 1002 Time: 0.557952
[04/18/2022-02:36:19] [V] [TRT] Tactic: 0 Time: 0.777216
[04/18/2022-02:36:19] [V] [TRT] Fastest Tactic: 1002 Time: 0.557952
[04/18/2022-02:36:19] [V] [TRT] *************** Autotuning Reformat: Float(32768,1024,1024,1:4,512,8) -> Float(131072,4096,4096,128,64,1) ***************
[04/18/2022-02:36:19] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956:0) (Reformat)
[04/18/2022-02:36:19] [V] [TRT] Tactic: 1002 Time: 0.56896
[04/18/2022-02:36:19] [V] [TRT] Tactic: 0 Time: 0.593024
[04/18/2022-02:36:19] [V] [TRT] Fastest Tactic: 1002 Time: 0.56896
[04/18/2022-02:36:19] [V] [TRT] *************** Autotuning Reformat: Float(32768,1024,1024,1:4,512,8) -> Float(131072,4096,4096,1,2048,32) *************** [04/18/2022-02:36:19] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956:0) (Reformat) [04/18/2022-02:36:19] [V] [TRT] Tactic: 1002 Time: 1.12512 [04/18/2022-02:36:19] [V] [TRT] Tactic: 0 Time: 0.57024 [04/18/2022-02:36:19] [V] [TRT] Fastest Tactic: 0 Time: 0.57024 [04/18/2022-02:36:19] [V] [TRT] *************** Autotuning Reformat: Float(32768,1024,1024,1:4,512,8) -> Float(4096,128,128,128:32,64,1) *************** [04/18/2022-02:36:19] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956:0) (Reformat) [04/18/2022-02:36:19] [V] [TRT] Tactic: 1002 Time: 0.560128 [04/18/2022-02:36:19] [V] [TRT] Tactic: 0 Time: 0.698624 [04/18/2022-02:36:19] [V] [TRT] Fastest Tactic: 1002 Time: 0.560128 [04/18/2022-02:36:19] [V] [TRT] *************** Autotuning Reformat: Float(4096,128,128,128:32,64,1) -> Float(131072,4096,4096,128,64,1) *************** [04/18/2022-02:36:19] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956:0) (Reformat) [04/18/2022-02:36:19] [V] [TRT] Tactic: 1002 Time: 0.562944 [04/18/2022-02:36:19] [V] [TRT] Tactic: 0 Time: 0.602368 [04/18/2022-02:36:19] [V] [TRT] Fastest Tactic: 1002 Time: 0.562944 [04/18/2022-02:36:19] [V] [TRT] *************** Autotuning Reformat: Float(4096,128,128,128:32,64,1) -> Float(131072,4096,4096,1,2048,32) *************** [04/18/2022-02:36:19] [V] [TRT] 
--------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956:0) (Reformat) [04/18/2022-02:36:19] [V] [TRT] Tactic: 1002 Time: 0.553088 [04/18/2022-02:36:19] [V] [TRT] Tactic: 0 Time: 0.562944 [04/18/2022-02:36:19] [V] [TRT] Fastest Tactic: 1002 Time: 0.553088 [04/18/2022-02:36:19] [V] [TRT] *************** Autotuning Reformat: Float(4096,128,128,128:32,64,1) -> Float(32768,1024,1024,1:4,512,8) *************** [04/18/2022-02:36:19] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956:0) (Reformat) [04/18/2022-02:36:19] [V] [TRT] Tactic: 1002 Time: 0.561792 [04/18/2022-02:36:19] [V] [TRT] Tactic: 0 Time: 0.582144 [04/18/2022-02:36:19] [V] [TRT] Fastest Tactic: 1002 Time: 0.561792 [04/18/2022-02:36:19] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:19] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,128,64,1) -> Float(262144,8192,4096,128,64,1) *************** [04/18/2022-02:36:19] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956:0 copy (Reformat) [04/18/2022-02:36:19] [V] [TRT] Tactic: 1002 Time: 5.4967 [04/18/2022-02:36:19] [V] [TRT] Tactic: 0 Time: 0.56064 [04/18/2022-02:36:19] [V] [TRT] Fastest Tactic: 0 Time: 0.56064 [04/18/2022-02:36:19] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:19] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,128,64,1) -> Float(262144,8192,4096,1,2048,32) *************** [04/18/2022-02:36:19] [V] [TRT] --------------- Timing Runner: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956:0 copy (Reformat) [04/18/2022-02:36:20] [V] [TRT] Tactic: 1002 Time: 13.7394 [04/18/2022-02:36:20] [V] [TRT] Tactic: 0 Time: 0.56384 [04/18/2022-02:36:20] [V] [TRT] Fastest Tactic: 0 Time: 0.56384 [04/18/2022-02:36:20] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:20] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,128,64,1) -> Float(65536,2048,1024,1:4,512,8) *************** [04/18/2022-02:36:20] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956:0 copy (Reformat) [04/18/2022-02:36:20] [V] [TRT] Tactic: 1002 Time: 13.7457 [04/18/2022-02:36:20] [V] [TRT] Tactic: 0 Time: 0.570752 [04/18/2022-02:36:20] [V] [TRT] Fastest Tactic: 0 Time: 0.570752 [04/18/2022-02:36:20] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:20] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,128,64,1) -> Float(8192,256,128,128:32,64,1) *************** [04/18/2022-02:36:20] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956:0 copy (Reformat) [04/18/2022-02:36:20] [V] [TRT] Tactic: 1002 Time: 13.6851 [04/18/2022-02:36:20] [V] [TRT] Tactic: 0 Time: 0.557184 [04/18/2022-02:36:20] [V] [TRT] Fastest Tactic: 0 Time: 0.557184 [04/18/2022-02:36:20] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:20] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,1,2048,32) -> Float(262144,8192,4096,128,64,1) *************** [04/18/2022-02:36:20] [V] [TRT] --------------- Timing Runner: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956:0 copy (Reformat) [04/18/2022-02:36:20] [V] [TRT] Tactic: 1002 Time: 1.48019 [04/18/2022-02:36:20] [V] [TRT] Tactic: 0 Time: 0.6496 [04/18/2022-02:36:20] [V] [TRT] Fastest Tactic: 0 Time: 0.6496 [04/18/2022-02:36:20] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:20] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,1,2048,32) -> Float(262144,8192,4096,1,2048,32) *************** [04/18/2022-02:36:20] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956:0 copy (Reformat) [04/18/2022-02:36:20] [V] [TRT] Tactic: 1002 Time: 12.4765 [04/18/2022-02:36:20] [V] [TRT] Tactic: 0 Time: 0.554624 [04/18/2022-02:36:20] [V] [TRT] Fastest Tactic: 0 Time: 0.554624 [04/18/2022-02:36:20] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:20] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,1,2048,32) -> Float(65536,2048,1024,1:4,512,8) *************** [04/18/2022-02:36:20] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956:0 copy (Reformat) [04/18/2022-02:36:21] [V] [TRT] Tactic: 1002 Time: 13.0648 [04/18/2022-02:36:21] [V] [TRT] Tactic: 0 Time: 0.554624 [04/18/2022-02:36:21] [V] [TRT] Fastest Tactic: 0 Time: 0.554624 [04/18/2022-02:36:21] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:21] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,1,2048,32) -> Float(8192,256,128,128:32,64,1) *************** [04/18/2022-02:36:21] [V] [TRT] --------------- Timing Runner: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956:0 copy (Reformat) [04/18/2022-02:36:21] [V] [TRT] Tactic: 1002 Time: 12.4731 [04/18/2022-02:36:21] [V] [TRT] Tactic: 0 Time: 0.698496 [04/18/2022-02:36:21] [V] [TRT] Fastest Tactic: 0 Time: 0.698496 [04/18/2022-02:36:21] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:21] [V] [TRT] *************** Autotuning Reformat: Float(32768,1024,1024,1:4,512,8) -> Float(262144,8192,4096,128,64,1) *************** [04/18/2022-02:36:21] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956:0 copy (Reformat) [04/18/2022-02:36:21] [V] [TRT] Tactic: 1002 Time: 1.52832 [04/18/2022-02:36:21] [V] [TRT] Tactic: 0 Time: 0.652288 [04/18/2022-02:36:21] [V] [TRT] Fastest Tactic: 0 Time: 0.652288 [04/18/2022-02:36:21] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:21] [V] [TRT] *************** Autotuning Reformat: Float(32768,1024,1024,1:4,512,8) -> Float(262144,8192,4096,1,2048,32) *************** [04/18/2022-02:36:21] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956:0 copy (Reformat) [04/18/2022-02:36:21] [V] [TRT] Tactic: 1002 Time: 13.0712 [04/18/2022-02:36:21] [V] [TRT] Tactic: 0 Time: 0.56448 [04/18/2022-02:36:21] [V] [TRT] Fastest Tactic: 0 Time: 0.56448 [04/18/2022-02:36:21] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:21] [V] [TRT] *************** Autotuning Reformat: Float(32768,1024,1024,1:4,512,8) -> Float(65536,2048,1024,1:4,512,8) *************** [04/18/2022-02:36:21] [V] [TRT] --------------- Timing Runner: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956:0 copy (Reformat) [04/18/2022-02:36:22] [V] [TRT] Tactic: 1002 Time: 12.4948 [04/18/2022-02:36:22] [V] [TRT] Tactic: 0 Time: 0.572416 [04/18/2022-02:36:22] [V] [TRT] Fastest Tactic: 0 Time: 0.572416 [04/18/2022-02:36:22] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:22] [V] [TRT] *************** Autotuning Reformat: Float(32768,1024,1024,1:4,512,8) -> Float(8192,256,128,128:32,64,1) *************** [04/18/2022-02:36:22] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956:0 copy (Reformat) [04/18/2022-02:36:22] [V] [TRT] Tactic: 1002 Time: 13.1059 [04/18/2022-02:36:22] [V] [TRT] Tactic: 0 Time: 0.69696 [04/18/2022-02:36:22] [V] [TRT] Fastest Tactic: 0 Time: 0.69696 [04/18/2022-02:36:22] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:22] [V] [TRT] *************** Autotuning Reformat: Float(4096,128,128,128:32,64,1) -> Float(262144,8192,4096,128,64,1) *************** [04/18/2022-02:36:22] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956:0 copy (Reformat) [04/18/2022-02:36:22] [V] [TRT] Tactic: 1002 Time: 1.52998 [04/18/2022-02:36:22] [V] [TRT] Tactic: 0 Time: 0.610944 [04/18/2022-02:36:22] [V] [TRT] Fastest Tactic: 0 Time: 0.610944 [04/18/2022-02:36:22] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:22] [V] [TRT] *************** Autotuning Reformat: Float(4096,128,128,128:32,64,1) -> Float(262144,8192,4096,1,2048,32) *************** [04/18/2022-02:36:22] [V] [TRT] --------------- Timing Runner: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956:0 copy (Reformat) [04/18/2022-02:36:22] [V] [TRT] Tactic: 1002 Time: 12.4973 [04/18/2022-02:36:22] [V] [TRT] Tactic: 0 Time: 0.559616 [04/18/2022-02:36:22] [V] [TRT] Fastest Tactic: 0 Time: 0.559616 [04/18/2022-02:36:22] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:22] [V] [TRT] *************** Autotuning Reformat: Float(4096,128,128,128:32,64,1) -> Float(65536,2048,1024,1:4,512,8) *************** [04/18/2022-02:36:22] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956:0 copy (Reformat) [04/18/2022-02:36:22] [V] [TRT] Tactic: 1002 Time: 13.0807 [04/18/2022-02:36:22] [V] [TRT] Tactic: 0 Time: 0.568576 [04/18/2022-02:36:22] [V] [TRT] Fastest Tactic: 0 Time: 0.568576 [04/18/2022-02:36:22] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:22] [V] [TRT] *************** Autotuning Reformat: Float(4096,128,128,128:32,64,1) -> Float(8192,256,128,128:32,64,1) *************** [04/18/2022-02:36:22] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956:0 copy (Reformat) [04/18/2022-02:36:23] [V] [TRT] Tactic: 1002 Time: 12.4938 [04/18/2022-02:36:23] [V] [TRT] Tactic: 0 Time: 0.69824 [04/18/2022-02:36:23] [V] [TRT] Fastest Tactic: 0 Time: 0.69824 [04/18/2022-02:36:23] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:23] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:23] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,128,64,1) -> Float(262144,8192,4096,128,64,1) 
*************** [04/18/2022-02:36:23] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956:0 copy (Reformat) [04/18/2022-02:36:23] [V] [TRT] Tactic: 1002 Time: 5.47405 [04/18/2022-02:36:23] [V] [TRT] Tactic: 0 Time: 0.650368 [04/18/2022-02:36:23] [V] [TRT] Fastest Tactic: 0 Time: 0.650368 [04/18/2022-02:36:23] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:23] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,128,64,1) -> Float(262144,8192,4096,1,2048,32) *************** [04/18/2022-02:36:23] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956:0 copy (Reformat) [04/18/2022-02:36:23] [V] [TRT] Tactic: 1002 Time: 13.6996 [04/18/2022-02:36:23] [V] [TRT] Tactic: 0 Time: 0.561664 [04/18/2022-02:36:23] [V] [TRT] Fastest Tactic: 0 Time: 0.561664 [04/18/2022-02:36:23] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:23] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,128,64,1) -> Float(65536,2048,1024,1:4,512,8) *************** [04/18/2022-02:36:23] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956:0 copy (Reformat) [04/18/2022-02:36:23] [V] [TRT] Tactic: 1002 Time: 13.6993 [04/18/2022-02:36:23] [V] [TRT] Tactic: 0 Time: 0.562432 [04/18/2022-02:36:23] [V] [TRT] Fastest Tactic: 0 Time: 0.562432 [04/18/2022-02:36:23] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:23] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,128,64,1) -> Float(8192,256,128,128:32,64,1) 
*************** [04/18/2022-02:36:23] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956:0 copy (Reformat) [04/18/2022-02:36:23] [V] [TRT] Tactic: 1002 Time: 13.0872 [04/18/2022-02:36:23] [V] [TRT] Tactic: 0 Time: 0.559104 [04/18/2022-02:36:23] [V] [TRT] Fastest Tactic: 0 Time: 0.559104 [04/18/2022-02:36:23] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:23] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,1,2048,32) -> Float(262144,8192,4096,128,64,1) *************** [04/18/2022-02:36:23] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956:0 copy (Reformat) [04/18/2022-02:36:24] [V] [TRT] Tactic: 1002 Time: 2.07642 [04/18/2022-02:36:24] [V] [TRT] Tactic: 0 Time: 0.57792 [04/18/2022-02:36:24] [V] [TRT] Fastest Tactic: 0 Time: 0.57792 [04/18/2022-02:36:24] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:24] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,1,2048,32) -> Float(262144,8192,4096,1,2048,32) *************** [04/18/2022-02:36:24] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956:0 copy (Reformat) [04/18/2022-02:36:24] [V] [TRT] Tactic: 1002 Time: 12.476 [04/18/2022-02:36:24] [V] [TRT] Tactic: 0 Time: 0.549504 [04/18/2022-02:36:24] [V] [TRT] Fastest Tactic: 0 Time: 0.549504 [04/18/2022-02:36:24] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:24] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,1,2048,32) -> Float(65536,2048,1024,1:4,512,8) 
*************** [04/18/2022-02:36:24] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956:0 copy (Reformat) [04/18/2022-02:36:24] [V] [TRT] Tactic: 1002 Time: 13.062 [04/18/2022-02:36:24] [V] [TRT] Tactic: 0 Time: 0.549888 [04/18/2022-02:36:24] [V] [TRT] Fastest Tactic: 0 Time: 0.549888 [04/18/2022-02:36:24] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:24] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,1,2048,32) -> Float(8192,256,128,128:32,64,1) *************** [04/18/2022-02:36:24] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956:0 copy (Reformat) [04/18/2022-02:36:24] [V] [TRT] Tactic: 1002 Time: 13.0446 [04/18/2022-02:36:24] [V] [TRT] Tactic: 0 Time: 0.698496 [04/18/2022-02:36:24] [V] [TRT] Fastest Tactic: 0 Time: 0.698496 [04/18/2022-02:36:24] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:24] [V] [TRT] *************** Autotuning Reformat: Float(32768,1024,1024,1:4,512,8) -> Float(262144,8192,4096,128,64,1) *************** [04/18/2022-02:36:24] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956:0 copy (Reformat) [04/18/2022-02:36:24] [V] [TRT] Tactic: 1002 Time: 2.06566 [04/18/2022-02:36:24] [V] [TRT] Tactic: 0 Time: 0.568704 [04/18/2022-02:36:24] [V] [TRT] Fastest Tactic: 0 Time: 0.568704 [04/18/2022-02:36:24] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:24] [V] [TRT] *************** Autotuning Reformat: Float(32768,1024,1024,1:4,512,8) -> Float(262144,8192,4096,1,2048,32) 
*************** [04/18/2022-02:36:24] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956:0 copy (Reformat) [04/18/2022-02:36:25] [V] [TRT] Tactic: 1002 Time: 13.0876 [04/18/2022-02:36:25] [V] [TRT] Tactic: 0 Time: 0.550784 [04/18/2022-02:36:25] [V] [TRT] Fastest Tactic: 0 Time: 0.550784 [04/18/2022-02:36:25] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:25] [V] [TRT] *************** Autotuning Reformat: Float(32768,1024,1024,1:4,512,8) -> Float(65536,2048,1024,1:4,512,8) *************** [04/18/2022-02:36:25] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956:0 copy (Reformat) [04/18/2022-02:36:25] [V] [TRT] Tactic: 1002 Time: 12.508 [04/18/2022-02:36:25] [V] [TRT] Tactic: 0 Time: 0.567936 [04/18/2022-02:36:25] [V] [TRT] Fastest Tactic: 0 Time: 0.567936 [04/18/2022-02:36:25] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:25] [V] [TRT] *************** Autotuning Reformat: Float(32768,1024,1024,1:4,512,8) -> Float(8192,256,128,128:32,64,1) *************** [04/18/2022-02:36:25] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956:0 copy (Reformat) [04/18/2022-02:36:25] [V] [TRT] Tactic: 1002 Time: 13.0406 [04/18/2022-02:36:25] [V] [TRT] Tactic: 0 Time: 0.762752 [04/18/2022-02:36:25] [V] [TRT] Fastest Tactic: 0 Time: 0.762752 [04/18/2022-02:36:25] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:25] [V] [TRT] *************** Autotuning Reformat: Float(4096,128,128,128:32,64,1) -> Float(262144,8192,4096,128,64,1) 
*************** [04/18/2022-02:36:25] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956:0 copy (Reformat) [04/18/2022-02:36:25] [V] [TRT] Tactic: 1002 Time: 2.14733 [04/18/2022-02:36:25] [V] [TRT] Tactic: 0 Time: 0.55936 [04/18/2022-02:36:25] [V] [TRT] Fastest Tactic: 0 Time: 0.55936 [04/18/2022-02:36:25] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:25] [V] [TRT] *************** Autotuning Reformat: Float(4096,128,128,128:32,64,1) -> Float(262144,8192,4096,1,2048,32) *************** [04/18/2022-02:36:25] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956:0 copy (Reformat) [04/18/2022-02:36:25] [V] [TRT] Tactic: 1002 Time: 12.4968 [04/18/2022-02:36:25] [V] [TRT] Tactic: 0 Time: 0.550656 [04/18/2022-02:36:25] [V] [TRT] Fastest Tactic: 0 Time: 0.550656 [04/18/2022-02:36:25] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:25] [V] [TRT] *************** Autotuning Reformat: Float(4096,128,128,128:32,64,1) -> Float(65536,2048,1024,1:4,512,8) *************** [04/18/2022-02:36:25] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956:0 copy (Reformat) [04/18/2022-02:36:26] [V] [TRT] Tactic: 1002 Time: 13.0845 [04/18/2022-02:36:26] [V] [TRT] Tactic: 0 Time: 0.550016 [04/18/2022-02:36:26] [V] [TRT] Fastest Tactic: 0 Time: 0.550016 [04/18/2022-02:36:26] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:26] [V] [TRT] *************** Autotuning Reformat: Float(4096,128,128,128:32,64,1) -> Float(8192,256,128,128:32,64,1) 
*************** [04/18/2022-02:36:26] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__956:0 copy (Reformat) [04/18/2022-02:36:26] [V] [TRT] Tactic: 1002 Time: 12.4957 [04/18/2022-02:36:26] [V] [TRT] Tactic: 0 Time: 0.698112 [04/18/2022-02:36:26] [V] [TRT] Fastest Tactic: 0 Time: 0.698112 [04/18/2022-02:36:26] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:26] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:26] [V] [TRT] *************** Autotuning Reformat: Float(262144,8192,4096,128,64,1) -> Float(262144,8192,4096,1,2048,32) *************** [04/18/2022-02:36:26] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__957:0 -> ) (Reformat) [04/18/2022-02:36:26] [V] [TRT] Tactic: 1002 Time: 1.66208 [04/18/2022-02:36:26] [V] [TRT] Tactic: 0 Time: 1.79354 [04/18/2022-02:36:26] [V] [TRT] Fastest Tactic: 1002 Time: 1.66208 [04/18/2022-02:36:26] [V] [TRT] *************** Autotuning Reformat: Float(262144,8192,4096,128,64,1) -> Float(65536,2048,1024,1:4,512,8) *************** [04/18/2022-02:36:26] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__957:0 -> ) (Reformat) [04/18/2022-02:36:26] [V] [TRT] Tactic: 1002 Time: 1.15098 [04/18/2022-02:36:26] [V] [TRT] Tactic: 0 Time: 1.73875 [04/18/2022-02:36:26] [V] [TRT] Fastest Tactic: 1002 Time: 1.15098 [04/18/2022-02:36:26] [V] [TRT] *************** Autotuning Reformat: Float(262144,8192,4096,128,64,1) -> Float(8192,256,128,128:32,64,1) *************** [04/18/2022-02:36:26] [V] [TRT] 
--------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__957:0 -> ) (Reformat) [04/18/2022-02:36:26] [V] [TRT] Tactic: 1002 Time: 1.19898 [04/18/2022-02:36:26] [V] [TRT] Tactic: 0 Time: 1.86138 [04/18/2022-02:36:26] [V] [TRT] Fastest Tactic: 1002 Time: 1.19898 [04/18/2022-02:36:26] [V] [TRT] *************** Autotuning Reformat: Float(262144,8192,4096,1,2048,32) -> Float(262144,8192,4096,128,64,1) *************** [04/18/2022-02:36:26] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__957:0 -> ) (Reformat) [04/18/2022-02:36:26] [V] [TRT] Tactic: 1002 Time: 1.13139 [04/18/2022-02:36:26] [V] [TRT] Tactic: 0 Time: 1.23277 [04/18/2022-02:36:26] [V] [TRT] Fastest Tactic: 1002 Time: 1.13139 [04/18/2022-02:36:26] [V] [TRT] *************** Autotuning Reformat: Float(262144,8192,4096,1,2048,32) -> Float(65536,2048,1024,1:4,512,8) *************** [04/18/2022-02:36:26] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__957:0 -> ) (Reformat) [04/18/2022-02:36:26] [V] [TRT] Tactic: 1002 Time: 2.44211 [04/18/2022-02:36:26] [V] [TRT] Tactic: 0 Time: 1.19501 [04/18/2022-02:36:26] [V] [TRT] Fastest Tactic: 0 Time: 1.19501 [04/18/2022-02:36:26] [V] [TRT] *************** Autotuning Reformat: Float(262144,8192,4096,1,2048,32) -> Float(8192,256,128,128:32,64,1) *************** [04/18/2022-02:36:26] [V] [TRT] --------------- Timing Runner: Optimizer 
Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__957:0 -> ) (Reformat) [04/18/2022-02:36:26] [V] [TRT] Tactic: 1002 Time: 2.44403 [04/18/2022-02:36:26] [V] [TRT] Tactic: 0 Time: 1.44512 [04/18/2022-02:36:26] [V] [TRT] Fastest Tactic: 0 Time: 1.44512 [04/18/2022-02:36:26] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1024,1:4,512,8) -> Float(262144,8192,4096,128,64,1) *************** [04/18/2022-02:36:26] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__957:0 -> ) (Reformat) [04/18/2022-02:36:26] [V] [TRT] Tactic: 1002 Time: 1.22278 [04/18/2022-02:36:26] [V] [TRT] Tactic: 0 Time: 1.79968 [04/18/2022-02:36:26] [V] [TRT] Fastest Tactic: 1002 Time: 1.22278 [04/18/2022-02:36:26] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1024,1:4,512,8) -> Float(262144,8192,4096,1,2048,32) *************** [04/18/2022-02:36:26] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__957:0 -> ) (Reformat) [04/18/2022-02:36:26] [V] [TRT] Tactic: 1002 Time: 1.76538 [04/18/2022-02:36:26] [V] [TRT] Tactic: 0 Time: 1.20166 [04/18/2022-02:36:26] [V] [TRT] Fastest Tactic: 0 Time: 1.20166 [04/18/2022-02:36:26] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1024,1:4,512,8) -> Float(8192,256,128,128:32,64,1) *************** [04/18/2022-02:36:26] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__957:0 -> ) (Reformat) [04/18/2022-02:36:27] [V] 
[TRT] Tactic: 1002 Time: 1.18835 [04/18/2022-02:36:27] [V] [TRT] Tactic: 0 Time: 1.43706 [04/18/2022-02:36:27] [V] [TRT] Fastest Tactic: 1002 Time: 1.18835 [04/18/2022-02:36:27] [V] [TRT] *************** Autotuning Reformat: Float(8192,256,128,128:32,64,1) -> Float(262144,8192,4096,128,64,1) *************** [04/18/2022-02:36:27] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__957:0 -> ) (Reformat) [04/18/2022-02:36:27] [V] [TRT] Tactic: 1002 Time: 1.21434 [04/18/2022-02:36:27] [V] [TRT] Tactic: 0 Time: 1.14086 [04/18/2022-02:36:27] [V] [TRT] Fastest Tactic: 0 Time: 1.14086 [04/18/2022-02:36:27] [V] [TRT] *************** Autotuning Reformat: Float(8192,256,128,128:32,64,1) -> Float(262144,8192,4096,1,2048,32) *************** [04/18/2022-02:36:27] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__957:0 -> ) (Reformat) [04/18/2022-02:36:27] [V] [TRT] Tactic: 1002 Time: 1.12845 [04/18/2022-02:36:27] [V] [TRT] Tactic: 0 Time: 1.89274 [04/18/2022-02:36:27] [V] [TRT] Fastest Tactic: 1002 Time: 1.12845 [04/18/2022-02:36:27] [V] [TRT] *************** Autotuning Reformat: Float(8192,256,128,128:32,64,1) -> Float(65536,2048,1024,1:4,512,8) *************** [04/18/2022-02:36:27] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/input_1_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Concat__957:0 -> ) (Reformat) [04/18/2022-02:36:27] [V] [TRT] Tactic: 1002 Time: 1.13242 [04/18/2022-02:36:27] [V] [TRT] Tactic: 0 Time: 1.20781 [04/18/2022-02:36:27] [V] [TRT] Fastest Tactic: 1002 Time: 1.13242 [04/18/2022-02:36:27] [V] [TRT] =============== Computing 
reformatting costs [04/18/2022-02:36:27] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1,1) -> Float(524288,8192,128,2,1) *************** [04/18/2022-02:36:27] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/stack_Unsqueeze__971:0 copy (Reformat) [04/18/2022-02:36:27] [V] [TRT] Tactic: 1002 Time: 12.1736 [04/18/2022-02:36:27] [V] [TRT] Tactic: 0 Time: 1.68307 [04/18/2022-02:36:27] [V] [TRT] Fastest Tactic: 0 Time: 1.68307 [04/18/2022-02:36:27] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:27] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,1,64,64) -> Float(524288,8192,128,2,1) *************** [04/18/2022-02:36:27] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/stack_Unsqueeze__971:0 copy (Reformat) [04/18/2022-02:36:27] [V] [TRT] Tactic: 1002 Time: 2.41216 [04/18/2022-02:36:27] [V] [TRT] Tactic: 0 Time: 1.80954 [04/18/2022-02:36:27] [V] [TRT] Fastest Tactic: 0 Time: 1.80954 [04/18/2022-02:36:27] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:27] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,1:4,16,16) -> Float(524288,8192,128,2,1) *************** [04/18/2022-02:36:27] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/stack_Unsqueeze__971:0 copy (Reformat) [04/18/2022-02:36:27] [V] [TRT] Tactic: 1002 Time: 2.2729 [04/18/2022-02:36:27] [V] [TRT] Tactic: 0 Time: 1.79443 [04/18/2022-02:36:27] [V] [TRT] Fastest Tactic: 0 Time: 1.79443 [04/18/2022-02:36:27] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:27] [V] [TRT] *************** Autotuning Reformat: Float(8192,128,64:32,1,1) -> Float(524288,8192,128,2,1) *************** [04/18/2022-02:36:27] [V] [TRT] --------------- Timing Runner: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/stack_Unsqueeze__971:0 copy (Reformat) [04/18/2022-02:36:27] [V] [TRT] Tactic: 1002 Time: 2.50125 [04/18/2022-02:36:27] [V] [TRT] Tactic: 0 Time: 1.82259 [04/18/2022-02:36:27] [V] [TRT] Fastest Tactic: 0 Time: 1.82259 [04/18/2022-02:36:27] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:27] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:27] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1,1) -> Float(524288,8192,128,2,1) *************** [04/18/2022-02:36:27] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/stack_Unsqueeze__972:0 copy (Reformat) [04/18/2022-02:36:28] [V] [TRT] Tactic: 1002 Time: 12.1637 [04/18/2022-02:36:28] [V] [TRT] Tactic: 0 Time: 1.76422 [04/18/2022-02:36:28] [V] [TRT] Fastest Tactic: 0 Time: 1.76422 [04/18/2022-02:36:28] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:28] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,1,64,64) -> Float(524288,8192,128,2,1) *************** [04/18/2022-02:36:28] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/stack_Unsqueeze__972:0 copy (Reformat) [04/18/2022-02:36:28] [V] [TRT] Tactic: 1002 Time: 2.89062 [04/18/2022-02:36:28] [V] [TRT] Tactic: 0 Time: 1.82592 [04/18/2022-02:36:28] [V] [TRT] Fastest Tactic: 0 Time: 1.82592 [04/18/2022-02:36:28] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:28] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,1:4,16,16) -> Float(524288,8192,128,2,1) *************** [04/18/2022-02:36:28] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/stack_Unsqueeze__972:0 copy (Reformat) [04/18/2022-02:36:28] [V] [TRT] Tactic: 1002 Time: 1.81786 
[04/18/2022-02:36:28] [V] [TRT] Tactic: 0 Time: 1.83322 [04/18/2022-02:36:28] [V] [TRT] Fastest Tactic: 1002 Time: 1.81786 [04/18/2022-02:36:28] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002 [04/18/2022-02:36:28] [V] [TRT] *************** Autotuning Reformat: Float(8192,128,64:32,1,1) -> Float(524288,8192,128,2,1) *************** [04/18/2022-02:36:28] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/stack_Unsqueeze__972:0 copy (Reformat) [04/18/2022-02:36:28] [V] [TRT] Tactic: 1002 Time: 2.47398 [04/18/2022-02:36:28] [V] [TRT] Tactic: 0 Time: 1.83885 [04/18/2022-02:36:28] [V] [TRT] Fastest Tactic: 0 Time: 1.83885 [04/18/2022-02:36:28] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:28] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:28] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:28] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:28] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1,1) -> Float(262144,4096,1,64,64) *************** [04/18/2022-02:36:28] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul:0) (Reformat) [04/18/2022-02:36:28] [V] [TRT] Tactic: 1002 Time: 1.17184 [04/18/2022-02:36:28] [V] [TRT] Tactic: 0 Time: 1.24326 [04/18/2022-02:36:28] [V] [TRT] Fastest Tactic: 1002 Time: 1.17184 [04/18/2022-02:36:28] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1,1) -> Float(65536,1024,1:4,16,16) *************** [04/18/2022-02:36:28] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul:0) (Reformat) [04/18/2022-02:36:28] [V] [TRT] Tactic: 1002 Time: 1.16902 [04/18/2022-02:36:28] [V] [TRT] Tactic: 0 Time: 1.18413 [04/18/2022-02:36:28] [V] [TRT] 
Fastest Tactic: 1002 Time: 1.16902 [04/18/2022-02:36:28] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1,1) -> Float(8192,128,64:32,1,1) *************** [04/18/2022-02:36:28] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul:0) (Reformat) [04/18/2022-02:36:28] [V] [TRT] Tactic: 1002 Time: 1.73773 [04/18/2022-02:36:28] [V] [TRT] Tactic: 0 Time: 1.1479 [04/18/2022-02:36:28] [V] [TRT] Fastest Tactic: 0 Time: 1.1479 [04/18/2022-02:36:28] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:28] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1,1) -> Float(262144,4096,1,64,64) *************** [04/18/2022-02:36:28] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul:0 -> ) (Reformat) [04/18/2022-02:36:28] [V] [TRT] Tactic: 1002 Time: 1.19002 [04/18/2022-02:36:28] [V] [TRT] Tactic: 0 Time: 1.13587 [04/18/2022-02:36:28] [V] [TRT] Fastest Tactic: 0 Time: 1.13587 [04/18/2022-02:36:28] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1,1) -> Float(65536,1024,1:4,16,16) *************** [04/18/2022-02:36:28] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul:0 -> ) (Reformat) [04/18/2022-02:36:28] [V] [TRT] Tactic: 1002 Time: 1.10106 [04/18/2022-02:36:28] [V] [TRT] Tactic: 0 Time: 1.19565 [04/18/2022-02:36:28] [V] [TRT] Fastest Tactic: 1002 Time: 1.10106 [04/18/2022-02:36:28] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1,1) -> Float(8192,128,64:32,1,1) *************** [04/18/2022-02:36:28] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul:0 -> ) (Reformat) [04/18/2022-02:36:28] [V] [TRT] Tactic: 1002 Time: 1.08941 
[04/18/2022-02:36:28] [V] [TRT] Tactic: 0 Time: 2.22797 [04/18/2022-02:36:28] [V] [TRT] Fastest Tactic: 1002 Time: 1.08941 [04/18/2022-02:36:28] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,1,64,64) -> Float(262144,4096,64,1,1) *************** [04/18/2022-02:36:28] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul:0 -> ) (Reformat) [04/18/2022-02:36:28] [V] [TRT] Tactic: 1002 Time: 1.18182 [04/18/2022-02:36:28] [V] [TRT] Tactic: 0 Time: 1.20269 [04/18/2022-02:36:28] [V] [TRT] Fastest Tactic: 1002 Time: 1.18182 [04/18/2022-02:36:28] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,1,64,64) -> Float(65536,1024,1:4,16,16) *************** [04/18/2022-02:36:28] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul:0 -> ) (Reformat) [04/18/2022-02:36:28] [V] [TRT] Tactic: 1002 Time: 1.70534 [04/18/2022-02:36:28] [V] [TRT] Tactic: 0 Time: 1.85997 [04/18/2022-02:36:28] [V] [TRT] Fastest Tactic: 1002 Time: 1.70534 [04/18/2022-02:36:28] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,1,64,64) -> Float(8192,128,64:32,1,1) *************** [04/18/2022-02:36:28] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul:0 -> ) (Reformat) [04/18/2022-02:36:28] [V] [TRT] Tactic: 1002 Time: 1.17005 [04/18/2022-02:36:29] [V] [TRT] Tactic: 0 Time: 1.83514 [04/18/2022-02:36:29] [V] [TRT] Fastest Tactic: 1002 Time: 1.17005 [04/18/2022-02:36:29] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,1:4,16,16) -> Float(262144,4096,64,1,1) *************** [04/18/2022-02:36:29] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul:0 -> ) (Reformat) [04/18/2022-02:36:29] [V] [TRT] Tactic: 
1002 Time: 1.16813 [04/18/2022-02:36:29] [V] [TRT] Tactic: 0 Time: 1.22022 [04/18/2022-02:36:29] [V] [TRT] Fastest Tactic: 1002 Time: 1.16813 [04/18/2022-02:36:29] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,1:4,16,16) -> Float(262144,4096,1,64,64) *************** [04/18/2022-02:36:29] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul:0 -> ) (Reformat) [04/18/2022-02:36:29] [V] [TRT] Tactic: 1002 Time: 1.08813 [04/18/2022-02:36:29] [V] [TRT] Tactic: 0 Time: 1.13088 [04/18/2022-02:36:29] [V] [TRT] Fastest Tactic: 1002 Time: 1.08813 [04/18/2022-02:36:29] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,1:4,16,16) -> Float(8192,128,64:32,1,1) *************** [04/18/2022-02:36:29] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul:0 -> ) (Reformat) [04/18/2022-02:36:29] [V] [TRT] Tactic: 1002 Time: 1.16851 [04/18/2022-02:36:29] [V] [TRT] Tactic: 0 Time: 1.50374 [04/18/2022-02:36:29] [V] [TRT] Fastest Tactic: 1002 Time: 1.16851 [04/18/2022-02:36:29] [V] [TRT] *************** Autotuning Reformat: Float(8192,128,64:32,1,1) -> Float(262144,4096,64,1,1) *************** [04/18/2022-02:36:29] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul:0 -> ) (Reformat) [04/18/2022-02:36:29] [V] [TRT] Tactic: 1002 Time: 1.07917 [04/18/2022-02:36:29] [V] [TRT] Tactic: 0 Time: 1.20691 [04/18/2022-02:36:29] [V] [TRT] Fastest Tactic: 1002 Time: 1.07917 [04/18/2022-02:36:29] [V] [TRT] *************** Autotuning Reformat: Float(8192,128,64:32,1,1) -> Float(262144,4096,1,64,64) *************** [04/18/2022-02:36:29] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul:0 -> ) (Reformat) [04/18/2022-02:36:29] [V] 
[TRT] Tactic: 1002 Time: 1.58182 [04/18/2022-02:36:29] [V] [TRT] Tactic: 0 Time: 1.21293 [04/18/2022-02:36:29] [V] [TRT] Fastest Tactic: 0 Time: 1.21293 [04/18/2022-02:36:29] [V] [TRT] *************** Autotuning Reformat: Float(8192,128,64:32,1,1) -> Float(65536,1024,1:4,16,16) *************** [04/18/2022-02:36:29] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/MatMul:0 -> ) (Reformat) [04/18/2022-02:36:29] [V] [TRT] Tactic: 1002 Time: 1.1785 [04/18/2022-02:36:29] [V] [TRT] Tactic: 0 Time: 1.20307 [04/18/2022-02:36:29] [V] [TRT] Fastest Tactic: 1002 Time: 1.1785 [04/18/2022-02:36:29] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:29] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(262144,1,4096,64) *************** [04/18/2022-02:36:29] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/Squeeze:0 -> ) (Reformat) [04/18/2022-02:36:29] [V] [TRT] Tactic: 1002 Time: 1.18067 [04/18/2022-02:36:29] [V] [TRT] Tactic: 0 Time: 1.2288 [04/18/2022-02:36:29] [V] [TRT] Fastest Tactic: 1002 Time: 1.18067 [04/18/2022-02:36:29] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:36:29] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/Squeeze:0 -> ) (Reformat) [04/18/2022-02:36:29] [V] [TRT] Tactic: 1002 Time: 1.1639 [04/18/2022-02:36:29] [V] [TRT] Tactic: 0 Time: 1.11782 [04/18/2022-02:36:29] [V] [TRT] Fastest Tactic: 0 Time: 1.11782 [04/18/2022-02:36:29] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(8192,4096:32,64,1) *************** [04/18/2022-02:36:29] [V] [TRT] --------------- Timing Runner: Optimizer 
Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/Squeeze:0 -> ) (Reformat) [04/18/2022-02:36:29] [V] [TRT] Tactic: 1002 Time: 1.08006 [04/18/2022-02:36:29] [V] [TRT] Tactic: 0 Time: 1.34925 [04/18/2022-02:36:29] [V] [TRT] Fastest Tactic: 1002 Time: 1.08006 [04/18/2022-02:36:29] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(1:4,8192,128,2) *************** [04/18/2022-02:36:29] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/Squeeze:0 -> ) (Reformat) [04/18/2022-02:36:29] [V] [TRT] Tactic: 1002 Time: 1.2407 [04/18/2022-02:36:29] [V] [TRT] Tactic: 0 Time: 1.2023 [04/18/2022-02:36:29] [V] [TRT] Fastest Tactic: 0 Time: 1.2023 [04/18/2022-02:36:29] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(262144,4096,64,1) *************** [04/18/2022-02:36:29] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/Squeeze:0 -> ) (Reformat) [04/18/2022-02:36:29] [V] [TRT] Tactic: 1002 Time: 1.15302 [04/18/2022-02:36:29] [V] [TRT] Tactic: 0 Time: 1.57645 [04/18/2022-02:36:29] [V] [TRT] Fastest Tactic: 1002 Time: 1.15302 [04/18/2022-02:36:29] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:36:29] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/Squeeze:0 -> ) (Reformat) [04/18/2022-02:36:29] [V] [TRT] Tactic: 1002 Time: 1.0857 [04/18/2022-02:36:29] [V] [TRT] Tactic: 0 Time: 1.51206 [04/18/2022-02:36:29] [V] [TRT] Fastest Tactic: 1002 Time: 1.0857 [04/18/2022-02:36:29] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(8192,4096:32,64,1) *************** [04/18/2022-02:36:29] [V] [TRT] --------------- Timing Runner: Optimizer 
Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/Squeeze:0 -> ) (Reformat) [04/18/2022-02:36:29] [V] [TRT] Tactic: 1002 Time: 1.16723 [04/18/2022-02:36:29] [V] [TRT] Tactic: 0 Time: 1.58502 [04/18/2022-02:36:29] [V] [TRT] Fastest Tactic: 1002 Time: 1.16723 [04/18/2022-02:36:29] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(1:4,8192,128,2) *************** [04/18/2022-02:36:29] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,4096,64,1) *************** [04/18/2022-02:36:29] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/Squeeze:0 -> ) (Reformat) [04/18/2022-02:36:29] [V] [TRT] Tactic: 1002 Time: 1.0665 [04/18/2022-02:36:29] [V] [TRT] Tactic: 0 Time: 1.56954 [04/18/2022-02:36:29] [V] [TRT] Fastest Tactic: 1002 Time: 1.0665 [04/18/2022-02:36:29] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,1,4096,64) *************** [04/18/2022-02:36:29] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/Squeeze:0 -> ) (Reformat) [04/18/2022-02:36:29] [V] [TRT] Tactic: 1002 Time: 1.17773 [04/18/2022-02:36:30] [V] [TRT] Tactic: 0 Time: 1.12525 [04/18/2022-02:36:30] [V] [TRT] Fastest Tactic: 0 Time: 1.12525 [04/18/2022-02:36:30] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(8192,4096:32,64,1) *************** [04/18/2022-02:36:30] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/Squeeze:0 -> ) (Reformat) [04/18/2022-02:36:30] [V] [TRT] Tactic: 1002 Time: 1.79968 [04/18/2022-02:36:30] [V] [TRT] Tactic: 0 Time: 1.91411 [04/18/2022-02:36:30] [V] [TRT] Fastest Tactic: 1002 Time: 1.79968 [04/18/2022-02:36:30] [V] [TRT] *************** Autotuning Reformat: 
Float(65536,1:4,1024,16) -> Float(1:4,8192,128,2) *************** [04/18/2022-02:36:30] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(262144,4096,64,1) *************** [04/18/2022-02:36:30] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/Squeeze:0 -> ) (Reformat) [04/18/2022-02:36:30] [V] [TRT] Tactic: 1002 Time: 1.16109 [04/18/2022-02:36:30] [V] [TRT] Tactic: 0 Time: 1.12832 [04/18/2022-02:36:30] [V] [TRT] Fastest Tactic: 0 Time: 1.12832 [04/18/2022-02:36:30] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(262144,1,4096,64) *************** [04/18/2022-02:36:30] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/Squeeze:0 -> ) (Reformat) [04/18/2022-02:36:30] [V] [TRT] Tactic: 1002 Time: 1.09978 [04/18/2022-02:36:30] [V] [TRT] Tactic: 0 Time: 1.2256 [04/18/2022-02:36:30] [V] [TRT] Fastest Tactic: 1002 Time: 1.09978 [04/18/2022-02:36:30] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:36:30] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/Squeeze:0 -> ) (Reformat) [04/18/2022-02:36:30] [V] [TRT] Tactic: 1002 Time: 1.85958 [04/18/2022-02:36:30] [V] [TRT] Tactic: 0 Time: 1.21139 [04/18/2022-02:36:30] [V] [TRT] Fastest Tactic: 0 Time: 1.21139 [04/18/2022-02:36:30] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(1:4,8192,128,2) *************** [04/18/2022-02:36:30] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(262144,4096,64,1) *************** [04/18/2022-02:36:30] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/combine/Squeeze:0 -> ) 
(Reformat) [04/18/2022-02:36:30] [V] [TRT] Tactic: 1002 Time: 1.72211 [04/18/2022-02:36:30] [V] [TRT] Tactic: 0 Time: 5.50682 [04/18/2022-02:36:30] [V] [TRT] Fastest Tactic: 1002 Time: 1.72211 [04/18/2022-02:36:30] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(262144,1,4096,64) *************** [04/18/2022-02:36:30] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:36:30] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(8192,4096:32,64,1) *************** [04/18/2022-02:36:30] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:30] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(262144,1,4096,64) *************** [04/18/2022-02:36:30] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:36:30] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(8192,4096:32,64,1) *************** [04/18/2022-02:36:30] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(262144,4096,64,1) *************** [04/18/2022-02:36:30] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:36:30] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(8192,4096:32,64,1) *************** [04/18/2022-02:36:30] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,4096,64,1) *************** [04/18/2022-02:36:30] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,1,4096,64) *************** [04/18/2022-02:36:30] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(8192,4096:32,64,1) *************** [04/18/2022-02:36:30] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) 
-> Float(262144,4096,64,1) *************** [04/18/2022-02:36:30] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(262144,1,4096,64) *************** [04/18/2022-02:36:30] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:36:30] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(262144,4096,64,1) *************** [04/18/2022-02:36:30] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(262144,1,4096,64) *************** [04/18/2022-02:36:30] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:36:30] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(8192,4096:32,64,1) *************** [04/18/2022-02:36:30] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:30] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(262144,1,4096,64) *************** [04/18/2022-02:36:30] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:36:30] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(262144,4096,64,1) *************** [04/18/2022-02:36:30] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:36:30] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,4096,64,1) *************** [04/18/2022-02:36:30] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,1,4096,64) *************** [04/18/2022-02:36:30] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(262144,4096,64,1) *************** [04/18/2022-02:36:30] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> 
Float(262144,1,4096,64) *************** [04/18/2022-02:36:30] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:36:30] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:30] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(262144,1,4096,64) *************** [04/18/2022-02:36:30] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:36:30] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(262144,4096,64,1) *************** [04/18/2022-02:36:30] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:36:30] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,4096,64,1) *************** [04/18/2022-02:36:30] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,1,4096,64) *************** [04/18/2022-02:36:30] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:30] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(262144,1,4096,64) *************** [04/18/2022-02:36:30] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3:0) (Reformat) [04/18/2022-02:36:30] [V] [TRT] Tactic: 1002 Time: 1.15571 [04/18/2022-02:36:30] [V] [TRT] Tactic: 0 Time: 1.78074 [04/18/2022-02:36:30] [V] [TRT] Fastest Tactic: 1002 Time: 1.15571 [04/18/2022-02:36:30] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:36:30] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3:0) (Reformat) [04/18/2022-02:36:30] [V] [TRT] Tactic: 1002 Time: 1.16813 [04/18/2022-02:36:30] [V] [TRT] Tactic: 0 Time: 1.20589 [04/18/2022-02:36:30] [V] [TRT] Fastest Tactic: 1002 Time: 1.16813 [04/18/2022-02:36:30] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(8192,4096:32,64,1) *************** [04/18/2022-02:36:30] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3:0) (Reformat) [04/18/2022-02:36:30] [V] [TRT] Tactic: 1002 Time: 1.08646 [04/18/2022-02:36:30] [V] [TRT] Tactic: 0 Time: 1.35194 [04/18/2022-02:36:30] [V] [TRT] Fastest Tactic: 1002 Time: 1.08646 [04/18/2022-02:36:30] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(262144,4096,64,1) *************** [04/18/2022-02:36:30] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3:0) (Reformat) [04/18/2022-02:36:30] [V] [TRT] Tactic: 1002 Time: 1.16723 [04/18/2022-02:36:30] [V] [TRT] Tactic: 0 Time: 1.54509 [04/18/2022-02:36:30] [V] [TRT] Fastest Tactic: 1002 Time: 1.16723 [04/18/2022-02:36:30] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:36:30] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3:0) (Reformat) [04/18/2022-02:36:30] [V] [TRT] Tactic: 1002 Time: 1.18374 [04/18/2022-02:36:30] [V] [TRT] Tactic: 0 Time: 1.19168 [04/18/2022-02:36:30] [V] [TRT] Fastest Tactic: 1002 Time: 1.18374 [04/18/2022-02:36:30] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(8192,4096:32,64,1) 
*************** [04/18/2022-02:36:30] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3:0) (Reformat) [04/18/2022-02:36:30] [V] [TRT] Tactic: 1002 Time: 1.08314 [04/18/2022-02:36:30] [V] [TRT] Tactic: 0 Time: 1.55341 [04/18/2022-02:36:30] [V] [TRT] Fastest Tactic: 1002 Time: 1.08314 [04/18/2022-02:36:30] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,4096,64,1) *************** [04/18/2022-02:36:30] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3:0) (Reformat) [04/18/2022-02:36:30] [V] [TRT] Tactic: 1002 Time: 1.08006 [04/18/2022-02:36:30] [V] [TRT] Tactic: 0 Time: 1.19322 [04/18/2022-02:36:30] [V] [TRT] Fastest Tactic: 1002 Time: 1.08006 [04/18/2022-02:36:30] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,1,4096,64) *************** [04/18/2022-02:36:30] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3:0) (Reformat) [04/18/2022-02:36:31] [V] [TRT] Tactic: 1002 Time: 1.7568 [04/18/2022-02:36:31] [V] [TRT] Tactic: 0 Time: 1.21946 [04/18/2022-02:36:31] [V] [TRT] Fastest Tactic: 0 Time: 1.21946 [04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(8192,4096:32,64,1) *************** [04/18/2022-02:36:31] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_05/1_dn_lvl_3/post_combine/batchnorm/FusedBatchNormV3:0) (Reformat) [04/18/2022-02:36:31] [V] [TRT] Tactic: 1002 Time: 1.17222 [04/18/2022-02:36:31] [V] [TRT] Tactic: 0 Time: 1.58989 [04/18/2022-02:36:31] [V] [TRT] Fastest Tactic: 1002 Time: 1.17222 [04/18/2022-02:36:31] [V] 
[TRT] =============== Computing reformatting costs [04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(262144,4096,64,1) *************** [04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,4096,64,1) *************** [04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(262144,4096,64,1) *************** [04/18/2022-02:36:31] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(65536,1,2048,64) *************** [04/18/2022-02:36:31] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_1_dn_lvl_3/downsample_max_x2/MaxPool:0) (Reformat) [04/18/2022-02:36:31] [V] [TRT] Tactic: 1002 Time: 0.092288 [04/18/2022-02:36:31] [V] [TRT] Tactic: 0 Time: 0.10176 [04/18/2022-02:36:31] [V] [TRT] Fastest Tactic: 1002 Time: 0.092288 [04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:36:31] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_1_dn_lvl_3/downsample_max_x2/MaxPool:0) (Reformat) [04/18/2022-02:36:31] [V] [TRT] Tactic: 1002 Time: 0.09024 [04/18/2022-02:36:31] [V] [TRT] Tactic: 0 Time: 0.102272 [04/18/2022-02:36:31] [V] [TRT] Fastest Tactic: 1002 Time: 0.09024 [04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(2048,1024:32,32,1) *************** [04/18/2022-02:36:31] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/input_1_dn_lvl_3/downsample_max_x2/MaxPool:0) (Reformat) [04/18/2022-02:36:31] [V] [TRT] Tactic: 1002 Time: 0.087936 
[04/18/2022-02:36:31] [V] [TRT] Tactic: 0 Time: 0.285568
[04/18/2022-02:36:31] [V] [TRT] Fastest Tactic: 1002 Time: 0.087936
[04/18/2022-02:36:31] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(8192,4096:32,64,1) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(8192,4096:32,64,1) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(8192,4096:32,64,1) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:36:31] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(2048,1024:32,32,1) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(2048,1024:32,32,1) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(2048,1024:32,32,1) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:36:31] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(196608,6144,192,3,1) ***************
[04/18/2022-02:36:31] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Unsqueeze__982:0 copy (Reformat)
[04/18/2022-02:36:31] [V] [TRT] Tactic: 1002 Time: 2.85517
[04/18/2022-02:36:31] [V] [TRT] Tactic: 0 Time: 0.564736
[04/18/2022-02:36:31] [V] [TRT] Fastest Tactic: 0 Time: 0.564736
[04/18/2022-02:36:31] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,32,32) -> Float(196608,6144,192,3,1) ***************
[04/18/2022-02:36:31] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Unsqueeze__982:0 copy (Reformat)
[04/18/2022-02:36:31] [V] [TRT] Tactic: 1002 Time: 0.696576
[04/18/2022-02:36:31] [V] [TRT] Tactic: 0 Time: 0.644864
[04/18/2022-02:36:31] [V] [TRT] Fastest Tactic: 0 Time: 0.644864
[04/18/2022-02:36:31] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,8,8) -> Float(196608,6144,192,3,1) ***************
[04/18/2022-02:36:31] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Unsqueeze__982:0 copy (Reformat)
[04/18/2022-02:36:31] [V] [TRT] Tactic: 1002 Time: 0.697472
[04/18/2022-02:36:31] [V] [TRT] Tactic: 0 Time: 1.77741
[04/18/2022-02:36:31] [V] [TRT] Fastest Tactic: 1002 Time: 0.697472
[04/18/2022-02:36:31] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,1,1) -> Float(196608,6144,192,3,1) ***************
[04/18/2022-02:36:31] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Unsqueeze__982:0 copy (Reformat)
[04/18/2022-02:36:31] [V] [TRT] Tactic: 1002 Time: 0.69696
[04/18/2022-02:36:31] [V] [TRT] Tactic: 0 Time: 1.12218
[04/18/2022-02:36:31] [V] [TRT] Fastest Tactic: 1002 Time: 0.69696
[04/18/2022-02:36:31] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002
[04/18/2022-02:36:31] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(196608,6144,192,3,1) ***************
[04/18/2022-02:36:31] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Unsqueeze__983:0 copy (Reformat)
[04/18/2022-02:36:31] [V] [TRT] Tactic: 1002 Time: 2.848
[04/18/2022-02:36:31] [V] [TRT] Tactic: 0 Time: 0.572032
[04/18/2022-02:36:31] [V] [TRT] Fastest Tactic: 0 Time: 0.572032
[04/18/2022-02:36:31] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,32,32) -> Float(196608,6144,192,3,1) ***************
[04/18/2022-02:36:31] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Unsqueeze__983:0 copy (Reformat)
[04/18/2022-02:36:31] [V] [TRT] Tactic: 1002 Time: 0.698752
[04/18/2022-02:36:31] [V] [TRT] Tactic: 0 Time: 0.56896
[04/18/2022-02:36:31] [V] [TRT] Fastest Tactic: 0 Time: 0.56896
[04/18/2022-02:36:31] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,8,8) -> Float(196608,6144,192,3,1) ***************
[04/18/2022-02:36:31] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Unsqueeze__983:0 copy (Reformat)
[04/18/2022-02:36:31] [V] [TRT] Tactic: 1002 Time: 0.764544
[04/18/2022-02:36:31] [V] [TRT] Tactic: 0 Time: 0.562944
[04/18/2022-02:36:31] [V] [TRT] Fastest Tactic: 0 Time: 0.562944
[04/18/2022-02:36:31] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,1,1) -> Float(196608,6144,192,3,1) ***************
[04/18/2022-02:36:31] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Unsqueeze__983:0 copy (Reformat)
[04/18/2022-02:36:31] [V] [TRT] Tactic: 1002 Time: 0.697856
[04/18/2022-02:36:31] [V] [TRT] Tactic: 0 Time: 0.56896
[04/18/2022-02:36:31] [V] [TRT] Fastest Tactic: 0 Time: 0.56896
[04/18/2022-02:36:31] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:31] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(196608,6144,192,3,1) ***************
[04/18/2022-02:36:31] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Unsqueeze__984:0 copy (Reformat)
[04/18/2022-02:36:31] [V] [TRT] Tactic: 1002 Time: 3.57517
[04/18/2022-02:36:31] [V] [TRT] Tactic: 0 Time: 0.636928
[04/18/2022-02:36:31] [V] [TRT] Fastest Tactic: 0 Time: 0.636928
[04/18/2022-02:36:31] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,32,32) -> Float(196608,6144,192,3,1) ***************
[04/18/2022-02:36:31] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Unsqueeze__984:0 copy (Reformat)
[04/18/2022-02:36:31] [V] [TRT] Tactic: 1002 Time: 1.40058
[04/18/2022-02:36:31] [V] [TRT] Tactic: 0 Time: 0.568832
[04/18/2022-02:36:31] [V] [TRT] Fastest Tactic: 0 Time: 0.568832
[04/18/2022-02:36:31] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,8,8) -> Float(196608,6144,192,3,1) ***************
[04/18/2022-02:36:31] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Unsqueeze__984:0 copy (Reformat)
[04/18/2022-02:36:31] [V] [TRT] Tactic: 1002 Time: 0.69696
[04/18/2022-02:36:31] [V] [TRT] Tactic: 0 Time: 0.602112
[04/18/2022-02:36:31] [V] [TRT] Fastest Tactic: 0 Time: 0.602112
[04/18/2022-02:36:31] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,1,1) -> Float(196608,6144,192,3,1) ***************
[04/18/2022-02:36:31] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/combine/stack_Unsqueeze__984:0 copy (Reformat)
[04/18/2022-02:36:31] [V] [TRT] Tactic: 1002 Time: 0.697344
[04/18/2022-02:36:31] [V] [TRT] Tactic: 0 Time: 0.567552
[04/18/2022-02:36:31] [V] [TRT] Fastest Tactic: 0 Time: 0.567552
[04/18/2022-02:36:31] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:31] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:31] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:31] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(65536,2048,1,32,32) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(16384,512,1:4,8,8) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(2048,64,64:32,1,1) ***************
[04/18/2022-02:36:31] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(65536,2048,1,32,32) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(16384,512,1:4,8,8) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(2048,64,64:32,1,1) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,32,32) -> Float(65536,2048,64,1,1) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,32,32) -> Float(16384,512,1:4,8,8) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,32,32) -> Float(2048,64,64:32,1,1) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,8,8) -> Float(65536,2048,64,1,1) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,8,8) -> Float(65536,2048,1,32,32) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,8,8) -> Float(2048,64,64:32,1,1) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,1,1) -> Float(65536,2048,64,1,1) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,1,1) -> Float(65536,2048,1,32,32) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,1,1) -> Float(16384,512,1:4,8,8) ***************
[04/18/2022-02:36:31] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(65536,1,2048,32) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(16384,1:4,512,8) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(2048,2048:32,64,1) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(1:4,4096,128,2) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(65536,2048,64,1) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(16384,1:4,512,8) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(2048,2048:32,64,1) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(1:4,4096,128,2) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(65536,2048,64,1) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(65536,1,2048,32) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(2048,2048:32,64,1) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(1:4,4096,128,2) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(65536,2048,64,1) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(65536,1,2048,32) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(16384,1:4,512,8) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(1:4,4096,128,2) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(1:4,4096,128,2) -> Float(65536,2048,64,1) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(1:4,4096,128,2) -> Float(65536,1,2048,32) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(1:4,4096,128,2) -> Float(16384,1:4,512,8) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(1:4,4096,128,2) -> Float(2048,2048:32,64,1) ***************
[04/18/2022-02:36:31] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(65536,1,2048,32) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(16384,1:4,512,8) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(2048,2048:32,64,1) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(65536,2048,64,1) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(16384,1:4,512,8) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(2048,2048:32,64,1) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(65536,2048,64,1) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(65536,1,2048,32) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(2048,2048:32,64,1) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(65536,2048,64,1) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(65536,1,2048,32) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(16384,1:4,512,8) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(1:4,4096,128,2) -> Float(65536,2048,64,1) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(1:4,4096,128,2) -> Float(65536,1,2048,32) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(1:4,4096,128,2) -> Float(16384,1:4,512,8) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(1:4,4096,128,2) -> Float(2048,2048:32,64,1) ***************
[04/18/2022-02:36:31] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:36:31] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:36:31] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(2048,1024:32,32,1) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:36:31] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3:0) (Reformat)
[04/18/2022-02:36:31] [V] [TRT] Tactic: 1002 Time: 0.089984
[04/18/2022-02:36:31] [V] [TRT] Tactic: 0 Time: 0.113024
[04/18/2022-02:36:31] [V] [TRT] Fastest Tactic: 1002 Time: 0.089984
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:36:31] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3:0) (Reformat)
[04/18/2022-02:36:31] [V] [TRT] Tactic: 1002 Time: 0.074368
[04/18/2022-02:36:31] [V] [TRT] Tactic: 0 Time: 0.108544
[04/18/2022-02:36:31] [V] [TRT] Fastest Tactic: 1002 Time: 0.074368
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(2048,1024:32,32,1) ***************
[04/18/2022-02:36:31] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3:0) (Reformat)
[04/18/2022-02:36:31] [V] [TRT] Tactic: 1002 Time: 0.075648
[04/18/2022-02:36:31] [V] [TRT] Tactic: 0 Time: 0.362112
[04/18/2022-02:36:31] [V] [TRT] Fastest Tactic: 1002 Time: 0.075648
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:36:31] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3:0) (Reformat)
[04/18/2022-02:36:31] [V] [TRT] Tactic: 1002 Time: 0.091904
[04/18/2022-02:36:31] [V] [TRT] Tactic: 0 Time: 0.132992
[04/18/2022-02:36:31] [V] [TRT] Fastest Tactic: 1002 Time: 0.091904
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:36:31] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3:0) (Reformat)
[04/18/2022-02:36:31] [V] [TRT] Tactic: 1002 Time: 0.075904
[04/18/2022-02:36:31] [V] [TRT] Tactic: 0 Time: 0.094592
[04/18/2022-02:36:31] [V] [TRT] Fastest Tactic: 1002 Time: 0.075904
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(2048,1024:32,32,1) ***************
[04/18/2022-02:36:31] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_06/1_up_lvl_4/post_combine/batchnorm/FusedBatchNormV3:0) (Reformat)
[04/18/2022-02:36:31] [V] [TRT] Tactic: 1002 Time: 0.075776
[04/18/2022-02:36:31] [V] [TRT] Tactic: 0 Time: 0.362496
[04/18/2022-02:36:31] [V] [TRT] Fastest Tactic: 1002 Time: 0.075776
[04/18/2022-02:36:31] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:36:31] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:31] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:36:31] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_1_up_lvl_4/downsample_max_x2/MaxPool:0) (Reformat)
[04/18/2022-02:36:31] [V] [TRT] Tactic: 1002 Time: 0.048384
[04/18/2022-02:36:32] [V] [TRT] Tactic: 0 Time: 0.036352
[04/18/2022-02:36:32] [V] [TRT] Fastest Tactic: 0 Time: 0.036352
[04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:36:32] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_1_up_lvl_4/downsample_max_x2/MaxPool:0) (Reformat)
[04/18/2022-02:36:32] [V] [TRT] Tactic: 1002 Time: 0.048256
[04/18/2022-02:36:32] [V] [TRT] Tactic: 0 Time: 0.036864
[04/18/2022-02:36:32] [V] [TRT] Fastest Tactic: 0 Time: 0.036864
[04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(512,256:32,16,1) ***************
[04/18/2022-02:36:32] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/input_1_up_lvl_4/downsample_max_x2/MaxPool:0) (Reformat)
[04/18/2022-02:36:32] [V] [TRT] Tactic: 1002 Time: 0.049408
[04/18/2022-02:36:32] [V] [TRT] Tactic: 0 Time: 0.09024
[04/18/2022-02:36:32] [V] [TRT] Fastest Tactic: 1002 Time: 0.049408
[04/18/2022-02:36:32] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(2048,1024:32,32,1) ***************
[04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(2048,1024:32,32,1) ***************
[04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(2048,1024:32,32,1) ***************
[04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:36:32] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(65536,2048,1,32,32) ***************
[04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(16384,512,1:4,8,8) ***************
[04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(2048,64,64:32,1,1) ***************
[04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,32,32) -> Float(65536,2048,64,1,1) ***************
[04/18/2022-02:36:32] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Unsqueeze__1066:0) (Reformat)
[04/18/2022-02:36:32] [V] [TRT] Tactic: 1002 Time: 0.145408
[04/18/2022-02:36:32] [V] [TRT] Tactic: 0 Time: 0.094848
[04/18/2022-02:36:32] [V] [TRT] Fastest Tactic: 0 Time: 0.094848
[04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,32,32) -> Float(16384,512,1:4,8,8) ***************
[04/18/2022-02:36:32] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Unsqueeze__1066:0) (Reformat)
[04/18/2022-02:36:32] [V] [TRT] Tactic: 1002 Time: 0.143232
[04/18/2022-02:36:32] [V] [TRT] Tactic: 0 Time: 0.094848
[04/18/2022-02:36:32] [V] [TRT] Fastest Tactic: 0 Time: 0.094848
[04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,32,32) -> Float(2048,64,64:32,1,1) ***************
[04/18/2022-02:36:32] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Unsqueeze__1066:0) (Reformat)
[04/18/2022-02:36:32] [V] [TRT] Tactic: 1002 Time: 0.14272
[04/18/2022-02:36:32] [V] [TRT] Tactic: 0 Time: 0.336896
[04/18/2022-02:36:32] [V] [TRT] Fastest Tactic: 1002 Time: 0.14272
[04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,8,8) -> Float(65536,2048,64,1,1) ***************
[04/18/2022-02:36:32] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Unsqueeze__1066:0) (Reformat)
[04/18/2022-02:36:32] [V] [TRT] Tactic: 1002 Time: 0.164096
[04/18/2022-02:36:32] [V] [TRT] Tactic: 0 Time: 0.09472
[04/18/2022-02:36:32] [V] [TRT] Fastest Tactic: 0 Time: 0.09472
[04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,8,8) -> Float(65536,2048,1,32,32) ***************
[04/18/2022-02:36:32] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Unsqueeze__1066:0) (Reformat)
[04/18/2022-02:36:32] [V] [TRT] Tactic: 1002 Time: 0.142976
[04/18/2022-02:36:32] [V] [TRT] Tactic: 0 Time: 0.093696
[04/18/2022-02:36:32] [V] [TRT] Fastest Tactic: 0 Time: 0.093696
[04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,8,8) -> Float(2048,64,64:32,1,1) ***************
[04/18/2022-02:36:32] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Unsqueeze__1066:0) (Reformat)
[04/18/2022-02:36:32] [V] [TRT] Tactic: 1002 Time: 0.142336
[04/18/2022-02:36:32] [V] [TRT] Tactic: 0 Time: 0.337792
[04/18/2022-02:36:32] [V] [TRT] Fastest Tactic: 1002 Time: 0.142336
[04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,1,1) -> Float(65536,2048,64,1,1) ***************
[04/18/2022-02:36:32] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Unsqueeze__1066:0) (Reformat)
[04/18/2022-02:36:32] [V] [TRT] Tactic: 1002 Time: 0.144128
[04/18/2022-02:36:32] [V] [TRT] Tactic: 0 Time: 0.094848
[04/18/2022-02:36:32] [V] [TRT] Fastest Tactic: 0 Time: 0.094848
[04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,1,1) -> Float(65536,2048,1,32,32) ***************
[04/18/2022-02:36:32] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Unsqueeze__1066:0) (Reformat)
[04/18/2022-02:36:32] [V] [TRT] Tactic: 1002 Time: 0.142848
[04/18/2022-02:36:32] [V] [TRT] Tactic: 0 Time: 0.090368
[04/18/2022-02:36:32] [V] [TRT] Fastest Tactic: 0 Time: 0.090368
[04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,1,1) -> Float(16384,512,1:4,8,8) ***************
[04/18/2022-02:36:32] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Unsqueeze__1066:0) (Reformat)
[04/18/2022-02:36:32] [V] [TRT] Tactic: 1002 Time: 0.148864
[04/18/2022-02:36:32] [V] [TRT] Tactic: 0 Time: 0.10496
[04/18/2022-02:36:32] [V] [TRT] Fastest Tactic: 0 Time: 0.10496
[04/18/2022-02:36:32] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(512,256:32,16,1) ***************
[04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(16384,256,16,1) ***************
[04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(512,256:32,16,1) ***************
[04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,256,16,1) ***************
[04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(512,256:32,16,1) ***************
[04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(16384,256,16,1) ***************
[04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:36:32] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(49152,3072,192,3,1) ***************
[04/18/2022-02:36:32] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Unsqueeze__994:0 copy (Reformat)
[04/18/2022-02:36:32] [V] [TRT] Tactic: 1002 Time: 0.7424
[04/18/2022-02:36:32] [V] [TRT] Tactic: 0 Time: 0.033152
[04/18/2022-02:36:32] [V] [TRT] Fastest Tactic: 0 Time: 0.033152
[04/18/2022-02:36:32] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,16,16) -> Float(49152,3072,192,3,1) ***************
[04/18/2022-02:36:32] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Unsqueeze__994:0 copy (Reformat)
[04/18/2022-02:36:32] [V] [TRT] Tactic: 1002 Time: 0.361088
[04/18/2022-02:36:32] [V] [TRT] Tactic: 0 Time: 0.03648
[04/18/2022-02:36:32] [V] [TRT] Fastest Tactic: 0 Time: 0.03648
[04/18/2022-02:36:32] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,4,4) -> Float(49152,3072,192,3,1) ***************
[04/18/2022-02:36:32] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Unsqueeze__994:0 copy (Reformat)
[04/18/2022-02:36:32] [V] [TRT] Tactic: 1002 Time: 0.35968
[04/18/2022-02:36:32] [V] [TRT] Tactic: 0 Time: 0.036992
[04/18/2022-02:36:32] [V] [TRT] Fastest Tactic: 0 Time: 0.036992
[04/18/2022-02:36:32] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,1,1) -> Float(49152,3072,192,3,1) ***************
[04/18/2022-02:36:32] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Unsqueeze__994:0 copy (Reformat)
[04/18/2022-02:36:32] [V] [TRT] Tactic: 1002 Time: 0.361344
[04/18/2022-02:36:32] [V] [TRT] Tactic: 0 Time: 0.043136
[04/18/2022-02:36:32] [V] [TRT] Fastest Tactic: 0 Time: 0.043136
[04/18/2022-02:36:32] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:32] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(49152,3072,192,3,1) ***************
[04/18/2022-02:36:32] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Unsqueeze__995:0 copy (Reformat)
[04/18/2022-02:36:32] [V] [TRT] Tactic: 1002 Time: 0.743168
[04/18/2022-02:36:32] [V] [TRT] Tactic: 0 Time: 0.033408
[04/18/2022-02:36:32] [V] [TRT] Fastest Tactic: 0 Time: 0.033408
[04/18/2022-02:36:32] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,16,16) -> Float(49152,3072,192,3,1) ***************
[04/18/2022-02:36:32] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Unsqueeze__995:0 copy (Reformat)
[04/18/2022-02:36:32] [V] [TRT] Tactic: 1002 Time: 0.359808
[04/18/2022-02:36:32] [V] [TRT] Tactic: 0 Time: 0.03648
[04/18/2022-02:36:32] [V] [TRT] Fastest Tactic: 0 Time: 0.03648
[04/18/2022-02:36:32] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,4,4) -> Float(49152,3072,192,3,1) ***************
[04/18/2022-02:36:32] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Unsqueeze__995:0 copy (Reformat)
[04/18/2022-02:36:32] [V] [TRT] Tactic: 1002 Time: 0.360704
[04/18/2022-02:36:32] [V] [TRT] Tactic: 0 Time: 0.037376
[04/18/2022-02:36:32] [V] [TRT] Fastest Tactic: 0 Time: 0.037376
[04/18/2022-02:36:32] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,1,1) -> Float(49152,3072,192,3,1) ***************
[04/18/2022-02:36:32] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Unsqueeze__995:0 copy (Reformat)
[04/18/2022-02:36:32] [V] [TRT] Tactic: 1002 Time: 0.36032
[04/18/2022-02:36:32] [V] [TRT] Tactic: 0 Time: 0.042752
[04/18/2022-02:36:32] [V] [TRT] Fastest Tactic: 0 Time: 0.042752
[04/18/2022-02:36:32] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:32] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(49152,3072,192,3,1) ***************
[04/18/2022-02:36:32] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Unsqueeze__996:0 copy (Reformat)
[04/18/2022-02:36:32] [V] [TRT] Tactic: 1002 Time: 0.744192
[04/18/2022-02:36:32] [V] [TRT] Tactic: 0 Time: 0.03328 [04/18/2022-02:36:32] [V] [TRT] Fastest Tactic: 0 Time: 0.03328 [04/18/2022-02:36:32] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,16,16) -> Float(49152,3072,192,3,1) *************** [04/18/2022-02:36:32] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Unsqueeze__996:0 copy (Reformat) [04/18/2022-02:36:32] [V] [TRT] Tactic: 1002 Time: 0.359808 [04/18/2022-02:36:32] [V] [TRT] Tactic: 0 Time: 0.036352 [04/18/2022-02:36:32] [V] [TRT] Fastest Tactic: 0 Time: 0.036352 [04/18/2022-02:36:32] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,4,4) -> Float(49152,3072,192,3,1) *************** [04/18/2022-02:36:32] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Unsqueeze__996:0 copy (Reformat) [04/18/2022-02:36:32] [V] [TRT] Tactic: 1002 Time: 0.359296 [04/18/2022-02:36:32] [V] [TRT] Tactic: 0 Time: 0.037632 [04/18/2022-02:36:32] [V] [TRT] Fastest Tactic: 0 Time: 0.037632 [04/18/2022-02:36:32] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,1,1) -> Float(49152,3072,192,3,1) *************** [04/18/2022-02:36:32] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/combine/stack_Unsqueeze__996:0 copy (Reformat) [04/18/2022-02:36:32] [V] [TRT] Tactic: 1002 Time: 0.359808 [04/18/2022-02:36:32] [V] [TRT] Tactic: 0 Time: 0.043904 [04/18/2022-02:36:32] [V] [TRT] Fastest Tactic: 0 Time: 0.043904 [04/18/2022-02:36:32] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:32] [V] [TRT] =============== 
Computing reformatting costs [04/18/2022-02:36:32] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:32] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(16384,1024,1,16,16) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(4096,256,1:4,4,4) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(1024,64,64:32,1,1) *************** [04/18/2022-02:36:32] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(16384,1024,1,16,16) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(4096,256,1:4,4,4) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(1024,64,64:32,1,1) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,16,16) -> Float(16384,1024,64,1,1) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,16,16) -> Float(4096,256,1:4,4,4) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,16,16) -> Float(1024,64,64:32,1,1) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,4,4) -> Float(16384,1024,64,1,1) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,4,4) -> Float(16384,1024,1,16,16) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,4,4) -> Float(1024,64,64:32,1,1) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: 
Float(1024,64,64:32,1,1) -> Float(16384,1024,64,1,1) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,1,1) -> Float(16384,1024,1,16,16) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,1,1) -> Float(4096,256,1:4,4,4) *************** [04/18/2022-02:36:32] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(16384,1,1024,16) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(4096,1:4,256,4) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(1024,1024:32,64,1) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(1:4,2048,128,2) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(16384,1024,64,1) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(4096,1:4,256,4) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(1024,1024:32,64,1) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(1:4,2048,128,2) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(16384,1024,64,1) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(16384,1,1024,16) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(1024,1024:32,64,1) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> 
Float(1:4,2048,128,2) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(16384,1024,64,1) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(16384,1,1024,16) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(4096,1:4,256,4) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(1:4,2048,128,2) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,128,2) -> Float(16384,1024,64,1) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,128,2) -> Float(16384,1,1024,16) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,128,2) -> Float(4096,1:4,256,4) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,128,2) -> Float(1024,1024:32,64,1) *************** [04/18/2022-02:36:32] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(16384,1,1024,16) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(4096,1:4,256,4) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(1024,1024:32,64,1) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(16384,1024,64,1) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(4096,1:4,256,4) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(1024,1024:32,64,1) *************** 
[04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(16384,1024,64,1) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(16384,1,1024,16) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(1024,1024:32,64,1) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(16384,1024,64,1) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(16384,1,1024,16) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(4096,1:4,256,4) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,128,2) -> Float(16384,1024,64,1) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,128,2) -> Float(16384,1,1024,16) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,128,2) -> Float(4096,1:4,256,4) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,128,2) -> Float(1024,1024:32,64,1) *************** [04/18/2022-02:36:32] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(16384,1,1024,64) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(4096,1:4,256,16) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(16384,256,16,1) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(4096,1:4,256,16) *************** [04/18/2022-02:36:32] [V] [TRT] *************** 
Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,256,16,1) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,1,1024,64) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(16384,256,16,1) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(16384,1,1024,64) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(4096,1:4,256,16) *************** [04/18/2022-02:36:32] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(16384,1,1024,64) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(4096,1:4,256,16) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(16384,256,16,1) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(4096,1:4,256,16) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,256,16,1) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,1,1024,64) *************** [04/18/2022-02:36:32] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(16384,1,1024,64) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(4096,1:4,256,16) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(512,256:32,16,1) *************** [04/18/2022-02:36:32] [V] 
[TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(16384,256,16,1) *************** [04/18/2022-02:36:32] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3:0) (Reformat) [04/18/2022-02:36:32] [V] [TRT] Tactic: 1002 Time: 0.054528 [04/18/2022-02:36:32] [V] [TRT] Tactic: 0 Time: 0.040448 [04/18/2022-02:36:32] [V] [TRT] Fastest Tactic: 0 Time: 0.040448 [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(4096,1:4,256,16) *************** [04/18/2022-02:36:32] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3:0) (Reformat) [04/18/2022-02:36:32] [V] [TRT] Tactic: 1002 Time: 0.045696 [04/18/2022-02:36:32] [V] [TRT] Tactic: 0 Time: 0.03392 [04/18/2022-02:36:32] [V] [TRT] Fastest Tactic: 0 Time: 0.03392 [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(512,256:32,16,1) *************** [04/18/2022-02:36:32] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3:0) (Reformat) [04/18/2022-02:36:32] [V] [TRT] Tactic: 1002 Time: 0.048128 [04/18/2022-02:36:32] [V] [TRT] Tactic: 0 Time: 0.111104 [04/18/2022-02:36:32] [V] [TRT] Fastest Tactic: 1002 Time: 0.048128 [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,256,16,1) *************** [04/18/2022-02:36:32] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3:0) (Reformat) [04/18/2022-02:36:32] [V] [TRT] Tactic: 1002 Time: 0.05312 [04/18/2022-02:36:32] [V] [TRT] Tactic: 0 Time: 0.041856 
[04/18/2022-02:36:32] [V] [TRT] Fastest Tactic: 0 Time: 0.041856 [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,1,1024,64) *************** [04/18/2022-02:36:32] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3:0) (Reformat) [04/18/2022-02:36:32] [V] [TRT] Tactic: 1002 Time: 0.047232 [04/18/2022-02:36:32] [V] [TRT] Tactic: 0 Time: 0.034048 [04/18/2022-02:36:32] [V] [TRT] Fastest Tactic: 0 Time: 0.034048 [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(512,256:32,16,1) *************** [04/18/2022-02:36:32] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_07/1_up_lvl_5/post_combine/batchnorm/FusedBatchNormV3:0) (Reformat) [04/18/2022-02:36:32] [V] [TRT] Tactic: 1002 Time: 0.048384 [04/18/2022-02:36:32] [V] [TRT] Tactic: 0 Time: 0.111616 [04/18/2022-02:36:32] [V] [TRT] Fastest Tactic: 1002 Time: 0.048384 [04/18/2022-02:36:32] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(16384,256,16,1) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,256,16,1) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(16384,256,16,1) *************** [04/18/2022-02:36:32] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(4096,1,512,64) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1024,1:4,128,16) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: 
Float(4096,64,8,1) -> Float(128,64:32,8,1) *************** [04/18/2022-02:36:32] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(16384,1,1024,64) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(4096,1:4,256,16) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(512,256:32,16,1) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(16384,256,16,1) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(4096,1:4,256,16) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(512,256:32,16,1) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,256,16,1) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,1,1024,64) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(512,256:32,16,1) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(16384,256,16,1) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(16384,1,1024,64) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(4096,1:4,256,16) *************** [04/18/2022-02:36:32] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(16384,1024,1,16,16) *************** [04/18/2022-02:36:32] [V] [TRT] 
*************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(4096,256,1:4,4,4) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(1024,64,64:32,1,1) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,16,16) -> Float(16384,1024,64,1,1) *************** [04/18/2022-02:36:32] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1105:0) (Reformat) [04/18/2022-02:36:32] [V] [TRT] Tactic: 1002 Time: 0.093696 [04/18/2022-02:36:32] [V] [TRT] Tactic: 0 Time: 0.030848 [04/18/2022-02:36:32] [V] [TRT] Fastest Tactic: 0 Time: 0.030848 [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,16,16) -> Float(4096,256,1:4,4,4) *************** [04/18/2022-02:36:32] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1105:0) (Reformat) [04/18/2022-02:36:32] [V] [TRT] Tactic: 1002 Time: 0.09216 [04/18/2022-02:36:32] [V] [TRT] Tactic: 0 Time: 0.03392 [04/18/2022-02:36:32] [V] [TRT] Fastest Tactic: 0 Time: 0.03392 [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,16,16) -> Float(1024,64,64:32,1,1) *************** [04/18/2022-02:36:32] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1105:0) (Reformat) [04/18/2022-02:36:32] [V] [TRT] Tactic: 1002 Time: 0.093184 [04/18/2022-02:36:32] [V] [TRT] Tactic: 0 Time: 0.16 [04/18/2022-02:36:32] [V] [TRT] Fastest Tactic: 1002 Time: 0.093184 [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,4,4) -> Float(16384,1024,64,1,1) *************** [04/18/2022-02:36:32] [V] [TRT] --------------- Timing Runner: Optimizer 
Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1105:0) (Reformat) [04/18/2022-02:36:32] [V] [TRT] Tactic: 1002 Time: 0.09408 [04/18/2022-02:36:32] [V] [TRT] Tactic: 0 Time: 0.033408 [04/18/2022-02:36:32] [V] [TRT] Fastest Tactic: 0 Time: 0.033408 [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,4,4) -> Float(16384,1024,1,16,16) *************** [04/18/2022-02:36:32] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1105:0) (Reformat) [04/18/2022-02:36:32] [V] [TRT] Tactic: 1002 Time: 0.091648 [04/18/2022-02:36:32] [V] [TRT] Tactic: 0 Time: 0.033792 [04/18/2022-02:36:32] [V] [TRT] Fastest Tactic: 0 Time: 0.033792 [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,4,4) -> Float(1024,64,64:32,1,1) *************** [04/18/2022-02:36:32] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1105:0) (Reformat) [04/18/2022-02:36:32] [V] [TRT] Tactic: 1002 Time: 0.093312 [04/18/2022-02:36:32] [V] [TRT] Tactic: 0 Time: 0.159488 [04/18/2022-02:36:32] [V] [TRT] Fastest Tactic: 1002 Time: 0.093312 [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,1,1) -> Float(16384,1024,64,1,1) *************** [04/18/2022-02:36:32] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1105:0) (Reformat) [04/18/2022-02:36:32] [V] [TRT] Tactic: 1002 Time: 0.094336 [04/18/2022-02:36:32] [V] [TRT] Tactic: 0 Time: 0.036864 [04/18/2022-02:36:32] [V] [TRT] Fastest Tactic: 0 Time: 0.036864 [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,1,1) -> Float(16384,1024,1,16,16) *************** 
[04/18/2022-02:36:32] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1105:0) (Reformat) [04/18/2022-02:36:32] [V] [TRT] Tactic: 1002 Time: 0.093184 [04/18/2022-02:36:32] [V] [TRT] Tactic: 0 Time: 0.03392 [04/18/2022-02:36:32] [V] [TRT] Fastest Tactic: 0 Time: 0.03392 [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,1,1) -> Float(4096,256,1:4,4,4) *************** [04/18/2022-02:36:32] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1105:0) (Reformat) [04/18/2022-02:36:32] [V] [TRT] Tactic: 1002 Time: 0.09152 [04/18/2022-02:36:32] [V] [TRT] Tactic: 0 Time: 0.03776 [04/18/2022-02:36:32] [V] [TRT] Fastest Tactic: 0 Time: 0.03776 [04/18/2022-02:36:32] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(4096,1,512,64) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1024,1:4,128,16) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(128,64:32,8,1) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(4096,64,8,1) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(1024,1:4,128,16) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(128,64:32,8,1) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,64,8,1) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,1,512,64) 
*************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(128,64:32,8,1) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,64,8,1) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,1,512,64) *************** [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(1024,1:4,128,16) *************** [04/18/2022-02:36:32] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(12288,1536,192,3,1) *************** [04/18/2022-02:36:32] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1006:0 copy (Reformat) [04/18/2022-02:36:32] [V] [TRT] Tactic: 1002 Time: 0.207616 [04/18/2022-02:36:32] [V] [TRT] Tactic: 0 Time: 0.022656 [04/18/2022-02:36:32] [V] [TRT] Fastest Tactic: 0 Time: 0.022656 [04/18/2022-02:36:32] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,8,8) -> Float(12288,1536,192,3,1) *************** [04/18/2022-02:36:32] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1006:0 copy (Reformat) [04/18/2022-02:36:32] [V] [TRT] Tactic: 1002 Time: 0.192768 [04/18/2022-02:36:32] [V] [TRT] Tactic: 0 Time: 0.021888 [04/18/2022-02:36:32] [V] [TRT] Fastest Tactic: 0 Time: 0.021888 [04/18/2022-02:36:32] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,2,2) -> Float(12288,1536,192,3,1) *************** [04/18/2022-02:36:32] [V] [TRT] --------------- Timing Runner: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1006:0 copy (Reformat)
[04/18/2022-02:36:32] [V] [TRT] Tactic: 1002 Time: 0.19328
[04/18/2022-02:36:32] [V] [TRT] Tactic: 0 Time: 0.022528
[04/18/2022-02:36:32] [V] [TRT] Fastest Tactic: 0 Time: 0.022528
[04/18/2022-02:36:32] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,1,1) -> Float(12288,1536,192,3,1) ***************
[04/18/2022-02:36:32] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1006:0 copy (Reformat)
[04/18/2022-02:36:32] [V] [TRT] Tactic: 1002 Time: 0.193408
[04/18/2022-02:36:32] [V] [TRT] Tactic: 0 Time: 0.023808
[04/18/2022-02:36:32] [V] [TRT] Fastest Tactic: 0 Time: 0.023808
[04/18/2022-02:36:32] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:32] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:32] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(12288,1536,192,3,1) ***************
[04/18/2022-02:36:32] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1007:0 copy (Reformat)
[04/18/2022-02:36:33] [V] [TRT] Tactic: 1002 Time: 0.208256
[04/18/2022-02:36:33] [V] [TRT] Tactic: 0 Time: 0.02176
[04/18/2022-02:36:33] [V] [TRT] Fastest Tactic: 0 Time: 0.02176
[04/18/2022-02:36:33] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,8,8) -> Float(12288,1536,192,3,1) ***************
[04/18/2022-02:36:33] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1007:0 copy (Reformat)
[04/18/2022-02:36:33] [V] [TRT] Tactic: 1002 Time: 0.193152
[04/18/2022-02:36:33] [V] [TRT] Tactic: 0 Time: 0.022016
[04/18/2022-02:36:33] [V] [TRT] Fastest Tactic: 0 Time: 0.022016
[04/18/2022-02:36:33] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,2,2) -> Float(12288,1536,192,3,1) ***************
[04/18/2022-02:36:33] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1007:0 copy (Reformat)
[04/18/2022-02:36:33] [V] [TRT] Tactic: 1002 Time: 0.194176
[04/18/2022-02:36:33] [V] [TRT] Tactic: 0 Time: 0.022656
[04/18/2022-02:36:33] [V] [TRT] Fastest Tactic: 0 Time: 0.022656
[04/18/2022-02:36:33] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,1,1) -> Float(12288,1536,192,3,1) ***************
[04/18/2022-02:36:33] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1007:0 copy (Reformat)
[04/18/2022-02:36:33] [V] [TRT] Tactic: 1002 Time: 0.193408
[04/18/2022-02:36:33] [V] [TRT] Tactic: 0 Time: 0.024064
[04/18/2022-02:36:33] [V] [TRT] Fastest Tactic: 0 Time: 0.024064
[04/18/2022-02:36:33] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:33] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(12288,1536,192,3,1) ***************
[04/18/2022-02:36:33] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1008:0 copy (Reformat)
[04/18/2022-02:36:33] [V] [TRT] Tactic: 1002 Time: 0.207488
[04/18/2022-02:36:33] [V] [TRT] Tactic: 0 Time: 0.02304
[04/18/2022-02:36:33] [V] [TRT] Fastest Tactic: 0 Time: 0.02304
[04/18/2022-02:36:33] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,8,8) -> Float(12288,1536,192,3,1) ***************
[04/18/2022-02:36:33] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1008:0 copy (Reformat)
[04/18/2022-02:36:33] [V] [TRT] Tactic: 1002 Time: 0.193024
[04/18/2022-02:36:33] [V] [TRT] Tactic: 0 Time: 0.0224
[04/18/2022-02:36:33] [V] [TRT] Fastest Tactic: 0 Time: 0.0224
[04/18/2022-02:36:33] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,2,2) -> Float(12288,1536,192,3,1) ***************
[04/18/2022-02:36:33] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1008:0 copy (Reformat)
[04/18/2022-02:36:33] [V] [TRT] Tactic: 1002 Time: 0.193152
[04/18/2022-02:36:33] [V] [TRT] Tactic: 0 Time: 0.022656
[04/18/2022-02:36:33] [V] [TRT] Fastest Tactic: 0 Time: 0.022656
[04/18/2022-02:36:33] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,1,1) -> Float(12288,1536,192,3,1) ***************
[04/18/2022-02:36:33] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/combine/stack_Unsqueeze__1008:0 copy (Reformat)
[04/18/2022-02:36:33] [V] [TRT] Tactic: 1002 Time: 0.193152
[04/18/2022-02:36:33] [V] [TRT] Tactic: 0 Time: 0.024192
[04/18/2022-02:36:33] [V] [TRT] Fastest Tactic: 0 Time: 0.024192
[04/18/2022-02:36:33] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:33] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:33] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:33] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(4096,512,1,8,8) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(1024,128,1:4,2,2) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(512,64,64:32,1,1) ***************
[04/18/2022-02:36:33] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(4096,512,1,8,8) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(1024,128,1:4,2,2) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(512,64,64:32,1,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,8,8) -> Float(4096,512,64,1,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,8,8) -> Float(1024,128,1:4,2,2) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,8,8) -> Float(512,64,64:32,1,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,2,2) -> Float(4096,512,64,1,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,2,2) -> Float(4096,512,1,8,8) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,2,2) -> Float(512,64,64:32,1,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,1,1) -> Float(4096,512,64,1,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,1,1) -> Float(4096,512,1,8,8) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,1,1) -> Float(1024,128,1:4,2,2) ***************
[04/18/2022-02:36:33] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(4096,1,512,8) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(1:4,1024,128,2) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(4096,512,64,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(1:4,1024,128,2) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(4096,512,64,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(4096,1,512,8) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(1:4,1024,128,2) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(4096,512,64,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(4096,1,512,8) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(1:4,1024,128,2) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1:4,1024,128,2) -> Float(4096,512,64,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1:4,1024,128,2) -> Float(4096,1,512,8) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1:4,1024,128,2) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1:4,1024,128,2) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:36:33] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(4096,1,512,8) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(4096,512,64,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(4096,512,64,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(4096,1,512,8) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(4096,512,64,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(4096,1,512,8) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1:4,1024,128,2) -> Float(4096,512,64,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1:4,1024,128,2) -> Float(4096,1,512,8) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1:4,1024,128,2) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1:4,1024,128,2) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:36:33] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(4096,64,8,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,64,8,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,1,512,64) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,64,8,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:36:33] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(4096,64,8,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,64,8,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,1,512,64) ***************
[04/18/2022-02:36:33] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(128,64:32,8,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(4096,64,8,1) ***************
[04/18/2022-02:36:33] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3:0) (Reformat)
[04/18/2022-02:36:33] [V] [TRT] Tactic: 1002 Time: 0.054016
[04/18/2022-02:36:33] [V] [TRT] Tactic: 0 Time: 0.025216
[04/18/2022-02:36:33] [V] [TRT] Fastest Tactic: 0 Time: 0.025216
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:36:33] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3:0) (Reformat)
[04/18/2022-02:36:33] [V] [TRT] Tactic: 1002 Time: 0.045952
[04/18/2022-02:36:33] [V] [TRT] Tactic: 0 Time: 0.022656
[04/18/2022-02:36:33] [V] [TRT] Fastest Tactic: 0 Time: 0.022656
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(128,64:32,8,1) ***************
[04/18/2022-02:36:33] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3:0) (Reformat)
[04/18/2022-02:36:33] [V] [TRT] Tactic: 1002 Time: 0.046592
[04/18/2022-02:36:33] [V] [TRT] Tactic: 0 Time: 0.035968
[04/18/2022-02:36:33] [V] [TRT] Fastest Tactic: 0 Time: 0.035968
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,64,8,1) ***************
[04/18/2022-02:36:33] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3:0) (Reformat)
[04/18/2022-02:36:33] [V] [TRT] Tactic: 1002 Time: 0.05312
[04/18/2022-02:36:33] [V] [TRT] Tactic: 0 Time: 0.024064
[04/18/2022-02:36:33] [V] [TRT] Fastest Tactic: 0 Time: 0.024064
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,1,512,64) ***************
[04/18/2022-02:36:33] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3:0) (Reformat)
[04/18/2022-02:36:33] [V] [TRT] Tactic: 1002 Time: 0.045824
[04/18/2022-02:36:33] [V] [TRT] Tactic: 0 Time: 0.020864
[04/18/2022-02:36:33] [V] [TRT] Fastest Tactic: 0 Time: 0.020864
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(128,64:32,8,1) ***************
[04/18/2022-02:36:33] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_08/1_up_lvl_6/post_combine/batchnorm/FusedBatchNormV3:0) (Reformat)
[04/18/2022-02:36:33] [V] [TRT] Tactic: 1002 Time: 0.045952
[04/18/2022-02:36:33] [V] [TRT] Tactic: 0 Time: 0.03648
[04/18/2022-02:36:33] [V] [TRT] Fastest Tactic: 0 Time: 0.03648
[04/18/2022-02:36:33] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(4096,64,8,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,64,8,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,64,8,1) ***************
[04/18/2022-02:36:33] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(1024,1,256,64) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(256,1:4,64,16) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(32,16:32,4,1) ***************
[04/18/2022-02:36:33] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(128,64:32,8,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(4096,64,8,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(128,64:32,8,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,64,8,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,1,512,64) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(128,64:32,8,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,64,8,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:36:33] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(4096,512,1,8,8) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(1024,128,1:4,2,2) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(512,64,64:32,1,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,8,8) -> Float(4096,512,64,1,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,8,8) -> Float(1024,128,1:4,2,2) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,8,8) -> Float(512,64,64:32,1,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,2,2) -> Float(4096,512,64,1,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,2,2) -> Float(4096,512,1,8,8) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,2,2) -> Float(512,64,64:32,1,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,1,1) -> Float(4096,512,64,1,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,1,1) -> Float(4096,512,1,8,8) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,1,1) -> Float(1024,128,1:4,2,2) ***************
[04/18/2022-02:36:33] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(1024,1,256,64) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(256,1:4,64,16) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(32,16:32,4,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(1024,16,4,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(256,1:4,64,16) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(32,16:32,4,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,16,4,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,1,256,64) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(32,16:32,4,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(1024,16,4,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(1024,1,256,64) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(256,1:4,64,16) ***************
[04/18/2022-02:36:33] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1,1) -> Float(2048,512,128,2,1) ***************
[04/18/2022-02:36:33] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/stack_Unsqueeze__1018:0 copy (Reformat)
[04/18/2022-02:36:33] [V] [TRT] Tactic: 1002 Time: 0.083072
[04/18/2022-02:36:33] [V] [TRT] Tactic: 0 Time: 0.018432
[04/18/2022-02:36:33] [V] [TRT] Fastest Tactic: 0 Time: 0.018432
[04/18/2022-02:36:33] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,1,4,4) -> Float(2048,512,128,2,1) ***************
[04/18/2022-02:36:33] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/stack_Unsqueeze__1018:0 copy (Reformat)
[04/18/2022-02:36:33] [V] [TRT] Tactic: 1002 Time: 0.109056
[04/18/2022-02:36:33] [V] [TRT] Tactic: 0 Time: 0.01792
[04/18/2022-02:36:33] [V] [TRT] Fastest Tactic: 0 Time: 0.01792
[04/18/2022-02:36:33] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,64,1:4,1,1) -> Float(2048,512,128,2,1) ***************
[04/18/2022-02:36:33] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/stack_Unsqueeze__1018:0 copy (Reformat)
[04/18/2022-02:36:33] [V] [TRT] Tactic: 1002 Time: 0.109056
[04/18/2022-02:36:33] [V] [TRT] Tactic: 0 Time: 0.018432
[04/18/2022-02:36:33] [V] [TRT] Fastest Tactic: 0 Time: 0.018432
[04/18/2022-02:36:33] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,64,64:32,1,1) -> Float(2048,512,128,2,1) ***************
[04/18/2022-02:36:33] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/stack_Unsqueeze__1018:0 copy (Reformat)
[04/18/2022-02:36:33] [V] [TRT] Tactic: 1002 Time: 0.109184
[04/18/2022-02:36:33] [V] [TRT] Tactic: 0 Time: 0.019456
[04/18/2022-02:36:33] [V] [TRT] Fastest Tactic: 0 Time: 0.019456
[04/18/2022-02:36:33] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:33] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1,1) -> Float(2048,512,128,2,1) ***************
[04/18/2022-02:36:33] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/stack_Unsqueeze__1019:0 copy (Reformat)
[04/18/2022-02:36:33] [V] [TRT] Tactic: 1002 Time: 0.082048
[04/18/2022-02:36:33] [V] [TRT] Tactic: 0 Time: 0.018688
[04/18/2022-02:36:33] [V] [TRT] Fastest Tactic: 0 Time: 0.018688
[04/18/2022-02:36:33] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,1,4,4) -> Float(2048,512,128,2,1) ***************
[04/18/2022-02:36:33] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/stack_Unsqueeze__1019:0 copy (Reformat)
[04/18/2022-02:36:33] [V] [TRT] Tactic: 1002 Time: 0.107264
[04/18/2022-02:36:33] [V] [TRT] Tactic: 0 Time: 0.018048
[04/18/2022-02:36:33] [V] [TRT] Fastest Tactic: 0 Time: 0.018048
[04/18/2022-02:36:33] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,64,1:4,1,1) -> Float(2048,512,128,2,1) ***************
[04/18/2022-02:36:33] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/stack_Unsqueeze__1019:0 copy (Reformat)
[04/18/2022-02:36:33] [V] [TRT] Tactic: 1002 Time: 0.108928
[04/18/2022-02:36:33] [V] [TRT] Tactic: 0 Time: 0.018176
[04/18/2022-02:36:33] [V] [TRT] Fastest Tactic: 0 Time: 0.018176
[04/18/2022-02:36:33] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,64,64:32,1,1) -> Float(2048,512,128,2,1) ***************
[04/18/2022-02:36:33] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/stack_Unsqueeze__1019:0 copy (Reformat)
[04/18/2022-02:36:33] [V] [TRT] Tactic: 1002 Time: 0.107776
[04/18/2022-02:36:33] [V] [TRT] Tactic: 0 Time: 0.020608
[04/18/2022-02:36:33] [V] [TRT] Fastest Tactic: 0 Time: 0.020608
[04/18/2022-02:36:33] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:33] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:33] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:33] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1,1) -> Float(1024,256,1,4,4) ***************
[04/18/2022-02:36:33] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/MatMul:0) (Reformat)
[04/18/2022-02:36:33] [V] [TRT] Tactic: 1002 Time: 0.028928
[04/18/2022-02:36:33] [V] [TRT] Tactic: 0 Time: 0.018048
[04/18/2022-02:36:33] [V] [TRT] Fastest Tactic: 0 Time: 0.018048
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1,1) -> Float(256,64,1:4,1,1) ***************
[04/18/2022-02:36:33] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/MatMul:0) (Reformat)
[04/18/2022-02:36:33] [V] [TRT] Tactic: 1002 Time: 0.028672
[04/18/2022-02:36:33] [V] [TRT] Tactic: 0 Time: 0.018304
[04/18/2022-02:36:33] [V] [TRT] Fastest Tactic: 0 Time: 0.018304
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1,1) -> Float(256,64,64:32,1,1) ***************
[04/18/2022-02:36:33] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/MatMul:0) (Reformat)
[04/18/2022-02:36:33] [V] [TRT] Tactic: 1002 Time: 0.053888
[04/18/2022-02:36:33] [V] [TRT] Tactic: 0 Time: 0.050688
[04/18/2022-02:36:33] [V] [TRT] Fastest Tactic: 0 Time: 0.050688
[04/18/2022-02:36:33] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1,1) -> Float(1024,256,1,4,4) ***************
[04/18/2022-02:36:33] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/MatMul:0 -> ) (Reformat)
[04/18/2022-02:36:33] [V] [TRT] Tactic: 1002 Time: 0.028928
[04/18/2022-02:36:33] [V] [TRT] Tactic: 0 Time: 0.01792
[04/18/2022-02:36:33] [V] [TRT] Fastest Tactic: 0 Time: 0.01792
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1,1) -> Float(256,64,1:4,1,1) ***************
[04/18/2022-02:36:33] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/MatMul:0 -> ) (Reformat)
[04/18/2022-02:36:33] [V] [TRT] Tactic: 1002 Time: 0.029056
[04/18/2022-02:36:33] [V] [TRT] Tactic: 0 Time: 0.018432
[04/18/2022-02:36:33] [V] [TRT] Fastest Tactic: 0 Time: 0.018432
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1,1) -> Float(256,64,64:32,1,1) ***************
[04/18/2022-02:36:33] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/MatMul:0 -> ) (Reformat)
[04/18/2022-02:36:33] [V] [TRT] Tactic: 1002 Time: 0.053888
[04/18/2022-02:36:33] [V] [TRT] Tactic: 0 Time: 0.050048
[04/18/2022-02:36:33] [V] [TRT] Fastest Tactic: 0 Time: 0.050048
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,1,4,4) -> Float(1024,256,64,1,1) ***************
[04/18/2022-02:36:33] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/MatMul:0 -> ) (Reformat)
[04/18/2022-02:36:33] [V] [TRT] Tactic: 1002 Time: 0.053376
[04/18/2022-02:36:33] [V] [TRT] Tactic: 0 Time: 0.018176
[04/18/2022-02:36:33] [V] [TRT] Fastest Tactic: 0 Time: 0.018176
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,1,4,4) -> Float(256,64,1:4,1,1) ***************
[04/18/2022-02:36:33] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/MatMul:0 -> ) (Reformat)
[04/18/2022-02:36:33] [V] [TRT] Tactic: 1002 Time: 0.054656
[04/18/2022-02:36:33] [V] [TRT] Tactic: 0 Time: 0.018176
[04/18/2022-02:36:33] [V] [TRT] Fastest Tactic: 0 Time: 0.018176
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,1,4,4) -> Float(256,64,64:32,1,1) ***************
[04/18/2022-02:36:33] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/MatMul:0 -> ) (Reformat)
[04/18/2022-02:36:33] [V] [TRT] Tactic: 1002 Time: 0.054656
[04/18/2022-02:36:33] [V] [TRT] Tactic: 0 Time: 0.050944
[04/18/2022-02:36:33] [V] [TRT] Fastest Tactic: 0 Time: 0.050944
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,64,1:4,1,1) -> Float(1024,256,64,1,1) ***************
[04/18/2022-02:36:33] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/MatMul:0 -> ) (Reformat)
[04/18/2022-02:36:33] [V] [TRT] Tactic: 1002 Time: 0.055296
[04/18/2022-02:36:33] [V] [TRT] Tactic: 0 Time: 0.01856
[04/18/2022-02:36:33] [V] [TRT] Fastest Tactic: 0 Time: 0.01856
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,64,1:4,1,1) -> Float(1024,256,1,4,4) ***************
[04/18/2022-02:36:33] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/MatMul:0 -> ) (Reformat)
[04/18/2022-02:36:33] [V] [TRT] Tactic: 1002 Time: 0.054144
[04/18/2022-02:36:33] [V] [TRT] Tactic: 0 Time: 0.018432
[04/18/2022-02:36:33] [V] [TRT] Fastest Tactic: 0 Time: 0.018432
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,64,1:4,1,1) -> Float(256,64,64:32,1,1) ***************
[04/18/2022-02:36:33] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/MatMul:0 -> ) (Reformat)
[04/18/2022-02:36:33] [V] [TRT] Tactic: 1002 Time: 0.054656
[04/18/2022-02:36:33] [V] [TRT] Tactic: 0 Time: 0.049408
[04/18/2022-02:36:33] [V] [TRT] Fastest Tactic: 0 Time: 0.049408
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,64,64:32,1,1) -> Float(1024,256,64,1,1) ***************
[04/18/2022-02:36:33] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/MatMul:0 -> ) (Reformat)
[04/18/2022-02:36:33] [V] [TRT] Tactic: 1002 Time: 0.052736
[04/18/2022-02:36:33] [V] [TRT] Tactic: 0 Time: 0.019584
[04/18/2022-02:36:33] [V] [TRT] Fastest Tactic: 0 Time: 0.019584
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,64,64:32,1,1) -> Float(1024,256,1,4,4) ***************
[04/18/2022-02:36:33] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/MatMul:0 -> ) (Reformat)
[04/18/2022-02:36:33] [V] [TRT] Tactic: 1002 Time: 0.054528
[04/18/2022-02:36:33] [V] [TRT] Tactic: 0 Time: 0.019584
[04/18/2022-02:36:33] [V] [TRT] Fastest Tactic: 0 Time: 0.019584
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,64,64:32,1,1) -> Float(256,64,1:4,1,1) ***************
[04/18/2022-02:36:33] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/MatMul:0 -> ) (Reformat)
[04/18/2022-02:36:33] [V] [TRT] Tactic: 1002 Time: 0.053632
[04/18/2022-02:36:33] [V] [TRT] Tactic: 0 Time: 0.020352
[04/18/2022-02:36:33] [V] [TRT] Fastest Tactic: 0 Time: 0.020352
[04/18/2022-02:36:33] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1) -> Float(1024,1,256,4) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1) -> Float(256,1:4,64,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1) -> Float(256,256:32,64,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1) -> Float(1:4,512,128,2) ***************
[04/18/2022-02:36:33] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/Squeeze:0 -> ) (Reformat)
[04/18/2022-02:36:33] [V] [TRT] Tactic: 1002 Time: 0.029824
[04/18/2022-02:36:33] [V] [TRT] Tactic: 0 Time: 0.018688
[04/18/2022-02:36:33] [V] [TRT] Fastest Tactic: 0 Time: 0.018688
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,4) -> Float(1024,256,64,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,4) -> Float(256,1:4,64,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,4) -> Float(256,256:32,64,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,4) -> Float(1:4,512,128,2) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,1) -> Float(1024,256,64,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,1) -> Float(1024,1,256,4) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,1) -> Float(256,256:32,64,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,1) -> Float(1:4,512,128,2) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,256:32,64,1) -> Float(1024,256,64,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,256:32,64,1) -> Float(1024,1,256,4) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,256:32,64,1) -> Float(256,1:4,64,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,256:32,64,1) -> Float(1:4,512,128,2) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,128,2) -> Float(1024,256,64,1) ***************
[04/18/2022-02:36:33] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/EfficientDet-D0/bifpn/node_09/1_up_lvl_7/combine/Squeeze:0 -> ) (Reformat)
[04/18/2022-02:36:33] [V] [TRT] Tactic: 1002 Time: 0.055168
[04/18/2022-02:36:33] [V] [TRT] Tactic: 0 Time: 0.018688
[04/18/2022-02:36:33] [V] [TRT] Fastest Tactic: 0 Time: 0.018688
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,128,2) -> Float(1024,1,256,4) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,128,2) -> Float(256,1:4,64,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,128,2) -> Float(256,256:32,64,1) ***************
[04/18/2022-02:36:33] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1) -> Float(1024,1,256,4) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1) -> Float(256,1:4,64,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1) -> Float(256,256:32,64,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,4) -> Float(1024,256,64,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,4) -> Float(256,1:4,64,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,4) -> Float(256,256:32,64,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,1) -> Float(1024,256,64,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,1) -> Float(1024,1,256,4) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,1) -> Float(256,256:32,64,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,256:32,64,1) -> Float(1024,256,64,1) ***************
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,256:32,64,1) ->
Float(1024,1,256,4) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,256:32,64,1) -> Float(256,1:4,64,1) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,128,2) -> Float(1024,256,64,1) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,128,2) -> Float(1024,1,256,4) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,128,2) -> Float(256,1:4,64,1) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,128,2) -> Float(256,256:32,64,1) *************** [04/18/2022-02:36:33] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(1024,1,256,64) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(256,1:4,64,16) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(1024,16,4,1) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(256,1:4,64,16) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,16,4,1) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,1,256,64) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(1024,16,4,1) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(1024,1,256,64) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(256,1:4,64,16) *************** [04/18/2022-02:36:33] [V] [TRT] =============== Computing reformatting costs 
[04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(1024,1,256,64) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(256,1:4,64,16) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(1024,16,4,1) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(256,1:4,64,16) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,16,4,1) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,1,256,64) *************** [04/18/2022-02:36:33] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(1024,1,256,64) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(256,1:4,64,16) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(32,16:32,4,1) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(1024,16,4,1) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(256,1:4,64,16) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(32,16:32,4,1) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,16,4,1) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,1,256,64) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(32,16:32,4,1) 
*************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(1024,16,4,1) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(1024,1,256,64) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(256,1:4,64,16) *************** [04/18/2022-02:36:33] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1) -> Float(1024,1,256,4) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1) -> Float(256,1:4,64,1) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1) -> Float(256,256:32,64,1) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,4) -> Float(1024,256,64,1) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,4) -> Float(256,1:4,64,1) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,4) -> Float(256,256:32,64,1) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,1) -> Float(1024,256,64,1) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,1) -> Float(1024,1,256,4) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,1) -> Float(256,256:32,64,1) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,256:32,64,1) -> Float(1024,256,64,1) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,256:32,64,1) -> Float(1024,1,256,4) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,256:32,64,1) 
-> Float(256,1:4,64,1) *************** [04/18/2022-02:36:33] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1) -> Float(1024,1,256,4) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1) -> Float(256,1:4,64,1) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1) -> Float(256,256:32,64,1) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,4) -> Float(1024,256,64,1) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,4) -> Float(256,1:4,64,1) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,4) -> Float(256,256:32,64,1) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,1) -> Float(1024,256,64,1) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,1) -> Float(1024,1,256,4) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,1) -> Float(256,256:32,64,1) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,256:32,64,1) -> Float(1024,256,64,1) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,256:32,64,1) -> Float(1024,1,256,4) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,256:32,64,1) -> Float(256,1:4,64,1) *************** [04/18/2022-02:36:33] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1) -> Float(1024,1,256,4) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1) -> Float(256,1:4,64,1) 
*************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1) -> Float(256,256:32,64,1) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,4) -> Float(1024,256,64,1) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,4) -> Float(256,1:4,64,1) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,4) -> Float(256,256:32,64,1) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,1) -> Float(1024,256,64,1) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,1) -> Float(1024,1,256,4) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,1) -> Float(256,256:32,64,1) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,256:32,64,1) -> Float(1024,256,64,1) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,256:32,64,1) -> Float(1024,1,256,4) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,256:32,64,1) -> Float(256,1:4,64,1) *************** [04/18/2022-02:36:33] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,64,1) -> Float(1024,256,1,256,4) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,64,1) -> Float(256,64,1:4,64,1) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,64,1) -> Float(256,64,64:32,64,1) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,1,256,4) -> Float(1024,256,64,64,1) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning 
Reformat: Float(1024,256,1,256,4) -> Float(256,64,1:4,64,1) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,1,256,4) -> Float(256,64,64:32,64,1) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,64,1:4,64,1) -> Float(1024,256,64,64,1) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,64,1:4,64,1) -> Float(1024,256,1,256,4) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,64,1:4,64,1) -> Float(256,64,64:32,64,1) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,64,64:32,64,1) -> Float(1024,256,64,64,1) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,64,64:32,64,1) -> Float(1024,256,1,256,4) *************** [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(256,64,64:32,64,1) -> Float(256,64,1:4,64,1) *************** [04/18/2022-02:36:33] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,64,1) -> Float(2048,512,128,64,1) *************** [04/18/2022-02:36:33] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1028:0 copy (Reformat) [04/18/2022-02:36:33] [V] [TRT] Tactic: 1002 Time: 0.082432 [04/18/2022-02:36:33] [V] [TRT] Tactic: 0 Time: 0.01792 [04/18/2022-02:36:33] [V] [TRT] Fastest Tactic: 0 Time: 0.01792 [04/18/2022-02:36:33] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,64,1) -> Float(2048,512,1,256,4) *************** [04/18/2022-02:36:33] [V] [TRT] --------------- Timing Runner: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1028:0 copy (Reformat) [04/18/2022-02:36:33] [V] [TRT] Tactic: 1002 Time: 0.028672 [04/18/2022-02:36:33] [V] [TRT] Tactic: 0 Time: 0.018432 [04/18/2022-02:36:33] [V] [TRT] Fastest Tactic: 0 Time: 0.018432 [04/18/2022-02:36:33] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,64,1) -> Float(512,128,1:4,64,1) *************** [04/18/2022-02:36:33] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1028:0 copy (Reformat) [04/18/2022-02:36:33] [V] [TRT] Tactic: 1002 Time: 0.0288 [04/18/2022-02:36:33] [V] [TRT] Tactic: 0 Time: 0.018304 [04/18/2022-02:36:33] [V] [TRT] Fastest Tactic: 0 Time: 0.018304 [04/18/2022-02:36:33] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,64,1) -> Float(512,128,128:32,64,1) *************** [04/18/2022-02:36:33] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1028:0 copy (Reformat) [04/18/2022-02:36:33] [V] [TRT] Tactic: 1002 Time: 0.029568 [04/18/2022-02:36:33] [V] [TRT] Tactic: 0 Time: 0.0224 [04/18/2022-02:36:33] [V] [TRT] Fastest Tactic: 0 Time: 0.0224 [04/18/2022-02:36:33] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:33] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,1,256,4) -> Float(2048,512,128,64,1) *************** [04/18/2022-02:36:33] [V] [TRT] --------------- Timing Runner: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1028:0 copy (Reformat) [04/18/2022-02:36:34] [V] [TRT] Tactic: 1002 Time: 0.054016 [04/18/2022-02:36:34] [V] [TRT] Tactic: 0 Time: 0.018048 [04/18/2022-02:36:34] [V] [TRT] Fastest Tactic: 0 Time: 0.018048 [04/18/2022-02:36:34] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,1,256,4) -> Float(2048,512,1,256,4) *************** [04/18/2022-02:36:34] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1028:0 copy (Reformat) [04/18/2022-02:36:34] [V] [TRT] Tactic: 1002 Time: 0.02496 [04/18/2022-02:36:34] [V] [TRT] Tactic: 0 Time: 0.018048 [04/18/2022-02:36:34] [V] [TRT] Fastest Tactic: 0 Time: 0.018048 [04/18/2022-02:36:34] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,1,256,4) -> Float(512,128,1:4,64,1) *************** [04/18/2022-02:36:34] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1028:0 copy (Reformat) [04/18/2022-02:36:34] [V] [TRT] Tactic: 1002 Time: 0.053632 [04/18/2022-02:36:34] [V] [TRT] Tactic: 0 Time: 0.018304 [04/18/2022-02:36:34] [V] [TRT] Fastest Tactic: 0 Time: 0.018304 [04/18/2022-02:36:34] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,1,256,4) -> Float(512,128,128:32,64,1) *************** [04/18/2022-02:36:34] [V] [TRT] --------------- Timing Runner: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1028:0 copy (Reformat) [04/18/2022-02:36:34] [V] [TRT] Tactic: 1002 Time: 0.0544 [04/18/2022-02:36:34] [V] [TRT] Tactic: 0 Time: 0.022784 [04/18/2022-02:36:34] [V] [TRT] Fastest Tactic: 0 Time: 0.022784 [04/18/2022-02:36:34] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(256,64,1:4,64,1) -> Float(2048,512,128,64,1) *************** [04/18/2022-02:36:34] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1028:0 copy (Reformat) [04/18/2022-02:36:34] [V] [TRT] Tactic: 1002 Time: 0.05312 [04/18/2022-02:36:34] [V] [TRT] Tactic: 0 Time: 0.018176 [04/18/2022-02:36:34] [V] [TRT] Fastest Tactic: 0 Time: 0.018176 [04/18/2022-02:36:34] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(256,64,1:4,64,1) -> Float(2048,512,1,256,4) *************** [04/18/2022-02:36:34] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1028:0 copy (Reformat) [04/18/2022-02:36:34] [V] [TRT] Tactic: 1002 Time: 0.056192 [04/18/2022-02:36:34] [V] [TRT] Tactic: 0 Time: 0.018048 [04/18/2022-02:36:34] [V] [TRT] Fastest Tactic: 0 Time: 0.018048 [04/18/2022-02:36:34] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(256,64,1:4,64,1) -> Float(512,128,1:4,64,1) *************** [04/18/2022-02:36:34] [V] [TRT] --------------- Timing Runner: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1028:0 copy (Reformat) [04/18/2022-02:36:34] [V] [TRT] Tactic: 1002 Time: 0.055808 [04/18/2022-02:36:34] [V] [TRT] Tactic: 0 Time: 0.01984 [04/18/2022-02:36:34] [V] [TRT] Fastest Tactic: 0 Time: 0.01984 [04/18/2022-02:36:34] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(256,64,1:4,64,1) -> Float(512,128,128:32,64,1) *************** [04/18/2022-02:36:34] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1028:0 copy (Reformat) [04/18/2022-02:36:34] [V] [TRT] Tactic: 1002 Time: 0.055296 [04/18/2022-02:36:34] [V] [TRT] Tactic: 0 Time: 0.022784 [04/18/2022-02:36:34] [V] [TRT] Fastest Tactic: 0 Time: 0.022784 [04/18/2022-02:36:34] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(256,64,64:32,64,1) -> Float(2048,512,128,64,1) *************** [04/18/2022-02:36:34] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1028:0 copy (Reformat) [04/18/2022-02:36:34] [V] [TRT] Tactic: 1002 Time: 0.054528 [04/18/2022-02:36:34] [V] [TRT] Tactic: 0 Time: 0.019328 [04/18/2022-02:36:34] [V] [TRT] Fastest Tactic: 0 Time: 0.019328 [04/18/2022-02:36:34] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(256,64,64:32,64,1) -> Float(2048,512,1,256,4) *************** [04/18/2022-02:36:34] [V] [TRT] --------------- Timing Runner: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1028:0 copy (Reformat) [04/18/2022-02:36:34] [V] [TRT] Tactic: 1002 Time: 0.054016 [04/18/2022-02:36:34] [V] [TRT] Tactic: 0 Time: 0.019072 [04/18/2022-02:36:34] [V] [TRT] Fastest Tactic: 0 Time: 0.019072 [04/18/2022-02:36:34] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(256,64,64:32,64,1) -> Float(512,128,1:4,64,1) *************** [04/18/2022-02:36:34] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1028:0 copy (Reformat) [04/18/2022-02:36:34] [V] [TRT] Tactic: 1002 Time: 0.056704 [04/18/2022-02:36:34] [V] [TRT] Tactic: 0 Time: 0.020096 [04/18/2022-02:36:34] [V] [TRT] Fastest Tactic: 0 Time: 0.020096 [04/18/2022-02:36:34] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(256,64,64:32,64,1) -> Float(512,128,128:32,64,1) *************** [04/18/2022-02:36:34] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1028:0 copy (Reformat) [04/18/2022-02:36:34] [V] [TRT] Tactic: 1002 Time: 0.052992 [04/18/2022-02:36:34] [V] [TRT] Tactic: 0 Time: 0.02368 [04/18/2022-02:36:34] [V] [TRT] Fastest Tactic: 0 Time: 0.02368 [04/18/2022-02:36:34] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:34] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,64,1) -> Float(2048,512,128,64,1) *************** [04/18/2022-02:36:34] [V] [TRT] 
--------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1028:0 copy (Reformat) [04/18/2022-02:36:34] [V] [TRT] Tactic: 1002 Time: 0.082304 [04/18/2022-02:36:34] [V] [TRT] Tactic: 0 Time: 0.017792 [04/18/2022-02:36:34] [V] [TRT] Fastest Tactic: 0 Time: 0.017792 [04/18/2022-02:36:34] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,64,1) -> Float(2048,512,1,256,4) *************** [04/18/2022-02:36:34] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1028:0 copy (Reformat) [04/18/2022-02:36:34] [V] [TRT] Tactic: 1002 Time: 0.028928 [04/18/2022-02:36:34] [V] [TRT] Tactic: 0 Time: 0.018176 [04/18/2022-02:36:34] [V] [TRT] Fastest Tactic: 0 Time: 0.018176 [04/18/2022-02:36:34] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,64,1) -> Float(512,128,1:4,64,1) *************** [04/18/2022-02:36:34] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1028:0 copy (Reformat) [04/18/2022-02:36:34] [V] [TRT] Tactic: 1002 Time: 0.0288 [04/18/2022-02:36:34] [V] [TRT] Tactic: 0 Time: 0.018304 [04/18/2022-02:36:34] [V] [TRT] Fastest Tactic: 0 Time: 0.018304 [04/18/2022-02:36:34] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,64,1) -> Float(512,128,128:32,64,1) *************** [04/18/2022-02:36:34] [V] [TRT] --------------- Timing Runner: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1028:0 copy (Reformat) [04/18/2022-02:36:34] [V] [TRT] Tactic: 1002 Time: 0.05248 [04/18/2022-02:36:34] [V] [TRT] Tactic: 0 Time: 0.052096 [04/18/2022-02:36:34] [V] [TRT] Fastest Tactic: 0 Time: 0.052096 [04/18/2022-02:36:34] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,1,256,4) -> Float(2048,512,128,64,1) *************** [04/18/2022-02:36:34] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1028:0 copy (Reformat) [04/18/2022-02:36:34] [V] [TRT] Tactic: 1002 Time: 0.052736 [04/18/2022-02:36:34] [V] [TRT] Tactic: 0 Time: 0.020864 [04/18/2022-02:36:34] [V] [TRT] Fastest Tactic: 0 Time: 0.020864 [04/18/2022-02:36:34] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,1,256,4) -> Float(2048,512,1,256,4) *************** [04/18/2022-02:36:34] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1028:0 copy (Reformat) [04/18/2022-02:36:34] [V] [TRT] Tactic: 1002 Time: 0.024576 [04/18/2022-02:36:34] [V] [TRT] Tactic: 0 Time: 0.018176 [04/18/2022-02:36:34] [V] [TRT] Fastest Tactic: 0 Time: 0.018176 [04/18/2022-02:36:34] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,1,256,4) -> Float(512,128,1:4,64,1) *************** [04/18/2022-02:36:34] [V] [TRT] --------------- Timing Runner: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1028:0 copy (Reformat) [04/18/2022-02:36:34] [V] [TRT] Tactic: 1002 Time: 0.052992 [04/18/2022-02:36:34] [V] [TRT] Tactic: 0 Time: 0.018304 [04/18/2022-02:36:34] [V] [TRT] Fastest Tactic: 0 Time: 0.018304 [04/18/2022-02:36:34] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,1,256,4) -> Float(512,128,128:32,64,1) *************** [04/18/2022-02:36:34] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1028:0 copy (Reformat) [04/18/2022-02:36:34] [V] [TRT] Tactic: 1002 Time: 0.056192 [04/18/2022-02:36:34] [V] [TRT] Tactic: 0 Time: 0.050688 [04/18/2022-02:36:34] [V] [TRT] Fastest Tactic: 0 Time: 0.050688 [04/18/2022-02:36:34] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(256,64,1:4,64,1) -> Float(2048,512,128,64,1) *************** [04/18/2022-02:36:34] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1028:0 copy (Reformat) [04/18/2022-02:36:34] [V] [TRT] Tactic: 1002 Time: 0.05504 [04/18/2022-02:36:34] [V] [TRT] Tactic: 0 Time: 0.018048 [04/18/2022-02:36:34] [V] [TRT] Fastest Tactic: 0 Time: 0.018048 [04/18/2022-02:36:34] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(256,64,1:4,64,1) -> Float(2048,512,1,256,4) *************** [04/18/2022-02:36:34] [V] [TRT] --------------- Timing Runner: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1028:0 copy (Reformat) [04/18/2022-02:36:34] [V] [TRT] Tactic: 1002 Time: 0.055424 [04/18/2022-02:36:34] [V] [TRT] Tactic: 0 Time: 0.018432 [04/18/2022-02:36:34] [V] [TRT] Fastest Tactic: 0 Time: 0.018432 [04/18/2022-02:36:34] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(256,64,1:4,64,1) -> Float(512,128,1:4,64,1) *************** [04/18/2022-02:36:34] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1028:0 copy (Reformat) [04/18/2022-02:36:34] [V] [TRT] Tactic: 1002 Time: 0.055936 [04/18/2022-02:36:34] [V] [TRT] Tactic: 0 Time: 0.01856 [04/18/2022-02:36:34] [V] [TRT] Fastest Tactic: 0 Time: 0.01856 [04/18/2022-02:36:34] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(256,64,1:4,64,1) -> Float(512,128,128:32,64,1) *************** [04/18/2022-02:36:34] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1028:0 copy (Reformat) [04/18/2022-02:36:34] [V] [TRT] Tactic: 1002 Time: 0.055552 [04/18/2022-02:36:34] [V] [TRT] Tactic: 0 Time: 0.051328 [04/18/2022-02:36:34] [V] [TRT] Fastest Tactic: 0 Time: 0.051328 [04/18/2022-02:36:34] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(256,64,64:32,64,1) -> Float(2048,512,128,64,1) *************** [04/18/2022-02:36:34] [V] [TRT] --------------- Timing Runner: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1028:0 copy (Reformat) [04/18/2022-02:36:34] [V] [TRT] Tactic: 1002 Time: 0.05248 [04/18/2022-02:36:34] [V] [TRT] Tactic: 0 Time: 0.021504 [04/18/2022-02:36:34] [V] [TRT] Fastest Tactic: 0 Time: 0.021504 [04/18/2022-02:36:34] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(256,64,64:32,64,1) -> Float(2048,512,1,256,4) *************** [04/18/2022-02:36:34] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1028:0 copy (Reformat) [04/18/2022-02:36:34] [V] [TRT] Tactic: 1002 Time: 0.05568 [04/18/2022-02:36:34] [V] [TRT] Tactic: 0 Time: 0.018688 [04/18/2022-02:36:34] [V] [TRT] Fastest Tactic: 0 Time: 0.018688 [04/18/2022-02:36:34] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(256,64,64:32,64,1) -> Float(512,128,1:4,64,1) *************** [04/18/2022-02:36:34] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1028:0 copy (Reformat) [04/18/2022-02:36:34] [V] [TRT] Tactic: 1002 Time: 0.05568 [04/18/2022-02:36:34] [V] [TRT] Tactic: 0 Time: 0.020096 [04/18/2022-02:36:34] [V] [TRT] Fastest Tactic: 0 Time: 0.020096 [04/18/2022-02:36:34] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(256,64,64:32,64,1) -> Float(512,128,128:32,64,1) *************** [04/18/2022-02:36:34] [V] [TRT] --------------- Timing Runner: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1028:0 copy (Reformat) [04/18/2022-02:36:34] [V] [TRT] Tactic: 1002 Time: 0.054784 [04/18/2022-02:36:34] [V] [TRT] Tactic: 0 Time: 0.05056 [04/18/2022-02:36:34] [V] [TRT] Fastest Tactic: 0 Time: 0.05056 [04/18/2022-02:36:34] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:34] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,128,64,1) -> Float(2048,512,1,256,4) *************** [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,128,64,1) -> Float(512,128,1:4,64,1) *************** [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,128,64,1) -> Float(512,128,128:32,64,1) *************** [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,1,256,4) -> Float(2048,512,128,64,1) *************** [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,1,256,4) -> Float(512,128,1:4,64,1) *************** [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,1,256,4) -> Float(512,128,128:32,64,1) *************** [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(512,128,1:4,64,1) -> Float(2048,512,128,64,1) *************** [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(512,128,1:4,64,1) -> Float(2048,512,1,256,4) *************** [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(512,128,1:4,64,1) -> Float(512,128,128:32,64,1) *************** [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128:32,64,1) -> Float(2048,512,128,64,1) *************** [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128:32,64,1) 
-> Float(2048,512,1,256,4) *************** [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128:32,64,1) -> Float(512,128,1:4,64,1) *************** [04/18/2022-02:36:34] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,128,64,1) -> Float(2048,512,512,1,256,4) *************** [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,128,64,1) -> Float(512,128,128,1:4,64,1) *************** [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,128,64,1) -> Float(512,128,128,128:32,64,1) *************** [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,1,256,4) -> Float(2048,512,512,128,64,1) *************** [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,1,256,4) -> Float(512,128,128,1:4,64,1) *************** [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,1,256,4) -> Float(512,128,128,128:32,64,1) *************** [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,1:4,64,1) -> Float(2048,512,512,128,64,1) *************** [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,1:4,64,1) -> Float(2048,512,512,1,256,4) *************** [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,1:4,64,1) -> Float(512,128,128,128:32,64,1) *************** [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,128:32,64,1) -> Float(2048,512,512,128,64,1) *************** [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,128:32,64,1) -> Float(2048,512,512,1,256,4) *************** [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,128:32,64,1) -> Float(512,128,128,1:4,64,1) 
*************** [04/18/2022-02:36:34] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,128,64,1) -> Float(4096,1024,512,128,64,1) *************** [04/18/2022-02:36:34] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1030:0 copy (Reformat) [04/18/2022-02:36:34] [V] [TRT] Tactic: 1002 Time: 0.119936 [04/18/2022-02:36:34] [V] [TRT] Tactic: 0 Time: 0.020608 [04/18/2022-02:36:34] [V] [TRT] Fastest Tactic: 0 Time: 0.020608 [04/18/2022-02:36:34] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,128,64,1) -> Float(4096,1024,512,1,256,4) *************** [04/18/2022-02:36:34] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1030:0 copy (Reformat) [04/18/2022-02:36:34] [V] [TRT] Tactic: 1002 Time: 0.21248 [04/18/2022-02:36:34] [V] [TRT] Tactic: 0 Time: 0.020736 [04/18/2022-02:36:34] [V] [TRT] Fastest Tactic: 0 Time: 0.020736 [04/18/2022-02:36:34] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,128,64,1) -> Float(1024,256,128,1:4,64,1) *************** [04/18/2022-02:36:34] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1030:0 copy (Reformat) [04/18/2022-02:36:34] [V] [TRT] Tactic: 1002 Time: 0.214272 [04/18/2022-02:36:34] [V] [TRT] Tactic: 0 Time: 0.020096 [04/18/2022-02:36:34] [V] [TRT] Fastest Tactic: 0 Time: 0.020096 
[04/18/2022-02:36:34] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,128,64,1) -> Float(1024,256,128,128:32,64,1) *************** [04/18/2022-02:36:34] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1030:0 copy (Reformat) [04/18/2022-02:36:34] [V] [TRT] Tactic: 1002 Time: 0.214656 [04/18/2022-02:36:34] [V] [TRT] Tactic: 0 Time: 0.02752 [04/18/2022-02:36:34] [V] [TRT] Fastest Tactic: 0 Time: 0.02752 [04/18/2022-02:36:34] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,1,256,4) -> Float(4096,1024,512,128,64,1) *************** [04/18/2022-02:36:34] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1030:0 copy (Reformat) [04/18/2022-02:36:34] [V] [TRT] Tactic: 1002 Time: 0.207488 [04/18/2022-02:36:34] [V] [TRT] Tactic: 0 Time: 0.022016 [04/18/2022-02:36:34] [V] [TRT] Fastest Tactic: 0 Time: 0.022016 [04/18/2022-02:36:34] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,1,256,4) -> Float(4096,1024,512,1,256,4) *************** [04/18/2022-02:36:34] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1030:0 copy (Reformat) [04/18/2022-02:36:34] [V] [TRT] Tactic: 1002 Time: 1.59066 [04/18/2022-02:36:34] [V] [TRT] Tactic: 0 Time: 0.019712 [04/18/2022-02:36:34] [V] [TRT] Fastest Tactic: 0 Time: 0.019712 
[04/18/2022-02:36:34] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,1,256,4) -> Float(1024,256,128,1:4,64,1) *************** [04/18/2022-02:36:34] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1030:0 copy (Reformat) [04/18/2022-02:36:34] [V] [TRT] Tactic: 1002 Time: 0.06144 [04/18/2022-02:36:34] [V] [TRT] Tactic: 0 Time: 0.021248 [04/18/2022-02:36:34] [V] [TRT] Fastest Tactic: 0 Time: 0.021248 [04/18/2022-02:36:34] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,1,256,4) -> Float(1024,256,128,128:32,64,1) *************** [04/18/2022-02:36:34] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1030:0 copy (Reformat) [04/18/2022-02:36:34] [V] [TRT] Tactic: 1002 Time: 2.18829 [04/18/2022-02:36:34] [V] [TRT] Tactic: 0 Time: 0.027776 [04/18/2022-02:36:34] [V] [TRT] Fastest Tactic: 0 Time: 0.027776 [04/18/2022-02:36:34] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,1:4,64,1) -> Float(4096,1024,512,128,64,1) *************** [04/18/2022-02:36:34] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1030:0 copy (Reformat) [04/18/2022-02:36:34] [V] [TRT] Tactic: 1002 Time: 0.210304 [04/18/2022-02:36:34] [V] [TRT] Tactic: 0 Time: 0.02048 [04/18/2022-02:36:34] [V] [TRT] Fastest Tactic: 0 Time: 0.02048 [04/18/2022-02:36:34] 
[V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,1:4,64,1) -> Float(4096,1024,512,1,256,4) *************** [04/18/2022-02:36:34] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1030:0 copy (Reformat) [04/18/2022-02:36:34] [V] [TRT] Tactic: 1002 Time: 1.59232 [04/18/2022-02:36:34] [V] [TRT] Tactic: 0 Time: 0.020096 [04/18/2022-02:36:34] [V] [TRT] Fastest Tactic: 0 Time: 0.020096 [04/18/2022-02:36:34] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,1:4,64,1) -> Float(1024,256,128,1:4,64,1) *************** [04/18/2022-02:36:34] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1030:0 copy (Reformat) [04/18/2022-02:36:34] [V] [TRT] Tactic: 1002 Time: 0.064 [04/18/2022-02:36:34] [V] [TRT] Tactic: 0 Time: 0.020608 [04/18/2022-02:36:34] [V] [TRT] Fastest Tactic: 0 Time: 0.020608 [04/18/2022-02:36:34] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:34] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,1:4,64,1) -> Float(1024,256,128,128:32,64,1) *************** [04/18/2022-02:36:34] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1030:0 copy (Reformat) [04/18/2022-02:36:35] [V] [TRT] Tactic: 1002 Time: 1.5936 [04/18/2022-02:36:35] [V] [TRT] Tactic: 0 Time: 0.028544 [04/18/2022-02:36:35] [V] [TRT] Fastest Tactic: 0 Time: 0.028544 [04/18/2022-02:36:35] [V] [TRT] >>>>>>>>>>>>>>> 
Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,128:32,64,1) -> Float(4096,1024,512,128,64,1) *************** [04/18/2022-02:36:35] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1030:0 copy (Reformat) [04/18/2022-02:36:35] [V] [TRT] Tactic: 1002 Time: 0.210816 [04/18/2022-02:36:35] [V] [TRT] Tactic: 0 Time: 0.02304 [04/18/2022-02:36:35] [V] [TRT] Fastest Tactic: 0 Time: 0.02304 [04/18/2022-02:36:35] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,128:32,64,1) -> Float(4096,1024,512,1,256,4) *************** [04/18/2022-02:36:35] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1030:0 copy (Reformat) [04/18/2022-02:36:35] [V] [TRT] Tactic: 1002 Time: 1.5927 [04/18/2022-02:36:35] [V] [TRT] Tactic: 0 Time: 0.020224 [04/18/2022-02:36:35] [V] [TRT] Fastest Tactic: 0 Time: 0.020224 [04/18/2022-02:36:35] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,128:32,64,1) -> Float(1024,256,128,1:4,64,1) *************** [04/18/2022-02:36:35] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1030:0 copy (Reformat) [04/18/2022-02:36:35] [V] [TRT] Tactic: 1002 Time: 0.06272 [04/18/2022-02:36:35] [V] [TRT] Tactic: 0 Time: 0.020864 [04/18/2022-02:36:35] [V] [TRT] Fastest Tactic: 0 Time: 0.020864 [04/18/2022-02:36:35] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner 
Type: Reformat Tactic: 0 [04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,128:32,64,1) -> Float(1024,256,128,128:32,64,1) *************** [04/18/2022-02:36:35] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1030:0 copy (Reformat) [04/18/2022-02:36:35] [V] [TRT] Tactic: 1002 Time: 2.1111 [04/18/2022-02:36:35] [V] [TRT] Tactic: 0 Time: 0.02944 [04/18/2022-02:36:35] [V] [TRT] Fastest Tactic: 0 Time: 0.02944 [04/18/2022-02:36:35] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:35] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,128,64,1) -> Float(4096,1024,512,128,64,1) *************** [04/18/2022-02:36:35] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1030:0 copy (Reformat) [04/18/2022-02:36:35] [V] [TRT] Tactic: 1002 Time: 0.12096 [04/18/2022-02:36:35] [V] [TRT] Tactic: 0 Time: 0.019584 [04/18/2022-02:36:35] [V] [TRT] Fastest Tactic: 0 Time: 0.019584 [04/18/2022-02:36:35] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,128,64,1) -> Float(4096,1024,512,1,256,4) *************** [04/18/2022-02:36:35] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1030:0 copy (Reformat) [04/18/2022-02:36:35] [V] [TRT] Tactic: 1002 Time: 0.21248 [04/18/2022-02:36:35] [V] [TRT] Tactic: 0 Time: 0.020992 [04/18/2022-02:36:35] [V] [TRT] Fastest Tactic: 0 Time: 0.020992 
[04/18/2022-02:36:35] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,128,64,1) -> Float(1024,256,128,1:4,64,1) *************** [04/18/2022-02:36:35] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1030:0 copy (Reformat) [04/18/2022-02:36:35] [V] [TRT] Tactic: 1002 Time: 0.214784 [04/18/2022-02:36:35] [V] [TRT] Tactic: 0 Time: 0.020224 [04/18/2022-02:36:35] [V] [TRT] Fastest Tactic: 0 Time: 0.020224 [04/18/2022-02:36:35] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,128,64,1) -> Float(1024,256,128,128:32,64,1) *************** [04/18/2022-02:36:35] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1030:0 copy (Reformat) [04/18/2022-02:36:35] [V] [TRT] Tactic: 1002 Time: 1.66643 [04/18/2022-02:36:35] [V] [TRT] Tactic: 0 Time: 0.08896 [04/18/2022-02:36:35] [V] [TRT] Fastest Tactic: 0 Time: 0.08896 [04/18/2022-02:36:35] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,1,256,4) -> Float(4096,1024,512,128,64,1) *************** [04/18/2022-02:36:35] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1030:0 copy (Reformat) [04/18/2022-02:36:35] [V] [TRT] Tactic: 1002 Time: 0.207232 [04/18/2022-02:36:35] [V] [TRT] Tactic: 0 Time: 0.020096 [04/18/2022-02:36:35] [V] [TRT] Fastest Tactic: 0 Time: 0.020096 
[04/18/2022-02:36:35] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,1,256,4) -> Float(4096,1024,512,1,256,4) *************** [04/18/2022-02:36:35] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1030:0 copy (Reformat) [04/18/2022-02:36:35] [V] [TRT] Tactic: 1002 Time: 2.25677 [04/18/2022-02:36:35] [V] [TRT] Tactic: 0 Time: 0.019456 [04/18/2022-02:36:35] [V] [TRT] Fastest Tactic: 0 Time: 0.019456 [04/18/2022-02:36:35] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,1,256,4) -> Float(1024,256,128,1:4,64,1) *************** [04/18/2022-02:36:35] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1030:0 copy (Reformat) [04/18/2022-02:36:35] [V] [TRT] Tactic: 1002 Time: 0.06272 [04/18/2022-02:36:35] [V] [TRT] Tactic: 0 Time: 0.020736 [04/18/2022-02:36:35] [V] [TRT] Fastest Tactic: 0 Time: 0.020736 [04/18/2022-02:36:35] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,1,256,4) -> Float(1024,256,128,128:32,64,1) *************** [04/18/2022-02:36:35] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1030:0 copy (Reformat) [04/18/2022-02:36:35] [V] [TRT] Tactic: 1002 Time: 1.5895 [04/18/2022-02:36:35] [V] [TRT] Tactic: 0 Time: 0.088448 [04/18/2022-02:36:35] [V] [TRT] Fastest Tactic: 0 Time: 0.088448 [04/18/2022-02:36:35] 
[V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,1:4,64,1) -> Float(4096,1024,512,128,64,1) *************** [04/18/2022-02:36:35] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1030:0 copy (Reformat) [04/18/2022-02:36:35] [V] [TRT] Tactic: 1002 Time: 0.21056 [04/18/2022-02:36:35] [V] [TRT] Tactic: 0 Time: 0.023552 [04/18/2022-02:36:35] [V] [TRT] Fastest Tactic: 0 Time: 0.023552 [04/18/2022-02:36:35] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,1:4,64,1) -> Float(4096,1024,512,1,256,4) *************** [04/18/2022-02:36:35] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1030:0 copy (Reformat) [04/18/2022-02:36:35] [V] [TRT] Tactic: 1002 Time: 1.59386 [04/18/2022-02:36:35] [V] [TRT] Tactic: 0 Time: 0.02176 [04/18/2022-02:36:35] [V] [TRT] Fastest Tactic: 0 Time: 0.02176 [04/18/2022-02:36:35] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,1:4,64,1) -> Float(1024,256,128,1:4,64,1) *************** [04/18/2022-02:36:35] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1030:0 copy (Reformat) [04/18/2022-02:36:35] [V] [TRT] Tactic: 1002 Time: 0.063104 [04/18/2022-02:36:35] [V] [TRT] Tactic: 0 Time: 0.022016 [04/18/2022-02:36:35] [V] [TRT] Fastest Tactic: 0 Time: 0.022016 [04/18/2022-02:36:35] [V] [TRT] >>>>>>>>>>>>>>> 
Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,1:4,64,1) -> Float(1024,256,128,128:32,64,1) *************** [04/18/2022-02:36:35] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1030:0 copy (Reformat) [04/18/2022-02:36:35] [V] [TRT] Tactic: 1002 Time: 2.29171 [04/18/2022-02:36:35] [V] [TRT] Tactic: 0 Time: 0.088576 [04/18/2022-02:36:35] [V] [TRT] Fastest Tactic: 0 Time: 0.088576 [04/18/2022-02:36:35] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,128:32,64,1) -> Float(4096,1024,512,128,64,1) *************** [04/18/2022-02:36:35] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1030:0 copy (Reformat) [04/18/2022-02:36:35] [V] [TRT] Tactic: 1002 Time: 0.210816 [04/18/2022-02:36:35] [V] [TRT] Tactic: 0 Time: 0.026112 [04/18/2022-02:36:35] [V] [TRT] Fastest Tactic: 0 Time: 0.026112 [04/18/2022-02:36:35] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,128:32,64,1) -> Float(4096,1024,512,1,256,4) *************** [04/18/2022-02:36:35] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1030:0 copy (Reformat) [04/18/2022-02:36:35] [V] [TRT] Tactic: 1002 Time: 1.59411 [04/18/2022-02:36:35] [V] [TRT] Tactic: 0 Time: 0.023424 [04/18/2022-02:36:35] [V] [TRT] Fastest Tactic: 0 Time: 0.023424 [04/18/2022-02:36:35] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner 
Type: Reformat Tactic: 0 [04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,128:32,64,1) -> Float(1024,256,128,1:4,64,1) *************** [04/18/2022-02:36:35] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1030:0 copy (Reformat) [04/18/2022-02:36:35] [V] [TRT] Tactic: 1002 Time: 0.062848 [04/18/2022-02:36:35] [V] [TRT] Tactic: 0 Time: 0.02368 [04/18/2022-02:36:35] [V] [TRT] Fastest Tactic: 0 Time: 0.02368 [04/18/2022-02:36:35] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,128:32,64,1) -> Float(1024,256,128,128:32,64,1) *************** [04/18/2022-02:36:35] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/input_1_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1030:0 copy (Reformat) [04/18/2022-02:36:35] [V] [TRT] Tactic: 1002 Time: 1.59475 [04/18/2022-02:36:35] [V] [TRT] Tactic: 0 Time: 0.089856 [04/18/2022-02:36:35] [V] [TRT] Fastest Tactic: 0 Time: 0.089856 [04/18/2022-02:36:35] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:35] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,1024,512,128,64,1) -> Float(4096,1024,512,1,256,4) *************** [04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,1024,512,128,64,1) -> Float(1024,256,128,1:4,64,1) *************** [04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,1024,512,128,64,1) -> Float(1024,256,128,128:32,64,1) *************** [04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,1024,512,1,256,4) -> 
Float(4096,1024,512,128,64,1) *************** [04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,1024,512,1,256,4) -> Float(1024,256,128,1:4,64,1) *************** [04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,1024,512,1,256,4) -> Float(1024,256,128,128:32,64,1) *************** [04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,128,1:4,64,1) -> Float(4096,1024,512,128,64,1) *************** [04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,128,1:4,64,1) -> Float(4096,1024,512,1,256,4) *************** [04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,128,1:4,64,1) -> Float(1024,256,128,128:32,64,1) *************** [04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,128,128:32,64,1) -> Float(4096,1024,512,128,64,1) *************** [04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,128,128:32,64,1) -> Float(4096,1024,512,1,256,4) *************** [04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,128,128:32,64,1) -> Float(1024,256,128,1:4,64,1) *************** [04/18/2022-02:36:35] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(8192,1024,128,2,1) *************** [04/18/2022-02:36:35] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/stack_Unsqueeze__1034:0 copy (Reformat) [04/18/2022-02:36:35] [V] [TRT] Tactic: 1002 Time: 0.20864 [04/18/2022-02:36:35] [V] [TRT] Tactic: 0 Time: 0.022912 [04/18/2022-02:36:35] [V] [TRT] Fastest Tactic: 0 Time: 0.022912 [04/18/2022-02:36:35] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,8,8) -> 
Float(8192,1024,128,2,1) ***************
[04/18/2022-02:36:35] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/stack_Unsqueeze__1034:0 copy (Reformat)
[04/18/2022-02:36:35] [V] [TRT] Tactic: 1002 Time: 0.193664
[04/18/2022-02:36:35] [V] [TRT] Tactic: 0 Time: 0.0224
[04/18/2022-02:36:35] [V] [TRT] Fastest Tactic: 0 Time: 0.0224
[04/18/2022-02:36:35] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,2,2) -> Float(8192,1024,128,2,1) ***************
[04/18/2022-02:36:35] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/stack_Unsqueeze__1034:0 copy (Reformat)
[04/18/2022-02:36:35] [V] [TRT] Tactic: 1002 Time: 0.19456
[04/18/2022-02:36:35] [V] [TRT] Tactic: 0 Time: 0.023808
[04/18/2022-02:36:35] [V] [TRT] Fastest Tactic: 0 Time: 0.023808
[04/18/2022-02:36:35] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,1,1) -> Float(8192,1024,128,2,1) ***************
[04/18/2022-02:36:35] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/stack_Unsqueeze__1034:0 copy (Reformat)
[04/18/2022-02:36:35] [V] [TRT] Tactic: 1002 Time: 0.195328
[04/18/2022-02:36:35] [V] [TRT] Tactic: 0 Time: 0.026112
[04/18/2022-02:36:35] [V] [TRT] Fastest Tactic: 0 Time: 0.026112
[04/18/2022-02:36:35] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:35] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(8192,1024,128,2,1) ***************
[04/18/2022-02:36:35] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/stack_Unsqueeze__1035:0 copy (Reformat)
[04/18/2022-02:36:35] [V] [TRT] Tactic: 1002 Time: 0.209536
[04/18/2022-02:36:35] [V] [TRT] Tactic: 0 Time: 0.022016
[04/18/2022-02:36:35] [V] [TRT] Fastest Tactic: 0 Time: 0.022016
[04/18/2022-02:36:35] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,8,8) -> Float(8192,1024,128,2,1) ***************
[04/18/2022-02:36:35] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/stack_Unsqueeze__1035:0 copy (Reformat)
[04/18/2022-02:36:35] [V] [TRT] Tactic: 1002 Time: 0.194048
[04/18/2022-02:36:35] [V] [TRT] Tactic: 0 Time: 0.022784
[04/18/2022-02:36:35] [V] [TRT] Fastest Tactic: 0 Time: 0.022784
[04/18/2022-02:36:35] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,2,2) -> Float(8192,1024,128,2,1) ***************
[04/18/2022-02:36:35] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/stack_Unsqueeze__1035:0 copy (Reformat)
[04/18/2022-02:36:35] [V] [TRT] Tactic: 1002 Time: 0.194304
[04/18/2022-02:36:35] [V] [TRT] Tactic: 0 Time: 0.023296
[04/18/2022-02:36:35] [V] [TRT] Fastest Tactic: 0 Time: 0.023296
[04/18/2022-02:36:35] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,1,1) -> Float(8192,1024,128,2,1) ***************
[04/18/2022-02:36:35] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/stack_Unsqueeze__1035:0 copy (Reformat)
[04/18/2022-02:36:35] [V] [TRT] Tactic: 1002 Time: 0.19456
[04/18/2022-02:36:35] [V] [TRT] Tactic: 0 Time: 0.025728
[04/18/2022-02:36:35] [V] [TRT] Fastest Tactic: 0 Time: 0.025728
[04/18/2022-02:36:35] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:35] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:35] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:35] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(4096,512,1,8,8) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(1024,128,1:4,2,2) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(512,64,64:32,1,1) ***************
[04/18/2022-02:36:35] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(4096,512,1,8,8) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(1024,128,1:4,2,2) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(512,64,64:32,1,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,8,8) -> Float(4096,512,64,1,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,8,8) -> Float(1024,128,1:4,2,2) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,8,8) -> Float(512,64,64:32,1,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,2,2) -> Float(4096,512,64,1,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,2,2) -> Float(4096,512,1,8,8) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,2,2) -> Float(512,64,64:32,1,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,1,1) -> Float(4096,512,64,1,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,1,1) -> Float(4096,512,1,8,8) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,1,1) -> Float(1024,128,1:4,2,2) ***************
[04/18/2022-02:36:35] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(4096,1,512,8) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(1:4,1024,128,2) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(4096,512,64,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(1:4,1024,128,2) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(4096,512,64,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(4096,1,512,8) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(1:4,1024,128,2) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(4096,512,64,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(4096,1,512,8) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(1:4,1024,128,2) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(1:4,1024,128,2) -> Float(4096,512,64,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(1:4,1024,128,2) -> Float(4096,1,512,8) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(1:4,1024,128,2) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(1:4,1024,128,2) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:36:35] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(4096,1,512,8) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(4096,512,64,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(4096,512,64,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(4096,1,512,8) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(4096,512,64,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(4096,1,512,8) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(1:4,1024,128,2) -> Float(4096,512,64,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(1:4,1024,128,2) -> Float(4096,1,512,8) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(1:4,1024,128,2) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(1:4,1024,128,2) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:36:35] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(4096,64,8,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,64,8,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,1,512,64) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,64,8,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:36:35] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(4096,64,8,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,64,8,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,1,512,64) ***************
[04/18/2022-02:36:35] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(128,64:32,8,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(4096,64,8,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(128,64:32,8,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,64,8,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,1,512,64) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(128,64:32,8,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,64,8,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:36:35] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(4096,1,512,8) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(4096,512,64,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(4096,512,64,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(4096,1,512,8) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(4096,512,64,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(4096,1,512,8) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:36:35] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(4096,1,512,8) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(4096,512,64,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(4096,512,64,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(4096,1,512,8) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(4096,512,64,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(4096,1,512,8) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:36:35] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(4096,1,512,8) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(4096,512,64,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(4096,512,64,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(4096,1,512,8) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(4096,512,64,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(4096,1,512,8) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:36:35] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,64,1) -> Float(4096,512,1,512,8) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,64,1) -> Float(1024,128,1:4,128,2) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,64,1) -> Float(512,64,64:32,64,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,512,8) -> Float(4096,512,64,64,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,512,8) -> Float(1024,128,1:4,128,2) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,512,8) -> Float(512,64,64:32,64,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,128,2) -> Float(4096,512,64,64,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,128,2) -> Float(4096,512,1,512,8) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,128,2) -> Float(512,64,64:32,64,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,64,1) -> Float(4096,512,64,64,1) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,64,1) -> Float(4096,512,1,512,8) ***************
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,64,1) -> Float(1024,128,1:4,128,2) ***************
[04/18/2022-02:36:35] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,64,1) -> Float(8192,1024,128,64,1) ***************
[04/18/2022-02:36:35] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1043:0 copy (Reformat)
[04/18/2022-02:36:35] [V] [TRT] Tactic: 1002 Time: 0.205952
[04/18/2022-02:36:35] [V] [TRT] Tactic: 0 Time: 0.022784
[04/18/2022-02:36:35] [V] [TRT] Fastest Tactic: 0 Time: 0.022784
[04/18/2022-02:36:35] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,64,1) -> Float(8192,1024,1,512,8) ***************
[04/18/2022-02:36:35] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1043:0 copy (Reformat)
[04/18/2022-02:36:35] [V] [TRT] Tactic: 1002 Time: 0.034432
[04/18/2022-02:36:35] [V] [TRT] Tactic: 0 Time: 0.020352
[04/18/2022-02:36:35] [V] [TRT] Fastest Tactic: 0 Time: 0.020352
[04/18/2022-02:36:35] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:35] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,64,1) -> Float(2048,256,1:4,128,2) ***************
[04/18/2022-02:36:35] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1043:0 copy (Reformat)
[04/18/2022-02:36:36] [V] [TRT] Tactic: 1002 Time: 0.032896
[04/18/2022-02:36:36] [V] [TRT] Tactic: 0 Time: 0.021248
[04/18/2022-02:36:36] [V] [TRT] Fastest Tactic: 0 Time: 0.021248
[04/18/2022-02:36:36] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,64,1) -> Float(1024,128,128:32,64,1) ***************
[04/18/2022-02:36:36] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1043:0 copy (Reformat)
[04/18/2022-02:36:36] [V] [TRT] Tactic: 1002 Time: 0.032896
[04/18/2022-02:36:36] [V] [TRT] Tactic: 0 Time: 0.034432
[04/18/2022-02:36:36] [V] [TRT] Fastest Tactic: 1002 Time: 0.032896
[04/18/2022-02:36:36] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002
[04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,512,8) -> Float(8192,1024,128,64,1) ***************
[04/18/2022-02:36:36] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1043:0 copy (Reformat)
[04/18/2022-02:36:36] [V] [TRT] Tactic: 1002 Time: 0.059904
[04/18/2022-02:36:36] [V] [TRT] Tactic: 0 Time: 0.022656
[04/18/2022-02:36:36] [V] [TRT] Fastest Tactic: 0 Time: 0.022656
[04/18/2022-02:36:36] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,512,8) -> Float(8192,1024,1,512,8) ***************
[04/18/2022-02:36:36] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1043:0 copy (Reformat)
[04/18/2022-02:36:36] [V] [TRT] Tactic: 1002 Time: 0.025472
[04/18/2022-02:36:36] [V] [TRT] Tactic: 0 Time: 0.020096
[04/18/2022-02:36:36] [V] [TRT] Fastest Tactic: 0 Time: 0.020096
[04/18/2022-02:36:36] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,512,8) -> Float(2048,256,1:4,128,2) ***************
[04/18/2022-02:36:36] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1043:0 copy (Reformat)
[04/18/2022-02:36:36] [V] [TRT] Tactic: 1002 Time: 0.059904
[04/18/2022-02:36:36] [V] [TRT] Tactic: 0 Time: 0.020608
[04/18/2022-02:36:36] [V] [TRT] Fastest Tactic: 0 Time: 0.020608
[04/18/2022-02:36:36] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,512,8) -> Float(1024,128,128:32,64,1) ***************
[04/18/2022-02:36:36] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1043:0 copy (Reformat)
[04/18/2022-02:36:36] [V] [TRT] Tactic: 1002 Time: 0.061312
[04/18/2022-02:36:36] [V] [TRT] Tactic: 0 Time: 0.034944
[04/18/2022-02:36:36] [V] [TRT] Fastest Tactic: 0 Time: 0.034944
[04/18/2022-02:36:36] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,128,2) -> Float(8192,1024,128,64,1) ***************
[04/18/2022-02:36:36] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1043:0 copy (Reformat)
[04/18/2022-02:36:36] [V] [TRT] Tactic: 1002 Time: 0.061184
[04/18/2022-02:36:36] [V] [TRT] Tactic: 0 Time: 0.0224
[04/18/2022-02:36:36] [V] [TRT] Fastest Tactic: 0 Time: 0.0224
[04/18/2022-02:36:36] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,128,2) -> Float(8192,1024,1,512,8) ***************
[04/18/2022-02:36:36] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1043:0 copy (Reformat)
[04/18/2022-02:36:36] [V] [TRT] Tactic: 1002 Time: 0.05952
[04/18/2022-02:36:36] [V] [TRT] Tactic: 0 Time: 0.022016
[04/18/2022-02:36:36] [V] [TRT] Fastest Tactic: 0 Time: 0.022016
[04/18/2022-02:36:36] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,128,2) -> Float(2048,256,1:4,128,2) ***************
[04/18/2022-02:36:36] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1043:0 copy (Reformat)
[04/18/2022-02:36:36] [V] [TRT] Tactic: 1002 Time: 0.059776
[04/18/2022-02:36:36] [V] [TRT] Tactic: 0 Time: 0.022528
[04/18/2022-02:36:36] [V] [TRT] Fastest Tactic: 0 Time: 0.022528
[04/18/2022-02:36:36] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,128,2) -> Float(1024,128,128:32,64,1) ***************
[04/18/2022-02:36:36] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1043:0 copy (Reformat)
[04/18/2022-02:36:36] [V] [TRT] Tactic: 1002 Time: 0.062464
[04/18/2022-02:36:36] [V] [TRT] Tactic: 0 Time: 0.035328
[04/18/2022-02:36:36] [V] [TRT] Fastest Tactic: 0 Time: 0.035328
[04/18/2022-02:36:36] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,64,1) -> Float(8192,1024,128,64,1) ***************
[04/18/2022-02:36:36] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1043:0 copy (Reformat)
[04/18/2022-02:36:36] [V] [TRT] Tactic: 1002 Time: 0.057856
[04/18/2022-02:36:36] [V] [TRT] Tactic: 0 Time: 0.024192
[04/18/2022-02:36:36] [V] [TRT] Fastest Tactic: 0 Time: 0.024192
[04/18/2022-02:36:36] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,64,1) -> Float(8192,1024,1,512,8) ***************
[04/18/2022-02:36:36] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1043:0 copy (Reformat)
[04/18/2022-02:36:36] [V] [TRT] Tactic: 1002 Time: 0.059648
[04/18/2022-02:36:36] [V] [TRT] Tactic: 0 Time: 0.022016
[04/18/2022-02:36:36] [V] [TRT] Fastest Tactic: 0 Time: 0.022016
[04/18/2022-02:36:36] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,64,1) -> Float(2048,256,1:4,128,2) ***************
[04/18/2022-02:36:36] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1043:0 copy (Reformat)
[04/18/2022-02:36:36] [V] [TRT] Tactic: 1002 Time: 0.059264
[04/18/2022-02:36:36] [V] [TRT] Tactic: 0 Time: 0.023168
[04/18/2022-02:36:36] [V] [TRT] Fastest Tactic: 0 Time: 0.023168
[04/18/2022-02:36:36] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,64,1) -> Float(1024,128,128:32,64,1) ***************
[04/18/2022-02:36:36] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1043:0 copy (Reformat)
[04/18/2022-02:36:36] [V] [TRT] Tactic: 1002 Time: 0.0608
[04/18/2022-02:36:36] [V] [TRT] Tactic: 0 Time: 0.036608
[04/18/2022-02:36:36] [V] [TRT] Fastest Tactic: 0 Time: 0.036608
[04/18/2022-02:36:36] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:36] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,64,1) -> Float(8192,1024,128,64,1) ***************
[04/18/2022-02:36:36] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1043:0 copy (Reformat)
[04/18/2022-02:36:36] [V] [TRT] Tactic: 1002 Time: 0.204288
[04/18/2022-02:36:36] [V] [TRT] Tactic: 0 Time: 0.019584
[04/18/2022-02:36:36] [V] [TRT] Fastest Tactic: 0 Time: 0.019584
[04/18/2022-02:36:36] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,64,1) -> Float(8192,1024,1,512,8) ***************
[04/18/2022-02:36:36] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1043:0 copy (Reformat)
[04/18/2022-02:36:36] [V] [TRT] Tactic: 1002 Time: 0.032896
[04/18/2022-02:36:36] [V] [TRT] Tactic: 0 Time: 0.020224
[04/18/2022-02:36:36] [V] [TRT] Fastest Tactic: 0 Time: 0.020224
[04/18/2022-02:36:36] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,64,1) -> Float(2048,256,1:4,128,2) ***************
[04/18/2022-02:36:36] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1043:0 copy (Reformat)
[04/18/2022-02:36:36] [V] [TRT] Tactic: 1002 Time: 0.033024
[04/18/2022-02:36:36] [V] [TRT] Tactic: 0 Time: 0.021632
[04/18/2022-02:36:36] [V] [TRT] Fastest Tactic: 0 Time: 0.021632
[04/18/2022-02:36:36] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,64,1) -> Float(1024,128,128:32,64,1) ***************
[04/18/2022-02:36:36] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1043:0 copy (Reformat)
[04/18/2022-02:36:36] [V] [TRT] Tactic: 1002 Time: 0.058112
[04/18/2022-02:36:36] [V] [TRT] Tactic: 0 Time: 0.089728
[04/18/2022-02:36:36] [V] [TRT] Fastest Tactic: 1002 Time: 0.058112
[04/18/2022-02:36:36] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002
[04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,512,8) -> Float(8192,1024,128,64,1) ***************
[04/18/2022-02:36:36] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1043:0 copy (Reformat)
[04/18/2022-02:36:36] [V] [TRT] Tactic: 1002 Time: 0.059136
[04/18/2022-02:36:36] [V] [TRT] Tactic: 0 Time: 0.020864
[04/18/2022-02:36:36] [V] [TRT] Fastest Tactic: 0 Time: 0.020864
[04/18/2022-02:36:36] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,512,8) -> Float(8192,1024,1,512,8) ***************
[04/18/2022-02:36:36] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1043:0 copy (Reformat)
[04/18/2022-02:36:36] [V] [TRT] Tactic: 1002 Time: 0.025216
[04/18/2022-02:36:36] [V] [TRT] Tactic: 0 Time: 0.019968
[04/18/2022-02:36:36] [V] [TRT] Fastest Tactic: 0 Time: 0.019968
[04/18/2022-02:36:36] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,512,8) -> Float(2048,256,1:4,128,2) ***************
[04/18/2022-02:36:36] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1043:0 copy (Reformat)
[04/18/2022-02:36:36] [V] [TRT] Tactic: 1002 Time: 0.060544
[04/18/2022-02:36:36] [V] [TRT] Tactic: 0 Time: 0.021632
[04/18/2022-02:36:36] [V] [TRT] Fastest Tactic: 0 Time: 0.021632
[04/18/2022-02:36:36] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,512,8) -> Float(1024,128,128:32,64,1) ***************
[04/18/2022-02:36:36] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1043:0 copy (Reformat)
[04/18/2022-02:36:36] [V] [TRT] Tactic: 1002 Time: 0.059264
[04/18/2022-02:36:36] [V] [TRT] Tactic: 0 Time: 0.087552
[04/18/2022-02:36:36] [V] [TRT] Fastest Tactic: 1002 Time: 0.059264
[04/18/2022-02:36:36] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002
[04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,128,2) -> Float(8192,1024,128,64,1) ***************
[04/18/2022-02:36:36] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1043:0 copy (Reformat)
[04/18/2022-02:36:36] [V] [TRT] Tactic: 1002 Time: 0.060288
[04/18/2022-02:36:36] [V] [TRT] Tactic: 0 Time: 0.02176
[04/18/2022-02:36:36] [V] [TRT] Fastest Tactic: 0 Time: 0.02176
[04/18/2022-02:36:36] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,128,2) -> Float(8192,1024,1,512,8) ***************
[04/18/2022-02:36:36] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1043:0 copy (Reformat)
[04/18/2022-02:36:36] [V] [TRT] Tactic: 1002 Time: 0.06272
[04/18/2022-02:36:36] [V] [TRT] Tactic: 0 Time: 0.021248
[04/18/2022-02:36:36] [V] [TRT] Fastest Tactic: 0 Time: 0.021248
[04/18/2022-02:36:36] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,128,2) -> Float(2048,256,1:4,128,2) ***************
[04/18/2022-02:36:36] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1043:0 copy (Reformat)
[04/18/2022-02:36:36] [V] [TRT] Tactic: 1002 Time: 0.05952
[04/18/2022-02:36:36] [V] [TRT] Tactic: 0 Time: 0.022528
[04/18/2022-02:36:36] [V] [TRT] Fastest Tactic: 0 Time: 0.022528
[04/18/2022-02:36:36] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,128,2) -> Float(1024,128,128:32,64,1) ***************
[04/18/2022-02:36:36] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1043:0 copy (Reformat)
[04/18/2022-02:36:36] [V] [TRT] Tactic: 1002 Time: 0.061568
[04/18/2022-02:36:36] [V] [TRT] Tactic: 0 Time: 0.090368
[04/18/2022-02:36:36] [V] [TRT] Fastest Tactic: 1002 Time: 0.061568
[04/18/2022-02:36:36] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002
[04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,64,1) -> Float(8192,1024,128,64,1) ***************
[04/18/2022-02:36:36] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1043:0 copy (Reformat)
[04/18/2022-02:36:36] [V] [TRT] Tactic: 1002 Time: 0.059008
[04/18/2022-02:36:36] [V] [TRT] Tactic: 0 Time: 0.024448
[04/18/2022-02:36:36] [V] [TRT] Fastest Tactic: 0 Time: 0.024448
[04/18/2022-02:36:36] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,64,1) -> Float(8192,1024,1,512,8) ***************
[04/18/2022-02:36:36] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1043:0 copy (Reformat)
[04/18/2022-02:36:36] [V] [TRT] Tactic: 1002 Time: 0.062848
[04/18/2022-02:36:36] [V] [TRT] Tactic: 0 Time: 0.021504
[04/18/2022-02:36:36] [V] [TRT] Fastest Tactic: 0 Time: 0.021504
[04/18/2022-02:36:36] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,64,1) -> Float(2048,256,1:4,128,2) ***************
[04/18/2022-02:36:36] [V] [TRT] --------------- Timing Runner: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1043:0 copy (Reformat) [04/18/2022-02:36:36] [V] [TRT] Tactic: 1002 Time: 0.0608 [04/18/2022-02:36:36] [V] [TRT] Tactic: 0 Time: 0.022656 [04/18/2022-02:36:36] [V] [TRT] Fastest Tactic: 0 Time: 0.022656 [04/18/2022-02:36:36] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,64,1) -> Float(1024,128,128:32,64,1) *************** [04/18/2022-02:36:36] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1043:0 copy (Reformat) [04/18/2022-02:36:36] [V] [TRT] Tactic: 1002 Time: 0.060672 [04/18/2022-02:36:36] [V] [TRT] Tactic: 0 Time: 0.092032 [04/18/2022-02:36:36] [V] [TRT] Fastest Tactic: 1002 Time: 0.060672 [04/18/2022-02:36:36] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002 [04/18/2022-02:36:36] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,128,64,1) -> Float(8192,1024,1,512,8) *************** [04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,128,64,1) -> Float(2048,256,1:4,128,2) *************** [04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,128,64,1) -> Float(1024,128,128:32,64,1) *************** [04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1,512,8) -> Float(8192,1024,128,64,1) *************** [04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1,512,8) -> Float(2048,256,1:4,128,2) *************** [04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1,512,8) -> 
Float(1024,128,128:32,64,1) *************** [04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(2048,256,1:4,128,2) -> Float(8192,1024,128,64,1) *************** [04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(2048,256,1:4,128,2) -> Float(8192,1024,1,512,8) *************** [04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(2048,256,1:4,128,2) -> Float(1024,128,128:32,64,1) *************** [04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,128:32,64,1) -> Float(8192,1024,128,64,1) *************** [04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,128:32,64,1) -> Float(8192,1024,1,512,8) *************** [04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,128:32,64,1) -> Float(2048,256,1:4,128,2) *************** [04/18/2022-02:36:36] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,128,64,1) -> Float(8192,1024,1024,1,512,8) *************** [04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,128,64,1) -> Float(2048,256,256,1:4,128,2) *************** [04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,128,64,1) -> Float(1024,128,128,128:32,64,1) *************** [04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,1,512,8) -> Float(8192,1024,1024,128,64,1) *************** [04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,1,512,8) -> Float(2048,256,256,1:4,128,2) *************** [04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,1,512,8) -> Float(1024,128,128,128:32,64,1) *************** [04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(2048,256,256,1:4,128,2) -> Float(8192,1024,1024,128,64,1) 
*************** [04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(2048,256,256,1:4,128,2) -> Float(8192,1024,1024,1,512,8) *************** [04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(2048,256,256,1:4,128,2) -> Float(1024,128,128,128:32,64,1) *************** [04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,128,128:32,64,1) -> Float(8192,1024,1024,128,64,1) *************** [04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,128,128:32,64,1) -> Float(8192,1024,1024,1,512,8) *************** [04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,128,128:32,64,1) -> Float(2048,256,256,1:4,128,2) *************** [04/18/2022-02:36:36] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,128,64,1) -> Float(16384,2048,1024,128,64,1) *************** [04/18/2022-02:36:36] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1046:0 copy (Reformat) [04/18/2022-02:36:36] [V] [TRT] Tactic: 1002 Time: 0.365056 [04/18/2022-02:36:36] [V] [TRT] Tactic: 0 Time: 0.025216 [04/18/2022-02:36:36] [V] [TRT] Fastest Tactic: 0 Time: 0.025216 [04/18/2022-02:36:36] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,128,64,1) -> Float(16384,2048,1024,1,512,8) *************** [04/18/2022-02:36:36] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1046:0 copy (Reformat) [04/18/2022-02:36:36] [V] [TRT] Tactic: 1002 Time: 0.392704 [04/18/2022-02:36:36] [V] [TRT] 
Tactic: 0 Time: 0.026368 [04/18/2022-02:36:36] [V] [TRT] Fastest Tactic: 0 Time: 0.026368 [04/18/2022-02:36:36] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,128,64,1) -> Float(4096,512,256,1:4,128,2) *************** [04/18/2022-02:36:36] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1046:0 copy (Reformat) [04/18/2022-02:36:36] [V] [TRT] Tactic: 1002 Time: 0.396672 [04/18/2022-02:36:36] [V] [TRT] Tactic: 0 Time: 0.027648 [04/18/2022-02:36:36] [V] [TRT] Fastest Tactic: 0 Time: 0.027648 [04/18/2022-02:36:36] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,128,64,1) -> Float(2048,256,128,128:32,64,1) *************** [04/18/2022-02:36:36] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1046:0 copy (Reformat) [04/18/2022-02:36:36] [V] [TRT] Tactic: 1002 Time: 0.3968 [04/18/2022-02:36:36] [V] [TRT] Tactic: 0 Time: 0.05568 [04/18/2022-02:36:36] [V] [TRT] Fastest Tactic: 0 Time: 0.05568 [04/18/2022-02:36:36] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,1,512,8) -> Float(16384,2048,1024,128,64,1) *************** [04/18/2022-02:36:36] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1046:0 copy (Reformat) [04/18/2022-02:36:36] [V] [TRT] Tactic: 1002 Time: 0.386944 [04/18/2022-02:36:36] [V] [TRT] Tactic: 0 
Time: 0.027392 [04/18/2022-02:36:36] [V] [TRT] Fastest Tactic: 0 Time: 0.027392 [04/18/2022-02:36:36] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,1,512,8) -> Float(16384,2048,1024,1,512,8) *************** [04/18/2022-02:36:36] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1046:0 copy (Reformat) [04/18/2022-02:36:36] [V] [TRT] Tactic: 1002 Time: 3.13664 [04/18/2022-02:36:36] [V] [TRT] Tactic: 0 Time: 0.027648 [04/18/2022-02:36:36] [V] [TRT] Fastest Tactic: 0 Time: 0.027648 [04/18/2022-02:36:36] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,1,512,8) -> Float(4096,512,256,1:4,128,2) *************** [04/18/2022-02:36:36] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1046:0 copy (Reformat) [04/18/2022-02:36:36] [V] [TRT] Tactic: 1002 Time: 3.136 [04/18/2022-02:36:36] [V] [TRT] Tactic: 0 Time: 0.029056 [04/18/2022-02:36:36] [V] [TRT] Fastest Tactic: 0 Time: 0.029056 [04/18/2022-02:36:36] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:36] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,1,512,8) -> Float(2048,256,128,128:32,64,1) *************** [04/18/2022-02:36:36] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1046:0 copy (Reformat) [04/18/2022-02:36:37] [V] [TRT] Tactic: 1002 Time: 3.13574 [04/18/2022-02:36:37] [V] [TRT] Tactic: 0 Time: 0.06016 
[04/18/2022-02:36:37] [V] [TRT] Fastest Tactic: 0 Time: 0.06016
[04/18/2022-02:36:37] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:37] [V] [TRT] *************** Autotuning Reformat: Float(2048,256,256,1:4,128,2) -> Float(16384,2048,1024,128,64,1) ***************
[04/18/2022-02:36:37] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1046:0 copy (Reformat)
[04/18/2022-02:36:37] [V] [TRT] Tactic: 1002 Time: 0.3936
[04/18/2022-02:36:37] [V] [TRT] Tactic: 0 Time: 0.03008
[04/18/2022-02:36:37] [V] [TRT] Fastest Tactic: 0 Time: 0.03008
[04/18/2022-02:36:37] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:37] [V] [TRT] *************** Autotuning Reformat: Float(2048,256,256,1:4,128,2) -> Float(16384,2048,1024,1,512,8) ***************
[04/18/2022-02:36:37] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1046:0 copy (Reformat)
[04/18/2022-02:36:37] [V] [TRT] Tactic: 1002 Time: 3.1424
[04/18/2022-02:36:37] [V] [TRT] Tactic: 0 Time: 0.02944
[04/18/2022-02:36:37] [V] [TRT] Fastest Tactic: 0 Time: 0.02944
[04/18/2022-02:36:37] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:37] [V] [TRT] *************** Autotuning Reformat: Float(2048,256,256,1:4,128,2) -> Float(4096,512,256,1:4,128,2) ***************
[04/18/2022-02:36:37] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1046:0 copy (Reformat)
[04/18/2022-02:36:37] [V] [TRT] Tactic: 1002 Time: 3.14176
[04/18/2022-02:36:37] [V] [TRT] Tactic: 0 Time: 0.031616
[04/18/2022-02:36:37] [V] [TRT] Fastest Tactic: 0 Time: 0.031616
[04/18/2022-02:36:37] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:37] [V] [TRT] *************** Autotuning Reformat: Float(2048,256,256,1:4,128,2) -> Float(2048,256,128,128:32,64,1) ***************
[04/18/2022-02:36:37] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1046:0 copy (Reformat)
[04/18/2022-02:36:37] [V] [TRT] Tactic: 1002 Time: 3.14138
[04/18/2022-02:36:37] [V] [TRT] Tactic: 0 Time: 0.056192
[04/18/2022-02:36:37] [V] [TRT] Fastest Tactic: 0 Time: 0.056192
[04/18/2022-02:36:37] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:37] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,128,128:32,64,1) -> Float(16384,2048,1024,128,64,1) ***************
[04/18/2022-02:36:37] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1046:0 copy (Reformat)
[04/18/2022-02:36:37] [V] [TRT] Tactic: 1002 Time: 0.393344
[04/18/2022-02:36:37] [V] [TRT] Tactic: 0 Time: 0.031616
[04/18/2022-02:36:37] [V] [TRT] Fastest Tactic: 0 Time: 0.031616
[04/18/2022-02:36:37] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:37] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,128,128:32,64,1) -> Float(16384,2048,1024,1,512,8) ***************
[04/18/2022-02:36:37] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1046:0 copy (Reformat)
[04/18/2022-02:36:37] [V] [TRT] Tactic: 1002 Time: 3.14291
[04/18/2022-02:36:37] [V] [TRT] Tactic: 0 Time: 0.029312
[04/18/2022-02:36:37] [V] [TRT] Fastest Tactic: 0 Time: 0.029312
[04/18/2022-02:36:37] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:37] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,128,128:32,64,1) -> Float(4096,512,256,1:4,128,2) ***************
[04/18/2022-02:36:37] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1046:0 copy (Reformat)
[04/18/2022-02:36:37] [V] [TRT] Tactic: 1002 Time: 3.84742
[04/18/2022-02:36:37] [V] [TRT] Tactic: 0 Time: 0.031488
[04/18/2022-02:36:37] [V] [TRT] Fastest Tactic: 0 Time: 0.031488
[04/18/2022-02:36:37] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:37] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,128,128:32,64,1) -> Float(2048,256,128,128:32,64,1) ***************
[04/18/2022-02:36:37] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1046:0 copy (Reformat)
[04/18/2022-02:36:37] [V] [TRT] Tactic: 1002 Time: 3.14918
[04/18/2022-02:36:37] [V] [TRT] Tactic: 0 Time: 0.064384
[04/18/2022-02:36:37] [V] [TRT] Fastest Tactic: 0 Time: 0.064384
[04/18/2022-02:36:37] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:37] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:37] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,128,64,1) -> Float(16384,2048,1024,128,64,1) ***************
[04/18/2022-02:36:37] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1046:0 copy (Reformat)
[04/18/2022-02:36:37] [V] [TRT] Tactic: 1002 Time: 0.364672
[04/18/2022-02:36:37] [V] [TRT] Tactic: 0 Time: 0.027136
[04/18/2022-02:36:37] [V] [TRT] Fastest Tactic: 0 Time: 0.027136
[04/18/2022-02:36:37] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:37] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,128,64,1) -> Float(16384,2048,1024,1,512,8) ***************
[04/18/2022-02:36:37] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1046:0 copy (Reformat)
[04/18/2022-02:36:37] [V] [TRT] Tactic: 1002 Time: 0.393728
[04/18/2022-02:36:37] [V] [TRT] Tactic: 0 Time: 0.028288
[04/18/2022-02:36:37] [V] [TRT] Fastest Tactic: 0 Time: 0.028288
[04/18/2022-02:36:37] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:37] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,128,64,1) -> Float(4096,512,256,1:4,128,2) ***************
[04/18/2022-02:36:37] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1046:0 copy (Reformat)
[04/18/2022-02:36:37] [V] [TRT] Tactic: 1002 Time: 0.397568
[04/18/2022-02:36:37] [V] [TRT] Tactic: 0 Time: 0.028928
[04/18/2022-02:36:37] [V] [TRT] Fastest Tactic: 0 Time: 0.028928
[04/18/2022-02:36:37] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:37] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,128,64,1) -> Float(2048,256,128,128:32,64,1) ***************
[04/18/2022-02:36:37] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1046:0 copy (Reformat)
[04/18/2022-02:36:37] [V] [TRT] Tactic: 1002 Time: 4.04902
[04/18/2022-02:36:37] [V] [TRT] Tactic: 0 Time: 0.15616
[04/18/2022-02:36:37] [V] [TRT] Fastest Tactic: 0 Time: 0.15616
[04/18/2022-02:36:37] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:37] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,1,512,8) -> Float(16384,2048,1024,128,64,1) ***************
[04/18/2022-02:36:37] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1046:0 copy (Reformat)
[04/18/2022-02:36:37] [V] [TRT] Tactic: 1002 Time: 0.388864
[04/18/2022-02:36:37] [V] [TRT] Tactic: 0 Time: 0.029056
[04/18/2022-02:36:37] [V] [TRT] Fastest Tactic: 0 Time: 0.029056
[04/18/2022-02:36:37] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:37] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,1,512,8) -> Float(16384,2048,1024,1,512,8) ***************
[04/18/2022-02:36:37] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1046:0 copy (Reformat)
[04/18/2022-02:36:37] [V] [TRT] Tactic: 1002 Time: 3.13715
[04/18/2022-02:36:37] [V] [TRT] Tactic: 0 Time: 0.027648
[04/18/2022-02:36:37] [V] [TRT] Fastest Tactic: 0 Time: 0.027648
[04/18/2022-02:36:37] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:37] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,1,512,8) -> Float(4096,512,256,1:4,128,2) ***************
[04/18/2022-02:36:37] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1046:0 copy (Reformat)
[04/18/2022-02:36:37] [V] [TRT] Tactic: 1002 Time: 3.13638
[04/18/2022-02:36:37] [V] [TRT] Tactic: 0 Time: 0.029312
[04/18/2022-02:36:37] [V] [TRT] Fastest Tactic: 0 Time: 0.029312
[04/18/2022-02:36:37] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:37] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,1,512,8) -> Float(2048,256,128,128:32,64,1) ***************
[04/18/2022-02:36:37] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1046:0 copy (Reformat)
[04/18/2022-02:36:38] [V] [TRT] Tactic: 1002 Time: 3.136
[04/18/2022-02:36:38] [V] [TRT] Tactic: 0 Time: 0.157056
[04/18/2022-02:36:38] [V] [TRT] Fastest Tactic: 0 Time: 0.157056
[04/18/2022-02:36:38] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(2048,256,256,1:4,128,2) -> Float(16384,2048,1024,128,64,1) ***************
[04/18/2022-02:36:38] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1046:0 copy (Reformat)
[04/18/2022-02:36:38] [V] [TRT] Tactic: 1002 Time: 0.393984
[04/18/2022-02:36:38] [V] [TRT] Tactic: 0 Time: 0.030208
[04/18/2022-02:36:38] [V] [TRT] Fastest Tactic: 0 Time: 0.030208
[04/18/2022-02:36:38] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(2048,256,256,1:4,128,2) -> Float(16384,2048,1024,1,512,8) ***************
[04/18/2022-02:36:38] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1046:0 copy (Reformat)
[04/18/2022-02:36:38] [V] [TRT] Tactic: 1002 Time: 3.7481
[04/18/2022-02:36:38] [V] [TRT] Tactic: 0 Time: 0.029184
[04/18/2022-02:36:38] [V] [TRT] Fastest Tactic: 0 Time: 0.029184
[04/18/2022-02:36:38] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(2048,256,256,1:4,128,2) -> Float(4096,512,256,1:4,128,2) ***************
[04/18/2022-02:36:38] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1046:0 copy (Reformat)
[04/18/2022-02:36:38] [V] [TRT] Tactic: 1002 Time: 3.14163
[04/18/2022-02:36:38] [V] [TRT] Tactic: 0 Time: 0.031488
[04/18/2022-02:36:38] [V] [TRT] Fastest Tactic: 0 Time: 0.031488
[04/18/2022-02:36:38] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(2048,256,256,1:4,128,2) -> Float(2048,256,128,128:32,64,1) ***************
[04/18/2022-02:36:38] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1046:0 copy (Reformat)
[04/18/2022-02:36:38] [V] [TRT] Tactic: 1002 Time: 3.71622
[04/18/2022-02:36:38] [V] [TRT] Tactic: 0 Time: 0.157568
[04/18/2022-02:36:38] [V] [TRT] Fastest Tactic: 0 Time: 0.157568
[04/18/2022-02:36:38] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,128,128:32,64,1) -> Float(16384,2048,1024,128,64,1) ***************
[04/18/2022-02:36:38] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1046:0 copy (Reformat)
[04/18/2022-02:36:38] [V] [TRT] Tactic: 1002 Time: 0.392704
[04/18/2022-02:36:38] [V] [TRT] Tactic: 0 Time: 0.031872
[04/18/2022-02:36:38] [V] [TRT] Fastest Tactic: 0 Time: 0.031872
[04/18/2022-02:36:38] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,128,128:32,64,1) -> Float(16384,2048,1024,1,512,8) ***************
[04/18/2022-02:36:38] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1046:0 copy (Reformat)
[04/18/2022-02:36:38] [V] [TRT] Tactic: 1002 Time: 3.14202
[04/18/2022-02:36:38] [V] [TRT] Tactic: 0 Time: 0.029312
[04/18/2022-02:36:38] [V] [TRT] Fastest Tactic: 0 Time: 0.029312
[04/18/2022-02:36:38] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,128,128:32,64,1) -> Float(4096,512,256,1:4,128,2) ***************
[04/18/2022-02:36:38] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1046:0 copy (Reformat)
[04/18/2022-02:36:38] [V] [TRT] Tactic: 1002 Time: 3.80966
[04/18/2022-02:36:38] [V] [TRT] Tactic: 0 Time: 0.03072
[04/18/2022-02:36:38] [V] [TRT] Fastest Tactic: 0 Time: 0.03072
[04/18/2022-02:36:38] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,128,128:32,64,1) -> Float(2048,256,128,128:32,64,1) ***************
[04/18/2022-02:36:38] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/input_2_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1046:0 copy (Reformat)
[04/18/2022-02:36:38] [V] [TRT] Tactic: 1002 Time: 3.14355
[04/18/2022-02:36:38] [V] [TRT] Tactic: 0 Time: 0.164608
[04/18/2022-02:36:38] [V] [TRT] Fastest Tactic: 0 Time: 0.164608
[04/18/2022-02:36:38] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:38] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,2048,1024,128,64,1) -> Float(16384,2048,1024,1,512,8) ***************
[04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,2048,1024,128,64,1) -> Float(4096,512,256,1:4,128,2) ***************
[04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,2048,1024,128,64,1) -> Float(2048,256,128,128:32,64,1) ***************
[04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,2048,1024,1,512,8) -> Float(16384,2048,1024,128,64,1) ***************
[04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,2048,1024,1,512,8) -> Float(4096,512,256,1:4,128,2) ***************
[04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,2048,1024,1,512,8) -> Float(2048,256,128,128:32,64,1) ***************
[04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,256,1:4,128,2) -> Float(16384,2048,1024,128,64,1) ***************
[04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,256,1:4,128,2) -> Float(16384,2048,1024,1,512,8) ***************
[04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,256,1:4,128,2) -> Float(2048,256,128,128:32,64,1) ***************
[04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(2048,256,128,128:32,64,1) -> Float(16384,2048,1024,128,64,1) ***************
[04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(2048,256,128,128:32,64,1) -> Float(16384,2048,1024,1,512,8) ***************
[04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(2048,256,128,128:32,64,1) -> Float(4096,512,256,1:4,128,2) ***************
[04/18/2022-02:36:38] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(32768,2048,128,2,1) ***************
[04/18/2022-02:36:38] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1105:0 copy (Reformat)
[04/18/2022-02:36:38] [V] [TRT] Tactic: 1002 Time: 1.38803
[04/18/2022-02:36:38] [V] [TRT] Tactic: 0 Time: 0.032512
[04/18/2022-02:36:38] [V] [TRT] Fastest Tactic: 0 Time: 0.032512
[04/18/2022-02:36:38] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,16,16) -> Float(32768,2048,128,2,1) ***************
[04/18/2022-02:36:38] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1105:0 copy (Reformat)
[04/18/2022-02:36:38] [V] [TRT] Tactic: 1002 Time: 0.361344
[04/18/2022-02:36:38] [V] [TRT] Tactic: 0 Time: 0.033536
[04/18/2022-02:36:38] [V] [TRT] Fastest Tactic: 0 Time: 0.033536
[04/18/2022-02:36:38] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,4,4) -> Float(32768,2048,128,2,1) ***************
[04/18/2022-02:36:38] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1105:0 copy (Reformat)
[04/18/2022-02:36:38] [V] [TRT] Tactic: 1002 Time: 0.361728
[04/18/2022-02:36:38] [V] [TRT] Tactic: 0 Time: 0.035712
[04/18/2022-02:36:38] [V] [TRT] Fastest Tactic: 0 Time: 0.035712
[04/18/2022-02:36:38] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,1,1) -> Float(32768,2048,128,2,1) ***************
[04/18/2022-02:36:38] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1105:0 copy (Reformat)
[04/18/2022-02:36:38] [V] [TRT] Tactic: 1002 Time: 0.35904
[04/18/2022-02:36:38] [V] [TRT] Tactic: 0 Time: 0.039552
[04/18/2022-02:36:38] [V] [TRT] Fastest Tactic: 0 Time: 0.039552
[04/18/2022-02:36:38] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:38] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(32768,2048,128,2,1) ***************
[04/18/2022-02:36:38] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/stack_Unsqueeze__1051:0 copy (Reformat)
[04/18/2022-02:36:38] [V] [TRT] Tactic: 1002 Time: 0.748672
[04/18/2022-02:36:38] [V] [TRT] Tactic: 0 Time: 0.032896
[04/18/2022-02:36:38] [V] [TRT] Fastest Tactic: 0 Time: 0.032896
[04/18/2022-02:36:38] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,16,16) -> Float(32768,2048,128,2,1) ***************
[04/18/2022-02:36:38] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/stack_Unsqueeze__1051:0 copy (Reformat)
[04/18/2022-02:36:38] [V] [TRT] Tactic: 1002 Time: 0.36096
[04/18/2022-02:36:38] [V] [TRT] Tactic: 0 Time: 0.034048
[04/18/2022-02:36:38] [V] [TRT] Fastest Tactic: 0 Time: 0.034048
[04/18/2022-02:36:38] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,4,4) -> Float(32768,2048,128,2,1) ***************
[04/18/2022-02:36:38] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/stack_Unsqueeze__1051:0 copy (Reformat)
[04/18/2022-02:36:38] [V] [TRT] Tactic: 1002 Time: 0.360576
[04/18/2022-02:36:38] [V] [TRT] Tactic: 0 Time: 0.0352
[04/18/2022-02:36:38] [V] [TRT] Fastest Tactic: 0 Time: 0.0352
[04/18/2022-02:36:38] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,1,1) -> Float(32768,2048,128,2,1) ***************
[04/18/2022-02:36:38] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_11/2_dn_lvl_5/combine/stack_Unsqueeze__1051:0 copy (Reformat)
[04/18/2022-02:36:38] [V] [TRT] Tactic: 1002 Time: 0.3616
[04/18/2022-02:36:38] [V] [TRT] Tactic: 0 Time: 0.039552
[04/18/2022-02:36:38] [V] [TRT] Fastest Tactic: 0 Time: 0.039552
[04/18/2022-02:36:38] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:38] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:38] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:38] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(16384,1024,1,16,16) ***************
[04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(4096,256,1:4,4,4) ***************
[04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(1024,64,64:32,1,1) ***************
[04/18/2022-02:36:38] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(16384,1024,1,16,16) ***************
[04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(4096,256,1:4,4,4) ***************
[04/18/2022-02:36:38] [V] [TRT] 
*************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(1024,64,64:32,1,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,16,16) -> Float(16384,1024,64,1,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,16,16) -> Float(4096,256,1:4,4,4) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,16,16) -> Float(1024,64,64:32,1,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,4,4) -> Float(16384,1024,64,1,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,4,4) -> Float(16384,1024,1,16,16) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,4,4) -> Float(1024,64,64:32,1,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,1,1) -> Float(16384,1024,64,1,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,1,1) -> Float(16384,1024,1,16,16) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,1,1) -> Float(4096,256,1:4,4,4) *************** [04/18/2022-02:36:38] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(16384,1,1024,16) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(4096,1:4,256,4) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(1024,1024:32,64,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(1:4,2048,128,2) *************** [04/18/2022-02:36:38] [V] [TRT] 
*************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(16384,1024,64,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(4096,1:4,256,4) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(1024,1024:32,64,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(1:4,2048,128,2) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(16384,1024,64,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(16384,1,1024,16) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(1024,1024:32,64,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(1:4,2048,128,2) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(16384,1024,64,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(16384,1,1024,16) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(4096,1:4,256,4) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(1:4,2048,128,2) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,128,2) -> Float(16384,1024,64,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,128,2) -> Float(16384,1,1024,16) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,128,2) -> Float(4096,1:4,256,4) *************** [04/18/2022-02:36:38] 
[V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,128,2) -> Float(1024,1024:32,64,1) *************** [04/18/2022-02:36:38] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(16384,1,1024,16) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(4096,1:4,256,4) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(1024,1024:32,64,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(16384,1024,64,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(4096,1:4,256,4) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(1024,1024:32,64,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(16384,1024,64,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(16384,1,1024,16) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(1024,1024:32,64,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(16384,1024,64,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(16384,1,1024,16) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(4096,1:4,256,4) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,128,2) -> Float(16384,1024,64,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning 
Reformat: Float(1:4,2048,128,2) -> Float(16384,1,1024,16) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,128,2) -> Float(4096,1:4,256,4) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,128,2) -> Float(1024,1024:32,64,1) *************** [04/18/2022-02:36:38] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(16384,1,1024,64) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(4096,1:4,256,16) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(16384,256,16,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(4096,1:4,256,16) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,256,16,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,1,1024,64) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(16384,256,16,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(16384,1,1024,64) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(4096,1:4,256,16) *************** [04/18/2022-02:36:38] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(16384,1,1024,64) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(4096,1:4,256,16) *************** [04/18/2022-02:36:38] [V] [TRT] 
*************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(16384,256,16,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(4096,1:4,256,16) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,256,16,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,1,1024,64) *************** [04/18/2022-02:36:38] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(16384,1,1024,64) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(4096,1:4,256,16) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(512,256:32,16,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(16384,256,16,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(4096,1:4,256,16) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(512,256:32,16,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,256,16,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,1,1024,64) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(512,256:32,16,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(16384,256,16,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: 
Float(512,256:32,16,1) -> Float(16384,1,1024,64) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(4096,1:4,256,16) *************** [04/18/2022-02:36:38] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(16384,1,1024,16) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(4096,1:4,256,4) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(1024,1024:32,64,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(16384,1024,64,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(4096,1:4,256,4) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(1024,1024:32,64,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(16384,1024,64,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(16384,1,1024,16) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(1024,1024:32,64,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(16384,1024,64,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(16384,1,1024,16) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(4096,1:4,256,4) *************** [04/18/2022-02:36:38] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:38] [V] [TRT] 
*************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(16384,1,1024,16) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(4096,1:4,256,4) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(1024,1024:32,64,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(16384,1024,64,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(4096,1:4,256,4) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(1024,1024:32,64,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(16384,1024,64,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(16384,1,1024,16) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(1024,1024:32,64,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(16384,1024,64,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(16384,1,1024,16) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(4096,1:4,256,4) *************** [04/18/2022-02:36:38] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(16384,1,1024,16) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(4096,1:4,256,4) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: 
Float(16384,1024,64,1) -> Float(1024,1024:32,64,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(16384,1024,64,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(4096,1:4,256,4) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(1024,1024:32,64,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(16384,1024,64,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(16384,1,1024,16) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(1024,1024:32,64,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(16384,1024,64,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(16384,1,1024,16) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(4096,1:4,256,4) *************** [04/18/2022-02:36:38] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,64,1) -> Float(16384,1024,1,1024,16) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,64,1) -> Float(4096,256,1:4,256,4) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,64,1) -> Float(1024,64,64:32,64,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,1024,16) -> Float(16384,1024,64,64,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: 
Float(16384,1024,1,1024,16) -> Float(4096,256,1:4,256,4) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,1024,16) -> Float(1024,64,64:32,64,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,256,4) -> Float(16384,1024,64,64,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,256,4) -> Float(16384,1024,1,1024,16) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,256,4) -> Float(1024,64,64:32,64,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,64,1) -> Float(16384,1024,64,64,1) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,64,1) -> Float(16384,1024,1,1024,16) *************** [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,64,1) -> Float(4096,256,1:4,256,4) *************** [04/18/2022-02:36:38] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,64,1) -> Float(32768,2048,128,64,1) *************** [04/18/2022-02:36:38] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1060:0 copy (Reformat) [04/18/2022-02:36:38] [V] [TRT] Tactic: 1002 Time: 0.732416 [04/18/2022-02:36:38] [V] [TRT] Tactic: 0 Time: 0.032384 [04/18/2022-02:36:38] [V] [TRT] Fastest Tactic: 0 Time: 0.032384 [04/18/2022-02:36:38] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,64,1) -> Float(32768,2048,1,1024,16) *************** [04/18/2022-02:36:38] [V] [TRT] --------------- Timing Runner: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1060:0 copy (Reformat) [04/18/2022-02:36:38] [V] [TRT] Tactic: 1002 Time: 0.040704 [04/18/2022-02:36:38] [V] [TRT] Tactic: 0 Time: 0.033408 [04/18/2022-02:36:38] [V] [TRT] Fastest Tactic: 0 Time: 0.033408 [04/18/2022-02:36:38] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,64,1) -> Float(8192,512,1:4,256,4) *************** [04/18/2022-02:36:38] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1060:0 copy (Reformat) [04/18/2022-02:36:38] [V] [TRT] Tactic: 1002 Time: 0.040832 [04/18/2022-02:36:38] [V] [TRT] Tactic: 0 Time: 0.036096 [04/18/2022-02:36:38] [V] [TRT] Fastest Tactic: 0 Time: 0.036096 [04/18/2022-02:36:38] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,64,1) -> Float(2048,128,128:32,64,1) *************** [04/18/2022-02:36:38] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1060:0 copy (Reformat) [04/18/2022-02:36:38] [V] [TRT] Tactic: 1002 Time: 0.042496 [04/18/2022-02:36:38] [V] [TRT] Tactic: 0 Time: 0.091136 [04/18/2022-02:36:38] [V] [TRT] Fastest Tactic: 1002 Time: 0.042496 [04/18/2022-02:36:38] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002 [04/18/2022-02:36:38] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,1024,16) -> Float(32768,2048,128,64,1) *************** [04/18/2022-02:36:38] [V] [TRT] --------------- Timing Runner: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1060:0 copy (Reformat) [04/18/2022-02:36:39] [V] [TRT] Tactic: 1002 Time: 0.095488 [04/18/2022-02:36:39] [V] [TRT] Tactic: 0 Time: 0.032512 [04/18/2022-02:36:39] [V] [TRT] Fastest Tactic: 0 Time: 0.032512 [04/18/2022-02:36:39] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,1024,16) -> Float(32768,2048,1,1024,16) *************** [04/18/2022-02:36:39] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1060:0 copy (Reformat) [04/18/2022-02:36:39] [V] [TRT] Tactic: 1002 Time: 0.032896 [04/18/2022-02:36:39] [V] [TRT] Tactic: 0 Time: 0.032768 [04/18/2022-02:36:39] [V] [TRT] Fastest Tactic: 0 Time: 0.032768 [04/18/2022-02:36:39] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,1024,16) -> Float(8192,512,1:4,256,4) *************** [04/18/2022-02:36:39] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1060:0 copy (Reformat) [04/18/2022-02:36:39] [V] [TRT] Tactic: 1002 Time: 0.093056 [04/18/2022-02:36:39] [V] [TRT] Tactic: 0 Time: 0.035328 [04/18/2022-02:36:39] [V] [TRT] Fastest Tactic: 0 Time: 0.035328 [04/18/2022-02:36:39] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,1024,16) -> Float(2048,128,128:32,64,1) *************** [04/18/2022-02:36:39] [V] [TRT] --------------- Timing Runner: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1060:0 copy (Reformat) [04/18/2022-02:36:39] [V] [TRT] Tactic: 1002 Time: 0.093824 [04/18/2022-02:36:39] [V] [TRT] Tactic: 0 Time: 0.097024 [04/18/2022-02:36:39] [V] [TRT] Fastest Tactic: 1002 Time: 0.093824 [04/18/2022-02:36:39] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002 [04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,256,4) -> Float(32768,2048,128,64,1) *************** [04/18/2022-02:36:39] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1060:0 copy (Reformat) [04/18/2022-02:36:39] [V] [TRT] Tactic: 1002 Time: 0.095616 [04/18/2022-02:36:39] [V] [TRT] Tactic: 0 Time: 0.035584 [04/18/2022-02:36:39] [V] [TRT] Fastest Tactic: 0 Time: 0.035584 [04/18/2022-02:36:39] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,256,4) -> Float(32768,2048,1,1024,16) *************** [04/18/2022-02:36:39] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1060:0 copy (Reformat) [04/18/2022-02:36:39] [V] [TRT] Tactic: 1002 Time: 0.093824 [04/18/2022-02:36:39] [V] [TRT] Tactic: 0 Time: 0.03584 [04/18/2022-02:36:39] [V] [TRT] Fastest Tactic: 0 Time: 0.03584 [04/18/2022-02:36:39] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,256,4) -> Float(8192,512,1:4,256,4) *************** [04/18/2022-02:36:39] [V] [TRT] --------------- Timing Runner: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1060:0 copy (Reformat) [04/18/2022-02:36:39] [V] [TRT] Tactic: 1002 Time: 0.093952 [04/18/2022-02:36:39] [V] [TRT] Tactic: 0 Time: 0.03904 [04/18/2022-02:36:39] [V] [TRT] Fastest Tactic: 0 Time: 0.03904 [04/18/2022-02:36:39] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,256,4) -> Float(2048,128,128:32,64,1) *************** [04/18/2022-02:36:39] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1060:0 copy (Reformat) [04/18/2022-02:36:39] [V] [TRT] Tactic: 1002 Time: 0.093696 [04/18/2022-02:36:39] [V] [TRT] Tactic: 0 Time: 0.098304 [04/18/2022-02:36:39] [V] [TRT] Fastest Tactic: 1002 Time: 0.093696 [04/18/2022-02:36:39] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002 [04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,64,1) -> Float(32768,2048,128,64,1) *************** [04/18/2022-02:36:39] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1060:0 copy (Reformat) [04/18/2022-02:36:39] [V] [TRT] Tactic: 1002 Time: 0.094208 [04/18/2022-02:36:39] [V] [TRT] Tactic: 0 Time: 0.038144 [04/18/2022-02:36:39] [V] [TRT] Fastest Tactic: 0 Time: 0.038144 [04/18/2022-02:36:39] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,64,1) -> Float(32768,2048,1,1024,16) *************** [04/18/2022-02:36:39] [V] [TRT] --------------- Timing Runner: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1060:0 copy (Reformat) [04/18/2022-02:36:39] [V] [TRT] Tactic: 1002 Time: 0.092928 [04/18/2022-02:36:39] [V] [TRT] Tactic: 0 Time: 0.03584 [04/18/2022-02:36:39] [V] [TRT] Fastest Tactic: 0 Time: 0.03584 [04/18/2022-02:36:39] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,64,1) -> Float(8192,512,1:4,256,4) *************** [04/18/2022-02:36:39] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1060:0 copy (Reformat) [04/18/2022-02:36:39] [V] [TRT] Tactic: 1002 Time: 0.097664 [04/18/2022-02:36:39] [V] [TRT] Tactic: 0 Time: 0.038656 [04/18/2022-02:36:39] [V] [TRT] Fastest Tactic: 0 Time: 0.038656 [04/18/2022-02:36:39] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,64,1) -> Float(2048,128,128:32,64,1) *************** [04/18/2022-02:36:39] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1060:0 copy (Reformat) [04/18/2022-02:36:39] [V] [TRT] Tactic: 1002 Time: 0.09344 [04/18/2022-02:36:39] [V] [TRT] Tactic: 0 Time: 0.105728 [04/18/2022-02:36:39] [V] [TRT] Fastest Tactic: 1002 Time: 0.09344 [04/18/2022-02:36:39] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002 [04/18/2022-02:36:39] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,64,1) -> Float(32768,2048,128,64,1) *************** [04/18/2022-02:36:39] 
[V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1060:0 copy (Reformat)
[04/18/2022-02:36:39] [V] [TRT] Tactic: 1002 Time: 0.7328
[04/18/2022-02:36:39] [V] [TRT] Tactic: 0 Time: 0.031616
[04/18/2022-02:36:39] [V] [TRT] Fastest Tactic: 0 Time: 0.031616
[04/18/2022-02:36:39] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,64,1) -> Float(32768,2048,1,1024,16) ***************
[04/18/2022-02:36:39] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1060:0 copy (Reformat)
[04/18/2022-02:36:39] [V] [TRT] Tactic: 1002 Time: 0.041216
[04/18/2022-02:36:39] [V] [TRT] Tactic: 0 Time: 0.033664
[04/18/2022-02:36:39] [V] [TRT] Fastest Tactic: 0 Time: 0.033664
[04/18/2022-02:36:39] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,64,1) -> Float(8192,512,1:4,256,4) ***************
[04/18/2022-02:36:39] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1060:0 copy (Reformat)
[04/18/2022-02:36:39] [V] [TRT] Tactic: 1002 Time: 0.040704
[04/18/2022-02:36:39] [V] [TRT] Tactic: 0 Time: 0.035328
[04/18/2022-02:36:39] [V] [TRT] Fastest Tactic: 0 Time: 0.035328
[04/18/2022-02:36:39] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,64,1) -> Float(2048,128,128:32,64,1) ***************
[04/18/2022-02:36:39] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1060:0 copy (Reformat)
[04/18/2022-02:36:39] [V] [TRT] Tactic: 1002 Time: 0.09664
[04/18/2022-02:36:39] [V] [TRT] Tactic: 0 Time: 0.156416
[04/18/2022-02:36:39] [V] [TRT] Fastest Tactic: 1002 Time: 0.09664
[04/18/2022-02:36:39] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002
[04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,1024,16) -> Float(32768,2048,128,64,1) ***************
[04/18/2022-02:36:39] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1060:0 copy (Reformat)
[04/18/2022-02:36:39] [V] [TRT] Tactic: 1002 Time: 0.095488
[04/18/2022-02:36:39] [V] [TRT] Tactic: 0 Time: 0.032512
[04/18/2022-02:36:39] [V] [TRT] Fastest Tactic: 0 Time: 0.032512
[04/18/2022-02:36:39] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,1024,16) -> Float(32768,2048,1,1024,16) ***************
[04/18/2022-02:36:39] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1060:0 copy (Reformat)
[04/18/2022-02:36:39] [V] [TRT] Tactic: 1002 Time: 0.033536
[04/18/2022-02:36:39] [V] [TRT] Tactic: 0 Time: 0.032896
[04/18/2022-02:36:39] [V] [TRT] Fastest Tactic: 0 Time: 0.032896
[04/18/2022-02:36:39] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,1024,16) -> Float(8192,512,1:4,256,4) ***************
[04/18/2022-02:36:39] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1060:0 copy (Reformat)
[04/18/2022-02:36:39] [V] [TRT] Tactic: 1002 Time: 0.0928
[04/18/2022-02:36:39] [V] [TRT] Tactic: 0 Time: 0.035328
[04/18/2022-02:36:39] [V] [TRT] Fastest Tactic: 0 Time: 0.035328
[04/18/2022-02:36:39] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,1024,16) -> Float(2048,128,128:32,64,1) ***************
[04/18/2022-02:36:39] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1060:0 copy (Reformat)
[04/18/2022-02:36:39] [V] [TRT] Tactic: 1002 Time: 0.093696
[04/18/2022-02:36:39] [V] [TRT] Tactic: 0 Time: 0.160512
[04/18/2022-02:36:39] [V] [TRT] Fastest Tactic: 1002 Time: 0.093696
[04/18/2022-02:36:39] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002
[04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,256,4) -> Float(32768,2048,128,64,1) ***************
[04/18/2022-02:36:39] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1060:0 copy (Reformat)
[04/18/2022-02:36:39] [V] [TRT] Tactic: 1002 Time: 0.094848
[04/18/2022-02:36:39] [V] [TRT] Tactic: 0 Time: 0.035456
[04/18/2022-02:36:39] [V] [TRT] Fastest Tactic: 0 Time: 0.035456
[04/18/2022-02:36:39] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,256,4) -> Float(32768,2048,1,1024,16) ***************
[04/18/2022-02:36:39] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1060:0 copy (Reformat)
[04/18/2022-02:36:39] [V] [TRT] Tactic: 1002 Time: 0.092928
[04/18/2022-02:36:39] [V] [TRT] Tactic: 0 Time: 0.035456
[04/18/2022-02:36:39] [V] [TRT] Fastest Tactic: 0 Time: 0.035456
[04/18/2022-02:36:39] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,256,4) -> Float(8192,512,1:4,256,4) ***************
[04/18/2022-02:36:39] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1060:0 copy (Reformat)
[04/18/2022-02:36:39] [V] [TRT] Tactic: 1002 Time: 0.093696
[04/18/2022-02:36:39] [V] [TRT] Tactic: 0 Time: 0.039296
[04/18/2022-02:36:39] [V] [TRT] Fastest Tactic: 0 Time: 0.039296
[04/18/2022-02:36:39] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,256,4) -> Float(2048,128,128:32,64,1) ***************
[04/18/2022-02:36:39] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1060:0 copy (Reformat)
[04/18/2022-02:36:39] [V] [TRT] Tactic: 1002 Time: 0.096256
[04/18/2022-02:36:39] [V] [TRT] Tactic: 0 Time: 0.160896
[04/18/2022-02:36:39] [V] [TRT] Fastest Tactic: 1002 Time: 0.096256
[04/18/2022-02:36:39] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002
[04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,64,1) -> Float(32768,2048,128,64,1) ***************
[04/18/2022-02:36:39] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1060:0 copy (Reformat)
[04/18/2022-02:36:39] [V] [TRT] Tactic: 1002 Time: 0.095104
[04/18/2022-02:36:39] [V] [TRT] Tactic: 0 Time: 0.038528
[04/18/2022-02:36:39] [V] [TRT] Fastest Tactic: 0 Time: 0.038528
[04/18/2022-02:36:39] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,64,1) -> Float(32768,2048,1,1024,16) ***************
[04/18/2022-02:36:39] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1060:0 copy (Reformat)
[04/18/2022-02:36:39] [V] [TRT] Tactic: 1002 Time: 0.093568
[04/18/2022-02:36:39] [V] [TRT] Tactic: 0 Time: 0.035712
[04/18/2022-02:36:39] [V] [TRT] Fastest Tactic: 0 Time: 0.035712
[04/18/2022-02:36:39] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,64,1) -> Float(8192,512,1:4,256,4) ***************
[04/18/2022-02:36:39] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1060:0 copy (Reformat)
[04/18/2022-02:36:39] [V] [TRT] Tactic: 1002 Time: 0.093184
[04/18/2022-02:36:39] [V] [TRT] Tactic: 0 Time: 0.03904
[04/18/2022-02:36:39] [V] [TRT] Fastest Tactic: 0 Time: 0.03904
[04/18/2022-02:36:39] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,64,1) -> Float(2048,128,128:32,64,1) ***************
[04/18/2022-02:36:39] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1060:0 copy (Reformat)
[04/18/2022-02:36:39] [V] [TRT] Tactic: 1002 Time: 0.09344
[04/18/2022-02:36:39] [V] [TRT] Tactic: 0 Time: 0.167424
[04/18/2022-02:36:39] [V] [TRT] Fastest Tactic: 1002 Time: 0.09344
[04/18/2022-02:36:39] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002
[04/18/2022-02:36:39] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,128,64,1) -> Float(32768,2048,1,1024,16) ***************
[04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,128,64,1) -> Float(8192,512,1:4,256,4) ***************
[04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,128,64,1) -> Float(2048,128,128:32,64,1) ***************
[04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,1,1024,16) -> Float(32768,2048,128,64,1) ***************
[04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,1,1024,16) -> Float(8192,512,1:4,256,4) ***************
[04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,1,1024,16) -> Float(2048,128,128:32,64,1) ***************
[04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(8192,512,1:4,256,4) -> Float(32768,2048,128,64,1) ***************
[04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(8192,512,1:4,256,4) -> Float(32768,2048,1,1024,16) ***************
[04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(8192,512,1:4,256,4) -> Float(2048,128,128:32,64,1) ***************
[04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(2048,128,128:32,64,1) -> Float(32768,2048,128,64,1) ***************
[04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(2048,128,128:32,64,1) -> Float(32768,2048,1,1024,16) ***************
[04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(2048,128,128:32,64,1) -> Float(8192,512,1:4,256,4) ***************
[04/18/2022-02:36:39] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,128,64,1) -> Float(32768,2048,2048,1,1024,16) ***************
[04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,128,64,1) -> Float(8192,512,512,1:4,256,4) ***************
[04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,128,64,1) -> Float(2048,128,128,128:32,64,1) ***************
[04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,1,1024,16) -> Float(32768,2048,2048,128,64,1) ***************
[04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,1,1024,16) -> Float(8192,512,512,1:4,256,4) ***************
[04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,1,1024,16) -> Float(2048,128,128,128:32,64,1) ***************
[04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(8192,512,512,1:4,256,4) -> Float(32768,2048,2048,128,64,1) ***************
[04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(8192,512,512,1:4,256,4) -> Float(32768,2048,2048,1,1024,16) ***************
[04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(8192,512,512,1:4,256,4) -> Float(2048,128,128,128:32,64,1) ***************
[04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(2048,128,128,128:32,64,1) -> Float(32768,2048,2048,128,64,1) ***************
[04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(2048,128,128,128:32,64,1) -> Float(32768,2048,2048,1,1024,16) ***************
[04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(2048,128,128,128:32,64,1) -> Float(8192,512,512,1:4,256,4) ***************
[04/18/2022-02:36:39] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,128,64,1) -> Float(65536,4096,2048,128,64,1) ***************
[04/18/2022-02:36:39] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1062:0 copy (Reformat)
[04/18/2022-02:36:39] [V] [TRT] Tactic: 1002 Time: 1.35987
[04/18/2022-02:36:39] [V] [TRT] Tactic: 0 Time: 0.05504
[04/18/2022-02:36:39] [V] [TRT] Fastest Tactic: 0 Time: 0.05504
[04/18/2022-02:36:39] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,128,64,1) -> Float(65536,4096,2048,1,1024,16) ***************
[04/18/2022-02:36:39] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1062:0 copy (Reformat)
[04/18/2022-02:36:39] [V] [TRT] Tactic: 1002 Time: 0.75712
[04/18/2022-02:36:39] [V] [TRT] Tactic: 0 Time: 0.057472
[04/18/2022-02:36:39] [V] [TRT] Fastest Tactic: 0 Time: 0.057472
[04/18/2022-02:36:39] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,128,64,1) -> Float(16384,1024,512,1:4,256,4) ***************
[04/18/2022-02:36:39] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1062:0 copy (Reformat)
[04/18/2022-02:36:39] [V] [TRT] Tactic: 1002 Time: 0.765696
[04/18/2022-02:36:39] [V] [TRT] Tactic: 0 Time: 0.065408
[04/18/2022-02:36:39] [V] [TRT] Fastest Tactic: 0 Time: 0.065408
[04/18/2022-02:36:39] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,128,64,1) -> Float(4096,256,128,128:32,64,1) ***************
[04/18/2022-02:36:39] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1062:0 copy (Reformat)
[04/18/2022-02:36:39] [V] [TRT] Tactic: 1002 Time: 0.764288
[04/18/2022-02:36:39] [V] [TRT] Tactic: 0 Time: 0.158848
[04/18/2022-02:36:39] [V] [TRT] Fastest Tactic: 0 Time: 0.158848
[04/18/2022-02:36:39] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,1,1024,16) -> Float(65536,4096,2048,128,64,1) ***************
[04/18/2022-02:36:39] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1062:0 copy (Reformat)
[04/18/2022-02:36:39] [V] [TRT] Tactic: 1002 Time: 0.746368
[04/18/2022-02:36:39] [V] [TRT] Tactic: 0 Time: 0.057856
[04/18/2022-02:36:39] [V] [TRT] Fastest Tactic: 0 Time: 0.057856
[04/18/2022-02:36:39] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,1,1024,16) -> Float(65536,4096,2048,1,1024,16) ***************
[04/18/2022-02:36:39] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1062:0 copy (Reformat)
[04/18/2022-02:36:39] [V] [TRT] Tactic: 1002 Time: 6.24179
[04/18/2022-02:36:39] [V] [TRT] Tactic: 0 Time: 0.059136
[04/18/2022-02:36:39] [V] [TRT] Fastest Tactic: 0 Time: 0.059136
[04/18/2022-02:36:39] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:39] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,1,1024,16) -> Float(16384,1024,512,1:4,256,4) ***************
[04/18/2022-02:36:39] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1062:0 copy (Reformat)
[04/18/2022-02:36:40] [V] [TRT] Tactic: 1002 Time: 6.23782
[04/18/2022-02:36:40] [V] [TRT] Tactic: 0 Time: 0.06272
[04/18/2022-02:36:40] [V] [TRT] Fastest Tactic: 0 Time: 0.06272
[04/18/2022-02:36:40] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:40] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,1,1024,16) -> Float(4096,256,128,128:32,64,1) ***************
[04/18/2022-02:36:40] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1062:0 copy (Reformat)
[04/18/2022-02:36:40] [V] [TRT] Tactic: 1002 Time: 6.2391
[04/18/2022-02:36:40] [V] [TRT] Tactic: 0 Time: 0.174976
[04/18/2022-02:36:40] [V] [TRT] Fastest Tactic: 0 Time: 0.174976
[04/18/2022-02:36:40] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:40] [V] [TRT] *************** Autotuning Reformat: Float(8192,512,512,1:4,256,4) -> Float(65536,4096,2048,128,64,1) ***************
[04/18/2022-02:36:40] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1062:0 copy (Reformat)
[04/18/2022-02:36:40] [V] [TRT] Tactic: 1002 Time: 0.75776
[04/18/2022-02:36:40] [V] [TRT] Tactic: 0 Time: 0.061824
[04/18/2022-02:36:40] [V] [TRT] Fastest Tactic: 0 Time: 0.061824
[04/18/2022-02:36:40] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:40] [V] [TRT] *************** Autotuning Reformat: Float(8192,512,512,1:4,256,4) -> Float(65536,4096,2048,1,1024,16) ***************
[04/18/2022-02:36:40] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1062:0 copy (Reformat)
[04/18/2022-02:36:40] [V] [TRT] Tactic: 1002 Time: 6.25152
[04/18/2022-02:36:40] [V] [TRT] Tactic: 0 Time: 0.062976
[04/18/2022-02:36:40] [V] [TRT] Fastest Tactic: 0 Time: 0.062976
[04/18/2022-02:36:40] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:40] [V] [TRT] *************** Autotuning Reformat: Float(8192,512,512,1:4,256,4) -> Float(16384,1024,512,1:4,256,4) ***************
[04/18/2022-02:36:40] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1062:0 copy (Reformat)
[04/18/2022-02:36:40] [V] [TRT] Tactic: 1002 Time: 6.25062
[04/18/2022-02:36:40] [V] [TRT] Tactic: 0 Time: 0.06912
[04/18/2022-02:36:40] [V] [TRT] Fastest Tactic: 0 Time: 0.06912
[04/18/2022-02:36:40] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:40] [V] [TRT] *************** Autotuning Reformat: Float(8192,512,512,1:4,256,4) -> Float(4096,256,128,128:32,64,1) ***************
[04/18/2022-02:36:40] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1062:0 copy (Reformat)
[04/18/2022-02:36:40] [V] [TRT] Tactic: 1002 Time: 6.80602
[04/18/2022-02:36:40] [V] [TRT] Tactic: 0 Time: 0.17536
[04/18/2022-02:36:40] [V] [TRT] Fastest Tactic: 0 Time: 0.17536
[04/18/2022-02:36:40] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:40] [V] [TRT] *************** Autotuning Reformat: Float(2048,128,128,128:32,64,1) -> Float(65536,4096,2048,128,64,1) ***************
[04/18/2022-02:36:40] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1062:0 copy (Reformat)
[04/18/2022-02:36:40] [V] [TRT] Tactic: 1002 Time: 0.757504
[04/18/2022-02:36:40] [V] [TRT] Tactic: 0 Time: 0.064896
[04/18/2022-02:36:40] [V] [TRT] Fastest Tactic: 0 Time: 0.064896
[04/18/2022-02:36:40] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:40] [V] [TRT] *************** Autotuning Reformat: Float(2048,128,128,128:32,64,1) -> Float(65536,4096,2048,1,1024,16) ***************
[04/18/2022-02:36:40] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1062:0 copy (Reformat)
[04/18/2022-02:36:40] [V] [TRT] Tactic: 1002 Time: 6.83866
[04/18/2022-02:36:40] [V] [TRT] Tactic: 0 Time: 0.062336
[04/18/2022-02:36:40] [V] [TRT] Fastest Tactic: 0 Time: 0.062336
[04/18/2022-02:36:40] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:40] [V] [TRT] *************** Autotuning Reformat: Float(2048,128,128,128:32,64,1) -> Float(16384,1024,512,1:4,256,4) ***************
[04/18/2022-02:36:40] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1062:0 copy (Reformat)
[04/18/2022-02:36:40] [V] [TRT] Tactic: 1002 Time: 6.24973
[04/18/2022-02:36:40] [V] [TRT] Tactic: 0 Time: 0.072192
[04/18/2022-02:36:40] [V] [TRT] Fastest Tactic: 0 Time: 0.072192
[04/18/2022-02:36:40] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:40] [V] [TRT] *************** Autotuning Reformat: Float(2048,128,128,128:32,64,1) -> Float(4096,256,128,128:32,64,1) ***************
[04/18/2022-02:36:40] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1062:0 copy (Reformat)
[04/18/2022-02:36:41] [V] [TRT] Tactic: 1002 Time: 6.25536
[04/18/2022-02:36:41] [V] [TRT] Tactic: 0 Time: 0.197248
[04/18/2022-02:36:41] [V] [TRT] Fastest Tactic: 0 Time: 0.197248
[04/18/2022-02:36:41] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:41] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:41] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,128,64,1) -> Float(65536,4096,2048,128,64,1) ***************
[04/18/2022-02:36:41] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1062:0 copy (Reformat)
[04/18/2022-02:36:41] [V] [TRT] Tactic: 1002 Time: 2.00192
[04/18/2022-02:36:41] [V] [TRT] Tactic: 0 Time: 0.054656
[04/18/2022-02:36:41] [V] [TRT] Fastest Tactic: 0 Time: 0.054656
[04/18/2022-02:36:41] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:41] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,128,64,1) -> Float(65536,4096,2048,1,1024,16) ***************
[04/18/2022-02:36:41] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1062:0 copy (Reformat)
[04/18/2022-02:36:41] [V] [TRT] Tactic: 1002 Time: 0.757888
[04/18/2022-02:36:41] [V] [TRT] Tactic: 0 Time: 0.058624
[04/18/2022-02:36:41] [V] [TRT] Fastest Tactic: 0 Time: 0.058624
[04/18/2022-02:36:41] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:41] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,128,64,1) -> Float(16384,1024,512,1:4,256,4) ***************
[04/18/2022-02:36:41] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1062:0 copy (Reformat)
[04/18/2022-02:36:41] [V] [TRT] Tactic: 1002 Time: 0.765184
[04/18/2022-02:36:41] [V] [TRT] Tactic: 0 Time: 0.061952
[04/18/2022-02:36:41] [V] [TRT] Fastest Tactic: 0 Time: 0.061952
[04/18/2022-02:36:41] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:41] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,128,64,1) -> Float(4096,256,128,128:32,64,1) ***************
[04/18/2022-02:36:41] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1062:0 copy (Reformat)
[04/18/2022-02:36:41] [V] [TRT] Tactic: 1002 Time: 6.54029
[04/18/2022-02:36:41] [V] [TRT] Tactic: 0 Time: 0.287488
[04/18/2022-02:36:41] [V] [TRT] Fastest Tactic: 0 Time: 0.287488
[04/18/2022-02:36:41] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:41] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,1,1024,16) -> Float(65536,4096,2048,128,64,1) ***************
[04/18/2022-02:36:41] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1062:0 copy (Reformat)
[04/18/2022-02:36:41] [V] [TRT] Tactic: 1002 Time: 0.7488
[04/18/2022-02:36:41] [V] [TRT] Tactic: 0 Time: 0.0544
[04/18/2022-02:36:41] [V] [TRT] Fastest Tactic: 0 Time: 0.0544
[04/18/2022-02:36:41] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:41] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,1,1024,16) -> Float(65536,4096,2048,1,1024,16) ***************
[04/18/2022-02:36:41] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1062:0 copy (Reformat)
[04/18/2022-02:36:41] [V] [TRT] Tactic: 1002 Time: 6.24205
[04/18/2022-02:36:41] [V] [TRT] Tactic: 0 Time: 0.060544
[04/18/2022-02:36:41] [V] [TRT] Fastest Tactic: 0 Time: 0.060544
[04/18/2022-02:36:41] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:41] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,1,1024,16) -> Float(16384,1024,512,1:4,256,4) ***************
[04/18/2022-02:36:41] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1062:0 copy (Reformat)
[04/18/2022-02:36:41] [V] [TRT] Tactic: 1002 Time: 6.23821
[04/18/2022-02:36:41] [V] [TRT] Tactic: 0 Time: 0.060928
[04/18/2022-02:36:41] [V] [TRT] Fastest Tactic: 0 Time: 0.060928
[04/18/2022-02:36:41] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:41] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,1,1024,16) -> Float(4096,256,128,128:32,64,1) ***************
[04/18/2022-02:36:41] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1062:0 copy (Reformat)
[04/18/2022-02:36:41] [V] [TRT] Tactic: 1002 Time: 6.23923
[04/18/2022-02:36:41] [V] [TRT] Tactic: 0 Time: 0.302464
[04/18/2022-02:36:41] [V] [TRT] Fastest Tactic: 0 Time: 0.302464
[04/18/2022-02:36:41] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:41] [V] [TRT] *************** Autotuning Reformat: Float(8192,512,512,1:4,256,4) -> Float(65536,4096,2048,128,64,1) ***************
[04/18/2022-02:36:41] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1062:0 copy (Reformat)
[04/18/2022-02:36:41] [V] [TRT] Tactic: 1002 Time: 0.75776
[04/18/2022-02:36:41] [V] [TRT] Tactic: 0 Time: 0.063232
[04/18/2022-02:36:41] [V] [TRT] Fastest Tactic: 0 Time: 0.063232
[04/18/2022-02:36:41] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:41] [V] [TRT] *************** Autotuning Reformat: Float(8192,512,512,1:4,256,4) -> Float(65536,4096,2048,1,1024,16) ***************
[04/18/2022-02:36:41] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1062:0 copy (Reformat)
[04/18/2022-02:36:41] [V] [TRT] Tactic: 1002 Time: 6.25216
[04/18/2022-02:36:41] [V] [TRT] Tactic: 0 Time: 0.061696
[04/18/2022-02:36:41] [V] [TRT] Fastest Tactic: 0 Time: 0.061696
[04/18/2022-02:36:41] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:41] [V] [TRT] *************** Autotuning Reformat: Float(8192,512,512,1:4,256,4) -> Float(16384,1024,512,1:4,256,4) ***************
[04/18/2022-02:36:41] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1062:0 copy (Reformat)
[04/18/2022-02:36:42] [V] [TRT] Tactic: 1002 Time: 6.83315
[04/18/2022-02:36:42] [V] [TRT] Tactic: 0 Time: 0.070272
[04/18/2022-02:36:42] [V] [TRT] Fastest Tactic: 0 Time: 0.070272
[04/18/2022-02:36:42] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:42] [V] [TRT] *************** Autotuning Reformat: Float(8192,512,512,1:4,256,4) -> Float(4096,256,128,128:32,64,1) ***************
[04/18/2022-02:36:42] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1062:0 copy (Reformat)
[04/18/2022-02:36:42] [V] [TRT] Tactic: 1002 Time: 6.84224
[04/18/2022-02:36:42] [V] [TRT] Tactic: 0 Time: 0.306304
[04/18/2022-02:36:42] [V] [TRT] Fastest Tactic: 0 Time: 0.306304
[04/18/2022-02:36:42] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:42] [V] [TRT] *************** Autotuning Reformat: Float(2048,128,128,128:32,64,1) -> Float(65536,4096,2048,128,64,1) ***************
[04/18/2022-02:36:42] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1062:0 copy (Reformat)
[04/18/2022-02:36:42] [V] [TRT] Tactic: 1002 Time: 0.756864
[04/18/2022-02:36:42] [V] [TRT] Tactic: 0 Time: 0.067072
[04/18/2022-02:36:42] [V] [TRT] Fastest Tactic: 0 Time: 0.067072
[04/18/2022-02:36:42] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:42] [V] [TRT] *************** Autotuning Reformat: Float(2048,128,128,128:32,64,1) -> Float(65536,4096,2048,1,1024,16) ***************
[04/18/2022-02:36:42] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1062:0 copy (Reformat)
[04/18/2022-02:36:42] [V] [TRT] Tactic: 1002 Time: 6.25178
[04/18/2022-02:36:42] [V] [TRT] Tactic: 0 Time: 0.062464
[04/18/2022-02:36:42] [V] [TRT] Fastest Tactic: 0 Time: 0.062464
[04/18/2022-02:36:42] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:42] [V] [TRT] *************** Autotuning Reformat: Float(2048,128,128,128:32,64,1) -> Float(16384,1024,512,1:4,256,4) ***************
[04/18/2022-02:36:42] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1062:0 copy (Reformat)
[04/18/2022-02:36:42] [V] [TRT] Tactic: 1002 Time: 6.25011
[04/18/2022-02:36:42] [V] [TRT] Tactic: 0 Time: 0.070272
[04/18/2022-02:36:42] [V] [TRT] Fastest Tactic: 0 Time: 0.070272
[04/18/2022-02:36:42] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:42] [V] [TRT] *************** Autotuning Reformat: Float(2048,128,128,128:32,64,1) -> Float(4096,256,128,128:32,64,1) ***************
[04/18/2022-02:36:42] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/input_2_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1062:0 copy (Reformat)
[04/18/2022-02:36:42] [V] [TRT] Tactic: 1002 Time: 6.25382
[04/18/2022-02:36:42] [V] [TRT] Tactic: 0 Time: 0.32256
[04/18/2022-02:36:42] [V] [TRT] Fastest Tactic: 0 Time: 0.32256
[04/18/2022-02:36:42] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:42] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:42] [V] [TRT] *************** Autotuning Reformat: Float(65536,4096,2048,128,64,1) -> Float(65536,4096,2048,1,1024,16) ***************
[04/18/2022-02:36:42] [V] [TRT] *************** Autotuning Reformat: Float(65536,4096,2048,128,64,1) -> Float(16384,1024,512,1:4,256,4) ***************
[04/18/2022-02:36:42] [V] [TRT] *************** Autotuning Reformat: Float(65536,4096,2048,128,64,1) -> Float(4096,256,128,128:32,64,1) ***************
[04/18/2022-02:36:42] [V] [TRT] *************** Autotuning Reformat: Float(65536,4096,2048,1,1024,16) -> Float(65536,4096,2048,128,64,1) ***************
[04/18/2022-02:36:42] [V] [TRT] *************** Autotuning Reformat: Float(65536,4096,2048,1,1024,16) -> Float(16384,1024,512,1:4,256,4) ***************
[04/18/2022-02:36:42] [V] [TRT] *************** Autotuning Reformat: Float(65536,4096,2048,1,1024,16) -> Float(4096,256,128,128:32,64,1) ***************
[04/18/2022-02:36:42] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,512,1:4,256,4) -> Float(65536,4096,2048,128,64,1) ***************
[04/18/2022-02:36:42] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,512,1:4,256,4) -> Float(65536,4096,2048,1,1024,16) ***************
[04/18/2022-02:36:42] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,512,1:4,256,4) -> Float(4096,256,128,128:32,64,1) ***************
[04/18/2022-02:36:42] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,128,128:32,64,1) -> Float(65536,4096,2048,128,64,1) ***************
[04/18/2022-02:36:42] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,128,128:32,64,1) -> Float(65536,4096,2048,1,1024,16) ***************
[04/18/2022-02:36:42] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,128,128:32,64,1) -> Float(16384,1024,512,1:4,256,4) ***************
[04/18/2022-02:36:42] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:42] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(131072,4096,128,2,1) ***************
[04/18/2022-02:36:42] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Unsqueeze__1066:0 copy (Reformat)
[04/18/2022-02:36:42] [V] [TRT] Tactic: 1002 Time: 2.84621
[04/18/2022-02:36:42] [V] [TRT] Tactic: 0 Time: 0.422656
[04/18/2022-02:36:42] [V] [TRT] Fastest Tactic: 0 Time: 0.422656
[04/18/2022-02:36:42] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:42] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,32,32) -> Float(131072,4096,128,2,1) ***************
[04/18/2022-02:36:42] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Unsqueeze__1066:0 copy (Reformat)
[04/18/2022-02:36:42] [V] [TRT] Tactic: 1002 Time: 1.39123
[04/18/2022-02:36:42] [V] [TRT] Tactic: 0 Time: 0.432
[04/18/2022-02:36:42] [V] [TRT] Fastest Tactic: 0 Time: 0.432
[04/18/2022-02:36:42] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:42] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,8,8) -> Float(131072,4096,128,2,1) ***************
[04/18/2022-02:36:42] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Unsqueeze__1066:0 copy (Reformat)
[04/18/2022-02:36:42] [V] [TRT] Tactic: 1002 Time: 0.697984
[04/18/2022-02:36:42] [V] [TRT] Tactic: 0 Time: 0.4384
[04/18/2022-02:36:42] [V] [TRT] Fastest Tactic: 0 Time: 0.4384
[04/18/2022-02:36:42] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:42] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,1,1) -> Float(131072,4096,128,2,1) ***************
[04/18/2022-02:36:42] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Unsqueeze__1066:0 copy (Reformat)
[04/18/2022-02:36:42] [V] [TRT] Tactic: 1002 Time: 0.728448
[04/18/2022-02:36:42] [V] [TRT] Tactic: 0 Time: 0.494464
[04/18/2022-02:36:42] [V]
[TRT] Fastest Tactic: 0 Time: 0.494464 [04/18/2022-02:36:42] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:42] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:42] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(131072,4096,128,2,1) *************** [04/18/2022-02:36:42] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Unsqueeze__1067:0 copy (Reformat) [04/18/2022-02:36:42] [V] [TRT] Tactic: 1002 Time: 2.84595 [04/18/2022-02:36:42] [V] [TRT] Tactic: 0 Time: 0.412672 [04/18/2022-02:36:42] [V] [TRT] Fastest Tactic: 0 Time: 0.412672 [04/18/2022-02:36:42] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:42] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,32,32) -> Float(131072,4096,128,2,1) *************** [04/18/2022-02:36:42] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Unsqueeze__1067:0 copy (Reformat) [04/18/2022-02:36:42] [V] [TRT] Tactic: 1002 Time: 0.698368 [04/18/2022-02:36:42] [V] [TRT] Tactic: 0 Time: 0.656256 [04/18/2022-02:36:42] [V] [TRT] Fastest Tactic: 0 Time: 0.656256 [04/18/2022-02:36:42] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:42] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,8,8) -> Float(131072,4096,128,2,1) *************** [04/18/2022-02:36:42] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Unsqueeze__1067:0 copy (Reformat) [04/18/2022-02:36:43] [V] [TRT] Tactic: 1002 Time: 1.18861 [04/18/2022-02:36:43] [V] [TRT] Tactic: 0 Time: 0.425472 [04/18/2022-02:36:43] [V] [TRT] Fastest Tactic: 0 Time: 0.425472 [04/18/2022-02:36:43] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:43] [V] [TRT] 
*************** Autotuning Reformat: Float(2048,64,64:32,1,1) -> Float(131072,4096,128,2,1) *************** [04/18/2022-02:36:43] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Unsqueeze__1067:0 copy (Reformat) [04/18/2022-02:36:43] [V] [TRT] Tactic: 1002 Time: 0.745472 [04/18/2022-02:36:43] [V] [TRT] Tactic: 0 Time: 0.431104 [04/18/2022-02:36:43] [V] [TRT] Fastest Tactic: 0 Time: 0.431104 [04/18/2022-02:36:43] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:43] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:43] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:43] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(65536,2048,1,32,32) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(16384,512,1:4,8,8) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(2048,64,64:32,1,1) *************** [04/18/2022-02:36:43] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(65536,2048,1,32,32) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(16384,512,1:4,8,8) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(2048,64,64:32,1,1) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,32,32) -> Float(65536,2048,64,1,1) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,32,32) -> Float(16384,512,1:4,8,8) *************** [04/18/2022-02:36:43] [V] [TRT] 
*************** Autotuning Reformat: Float(65536,2048,1,32,32) -> Float(2048,64,64:32,1,1) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,8,8) -> Float(65536,2048,64,1,1) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,8,8) -> Float(65536,2048,1,32,32) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,8,8) -> Float(2048,64,64:32,1,1) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,1,1) -> Float(65536,2048,64,1,1) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,1,1) -> Float(65536,2048,1,32,32) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,1,1) -> Float(16384,512,1:4,8,8) *************** [04/18/2022-02:36:43] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(65536,1,2048,32) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(16384,1:4,512,8) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(2048,2048:32,64,1) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(1:4,4096,128,2) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(65536,2048,64,1) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(16384,1:4,512,8) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(2048,2048:32,64,1) *************** [04/18/2022-02:36:43] [V] [TRT] 
*************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(1:4,4096,128,2) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(65536,2048,64,1) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(65536,1,2048,32) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(2048,2048:32,64,1) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(1:4,4096,128,2) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(65536,2048,64,1) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(65536,1,2048,32) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(16384,1:4,512,8) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(1:4,4096,128,2) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(1:4,4096,128,2) -> Float(65536,2048,64,1) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(1:4,4096,128,2) -> Float(65536,1,2048,32) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(1:4,4096,128,2) -> Float(16384,1:4,512,8) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(1:4,4096,128,2) -> Float(2048,2048:32,64,1) *************** [04/18/2022-02:36:43] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(65536,1,2048,32) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: 
Float(65536,2048,64,1) -> Float(16384,1:4,512,8) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(2048,2048:32,64,1) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(65536,2048,64,1) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(16384,1:4,512,8) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(2048,2048:32,64,1) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(65536,2048,64,1) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(65536,1,2048,32) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(2048,2048:32,64,1) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(65536,2048,64,1) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(65536,1,2048,32) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(16384,1:4,512,8) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(1:4,4096,128,2) -> Float(65536,2048,64,1) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(1:4,4096,128,2) -> Float(65536,1,2048,32) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(1:4,4096,128,2) -> Float(16384,1:4,512,8) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(1:4,4096,128,2) -> Float(2048,2048:32,64,1) *************** [04/18/2022-02:36:43] [V] [TRT] =============== 
Computing reformatting costs [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(65536,1,2048,64) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(65536,1024,32,1) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1024,32,1) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1,2048,64) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1024,32,1) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1,2048,64) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:36:43] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(65536,1,2048,64) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(65536,1024,32,1) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1024,32,1) *************** 
[04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1,2048,64) *************** [04/18/2022-02:36:43] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(65536,1,2048,64) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(2048,1024:32,32,1) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(65536,1024,32,1) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(2048,1024:32,32,1) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1024,32,1) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1,2048,64) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(2048,1024:32,32,1) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1024,32,1) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1,2048,64) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:36:43] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: 
Float(65536,2048,64,1) -> Float(65536,1,2048,32) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(16384,1:4,512,8) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(2048,2048:32,64,1) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(65536,2048,64,1) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(16384,1:4,512,8) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(2048,2048:32,64,1) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(65536,2048,64,1) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(65536,1,2048,32) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(2048,2048:32,64,1) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(65536,2048,64,1) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(65536,1,2048,32) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(16384,1:4,512,8) *************** [04/18/2022-02:36:43] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(65536,1,2048,32) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(16384,1:4,512,8) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> 
Float(2048,2048:32,64,1) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(65536,2048,64,1) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(16384,1:4,512,8) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(2048,2048:32,64,1) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(65536,2048,64,1) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(65536,1,2048,32) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(2048,2048:32,64,1) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(65536,2048,64,1) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(65536,1,2048,32) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(16384,1:4,512,8) *************** [04/18/2022-02:36:43] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(65536,1,2048,32) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(16384,1:4,512,8) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(2048,2048:32,64,1) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(65536,2048,64,1) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(16384,1:4,512,8) 
*************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(2048,2048:32,64,1) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(65536,2048,64,1) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(65536,1,2048,32) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(2048,2048:32,64,1) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(65536,2048,64,1) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(65536,1,2048,32) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(16384,1:4,512,8) *************** [04/18/2022-02:36:43] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,64,1) -> Float(65536,2048,1,2048,32) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,64,1) -> Float(16384,512,1:4,512,8) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,64,1) -> Float(2048,64,64:32,64,1) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,2048,32) -> Float(65536,2048,64,64,1) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,2048,32) -> Float(16384,512,1:4,512,8) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,2048,32) -> Float(2048,64,64:32,64,1) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,512,8) -> 
Float(65536,2048,64,64,1) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,512,8) -> Float(65536,2048,1,2048,32) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,512,8) -> Float(2048,64,64:32,64,1) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,64,1) -> Float(65536,2048,64,64,1) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,64,1) -> Float(65536,2048,1,2048,32) *************** [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,64,1) -> Float(16384,512,1:4,512,8) *************** [04/18/2022-02:36:43] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,64,1) -> Float(131072,4096,128,64,1) *************** [04/18/2022-02:36:43] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1075:0 copy (Reformat) [04/18/2022-02:36:43] [V] [TRT] Tactic: 1002 Time: 2.80397 [04/18/2022-02:36:43] [V] [TRT] Tactic: 0 Time: 0.079232 [04/18/2022-02:36:43] [V] [TRT] Fastest Tactic: 0 Time: 0.079232 [04/18/2022-02:36:43] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,64,1) -> Float(131072,4096,1,2048,32) *************** [04/18/2022-02:36:43] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1075:0 copy (Reformat) [04/18/2022-02:36:43] [V] [TRT] Tactic: 1002 Time: 0.166528 [04/18/2022-02:36:43] [V] [TRT] Tactic: 0 Time: 0.091904 
[04/18/2022-02:36:43] [V] [TRT] Fastest Tactic: 0 Time: 0.091904 [04/18/2022-02:36:43] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,64,1) -> Float(32768,1024,1:4,512,8) *************** [04/18/2022-02:36:43] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1075:0 copy (Reformat) [04/18/2022-02:36:43] [V] [TRT] Tactic: 1002 Time: 0.14592 [04/18/2022-02:36:43] [V] [TRT] Tactic: 0 Time: 0.114432 [04/18/2022-02:36:43] [V] [TRT] Fastest Tactic: 0 Time: 0.114432 [04/18/2022-02:36:43] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,64,1) -> Float(4096,128,128:32,64,1) *************** [04/18/2022-02:36:43] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1075:0 copy (Reformat) [04/18/2022-02:36:43] [V] [TRT] Tactic: 1002 Time: 0.157568 [04/18/2022-02:36:43] [V] [TRT] Tactic: 0 Time: 0.286592 [04/18/2022-02:36:43] [V] [TRT] Fastest Tactic: 1002 Time: 0.157568 [04/18/2022-02:36:43] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002 [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,2048,32) -> Float(131072,4096,128,64,1) *************** [04/18/2022-02:36:43] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1075:0 copy (Reformat) [04/18/2022-02:36:43] [V] [TRT] Tactic: 1002 Time: 0.144768 [04/18/2022-02:36:43] [V] [TRT] Tactic: 0 Time: 0.110848 [04/18/2022-02:36:43] [V] 
[TRT] Fastest Tactic: 0 Time: 0.110848 [04/18/2022-02:36:43] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,2048,32) -> Float(131072,4096,1,2048,32) *************** [04/18/2022-02:36:43] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1075:0 copy (Reformat) [04/18/2022-02:36:43] [V] [TRT] Tactic: 1002 Time: 0.078976 [04/18/2022-02:36:43] [V] [TRT] Tactic: 0 Time: 0.094976 [04/18/2022-02:36:43] [V] [TRT] Fastest Tactic: 1002 Time: 0.078976 [04/18/2022-02:36:43] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002 [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,2048,32) -> Float(32768,1024,1:4,512,8) *************** [04/18/2022-02:36:43] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1075:0 copy (Reformat) [04/18/2022-02:36:43] [V] [TRT] Tactic: 1002 Time: 0.141952 [04/18/2022-02:36:43] [V] [TRT] Tactic: 0 Time: 0.104064 [04/18/2022-02:36:43] [V] [TRT] Fastest Tactic: 0 Time: 0.104064 [04/18/2022-02:36:43] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,2048,32) -> Float(4096,128,128:32,64,1) *************** [04/18/2022-02:36:43] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1075:0 copy (Reformat) [04/18/2022-02:36:43] [V] [TRT] Tactic: 1002 Time: 0.152064 [04/18/2022-02:36:43] [V] [TRT] Tactic: 0 Time: 0.337408 [04/18/2022-02:36:43] [V] [TRT] Fastest Tactic: 
1002 Time: 0.152064 [04/18/2022-02:36:43] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002 [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,512,8) -> Float(131072,4096,128,64,1) *************** [04/18/2022-02:36:43] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1075:0 copy (Reformat) [04/18/2022-02:36:43] [V] [TRT] Tactic: 1002 Time: 0.154496 [04/18/2022-02:36:43] [V] [TRT] Tactic: 0 Time: 0.124416 [04/18/2022-02:36:43] [V] [TRT] Fastest Tactic: 0 Time: 0.124416 [04/18/2022-02:36:43] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,512,8) -> Float(131072,4096,1,2048,32) *************** [04/18/2022-02:36:43] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1075:0 copy (Reformat) [04/18/2022-02:36:43] [V] [TRT] Tactic: 1002 Time: 0.142336 [04/18/2022-02:36:43] [V] [TRT] Tactic: 0 Time: 0.098176 [04/18/2022-02:36:43] [V] [TRT] Fastest Tactic: 0 Time: 0.098176 [04/18/2022-02:36:43] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,512,8) -> Float(32768,1024,1:4,512,8) *************** [04/18/2022-02:36:43] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1075:0 copy (Reformat) [04/18/2022-02:36:43] [V] [TRT] Tactic: 1002 Time: 0.158336 [04/18/2022-02:36:43] [V] [TRT] Tactic: 0 Time: 0.11584 [04/18/2022-02:36:43] [V] [TRT] Fastest Tactic: 0 Time: 0.11584 
[04/18/2022-02:36:43] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,512,8) -> Float(4096,128,128:32,64,1) ***************
[04/18/2022-02:36:43] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1075:0 copy (Reformat)
[04/18/2022-02:36:43] [V] [TRT] Tactic: 1002 Time: 0.144896
[04/18/2022-02:36:43] [V] [TRT] Tactic: 0 Time: 0.337152
[04/18/2022-02:36:43] [V] [TRT] Fastest Tactic: 1002 Time: 0.144896
[04/18/2022-02:36:43] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002
[04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,64,1) -> Float(131072,4096,128,64,1) ***************
[04/18/2022-02:36:43] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1075:0 copy (Reformat)
[04/18/2022-02:36:43] [V] [TRT] Tactic: 1002 Time: 0.145152
[04/18/2022-02:36:43] [V] [TRT] Tactic: 0 Time: 0.12096
[04/18/2022-02:36:43] [V] [TRT] Fastest Tactic: 0 Time: 0.12096
[04/18/2022-02:36:43] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,64,1) -> Float(131072,4096,1,2048,32) ***************
[04/18/2022-02:36:43] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1075:0 copy (Reformat)
[04/18/2022-02:36:43] [V] [TRT] Tactic: 1002 Time: 0.148992
[04/18/2022-02:36:43] [V] [TRT] Tactic: 0 Time: 0.091648
[04/18/2022-02:36:43] [V] [TRT] Fastest Tactic: 0 Time: 0.091648
[04/18/2022-02:36:43] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,64,1) -> Float(32768,1024,1:4,512,8) ***************
[04/18/2022-02:36:43] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1075:0 copy (Reformat)
[04/18/2022-02:36:43] [V] [TRT] Tactic: 1002 Time: 0.144256
[04/18/2022-02:36:43] [V] [TRT] Tactic: 0 Time: 0.10624
[04/18/2022-02:36:43] [V] [TRT] Fastest Tactic: 0 Time: 0.10624
[04/18/2022-02:36:43] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,64,1) -> Float(4096,128,128:32,64,1) ***************
[04/18/2022-02:36:43] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1075:0 copy (Reformat)
[04/18/2022-02:36:43] [V] [TRT] Tactic: 1002 Time: 0.15424
[04/18/2022-02:36:43] [V] [TRT] Tactic: 0 Time: 0.33792
[04/18/2022-02:36:43] [V] [TRT] Fastest Tactic: 1002 Time: 0.15424
[04/18/2022-02:36:43] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002
[04/18/2022-02:36:43] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,64,1) -> Float(131072,4096,128,64,1) ***************
[04/18/2022-02:36:43] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1075:0 copy (Reformat)
[04/18/2022-02:36:43] [V] [TRT] Tactic: 1002 Time: 2.80474
[04/18/2022-02:36:43] [V] [TRT] Tactic: 0 Time: 0.105728
[04/18/2022-02:36:43] [V] [TRT] Fastest Tactic: 0 Time: 0.105728
[04/18/2022-02:36:43] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,64,1) -> Float(131072,4096,1,2048,32) ***************
[04/18/2022-02:36:43] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1075:0 copy (Reformat)
[04/18/2022-02:36:43] [V] [TRT] Tactic: 1002 Time: 0.143104
[04/18/2022-02:36:43] [V] [TRT] Tactic: 0 Time: 0.09536
[04/18/2022-02:36:43] [V] [TRT] Fastest Tactic: 0 Time: 0.09536
[04/18/2022-02:36:43] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,64,1) -> Float(32768,1024,1:4,512,8) ***************
[04/18/2022-02:36:43] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1075:0 copy (Reformat)
[04/18/2022-02:36:43] [V] [TRT] Tactic: 1002 Time: 0.142464
[04/18/2022-02:36:43] [V] [TRT] Tactic: 0 Time: 0.097792
[04/18/2022-02:36:43] [V] [TRT] Fastest Tactic: 0 Time: 0.097792
[04/18/2022-02:36:43] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,64,1) -> Float(4096,128,128:32,64,1) ***************
[04/18/2022-02:36:43] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1075:0 copy (Reformat)
[04/18/2022-02:36:43] [V] [TRT] Tactic: 1002 Time: 0.146816
[04/18/2022-02:36:43] [V] [TRT] Tactic: 0 Time: 0.289152
[04/18/2022-02:36:43] [V] [TRT] Fastest Tactic: 1002 Time: 0.146816
[04/18/2022-02:36:43] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002
[04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,2048,32) -> Float(131072,4096,128,64,1) ***************
[04/18/2022-02:36:43] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1075:0 copy (Reformat)
[04/18/2022-02:36:43] [V] [TRT] Tactic: 1002 Time: 0.144512
[04/18/2022-02:36:43] [V] [TRT] Tactic: 0 Time: 0.116608
[04/18/2022-02:36:43] [V] [TRT] Fastest Tactic: 0 Time: 0.116608
[04/18/2022-02:36:43] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,2048,32) -> Float(131072,4096,1,2048,32) ***************
[04/18/2022-02:36:43] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1075:0 copy (Reformat)
[04/18/2022-02:36:43] [V] [TRT] Tactic: 1002 Time: 0.082048
[04/18/2022-02:36:43] [V] [TRT] Tactic: 0 Time: 0.107264
[04/18/2022-02:36:43] [V] [TRT] Fastest Tactic: 1002 Time: 0.082048
[04/18/2022-02:36:43] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002
[04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,2048,32) -> Float(32768,1024,1:4,512,8) ***************
[04/18/2022-02:36:43] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1075:0 copy (Reformat)
[04/18/2022-02:36:43] [V] [TRT] Tactic: 1002 Time: 0.141184
[04/18/2022-02:36:43] [V] [TRT] Tactic: 0 Time: 0.09088
[04/18/2022-02:36:43] [V] [TRT] Fastest Tactic: 0 Time: 0.09088
[04/18/2022-02:36:43] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,2048,32) -> Float(4096,128,128:32,64,1) ***************
[04/18/2022-02:36:43] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1075:0 copy (Reformat)
[04/18/2022-02:36:43] [V] [TRT] Tactic: 1002 Time: 0.146304
[04/18/2022-02:36:43] [V] [TRT] Tactic: 0 Time: 0.337664
[04/18/2022-02:36:43] [V] [TRT] Fastest Tactic: 1002 Time: 0.146304
[04/18/2022-02:36:43] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002
[04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,512,8) -> Float(131072,4096,128,64,1) ***************
[04/18/2022-02:36:43] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1075:0 copy (Reformat)
[04/18/2022-02:36:43] [V] [TRT] Tactic: 1002 Time: 0.146688
[04/18/2022-02:36:43] [V] [TRT] Tactic: 0 Time: 0.108032
[04/18/2022-02:36:43] [V] [TRT] Fastest Tactic: 0 Time: 0.108032
[04/18/2022-02:36:43] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,512,8) -> Float(131072,4096,1,2048,32) ***************
[04/18/2022-02:36:43] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1075:0 copy (Reformat)
[04/18/2022-02:36:43] [V] [TRT] Tactic: 1002 Time: 0.142976
[04/18/2022-02:36:43] [V] [TRT] Tactic: 0 Time: 0.12736
[04/18/2022-02:36:43] [V] [TRT] Fastest Tactic: 0 Time: 0.12736
[04/18/2022-02:36:43] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,512,8) -> Float(32768,1024,1:4,512,8) ***************
[04/18/2022-02:36:43] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1075:0 copy (Reformat)
[04/18/2022-02:36:43] [V] [TRT] Tactic: 1002 Time: 0.151808
[04/18/2022-02:36:43] [V] [TRT] Tactic: 0 Time: 0.107392
[04/18/2022-02:36:43] [V] [TRT] Fastest Tactic: 0 Time: 0.107392
[04/18/2022-02:36:43] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,512,8) -> Float(4096,128,128:32,64,1) ***************
[04/18/2022-02:36:43] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1075:0 copy (Reformat)
[04/18/2022-02:36:43] [V] [TRT] Tactic: 1002 Time: 0.170624
[04/18/2022-02:36:43] [V] [TRT] Tactic: 0 Time: 0.338176
[04/18/2022-02:36:43] [V] [TRT] Fastest Tactic: 1002 Time: 0.170624
[04/18/2022-02:36:43] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002
[04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,64,1) -> Float(131072,4096,128,64,1) ***************
[04/18/2022-02:36:43] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1075:0 copy (Reformat)
[04/18/2022-02:36:43] [V] [TRT] Tactic: 1002 Time: 0.146432
[04/18/2022-02:36:43] [V] [TRT] Tactic: 0 Time: 0.108544
[04/18/2022-02:36:43] [V] [TRT] Fastest Tactic: 0 Time: 0.108544
[04/18/2022-02:36:43] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,64,1) -> Float(131072,4096,1,2048,32) ***************
[04/18/2022-02:36:43] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1075:0 copy (Reformat)
[04/18/2022-02:36:43] [V] [TRT] Tactic: 1002 Time: 0.140544
[04/18/2022-02:36:43] [V] [TRT] Tactic: 0 Time: 0.11008
[04/18/2022-02:36:43] [V] [TRT] Fastest Tactic: 0 Time: 0.11008
[04/18/2022-02:36:43] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,64,1) -> Float(32768,1024,1:4,512,8) ***************
[04/18/2022-02:36:43] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1075:0 copy (Reformat)
[04/18/2022-02:36:43] [V] [TRT] Tactic: 1002 Time: 0.151552
[04/18/2022-02:36:43] [V] [TRT] Tactic: 0 Time: 0.108672
[04/18/2022-02:36:43] [V] [TRT] Fastest Tactic: 0 Time: 0.108672
[04/18/2022-02:36:43] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,64,1) -> Float(4096,128,128:32,64,1) ***************
[04/18/2022-02:36:43] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1075:0 copy (Reformat)
[04/18/2022-02:36:43] [V] [TRT] Tactic: 1002 Time: 0.140672
[04/18/2022-02:36:43] [V] [TRT] Tactic: 0 Time: 0.33664
[04/18/2022-02:36:43] [V] [TRT] Fastest Tactic: 1002 Time: 0.140672
[04/18/2022-02:36:43] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002
[04/18/2022-02:36:43] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,128,64,1) -> Float(131072,4096,1,2048,32) ***************
[04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,128,64,1) -> Float(32768,1024,1:4,512,8) ***************
[04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,128,64,1) -> Float(4096,128,128:32,64,1) ***************
[04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,1,2048,32) -> Float(131072,4096,128,64,1) ***************
[04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,1,2048,32) -> Float(32768,1024,1:4,512,8) ***************
[04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,1,2048,32) -> Float(4096,128,128:32,64,1) ***************
[04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(32768,1024,1:4,512,8) -> Float(131072,4096,128,64,1) ***************
[04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(32768,1024,1:4,512,8) -> Float(131072,4096,1,2048,32) ***************
[04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(32768,1024,1:4,512,8) -> Float(4096,128,128:32,64,1) ***************
[04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(4096,128,128:32,64,1) -> Float(131072,4096,128,64,1) ***************
[04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(4096,128,128:32,64,1) -> Float(131072,4096,1,2048,32) ***************
[04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(4096,128,128:32,64,1) -> Float(32768,1024,1:4,512,8) ***************
[04/18/2022-02:36:43] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,128,64,1) -> Float(131072,4096,4096,1,2048,32) ***************
[04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,128,64,1) -> Float(32768,1024,1024,1:4,512,8) ***************
[04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,128,64,1) -> Float(4096,128,128,128:32,64,1) ***************
[04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,1,2048,32) -> Float(131072,4096,4096,128,64,1) ***************
[04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,1,2048,32) -> Float(32768,1024,1024,1:4,512,8) ***************
[04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,1,2048,32) -> Float(4096,128,128,128:32,64,1) ***************
[04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(32768,1024,1024,1:4,512,8) -> Float(131072,4096,4096,128,64,1) ***************
[04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(32768,1024,1024,1:4,512,8) -> Float(131072,4096,4096,1,2048,32) ***************
[04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(32768,1024,1024,1:4,512,8) -> Float(4096,128,128,128:32,64,1) ***************
[04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(4096,128,128,128:32,64,1) -> Float(131072,4096,4096,128,64,1) ***************
[04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(4096,128,128,128:32,64,1) -> Float(131072,4096,4096,1,2048,32) ***************
[04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(4096,128,128,128:32,64,1) -> Float(32768,1024,1024,1:4,512,8) ***************
[04/18/2022-02:36:43] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,128,64,1) -> Float(262144,8192,4096,128,64,1) ***************
[04/18/2022-02:36:43] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1079:0 copy (Reformat)
[04/18/2022-02:36:43] [V] [TRT] Tactic: 1002 Time: 6.08
[04/18/2022-02:36:43] [V] [TRT] Tactic: 0 Time: 0.577664
[04/18/2022-02:36:43] [V] [TRT] Fastest Tactic: 0 Time: 0.577664
[04/18/2022-02:36:43] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:43] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,128,64,1) -> Float(262144,8192,4096,1,2048,32) ***************
[04/18/2022-02:36:43] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1079:0 copy (Reformat)
[04/18/2022-02:36:44] [V] [TRT] Tactic: 1002 Time: 13.6854
[04/18/2022-02:36:44] [V] [TRT] Tactic: 0 Time: 1.07635
[04/18/2022-02:36:44] [V] [TRT] Fastest Tactic: 0 Time: 1.07635
[04/18/2022-02:36:44] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:44] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,128,64,1) -> Float(65536,2048,1024,1:4,512,8) ***************
[04/18/2022-02:36:44] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1079:0 copy (Reformat)
[04/18/2022-02:36:44] [V] [TRT] Tactic: 1002 Time: 13.722
[04/18/2022-02:36:44] [V] [TRT] Tactic: 0 Time: 0.572928
[04/18/2022-02:36:44] [V] [TRT] Fastest Tactic: 0 Time: 0.572928
[04/18/2022-02:36:44] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:44] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,128,64,1) -> Float(8192,256,128,128:32,64,1) ***************
[04/18/2022-02:36:44] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1079:0 copy (Reformat)
[04/18/2022-02:36:44] [V] [TRT] Tactic: 1002 Time: 13.7393
[04/18/2022-02:36:44] [V] [TRT] Tactic: 0 Time: 0.575104
[04/18/2022-02:36:44] [V] [TRT] Fastest Tactic: 0 Time: 0.575104
[04/18/2022-02:36:44] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:44] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,1,2048,32) -> Float(262144,8192,4096,128,64,1) ***************
[04/18/2022-02:36:44] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1079:0 copy (Reformat)
[04/18/2022-02:36:44] [V] [TRT] Tactic: 1002 Time: 1.50861
[04/18/2022-02:36:44] [V] [TRT] Tactic: 0 Time: 1.37472
[04/18/2022-02:36:44] [V] [TRT] Fastest Tactic: 0 Time: 1.37472
[04/18/2022-02:36:44] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:44] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,1,2048,32) -> Float(262144,8192,4096,1,2048,32) ***************
[04/18/2022-02:36:44] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1079:0 copy (Reformat)
[04/18/2022-02:36:45] [V] [TRT] Tactic: 1002 Time: 13.07
[04/18/2022-02:36:45] [V] [TRT] Tactic: 0 Time: 0.624768
[04/18/2022-02:36:45] [V] [TRT] Fastest Tactic: 0 Time: 0.624768
[04/18/2022-02:36:45] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:45] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,1,2048,32) -> Float(65536,2048,1024,1:4,512,8) ***************
[04/18/2022-02:36:45] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1079:0 copy (Reformat)
[04/18/2022-02:36:45] [V] [TRT] Tactic: 1002 Time: 12.4726
[04/18/2022-02:36:45] [V] [TRT] Tactic: 0 Time: 0.555392
[04/18/2022-02:36:45] [V] [TRT] Fastest Tactic: 0 Time: 0.555392
[04/18/2022-02:36:45] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:45] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,1,2048,32) -> Float(8192,256,128,128:32,64,1) ***************
[04/18/2022-02:36:45] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1079:0 copy (Reformat)
[04/18/2022-02:36:45] [V] [TRT] Tactic: 1002 Time: 13.0193
[04/18/2022-02:36:45] [V] [TRT] Tactic: 0 Time: 0.69888
[04/18/2022-02:36:45] [V] [TRT] Fastest Tactic: 0 Time: 0.69888
[04/18/2022-02:36:45] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:45] [V] [TRT] *************** Autotuning Reformat: Float(32768,1024,1024,1:4,512,8) -> Float(262144,8192,4096,128,64,1) ***************
[04/18/2022-02:36:45] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1079:0 copy (Reformat)
[04/18/2022-02:36:45] [V] [TRT] Tactic: 1002 Time: 1.5287
[04/18/2022-02:36:45] [V] [TRT] Tactic: 0 Time: 0.652288
[04/18/2022-02:36:45] [V] [TRT] Fastest Tactic: 0 Time: 0.652288
[04/18/2022-02:36:45] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:45] [V] [TRT] *************** Autotuning Reformat: Float(32768,1024,1024,1:4,512,8) -> Float(262144,8192,4096,1,2048,32) ***************
[04/18/2022-02:36:45] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1079:0 copy (Reformat)
[04/18/2022-02:36:45] [V] [TRT] Tactic: 1002 Time: 12.4969
[04/18/2022-02:36:45] [V] [TRT] Tactic: 0 Time: 0.55168
[04/18/2022-02:36:45] [V] [TRT] Fastest Tactic: 0 Time: 0.55168
[04/18/2022-02:36:45] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:45] [V] [TRT] *************** Autotuning Reformat: Float(32768,1024,1024,1:4,512,8) -> Float(65536,2048,1024,1:4,512,8) ***************
[04/18/2022-02:36:45] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1079:0 copy (Reformat)
[04/18/2022-02:36:46] [V] [TRT] Tactic: 1002 Time: 13.0894
[04/18/2022-02:36:46] [V] [TRT] Tactic: 0 Time: 0.560896
[04/18/2022-02:36:46] [V] [TRT] Fastest Tactic: 0 Time: 0.560896
[04/18/2022-02:36:46] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:46] [V] [TRT] *************** Autotuning Reformat: Float(32768,1024,1024,1:4,512,8) -> Float(8192,256,128,128:32,64,1) ***************
[04/18/2022-02:36:46] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1079:0 copy (Reformat)
[04/18/2022-02:36:46] [V] [TRT] Tactic: 1002 Time: 12.494
[04/18/2022-02:36:46] [V] [TRT] Tactic: 0 Time: 0.695936
[04/18/2022-02:36:46] [V] [TRT] Fastest Tactic: 0 Time: 0.695936
[04/18/2022-02:36:46] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:46] [V] [TRT] *************** Autotuning Reformat: Float(4096,128,128,128:32,64,1) -> Float(262144,8192,4096,128,64,1) ***************
[04/18/2022-02:36:46] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1079:0 copy (Reformat)
[04/18/2022-02:36:46] [V] [TRT] Tactic: 1002 Time: 1.52934
[04/18/2022-02:36:46] [V] [TRT] Tactic: 0 Time: 0.563968
[04/18/2022-02:36:46] [V] [TRT] Fastest Tactic: 0 Time: 0.563968
[04/18/2022-02:36:46] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:46] [V] [TRT] *************** Autotuning Reformat: Float(4096,128,128,128:32,64,1) -> Float(262144,8192,4096,1,2048,32) ***************
[04/18/2022-02:36:46] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1079:0 copy (Reformat)
[04/18/2022-02:36:46] [V] [TRT] Tactic: 1002 Time: 13.0989
[04/18/2022-02:36:46] [V] [TRT] Tactic: 0 Time: 0.566272
[04/18/2022-02:36:46] [V] [TRT] Fastest Tactic: 0 Time: 0.566272
[04/18/2022-02:36:46] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:46] [V] [TRT] *************** Autotuning Reformat: Float(4096,128,128,128:32,64,1) -> Float(65536,2048,1024,1:4,512,8) ***************
[04/18/2022-02:36:46] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1079:0 copy (Reformat)
[04/18/2022-02:36:46] [V] [TRT] Tactic: 1002 Time: 12.4963
[04/18/2022-02:36:46] [V] [TRT] Tactic: 0 Time: 0.558848
[04/18/2022-02:36:46] [V] [TRT] Fastest Tactic: 0 Time: 0.558848
[04/18/2022-02:36:46] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:46] [V] [TRT] *************** Autotuning Reformat: Float(4096,128,128,128:32,64,1) -> Float(8192,256,128,128:32,64,1) ***************
[04/18/2022-02:36:46] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1079:0 copy (Reformat)
[04/18/2022-02:36:47] [V] [TRT] Tactic: 1002 Time: 12.4968
[04/18/2022-02:36:47] [V] [TRT] Tactic: 0 Time: 0.697856
[04/18/2022-02:36:47] [V] [TRT] Fastest Tactic: 0 Time: 0.697856
[04/18/2022-02:36:47] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:47] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:47] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,128,64,1) -> Float(262144,8192,4096,128,64,1) ***************
[04/18/2022-02:36:47] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1079:0 copy (Reformat)
[04/18/2022-02:36:47] [V] [TRT] Tactic: 1002 Time: 5.49978
[04/18/2022-02:36:47] [V] [TRT] Tactic: 0 Time: 0.553344
[04/18/2022-02:36:47] [V] [TRT] Fastest Tactic: 0 Time: 0.553344
[04/18/2022-02:36:47] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:47] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,128,64,1) -> Float(262144,8192,4096,1,2048,32) ***************
[04/18/2022-02:36:47] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1079:0 copy (Reformat)
[04/18/2022-02:36:47] [V] [TRT] Tactic: 1002 Time: 13.7057
[04/18/2022-02:36:47] [V] [TRT] Tactic: 0 Time: 0.558976
[04/18/2022-02:36:47] [V] [TRT] Fastest Tactic: 0 Time: 0.558976
[04/18/2022-02:36:47] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:47] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,128,64,1) -> Float(65536,2048,1024,1:4,512,8) ***************
[04/18/2022-02:36:47] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1079:0 copy (Reformat)
[04/18/2022-02:36:47] [V] [TRT] Tactic: 1002 Time: 13.0813
[04/18/2022-02:36:47] [V] [TRT] Tactic: 0 Time: 0.559872
[04/18/2022-02:36:47] [V] [TRT] Fastest Tactic: 0 Time: 0.559872
[04/18/2022-02:36:47] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:47] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,128,64,1) -> Float(8192,256,128,128:32,64,1) ***************
[04/18/2022-02:36:47] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1079:0 copy (Reformat)
[04/18/2022-02:36:48] [V] [TRT] Tactic: 1002 Time: 13.7149
[04/18/2022-02:36:48] [V] [TRT] Tactic: 0 Time: 0.555008
[04/18/2022-02:36:48] [V] [TRT] Fastest Tactic: 0 Time: 0.555008
[04/18/2022-02:36:48] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:48] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,1,2048,32) -> Float(262144,8192,4096,128,64,1) ***************
[04/18/2022-02:36:48] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1079:0 copy (Reformat)
[04/18/2022-02:36:48] [V] [TRT] Tactic: 1002 Time: 2.1239
[04/18/2022-02:36:48] [V] [TRT] Tactic: 0 Time: 0.662144
[04/18/2022-02:36:48] [V] [TRT] Fastest Tactic: 0 Time: 0.662144
[04/18/2022-02:36:48] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:48] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,1,2048,32) -> Float(262144,8192,4096,1,2048,32) ***************
[04/18/2022-02:36:48] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1079:0 copy (Reformat)
[04/18/2022-02:36:48] [V] [TRT] Tactic: 1002 Time: 13.0481
[04/18/2022-02:36:48] [V] [TRT] Tactic: 0 Time: 0.549504
[04/18/2022-02:36:48] [V] [TRT] Fastest Tactic: 0 Time: 0.549504
[04/18/2022-02:36:48] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:48] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,1,2048,32) -> Float(65536,2048,1024,1:4,512,8) ***************
[04/18/2022-02:36:48] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1079:0 copy (Reformat)
[04/18/2022-02:36:48] [V] [TRT] Tactic: 1002 Time: 12.4712
[04/18/2022-02:36:48] [V] [TRT] Tactic: 0 Time: 0.55168
[04/18/2022-02:36:48] [V] [TRT] Fastest Tactic: 0 Time: 0.55168
[04/18/2022-02:36:48] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:48] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,1,2048,32) -> Float(8192,256,128,128:32,64,1) ***************
[04/18/2022-02:36:48] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1079:0 copy (Reformat)
[04/18/2022-02:36:48] [V] [TRT] Tactic: 1002 Time: 12.473
[04/18/2022-02:36:48] [V] [TRT] Tactic: 0 Time: 1.18067
[04/18/2022-02:36:48] [V] [TRT] Fastest Tactic: 0 Time: 1.18067
[04/18/2022-02:36:48] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:48] [V] [TRT] *************** Autotuning Reformat: Float(32768,1024,1024,1:4,512,8) -> Float(262144,8192,4096,128,64,1) ***************
[04/18/2022-02:36:48] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1079:0 copy (Reformat)
[04/18/2022-02:36:48] [V] [TRT] Tactic: 1002 Time: 1.5241
[04/18/2022-02:36:48] [V] [TRT] Tactic: 0 Time: 1.26682
[04/18/2022-02:36:48] [V] [TRT] Fastest Tactic: 0 Time: 1.26682
[04/18/2022-02:36:48] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:48] [V] [TRT] *************** Autotuning Reformat: Float(32768,1024,1024,1:4,512,8) -> Float(262144,8192,4096,1,2048,32) ***************
[04/18/2022-02:36:48] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1079:0 copy (Reformat)
[04/18/2022-02:36:49] [V] [TRT] Tactic: 1002 Time: 12.4972
[04/18/2022-02:36:49] [V] [TRT] Tactic: 0 Time: 0.633216
[04/18/2022-02:36:49] [V] [TRT] Fastest Tactic: 0 Time: 0.633216
[04/18/2022-02:36:49] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:49] [V] [TRT] *************** Autotuning Reformat: Float(32768,1024,1024,1:4,512,8) -> Float(65536,2048,1024,1:4,512,8) ***************
[04/18/2022-02:36:49] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1079:0 copy (Reformat)
[04/18/2022-02:36:49] [V] [TRT] Tactic: 1002 Time: 13.1103
[04/18/2022-02:36:49] [V] [TRT] Tactic: 0 Time: 0.555904
[04/18/2022-02:36:49] [V] [TRT] Fastest Tactic: 0 Time: 0.555904
[04/18/2022-02:36:49] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:49] [V] [TRT] *************** Autotuning Reformat: Float(32768,1024,1024,1:4,512,8) -> Float(8192,256,128,128:32,64,1) ***************
[04/18/2022-02:36:49] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1079:0 copy (Reformat)
[04/18/2022-02:36:49] [V] [TRT] Tactic: 1002 Time: 12.4957
[04/18/2022-02:36:49] [V] [TRT] Tactic: 0 Time: 0.698112
[04/18/2022-02:36:49] [V] [TRT] Fastest Tactic: 0 Time: 0.698112
[04/18/2022-02:36:49] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:49] [V] [TRT] *************** Autotuning Reformat: Float(4096,128,128,128:32,64,1) -> Float(262144,8192,4096,128,64,1) ***************
[04/18/2022-02:36:49] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1079:0 copy (Reformat)
[04/18/2022-02:36:49] [V] [TRT] Tactic: 1002 Time: 1.52397
[04/18/2022-02:36:49] [V] [TRT] Tactic: 0 Time: 0.561536
[04/18/2022-02:36:49] [V] [TRT] Fastest Tactic: 0 Time: 0.561536
[04/18/2022-02:36:49] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:49] [V] [TRT] *************** Autotuning Reformat: Float(4096,128,128,128:32,64,1) -> Float(262144,8192,4096,1,2048,32) ***************
[04/18/2022-02:36:49] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1079:0 copy (Reformat)
[04/18/2022-02:36:49] [V] [TRT] Tactic: 1002 Time: 13.0714
[04/18/2022-02:36:49] [V] [TRT] Tactic: 0 Time: 0.550784
[04/18/2022-02:36:49] [V] [TRT] Fastest Tactic: 0 Time: 0.550784
[04/18/2022-02:36:49] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:49] [V] [TRT] *************** Autotuning Reformat: Float(4096,128,128,128:32,64,1) -> Float(65536,2048,1024,1:4,512,8) ***************
[04/18/2022-02:36:49] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1079:0 copy (Reformat)
[04/18/2022-02:36:50] [V] [TRT] Tactic: 1002 Time: 12.4966
[04/18/2022-02:36:50] [V] [TRT] Tactic: 0 Time: 0.562944
[04/18/2022-02:36:50] [V] [TRT] Fastest Tactic: 0 Time: 0.562944
[04/18/2022-02:36:50] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:50] [V] [TRT] *************** Autotuning Reformat: Float(4096,128,128,128:32,64,1) -> Float(8192,256,128,128:32,64,1) ***************
[04/18/2022-02:36:50] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/input_2_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1079:0 copy (Reformat)
[04/18/2022-02:36:50] [V] [TRT] Tactic: 1002 Time: 13.0884
[04/18/2022-02:36:50] [V] [TRT] Tactic: 0 Time: 0.697472
[04/18/2022-02:36:50] [V] [TRT] Fastest Tactic: 0 Time: 0.697472
[04/18/2022-02:36:50] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:50] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:50] [V] [TRT] *************** Autotuning Reformat: Float(262144,8192,4096,128,64,1) -> Float(262144,8192,4096,1,2048,32) ***************
[04/18/2022-02:36:50] [V] [TRT] *************** Autotuning Reformat: Float(262144,8192,4096,128,64,1) -> Float(65536,2048,1024,1:4,512,8) ***************
[04/18/2022-02:36:50] [V] [TRT] *************** Autotuning Reformat: Float(262144,8192,4096,128,64,1) -> Float(8192,256,128,128:32,64,1) ***************
[04/18/2022-02:36:50] [V] [TRT] *************** Autotuning Reformat: Float(262144,8192,4096,1,2048,32) -> Float(262144,8192,4096,128,64,1) ***************
[04/18/2022-02:36:50] [V] [TRT] *************** Autotuning Reformat: 
Float(262144,8192,4096,1,2048,32) -> Float(65536,2048,1024,1:4,512,8) *************** [04/18/2022-02:36:50] [V] [TRT] *************** Autotuning Reformat: Float(262144,8192,4096,1,2048,32) -> Float(8192,256,128,128:32,64,1) *************** [04/18/2022-02:36:50] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1024,1:4,512,8) -> Float(262144,8192,4096,128,64,1) *************** [04/18/2022-02:36:50] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1024,1:4,512,8) -> Float(262144,8192,4096,1,2048,32) *************** [04/18/2022-02:36:50] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1024,1:4,512,8) -> Float(8192,256,128,128:32,64,1) *************** [04/18/2022-02:36:50] [V] [TRT] *************** Autotuning Reformat: Float(8192,256,128,128:32,64,1) -> Float(262144,8192,4096,128,64,1) *************** [04/18/2022-02:36:50] [V] [TRT] *************** Autotuning Reformat: Float(8192,256,128,128:32,64,1) -> Float(262144,8192,4096,1,2048,32) *************** [04/18/2022-02:36:50] [V] [TRT] *************** Autotuning Reformat: Float(8192,256,128,128:32,64,1) -> Float(65536,2048,1024,1:4,512,8) *************** [04/18/2022-02:36:50] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:50] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1,1) -> Float(524288,8192,128,2,1) *************** [04/18/2022-02:36:50] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/stack_Unsqueeze__1082:0 copy (Reformat) [04/18/2022-02:36:50] [V] [TRT] Tactic: 1002 Time: 11.5891 [04/18/2022-02:36:50] [V] [TRT] Tactic: 0 Time: 1.77523 [04/18/2022-02:36:50] [V] [TRT] Fastest Tactic: 0 Time: 1.77523 [04/18/2022-02:36:50] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:50] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,1,64,64) -> Float(524288,8192,128,2,1) *************** [04/18/2022-02:36:50] [V] [TRT] 
--------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/stack_Unsqueeze__1082:0 copy (Reformat) [04/18/2022-02:36:50] [V] [TRT] Tactic: 1002 Time: 2.28352 [04/18/2022-02:36:50] [V] [TRT] Tactic: 0 Time: 1.82963 [04/18/2022-02:36:50] [V] [TRT] Fastest Tactic: 0 Time: 1.82963 [04/18/2022-02:36:50] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:50] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,1:4,16,16) -> Float(524288,8192,128,2,1) *************** [04/18/2022-02:36:50] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/stack_Unsqueeze__1082:0 copy (Reformat) [04/18/2022-02:36:50] [V] [TRT] Tactic: 1002 Time: 2.18816 [04/18/2022-02:36:50] [V] [TRT] Tactic: 0 Time: 2.29466 [04/18/2022-02:36:50] [V] [TRT] Fastest Tactic: 1002 Time: 2.18816 [04/18/2022-02:36:50] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002 [04/18/2022-02:36:50] [V] [TRT] *************** Autotuning Reformat: Float(8192,128,64:32,1,1) -> Float(524288,8192,128,2,1) *************** [04/18/2022-02:36:50] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/stack_Unsqueeze__1082:0 copy (Reformat) [04/18/2022-02:36:50] [V] [TRT] Tactic: 1002 Time: 1.82259 [04/18/2022-02:36:51] [V] [TRT] Tactic: 0 Time: 1.84486 [04/18/2022-02:36:51] [V] [TRT] Fastest Tactic: 1002 Time: 1.82259 [04/18/2022-02:36:51] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002 [04/18/2022-02:36:51] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1,1) -> Float(524288,8192,128,2,1) *************** [04/18/2022-02:36:51] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/stack_Unsqueeze__1083:0 copy (Reformat) 
[04/18/2022-02:36:51] [V] [TRT] Tactic: 1002 Time: 11.5812 [04/18/2022-02:36:51] [V] [TRT] Tactic: 0 Time: 1.776 [04/18/2022-02:36:51] [V] [TRT] Fastest Tactic: 0 Time: 1.776 [04/18/2022-02:36:51] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,1,64,64) -> Float(524288,8192,128,2,1) *************** [04/18/2022-02:36:51] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/stack_Unsqueeze__1083:0 copy (Reformat) [04/18/2022-02:36:51] [V] [TRT] Tactic: 1002 Time: 2.30131 [04/18/2022-02:36:51] [V] [TRT] Tactic: 0 Time: 2.30131 [04/18/2022-02:36:51] [V] [TRT] Fastest Tactic: 1002 Time: 2.30131 [04/18/2022-02:36:51] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002 [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,1:4,16,16) -> Float(524288,8192,128,2,1) *************** [04/18/2022-02:36:51] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/stack_Unsqueeze__1083:0 copy (Reformat) [04/18/2022-02:36:51] [V] [TRT] Tactic: 1002 Time: 2.94054 [04/18/2022-02:36:51] [V] [TRT] Tactic: 0 Time: 2.46656 [04/18/2022-02:36:51] [V] [TRT] Fastest Tactic: 0 Time: 2.46656 [04/18/2022-02:36:51] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(8192,128,64:32,1,1) -> Float(524288,8192,128,2,1) *************** [04/18/2022-02:36:51] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_13/2_dn_lvl_3/combine/stack_Unsqueeze__1083:0 copy (Reformat) [04/18/2022-02:36:51] [V] [TRT] Tactic: 1002 Time: 2.42944 [04/18/2022-02:36:51] [V] [TRT] Tactic: 0 Time: 1.83296 [04/18/2022-02:36:51] [V] [TRT] Fastest Tactic: 0 Time: 1.83296 [04/18/2022-02:36:51] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: 
Reformat Tactic: 0 [04/18/2022-02:36:51] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:51] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:51] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1,1) -> Float(262144,4096,1,64,64) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1,1) -> Float(65536,1024,1:4,16,16) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1,1) -> Float(8192,128,64:32,1,1) *************** [04/18/2022-02:36:51] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1,1) -> Float(262144,4096,1,64,64) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1,1) -> Float(65536,1024,1:4,16,16) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1,1) -> Float(8192,128,64:32,1,1) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,1,64,64) -> Float(262144,4096,64,1,1) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,1,64,64) -> Float(65536,1024,1:4,16,16) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,1,64,64) -> Float(8192,128,64:32,1,1) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,1:4,16,16) -> Float(262144,4096,64,1,1) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,1:4,16,16) -> Float(262144,4096,1,64,64) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,1:4,16,16) -> 
Float(8192,128,64:32,1,1) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(8192,128,64:32,1,1) -> Float(262144,4096,64,1,1) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(8192,128,64:32,1,1) -> Float(262144,4096,1,64,64) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(8192,128,64:32,1,1) -> Float(65536,1024,1:4,16,16) *************** [04/18/2022-02:36:51] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(262144,1,4096,64) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(8192,4096:32,64,1) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(1:4,8192,128,2) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(262144,4096,64,1) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(8192,4096:32,64,1) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(1:4,8192,128,2) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,4096,64,1) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,1,4096,64) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) 
-> Float(8192,4096:32,64,1) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(1:4,8192,128,2) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(262144,4096,64,1) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(262144,1,4096,64) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(1:4,8192,128,2) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(262144,4096,64,1) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(262144,1,4096,64) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(8192,4096:32,64,1) *************** [04/18/2022-02:36:51] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(262144,1,4096,64) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(8192,4096:32,64,1) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(262144,4096,64,1) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> 
Float(65536,1:4,1024,16) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(8192,4096:32,64,1) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,4096,64,1) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,1,4096,64) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(8192,4096:32,64,1) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(262144,4096,64,1) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(262144,1,4096,64) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(262144,4096,64,1) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(262144,1,4096,64) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(8192,4096:32,64,1) *************** [04/18/2022-02:36:51] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(262144,1,4096,64) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> 
Float(262144,4096,64,1) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,4096,64,1) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,1,4096,64) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(262144,4096,64,1) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(262144,1,4096,64) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:36:51] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(262144,1,4096,64) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(262144,4096,64,1) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,4096,64,1) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,1,4096,64) *************** [04/18/2022-02:36:51] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(262144,1,4096,64) *************** [04/18/2022-02:36:51] [V] 
[TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(8192,4096:32,64,1) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(262144,4096,64,1) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(8192,4096:32,64,1) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,4096,64,1) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,1,4096,64) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(8192,4096:32,64,1) *************** [04/18/2022-02:36:51] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(262144,4096,64,1) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,4096,64,1) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(262144,4096,64,1) *************** [04/18/2022-02:36:51] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(65536,1,2048,64) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> 
Float(2048,1024:32,32,1) *************** [04/18/2022-02:36:51] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(262144,1,4096,64) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(8192,4096:32,64,1) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(262144,4096,64,1) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(8192,4096:32,64,1) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,4096,64,1) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,1,4096,64) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(8192,4096:32,64,1) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(262144,4096,64,1) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(262144,1,4096,64) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:36:51] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(65536,1,2048,64) *************** [04/18/2022-02:36:51] [V] 
[TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(2048,1024:32,32,1) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(65536,1024,32,1) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(2048,1024:32,32,1) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1024,32,1) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1,2048,64) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(2048,1024:32,32,1) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1024,32,1) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1,2048,64) *************** [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:36:51] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(196608,6144,192,3,1) *************** [04/18/2022-02:36:51] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Unsqueeze__1066:0 copy (Reformat) [04/18/2022-02:36:51] [V] [TRT] Tactic: 1002 Time: 2.8512 [04/18/2022-02:36:51] [V] [TRT] Tactic: 0 Time: 0.564864 [04/18/2022-02:36:51] [V] 
[TRT] Fastest Tactic: 0 Time: 0.564864 [04/18/2022-02:36:51] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,32,32) -> Float(196608,6144,192,3,1) *************** [04/18/2022-02:36:51] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Unsqueeze__1066:0 copy (Reformat) [04/18/2022-02:36:51] [V] [TRT] Tactic: 1002 Time: 0.698624 [04/18/2022-02:36:51] [V] [TRT] Tactic: 0 Time: 1.2087 [04/18/2022-02:36:51] [V] [TRT] Fastest Tactic: 1002 Time: 0.698624 [04/18/2022-02:36:51] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002 [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,8,8) -> Float(196608,6144,192,3,1) *************** [04/18/2022-02:36:51] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Unsqueeze__1066:0 copy (Reformat) [04/18/2022-02:36:51] [V] [TRT] Tactic: 1002 Time: 0.697856 [04/18/2022-02:36:51] [V] [TRT] Tactic: 0 Time: 1.28563 [04/18/2022-02:36:51] [V] [TRT] Fastest Tactic: 1002 Time: 0.697856 [04/18/2022-02:36:51] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002 [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,1,1) -> Float(196608,6144,192,3,1) *************** [04/18/2022-02:36:51] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_12/2_dn_lvl_4/combine/stack_Unsqueeze__1066:0 copy (Reformat) [04/18/2022-02:36:51] [V] [TRT] Tactic: 1002 Time: 0.697344 [04/18/2022-02:36:51] [V] [TRT] Tactic: 0 Time: 0.567808 [04/18/2022-02:36:51] [V] [TRT] Fastest Tactic: 0 Time: 0.567808 [04/18/2022-02:36:51] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:51] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:51] [V] [TRT] 
*************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(196608,6144,192,3,1) *************** [04/18/2022-02:36:51] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/stack_Unsqueeze__1094:0 copy (Reformat) [04/18/2022-02:36:51] [V] [TRT] Tactic: 1002 Time: 2.84851 [04/18/2022-02:36:51] [V] [TRT] Tactic: 0 Time: 0.567296 [04/18/2022-02:36:51] [V] [TRT] Fastest Tactic: 0 Time: 0.567296 [04/18/2022-02:36:51] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,32,32) -> Float(196608,6144,192,3,1) *************** [04/18/2022-02:36:51] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/stack_Unsqueeze__1094:0 copy (Reformat) [04/18/2022-02:36:51] [V] [TRT] Tactic: 1002 Time: 0.698368 [04/18/2022-02:36:51] [V] [TRT] Tactic: 0 Time: 0.567296 [04/18/2022-02:36:51] [V] [TRT] Fastest Tactic: 0 Time: 0.567296 [04/18/2022-02:36:51] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:51] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,8,8) -> Float(196608,6144,192,3,1) *************** [04/18/2022-02:36:51] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/stack_Unsqueeze__1094:0 copy (Reformat) [04/18/2022-02:36:51] [V] [TRT] Tactic: 1002 Time: 0.698624 [04/18/2022-02:36:52] [V] [TRT] Tactic: 0 Time: 0.626432 [04/18/2022-02:36:52] [V] [TRT] Fastest Tactic: 0 Time: 0.626432 [04/18/2022-02:36:52] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,1,1) -> Float(196608,6144,192,3,1) *************** [04/18/2022-02:36:52] [V] [TRT] --------------- Timing Runner: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/stack_Unsqueeze__1094:0 copy (Reformat) [04/18/2022-02:36:52] [V] [TRT] Tactic: 1002 Time: 0.736 [04/18/2022-02:36:52] [V] [TRT] Tactic: 0 Time: 0.569472 [04/18/2022-02:36:52] [V] [TRT] Fastest Tactic: 0 Time: 0.569472 [04/18/2022-02:36:52] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:52] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(196608,6144,192,3,1) *************** [04/18/2022-02:36:52] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/stack_Unsqueeze__1095:0 copy (Reformat) [04/18/2022-02:36:52] [V] [TRT] Tactic: 1002 Time: 2.85478 [04/18/2022-02:36:52] [V] [TRT] Tactic: 0 Time: 0.563584 [04/18/2022-02:36:52] [V] [TRT] Fastest Tactic: 0 Time: 0.563584 [04/18/2022-02:36:52] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,32,32) -> Float(196608,6144,192,3,1) *************** [04/18/2022-02:36:52] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/stack_Unsqueeze__1095:0 copy (Reformat) [04/18/2022-02:36:52] [V] [TRT] Tactic: 1002 Time: 0.69888 [04/18/2022-02:36:52] [V] [TRT] Tactic: 0 Time: 0.572032 [04/18/2022-02:36:52] [V] [TRT] Fastest Tactic: 0 Time: 0.572032 [04/18/2022-02:36:52] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,8,8) -> Float(196608,6144,192,3,1) *************** [04/18/2022-02:36:52] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/stack_Unsqueeze__1095:0 copy (Reformat) [04/18/2022-02:36:52] [V] [TRT] Tactic: 1002 Time: 0.697856 
[04/18/2022-02:36:52] [V] [TRT] Tactic: 0 Time: 0.601472
[04/18/2022-02:36:52] [V] [TRT] Fastest Tactic: 0 Time: 0.601472
[04/18/2022-02:36:52] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,1,1) -> Float(196608,6144,192,3,1) ***************
[04/18/2022-02:36:52] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_14/2_up_lvl_4/combine/stack_Unsqueeze__1095:0 copy (Reformat)
[04/18/2022-02:36:52] [V] [TRT] Tactic: 1002 Time: 0.705152
[04/18/2022-02:36:52] [V] [TRT] Tactic: 0 Time: 0.586752
[04/18/2022-02:36:52] [V] [TRT] Fastest Tactic: 0 Time: 0.586752
[04/18/2022-02:36:52] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:52] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:52] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:52] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(65536,2048,1,32,32) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(16384,512,1:4,8,8) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(2048,64,64:32,1,1) ***************
[04/18/2022-02:36:52] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(65536,2048,1,32,32) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(16384,512,1:4,8,8) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(2048,64,64:32,1,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,32,32) -> Float(65536,2048,64,1,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,32,32) -> Float(16384,512,1:4,8,8) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,32,32) -> Float(2048,64,64:32,1,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,8,8) -> Float(65536,2048,64,1,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,8,8) -> Float(65536,2048,1,32,32) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,8,8) -> Float(2048,64,64:32,1,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,1,1) -> Float(65536,2048,64,1,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,1,1) -> Float(65536,2048,1,32,32) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,1,1) -> Float(16384,512,1:4,8,8) ***************
[04/18/2022-02:36:52] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(65536,1,2048,32) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(16384,1:4,512,8) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(2048,2048:32,64,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(1:4,4096,128,2) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(65536,2048,64,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(16384,1:4,512,8) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(2048,2048:32,64,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(1:4,4096,128,2) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(65536,2048,64,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(65536,1,2048,32) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(2048,2048:32,64,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(1:4,4096,128,2) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(65536,2048,64,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(65536,1,2048,32) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(16384,1:4,512,8) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(1:4,4096,128,2) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1:4,4096,128,2) -> Float(65536,2048,64,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1:4,4096,128,2) -> Float(65536,1,2048,32) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1:4,4096,128,2) -> Float(16384,1:4,512,8) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1:4,4096,128,2) -> Float(2048,2048:32,64,1) ***************
[04/18/2022-02:36:52] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(65536,1,2048,32) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(16384,1:4,512,8) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(2048,2048:32,64,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(65536,2048,64,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(16384,1:4,512,8) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(2048,2048:32,64,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(65536,2048,64,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(65536,1,2048,32) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(2048,2048:32,64,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(65536,2048,64,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(65536,1,2048,32) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(16384,1:4,512,8) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1:4,4096,128,2) -> Float(65536,2048,64,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1:4,4096,128,2) -> Float(65536,1,2048,32) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1:4,4096,128,2) -> Float(16384,1:4,512,8) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1:4,4096,128,2) -> Float(2048,2048:32,64,1) ***************
[04/18/2022-02:36:52] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:36:52] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:36:52] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(2048,1024:32,32,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(2048,1024:32,32,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(2048,1024:32,32,1) ***************
[04/18/2022-02:36:52] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:36:52] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(512,256:32,16,1) ***************
[04/18/2022-02:36:52] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(2048,1024:32,32,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(2048,1024:32,32,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(2048,1024:32,32,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:36:52] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(65536,2048,1,32,32) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(16384,512,1:4,8,8) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(2048,64,64:32,1,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,32,32) -> Float(65536,2048,64,1,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,32,32) -> Float(16384,512,1:4,8,8) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,32,32) -> Float(2048,64,64:32,1,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,8,8) -> Float(65536,2048,64,1,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,8,8) -> Float(65536,2048,1,32,32) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,8,8) -> Float(2048,64,64:32,1,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,1,1) -> Float(65536,2048,64,1,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,1,1) -> Float(65536,2048,1,32,32) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,1,1) -> Float(16384,512,1:4,8,8) ***************
[04/18/2022-02:36:52] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(512,256:32,16,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(16384,256,16,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(512,256:32,16,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,256,16,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(512,256:32,16,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(16384,256,16,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:36:52] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(49152,3072,192,3,1) ***************
[04/18/2022-02:36:52] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1105:0 copy (Reformat)
[04/18/2022-02:36:52] [V] [TRT] Tactic: 1002 Time: 0.745088
[04/18/2022-02:36:52] [V] [TRT] Tactic: 0 Time: 0.034944
[04/18/2022-02:36:52] [V] [TRT] Fastest Tactic: 0 Time: 0.034944
[04/18/2022-02:36:52] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,16,16) -> Float(49152,3072,192,3,1) ***************
[04/18/2022-02:36:52] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1105:0 copy (Reformat)
[04/18/2022-02:36:52] [V] [TRT] Tactic: 1002 Time: 0.36096
[04/18/2022-02:36:52] [V] [TRT] Tactic: 0 Time: 0.038144
[04/18/2022-02:36:52] [V] [TRT] Fastest Tactic: 0 Time: 0.038144
[04/18/2022-02:36:52] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,4,4) -> Float(49152,3072,192,3,1) ***************
[04/18/2022-02:36:52] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1105:0 copy (Reformat)
[04/18/2022-02:36:52] [V] [TRT] Tactic: 1002 Time: 0.35904
[04/18/2022-02:36:52] [V] [TRT] Tactic: 0 Time: 0.038528
[04/18/2022-02:36:52] [V] [TRT] Fastest Tactic: 0 Time: 0.038528
[04/18/2022-02:36:52] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,1,1) -> Float(49152,3072,192,3,1) ***************
[04/18/2022-02:36:52] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1105:0 copy (Reformat)
[04/18/2022-02:36:52] [V] [TRT] Tactic: 1002 Time: 0.3616
[04/18/2022-02:36:52] [V] [TRT] Tactic: 0 Time: 0.044672
[04/18/2022-02:36:52] [V] [TRT] Fastest Tactic: 0 Time: 0.044672
[04/18/2022-02:36:52] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:52] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(49152,3072,192,3,1) ***************
[04/18/2022-02:36:52] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1106:0 copy (Reformat)
[04/18/2022-02:36:52] [V] [TRT] Tactic: 1002 Time: 0.743552
[04/18/2022-02:36:52] [V] [TRT] Tactic: 0 Time: 0.034944
[04/18/2022-02:36:52] [V] [TRT] Fastest Tactic: 0 Time: 0.034944
[04/18/2022-02:36:52] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,16,16) -> Float(49152,3072,192,3,1) ***************
[04/18/2022-02:36:52] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1106:0 copy (Reformat)
[04/18/2022-02:36:52] [V] [TRT] Tactic: 1002 Time: 0.361728
[04/18/2022-02:36:52] [V] [TRT] Tactic: 0 Time: 0.038016
[04/18/2022-02:36:52] [V] [TRT] Fastest Tactic: 0 Time: 0.038016
[04/18/2022-02:36:52] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,4,4) -> Float(49152,3072,192,3,1) ***************
[04/18/2022-02:36:52] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1106:0 copy (Reformat)
[04/18/2022-02:36:52] [V] [TRT] Tactic: 1002 Time: 0.361472
[04/18/2022-02:36:52] [V] [TRT] Tactic: 0 Time: 0.03904
[04/18/2022-02:36:52] [V] [TRT] Fastest Tactic: 0 Time: 0.03904
[04/18/2022-02:36:52] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,1,1) -> Float(49152,3072,192,3,1) ***************
[04/18/2022-02:36:52] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1106:0 copy (Reformat)
[04/18/2022-02:36:52] [V] [TRT] Tactic: 1002 Time: 0.361728
[04/18/2022-02:36:52] [V] [TRT] Tactic: 0 Time: 0.04416
[04/18/2022-02:36:52] [V] [TRT] Fastest Tactic: 0 Time: 0.04416
[04/18/2022-02:36:52] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:52] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(49152,3072,192,3,1) ***************
[04/18/2022-02:36:52] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1107:0 copy (Reformat)
[04/18/2022-02:36:52] [V] [TRT] Tactic: 1002 Time: 1.33632
[04/18/2022-02:36:52] [V] [TRT] Tactic: 0 Time: 0.034432
[04/18/2022-02:36:52] [V] [TRT] Fastest Tactic: 0 Time: 0.034432
[04/18/2022-02:36:52] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,16,16) -> Float(49152,3072,192,3,1) ***************
[04/18/2022-02:36:52] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1107:0 copy (Reformat)
[04/18/2022-02:36:52] [V] [TRT] Tactic: 1002 Time: 0.360704
[04/18/2022-02:36:52] [V] [TRT] Tactic: 0 Time: 0.038272
[04/18/2022-02:36:52] [V] [TRT] Fastest Tactic: 0 Time: 0.038272
[04/18/2022-02:36:52] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,4,4) -> Float(49152,3072,192,3,1) ***************
[04/18/2022-02:36:52] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1107:0 copy (Reformat)
[04/18/2022-02:36:52] [V] [TRT] Tactic: 1002 Time: 0.360064
[04/18/2022-02:36:52] [V] [TRT] Tactic: 0 Time: 0.03904
[04/18/2022-02:36:52] [V] [TRT] Fastest Tactic: 0 Time: 0.03904
[04/18/2022-02:36:52] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,1,1) -> Float(49152,3072,192,3,1) ***************
[04/18/2022-02:36:52] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_15/2_up_lvl_5/combine/stack_Unsqueeze__1107:0 copy (Reformat)
[04/18/2022-02:36:52] [V] [TRT] Tactic: 1002 Time: 0.361344
[04/18/2022-02:36:52] [V] [TRT] Tactic: 0 Time: 0.044416
[04/18/2022-02:36:52] [V] [TRT] Fastest Tactic: 0 Time: 0.044416
[04/18/2022-02:36:52] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:52] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:52] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:52] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(16384,1024,1,16,16) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(4096,256,1:4,4,4) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(1024,64,64:32,1,1) ***************
[04/18/2022-02:36:52] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(16384,1024,1,16,16) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(4096,256,1:4,4,4) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(1024,64,64:32,1,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,16,16) -> Float(16384,1024,64,1,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,16,16) -> Float(4096,256,1:4,4,4) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,16,16) -> Float(1024,64,64:32,1,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,4,4) -> Float(16384,1024,64,1,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,4,4) -> Float(16384,1024,1,16,16) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,4,4) -> Float(1024,64,64:32,1,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,1,1) -> Float(16384,1024,64,1,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,1,1) -> Float(16384,1024,1,16,16) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,1,1) -> Float(4096,256,1:4,4,4) ***************
[04/18/2022-02:36:52] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(16384,1,1024,16) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(4096,1:4,256,4) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(1024,1024:32,64,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(1:4,2048,128,2) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(16384,1024,64,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(4096,1:4,256,4) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(1024,1024:32,64,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(1:4,2048,128,2) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(16384,1024,64,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(16384,1,1024,16) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(1024,1024:32,64,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(1:4,2048,128,2) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(16384,1024,64,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(16384,1,1024,16) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(4096,1:4,256,4) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(1:4,2048,128,2) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,128,2) -> Float(16384,1024,64,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,128,2) -> Float(16384,1,1024,16) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,128,2) -> Float(4096,1:4,256,4) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,128,2) -> Float(1024,1024:32,64,1) ***************
[04/18/2022-02:36:52] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(16384,1,1024,16) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(4096,1:4,256,4) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(1024,1024:32,64,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(16384,1024,64,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(4096,1:4,256,4) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(1024,1024:32,64,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(16384,1024,64,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(16384,1,1024,16) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(1024,1024:32,64,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(16384,1024,64,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(16384,1,1024,16) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(4096,1:4,256,4) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,128,2) -> Float(16384,1024,64,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,128,2) -> Float(16384,1,1024,16) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,128,2) -> Float(4096,1:4,256,4) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,128,2) -> Float(1024,1024:32,64,1) ***************
[04/18/2022-02:36:52] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(16384,256,16,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,256,16,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(16384,256,16,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:36:52] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(16384,256,16,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,256,16,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:36:52] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(512,256:32,16,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(16384,256,16,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(512,256:32,16,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,256,16,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(512,256:32,16,1) ***************
[04/18/2022-02:36:52] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(16384,256,16,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,256,16,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(16384,256,16,1) ***************
[04/18/2022-02:36:52] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(128,64:32,8,1) ***************
[04/18/2022-02:36:52] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(512,256:32,16,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(16384,256,16,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(512,256:32,16,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,256,16,1) ***************
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:36:52] [V]
[TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(512,256:32,16,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(16384,256,16,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(16384,1,1024,64) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(4096,1:4,256,16) *************** [04/18/2022-02:36:52] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(16384,1024,1,16,16) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(4096,256,1:4,4,4) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(1024,64,64:32,1,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,16,16) -> Float(16384,1024,64,1,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,16,16) -> Float(4096,256,1:4,4,4) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,16,16) -> Float(1024,64,64:32,1,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,4,4) -> Float(16384,1024,64,1,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,4,4) -> Float(16384,1024,1,16,16) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,4,4) -> Float(1024,64,64:32,1,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,1,1) -> Float(16384,1024,64,1,1) *************** [04/18/2022-02:36:52] [V] [TRT] 
*************** Autotuning Reformat: Float(1024,64,64:32,1,1) -> Float(16384,1024,1,16,16) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,1,1) -> Float(4096,256,1:4,4,4) *************** [04/18/2022-02:36:52] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(4096,1,512,64) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1024,1:4,128,16) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(128,64:32,8,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(4096,64,8,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(1024,1:4,128,16) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(128,64:32,8,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,64,8,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,1,512,64) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(128,64:32,8,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,64,8,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,1,512,64) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(1024,1:4,128,16) *************** [04/18/2022-02:36:52] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:52] [V] [TRT] 
*************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(12288,1536,192,3,1) *************** [04/18/2022-02:36:52] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/stack_Unsqueeze__1034:0 copy (Reformat) [04/18/2022-02:36:52] [V] [TRT] Tactic: 1002 Time: 0.208256 [04/18/2022-02:36:52] [V] [TRT] Tactic: 0 Time: 0.02368 [04/18/2022-02:36:52] [V] [TRT] Fastest Tactic: 0 Time: 0.02368 [04/18/2022-02:36:52] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,8,8) -> Float(12288,1536,192,3,1) *************** [04/18/2022-02:36:52] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/stack_Unsqueeze__1034:0 copy (Reformat) [04/18/2022-02:36:52] [V] [TRT] Tactic: 1002 Time: 0.194176 [04/18/2022-02:36:52] [V] [TRT] Tactic: 0 Time: 0.023552 [04/18/2022-02:36:52] [V] [TRT] Fastest Tactic: 0 Time: 0.023552 [04/18/2022-02:36:52] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,2,2) -> Float(12288,1536,192,3,1) *************** [04/18/2022-02:36:52] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/stack_Unsqueeze__1034:0 copy (Reformat) [04/18/2022-02:36:52] [V] [TRT] Tactic: 1002 Time: 0.193408 [04/18/2022-02:36:52] [V] [TRT] Tactic: 0 Time: 0.024576 [04/18/2022-02:36:52] [V] [TRT] Fastest Tactic: 0 Time: 0.024576 [04/18/2022-02:36:52] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,1,1) -> Float(12288,1536,192,3,1) *************** [04/18/2022-02:36:52] [V] [TRT] --------------- Timing Runner: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_10/2_dn_lvl_6/combine/stack_Unsqueeze__1034:0 copy (Reformat) [04/18/2022-02:36:52] [V] [TRT] Tactic: 1002 Time: 0.194432 [04/18/2022-02:36:52] [V] [TRT] Tactic: 0 Time: 0.026112 [04/18/2022-02:36:52] [V] [TRT] Fastest Tactic: 0 Time: 0.026112 [04/18/2022-02:36:52] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:52] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(12288,1536,192,3,1) *************** [04/18/2022-02:36:52] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/stack_Unsqueeze__1118:0 copy (Reformat) [04/18/2022-02:36:52] [V] [TRT] Tactic: 1002 Time: 0.209024 [04/18/2022-02:36:52] [V] [TRT] Tactic: 0 Time: 0.02368 [04/18/2022-02:36:52] [V] [TRT] Fastest Tactic: 0 Time: 0.02368 [04/18/2022-02:36:52] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,8,8) -> Float(12288,1536,192,3,1) *************** [04/18/2022-02:36:52] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/stack_Unsqueeze__1118:0 copy (Reformat) [04/18/2022-02:36:52] [V] [TRT] Tactic: 1002 Time: 0.194048 [04/18/2022-02:36:52] [V] [TRT] Tactic: 0 Time: 0.025216 [04/18/2022-02:36:52] [V] [TRT] Fastest Tactic: 0 Time: 0.025216 [04/18/2022-02:36:52] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,2,2) -> Float(12288,1536,192,3,1) *************** [04/18/2022-02:36:52] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/stack_Unsqueeze__1118:0 copy (Reformat) [04/18/2022-02:36:52] [V] [TRT] Tactic: 1002 Time: 0.194944 
[04/18/2022-02:36:52] [V] [TRT] Tactic: 0 Time: 0.023808 [04/18/2022-02:36:52] [V] [TRT] Fastest Tactic: 0 Time: 0.023808 [04/18/2022-02:36:52] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,1,1) -> Float(12288,1536,192,3,1) *************** [04/18/2022-02:36:52] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/stack_Unsqueeze__1118:0 copy (Reformat) [04/18/2022-02:36:52] [V] [TRT] Tactic: 1002 Time: 0.194304 [04/18/2022-02:36:52] [V] [TRT] Tactic: 0 Time: 0.026112 [04/18/2022-02:36:52] [V] [TRT] Fastest Tactic: 0 Time: 0.026112 [04/18/2022-02:36:52] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:52] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(12288,1536,192,3,1) *************** [04/18/2022-02:36:52] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/stack_Unsqueeze__1119:0 copy (Reformat) [04/18/2022-02:36:52] [V] [TRT] Tactic: 1002 Time: 0.209536 [04/18/2022-02:36:52] [V] [TRT] Tactic: 0 Time: 0.023936 [04/18/2022-02:36:52] [V] [TRT] Fastest Tactic: 0 Time: 0.023936 [04/18/2022-02:36:52] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,8,8) -> Float(12288,1536,192,3,1) *************** [04/18/2022-02:36:52] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/stack_Unsqueeze__1119:0 copy (Reformat) [04/18/2022-02:36:52] [V] [TRT] Tactic: 1002 Time: 0.193536 [04/18/2022-02:36:52] [V] [TRT] Tactic: 0 Time: 0.02368 [04/18/2022-02:36:52] [V] [TRT] Fastest Tactic: 0 Time: 0.02368 [04/18/2022-02:36:52] [V] [TRT] >>>>>>>>>>>>>>> Chose 
Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,2,2) -> Float(12288,1536,192,3,1) *************** [04/18/2022-02:36:52] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/stack_Unsqueeze__1119:0 copy (Reformat) [04/18/2022-02:36:52] [V] [TRT] Tactic: 1002 Time: 0.193792 [04/18/2022-02:36:52] [V] [TRT] Tactic: 0 Time: 0.024192 [04/18/2022-02:36:52] [V] [TRT] Fastest Tactic: 0 Time: 0.024192 [04/18/2022-02:36:52] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,1,1) -> Float(12288,1536,192,3,1) *************** [04/18/2022-02:36:52] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_16/2_up_lvl_6/combine/stack_Unsqueeze__1119:0 copy (Reformat) [04/18/2022-02:36:52] [V] [TRT] Tactic: 1002 Time: 0.194048 [04/18/2022-02:36:52] [V] [TRT] Tactic: 0 Time: 0.02496 [04/18/2022-02:36:52] [V] [TRT] Fastest Tactic: 0 Time: 0.02496 [04/18/2022-02:36:52] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:52] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:52] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:52] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(4096,512,1,8,8) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(1024,128,1:4,2,2) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(512,64,64:32,1,1) *************** [04/18/2022-02:36:52] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> 
Float(4096,512,1,8,8) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(1024,128,1:4,2,2) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(512,64,64:32,1,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,8,8) -> Float(4096,512,64,1,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,8,8) -> Float(1024,128,1:4,2,2) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,8,8) -> Float(512,64,64:32,1,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,2,2) -> Float(4096,512,64,1,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,2,2) -> Float(4096,512,1,8,8) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,2,2) -> Float(512,64,64:32,1,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,1,1) -> Float(4096,512,64,1,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,1,1) -> Float(4096,512,1,8,8) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,1,1) -> Float(1024,128,1:4,2,2) *************** [04/18/2022-02:36:52] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(4096,1,512,8) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(1024,1:4,128,2) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(512,512:32,64,1) *************** 
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(1:4,1024,128,2) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(4096,512,64,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(1024,1:4,128,2) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(512,512:32,64,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(1:4,1024,128,2) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(4096,512,64,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(4096,1,512,8) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(512,512:32,64,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(1:4,1024,128,2) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(4096,512,64,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(4096,1,512,8) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(1024,1:4,128,2) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(1:4,1024,128,2) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1:4,1024,128,2) -> Float(4096,512,64,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1:4,1024,128,2) -> Float(4096,1,512,8) *************** [04/18/2022-02:36:52] [V] [TRT] 
*************** Autotuning Reformat: Float(1:4,1024,128,2) -> Float(1024,1:4,128,2) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1:4,1024,128,2) -> Float(512,512:32,64,1) *************** [04/18/2022-02:36:52] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(4096,1,512,8) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(1024,1:4,128,2) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(512,512:32,64,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(4096,512,64,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(1024,1:4,128,2) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(512,512:32,64,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(4096,512,64,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(4096,1,512,8) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(512,512:32,64,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(4096,512,64,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(4096,1,512,8) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(1024,1:4,128,2) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1:4,1024,128,2) -> Float(4096,512,64,1) 
*************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1:4,1024,128,2) -> Float(4096,1,512,8) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1:4,1024,128,2) -> Float(1024,1:4,128,2) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1:4,1024,128,2) -> Float(512,512:32,64,1) *************** [04/18/2022-02:36:52] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(4096,1,512,64) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1024,1:4,128,16) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(4096,64,8,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(1024,1:4,128,16) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,64,8,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,1,512,64) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,64,8,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,1,512,64) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(1024,1:4,128,16) *************** [04/18/2022-02:36:52] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(4096,1,512,64) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1024,1:4,128,16) *************** 
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(4096,64,8,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(1024,1:4,128,16) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,64,8,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,1,512,64) *************** [04/18/2022-02:36:52] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(4096,1,512,64) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1024,1:4,128,16) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(128,64:32,8,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(4096,64,8,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(1024,1:4,128,16) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(128,64:32,8,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,64,8,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,1,512,64) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(128,64:32,8,1) *************** [04/18/2022-02:36:52] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(4096,64,8,1) *************** [04/18/2022-02:36:52] [V] 
[TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,64,8,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,64,8,1) *************** [04/18/2022-02:36:52] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(1024,1,256,64) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(256,1:4,64,16) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(32,16:32,4,1) *************** [04/18/2022-02:36:52] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(4096,1,512,64) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1024,1:4,128,16) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(128,64:32,8,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(4096,64,8,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(1024,1:4,128,16) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(128,64:32,8,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,64,8,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,1,512,64) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(128,64:32,8,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning 
Reformat: Float(128,64:32,8,1) -> Float(4096,64,8,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,1,512,64) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(1024,1:4,128,16) *************** [04/18/2022-02:36:52] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(4096,512,1,8,8) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(1024,128,1:4,2,2) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(512,64,64:32,1,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,8,8) -> Float(4096,512,64,1,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,8,8) -> Float(1024,128,1:4,2,2) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,8,8) -> Float(512,64,64:32,1,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,2,2) -> Float(4096,512,64,1,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,2,2) -> Float(4096,512,1,8,8) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,2,2) -> Float(512,64,64:32,1,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,1,1) -> Float(4096,512,64,1,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,1,1) -> Float(4096,512,1,8,8) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,1,1) -> 
Float(1024,128,1:4,2,2) *************** [04/18/2022-02:36:52] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(1024,1,256,64) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(256,1:4,64,16) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(32,16:32,4,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(1024,16,4,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(256,1:4,64,16) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(32,16:32,4,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,16,4,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,1,256,64) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(32,16:32,4,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(1024,16,4,1) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(1024,1,256,64) *************** [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(256,1:4,64,16) *************** [04/18/2022-02:36:52] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1,1) -> Float(2048,512,128,2,1) *************** [04/18/2022-02:36:52] [V] [TRT] --------------- Timing Runner: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/stack_Unsqueeze__1129:0 copy (Reformat) [04/18/2022-02:36:52] [V] [TRT] Tactic: 1002 Time: 0.083712 [04/18/2022-02:36:52] [V] [TRT] Tactic: 0 Time: 0.019968 [04/18/2022-02:36:52] [V] [TRT] Fastest Tactic: 0 Time: 0.019968 [04/18/2022-02:36:52] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,1,4,4) -> Float(2048,512,128,2,1) *************** [04/18/2022-02:36:52] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/stack_Unsqueeze__1129:0 copy (Reformat) [04/18/2022-02:36:52] [V] [TRT] Tactic: 1002 Time: 0.110336 [04/18/2022-02:36:52] [V] [TRT] Tactic: 0 Time: 0.019456 [04/18/2022-02:36:52] [V] [TRT] Fastest Tactic: 0 Time: 0.019456 [04/18/2022-02:36:52] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(256,64,1:4,1,1) -> Float(2048,512,128,2,1) *************** [04/18/2022-02:36:52] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/stack_Unsqueeze__1129:0 copy (Reformat) [04/18/2022-02:36:52] [V] [TRT] Tactic: 1002 Time: 0.11008 [04/18/2022-02:36:52] [V] [TRT] Tactic: 0 Time: 0.019712 [04/18/2022-02:36:52] [V] [TRT] Fastest Tactic: 0 Time: 0.019712 [04/18/2022-02:36:52] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(256,64,64:32,1,1) -> Float(2048,512,128,2,1) *************** [04/18/2022-02:36:52] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/stack_Unsqueeze__1129:0 copy (Reformat) [04/18/2022-02:36:52] [V] [TRT] Tactic: 1002 Time: 0.10944 [04/18/2022-02:36:52] [V] [TRT] Tactic: 0 Time: 0.023296 [04/18/2022-02:36:52] [V] [TRT] 
Fastest Tactic: 0 Time: 0.023296
[04/18/2022-02:36:52] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:52] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:52] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1,1) -> Float(2048,512,128,2,1) ***************
[04/18/2022-02:36:52] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/stack_Unsqueeze__1130:0 copy (Reformat)
[04/18/2022-02:36:53] [V] [TRT] Tactic: 1002 Time: 0.083968
[04/18/2022-02:36:53] [V] [TRT] Tactic: 0 Time: 0.018816
[04/18/2022-02:36:53] [V] [TRT] Fastest Tactic: 0 Time: 0.018816
[04/18/2022-02:36:53] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,1,4,4) -> Float(2048,512,128,2,1) ***************
[04/18/2022-02:36:53] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/stack_Unsqueeze__1130:0 copy (Reformat)
[04/18/2022-02:36:53] [V] [TRT] Tactic: 1002 Time: 0.11008
[04/18/2022-02:36:53] [V] [TRT] Tactic: 0 Time: 0.019072
[04/18/2022-02:36:53] [V] [TRT] Fastest Tactic: 0 Time: 0.019072
[04/18/2022-02:36:53] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,64,1:4,1,1) -> Float(2048,512,128,2,1) ***************
[04/18/2022-02:36:53] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/stack_Unsqueeze__1130:0 copy (Reformat)
[04/18/2022-02:36:53] [V] [TRT] Tactic: 1002 Time: 0.109184
[04/18/2022-02:36:53] [V] [TRT] Tactic: 0 Time: 0.019584
[04/18/2022-02:36:53] [V] [TRT] Fastest Tactic: 0 Time: 0.019584
[04/18/2022-02:36:53] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,64,64:32,1,1) -> Float(2048,512,128,2,1) ***************
[04/18/2022-02:36:53] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_17/2_up_lvl_7/combine/stack_Unsqueeze__1130:0 copy (Reformat)
[04/18/2022-02:36:53] [V] [TRT] Tactic: 1002 Time: 0.109312
[04/18/2022-02:36:53] [V] [TRT] Tactic: 0 Time: 0.020992
[04/18/2022-02:36:53] [V] [TRT] Fastest Tactic: 0 Time: 0.020992
[04/18/2022-02:36:53] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:53] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:53] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:53] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1,1) -> Float(1024,256,1,4,4) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1,1) -> Float(256,64,1:4,1,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1,1) -> Float(256,64,64:32,1,1) ***************
[04/18/2022-02:36:53] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1,1) -> Float(1024,256,1,4,4) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1,1) -> Float(256,64,1:4,1,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1,1) -> Float(256,64,64:32,1,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,1,4,4) -> Float(1024,256,64,1,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,1,4,4) -> Float(256,64,1:4,1,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,1,4,4) -> Float(256,64,64:32,1,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,64,1:4,1,1) -> Float(1024,256,64,1,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,64,1:4,1,1) -> Float(1024,256,1,4,4) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,64,1:4,1,1) -> Float(256,64,64:32,1,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,64,64:32,1,1) -> Float(1024,256,64,1,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,64,64:32,1,1) -> Float(1024,256,1,4,4) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,64,64:32,1,1) -> Float(256,64,1:4,1,1) ***************
[04/18/2022-02:36:53] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1) -> Float(1024,1,256,4) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1) -> Float(256,1:4,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1) -> Float(256,256:32,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1) -> Float(1:4,512,128,2) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,4) -> Float(1024,256,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,4) -> Float(256,1:4,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,4) -> Float(256,256:32,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,4) -> Float(1:4,512,128,2) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,1) -> Float(1024,256,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,1) -> Float(1024,1,256,4) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,1) -> Float(256,256:32,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,1) -> Float(1:4,512,128,2) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,256:32,64,1) -> Float(1024,256,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,256:32,64,1) -> Float(1024,1,256,4) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,256:32,64,1) -> Float(256,1:4,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,256:32,64,1) -> Float(1:4,512,128,2) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,128,2) -> Float(1024,256,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,128,2) -> Float(1024,1,256,4) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,128,2) -> Float(256,1:4,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,128,2) -> Float(256,256:32,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1) -> Float(1024,1,256,4) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1) -> Float(256,1:4,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1) -> Float(256,256:32,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,4) -> Float(1024,256,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,4) -> Float(256,1:4,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,4) -> Float(256,256:32,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,1) -> Float(1024,256,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,1) -> Float(1024,1,256,4) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,1) -> Float(256,256:32,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,256:32,64,1) -> Float(1024,256,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,256:32,64,1) -> Float(1024,1,256,4) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,256:32,64,1) -> Float(256,1:4,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,128,2) -> Float(1024,256,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,128,2) -> Float(1024,1,256,4) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,128,2) -> Float(256,1:4,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,128,2) -> Float(256,256:32,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(1024,1,256,64) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(256,1:4,64,16) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(1024,16,4,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(256,1:4,64,16) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,16,4,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,1,256,64) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(1024,16,4,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(1024,1,256,64) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(256,1:4,64,16) ***************
[04/18/2022-02:36:53] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(1024,1,256,64) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(256,1:4,64,16) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(1024,16,4,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(256,1:4,64,16) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,16,4,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,1,256,64) ***************
[04/18/2022-02:36:53] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(1024,1,256,64) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(256,1:4,64,16) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(32,16:32,4,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(1024,16,4,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(256,1:4,64,16) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(32,16:32,4,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,16,4,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,1,256,64) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(32,16:32,4,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(1024,16,4,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(1024,1,256,64) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(256,1:4,64,16) ***************
[04/18/2022-02:36:53] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1) -> Float(1024,1,256,4) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1) -> Float(256,1:4,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1) -> Float(256,256:32,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,4) -> Float(1024,256,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,4) -> Float(256,1:4,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,4) -> Float(256,256:32,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,1) -> Float(1024,256,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,1) -> Float(1024,1,256,4) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,1) -> Float(256,256:32,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,256:32,64,1) -> Float(1024,256,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,256:32,64,1) -> Float(1024,1,256,4) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,256:32,64,1) -> Float(256,1:4,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1) -> Float(1024,1,256,4) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1) -> Float(256,1:4,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1) -> Float(256,256:32,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,4) -> Float(1024,256,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,4) -> Float(256,1:4,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,4) -> Float(256,256:32,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,1) -> Float(1024,256,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,1) -> Float(1024,1,256,4) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,1) -> Float(256,256:32,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,256:32,64,1) -> Float(1024,256,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,256:32,64,1) -> Float(1024,1,256,4) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,256:32,64,1) -> Float(256,1:4,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1) -> Float(1024,1,256,4) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1) -> Float(256,1:4,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1) -> Float(256,256:32,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,4) -> Float(1024,256,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,4) -> Float(256,1:4,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,4) -> Float(256,256:32,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,1) -> Float(1024,256,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,1) -> Float(1024,1,256,4) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,1) -> Float(256,256:32,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,256:32,64,1) -> Float(1024,256,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,256:32,64,1) -> Float(1024,1,256,4) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,256:32,64,1) -> Float(256,1:4,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,64,1) -> Float(1024,256,1,256,4) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,64,1) -> Float(256,64,1:4,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,64,1) -> Float(256,64,64:32,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,1,256,4) -> Float(1024,256,64,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,1,256,4) -> Float(256,64,1:4,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,1,256,4) -> Float(256,64,64:32,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,64,1:4,64,1) -> Float(1024,256,64,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,64,1:4,64,1) -> Float(1024,256,1,256,4) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,64,1:4,64,1) -> Float(256,64,64:32,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,64,64:32,64,1) -> Float(1024,256,64,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,64,64:32,64,1) -> Float(1024,256,1,256,4) ***************
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,64,64:32,64,1) -> Float(256,64,1:4,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,64,1) -> Float(2048,512,128,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1139:0 copy (Reformat)
[04/18/2022-02:36:53] [V] [TRT] Tactic: 1002 Time: 0.082048
[04/18/2022-02:36:53] [V] [TRT] Tactic: 0 Time: 0.0192
[04/18/2022-02:36:53] [V] [TRT] Fastest Tactic: 0 Time: 0.0192
[04/18/2022-02:36:53] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,64,1) -> Float(2048,512,1,256,4) ***************
[04/18/2022-02:36:53] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1139:0 copy (Reformat)
[04/18/2022-02:36:53] [V] [TRT] Tactic: 1002 Time: 0.030592
[04/18/2022-02:36:53] [V] [TRT] Tactic: 0 Time: 0.0192
[04/18/2022-02:36:53] [V] [TRT] Fastest Tactic: 0 Time: 0.0192
[04/18/2022-02:36:53] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,64,1) -> Float(512,128,1:4,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1139:0 copy (Reformat)
[04/18/2022-02:36:53] [V] [TRT] Tactic: 1002 Time: 0.029824
[04/18/2022-02:36:53] [V] [TRT] Tactic: 0 Time: 0.019456
[04/18/2022-02:36:53] [V] [TRT] Fastest Tactic: 0 Time: 0.019456
[04/18/2022-02:36:53] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,64,1) -> Float(512,128,128:32,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1139:0 copy (Reformat)
[04/18/2022-02:36:53] [V] [TRT] Tactic: 1002 Time: 0.031232
[04/18/2022-02:36:53] [V] [TRT] Tactic: 0 Time: 0.02432
[04/18/2022-02:36:53] [V] [TRT] Fastest Tactic: 0 Time: 0.02432
[04/18/2022-02:36:53] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,1,256,4) -> Float(2048,512,128,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1139:0 copy (Reformat)
[04/18/2022-02:36:53] [V] [TRT] Tactic: 1002 Time: 0.055424
[04/18/2022-02:36:53] [V] [TRT] Tactic: 0 Time: 0.019072
[04/18/2022-02:36:53] [V] [TRT] Fastest Tactic: 0 Time: 0.019072
[04/18/2022-02:36:53] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,1,256,4) -> Float(2048,512,1,256,4) ***************
[04/18/2022-02:36:53] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1139:0 copy (Reformat)
[04/18/2022-02:36:53] [V] [TRT] Tactic: 1002 Time: 0.026368
[04/18/2022-02:36:53] [V] [TRT] Tactic: 0 Time: 0.018944
[04/18/2022-02:36:53] [V] [TRT] Fastest Tactic: 0 Time: 0.018944
[04/18/2022-02:36:53] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,1,256,4) -> Float(512,128,1:4,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1139:0 copy (Reformat)
[04/18/2022-02:36:53] [V] [TRT] Tactic: 1002 Time: 0.05504
[04/18/2022-02:36:53] [V] [TRT] Tactic: 0 Time: 0.01856
[04/18/2022-02:36:53] [V] [TRT] Fastest Tactic: 0 Time: 0.01856
[04/18/2022-02:36:53] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,1,256,4) -> Float(512,128,128:32,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1139:0 copy (Reformat)
[04/18/2022-02:36:53] [V] [TRT] Tactic: 1002 Time: 0.053888
[04/18/2022-02:36:53] [V] [TRT] Tactic: 0 Time: 0.022272
[04/18/2022-02:36:53] [V] [TRT] Fastest Tactic: 0 Time: 0.022272
[04/18/2022-02:36:53] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,64,1:4,64,1) -> Float(2048,512,128,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1139:0 copy (Reformat)
[04/18/2022-02:36:53] [V] [TRT] Tactic: 1002 Time: 0.053632
[04/18/2022-02:36:53] [V] [TRT] Tactic: 0 Time: 0.018176
[04/18/2022-02:36:53] [V] [TRT] Fastest Tactic: 0 Time: 0.018176
[04/18/2022-02:36:53] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,64,1:4,64,1) -> Float(2048,512,1,256,4) ***************
[04/18/2022-02:36:53] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1139:0 copy (Reformat)
[04/18/2022-02:36:53] [V] [TRT] Tactic: 1002 Time: 0.05376
[04/18/2022-02:36:53] [V] [TRT] Tactic: 0 Time: 0.018432
[04/18/2022-02:36:53] [V] [TRT] Fastest Tactic: 0 Time: 0.018432
[04/18/2022-02:36:53] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,64,1:4,64,1) -> Float(512,128,1:4,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1139:0 copy (Reformat)
[04/18/2022-02:36:53] [V] [TRT] Tactic: 1002 Time: 0.056192
[04/18/2022-02:36:53] [V] [TRT] Tactic: 0 Time: 0.020224
[04/18/2022-02:36:53] [V] [TRT] Fastest Tactic: 0 Time: 0.020224
[04/18/2022-02:36:53] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,64,1:4,64,1) -> Float(512,128,128:32,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1139:0 copy (Reformat)
[04/18/2022-02:36:53] [V] [TRT] Tactic: 1002 Time: 0.05376
[04/18/2022-02:36:53] [V] [TRT] Tactic: 0 Time: 0.024064
[04/18/2022-02:36:53] [V] [TRT] Fastest Tactic: 0 Time: 0.024064
[04/18/2022-02:36:53] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,64,64:32,64,1) -> Float(2048,512,128,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1139:0 copy (Reformat)
[04/18/2022-02:36:53] [V] [TRT] Tactic: 1002 Time: 0.054528
[04/18/2022-02:36:53] [V] [TRT] Tactic: 0 Time: 0.020352
[04/18/2022-02:36:53] [V] [TRT] Fastest Tactic: 0 Time: 0.020352
[04/18/2022-02:36:53] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,64,64:32,64,1) -> Float(2048,512,1,256,4) ***************
[04/18/2022-02:36:53] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1139:0 copy (Reformat)
[04/18/2022-02:36:53] [V] [TRT] Tactic: 1002 Time: 0.053504
[04/18/2022-02:36:53] [V] [TRT] Tactic: 0 Time: 0.018688
[04/18/2022-02:36:53] [V] [TRT] Fastest Tactic: 0 Time: 0.018688
[04/18/2022-02:36:53] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,64,64:32,64,1) -> Float(512,128,1:4,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1139:0 copy (Reformat)
[04/18/2022-02:36:53] [V] [TRT] Tactic: 1002 Time: 0.05376
[04/18/2022-02:36:53] [V] [TRT] Tactic: 0 Time: 0.020096
[04/18/2022-02:36:53] [V] [TRT] Fastest Tactic: 0 Time: 0.020096
[04/18/2022-02:36:53] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,64,64:32,64,1) -> Float(512,128,128:32,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1139:0 copy (Reformat)
[04/18/2022-02:36:53] [V] [TRT] Tactic: 1002 Time: 0.052864
[04/18/2022-02:36:53] [V] [TRT] Tactic: 0 Time: 0.02368
[04/18/2022-02:36:53] [V] [TRT] Fastest Tactic: 0 Time: 0.02368
[04/18/2022-02:36:53] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:53] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,64,1) -> Float(2048,512,128,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1139:0 copy (Reformat)
[04/18/2022-02:36:53] [V] [TRT] Tactic: 1002 Time: 0.082688
[04/18/2022-02:36:53] [V] [TRT] Tactic: 0 Time: 0.018048
[04/18/2022-02:36:53] [V] [TRT] Fastest Tactic: 0 Time: 0.018048
[04/18/2022-02:36:53] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,64,1) -> Float(2048,512,1,256,4) ***************
[04/18/2022-02:36:53] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1139:0 copy (Reformat)
[04/18/2022-02:36:53] [V] [TRT] Tactic: 1002 Time: 0.0288
[04/18/2022-02:36:53] [V] [TRT] Tactic: 0 Time: 0.017536
[04/18/2022-02:36:53] [V] [TRT] Fastest Tactic: 0 Time: 0.017536
[04/18/2022-02:36:53] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,64,1) -> Float(512,128,1:4,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1139:0 copy (Reformat)
[04/18/2022-02:36:53] [V] [TRT] Tactic: 1002 Time: 0.0288
[04/18/2022-02:36:53] [V] [TRT] Tactic: 0 Time: 0.018048
[04/18/2022-02:36:53] [V] [TRT] Fastest Tactic: 0 Time: 0.018048
[04/18/2022-02:36:53] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,64,1) -> Float(512,128,128:32,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1139:0 copy (Reformat)
[04/18/2022-02:36:53] [V] [TRT] Tactic: 1002 Time: 0.054144
[04/18/2022-02:36:53] [V] [TRT] Tactic: 0 Time: 0.052992
[04/18/2022-02:36:53] [V] [TRT] Fastest Tactic: 0 Time: 0.052992
[04/18/2022-02:36:53] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,1,256,4) -> Float(2048,512,128,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1139:0 copy (Reformat)
[04/18/2022-02:36:53] [V] [TRT] Tactic: 1002 Time: 0.05504
[04/18/2022-02:36:53] [V] [TRT] Tactic: 0 Time: 0.018176
[04/18/2022-02:36:53] [V] [TRT] Fastest Tactic: 0 Time: 0.018176
[04/18/2022-02:36:53] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,1,256,4) -> Float(2048,512,1,256,4) ***************
[04/18/2022-02:36:53] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1139:0 copy (Reformat)
[04/18/2022-02:36:53] [V] [TRT] Tactic: 1002 Time: 0.024448
[04/18/2022-02:36:53] [V] [TRT] Tactic: 0 Time: 0.017792
[04/18/2022-02:36:53] [V] [TRT] Fastest Tactic: 0 Time: 0.017792
[04/18/2022-02:36:53] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,1,256,4) -> Float(512,128,1:4,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1139:0 copy (Reformat)
[04/18/2022-02:36:53] [V] [TRT] Tactic: 1002 Time: 0.053632
[04/18/2022-02:36:53] [V] [TRT] Tactic: 0 Time: 0.018304
[04/18/2022-02:36:53] [V] [TRT] Fastest Tactic: 0 Time: 0.018304
[04/18/2022-02:36:53] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,1,256,4) -> Float(512,128,128:32,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1139:0 copy (Reformat)
[04/18/2022-02:36:53] [V] [TRT] Tactic: 1002 Time: 0.055552
[04/18/2022-02:36:53] [V] [TRT] Tactic: 0 Time: 0.05248
[04/18/2022-02:36:53] [V] [TRT] Fastest Tactic: 0 Time: 0.05248
[04/18/2022-02:36:53] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,64,1:4,64,1) -> Float(2048,512,128,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1139:0 copy (Reformat)
[04/18/2022-02:36:53] [V] [TRT] Tactic: 1002 Time: 0.05376
[04/18/2022-02:36:53] [V] [TRT] Tactic: 0 Time: 0.018944
[04/18/2022-02:36:53] [V] [TRT] Fastest Tactic: 0 Time: 0.018944
[04/18/2022-02:36:53] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,64,1:4,64,1) -> Float(2048,512,1,256,4) ***************
[04/18/2022-02:36:53] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1139:0 copy (Reformat)
[04/18/2022-02:36:53] [V] [TRT] Tactic: 1002 Time: 0.057216
[04/18/2022-02:36:53] [V] [TRT] Tactic: 0 Time: 0.018816
[04/18/2022-02:36:53] [V] [TRT] Fastest Tactic: 0 Time: 0.018816
[04/18/2022-02:36:53] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,64,1:4,64,1) -> Float(512,128,1:4,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1139:0 copy (Reformat)
[04/18/2022-02:36:53] [V] [TRT] Tactic: 1002 Time: 0.055168
[04/18/2022-02:36:53] [V] [TRT] Tactic: 0 Time: 0.019072
[04/18/2022-02:36:53] [V] [TRT] Fastest Tactic: 0 Time: 0.019072
[04/18/2022-02:36:53] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,64,1:4,64,1) -> Float(512,128,128:32,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1139:0 copy (Reformat)
[04/18/2022-02:36:53] [V] [TRT] Tactic: 1002 Time: 0.054144
[04/18/2022-02:36:53] [V] [TRT] Tactic: 0 Time: 0.052608
[04/18/2022-02:36:53] [V] [TRT] Fastest Tactic: 0 Time: 0.052608
[04/18/2022-02:36:53] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,64,64:32,64,1) -> Float(2048,512,128,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1139:0 copy (Reformat)
[04/18/2022-02:36:53] [V] [TRT] Tactic: 1002 Time: 0.05376
[04/18/2022-02:36:53] [V] [TRT] Tactic: 0 Time: 0.022528
[04/18/2022-02:36:53] [V] [TRT] Fastest Tactic: 0 Time: 0.022528
[04/18/2022-02:36:53] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,64,64:32,64,1) -> Float(2048,512,1,256,4) ***************
[04/18/2022-02:36:53] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1139:0 copy (Reformat)
[04/18/2022-02:36:53] [V] [TRT] Tactic: 1002 Time: 0.056448
[04/18/2022-02:36:53] [V] [TRT] Tactic: 0 Time: 0.018688
[04/18/2022-02:36:53] [V] [TRT] Fastest Tactic: 0 Time: 0.018688
[04/18/2022-02:36:53] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,64,64:32,64,1) -> Float(512,128,1:4,64,1) ***************
[04/18/2022-02:36:53] [V] [TRT] --------------- Timing Runner:
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1139:0 copy (Reformat) [04/18/2022-02:36:53] [V] [TRT] Tactic: 1002 Time: 0.054144 [04/18/2022-02:36:53] [V] [TRT] Tactic: 0 Time: 0.01984 [04/18/2022-02:36:53] [V] [TRT] Fastest Tactic: 0 Time: 0.01984 [04/18/2022-02:36:53] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(256,64,64:32,64,1) -> Float(512,128,128:32,64,1) *************** [04/18/2022-02:36:53] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1139:0 copy (Reformat) [04/18/2022-02:36:53] [V] [TRT] Tactic: 1002 Time: 0.054528 [04/18/2022-02:36:53] [V] [TRT] Tactic: 0 Time: 0.051712 [04/18/2022-02:36:53] [V] [TRT] Fastest Tactic: 0 Time: 0.051712 [04/18/2022-02:36:53] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:53] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,128,64,1) -> Float(2048,512,1,256,4) *************** [04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,128,64,1) -> Float(512,128,1:4,64,1) *************** [04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,128,64,1) -> Float(512,128,128:32,64,1) *************** [04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,1,256,4) -> Float(2048,512,128,64,1) *************** [04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,1,256,4) -> Float(512,128,1:4,64,1) *************** [04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,1,256,4) -> Float(512,128,128:32,64,1) 
*************** [04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(512,128,1:4,64,1) -> Float(2048,512,128,64,1) *************** [04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(512,128,1:4,64,1) -> Float(2048,512,1,256,4) *************** [04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(512,128,1:4,64,1) -> Float(512,128,128:32,64,1) *************** [04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128:32,64,1) -> Float(2048,512,128,64,1) *************** [04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128:32,64,1) -> Float(2048,512,1,256,4) *************** [04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128:32,64,1) -> Float(512,128,1:4,64,1) *************** [04/18/2022-02:36:53] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,128,64,1) -> Float(2048,512,512,1,256,4) *************** [04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,128,64,1) -> Float(512,128,128,1:4,64,1) *************** [04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,128,64,1) -> Float(512,128,128,128:32,64,1) *************** [04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,1,256,4) -> Float(2048,512,512,128,64,1) *************** [04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,1,256,4) -> Float(512,128,128,1:4,64,1) *************** [04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,1,256,4) -> Float(512,128,128,128:32,64,1) *************** [04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,1:4,64,1) -> Float(2048,512,512,128,64,1) *************** [04/18/2022-02:36:53] [V] [TRT] *************** 
Autotuning Reformat: Float(512,128,128,1:4,64,1) -> Float(2048,512,512,1,256,4) *************** [04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,1:4,64,1) -> Float(512,128,128,128:32,64,1) *************** [04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,128:32,64,1) -> Float(2048,512,512,128,64,1) *************** [04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,128:32,64,1) -> Float(2048,512,512,1,256,4) *************** [04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,128:32,64,1) -> Float(512,128,128,1:4,64,1) *************** [04/18/2022-02:36:53] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,128,64,1) -> Float(4096,1024,512,128,64,1) *************** [04/18/2022-02:36:53] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1142:0 copy (Reformat) [04/18/2022-02:36:53] [V] [TRT] Tactic: 1002 Time: 0.119424 [04/18/2022-02:36:53] [V] [TRT] Tactic: 0 Time: 0.019456 [04/18/2022-02:36:53] [V] [TRT] Fastest Tactic: 0 Time: 0.019456 [04/18/2022-02:36:53] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,128,64,1) -> Float(4096,1024,512,1,256,4) *************** [04/18/2022-02:36:53] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1142:0 copy (Reformat) [04/18/2022-02:36:53] [V] [TRT] Tactic: 1002 Time: 0.212352 [04/18/2022-02:36:53] [V] [TRT] Tactic: 0 Time: 0.019712 [04/18/2022-02:36:53] [V] [TRT] Fastest Tactic: 0 Time: 0.019712 
[04/18/2022-02:36:53] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,128,64,1) -> Float(1024,256,128,1:4,64,1) *************** [04/18/2022-02:36:53] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1142:0 copy (Reformat) [04/18/2022-02:36:53] [V] [TRT] Tactic: 1002 Time: 0.214656 [04/18/2022-02:36:53] [V] [TRT] Tactic: 0 Time: 0.02048 [04/18/2022-02:36:53] [V] [TRT] Fastest Tactic: 0 Time: 0.02048 [04/18/2022-02:36:53] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,128,64,1) -> Float(1024,256,128,128:32,64,1) *************** [04/18/2022-02:36:53] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1142:0 copy (Reformat) [04/18/2022-02:36:53] [V] [TRT] Tactic: 1002 Time: 0.214656 [04/18/2022-02:36:53] [V] [TRT] Tactic: 0 Time: 0.02752 [04/18/2022-02:36:53] [V] [TRT] Fastest Tactic: 0 Time: 0.02752 [04/18/2022-02:36:53] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,1,256,4) -> Float(4096,1024,512,128,64,1) *************** [04/18/2022-02:36:53] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1142:0 copy (Reformat) [04/18/2022-02:36:53] [V] [TRT] Tactic: 1002 Time: 0.207744 [04/18/2022-02:36:53] [V] [TRT] Tactic: 0 Time: 0.01984 [04/18/2022-02:36:53] [V] [TRT] Fastest Tactic: 0 Time: 0.01984 [04/18/2022-02:36:53] 
[V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,1,256,4) -> Float(4096,1024,512,1,256,4) *************** [04/18/2022-02:36:53] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1142:0 copy (Reformat) [04/18/2022-02:36:53] [V] [TRT] Tactic: 1002 Time: 1.59155 [04/18/2022-02:36:53] [V] [TRT] Tactic: 0 Time: 0.019712 [04/18/2022-02:36:53] [V] [TRT] Fastest Tactic: 0 Time: 0.019712 [04/18/2022-02:36:53] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:53] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,1,256,4) -> Float(1024,256,128,1:4,64,1) *************** [04/18/2022-02:36:53] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1142:0 copy (Reformat) [04/18/2022-02:36:54] [V] [TRT] Tactic: 1002 Time: 0.060288 [04/18/2022-02:36:54] [V] [TRT] Tactic: 0 Time: 0.021376 [04/18/2022-02:36:54] [V] [TRT] Fastest Tactic: 0 Time: 0.021376 [04/18/2022-02:36:54] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:54] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,1,256,4) -> Float(1024,256,128,128:32,64,1) *************** [04/18/2022-02:36:54] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1142:0 copy (Reformat) [04/18/2022-02:36:54] [V] [TRT] Tactic: 1002 Time: 2.1257 [04/18/2022-02:36:54] [V] [TRT] Tactic: 0 Time: 0.028416 [04/18/2022-02:36:54] [V] [TRT] Fastest Tactic: 0 Time: 0.028416 [04/18/2022-02:36:54] [V] [TRT] 
>>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:54] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,1:4,64,1) -> Float(4096,1024,512,128,64,1) *************** [04/18/2022-02:36:54] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1142:0 copy (Reformat) [04/18/2022-02:36:54] [V] [TRT] Tactic: 1002 Time: 0.210304 [04/18/2022-02:36:54] [V] [TRT] Tactic: 0 Time: 0.020352 [04/18/2022-02:36:54] [V] [TRT] Fastest Tactic: 0 Time: 0.020352 [04/18/2022-02:36:54] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:54] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,1:4,64,1) -> Float(4096,1024,512,1,256,4) *************** [04/18/2022-02:36:54] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1142:0 copy (Reformat) [04/18/2022-02:36:54] [V] [TRT] Tactic: 1002 Time: 2.06925 [04/18/2022-02:36:54] [V] [TRT] Tactic: 0 Time: 0.020352 [04/18/2022-02:36:54] [V] [TRT] Fastest Tactic: 0 Time: 0.020352 [04/18/2022-02:36:54] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:54] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,1:4,64,1) -> Float(1024,256,128,1:4,64,1) *************** [04/18/2022-02:36:54] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1142:0 copy (Reformat) [04/18/2022-02:36:54] [V] [TRT] Tactic: 1002 Time: 0.06208 [04/18/2022-02:36:54] [V] [TRT] Tactic: 0 Time: 0.021632 [04/18/2022-02:36:54] [V] [TRT] Fastest Tactic: 0 Time: 0.021632 [04/18/2022-02:36:54] [V] [TRT] >>>>>>>>>>>>>>> Chose 
Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:54] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,1:4,64,1) -> Float(1024,256,128,128:32,64,1) *************** [04/18/2022-02:36:54] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1142:0 copy (Reformat) [04/18/2022-02:36:54] [V] [TRT] Tactic: 1002 Time: 1.59258 [04/18/2022-02:36:54] [V] [TRT] Tactic: 0 Time: 0.028032 [04/18/2022-02:36:54] [V] [TRT] Fastest Tactic: 0 Time: 0.028032 [04/18/2022-02:36:54] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:54] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,128:32,64,1) -> Float(4096,1024,512,128,64,1) *************** [04/18/2022-02:36:54] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1142:0 copy (Reformat) [04/18/2022-02:36:54] [V] [TRT] Tactic: 1002 Time: 0.210176 [04/18/2022-02:36:54] [V] [TRT] Tactic: 0 Time: 0.022784 [04/18/2022-02:36:54] [V] [TRT] Fastest Tactic: 0 Time: 0.022784 [04/18/2022-02:36:54] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:54] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,128:32,64,1) -> Float(4096,1024,512,1,256,4) *************** [04/18/2022-02:36:54] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1142:0 copy (Reformat) [04/18/2022-02:36:54] [V] [TRT] Tactic: 1002 Time: 1.59386 [04/18/2022-02:36:54] [V] [TRT] Tactic: 0 Time: 0.02048 [04/18/2022-02:36:54] [V] [TRT] Fastest Tactic: 0 Time: 0.02048 [04/18/2022-02:36:54] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: 
Reformat Tactic: 0 [04/18/2022-02:36:54] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,128:32,64,1) -> Float(1024,256,128,1:4,64,1) *************** [04/18/2022-02:36:54] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1142:0 copy (Reformat) [04/18/2022-02:36:54] [V] [TRT] Tactic: 1002 Time: 0.062592 [04/18/2022-02:36:54] [V] [TRT] Tactic: 0 Time: 0.021248 [04/18/2022-02:36:54] [V] [TRT] Fastest Tactic: 0 Time: 0.021248 [04/18/2022-02:36:54] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:54] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,128:32,64,1) -> Float(1024,256,128,128:32,64,1) *************** [04/18/2022-02:36:54] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1142:0 copy (Reformat) [04/18/2022-02:36:54] [V] [TRT] Tactic: 1002 Time: 2.17203 [04/18/2022-02:36:54] [V] [TRT] Tactic: 0 Time: 0.030848 [04/18/2022-02:36:54] [V] [TRT] Fastest Tactic: 0 Time: 0.030848 [04/18/2022-02:36:54] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:54] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:54] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,128,64,1) -> Float(4096,1024,512,128,64,1) *************** [04/18/2022-02:36:54] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1142:0 copy (Reformat) [04/18/2022-02:36:54] [V] [TRT] Tactic: 1002 Time: 0.118912 [04/18/2022-02:36:54] [V] [TRT] Tactic: 0 Time: 0.022272 [04/18/2022-02:36:54] [V] [TRT] Fastest Tactic: 0 Time: 0.022272 
[04/18/2022-02:36:54] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:54] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,128,64,1) -> Float(4096,1024,512,1,256,4) *************** [04/18/2022-02:36:54] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1142:0 copy (Reformat) [04/18/2022-02:36:54] [V] [TRT] Tactic: 1002 Time: 0.212992 [04/18/2022-02:36:54] [V] [TRT] Tactic: 0 Time: 0.020992 [04/18/2022-02:36:54] [V] [TRT] Fastest Tactic: 0 Time: 0.020992 [04/18/2022-02:36:54] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:54] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,128,64,1) -> Float(1024,256,128,1:4,64,1) *************** [04/18/2022-02:36:54] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1142:0 copy (Reformat) [04/18/2022-02:36:54] [V] [TRT] Tactic: 1002 Time: 0.214784 [04/18/2022-02:36:54] [V] [TRT] Tactic: 0 Time: 0.021248 [04/18/2022-02:36:54] [V] [TRT] Fastest Tactic: 0 Time: 0.021248 [04/18/2022-02:36:54] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:54] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,128,64,1) -> Float(1024,256,128,128:32,64,1) *************** [04/18/2022-02:36:54] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1142:0 copy (Reformat) [04/18/2022-02:36:54] [V] [TRT] Tactic: 1002 Time: 1.66579 [04/18/2022-02:36:54] [V] [TRT] Tactic: 0 Time: 0.08832 [04/18/2022-02:36:54] [V] [TRT] Fastest Tactic: 0 Time: 0.08832 
[04/18/2022-02:36:54] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:54] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,1,256,4) -> Float(4096,1024,512,128,64,1) *************** [04/18/2022-02:36:54] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1142:0 copy (Reformat) [04/18/2022-02:36:54] [V] [TRT] Tactic: 1002 Time: 0.209024 [04/18/2022-02:36:54] [V] [TRT] Tactic: 0 Time: 0.022016 [04/18/2022-02:36:54] [V] [TRT] Fastest Tactic: 0 Time: 0.022016 [04/18/2022-02:36:54] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:54] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,1,256,4) -> Float(4096,1024,512,1,256,4) *************** [04/18/2022-02:36:54] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1142:0 copy (Reformat) [04/18/2022-02:36:54] [V] [TRT] Tactic: 1002 Time: 1.59373 [04/18/2022-02:36:54] [V] [TRT] Tactic: 0 Time: 0.020352 [04/18/2022-02:36:54] [V] [TRT] Fastest Tactic: 0 Time: 0.020352 [04/18/2022-02:36:54] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:54] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,1,256,4) -> Float(1024,256,128,1:4,64,1) *************** [04/18/2022-02:36:54] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1142:0 copy (Reformat) [04/18/2022-02:36:54] [V] [TRT] Tactic: 1002 Time: 0.062208 [04/18/2022-02:36:54] [V] [TRT] Tactic: 0 Time: 0.021376 [04/18/2022-02:36:54] [V] [TRT] Fastest Tactic: 0 Time: 0.021376 [04/18/2022-02:36:54] 
[V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:54] [V] [TRT] *************** Autotuning Reformat: Float(2048,512,512,1,256,4) -> Float(1024,256,128,128:32,64,1) *************** [04/18/2022-02:36:54] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1142:0 copy (Reformat) [04/18/2022-02:36:54] [V] [TRT] Tactic: 1002 Time: 2.31181 [04/18/2022-02:36:54] [V] [TRT] Tactic: 0 Time: 0.089984 [04/18/2022-02:36:54] [V] [TRT] Fastest Tactic: 0 Time: 0.089984 [04/18/2022-02:36:54] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:54] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,1:4,64,1) -> Float(4096,1024,512,128,64,1) *************** [04/18/2022-02:36:54] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1142:0 copy (Reformat) [04/18/2022-02:36:54] [V] [TRT] Tactic: 1002 Time: 0.21184 [04/18/2022-02:36:54] [V] [TRT] Tactic: 0 Time: 0.022272 [04/18/2022-02:36:54] [V] [TRT] Fastest Tactic: 0 Time: 0.022272 [04/18/2022-02:36:54] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:54] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,1:4,64,1) -> Float(4096,1024,512,1,256,4) *************** [04/18/2022-02:36:54] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1142:0 copy (Reformat) [04/18/2022-02:36:54] [V] [TRT] Tactic: 1002 Time: 1.59347 [04/18/2022-02:36:54] [V] [TRT] Tactic: 0 Time: 0.022656 [04/18/2022-02:36:54] [V] [TRT] Fastest Tactic: 0 Time: 0.022656 [04/18/2022-02:36:54] [V] [TRT] 
>>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:54] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,1:4,64,1) -> Float(1024,256,128,1:4,64,1) *************** [04/18/2022-02:36:54] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1142:0 copy (Reformat) [04/18/2022-02:36:54] [V] [TRT] Tactic: 1002 Time: 0.06656 [04/18/2022-02:36:54] [V] [TRT] Tactic: 0 Time: 0.023168 [04/18/2022-02:36:54] [V] [TRT] Fastest Tactic: 0 Time: 0.023168 [04/18/2022-02:36:54] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:54] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,1:4,64,1) -> Float(1024,256,128,128:32,64,1) *************** [04/18/2022-02:36:54] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1142:0 copy (Reformat) [04/18/2022-02:36:54] [V] [TRT] Tactic: 1002 Time: 2.16384 [04/18/2022-02:36:54] [V] [TRT] Tactic: 0 Time: 0.089216 [04/18/2022-02:36:54] [V] [TRT] Fastest Tactic: 0 Time: 0.089216 [04/18/2022-02:36:54] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:54] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,128:32,64,1) -> Float(4096,1024,512,128,64,1) *************** [04/18/2022-02:36:54] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1142:0 copy (Reformat) [04/18/2022-02:36:54] [V] [TRT] Tactic: 1002 Time: 0.2112 [04/18/2022-02:36:54] [V] [TRT] Tactic: 0 Time: 0.02432 [04/18/2022-02:36:54] [V] [TRT] Fastest Tactic: 0 Time: 0.02432 [04/18/2022-02:36:54] [V] [TRT] >>>>>>>>>>>>>>> Chose 
Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:54] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,128:32,64,1) -> Float(4096,1024,512,1,256,4) *************** [04/18/2022-02:36:54] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1142:0 copy (Reformat) [04/18/2022-02:36:54] [V] [TRT] Tactic: 1002 Time: 1.59667 [04/18/2022-02:36:54] [V] [TRT] Tactic: 0 Time: 0.022656 [04/18/2022-02:36:54] [V] [TRT] Fastest Tactic: 0 Time: 0.022656 [04/18/2022-02:36:54] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:54] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,128:32,64,1) -> Float(1024,256,128,1:4,64,1) *************** [04/18/2022-02:36:54] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1142:0 copy (Reformat) [04/18/2022-02:36:54] [V] [TRT] Tactic: 1002 Time: 0.064768 [04/18/2022-02:36:54] [V] [TRT] Tactic: 0 Time: 0.023168 [04/18/2022-02:36:54] [V] [TRT] Fastest Tactic: 0 Time: 0.023168 [04/18/2022-02:36:54] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:54] [V] [TRT] *************** Autotuning Reformat: Float(512,128,128,128:32,64,1) -> Float(1024,256,128,128:32,64,1) *************** [04/18/2022-02:36:54] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/input_2_up_lvl_7/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1142:0 copy (Reformat) [04/18/2022-02:36:54] [V] [TRT] Tactic: 1002 Time: 1.59322 [04/18/2022-02:36:54] [V] [TRT] Tactic: 0 Time: 0.090496 [04/18/2022-02:36:54] [V] [TRT] Fastest Tactic: 0 Time: 0.090496 [04/18/2022-02:36:54] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: 
Reformat Tactic: 0
[04/18/2022-02:36:54] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:54] [V] [TRT] *************** Autotuning Reformat: Float(4096,1024,512,128,64,1) -> Float(4096,1024,512,1,256,4) ***************
[04/18/2022-02:36:54] [V] [TRT] *************** Autotuning Reformat: Float(4096,1024,512,128,64,1) -> Float(1024,256,128,1:4,64,1) ***************
[04/18/2022-02:36:54] [V] [TRT] *************** Autotuning Reformat: Float(4096,1024,512,128,64,1) -> Float(1024,256,128,128:32,64,1) ***************
[04/18/2022-02:36:54] [V] [TRT] *************** Autotuning Reformat: Float(4096,1024,512,1,256,4) -> Float(4096,1024,512,128,64,1) ***************
[04/18/2022-02:36:54] [V] [TRT] *************** Autotuning Reformat: Float(4096,1024,512,1,256,4) -> Float(1024,256,128,1:4,64,1) ***************
[04/18/2022-02:36:54] [V] [TRT] *************** Autotuning Reformat: Float(4096,1024,512,1,256,4) -> Float(1024,256,128,128:32,64,1) ***************
[04/18/2022-02:36:54] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,128,1:4,64,1) -> Float(4096,1024,512,128,64,1) ***************
[04/18/2022-02:36:54] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,128,1:4,64,1) -> Float(4096,1024,512,1,256,4) ***************
[04/18/2022-02:36:54] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,128,1:4,64,1) -> Float(1024,256,128,128:32,64,1) ***************
[04/18/2022-02:36:54] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,128,128:32,64,1) -> Float(4096,1024,512,128,64,1) ***************
[04/18/2022-02:36:54] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,128,128:32,64,1) -> Float(4096,1024,512,1,256,4) ***************
[04/18/2022-02:36:54] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,128,128:32,64,1) -> Float(1024,256,128,1:4,64,1) ***************
[04/18/2022-02:36:54] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:54] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(8192,1024,128,2,1) ***************
[04/18/2022-02:36:54] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/stack_Unsqueeze__1145:0 copy (Reformat)
[04/18/2022-02:36:54] [V] [TRT] Tactic: 1002 Time: 0.208384
[04/18/2022-02:36:55] [V] [TRT] Tactic: 0 Time: 0.023168
[04/18/2022-02:36:55] [V] [TRT] Fastest Tactic: 0 Time: 0.023168
[04/18/2022-02:36:55] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,8,8) -> Float(8192,1024,128,2,1) ***************
[04/18/2022-02:36:55] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/stack_Unsqueeze__1145:0 copy (Reformat)
[04/18/2022-02:36:55] [V] [TRT] Tactic: 1002 Time: 0.193664
[04/18/2022-02:36:55] [V] [TRT] Tactic: 0 Time: 0.02176
[04/18/2022-02:36:55] [V] [TRT] Fastest Tactic: 0 Time: 0.02176
[04/18/2022-02:36:55] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,2,2) -> Float(8192,1024,128,2,1) ***************
[04/18/2022-02:36:55] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/stack_Unsqueeze__1145:0 copy (Reformat)
[04/18/2022-02:36:55] [V] [TRT] Tactic: 1002 Time: 0.193536
[04/18/2022-02:36:55] [V] [TRT] Tactic: 0 Time: 0.025728
[04/18/2022-02:36:55] [V] [TRT] Fastest Tactic: 0 Time: 0.025728
[04/18/2022-02:36:55] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,1,1) -> Float(8192,1024,128,2,1) ***************
[04/18/2022-02:36:55] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/stack_Unsqueeze__1145:0 copy (Reformat)
[04/18/2022-02:36:55] [V] [TRT] Tactic: 1002 Time: 0.194432
[04/18/2022-02:36:55] [V] [TRT] Tactic: 0 Time: 0.02688
[04/18/2022-02:36:55] [V] [TRT] Fastest Tactic: 0 Time: 0.02688
[04/18/2022-02:36:55] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:55] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(8192,1024,128,2,1) ***************
[04/18/2022-02:36:55] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/stack_Unsqueeze__1146:0 copy (Reformat)
[04/18/2022-02:36:55] [V] [TRT] Tactic: 1002 Time: 0.208256
[04/18/2022-02:36:55] [V] [TRT] Tactic: 0 Time: 0.024192
[04/18/2022-02:36:55] [V] [TRT] Fastest Tactic: 0 Time: 0.024192
[04/18/2022-02:36:55] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,8,8) -> Float(8192,1024,128,2,1) ***************
[04/18/2022-02:36:55] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/stack_Unsqueeze__1146:0 copy (Reformat)
[04/18/2022-02:36:55] [V] [TRT] Tactic: 1002 Time: 0.194944
[04/18/2022-02:36:55] [V] [TRT] Tactic: 0 Time: 0.023296
[04/18/2022-02:36:55] [V] [TRT] Fastest Tactic: 0 Time: 0.023296
[04/18/2022-02:36:55] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,2,2) -> Float(8192,1024,128,2,1) ***************
[04/18/2022-02:36:55] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/stack_Unsqueeze__1146:0 copy (Reformat)
[04/18/2022-02:36:55] [V] [TRT] Tactic: 1002 Time: 0.194688
[04/18/2022-02:36:55] [V] [TRT] Tactic: 0 Time: 0.02432
[04/18/2022-02:36:55] [V] [TRT] Fastest Tactic: 0 Time: 0.02432
[04/18/2022-02:36:55] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,1,1) -> Float(8192,1024,128,2,1) ***************
[04/18/2022-02:36:55] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/stack_Unsqueeze__1146:0 copy (Reformat)
[04/18/2022-02:36:55] [V] [TRT] Tactic: 1002 Time: 0.194816
[04/18/2022-02:36:55] [V] [TRT] Tactic: 0 Time: 0.026368
[04/18/2022-02:36:55] [V] [TRT] Fastest Tactic: 0 Time: 0.026368
[04/18/2022-02:36:55] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:55] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:55] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:55] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(4096,512,1,8,8) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(1024,128,1:4,2,2) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(512,64,64:32,1,1) ***************
[04/18/2022-02:36:55] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(4096,512,1,8,8) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(1024,128,1:4,2,2) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(512,64,64:32,1,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,8,8) -> Float(4096,512,64,1,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,8,8) -> Float(1024,128,1:4,2,2) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,8,8) -> Float(512,64,64:32,1,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,2,2) -> Float(4096,512,64,1,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,2,2) -> Float(4096,512,1,8,8) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,2,2) -> Float(512,64,64:32,1,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,1,1) -> Float(4096,512,64,1,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,1,1) -> Float(4096,512,1,8,8) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,1,1) -> Float(1024,128,1:4,2,2) ***************
[04/18/2022-02:36:55] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(4096,1,512,8) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(1:4,1024,128,2) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(4096,512,64,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(1:4,1024,128,2) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(4096,512,64,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(4096,1,512,8) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(1:4,1024,128,2) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(4096,512,64,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(4096,1,512,8) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(1:4,1024,128,2) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(1:4,1024,128,2) -> Float(4096,512,64,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(1:4,1024,128,2) -> Float(4096,1,512,8) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(1:4,1024,128,2) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(1:4,1024,128,2) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:36:55] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(4096,1,512,8) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(4096,512,64,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(4096,512,64,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(4096,1,512,8) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(4096,512,64,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(4096,1,512,8) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(1:4,1024,128,2) -> Float(4096,512,64,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(1:4,1024,128,2) -> Float(4096,1,512,8) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(1:4,1024,128,2) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(1:4,1024,128,2) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:36:55] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(4096,64,8,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,64,8,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,1,512,64) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,64,8,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:36:55] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(4096,64,8,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,64,8,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,1,512,64) ***************
[04/18/2022-02:36:55] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(128,64:32,8,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(4096,64,8,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(128,64:32,8,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,64,8,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,1,512,64) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(128,64:32,8,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,64,8,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:36:55] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(4096,1,512,8) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(4096,512,64,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(4096,512,64,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(4096,1,512,8) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(4096,512,64,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(4096,1,512,8) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:36:55] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(4096,1,512,8) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(4096,512,64,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(4096,512,64,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(4096,1,512,8) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(4096,512,64,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(4096,1,512,8) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:36:55] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(4096,1,512,8) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(4096,512,64,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(4096,512,64,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(4096,1,512,8) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(4096,512,64,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(4096,1,512,8) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:36:55] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,64,1) -> Float(4096,512,1,512,8) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,64,1) -> Float(1024,128,1:4,128,2) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,64,1) -> Float(512,64,64:32,64,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,512,8) -> Float(4096,512,64,64,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,512,8) -> Float(1024,128,1:4,128,2) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,512,8) -> Float(512,64,64:32,64,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,128,2) -> Float(4096,512,64,64,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,128,2) -> Float(4096,512,1,512,8) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,128,2) -> Float(512,64,64:32,64,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,64,1) -> Float(4096,512,64,64,1) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,64,1) -> Float(4096,512,1,512,8) ***************
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,64,1) -> Float(1024,128,1:4,128,2) ***************
[04/18/2022-02:36:55] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,64,1) -> Float(8192,1024,128,64,1) ***************
[04/18/2022-02:36:55] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1155:0 copy (Reformat)
[04/18/2022-02:36:55] [V] [TRT] Tactic: 1002 Time: 0.20608
[04/18/2022-02:36:55] [V] [TRT] Tactic: 0 Time: 0.021632
[04/18/2022-02:36:55] [V] [TRT] Fastest Tactic: 0 Time: 0.021632
[04/18/2022-02:36:55] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,64,1) -> Float(8192,1024,1,512,8) ***************
[04/18/2022-02:36:55] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1155:0 copy (Reformat)
[04/18/2022-02:36:55] [V] [TRT] Tactic: 1002 Time: 0.03456
[04/18/2022-02:36:55] [V] [TRT] Tactic: 0 Time: 0.022784
[04/18/2022-02:36:55] [V] [TRT] Fastest Tactic: 0 Time: 0.022784
[04/18/2022-02:36:55] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,64,1) -> Float(2048,256,1:4,128,2) ***************
[04/18/2022-02:36:55] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1155:0 copy (Reformat)
[04/18/2022-02:36:55] [V] [TRT] Tactic: 1002 Time: 0.034816
[04/18/2022-02:36:55] [V] [TRT] Tactic: 0 Time: 0.022656
[04/18/2022-02:36:55] [V] [TRT] Fastest Tactic: 0 Time: 0.022656
[04/18/2022-02:36:55] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,64,1) -> Float(1024,128,128:32,64,1) ***************
[04/18/2022-02:36:55] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1155:0 copy (Reformat)
[04/18/2022-02:36:55] [V] [TRT] Tactic: 1002 Time: 0.034816
[04/18/2022-02:36:55] [V] [TRT] Tactic: 0 Time: 0.036096
[04/18/2022-02:36:55] [V] [TRT] Fastest Tactic: 1002 Time: 0.034816
[04/18/2022-02:36:55] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,512,8) -> Float(8192,1024,128,64,1) ***************
[04/18/2022-02:36:55] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1155:0 copy (Reformat)
[04/18/2022-02:36:55] [V] [TRT] Tactic: 1002 Time: 0.062464
[04/18/2022-02:36:55] [V] [TRT] Tactic: 0 Time: 0.022912
[04/18/2022-02:36:55] [V] [TRT] Fastest Tactic: 0 Time: 0.022912
[04/18/2022-02:36:55] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,512,8) -> Float(8192,1024,1,512,8) ***************
[04/18/2022-02:36:55] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1155:0 copy (Reformat)
[04/18/2022-02:36:55] [V] [TRT] Tactic: 1002 Time: 0.027264
[04/18/2022-02:36:55] [V] [TRT] Tactic: 0 Time: 0.021376
[04/18/2022-02:36:55] [V] [TRT] Fastest Tactic: 0 Time: 0.021376
[04/18/2022-02:36:55] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,512,8) -> Float(2048,256,1:4,128,2) ***************
[04/18/2022-02:36:55] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1155:0 copy (Reformat)
[04/18/2022-02:36:55] [V] [TRT] Tactic: 1002 Time: 0.059648
[04/18/2022-02:36:55] [V] [TRT] Tactic: 0 Time: 0.021504
[04/18/2022-02:36:55] [V] [TRT] Fastest Tactic: 0 Time: 0.021504
[04/18/2022-02:36:55] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,512,8) -> Float(1024,128,128:32,64,1) ***************
[04/18/2022-02:36:55] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1155:0 copy (Reformat)
[04/18/2022-02:36:55] [V] [TRT] Tactic: 1002 Time: 0.059776
[04/18/2022-02:36:55] [V] [TRT] Tactic: 0 Time: 0.034816
[04/18/2022-02:36:55] [V] [TRT] Fastest Tactic: 0 Time: 0.034816
[04/18/2022-02:36:55] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,128,2) -> Float(8192,1024,128,64,1) ***************
[04/18/2022-02:36:55] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1155:0 copy (Reformat)
[04/18/2022-02:36:55] [V] [TRT] Tactic: 1002 Time: 0.058496
[04/18/2022-02:36:55] [V] [TRT] Tactic: 0 Time: 0.022272
[04/18/2022-02:36:55] [V] [TRT] Fastest Tactic: 0 Time: 0.022272
[04/18/2022-02:36:55] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,128,2) -> Float(8192,1024,1,512,8) ***************
[04/18/2022-02:36:55] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1155:0 copy (Reformat)
[04/18/2022-02:36:55] [V] [TRT] Tactic: 1002 Time: 0.062592
[04/18/2022-02:36:55] [V] [TRT] Tactic: 0 Time: 0.021888
[04/18/2022-02:36:55] [V] [TRT] Fastest Tactic: 0 Time: 0.021888
[04/18/2022-02:36:55] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,128,2) -> Float(2048,256,1:4,128,2) ***************
[04/18/2022-02:36:55] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1155:0 copy (Reformat)
[04/18/2022-02:36:55] [V] [TRT] Tactic: 1002 Time: 0.059648
[04/18/2022-02:36:55] [V] [TRT] Tactic: 0 Time: 0.0224
[04/18/2022-02:36:55] [V] [TRT] Fastest Tactic: 0 Time: 0.0224
[04/18/2022-02:36:55] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,128,2) -> Float(1024,128,128:32,64,1) ***************
[04/18/2022-02:36:55] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1155:0 copy (Reformat)
[04/18/2022-02:36:55] [V] [TRT] Tactic: 1002 Time: 0.060416
[04/18/2022-02:36:55] [V] [TRT] Tactic: 0 Time: 0.034944
[04/18/2022-02:36:55] [V] [TRT] Fastest Tactic: 0 Time: 0.034944
[04/18/2022-02:36:55] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,64,1) -> Float(8192,1024,128,64,1) ***************
[04/18/2022-02:36:55] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1155:0 copy (Reformat)
[04/18/2022-02:36:55] [V] [TRT] Tactic: 1002 Time: 0.05824
[04/18/2022-02:36:55] [V] [TRT] Tactic: 0 Time: 0.02432
[04/18/2022-02:36:55] [V] [TRT] Fastest Tactic: 0 Time: 0.02432
[04/18/2022-02:36:55] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,64,1) -> Float(8192,1024,1,512,8) ***************
[04/18/2022-02:36:55] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1155:0 copy (Reformat)
[04/18/2022-02:36:55] [V] [TRT] Tactic: 1002 Time: 0.060288
[04/18/2022-02:36:55] [V] [TRT] Tactic: 0 Time: 0.02176
[04/18/2022-02:36:55] [V] [TRT] Fastest Tactic: 0 Time: 0.02176
[04/18/2022-02:36:55] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,64,1) -> Float(2048,256,1:4,128,2) ***************
[04/18/2022-02:36:55] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1155:0 copy (Reformat)
[04/18/2022-02:36:55] [V] [TRT] Tactic: 1002 Time: 0.059008
[04/18/2022-02:36:55] [V] [TRT] Tactic: 0 Time: 0.0224
[04/18/2022-02:36:55] [V] [TRT] Fastest Tactic: 0 Time: 0.0224
[04/18/2022-02:36:55] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,64,1) -> Float(1024,128,128:32,64,1) ***************
[04/18/2022-02:36:55] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1155:0 copy (Reformat)
[04/18/2022-02:36:55] [V] [TRT] Tactic: 1002 Time: 0.059136
[04/18/2022-02:36:55] [V] [TRT] Tactic: 0 Time: 0.036992
[04/18/2022-02:36:55] [V] [TRT] Fastest Tactic: 0 Time: 0.036992
[04/18/2022-02:36:55] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:55] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,64,1) -> Float(8192,1024,128,64,1) ***************
[04/18/2022-02:36:55] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1155:0 copy (Reformat)
[04/18/2022-02:36:55] [V] [TRT] Tactic: 1002 Time: 0.20544
[04/18/2022-02:36:55] [V] [TRT] Tactic: 0 Time: 0.020224
[04/18/2022-02:36:55] [V] [TRT] Fastest Tactic: 0 Time: 0.020224
[04/18/2022-02:36:55] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,64,1) -> Float(8192,1024,1,512,8) ***************
[04/18/2022-02:36:55] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1155:0 copy (Reformat)
[04/18/2022-02:36:55] [V] [TRT] Tactic: 1002 Time: 0.03264
[04/18/2022-02:36:55] [V] [TRT] Tactic: 0 Time: 0.022144
[04/18/2022-02:36:55] [V] [TRT] Fastest Tactic: 0 Time: 0.022144
[04/18/2022-02:36:55] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,64,1) -> Float(2048,256,1:4,128,2) ***************
[04/18/2022-02:36:55] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1155:0 copy (Reformat)
[04/18/2022-02:36:55] [V] [TRT] Tactic: 1002 Time: 0.032896
[04/18/2022-02:36:55] [V] [TRT] Tactic: 0 Time: 0.02112
[04/18/2022-02:36:55] [V] [TRT] Fastest Tactic: 0 Time: 0.02112
[04/18/2022-02:36:55] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,64,1) -> Float(1024,128,128:32,64,1) ***************
[04/18/2022-02:36:55] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1155:0 copy (Reformat)
[04/18/2022-02:36:55] [V] [TRT] Tactic: 1002 Time: 0.058368
[04/18/2022-02:36:55] [V] [TRT] Tactic: 0 Time: 0.087936
[04/18/2022-02:36:55] [V] [TRT] Fastest Tactic: 1002 Time: 0.058368
[04/18/2022-02:36:55] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,512,8) -> Float(8192,1024,128,64,1) ***************
[04/18/2022-02:36:55] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1155:0 copy (Reformat)
[04/18/2022-02:36:55] [V] [TRT] Tactic: 1002 Time: 0.05888
[04/18/2022-02:36:55] [V] [TRT] Tactic: 0 Time: 0.020992
[04/18/2022-02:36:55] [V] [TRT] Fastest Tactic: 0 Time: 0.020992
[04/18/2022-02:36:55] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,512,8) -> Float(8192,1024,1,512,8) ***************
[04/18/2022-02:36:55] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1155:0 copy (Reformat)
[04/18/2022-02:36:55] [V] [TRT] Tactic: 1002 Time: 0.025856
[04/18/2022-02:36:55] [V] [TRT] Tactic: 0 Time: 0.02048
[04/18/2022-02:36:55] [V] [TRT] Fastest Tactic: 0 Time: 0.02048
[04/18/2022-02:36:55] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,512,8) -> Float(2048,256,1:4,128,2) ***************
[04/18/2022-02:36:55] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1155:0 copy (Reformat)
[04/18/2022-02:36:55] [V] [TRT] Tactic: 1002 Time: 0.06016
[04/18/2022-02:36:55] [V] [TRT] Tactic: 0 Time: 0.021248
[04/18/2022-02:36:55] [V] [TRT] Fastest Tactic: 0 Time: 0.021248
[04/18/2022-02:36:55] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,512,8) -> Float(1024,128,128:32,64,1) ***************
[04/18/2022-02:36:55] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1155:0 copy (Reformat)
[04/18/2022-02:36:55] [V] [TRT] Tactic: 1002 Time: 0.060032
[04/18/2022-02:36:55] [V] [TRT] Tactic: 0 Time: 0.089472
[04/18/2022-02:36:55] [V] [TRT] Fastest Tactic: 1002 Time: 0.060032
[04/18/2022-02:36:55] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,128,2) -> Float(8192,1024,128,64,1) ***************
[04/18/2022-02:36:55] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1155:0 copy (Reformat)
[04/18/2022-02:36:55] [V] [TRT] Tactic: 1002 Time: 0.058368
[04/18/2022-02:36:55] [V] [TRT] Tactic: 0 Time: 0.02176
[04/18/2022-02:36:55] [V] [TRT] Fastest Tactic: 0 Time: 0.02176
[04/18/2022-02:36:55] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,128,2) -> Float(8192,1024,1,512,8) ***************
[04/18/2022-02:36:55] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1155:0 copy (Reformat)
[04/18/2022-02:36:55] [V] [TRT] Tactic: 1002 Time: 0.05952
[04/18/2022-02:36:55] [V] [TRT] Tactic: 0 Time: 0.022528
[04/18/2022-02:36:55] [V] [TRT] Fastest Tactic: 0 Time: 0.022528
[04/18/2022-02:36:55] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,128,2) -> Float(2048,256,1:4,128,2) ***************
[04/18/2022-02:36:55] [V] [TRT] --------------- Timing Runner:
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1155:0 copy (Reformat) [04/18/2022-02:36:55] [V] [TRT] Tactic: 1002 Time: 0.059392 [04/18/2022-02:36:55] [V] [TRT] Tactic: 0 Time: 0.022784 [04/18/2022-02:36:55] [V] [TRT] Fastest Tactic: 0 Time: 0.022784 [04/18/2022-02:36:55] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,128,2) -> Float(1024,128,128:32,64,1) *************** [04/18/2022-02:36:55] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1155:0 copy (Reformat) [04/18/2022-02:36:55] [V] [TRT] Tactic: 1002 Time: 0.062464 [04/18/2022-02:36:55] [V] [TRT] Tactic: 0 Time: 0.089856 [04/18/2022-02:36:55] [V] [TRT] Fastest Tactic: 1002 Time: 0.062464 [04/18/2022-02:36:55] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002 [04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,64,1) -> Float(8192,1024,128,64,1) *************** [04/18/2022-02:36:55] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1155:0 copy (Reformat) [04/18/2022-02:36:55] [V] [TRT] Tactic: 1002 Time: 0.058368 [04/18/2022-02:36:55] [V] [TRT] Tactic: 0 Time: 0.024576 [04/18/2022-02:36:55] [V] [TRT] Fastest Tactic: 0 Time: 0.024576 [04/18/2022-02:36:55] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,64,1) -> Float(8192,1024,1,512,8) *************** [04/18/2022-02:36:55] [V] [TRT] --------------- Timing Runner: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1155:0 copy (Reformat) [04/18/2022-02:36:55] [V] [TRT] Tactic: 1002 Time: 0.060544 [04/18/2022-02:36:55] [V] [TRT] Tactic: 0 Time: 0.021504 [04/18/2022-02:36:55] [V] [TRT] Fastest Tactic: 0 Time: 0.021504 [04/18/2022-02:36:55] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,64,1) -> Float(2048,256,1:4,128,2) *************** [04/18/2022-02:36:55] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1155:0 copy (Reformat) [04/18/2022-02:36:55] [V] [TRT] Tactic: 1002 Time: 0.059136 [04/18/2022-02:36:55] [V] [TRT] Tactic: 0 Time: 0.022272 [04/18/2022-02:36:55] [V] [TRT] Fastest Tactic: 0 Time: 0.022272 [04/18/2022-02:36:55] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,64,1) -> Float(1024,128,128:32,64,1) *************** [04/18/2022-02:36:55] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1155:0 copy (Reformat) [04/18/2022-02:36:55] [V] [TRT] Tactic: 1002 Time: 0.060032 [04/18/2022-02:36:55] [V] [TRT] Tactic: 0 Time: 0.092032 [04/18/2022-02:36:55] [V] [TRT] Fastest Tactic: 1002 Time: 0.060032 [04/18/2022-02:36:55] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002 [04/18/2022-02:36:55] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,128,64,1) -> Float(8192,1024,1,512,8) *************** [04/18/2022-02:36:55] 
[V] [TRT] *************** Autotuning Reformat: Float(8192,1024,128,64,1) -> Float(2048,256,1:4,128,2) *************** [04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,128,64,1) -> Float(1024,128,128:32,64,1) *************** [04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1,512,8) -> Float(8192,1024,128,64,1) *************** [04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1,512,8) -> Float(2048,256,1:4,128,2) *************** [04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1,512,8) -> Float(1024,128,128:32,64,1) *************** [04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(2048,256,1:4,128,2) -> Float(8192,1024,128,64,1) *************** [04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(2048,256,1:4,128,2) -> Float(8192,1024,1,512,8) *************** [04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(2048,256,1:4,128,2) -> Float(1024,128,128:32,64,1) *************** [04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,128:32,64,1) -> Float(8192,1024,128,64,1) *************** [04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,128:32,64,1) -> Float(8192,1024,1,512,8) *************** [04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,128:32,64,1) -> Float(2048,256,1:4,128,2) *************** [04/18/2022-02:36:55] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,128,64,1) -> Float(8192,1024,1024,1,512,8) *************** [04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,128,64,1) -> Float(2048,256,256,1:4,128,2) *************** [04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,128,64,1) 
-> Float(1024,128,128,128:32,64,1) *************** [04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,1,512,8) -> Float(8192,1024,1024,128,64,1) *************** [04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,1,512,8) -> Float(2048,256,256,1:4,128,2) *************** [04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,1,512,8) -> Float(1024,128,128,128:32,64,1) *************** [04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(2048,256,256,1:4,128,2) -> Float(8192,1024,1024,128,64,1) *************** [04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(2048,256,256,1:4,128,2) -> Float(8192,1024,1024,1,512,8) *************** [04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(2048,256,256,1:4,128,2) -> Float(1024,128,128,128:32,64,1) *************** [04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,128,128:32,64,1) -> Float(8192,1024,1024,128,64,1) *************** [04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,128,128:32,64,1) -> Float(8192,1024,1024,1,512,8) *************** [04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,128,128:32,64,1) -> Float(2048,256,256,1:4,128,2) *************** [04/18/2022-02:36:55] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,128,64,1) -> Float(16384,2048,1024,128,64,1) *************** [04/18/2022-02:36:55] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1158:0 copy (Reformat) [04/18/2022-02:36:55] [V] [TRT] Tactic: 1002 Time: 0.365568 [04/18/2022-02:36:55] [V] [TRT] Tactic: 0 Time: 0.024832 
[04/18/2022-02:36:55] [V] [TRT] Fastest Tactic: 0 Time: 0.024832 [04/18/2022-02:36:55] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,128,64,1) -> Float(16384,2048,1024,1,512,8) *************** [04/18/2022-02:36:55] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1158:0 copy (Reformat) [04/18/2022-02:36:55] [V] [TRT] Tactic: 1002 Time: 0.393088 [04/18/2022-02:36:55] [V] [TRT] Tactic: 0 Time: 0.025728 [04/18/2022-02:36:55] [V] [TRT] Fastest Tactic: 0 Time: 0.025728 [04/18/2022-02:36:55] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,128,64,1) -> Float(4096,512,256,1:4,128,2) *************** [04/18/2022-02:36:55] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1158:0 copy (Reformat) [04/18/2022-02:36:55] [V] [TRT] Tactic: 1002 Time: 0.395904 [04/18/2022-02:36:55] [V] [TRT] Tactic: 0 Time: 0.027648 [04/18/2022-02:36:55] [V] [TRT] Fastest Tactic: 0 Time: 0.027648 [04/18/2022-02:36:55] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:55] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,128,64,1) -> Float(2048,256,128,128:32,64,1) *************** [04/18/2022-02:36:55] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1158:0 copy (Reformat) [04/18/2022-02:36:56] [V] [TRT] Tactic: 1002 Time: 0.396416 [04/18/2022-02:36:56] [V] [TRT] Tactic: 0 Time: 0.054016 
[04/18/2022-02:36:56] [V] [TRT] Fastest Tactic: 0 Time: 0.054016 [04/18/2022-02:36:56] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:56] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,1,512,8) -> Float(16384,2048,1024,128,64,1) *************** [04/18/2022-02:36:56] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1158:0 copy (Reformat) [04/18/2022-02:36:56] [V] [TRT] Tactic: 1002 Time: 0.385664 [04/18/2022-02:36:56] [V] [TRT] Tactic: 0 Time: 0.027392 [04/18/2022-02:36:56] [V] [TRT] Fastest Tactic: 0 Time: 0.027392 [04/18/2022-02:36:56] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:56] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,1,512,8) -> Float(16384,2048,1024,1,512,8) *************** [04/18/2022-02:36:56] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1158:0 copy (Reformat) [04/18/2022-02:36:56] [V] [TRT] Tactic: 1002 Time: 3.13677 [04/18/2022-02:36:56] [V] [TRT] Tactic: 0 Time: 0.026368 [04/18/2022-02:36:56] [V] [TRT] Fastest Tactic: 0 Time: 0.026368 [04/18/2022-02:36:56] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:56] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,1,512,8) -> Float(4096,512,256,1:4,128,2) *************** [04/18/2022-02:36:56] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1158:0 copy (Reformat) [04/18/2022-02:36:56] [V] [TRT] Tactic: 1002 Time: 3.7065 [04/18/2022-02:36:56] [V] [TRT] Tactic: 0 Time: 0.028672 
[04/18/2022-02:36:56] [V] [TRT] Fastest Tactic: 0 Time: 0.028672 [04/18/2022-02:36:56] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:56] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,1,512,8) -> Float(2048,256,128,128:32,64,1) *************** [04/18/2022-02:36:56] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1158:0 copy (Reformat) [04/18/2022-02:36:56] [V] [TRT] Tactic: 1002 Time: 3.136 [04/18/2022-02:36:56] [V] [TRT] Tactic: 0 Time: 0.057728 [04/18/2022-02:36:56] [V] [TRT] Fastest Tactic: 0 Time: 0.057728 [04/18/2022-02:36:56] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:56] [V] [TRT] *************** Autotuning Reformat: Float(2048,256,256,1:4,128,2) -> Float(16384,2048,1024,128,64,1) *************** [04/18/2022-02:36:56] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1158:0 copy (Reformat) [04/18/2022-02:36:56] [V] [TRT] Tactic: 1002 Time: 0.393472 [04/18/2022-02:36:56] [V] [TRT] Tactic: 0 Time: 0.03008 [04/18/2022-02:36:56] [V] [TRT] Fastest Tactic: 0 Time: 0.03008 [04/18/2022-02:36:56] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:56] [V] [TRT] *************** Autotuning Reformat: Float(2048,256,256,1:4,128,2) -> Float(16384,2048,1024,1,512,8) *************** [04/18/2022-02:36:56] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1158:0 copy (Reformat) [04/18/2022-02:36:56] [V] [TRT] Tactic: 1002 Time: 3.14342 [04/18/2022-02:36:56] [V] [TRT] Tactic: 0 Time: 0.029184 
[04/18/2022-02:36:56] [V] [TRT] Fastest Tactic: 0 Time: 0.029184 [04/18/2022-02:36:56] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:56] [V] [TRT] *************** Autotuning Reformat: Float(2048,256,256,1:4,128,2) -> Float(4096,512,256,1:4,128,2) *************** [04/18/2022-02:36:56] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1158:0 copy (Reformat) [04/18/2022-02:36:56] [V] [TRT] Tactic: 1002 Time: 3.14074 [04/18/2022-02:36:56] [V] [TRT] Tactic: 0 Time: 0.031744 [04/18/2022-02:36:56] [V] [TRT] Fastest Tactic: 0 Time: 0.031744 [04/18/2022-02:36:56] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:56] [V] [TRT] *************** Autotuning Reformat: Float(2048,256,256,1:4,128,2) -> Float(2048,256,128,128:32,64,1) *************** [04/18/2022-02:36:56] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1158:0 copy (Reformat) [04/18/2022-02:36:56] [V] [TRT] Tactic: 1002 Time: 3.71238 [04/18/2022-02:36:56] [V] [TRT] Tactic: 0 Time: 0.057728 [04/18/2022-02:36:56] [V] [TRT] Fastest Tactic: 0 Time: 0.057728 [04/18/2022-02:36:56] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:56] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,128,128:32,64,1) -> Float(16384,2048,1024,128,64,1) *************** [04/18/2022-02:36:56] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1158:0 copy (Reformat) [04/18/2022-02:36:56] [V] [TRT] Tactic: 1002 Time: 0.39232 [04/18/2022-02:36:56] [V] [TRT] Tactic: 0 Time: 0.032384 
[04/18/2022-02:36:56] [V] [TRT] Fastest Tactic: 0 Time: 0.032384 [04/18/2022-02:36:56] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:56] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,128,128:32,64,1) -> Float(16384,2048,1024,1,512,8) *************** [04/18/2022-02:36:56] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1158:0 copy (Reformat) [04/18/2022-02:36:56] [V] [TRT] Tactic: 1002 Time: 3.1424 [04/18/2022-02:36:56] [V] [TRT] Tactic: 0 Time: 0.029312 [04/18/2022-02:36:56] [V] [TRT] Fastest Tactic: 0 Time: 0.029312 [04/18/2022-02:36:56] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:56] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,128,128:32,64,1) -> Float(4096,512,256,1:4,128,2) *************** [04/18/2022-02:36:56] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1158:0 copy (Reformat) [04/18/2022-02:36:56] [V] [TRT] Tactic: 1002 Time: 3.14163 [04/18/2022-02:36:56] [V] [TRT] Tactic: 0 Time: 0.031488 [04/18/2022-02:36:56] [V] [TRT] Fastest Tactic: 0 Time: 0.031488 [04/18/2022-02:36:56] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:56] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,128,128:32,64,1) -> Float(2048,256,128,128:32,64,1) *************** [04/18/2022-02:36:56] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1158:0 copy (Reformat) [04/18/2022-02:36:56] [V] [TRT] Tactic: 1002 Time: 3.14304 [04/18/2022-02:36:56] [V] [TRT] Tactic: 0 Time: 0.0672 
[04/18/2022-02:36:56] [V] [TRT] Fastest Tactic: 0 Time: 0.0672 [04/18/2022-02:36:56] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:56] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:56] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,128,64,1) -> Float(16384,2048,1024,128,64,1) *************** [04/18/2022-02:36:56] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1158:0 copy (Reformat) [04/18/2022-02:36:56] [V] [TRT] Tactic: 1002 Time: 0.365184 [04/18/2022-02:36:56] [V] [TRT] Tactic: 0 Time: 0.027776 [04/18/2022-02:36:56] [V] [TRT] Fastest Tactic: 0 Time: 0.027776 [04/18/2022-02:36:56] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:56] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,128,64,1) -> Float(16384,2048,1024,1,512,8) *************** [04/18/2022-02:36:56] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1158:0 copy (Reformat) [04/18/2022-02:36:56] [V] [TRT] Tactic: 1002 Time: 0.393088 [04/18/2022-02:36:56] [V] [TRT] Tactic: 0 Time: 0.027136 [04/18/2022-02:36:56] [V] [TRT] Fastest Tactic: 0 Time: 0.027136 [04/18/2022-02:36:56] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:56] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,128,64,1) -> Float(4096,512,256,1:4,128,2) *************** [04/18/2022-02:36:56] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1158:0 copy (Reformat) [04/18/2022-02:36:56] [V] [TRT] Tactic: 1002 
Time: 0.396928 [04/18/2022-02:36:56] [V] [TRT] Tactic: 0 Time: 0.029184 [04/18/2022-02:36:56] [V] [TRT] Fastest Tactic: 0 Time: 0.029184 [04/18/2022-02:36:56] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:56] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,128,64,1) -> Float(2048,256,128,128:32,64,1) *************** [04/18/2022-02:36:56] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1158:0 copy (Reformat) [04/18/2022-02:36:57] [V] [TRT] Tactic: 1002 Time: 3.28461 [04/18/2022-02:36:57] [V] [TRT] Tactic: 0 Time: 0.155904 [04/18/2022-02:36:57] [V] [TRT] Fastest Tactic: 0 Time: 0.155904 [04/18/2022-02:36:57] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:57] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,1,512,8) -> Float(16384,2048,1024,128,64,1) *************** [04/18/2022-02:36:57] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1158:0 copy (Reformat) [04/18/2022-02:36:57] [V] [TRT] Tactic: 1002 Time: 0.388864 [04/18/2022-02:36:57] [V] [TRT] Tactic: 0 Time: 0.02944 [04/18/2022-02:36:57] [V] [TRT] Fastest Tactic: 0 Time: 0.02944 [04/18/2022-02:36:57] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:57] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,1,512,8) -> Float(16384,2048,1024,1,512,8) *************** [04/18/2022-02:36:57] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1158:0 copy (Reformat) [04/18/2022-02:36:57] [V] [TRT] Tactic: 1002 Time: 
3.13779 [04/18/2022-02:36:57] [V] [TRT] Tactic: 0 Time: 0.02816 [04/18/2022-02:36:57] [V] [TRT] Fastest Tactic: 0 Time: 0.02816 [04/18/2022-02:36:57] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:57] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,1,512,8) -> Float(4096,512,256,1:4,128,2) *************** [04/18/2022-02:36:57] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1158:0 copy (Reformat) [04/18/2022-02:36:57] [V] [TRT] Tactic: 1002 Time: 3.13587 [04/18/2022-02:36:57] [V] [TRT] Tactic: 0 Time: 0.029184 [04/18/2022-02:36:57] [V] [TRT] Fastest Tactic: 0 Time: 0.029184 [04/18/2022-02:36:57] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:57] [V] [TRT] *************** Autotuning Reformat: Float(8192,1024,1024,1,512,8) -> Float(2048,256,128,128:32,64,1) *************** [04/18/2022-02:36:57] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1158:0 copy (Reformat) [04/18/2022-02:36:57] [V] [TRT] Tactic: 1002 Time: 3.70125 [04/18/2022-02:36:57] [V] [TRT] Tactic: 0 Time: 0.158592 [04/18/2022-02:36:57] [V] [TRT] Fastest Tactic: 0 Time: 0.158592 [04/18/2022-02:36:57] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:57] [V] [TRT] *************** Autotuning Reformat: Float(2048,256,256,1:4,128,2) -> Float(16384,2048,1024,128,64,1) *************** [04/18/2022-02:36:57] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1158:0 copy (Reformat) [04/18/2022-02:36:57] [V] [TRT] Tactic: 1002 Time: 0.393088 
[04/18/2022-02:36:57] [V] [TRT] Tactic: 0 Time: 0.029952 [04/18/2022-02:36:57] [V] [TRT] Fastest Tactic: 0 Time: 0.029952 [04/18/2022-02:36:57] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:57] [V] [TRT] *************** Autotuning Reformat: Float(2048,256,256,1:4,128,2) -> Float(16384,2048,1024,1,512,8) *************** [04/18/2022-02:36:57] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1158:0 copy (Reformat) [04/18/2022-02:36:57] [V] [TRT] Tactic: 1002 Time: 3.14214 [04/18/2022-02:36:57] [V] [TRT] Tactic: 0 Time: 0.029696 [04/18/2022-02:36:57] [V] [TRT] Fastest Tactic: 0 Time: 0.029696 [04/18/2022-02:36:57] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:57] [V] [TRT] *************** Autotuning Reformat: Float(2048,256,256,1:4,128,2) -> Float(4096,512,256,1:4,128,2) *************** [04/18/2022-02:36:57] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1158:0 copy (Reformat) [04/18/2022-02:36:57] [V] [TRT] Tactic: 1002 Time: 3.14176 [04/18/2022-02:36:57] [V] [TRT] Tactic: 0 Time: 0.031744 [04/18/2022-02:36:57] [V] [TRT] Fastest Tactic: 0 Time: 0.031744 [04/18/2022-02:36:57] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:57] [V] [TRT] *************** Autotuning Reformat: Float(2048,256,256,1:4,128,2) -> Float(2048,256,128,128:32,64,1) *************** [04/18/2022-02:36:57] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1158:0 copy (Reformat) [04/18/2022-02:36:57] [V] [TRT] Tactic: 1002 Time: 3.14266 
[04/18/2022-02:36:57] [V] [TRT] Tactic: 0 Time: 0.158592 [04/18/2022-02:36:57] [V] [TRT] Fastest Tactic: 0 Time: 0.158592 [04/18/2022-02:36:57] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:57] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,128,128:32,64,1) -> Float(16384,2048,1024,128,64,1) *************** [04/18/2022-02:36:57] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1158:0 copy (Reformat) [04/18/2022-02:36:57] [V] [TRT] Tactic: 1002 Time: 0.392704 [04/18/2022-02:36:57] [V] [TRT] Tactic: 0 Time: 0.032256 [04/18/2022-02:36:57] [V] [TRT] Fastest Tactic: 0 Time: 0.032256 [04/18/2022-02:36:57] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:57] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,128,128:32,64,1) -> Float(16384,2048,1024,1,512,8) *************** [04/18/2022-02:36:57] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1158:0 copy (Reformat) [04/18/2022-02:36:57] [V] [TRT] Tactic: 1002 Time: 3.14202 [04/18/2022-02:36:57] [V] [TRT] Tactic: 0 Time: 0.029184 [04/18/2022-02:36:57] [V] [TRT] Fastest Tactic: 0 Time: 0.029184 [04/18/2022-02:36:57] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:57] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,128,128:32,64,1) -> Float(4096,512,256,1:4,128,2) *************** [04/18/2022-02:36:57] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1158:0 copy (Reformat) [04/18/2022-02:36:57] [V] [TRT] Tactic: 1002 Time: 3.14125 
[04/18/2022-02:36:57] [V] [TRT] Tactic: 0 Time: 0.031872
[04/18/2022-02:36:57] [V] [TRT] Fastest Tactic: 0 Time: 0.031872
[04/18/2022-02:36:57] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:57] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,128,128:32,64,1) -> Float(2048,256,128,128:32,64,1) ***************
[04/18/2022-02:36:57] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/input_3_dn_lvl_6/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1158:0 copy (Reformat)
[04/18/2022-02:36:57] [V] [TRT] Tactic: 1002 Time: 3.7216
[04/18/2022-02:36:57] [V] [TRT] Tactic: 0 Time: 0.164608
[04/18/2022-02:36:57] [V] [TRT] Fastest Tactic: 0 Time: 0.164608
[04/18/2022-02:36:57] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:57] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:57] [V] [TRT] *************** Autotuning Reformat: Float(16384,2048,1024,128,64,1) -> Float(16384,2048,1024,1,512,8) ***************
[04/18/2022-02:36:57] [V] [TRT] *************** Autotuning Reformat: Float(16384,2048,1024,128,64,1) -> Float(4096,512,256,1:4,128,2) ***************
[04/18/2022-02:36:57] [V] [TRT] *************** Autotuning Reformat: Float(16384,2048,1024,128,64,1) -> Float(2048,256,128,128:32,64,1) ***************
[04/18/2022-02:36:57] [V] [TRT] *************** Autotuning Reformat: Float(16384,2048,1024,1,512,8) -> Float(16384,2048,1024,128,64,1) ***************
[04/18/2022-02:36:57] [V] [TRT] *************** Autotuning Reformat: Float(16384,2048,1024,1,512,8) -> Float(4096,512,256,1:4,128,2) ***************
[04/18/2022-02:36:57] [V] [TRT] *************** Autotuning Reformat: Float(16384,2048,1024,1,512,8) -> Float(2048,256,128,128:32,64,1) ***************
[04/18/2022-02:36:57] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,256,1:4,128,2) -> Float(16384,2048,1024,128,64,1) ***************
[04/18/2022-02:36:57] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,256,1:4,128,2) -> Float(16384,2048,1024,1,512,8) ***************
[04/18/2022-02:36:57] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,256,1:4,128,2) -> Float(2048,256,128,128:32,64,1) ***************
[04/18/2022-02:36:57] [V] [TRT] *************** Autotuning Reformat: Float(2048,256,128,128:32,64,1) -> Float(16384,2048,1024,128,64,1) ***************
[04/18/2022-02:36:57] [V] [TRT] *************** Autotuning Reformat: Float(2048,256,128,128:32,64,1) -> Float(16384,2048,1024,1,512,8) ***************
[04/18/2022-02:36:57] [V] [TRT] *************** Autotuning Reformat: Float(2048,256,128,128:32,64,1) -> Float(4096,512,256,1:4,128,2) ***************
[04/18/2022-02:36:57] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:57] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(32768,2048,128,2,1) ***************
[04/18/2022-02:36:57] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/stack_Unsqueeze__1161:0 copy (Reformat)
[04/18/2022-02:36:57] [V] [TRT] Tactic: 1002 Time: 0.745856
[04/18/2022-02:36:57] [V] [TRT] Tactic: 0 Time: 0.032768
[04/18/2022-02:36:57] [V] [TRT] Fastest Tactic: 0 Time: 0.032768
[04/18/2022-02:36:57] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:57] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,16,16) -> Float(32768,2048,128,2,1) ***************
[04/18/2022-02:36:57] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/stack_Unsqueeze__1161:0 copy (Reformat)
[04/18/2022-02:36:57] [V] [TRT] Tactic: 1002 Time: 0.360704
[04/18/2022-02:36:57] [V] [TRT] Tactic: 0 Time: 0.034176
[04/18/2022-02:36:57] [V] [TRT] Fastest Tactic: 0 Time: 0.034176
[04/18/2022-02:36:57] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:57] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,4,4) -> Float(32768,2048,128,2,1) ***************
[04/18/2022-02:36:57] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/stack_Unsqueeze__1161:0 copy (Reformat)
[04/18/2022-02:36:57] [V] [TRT] Tactic: 1002 Time: 0.360704
[04/18/2022-02:36:57] [V] [TRT] Tactic: 0 Time: 0.035968
[04/18/2022-02:36:57] [V] [TRT] Fastest Tactic: 0 Time: 0.035968
[04/18/2022-02:36:57] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:57] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,1,1) -> Float(32768,2048,128,2,1) ***************
[04/18/2022-02:36:57] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/stack_Unsqueeze__1161:0 copy (Reformat)
[04/18/2022-02:36:57] [V] [TRT] Tactic: 1002 Time: 0.362112
[04/18/2022-02:36:57] [V] [TRT] Tactic: 0 Time: 0.039936
[04/18/2022-02:36:57] [V] [TRT] Fastest Tactic: 0 Time: 0.039936
[04/18/2022-02:36:57] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:57] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:57] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(32768,2048,128,2,1) ***************
[04/18/2022-02:36:57] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/stack_Unsqueeze__1162:0 copy (Reformat)
[04/18/2022-02:36:57] [V] [TRT] Tactic: 1002 Time: 0.742272
[04/18/2022-02:36:58] [V] [TRT] Tactic: 0 Time: 0.032256
[04/18/2022-02:36:58] [V] [TRT] Fastest Tactic: 0 Time: 0.032256
[04/18/2022-02:36:58] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,16,16) -> Float(32768,2048,128,2,1) ***************
[04/18/2022-02:36:58] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/stack_Unsqueeze__1162:0 copy (Reformat)
[04/18/2022-02:36:58] [V] [TRT] Tactic: 1002 Time: 0.36096
[04/18/2022-02:36:58] [V] [TRT] Tactic: 0 Time: 0.03392
[04/18/2022-02:36:58] [V] [TRT] Fastest Tactic: 0 Time: 0.03392
[04/18/2022-02:36:58] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,4,4) -> Float(32768,2048,128,2,1) ***************
[04/18/2022-02:36:58] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/stack_Unsqueeze__1162:0 copy (Reformat)
[04/18/2022-02:36:58] [V] [TRT] Tactic: 1002 Time: 0.361216
[04/18/2022-02:36:58] [V] [TRT] Tactic: 0 Time: 0.035968
[04/18/2022-02:36:58] [V] [TRT] Fastest Tactic: 0 Time: 0.035968
[04/18/2022-02:36:58] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,1,1) -> Float(32768,2048,128,2,1) ***************
[04/18/2022-02:36:58] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/stack_Unsqueeze__1162:0 copy (Reformat)
[04/18/2022-02:36:58] [V] [TRT] Tactic: 1002 Time: 0.361216
[04/18/2022-02:36:58] [V] [TRT] Tactic: 0 Time: 0.040064
[04/18/2022-02:36:58] [V] [TRT] Fastest Tactic: 0 Time: 0.040064
[04/18/2022-02:36:58] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:58] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:58] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:58] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(16384,1024,1,16,16) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(4096,256,1:4,4,4) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(1024,64,64:32,1,1) ***************
[04/18/2022-02:36:58] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(16384,1024,1,16,16) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(4096,256,1:4,4,4) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(1024,64,64:32,1,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,16,16) -> Float(16384,1024,64,1,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,16,16) -> Float(4096,256,1:4,4,4) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,16,16) -> Float(1024,64,64:32,1,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,4,4) -> Float(16384,1024,64,1,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,4,4) -> Float(16384,1024,1,16,16) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,4,4) -> Float(1024,64,64:32,1,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,1,1) -> Float(16384,1024,64,1,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,1,1) -> Float(16384,1024,1,16,16) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,1,1) -> Float(4096,256,1:4,4,4) ***************
[04/18/2022-02:36:58] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(16384,1,1024,16) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(4096,1:4,256,4) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(1024,1024:32,64,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(1:4,2048,128,2) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(16384,1024,64,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(4096,1:4,256,4) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(1024,1024:32,64,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(1:4,2048,128,2) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(16384,1024,64,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(16384,1,1024,16) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(1024,1024:32,64,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(1:4,2048,128,2) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(16384,1024,64,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(16384,1,1024,16) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(4096,1:4,256,4) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(1:4,2048,128,2) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,128,2) -> Float(16384,1024,64,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,128,2) -> Float(16384,1,1024,16) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,128,2) -> Float(4096,1:4,256,4) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,128,2) -> Float(1024,1024:32,64,1) ***************
[04/18/2022-02:36:58] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(16384,1,1024,16) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(4096,1:4,256,4) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(1024,1024:32,64,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(16384,1024,64,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(4096,1:4,256,4) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(1024,1024:32,64,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(16384,1024,64,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(16384,1,1024,16) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(1024,1024:32,64,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(16384,1024,64,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(16384,1,1024,16) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(4096,1:4,256,4) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,128,2) -> Float(16384,1024,64,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,128,2) -> Float(16384,1,1024,16) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,128,2) -> Float(4096,1:4,256,4) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,128,2) -> Float(1024,1024:32,64,1) ***************
[04/18/2022-02:36:58] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(16384,256,16,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,256,16,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(16384,256,16,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:36:58] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(16384,256,16,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,256,16,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:36:58] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(512,256:32,16,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(16384,256,16,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(512,256:32,16,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,256,16,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(512,256:32,16,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(16384,256,16,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:36:58] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(16384,1,1024,16) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(4096,1:4,256,4) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(1024,1024:32,64,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(16384,1024,64,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(4096,1:4,256,4) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(1024,1024:32,64,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(16384,1024,64,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(16384,1,1024,16) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(1024,1024:32,64,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(16384,1024,64,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(16384,1,1024,16) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(4096,1:4,256,4) ***************
[04/18/2022-02:36:58] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(16384,1,1024,16) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(4096,1:4,256,4) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(1024,1024:32,64,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(16384,1024,64,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(4096,1:4,256,4) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(1024,1024:32,64,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(16384,1024,64,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(16384,1,1024,16) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(1024,1024:32,64,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(16384,1024,64,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(16384,1,1024,16) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(4096,1:4,256,4) ***************
[04/18/2022-02:36:58] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(16384,1,1024,16) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(4096,1:4,256,4) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(1024,1024:32,64,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(16384,1024,64,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(4096,1:4,256,4) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(1024,1024:32,64,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(16384,1024,64,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(16384,1,1024,16) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(1024,1024:32,64,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(16384,1024,64,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(16384,1,1024,16) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(4096,1:4,256,4) ***************
[04/18/2022-02:36:58] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,64,1) -> Float(16384,1024,1,1024,16) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,64,1) -> Float(4096,256,1:4,256,4) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,64,1) -> Float(1024,64,64:32,64,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,1024,16) -> Float(16384,1024,64,64,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,1024,16) -> Float(4096,256,1:4,256,4) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,1024,16) -> Float(1024,64,64:32,64,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,256,4) -> Float(16384,1024,64,64,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,256,4) -> Float(16384,1024,1,1024,16) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,256,4) -> Float(1024,64,64:32,64,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,64,1) -> Float(16384,1024,64,64,1) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,64,1) -> Float(16384,1024,1,1024,16) ***************
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,64,1) -> Float(4096,256,1:4,256,4) ***************
[04/18/2022-02:36:58] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,64,1) -> Float(32768,2048,128,64,1) ***************
[04/18/2022-02:36:58] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1171:0 copy (Reformat)
[04/18/2022-02:36:58] [V] [TRT] Tactic: 1002 Time: 0.733824
[04/18/2022-02:36:58] [V] [TRT] Tactic: 0 Time: 0.031744
[04/18/2022-02:36:58] [V] [TRT] Fastest Tactic: 0 Time: 0.031744
[04/18/2022-02:36:58] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,64,1) -> Float(32768,2048,1,1024,16) ***************
[04/18/2022-02:36:58] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1171:0 copy (Reformat)
[04/18/2022-02:36:58] [V] [TRT] Tactic: 1002 Time: 0.040832
[04/18/2022-02:36:58] [V] [TRT] Tactic: 0 Time: 0.032896
[04/18/2022-02:36:58] [V] [TRT] Fastest Tactic: 0 Time: 0.032896
[04/18/2022-02:36:58] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,64,1) -> Float(8192,512,1:4,256,4) ***************
[04/18/2022-02:36:58] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1171:0 copy (Reformat)
[04/18/2022-02:36:58] [V] [TRT] Tactic: 1002 Time: 0.04032
[04/18/2022-02:36:58] [V] [TRT] Tactic: 0 Time: 0.035584
[04/18/2022-02:36:58] [V] [TRT] Fastest Tactic: 0 Time: 0.035584
[04/18/2022-02:36:58] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,64,1) -> Float(2048,128,128:32,64,1) ***************
[04/18/2022-02:36:58] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1171:0 copy (Reformat)
[04/18/2022-02:36:58] [V] [TRT] Tactic: 1002 Time: 0.041856
[04/18/2022-02:36:58] [V] [TRT] Tactic: 0 Time: 0.09152
[04/18/2022-02:36:58] [V] [TRT] Fastest Tactic: 1002 Time: 0.041856
[04/18/2022-02:36:58] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,1024,16) -> Float(32768,2048,128,64,1) ***************
[04/18/2022-02:36:58] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1171:0 copy (Reformat)
[04/18/2022-02:36:58] [V] [TRT] Tactic: 1002 Time: 0.095872
[04/18/2022-02:36:58] [V] [TRT] Tactic: 0 Time: 0.031872
[04/18/2022-02:36:58] [V] [TRT] Fastest Tactic: 0 Time: 0.031872
[04/18/2022-02:36:58] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,1024,16) -> Float(32768,2048,1,1024,16) ***************
[04/18/2022-02:36:58] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1171:0 copy (Reformat)
[04/18/2022-02:36:58] [V] [TRT] Tactic: 1002 Time: 0.03328
[04/18/2022-02:36:58] [V] [TRT] Tactic: 0 Time: 0.033408
[04/18/2022-02:36:58] [V] [TRT] Fastest Tactic: 1002 Time: 0.03328
[04/18/2022-02:36:58] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,1024,16) -> Float(8192,512,1:4,256,4) ***************
[04/18/2022-02:36:58] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1171:0 copy (Reformat)
[04/18/2022-02:36:58] [V] [TRT] Tactic: 1002 Time: 0.093056
[04/18/2022-02:36:58] [V] [TRT] Tactic: 0 Time: 0.035328
[04/18/2022-02:36:58] [V] [TRT] Fastest Tactic: 0 Time: 0.035328
[04/18/2022-02:36:58] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,1024,16) -> Float(2048,128,128:32,64,1) ***************
[04/18/2022-02:36:58] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1171:0 copy (Reformat)
[04/18/2022-02:36:58] [V] [TRT] Tactic: 1002 Time: 0.095232
[04/18/2022-02:36:58] [V] [TRT] Tactic: 0 Time: 0.097664
[04/18/2022-02:36:58] [V] [TRT] Fastest Tactic: 1002 Time: 0.095232
[04/18/2022-02:36:58] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,256,4) -> Float(32768,2048,128,64,1) ***************
[04/18/2022-02:36:58] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1171:0 copy (Reformat)
[04/18/2022-02:36:58] [V] [TRT] Tactic: 1002 Time: 0.095104
[04/18/2022-02:36:58] [V] [TRT] Tactic: 0 Time: 0.035328
[04/18/2022-02:36:58] [V] [TRT] Fastest Tactic: 0 Time: 0.035328
[04/18/2022-02:36:58] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,256,4) -> Float(32768,2048,1,1024,16) ***************
[04/18/2022-02:36:58] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1171:0 copy (Reformat)
[04/18/2022-02:36:58] [V] [TRT] Tactic: 1002 Time: 0.094208
[04/18/2022-02:36:58] [V] [TRT] Tactic: 0 Time: 0.035328
[04/18/2022-02:36:58] [V] [TRT] Fastest Tactic: 0 Time: 0.035328
[04/18/2022-02:36:58] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,256,4) -> Float(8192,512,1:4,256,4) ***************
[04/18/2022-02:36:58] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1171:0 copy (Reformat)
[04/18/2022-02:36:58] [V] [TRT] Tactic: 1002 Time: 0.093696
[04/18/2022-02:36:58] [V] [TRT] Tactic: 0 Time: 0.039168
[04/18/2022-02:36:58] [V] [TRT] Fastest Tactic: 0 Time: 0.039168
[04/18/2022-02:36:58] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,256,4) -> Float(2048,128,128:32,64,1) ***************
[04/18/2022-02:36:58] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1171:0 copy (Reformat)
[04/18/2022-02:36:58] [V] [TRT] Tactic: 1002 Time: 0.091776
[04/18/2022-02:36:58] [V] [TRT] Tactic: 0 Time: 0.096384
[04/18/2022-02:36:58] [V] [TRT] Fastest Tactic: 1002 Time: 0.091776
[04/18/2022-02:36:58] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,64,1) -> Float(32768,2048,128,64,1) ***************
[04/18/2022-02:36:58] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1171:0 copy (Reformat)
[04/18/2022-02:36:58] [V] [TRT] Tactic: 1002 Time: 0.095488
[04/18/2022-02:36:58] [V] [TRT] Tactic: 0 Time: 0.038272
[04/18/2022-02:36:58] [V] [TRT] Fastest Tactic: 0 Time: 0.038272
[04/18/2022-02:36:58] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,64,1) -> Float(32768,2048,1,1024,16) ***************
[04/18/2022-02:36:58] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1171:0 copy (Reformat)
[04/18/2022-02:36:58] [V] [TRT] Tactic: 1002 Time: 0.093696
[04/18/2022-02:36:58] [V] [TRT] Tactic: 0 Time: 0.03584
[04/18/2022-02:36:58] [V] [TRT] Fastest Tactic: 0 Time: 0.03584
[04/18/2022-02:36:58] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,64,1) -> Float(8192,512,1:4,256,4) ***************
[04/18/2022-02:36:58] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1171:0 copy (Reformat)
[04/18/2022-02:36:58] [V] [TRT] Tactic: 1002 Time: 0.092544
[04/18/2022-02:36:58] [V] [TRT] Tactic: 0 Time: 0.039296
[04/18/2022-02:36:58] [V] [TRT] Fastest Tactic: 0 Time: 0.039296
[04/18/2022-02:36:58] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,64,1) -> Float(2048,128,128:32,64,1) ***************
[04/18/2022-02:36:58] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1171:0 copy (Reformat)
[04/18/2022-02:36:58] [V] [TRT] Tactic: 1002 Time: 0.09344
[04/18/2022-02:36:58] [V] [TRT] Tactic: 0 Time: 0.105472
[04/18/2022-02:36:58] [V] [TRT] Fastest Tactic: 1002 Time: 0.09344
[04/18/2022-02:36:58] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002
[04/18/2022-02:36:58] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,64,1) -> Float(32768,2048,128,64,1) ***************
[04/18/2022-02:36:58] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1171:0 copy (Reformat)
[04/18/2022-02:36:58] [V] [TRT] Tactic: 1002 Time: 1.3728
[04/18/2022-02:36:58] [V] [TRT] Tactic: 0 Time: 0.031744
[04/18/2022-02:36:58] [V] [TRT] Fastest Tactic: 0 Time: 0.031744
[04/18/2022-02:36:58] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,64,1) -> Float(32768,2048,1,1024,16) ***************
[04/18/2022-02:36:58] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1171:0 copy (Reformat)
[04/18/2022-02:36:58] [V] [TRT] Tactic: 1002 Time: 0.040448
[04/18/2022-02:36:58] [V] [TRT] Tactic: 0 Time: 0.033152
[04/18/2022-02:36:58] [V] [TRT] Fastest Tactic: 0 Time: 0.033152
[04/18/2022-02:36:58] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,64,1) -> Float(8192,512,1:4,256,4) ***************
[04/18/2022-02:36:58] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1171:0 copy (Reformat)
[04/18/2022-02:36:58] [V] [TRT] Tactic: 1002 Time: 0.04096
[04/18/2022-02:36:58] [V] [TRT] Tactic: 0 Time: 0.036224
[04/18/2022-02:36:58] [V] [TRT] Fastest Tactic: 0 Time: 0.036224
[04/18/2022-02:36:58] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,64,1) -> Float(2048,128,128:32,64,1) ***************
[04/18/2022-02:36:58] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1171:0 copy (Reformat)
[04/18/2022-02:36:58] [V] [TRT] Tactic: 1002 Time: 0.09536
[04/18/2022-02:36:58] [V] [TRT] Tactic: 0 Time: 0.155776
[04/18/2022-02:36:58] [V] [TRT] Fastest Tactic: 1002 Time: 0.09536
[04/18/2022-02:36:58] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,1024,16) -> Float(32768,2048,128,64,1) ***************
[04/18/2022-02:36:58] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1171:0 copy (Reformat)
[04/18/2022-02:36:58] [V] [TRT] Tactic: 1002 Time: 0.097024
[04/18/2022-02:36:58] [V] [TRT] Tactic: 0 Time: 0.03264
[04/18/2022-02:36:58] [V] [TRT] Fastest Tactic: 0 Time: 0.03264
[04/18/2022-02:36:58] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,1024,16) -> Float(32768,2048,1,1024,16) ***************
[04/18/2022-02:36:58] [V] [TRT] --------------- Timing
Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1171:0 copy (Reformat) [04/18/2022-02:36:58] [V] [TRT] Tactic: 1002 Time: 0.03392 [04/18/2022-02:36:58] [V] [TRT] Tactic: 0 Time: 0.032768 [04/18/2022-02:36:58] [V] [TRT] Fastest Tactic: 0 Time: 0.032768 [04/18/2022-02:36:58] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,1024,16) -> Float(8192,512,1:4,256,4) *************** [04/18/2022-02:36:58] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1171:0 copy (Reformat) [04/18/2022-02:36:58] [V] [TRT] Tactic: 1002 Time: 0.093568 [04/18/2022-02:36:58] [V] [TRT] Tactic: 0 Time: 0.035456 [04/18/2022-02:36:58] [V] [TRT] Fastest Tactic: 0 Time: 0.035456 [04/18/2022-02:36:58] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,1024,16) -> Float(2048,128,128:32,64,1) *************** [04/18/2022-02:36:58] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1171:0 copy (Reformat) [04/18/2022-02:36:58] [V] [TRT] Tactic: 1002 Time: 0.09408 [04/18/2022-02:36:58] [V] [TRT] Tactic: 0 Time: 0.15936 [04/18/2022-02:36:58] [V] [TRT] Fastest Tactic: 1002 Time: 0.09408 [04/18/2022-02:36:58] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002 [04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,256,4) -> Float(32768,2048,128,64,1) *************** [04/18/2022-02:36:58] [V] [TRT] --------------- Timing Runner: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1171:0 copy (Reformat) [04/18/2022-02:36:58] [V] [TRT] Tactic: 1002 Time: 0.093952 [04/18/2022-02:36:58] [V] [TRT] Tactic: 0 Time: 0.0352 [04/18/2022-02:36:58] [V] [TRT] Fastest Tactic: 0 Time: 0.0352 [04/18/2022-02:36:58] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,256,4) -> Float(32768,2048,1,1024,16) *************** [04/18/2022-02:36:58] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1171:0 copy (Reformat) [04/18/2022-02:36:58] [V] [TRT] Tactic: 1002 Time: 0.09344 [04/18/2022-02:36:58] [V] [TRT] Tactic: 0 Time: 0.036096 [04/18/2022-02:36:58] [V] [TRT] Fastest Tactic: 0 Time: 0.036096 [04/18/2022-02:36:58] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,256,4) -> Float(8192,512,1:4,256,4) *************** [04/18/2022-02:36:58] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1171:0 copy (Reformat) [04/18/2022-02:36:58] [V] [TRT] Tactic: 1002 Time: 0.093696 [04/18/2022-02:36:58] [V] [TRT] Tactic: 0 Time: 0.039296 [04/18/2022-02:36:58] [V] [TRT] Fastest Tactic: 0 Time: 0.039296 [04/18/2022-02:36:58] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,256,4) -> Float(2048,128,128:32,64,1) *************** [04/18/2022-02:36:58] [V] [TRT] --------------- Timing Runner: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1171:0 copy (Reformat) [04/18/2022-02:36:58] [V] [TRT] Tactic: 1002 Time: 0.093952 [04/18/2022-02:36:58] [V] [TRT] Tactic: 0 Time: 0.16128 [04/18/2022-02:36:58] [V] [TRT] Fastest Tactic: 1002 Time: 0.093952 [04/18/2022-02:36:58] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002 [04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,64,1) -> Float(32768,2048,128,64,1) *************** [04/18/2022-02:36:58] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1171:0 copy (Reformat) [04/18/2022-02:36:58] [V] [TRT] Tactic: 1002 Time: 0.095232 [04/18/2022-02:36:58] [V] [TRT] Tactic: 0 Time: 0.039296 [04/18/2022-02:36:58] [V] [TRT] Fastest Tactic: 0 Time: 0.039296 [04/18/2022-02:36:58] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,64,1) -> Float(32768,2048,1,1024,16) *************** [04/18/2022-02:36:58] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1171:0 copy (Reformat) [04/18/2022-02:36:58] [V] [TRT] Tactic: 1002 Time: 0.092416 [04/18/2022-02:36:58] [V] [TRT] Tactic: 0 Time: 0.03584 [04/18/2022-02:36:58] [V] [TRT] Fastest Tactic: 0 Time: 0.03584 [04/18/2022-02:36:58] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,64,1) -> Float(8192,512,1:4,256,4) *************** [04/18/2022-02:36:58] [V] [TRT] --------------- Timing Runner: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1171:0 copy (Reformat) [04/18/2022-02:36:58] [V] [TRT] Tactic: 1002 Time: 0.093568 [04/18/2022-02:36:58] [V] [TRT] Tactic: 0 Time: 0.039424 [04/18/2022-02:36:58] [V] [TRT] Fastest Tactic: 0 Time: 0.039424 [04/18/2022-02:36:58] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,64,1) -> Float(2048,128,128:32,64,1) *************** [04/18/2022-02:36:58] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1171:0 copy (Reformat) [04/18/2022-02:36:58] [V] [TRT] Tactic: 1002 Time: 0.093568 [04/18/2022-02:36:58] [V] [TRT] Tactic: 0 Time: 0.16832 [04/18/2022-02:36:58] [V] [TRT] Fastest Tactic: 1002 Time: 0.093568 [04/18/2022-02:36:58] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002 [04/18/2022-02:36:58] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,128,64,1) -> Float(32768,2048,1,1024,16) *************** [04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,128,64,1) -> Float(8192,512,1:4,256,4) *************** [04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,128,64,1) -> Float(2048,128,128:32,64,1) *************** [04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,1,1024,16) -> Float(32768,2048,128,64,1) *************** [04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,1,1024,16) -> Float(8192,512,1:4,256,4) *************** [04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,1,1024,16) -> 
Float(2048,128,128:32,64,1) *************** [04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(8192,512,1:4,256,4) -> Float(32768,2048,128,64,1) *************** [04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(8192,512,1:4,256,4) -> Float(32768,2048,1,1024,16) *************** [04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(8192,512,1:4,256,4) -> Float(2048,128,128:32,64,1) *************** [04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(2048,128,128:32,64,1) -> Float(32768,2048,128,64,1) *************** [04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(2048,128,128:32,64,1) -> Float(32768,2048,1,1024,16) *************** [04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(2048,128,128:32,64,1) -> Float(8192,512,1:4,256,4) *************** [04/18/2022-02:36:58] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,128,64,1) -> Float(32768,2048,2048,1,1024,16) *************** [04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,128,64,1) -> Float(8192,512,512,1:4,256,4) *************** [04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,128,64,1) -> Float(2048,128,128,128:32,64,1) *************** [04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,1,1024,16) -> Float(32768,2048,2048,128,64,1) *************** [04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,1,1024,16) -> Float(8192,512,512,1:4,256,4) *************** [04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,1,1024,16) -> Float(2048,128,128,128:32,64,1) *************** [04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(8192,512,512,1:4,256,4) -> 
Float(32768,2048,2048,128,64,1) *************** [04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(8192,512,512,1:4,256,4) -> Float(32768,2048,2048,1,1024,16) *************** [04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(8192,512,512,1:4,256,4) -> Float(2048,128,128,128:32,64,1) *************** [04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(2048,128,128,128:32,64,1) -> Float(32768,2048,2048,128,64,1) *************** [04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(2048,128,128,128:32,64,1) -> Float(32768,2048,2048,1,1024,16) *************** [04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(2048,128,128,128:32,64,1) -> Float(8192,512,512,1:4,256,4) *************** [04/18/2022-02:36:58] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,128,64,1) -> Float(65536,4096,2048,128,64,1) *************** [04/18/2022-02:36:58] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1174:0 copy (Reformat) [04/18/2022-02:36:58] [V] [TRT] Tactic: 1002 Time: 1.3591 [04/18/2022-02:36:58] [V] [TRT] Tactic: 0 Time: 0.053376 [04/18/2022-02:36:58] [V] [TRT] Fastest Tactic: 0 Time: 0.053376 [04/18/2022-02:36:58] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,128,64,1) -> Float(65536,4096,2048,1,1024,16) *************** [04/18/2022-02:36:58] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1174:0 copy (Reformat) [04/18/2022-02:36:58] [V] [TRT] Tactic: 1002 Time: 
0.757632 [04/18/2022-02:36:58] [V] [TRT] Tactic: 0 Time: 0.056576 [04/18/2022-02:36:58] [V] [TRT] Fastest Tactic: 0 Time: 0.056576 [04/18/2022-02:36:58] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,128,64,1) -> Float(16384,1024,512,1:4,256,4) *************** [04/18/2022-02:36:58] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1174:0 copy (Reformat) [04/18/2022-02:36:58] [V] [TRT] Tactic: 1002 Time: 0.766208 [04/18/2022-02:36:58] [V] [TRT] Tactic: 0 Time: 0.072704 [04/18/2022-02:36:58] [V] [TRT] Fastest Tactic: 0 Time: 0.072704 [04/18/2022-02:36:58] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,128,64,1) -> Float(4096,256,128,128:32,64,1) *************** [04/18/2022-02:36:58] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1174:0 copy (Reformat) [04/18/2022-02:36:58] [V] [TRT] Tactic: 1002 Time: 0.764032 [04/18/2022-02:36:58] [V] [TRT] Tactic: 0 Time: 0.158848 [04/18/2022-02:36:58] [V] [TRT] Fastest Tactic: 0 Time: 0.158848 [04/18/2022-02:36:58] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,1,1024,16) -> Float(65536,4096,2048,128,64,1) *************** [04/18/2022-02:36:58] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1174:0 copy (Reformat) [04/18/2022-02:36:58] [V] [TRT] Tactic: 1002 Time: 
0.747776 [04/18/2022-02:36:58] [V] [TRT] Tactic: 0 Time: 0.0544 [04/18/2022-02:36:58] [V] [TRT] Fastest Tactic: 0 Time: 0.0544 [04/18/2022-02:36:58] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:58] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,1,1024,16) -> Float(65536,4096,2048,1,1024,16) *************** [04/18/2022-02:36:58] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1174:0 copy (Reformat) [04/18/2022-02:36:59] [V] [TRT] Tactic: 1002 Time: 6.24192 [04/18/2022-02:36:59] [V] [TRT] Tactic: 0 Time: 0.05632 [04/18/2022-02:36:59] [V] [TRT] Fastest Tactic: 0 Time: 0.05632 [04/18/2022-02:36:59] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:59] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,1,1024,16) -> Float(16384,1024,512,1:4,256,4) *************** [04/18/2022-02:36:59] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1174:0 copy (Reformat) [04/18/2022-02:36:59] [V] [TRT] Tactic: 1002 Time: 6.2391 [04/18/2022-02:36:59] [V] [TRT] Tactic: 0 Time: 0.060672 [04/18/2022-02:36:59] [V] [TRT] Fastest Tactic: 0 Time: 0.060672 [04/18/2022-02:36:59] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:59] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,1,1024,16) -> Float(4096,256,128,128:32,64,1) *************** [04/18/2022-02:36:59] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1174:0 copy (Reformat) [04/18/2022-02:36:59] [V] [TRT] Tactic: 1002 Time: 
6.83456 [04/18/2022-02:36:59] [V] [TRT] Tactic: 0 Time: 0.174848 [04/18/2022-02:36:59] [V] [TRT] Fastest Tactic: 0 Time: 0.174848 [04/18/2022-02:36:59] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:59] [V] [TRT] *************** Autotuning Reformat: Float(8192,512,512,1:4,256,4) -> Float(65536,4096,2048,128,64,1) *************** [04/18/2022-02:36:59] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1174:0 copy (Reformat) [04/18/2022-02:36:59] [V] [TRT] Tactic: 1002 Time: 0.756608 [04/18/2022-02:36:59] [V] [TRT] Tactic: 0 Time: 0.060416 [04/18/2022-02:36:59] [V] [TRT] Fastest Tactic: 0 Time: 0.060416 [04/18/2022-02:36:59] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:59] [V] [TRT] *************** Autotuning Reformat: Float(8192,512,512,1:4,256,4) -> Float(65536,4096,2048,1,1024,16) *************** [04/18/2022-02:36:59] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1174:0 copy (Reformat) [04/18/2022-02:36:59] [V] [TRT] Tactic: 1002 Time: 6.84365 [04/18/2022-02:36:59] [V] [TRT] Tactic: 0 Time: 0.062592 [04/18/2022-02:36:59] [V] [TRT] Fastest Tactic: 0 Time: 0.062592 [04/18/2022-02:36:59] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:59] [V] [TRT] *************** Autotuning Reformat: Float(8192,512,512,1:4,256,4) -> Float(16384,1024,512,1:4,256,4) *************** [04/18/2022-02:36:59] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1174:0 copy (Reformat) [04/18/2022-02:36:59] [V] [TRT] Tactic: 1002 Time: 6.25011 
[04/18/2022-02:36:59] [V] [TRT] Tactic: 0 Time: 0.072704 [04/18/2022-02:36:59] [V] [TRT] Fastest Tactic: 0 Time: 0.072704 [04/18/2022-02:36:59] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:59] [V] [TRT] *************** Autotuning Reformat: Float(8192,512,512,1:4,256,4) -> Float(4096,256,128,128:32,64,1) *************** [04/18/2022-02:36:59] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1174:0 copy (Reformat) [04/18/2022-02:36:59] [V] [TRT] Tactic: 1002 Time: 6.25024 [04/18/2022-02:36:59] [V] [TRT] Tactic: 0 Time: 0.174976 [04/18/2022-02:36:59] [V] [TRT] Fastest Tactic: 0 Time: 0.174976 [04/18/2022-02:36:59] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:59] [V] [TRT] *************** Autotuning Reformat: Float(2048,128,128,128:32,64,1) -> Float(65536,4096,2048,128,64,1) *************** [04/18/2022-02:36:59] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1174:0 copy (Reformat) [04/18/2022-02:36:59] [V] [TRT] Tactic: 1002 Time: 0.75648 [04/18/2022-02:36:59] [V] [TRT] Tactic: 0 Time: 0.06528 [04/18/2022-02:36:59] [V] [TRT] Fastest Tactic: 0 Time: 0.06528 [04/18/2022-02:36:59] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:59] [V] [TRT] *************** Autotuning Reformat: Float(2048,128,128,128:32,64,1) -> Float(65536,4096,2048,1,1024,16) *************** [04/18/2022-02:36:59] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1174:0 copy (Reformat) [04/18/2022-02:36:59] [V] [TRT] Tactic: 1002 Time: 6.25203 
[04/18/2022-02:36:59] [V] [TRT] Tactic: 0 Time: 0.061696 [04/18/2022-02:36:59] [V] [TRT] Fastest Tactic: 0 Time: 0.061696 [04/18/2022-02:36:59] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:36:59] [V] [TRT] *************** Autotuning Reformat: Float(2048,128,128,128:32,64,1) -> Float(16384,1024,512,1:4,256,4) *************** [04/18/2022-02:36:59] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1174:0 copy (Reformat) [04/18/2022-02:37:00] [V] [TRT] Tactic: 1002 Time: 7.37472 [04/18/2022-02:37:00] [V] [TRT] Tactic: 0 Time: 0.069376 [04/18/2022-02:37:00] [V] [TRT] Fastest Tactic: 0 Time: 0.069376 [04/18/2022-02:37:00] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:00] [V] [TRT] *************** Autotuning Reformat: Float(2048,128,128,128:32,64,1) -> Float(4096,256,128,128:32,64,1) *************** [04/18/2022-02:37:00] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1174:0 copy (Reformat) [04/18/2022-02:37:00] [V] [TRT] Tactic: 1002 Time: 6.84698 [04/18/2022-02:37:00] [V] [TRT] Tactic: 0 Time: 0.19776 [04/18/2022-02:37:00] [V] [TRT] Fastest Tactic: 0 Time: 0.19776 [04/18/2022-02:37:00] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:00] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:00] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,128,64,1) -> Float(65536,4096,2048,128,64,1) *************** [04/18/2022-02:37:00] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1174:0 copy 
(Reformat) [04/18/2022-02:37:00] [V] [TRT] Tactic: 1002 Time: 1.92115 [04/18/2022-02:37:00] [V] [TRT] Tactic: 0 Time: 0.056064 [04/18/2022-02:37:00] [V] [TRT] Fastest Tactic: 0 Time: 0.056064 [04/18/2022-02:37:00] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:00] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,128,64,1) -> Float(65536,4096,2048,1,1024,16) *************** [04/18/2022-02:37:00] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1174:0 copy (Reformat) [04/18/2022-02:37:00] [V] [TRT] Tactic: 1002 Time: 0.75776 [04/18/2022-02:37:00] [V] [TRT] Tactic: 0 Time: 0.06016 [04/18/2022-02:37:00] [V] [TRT] Fastest Tactic: 0 Time: 0.06016 [04/18/2022-02:37:00] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:00] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,128,64,1) -> Float(16384,1024,512,1:4,256,4) *************** [04/18/2022-02:37:00] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1174:0 copy (Reformat) [04/18/2022-02:37:00] [V] [TRT] Tactic: 1002 Time: 0.765568 [04/18/2022-02:37:00] [V] [TRT] Tactic: 0 Time: 0.066048 [04/18/2022-02:37:00] [V] [TRT] Fastest Tactic: 0 Time: 0.066048 [04/18/2022-02:37:00] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:00] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,128,64,1) -> Float(4096,256,128,128:32,64,1) *************** [04/18/2022-02:37:00] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1174:0 copy 
(Reformat) [04/18/2022-02:37:00] [V] [TRT] Tactic: 1002 Time: 6.54016 [04/18/2022-02:37:00] [V] [TRT] Tactic: 0 Time: 0.284416 [04/18/2022-02:37:00] [V] [TRT] Fastest Tactic: 0 Time: 0.284416 [04/18/2022-02:37:00] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:00] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,1,1024,16) -> Float(65536,4096,2048,128,64,1) *************** [04/18/2022-02:37:00] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1174:0 copy (Reformat) [04/18/2022-02:37:00] [V] [TRT] Tactic: 1002 Time: 1.20474 [04/18/2022-02:37:00] [V] [TRT] Tactic: 0 Time: 0.054144 [04/18/2022-02:37:00] [V] [TRT] Fastest Tactic: 0 Time: 0.054144 [04/18/2022-02:37:00] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:00] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,1,1024,16) -> Float(65536,4096,2048,1,1024,16) *************** [04/18/2022-02:37:00] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1174:0 copy (Reformat) [04/18/2022-02:37:00] [V] [TRT] Tactic: 1002 Time: 6.24192 [04/18/2022-02:37:00] [V] [TRT] Tactic: 0 Time: 0.057344 [04/18/2022-02:37:00] [V] [TRT] Fastest Tactic: 0 Time: 0.057344 [04/18/2022-02:37:00] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:00] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,1,1024,16) -> Float(16384,1024,512,1:4,256,4) *************** [04/18/2022-02:37:00] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1174:0 copy 
(Reformat) [04/18/2022-02:37:00] [V] [TRT] Tactic: 1002 Time: 6.23936 [04/18/2022-02:37:00] [V] [TRT] Tactic: 0 Time: 0.061184 [04/18/2022-02:37:00] [V] [TRT] Fastest Tactic: 0 Time: 0.061184 [04/18/2022-02:37:00] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:00] [V] [TRT] *************** Autotuning Reformat: Float(32768,2048,2048,1,1024,16) -> Float(4096,256,128,128:32,64,1) *************** [04/18/2022-02:37:00] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1174:0 copy (Reformat) [04/18/2022-02:37:00] [V] [TRT] Tactic: 1002 Time: 6.84813 [04/18/2022-02:37:00] [V] [TRT] Tactic: 0 Time: 0.301824 [04/18/2022-02:37:00] [V] [TRT] Fastest Tactic: 0 Time: 0.301824 [04/18/2022-02:37:00] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:00] [V] [TRT] *************** Autotuning Reformat: Float(8192,512,512,1:4,256,4) -> Float(65536,4096,2048,128,64,1) *************** [04/18/2022-02:37:00] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1174:0 copy (Reformat) [04/18/2022-02:37:01] [V] [TRT] Tactic: 1002 Time: 0.756096 [04/18/2022-02:37:01] [V] [TRT] Tactic: 0 Time: 0.061824 [04/18/2022-02:37:01] [V] [TRT] Fastest Tactic: 0 Time: 0.061824 [04/18/2022-02:37:01] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:01] [V] [TRT] *************** Autotuning Reformat: Float(8192,512,512,1:4,256,4) -> Float(65536,4096,2048,1,1024,16) *************** [04/18/2022-02:37:01] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1174:0 copy 
(Reformat) [04/18/2022-02:37:01] [V] [TRT] Tactic: 1002 Time: 6.25318 [04/18/2022-02:37:01] [V] [TRT] Tactic: 0 Time: 0.06272 [04/18/2022-02:37:01] [V] [TRT] Fastest Tactic: 0 Time: 0.06272 [04/18/2022-02:37:01] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:01] [V] [TRT] *************** Autotuning Reformat: Float(8192,512,512,1:4,256,4) -> Float(16384,1024,512,1:4,256,4) *************** [04/18/2022-02:37:01] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1174:0 copy (Reformat) [04/18/2022-02:37:01] [V] [TRT] Tactic: 1002 Time: 6.25229 [04/18/2022-02:37:01] [V] [TRT] Tactic: 0 Time: 0.070144 [04/18/2022-02:37:01] [V] [TRT] Fastest Tactic: 0 Time: 0.070144 [04/18/2022-02:37:01] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:01] [V] [TRT] *************** Autotuning Reformat: Float(8192,512,512,1:4,256,4) -> Float(4096,256,128,128:32,64,1) *************** [04/18/2022-02:37:01] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1174:0 copy (Reformat) [04/18/2022-02:37:01] [V] [TRT] Tactic: 1002 Time: 6.25088 [04/18/2022-02:37:01] [V] [TRT] Tactic: 0 Time: 0.30208 [04/18/2022-02:37:01] [V] [TRT] Fastest Tactic: 0 Time: 0.30208 [04/18/2022-02:37:01] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:01] [V] [TRT] *************** Autotuning Reformat: Float(2048,128,128,128:32,64,1) -> Float(65536,4096,2048,128,64,1) *************** [04/18/2022-02:37:01] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1174:0 copy (Reformat) 
[04/18/2022-02:37:01] [V] [TRT] Tactic: 1002 Time: 0.756736
[04/18/2022-02:37:01] [V] [TRT] Tactic: 0 Time: 0.080768
[04/18/2022-02:37:01] [V] [TRT] Fastest Tactic: 0 Time: 0.080768
[04/18/2022-02:37:01] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:01] [V] [TRT] *************** Autotuning Reformat: Float(2048,128,128,128:32,64,1) -> Float(65536,4096,2048,1,1024,16) ***************
[04/18/2022-02:37:01] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1174:0 copy (Reformat)
[04/18/2022-02:37:01] [V] [TRT] Tactic: 1002 Time: 6.25165
[04/18/2022-02:37:01] [V] [TRT] Tactic: 0 Time: 0.061568
[04/18/2022-02:37:01] [V] [TRT] Fastest Tactic: 0 Time: 0.061568
[04/18/2022-02:37:01] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:01] [V] [TRT] *************** Autotuning Reformat: Float(2048,128,128,128:32,64,1) -> Float(16384,1024,512,1:4,256,4) ***************
[04/18/2022-02:37:01] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1174:0 copy (Reformat)
[04/18/2022-02:37:01] [V] [TRT] Tactic: 1002 Time: 6.2505
[04/18/2022-02:37:01] [V] [TRT] Tactic: 0 Time: 0.070656
[04/18/2022-02:37:01] [V] [TRT] Fastest Tactic: 0 Time: 0.070656
[04/18/2022-02:37:01] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:01] [V] [TRT] *************** Autotuning Reformat: Float(2048,128,128,128:32,64,1) -> Float(4096,256,128,128:32,64,1) ***************
[04/18/2022-02:37:01] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/input_3_dn_lvl_5/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1174:0 copy (Reformat)
[04/18/2022-02:37:01] [V] [TRT] Tactic: 1002 Time: 7.55392
[04/18/2022-02:37:01] [V] [TRT] Tactic: 0 Time: 0.323072
[04/18/2022-02:37:01] [V] [TRT] Fastest Tactic: 0 Time: 0.323072
[04/18/2022-02:37:01] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:01] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:01] [V] [TRT] *************** Autotuning Reformat: Float(65536,4096,2048,128,64,1) -> Float(65536,4096,2048,1,1024,16) ***************
[04/18/2022-02:37:01] [V] [TRT] *************** Autotuning Reformat: Float(65536,4096,2048,128,64,1) -> Float(16384,1024,512,1:4,256,4) ***************
[04/18/2022-02:37:01] [V] [TRT] *************** Autotuning Reformat: Float(65536,4096,2048,128,64,1) -> Float(4096,256,128,128:32,64,1) ***************
[04/18/2022-02:37:01] [V] [TRT] *************** Autotuning Reformat: Float(65536,4096,2048,1,1024,16) -> Float(65536,4096,2048,128,64,1) ***************
[04/18/2022-02:37:01] [V] [TRT] *************** Autotuning Reformat: Float(65536,4096,2048,1,1024,16) -> Float(16384,1024,512,1:4,256,4) ***************
[04/18/2022-02:37:01] [V] [TRT] *************** Autotuning Reformat: Float(65536,4096,2048,1,1024,16) -> Float(4096,256,128,128:32,64,1) ***************
[04/18/2022-02:37:01] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,512,1:4,256,4) -> Float(65536,4096,2048,128,64,1) ***************
[04/18/2022-02:37:01] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,512,1:4,256,4) -> Float(65536,4096,2048,1,1024,16) ***************
[04/18/2022-02:37:01] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,512,1:4,256,4) -> Float(4096,256,128,128:32,64,1) ***************
[04/18/2022-02:37:01] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,128,128:32,64,1) -> Float(65536,4096,2048,128,64,1) ***************
[04/18/2022-02:37:01] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,128,128:32,64,1) -> Float(65536,4096,2048,1,1024,16) ***************
[04/18/2022-02:37:01] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,128,128:32,64,1) -> Float(16384,1024,512,1:4,256,4) ***************
[04/18/2022-02:37:01] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:01] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(131072,4096,128,2,1) ***************
[04/18/2022-02:37:01] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/stack_Unsqueeze__1177:0 copy (Reformat)
[04/18/2022-02:37:01] [V] [TRT] Tactic: 1002 Time: 2.84749
[04/18/2022-02:37:01] [V] [TRT] Tactic: 0 Time: 0.473216
[04/18/2022-02:37:01] [V] [TRT] Fastest Tactic: 0 Time: 0.473216
[04/18/2022-02:37:01] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:01] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,32,32) -> Float(131072,4096,128,2,1) ***************
[04/18/2022-02:37:01] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/stack_Unsqueeze__1177:0 copy (Reformat)
[04/18/2022-02:37:01] [V] [TRT] Tactic: 1002 Time: 1.19142
[04/18/2022-02:37:01] [V] [TRT] Tactic: 0 Time: 0.432896
[04/18/2022-02:37:01] [V] [TRT] Fastest Tactic: 0 Time: 0.432896
[04/18/2022-02:37:01] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:01] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,8,8) -> Float(131072,4096,128,2,1) ***************
[04/18/2022-02:37:01] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/stack_Unsqueeze__1177:0 copy (Reformat)
[04/18/2022-02:37:02] [V] [TRT] Tactic: 1002 Time: 0.73024
[04/18/2022-02:37:02] [V] [TRT] Tactic: 0 Time: 0.427392
[04/18/2022-02:37:02] [V] [TRT] Fastest Tactic: 0 Time: 0.427392
[04/18/2022-02:37:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,1,1) -> Float(131072,4096,128,2,1) ***************
[04/18/2022-02:37:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/stack_Unsqueeze__1177:0 copy (Reformat)
[04/18/2022-02:37:02] [V] [TRT] Tactic: 1002 Time: 0.697856
[04/18/2022-02:37:02] [V] [TRT] Tactic: 0 Time: 0.433152
[04/18/2022-02:37:02] [V] [TRT] Fastest Tactic: 0 Time: 0.433152
[04/18/2022-02:37:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:02] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(131072,4096,128,2,1) ***************
[04/18/2022-02:37:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/stack_Unsqueeze__1178:0 copy (Reformat)
[04/18/2022-02:37:02] [V] [TRT] Tactic: 1002 Time: 2.84646
[04/18/2022-02:37:02] [V] [TRT] Tactic: 0 Time: 0.418816
[04/18/2022-02:37:02] [V] [TRT] Fastest Tactic: 0 Time: 0.418816
[04/18/2022-02:37:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,32,32) -> Float(131072,4096,128,2,1) ***************
[04/18/2022-02:37:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/stack_Unsqueeze__1178:0 copy (Reformat)
[04/18/2022-02:37:02] [V] [TRT] Tactic: 1002 Time: 0.721024
[04/18/2022-02:37:02] [V] [TRT] Tactic: 0 Time: 0.430464
[04/18/2022-02:37:02] [V] [TRT] Fastest Tactic: 0 Time: 0.430464
[04/18/2022-02:37:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,8,8) -> Float(131072,4096,128,2,1) ***************
[04/18/2022-02:37:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/stack_Unsqueeze__1178:0 copy (Reformat)
[04/18/2022-02:37:02] [V] [TRT] Tactic: 1002 Time: 0.69824
[04/18/2022-02:37:02] [V] [TRT] Tactic: 0 Time: 0.431872
[04/18/2022-02:37:02] [V] [TRT] Fastest Tactic: 0 Time: 0.431872
[04/18/2022-02:37:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,1,1) -> Float(131072,4096,128,2,1) ***************
[04/18/2022-02:37:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/stack_Unsqueeze__1178:0 copy (Reformat)
[04/18/2022-02:37:02] [V] [TRT] Tactic: 1002 Time: 0.698624
[04/18/2022-02:37:02] [V] [TRT] Tactic: 0 Time: 0.512128
[04/18/2022-02:37:02] [V] [TRT] Fastest Tactic: 0 Time: 0.512128
[04/18/2022-02:37:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:02] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:02] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:02] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(65536,2048,1,32,32) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(16384,512,1:4,8,8) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(2048,64,64:32,1,1) ***************
[04/18/2022-02:37:02] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(65536,2048,1,32,32) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(16384,512,1:4,8,8) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(2048,64,64:32,1,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,32,32) -> Float(65536,2048,64,1,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,32,32) -> Float(16384,512,1:4,8,8) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,32,32) -> Float(2048,64,64:32,1,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,8,8) -> Float(65536,2048,64,1,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,8,8) -> Float(65536,2048,1,32,32) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,8,8) -> Float(2048,64,64:32,1,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,1,1) -> Float(65536,2048,64,1,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,1,1) -> Float(65536,2048,1,32,32) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,1,1) -> Float(16384,512,1:4,8,8) ***************
[04/18/2022-02:37:02] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(65536,1,2048,32) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(16384,1:4,512,8) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(2048,2048:32,64,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(1:4,4096,128,2) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(65536,2048,64,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(16384,1:4,512,8) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(2048,2048:32,64,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(1:4,4096,128,2) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(65536,2048,64,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(65536,1,2048,32) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(2048,2048:32,64,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(1:4,4096,128,2) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(65536,2048,64,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(65536,1,2048,32) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(16384,1:4,512,8) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(1:4,4096,128,2) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(1:4,4096,128,2) -> Float(65536,2048,64,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(1:4,4096,128,2) -> Float(65536,1,2048,32) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(1:4,4096,128,2) -> Float(16384,1:4,512,8) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(1:4,4096,128,2) -> Float(2048,2048:32,64,1) ***************
[04/18/2022-02:37:02] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(65536,1,2048,32) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(16384,1:4,512,8) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(2048,2048:32,64,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(65536,2048,64,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(16384,1:4,512,8) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(2048,2048:32,64,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(65536,2048,64,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(65536,1,2048,32) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(2048,2048:32,64,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(65536,2048,64,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(65536,1,2048,32) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(16384,1:4,512,8) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(1:4,4096,128,2) -> Float(65536,2048,64,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(1:4,4096,128,2) -> Float(65536,1,2048,32) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(1:4,4096,128,2) -> Float(16384,1:4,512,8) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(1:4,4096,128,2) -> Float(2048,2048:32,64,1) ***************
[04/18/2022-02:37:02] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:37:02] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:37:02] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(2048,1024:32,32,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(2048,1024:32,32,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(2048,1024:32,32,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:37:02] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(65536,1,2048,32) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(16384,1:4,512,8) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(2048,2048:32,64,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(65536,2048,64,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(16384,1:4,512,8) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(2048,2048:32,64,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(65536,2048,64,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(65536,1,2048,32) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(2048,2048:32,64,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(65536,2048,64,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(65536,1,2048,32) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(16384,1:4,512,8) ***************
[04/18/2022-02:37:02] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(65536,1,2048,32) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(16384,1:4,512,8) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(2048,2048:32,64,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(65536,2048,64,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(16384,1:4,512,8) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(2048,2048:32,64,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(65536,2048,64,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(65536,1,2048,32) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(2048,2048:32,64,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(65536,2048,64,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(65536,1,2048,32) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(16384,1:4,512,8) ***************
[04/18/2022-02:37:02] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(65536,1,2048,32) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(16384,1:4,512,8) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(2048,2048:32,64,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(65536,2048,64,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(16384,1:4,512,8) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(2048,2048:32,64,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(65536,2048,64,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(65536,1,2048,32) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(2048,2048:32,64,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(65536,2048,64,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(65536,1,2048,32) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(16384,1:4,512,8) ***************
[04/18/2022-02:37:02] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,64,1) -> Float(65536,2048,1,2048,32) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,64,1) -> Float(16384,512,1:4,512,8) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,64,1) -> Float(2048,64,64:32,64,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,2048,32) -> Float(65536,2048,64,64,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,2048,32) -> Float(16384,512,1:4,512,8) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,2048,32) -> Float(2048,64,64:32,64,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,512,8) -> Float(65536,2048,64,64,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,512,8) -> Float(65536,2048,1,2048,32) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,512,8) -> Float(2048,64,64:32,64,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,64,1) -> Float(65536,2048,64,64,1) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,64,1) -> Float(65536,2048,1,2048,32) ***************
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,64,1) -> Float(16384,512,1:4,512,8) ***************
[04/18/2022-02:37:02] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,64,1) -> Float(131072,4096,128,64,1) ***************
[04/18/2022-02:37:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1186:0 copy (Reformat)
[04/18/2022-02:37:02] [V] [TRT] Tactic: 1002 Time: 3.40211
[04/18/2022-02:37:02] [V] [TRT] Tactic: 0 Time: 0.082944
[04/18/2022-02:37:02] [V] [TRT] Fastest Tactic: 0 Time: 0.082944
[04/18/2022-02:37:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,64,1) -> Float(131072,4096,1,2048,32) ***************
[04/18/2022-02:37:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1186:0 copy (Reformat)
[04/18/2022-02:37:02] [V] [TRT] Tactic: 1002 Time: 0.157056
[04/18/2022-02:37:02] [V] [TRT] Tactic: 0 Time: 0.092288
[04/18/2022-02:37:02] [V] [TRT] Fastest Tactic: 0 Time: 0.092288
[04/18/2022-02:37:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,64,1) -> Float(32768,1024,1:4,512,8) ***************
[04/18/2022-02:37:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1186:0 copy (Reformat)
[04/18/2022-02:37:02] [V] [TRT] Tactic: 1002 Time: 0.158976
[04/18/2022-02:37:02] [V] [TRT] Tactic: 0 Time: 0.099072
[04/18/2022-02:37:02] [V] [TRT] Fastest Tactic: 0 Time: 0.099072
[04/18/2022-02:37:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,64,1) -> Float(4096,128,128:32,64,1) ***************
[04/18/2022-02:37:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1186:0 copy (Reformat)
[04/18/2022-02:37:02] [V] [TRT] Tactic: 1002 Time: 0.157952
[04/18/2022-02:37:02] [V] [TRT] Tactic: 0 Time: 0.286592
[04/18/2022-02:37:02] [V] [TRT] Fastest Tactic: 1002 Time: 0.157952
[04/18/2022-02:37:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,2048,32) -> Float(131072,4096,128,64,1) ***************
[04/18/2022-02:37:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1186:0 copy (Reformat)
[04/18/2022-02:37:02] [V] [TRT] Tactic: 1002 Time: 0.14464
[04/18/2022-02:37:02] [V] [TRT] Tactic: 0 Time: 0.119552
[04/18/2022-02:37:02] [V] [TRT] Fastest Tactic: 0 Time: 0.119552
[04/18/2022-02:37:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,2048,32) -> Float(131072,4096,1,2048,32) ***************
[04/18/2022-02:37:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1186:0 copy (Reformat)
[04/18/2022-02:37:02] [V] [TRT] Tactic: 1002 Time: 0.092032
[04/18/2022-02:37:02] [V] [TRT] Tactic: 0 Time: 0.105856
[04/18/2022-02:37:02] [V] [TRT] Fastest Tactic: 1002 Time: 0.092032
[04/18/2022-02:37:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,2048,32) -> Float(32768,1024,1:4,512,8) ***************
[04/18/2022-02:37:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1186:0 copy (Reformat)
[04/18/2022-02:37:02] [V] [TRT] Tactic: 1002 Time: 0.153472
[04/18/2022-02:37:02] [V] [TRT] Tactic: 0 Time: 0.091392
[04/18/2022-02:37:02] [V] [TRT] Fastest Tactic: 0 Time: 0.091392
[04/18/2022-02:37:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,2048,32) -> Float(4096,128,128:32,64,1) ***************
[04/18/2022-02:37:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1186:0 copy (Reformat)
[04/18/2022-02:37:02] [V] [TRT] Tactic: 1002 Time: 0.16704
[04/18/2022-02:37:02] [V] [TRT] Tactic: 0 Time: 0.33984
[04/18/2022-02:37:02] [V] [TRT] Fastest Tactic: 1002 Time: 0.16704
[04/18/2022-02:37:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,512,8) -> Float(131072,4096,128,64,1) ***************
[04/18/2022-02:37:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1186:0 copy (Reformat)
[04/18/2022-02:37:02] [V] [TRT] Tactic: 1002 Time: 0.157952
[04/18/2022-02:37:02] [V] [TRT] Tactic: 0 Time: 0.093184
[04/18/2022-02:37:02] [V] [TRT] Fastest Tactic: 0 Time: 0.093184
[04/18/2022-02:37:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,512,8) -> Float(131072,4096,1,2048,32) ***************
[04/18/2022-02:37:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1186:0 copy (Reformat)
[04/18/2022-02:37:02] [V] [TRT] Tactic: 1002 Time: 0.142336
[04/18/2022-02:37:02] [V] [TRT] Tactic: 0 Time: 0.09088
[04/18/2022-02:37:02] [V] [TRT] Fastest Tactic: 0 Time: 0.09088
[04/18/2022-02:37:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,512,8) -> Float(32768,1024,1:4,512,8) ***************
[04/18/2022-02:37:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1186:0 copy (Reformat)
[04/18/2022-02:37:02] [V] [TRT] Tactic: 1002 Time: 0.147968
[04/18/2022-02:37:02] [V] [TRT] Tactic: 0 Time: 0.1056
[04/18/2022-02:37:02] [V] [TRT] Fastest Tactic: 0 Time: 0.1056
[04/18/2022-02:37:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,512,8) -> Float(4096,128,128:32,64,1) ***************
[04/18/2022-02:37:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1186:0 copy (Reformat)
[04/18/2022-02:37:02] [V] [TRT] Tactic: 1002 Time: 0.144512
[04/18/2022-02:37:02] [V] [TRT] Tactic: 0 Time: 0.337408
[04/18/2022-02:37:02] [V] [TRT] Fastest Tactic: 1002 Time: 0.144512
[04/18/2022-02:37:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,64,1) -> Float(131072,4096,128,64,1) ***************
[04/18/2022-02:37:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1186:0 copy (Reformat)
[04/18/2022-02:37:02] [V] [TRT] Tactic: 1002 Time: 0.14464
[04/18/2022-02:37:02] [V] [TRT] Tactic: 0 Time: 0.108928
[04/18/2022-02:37:02] [V] [TRT] Fastest Tactic: 0 Time: 0.108928
[04/18/2022-02:37:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,64,1) -> Float(131072,4096,1,2048,32) ***************
[04/18/2022-02:37:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1186:0 copy (Reformat)
[04/18/2022-02:37:02] [V] [TRT] Tactic: 1002 Time: 0.14464
[04/18/2022-02:37:02] [V] [TRT] Tactic: 0 Time: 0.107648
[04/18/2022-02:37:02] [V] [TRT] Fastest Tactic: 0 Time: 0.107648
[04/18/2022-02:37:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,64,1) -> Float(32768,1024,1:4,512,8) ***************
[04/18/2022-02:37:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1186:0 copy (Reformat)
[04/18/2022-02:37:02] [V] [TRT] Tactic: 1002 Time: 0.143232
[04/18/2022-02:37:02] [V] [TRT] Tactic: 0 Time: 0.104704
[04/18/2022-02:37:02] [V] [TRT] Fastest Tactic: 0 Time: 0.104704
[04/18/2022-02:37:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,64,1) -> Float(4096,128,128:32,64,1) ***************
[04/18/2022-02:37:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1186:0 copy (Reformat)
[04/18/2022-02:37:02] [V] [TRT] Tactic: 1002 Time: 0.152576
[04/18/2022-02:37:02] [V] [TRT] Tactic: 0 Time: 0.338304
[04/18/2022-02:37:02] [V] [TRT] Fastest Tactic: 1002 Time: 0.152576
[04/18/2022-02:37:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002
[04/18/2022-02:37:02] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat:
Float(65536,2048,64,64,1) -> Float(131072,4096,128,64,1) *************** [04/18/2022-02:37:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1186:0 copy (Reformat) [04/18/2022-02:37:02] [V] [TRT] Tactic: 1002 Time: 2.80346 [04/18/2022-02:37:02] [V] [TRT] Tactic: 0 Time: 0.094848 [04/18/2022-02:37:02] [V] [TRT] Fastest Tactic: 0 Time: 0.094848 [04/18/2022-02:37:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,64,1) -> Float(131072,4096,1,2048,32) *************** [04/18/2022-02:37:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1186:0 copy (Reformat) [04/18/2022-02:37:02] [V] [TRT] Tactic: 1002 Time: 0.141824 [04/18/2022-02:37:02] [V] [TRT] Tactic: 0 Time: 0.112 [04/18/2022-02:37:02] [V] [TRT] Fastest Tactic: 0 Time: 0.112 [04/18/2022-02:37:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,64,1) -> Float(32768,1024,1:4,512,8) *************** [04/18/2022-02:37:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1186:0 copy (Reformat) [04/18/2022-02:37:02] [V] [TRT] Tactic: 1002 Time: 0.142592 [04/18/2022-02:37:02] [V] [TRT] Tactic: 0 Time: 0.100352 [04/18/2022-02:37:02] [V] [TRT] Fastest Tactic: 0 Time: 0.100352 [04/18/2022-02:37:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,64,1) -> 
Float(4096,128,128:32,64,1) *************** [04/18/2022-02:37:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1186:0 copy (Reformat) [04/18/2022-02:37:02] [V] [TRT] Tactic: 1002 Time: 0.143616 [04/18/2022-02:37:02] [V] [TRT] Tactic: 0 Time: 0.287232 [04/18/2022-02:37:02] [V] [TRT] Fastest Tactic: 1002 Time: 0.143616 [04/18/2022-02:37:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002 [04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,2048,32) -> Float(131072,4096,128,64,1) *************** [04/18/2022-02:37:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1186:0 copy (Reformat) [04/18/2022-02:37:02] [V] [TRT] Tactic: 1002 Time: 0.149888 [04/18/2022-02:37:02] [V] [TRT] Tactic: 0 Time: 0.09408 [04/18/2022-02:37:02] [V] [TRT] Fastest Tactic: 0 Time: 0.09408 [04/18/2022-02:37:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,2048,32) -> Float(131072,4096,1,2048,32) *************** [04/18/2022-02:37:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1186:0 copy (Reformat) [04/18/2022-02:37:02] [V] [TRT] Tactic: 1002 Time: 0.078208 [04/18/2022-02:37:02] [V] [TRT] Tactic: 0 Time: 0.081408 [04/18/2022-02:37:02] [V] [TRT] Fastest Tactic: 1002 Time: 0.078208 [04/18/2022-02:37:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002 [04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,2048,32) -> 
Float(32768,1024,1:4,512,8) *************** [04/18/2022-02:37:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1186:0 copy (Reformat) [04/18/2022-02:37:02] [V] [TRT] Tactic: 1002 Time: 0.140544 [04/18/2022-02:37:02] [V] [TRT] Tactic: 0 Time: 0.09344 [04/18/2022-02:37:02] [V] [TRT] Fastest Tactic: 0 Time: 0.09344 [04/18/2022-02:37:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,2048,32) -> Float(4096,128,128:32,64,1) *************** [04/18/2022-02:37:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1186:0 copy (Reformat) [04/18/2022-02:37:02] [V] [TRT] Tactic: 1002 Time: 0.141184 [04/18/2022-02:37:02] [V] [TRT] Tactic: 0 Time: 0.337152 [04/18/2022-02:37:02] [V] [TRT] Fastest Tactic: 1002 Time: 0.141184 [04/18/2022-02:37:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002 [04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,512,8) -> Float(131072,4096,128,64,1) *************** [04/18/2022-02:37:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1186:0 copy (Reformat) [04/18/2022-02:37:02] [V] [TRT] Tactic: 1002 Time: 0.146304 [04/18/2022-02:37:02] [V] [TRT] Tactic: 0 Time: 0.098816 [04/18/2022-02:37:02] [V] [TRT] Fastest Tactic: 0 Time: 0.098816 [04/18/2022-02:37:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,512,8) -> Float(131072,4096,1,2048,32) 
*************** [04/18/2022-02:37:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1186:0 copy (Reformat) [04/18/2022-02:37:02] [V] [TRT] Tactic: 1002 Time: 0.153856 [04/18/2022-02:37:02] [V] [TRT] Tactic: 0 Time: 0.094336 [04/18/2022-02:37:02] [V] [TRT] Fastest Tactic: 0 Time: 0.094336 [04/18/2022-02:37:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,512,8) -> Float(32768,1024,1:4,512,8) *************** [04/18/2022-02:37:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1186:0 copy (Reformat) [04/18/2022-02:37:02] [V] [TRT] Tactic: 1002 Time: 0.141056 [04/18/2022-02:37:02] [V] [TRT] Tactic: 0 Time: 0.121984 [04/18/2022-02:37:02] [V] [TRT] Fastest Tactic: 0 Time: 0.121984 [04/18/2022-02:37:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,512,8) -> Float(4096,128,128:32,64,1) *************** [04/18/2022-02:37:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1186:0 copy (Reformat) [04/18/2022-02:37:02] [V] [TRT] Tactic: 1002 Time: 0.141312 [04/18/2022-02:37:02] [V] [TRT] Tactic: 0 Time: 0.339584 [04/18/2022-02:37:02] [V] [TRT] Fastest Tactic: 1002 Time: 0.141312 [04/18/2022-02:37:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002 [04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,64,1) -> Float(131072,4096,128,64,1) *************** 
[04/18/2022-02:37:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1186:0 copy (Reformat) [04/18/2022-02:37:02] [V] [TRT] Tactic: 1002 Time: 0.169984 [04/18/2022-02:37:02] [V] [TRT] Tactic: 0 Time: 0.098944 [04/18/2022-02:37:02] [V] [TRT] Fastest Tactic: 0 Time: 0.098944 [04/18/2022-02:37:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,64,1) -> Float(131072,4096,1,2048,32) *************** [04/18/2022-02:37:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1186:0 copy (Reformat) [04/18/2022-02:37:02] [V] [TRT] Tactic: 1002 Time: 0.15872 [04/18/2022-02:37:02] [V] [TRT] Tactic: 0 Time: 0.11072 [04/18/2022-02:37:02] [V] [TRT] Fastest Tactic: 0 Time: 0.11072 [04/18/2022-02:37:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:02] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,64,1) -> Float(32768,1024,1:4,512,8) *************** [04/18/2022-02:37:02] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1186:0 copy (Reformat) [04/18/2022-02:37:03] [V] [TRT] Tactic: 1002 Time: 0.1408 [04/18/2022-02:37:03] [V] [TRT] Tactic: 0 Time: 0.106624 [04/18/2022-02:37:03] [V] [TRT] Fastest Tactic: 0 Time: 0.106624 [04/18/2022-02:37:03] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:03] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,64,1) -> Float(4096,128,128:32,64,1) *************** [04/18/2022-02:37:03] [V] [TRT] 
--------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_Unsqueeze__1186:0 copy (Reformat) [04/18/2022-02:37:03] [V] [TRT] Tactic: 1002 Time: 0.140416 [04/18/2022-02:37:03] [V] [TRT] Tactic: 0 Time: 0.339456 [04/18/2022-02:37:03] [V] [TRT] Fastest Tactic: 1002 Time: 0.140416 [04/18/2022-02:37:03] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002 [04/18/2022-02:37:03] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:03] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,128,64,1) -> Float(131072,4096,1,2048,32) *************** [04/18/2022-02:37:03] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,128,64,1) -> Float(32768,1024,1:4,512,8) *************** [04/18/2022-02:37:03] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,128,64,1) -> Float(4096,128,128:32,64,1) *************** [04/18/2022-02:37:03] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,1,2048,32) -> Float(131072,4096,128,64,1) *************** [04/18/2022-02:37:03] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,1,2048,32) -> Float(32768,1024,1:4,512,8) *************** [04/18/2022-02:37:03] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,1,2048,32) -> Float(4096,128,128:32,64,1) *************** [04/18/2022-02:37:03] [V] [TRT] *************** Autotuning Reformat: Float(32768,1024,1:4,512,8) -> Float(131072,4096,128,64,1) *************** [04/18/2022-02:37:03] [V] [TRT] *************** Autotuning Reformat: Float(32768,1024,1:4,512,8) -> Float(131072,4096,1,2048,32) *************** [04/18/2022-02:37:03] [V] [TRT] *************** Autotuning Reformat: Float(32768,1024,1:4,512,8) -> Float(4096,128,128:32,64,1) *************** [04/18/2022-02:37:03] [V] [TRT] *************** Autotuning Reformat: Float(4096,128,128:32,64,1) -> Float(131072,4096,128,64,1) 
*************** [04/18/2022-02:37:03] [V] [TRT] *************** Autotuning Reformat: Float(4096,128,128:32,64,1) -> Float(131072,4096,1,2048,32) *************** [04/18/2022-02:37:03] [V] [TRT] *************** Autotuning Reformat: Float(4096,128,128:32,64,1) -> Float(32768,1024,1:4,512,8) *************** [04/18/2022-02:37:03] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:03] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,128,64,1) -> Float(131072,4096,4096,1,2048,32) *************** [04/18/2022-02:37:03] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,128,64,1) -> Float(32768,1024,1024,1:4,512,8) *************** [04/18/2022-02:37:03] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,128,64,1) -> Float(4096,128,128,128:32,64,1) *************** [04/18/2022-02:37:03] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,1,2048,32) -> Float(131072,4096,4096,128,64,1) *************** [04/18/2022-02:37:03] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,1,2048,32) -> Float(32768,1024,1024,1:4,512,8) *************** [04/18/2022-02:37:03] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,1,2048,32) -> Float(4096,128,128,128:32,64,1) *************** [04/18/2022-02:37:03] [V] [TRT] *************** Autotuning Reformat: Float(32768,1024,1024,1:4,512,8) -> Float(131072,4096,4096,128,64,1) *************** [04/18/2022-02:37:03] [V] [TRT] *************** Autotuning Reformat: Float(32768,1024,1024,1:4,512,8) -> Float(131072,4096,4096,1,2048,32) *************** [04/18/2022-02:37:03] [V] [TRT] *************** Autotuning Reformat: Float(32768,1024,1024,1:4,512,8) -> Float(4096,128,128,128:32,64,1) *************** [04/18/2022-02:37:03] [V] [TRT] *************** Autotuning Reformat: Float(4096,128,128,128:32,64,1) -> Float(131072,4096,4096,128,64,1) *************** [04/18/2022-02:37:03] [V] [TRT] *************** Autotuning Reformat: 
Float(4096,128,128,128:32,64,1) -> Float(131072,4096,4096,1,2048,32) *************** [04/18/2022-02:37:03] [V] [TRT] *************** Autotuning Reformat: Float(4096,128,128,128:32,64,1) -> Float(32768,1024,1024,1:4,512,8) *************** [04/18/2022-02:37:03] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:03] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,128,64,1) -> Float(262144,8192,4096,128,64,1) *************** [04/18/2022-02:37:03] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1190:0 copy (Reformat) [04/18/2022-02:37:03] [V] [TRT] Tactic: 1002 Time: 5.48966 [04/18/2022-02:37:03] [V] [TRT] Tactic: 0 Time: 0.642304 [04/18/2022-02:37:03] [V] [TRT] Fastest Tactic: 0 Time: 0.642304 [04/18/2022-02:37:03] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:03] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,128,64,1) -> Float(262144,8192,4096,1,2048,32) *************** [04/18/2022-02:37:03] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1190:0 copy (Reformat) [04/18/2022-02:37:03] [V] [TRT] Tactic: 1002 Time: 13.6893 [04/18/2022-02:37:03] [V] [TRT] Tactic: 0 Time: 0.568448 [04/18/2022-02:37:03] [V] [TRT] Fastest Tactic: 0 Time: 0.568448 [04/18/2022-02:37:03] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:03] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,128,64,1) -> Float(65536,2048,1024,1:4,512,8) *************** [04/18/2022-02:37:03] [V] [TRT] --------------- Timing Runner: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1190:0 copy (Reformat) [04/18/2022-02:37:03] [V] [TRT] Tactic: 1002 Time: 13.0888 [04/18/2022-02:37:03] [V] [TRT] Tactic: 0 Time: 0.56768 [04/18/2022-02:37:03] [V] [TRT] Fastest Tactic: 0 Time: 0.56768 [04/18/2022-02:37:03] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:03] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,128,64,1) -> Float(8192,256,128,128:32,64,1) *************** [04/18/2022-02:37:03] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1190:0 copy (Reformat) [04/18/2022-02:37:03] [V] [TRT] Tactic: 1002 Time: 13.6933 [04/18/2022-02:37:03] [V] [TRT] Tactic: 0 Time: 0.555008 [04/18/2022-02:37:03] [V] [TRT] Fastest Tactic: 0 Time: 0.555008 [04/18/2022-02:37:03] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:03] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,1,2048,32) -> Float(262144,8192,4096,128,64,1) *************** [04/18/2022-02:37:03] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1190:0 copy (Reformat) [04/18/2022-02:37:04] [V] [TRT] Tactic: 1002 Time: 1.92768 [04/18/2022-02:37:04] [V] [TRT] Tactic: 0 Time: 0.57984 [04/18/2022-02:37:04] [V] [TRT] Fastest Tactic: 0 Time: 0.57984 [04/18/2022-02:37:04] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:04] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,1,2048,32) -> Float(262144,8192,4096,1,2048,32) *************** [04/18/2022-02:37:04] [V] [TRT] --------------- Timing Runner: 
StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1190:0 copy (Reformat) [04/18/2022-02:37:04] [V] [TRT] Tactic: 1002 Time: 13.0606 [04/18/2022-02:37:04] [V] [TRT] Tactic: 0 Time: 0.555648 [04/18/2022-02:37:04] [V] [TRT] Fastest Tactic: 0 Time: 0.555648 [04/18/2022-02:37:04] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:04] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,1,2048,32) -> Float(65536,2048,1024,1:4,512,8) *************** [04/18/2022-02:37:04] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1190:0 copy (Reformat) [04/18/2022-02:37:04] [V] [TRT] Tactic: 1002 Time: 13.0345 [04/18/2022-02:37:04] [V] [TRT] Tactic: 0 Time: 0.632192 [04/18/2022-02:37:04] [V] [TRT] Fastest Tactic: 0 Time: 0.632192 [04/18/2022-02:37:04] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:04] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,1,2048,32) -> Float(8192,256,128,128:32,64,1) *************** [04/18/2022-02:37:04] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1190:0 copy (Reformat) [04/18/2022-02:37:04] [V] [TRT] Tactic: 1002 Time: 12.4732 [04/18/2022-02:37:04] [V] [TRT] Tactic: 0 Time: 0.699264 [04/18/2022-02:37:04] [V] [TRT] Fastest Tactic: 0 Time: 0.699264 [04/18/2022-02:37:04] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:04] [V] [TRT] *************** Autotuning Reformat: Float(32768,1024,1024,1:4,512,8) -> Float(262144,8192,4096,128,64,1) *************** [04/18/2022-02:37:04] [V] [TRT] --------------- Timing 
Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1190:0 copy (Reformat) [04/18/2022-02:37:04] [V] [TRT] Tactic: 1002 Time: 1.5415 [04/18/2022-02:37:04] [V] [TRT] Tactic: 0 Time: 0.561792 [04/18/2022-02:37:04] [V] [TRT] Fastest Tactic: 0 Time: 0.561792 [04/18/2022-02:37:04] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:04] [V] [TRT] *************** Autotuning Reformat: Float(32768,1024,1024,1:4,512,8) -> Float(262144,8192,4096,1,2048,32) *************** [04/18/2022-02:37:04] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1190:0 copy (Reformat) [04/18/2022-02:37:05] [V] [TRT] Tactic: 1002 Time: 12.4965 [04/18/2022-02:37:05] [V] [TRT] Tactic: 0 Time: 0.55424 [04/18/2022-02:37:05] [V] [TRT] Fastest Tactic: 0 Time: 0.55424 [04/18/2022-02:37:05] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:05] [V] [TRT] *************** Autotuning Reformat: Float(32768,1024,1024,1:4,512,8) -> Float(65536,2048,1024,1:4,512,8) *************** [04/18/2022-02:37:05] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1190:0 copy (Reformat) [04/18/2022-02:37:05] [V] [TRT] Tactic: 1002 Time: 13.0683 [04/18/2022-02:37:05] [V] [TRT] Tactic: 0 Time: 0.573568 [04/18/2022-02:37:05] [V] [TRT] Fastest Tactic: 0 Time: 0.573568 [04/18/2022-02:37:05] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:05] [V] [TRT] *************** Autotuning Reformat: Float(32768,1024,1024,1:4,512,8) -> Float(8192,256,128,128:32,64,1) *************** [04/18/2022-02:37:05] [V] [TRT] --------------- Timing 
Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1190:0 copy (Reformat) [04/18/2022-02:37:05] [V] [TRT] Tactic: 1002 Time: 13.0996 [04/18/2022-02:37:05] [V] [TRT] Tactic: 0 Time: 0.69632 [04/18/2022-02:37:05] [V] [TRT] Fastest Tactic: 0 Time: 0.69632 [04/18/2022-02:37:05] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:05] [V] [TRT] *************** Autotuning Reformat: Float(4096,128,128,128:32,64,1) -> Float(262144,8192,4096,128,64,1) *************** [04/18/2022-02:37:05] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1190:0 copy (Reformat) [04/18/2022-02:37:05] [V] [TRT] Tactic: 1002 Time: 1.52934 [04/18/2022-02:37:05] [V] [TRT] Tactic: 0 Time: 0.565376 [04/18/2022-02:37:05] [V] [TRT] Fastest Tactic: 0 Time: 0.565376 [04/18/2022-02:37:05] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:05] [V] [TRT] *************** Autotuning Reformat: Float(4096,128,128,128:32,64,1) -> Float(262144,8192,4096,1,2048,32) *************** [04/18/2022-02:37:05] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1190:0 copy (Reformat) [04/18/2022-02:37:05] [V] [TRT] Tactic: 1002 Time: 13.038 [04/18/2022-02:37:05] [V] [TRT] Tactic: 0 Time: 0.566528 [04/18/2022-02:37:05] [V] [TRT] Fastest Tactic: 0 Time: 0.566528 [04/18/2022-02:37:05] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:05] [V] [TRT] *************** Autotuning Reformat: Float(4096,128,128,128:32,64,1) -> Float(65536,2048,1024,1:4,512,8) *************** [04/18/2022-02:37:05] [V] [TRT] --------------- Timing 
Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1190:0 copy (Reformat) [04/18/2022-02:37:06] [V] [TRT] Tactic: 1002 Time: 12.4959 [04/18/2022-02:37:06] [V] [TRT] Tactic: 0 Time: 0.55616 [04/18/2022-02:37:06] [V] [TRT] Fastest Tactic: 0 Time: 0.55616 [04/18/2022-02:37:06] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:06] [V] [TRT] *************** Autotuning Reformat: Float(4096,128,128,128:32,64,1) -> Float(8192,256,128,128:32,64,1) *************** [04/18/2022-02:37:06] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1190:0 copy (Reformat) [04/18/2022-02:37:06] [V] [TRT] Tactic: 1002 Time: 13.0761 [04/18/2022-02:37:06] [V] [TRT] Tactic: 0 Time: 0.774016 [04/18/2022-02:37:06] [V] [TRT] Fastest Tactic: 0 Time: 0.774016 [04/18/2022-02:37:06] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:06] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:06] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,128,64,1) -> Float(262144,8192,4096,128,64,1) *************** [04/18/2022-02:37:06] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1190:0 copy (Reformat) [04/18/2022-02:37:06] [V] [TRT] Tactic: 1002 Time: 5.48211 [04/18/2022-02:37:06] [V] [TRT] Tactic: 0 Time: 0.55552 [04/18/2022-02:37:06] [V] [TRT] Fastest Tactic: 0 Time: 0.55552 [04/18/2022-02:37:06] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:06] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,128,64,1) -> 
Float(262144,8192,4096,1,2048,32) ***************
[04/18/2022-02:37:06] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1190:0 copy (Reformat)
[04/18/2022-02:37:06] [V] [TRT] Tactic: 1002 Time: 13.6934
[04/18/2022-02:37:06] [V] [TRT] Tactic: 0 Time: 0.670592
[04/18/2022-02:37:06] [V] [TRT] Fastest Tactic: 0 Time: 0.670592
[04/18/2022-02:37:06] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:06] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,128,64,1) -> Float(65536,2048,1024,1:4,512,8) ***************
[04/18/2022-02:37:06] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1190:0 copy (Reformat)
[04/18/2022-02:37:07] [V] [TRT] Tactic: 1002 Time: 13.0829
[04/18/2022-02:37:07] [V] [TRT] Tactic: 0 Time: 0.566912
[04/18/2022-02:37:07] [V] [TRT] Fastest Tactic: 0 Time: 0.566912
[04/18/2022-02:37:07] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:07] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,128,64,1) -> Float(8192,256,128,128:32,64,1) ***************
[04/18/2022-02:37:07] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1190:0 copy (Reformat)
[04/18/2022-02:37:07] [V] [TRT] Tactic: 1002 Time: 13.7298
[04/18/2022-02:37:07] [V] [TRT] Tactic: 0 Time: 0.585728
[04/18/2022-02:37:07] [V] [TRT] Fastest Tactic: 0 Time: 0.585728
[04/18/2022-02:37:07] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:07] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,1,2048,32) -> Float(262144,8192,4096,128,64,1) ***************
[04/18/2022-02:37:07] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1190:0 copy (Reformat)
[04/18/2022-02:37:07] [V] [TRT] Tactic: 1002 Time: 2.11802
[04/18/2022-02:37:07] [V] [TRT] Tactic: 0 Time: 0.572544
[04/18/2022-02:37:07] [V] [TRT] Fastest Tactic: 0 Time: 0.572544
[04/18/2022-02:37:07] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:07] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,1,2048,32) -> Float(262144,8192,4096,1,2048,32) ***************
[04/18/2022-02:37:07] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1190:0 copy (Reformat)
[04/18/2022-02:37:07] [V] [TRT] Tactic: 1002 Time: 13.0319
[04/18/2022-02:37:07] [V] [TRT] Tactic: 0 Time: 1.21664
[04/18/2022-02:37:07] [V] [TRT] Fastest Tactic: 0 Time: 1.21664
[04/18/2022-02:37:07] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:07] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,1,2048,32) -> Float(65536,2048,1024,1:4,512,8) ***************
[04/18/2022-02:37:07] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1190:0 copy (Reformat)
[04/18/2022-02:37:07] [V] [TRT] Tactic: 1002 Time: 13.0609
[04/18/2022-02:37:07] [V] [TRT] Tactic: 0 Time: 0.575872
[04/18/2022-02:37:07] [V] [TRT] Fastest Tactic: 0 Time: 0.575872
[04/18/2022-02:37:07] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:07] [V] [TRT] *************** Autotuning Reformat: Float(131072,4096,4096,1,2048,32) -> Float(8192,256,128,128:32,64,1) ***************
[04/18/2022-02:37:07] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1190:0 copy (Reformat)
[04/18/2022-02:37:08] [V] [TRT] Tactic: 1002 Time: 12.4735
[04/18/2022-02:37:08] [V] [TRT] Tactic: 0 Time: 0.69696
[04/18/2022-02:37:08] [V] [TRT] Fastest Tactic: 0 Time: 0.69696
[04/18/2022-02:37:08] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:08] [V] [TRT] *************** Autotuning Reformat: Float(32768,1024,1024,1:4,512,8) -> Float(262144,8192,4096,128,64,1) ***************
[04/18/2022-02:37:08] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1190:0 copy (Reformat)
[04/18/2022-02:37:08] [V] [TRT] Tactic: 1002 Time: 2.02547
[04/18/2022-02:37:08] [V] [TRT] Tactic: 0 Time: 0.562816
[04/18/2022-02:37:08] [V] [TRT] Fastest Tactic: 0 Time: 0.562816
[04/18/2022-02:37:08] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:08] [V] [TRT] *************** Autotuning Reformat: Float(32768,1024,1024,1:4,512,8) -> Float(262144,8192,4096,1,2048,32) ***************
[04/18/2022-02:37:08] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1190:0 copy (Reformat)
[04/18/2022-02:37:08] [V] [TRT] Tactic: 1002 Time: 12.4975
[04/18/2022-02:37:08] [V] [TRT] Tactic: 0 Time: 0.550656
[04/18/2022-02:37:08] [V] [TRT] Fastest Tactic: 0 Time: 0.550656
[04/18/2022-02:37:08] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:08] [V] [TRT] *************** Autotuning Reformat: Float(32768,1024,1024,1:4,512,8) -> Float(65536,2048,1024,1:4,512,8) ***************
[04/18/2022-02:37:08] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1190:0 copy (Reformat)
[04/18/2022-02:37:08] [V] [TRT] Tactic: 1002 Time: 13.0605
[04/18/2022-02:37:08] [V] [TRT] Tactic: 0 Time: 0.565504
[04/18/2022-02:37:08] [V] [TRT] Fastest Tactic: 0 Time: 0.565504
[04/18/2022-02:37:08] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:08] [V] [TRT] *************** Autotuning Reformat: Float(32768,1024,1024,1:4,512,8) -> Float(8192,256,128,128:32,64,1) ***************
[04/18/2022-02:37:08] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1190:0 copy (Reformat)
[04/18/2022-02:37:08] [V] [TRT] Tactic: 1002 Time: 13.0666
[04/18/2022-02:37:08] [V] [TRT] Tactic: 0 Time: 0.693888
[04/18/2022-02:37:08] [V] [TRT] Fastest Tactic: 0 Time: 0.693888
[04/18/2022-02:37:08] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:08] [V] [TRT] *************** Autotuning Reformat: Float(4096,128,128,128:32,64,1) -> Float(262144,8192,4096,128,64,1) ***************
[04/18/2022-02:37:08] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1190:0 copy (Reformat)
[04/18/2022-02:37:08] [V] [TRT] Tactic: 1002 Time: 1.99014
[04/18/2022-02:37:08] [V] [TRT] Tactic: 0 Time: 0.560896
[04/18/2022-02:37:08] [V] [TRT] Fastest Tactic: 0 Time: 0.560896
[04/18/2022-02:37:08] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:08] [V] [TRT] *************** Autotuning Reformat: Float(4096,128,128,128:32,64,1) -> Float(262144,8192,4096,1,2048,32) ***************
[04/18/2022-02:37:08] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1190:0 copy (Reformat)
[04/18/2022-02:37:09] [V] [TRT] Tactic: 1002 Time: 13.0703
[04/18/2022-02:37:09] [V] [TRT] Tactic: 0 Time: 0.552576
[04/18/2022-02:37:09] [V] [TRT] Fastest Tactic: 0 Time: 0.552576
[04/18/2022-02:37:09] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:09] [V] [TRT] *************** Autotuning Reformat: Float(4096,128,128,128:32,64,1) -> Float(65536,2048,1024,1:4,512,8) ***************
[04/18/2022-02:37:09] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1190:0 copy (Reformat)
[04/18/2022-02:37:09] [V] [TRT] Tactic: 1002 Time: 12.4972
[04/18/2022-02:37:09] [V] [TRT] Tactic: 0 Time: 0.566784
[04/18/2022-02:37:09] [V] [TRT] Fastest Tactic: 0 Time: 0.566784
[04/18/2022-02:37:09] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:09] [V] [TRT] *************** Autotuning Reformat: Float(4096,128,128,128:32,64,1) -> Float(8192,256,128,128:32,64,1) ***************
[04/18/2022-02:37:09] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/input_3_dn_lvl_4/nearest_neighbor_upsampling_x2/nearest_neighbor_upsampling/stack_1_Unsqueeze__1190:0 copy (Reformat)
[04/18/2022-02:37:09] [V] [TRT] Tactic: 1002 Time: 13.0935
[04/18/2022-02:37:09] [V] [TRT] Tactic: 0 Time: 0.694272
[04/18/2022-02:37:09] [V] [TRT] Fastest Tactic: 0 Time: 0.694272
[04/18/2022-02:37:09] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:09] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:09] [V] [TRT] *************** Autotuning Reformat: Float(262144,8192,4096,128,64,1) -> Float(262144,8192,4096,1,2048,32) ***************
[04/18/2022-02:37:09] [V] [TRT] *************** Autotuning Reformat: Float(262144,8192,4096,128,64,1) -> Float(65536,2048,1024,1:4,512,8) ***************
[04/18/2022-02:37:09] [V] [TRT] *************** Autotuning Reformat: Float(262144,8192,4096,128,64,1) -> Float(8192,256,128,128:32,64,1) ***************
[04/18/2022-02:37:09] [V] [TRT] *************** Autotuning Reformat: Float(262144,8192,4096,1,2048,32) -> Float(262144,8192,4096,128,64,1) ***************
[04/18/2022-02:37:09] [V] [TRT] *************** Autotuning Reformat: Float(262144,8192,4096,1,2048,32) -> Float(65536,2048,1024,1:4,512,8) ***************
[04/18/2022-02:37:09] [V] [TRT] *************** Autotuning Reformat: Float(262144,8192,4096,1,2048,32) -> Float(8192,256,128,128:32,64,1) ***************
[04/18/2022-02:37:09] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1024,1:4,512,8) -> Float(262144,8192,4096,128,64,1) ***************
[04/18/2022-02:37:09] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1024,1:4,512,8) -> Float(262144,8192,4096,1,2048,32) ***************
[04/18/2022-02:37:09] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1024,1:4,512,8) -> Float(8192,256,128,128:32,64,1) ***************
[04/18/2022-02:37:09] [V] [TRT] *************** Autotuning Reformat: Float(8192,256,128,128:32,64,1) -> Float(262144,8192,4096,128,64,1) ***************
[04/18/2022-02:37:09] [V] [TRT] *************** Autotuning Reformat: Float(8192,256,128,128:32,64,1) -> Float(262144,8192,4096,1,2048,32) ***************
[04/18/2022-02:37:09] [V] [TRT] *************** Autotuning Reformat: Float(8192,256,128,128:32,64,1) -> Float(65536,2048,1024,1:4,512,8) ***************
[04/18/2022-02:37:09] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:09] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1,1) -> Float(524288,8192,128,2,1) ***************
[04/18/2022-02:37:09] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/stack_Unsqueeze__1193:0 copy (Reformat)
[04/18/2022-02:37:09] [V] [TRT] Tactic: 1002 Time: 12.1559
[04/18/2022-02:37:09] [V] [TRT] Tactic: 0 Time: 1.7847
[04/18/2022-02:37:09] [V] [TRT] Fastest Tactic: 0 Time: 1.7847
[04/18/2022-02:37:09] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:09] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,1,64,64) -> Float(524288,8192,128,2,1) ***************
[04/18/2022-02:37:09] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/stack_Unsqueeze__1193:0 copy (Reformat)
[04/18/2022-02:37:10] [V] [TRT] Tactic: 1002 Time: 1.8103
[04/18/2022-02:37:10] [V] [TRT] Tactic: 0 Time: 1.81018
[04/18/2022-02:37:10] [V] [TRT] Fastest Tactic: 0 Time: 1.81018
[04/18/2022-02:37:10] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,1:4,16,16) -> Float(524288,8192,128,2,1) ***************
[04/18/2022-02:37:10] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/stack_Unsqueeze__1193:0 copy (Reformat)
[04/18/2022-02:37:10] [V] [TRT] Tactic: 1002 Time: 2.49869
[04/18/2022-02:37:10] [V] [TRT] Tactic: 0 Time: 1.83002
[04/18/2022-02:37:10] [V] [TRT] Fastest Tactic: 0 Time: 1.83002
[04/18/2022-02:37:10] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(8192,128,64:32,1,1) -> Float(524288,8192,128,2,1) ***************
[04/18/2022-02:37:10] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/stack_Unsqueeze__1193:0 copy (Reformat)
[04/18/2022-02:37:10] [V] [TRT] Tactic: 1002 Time: 2.32371
[04/18/2022-02:37:10] [V] [TRT] Tactic: 0 Time: 1.8345
[04/18/2022-02:37:10] [V] [TRT] Fastest Tactic: 0 Time: 1.8345
[04/18/2022-02:37:10] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:10] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1,1) -> Float(524288,8192,128,2,1) ***************
[04/18/2022-02:37:10] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/stack_Unsqueeze__1194:0 copy (Reformat)
[04/18/2022-02:37:10] [V] [TRT] Tactic: 1002 Time: 11.5887
[04/18/2022-02:37:10] [V] [TRT] Tactic: 0 Time: 1.78253
[04/18/2022-02:37:10] [V] [TRT] Fastest Tactic: 0 Time: 1.78253
[04/18/2022-02:37:10] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,1,64,64) -> Float(524288,8192,128,2,1) ***************
[04/18/2022-02:37:10] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/stack_Unsqueeze__1194:0 copy (Reformat)
[04/18/2022-02:37:10] [V] [TRT] Tactic: 1002 Time: 2.6752
[04/18/2022-02:37:10] [V] [TRT] Tactic: 0 Time: 1.81811
[04/18/2022-02:37:10] [V] [TRT] Fastest Tactic: 0 Time: 1.81811
[04/18/2022-02:37:10] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,1:4,16,16) -> Float(524288,8192,128,2,1) ***************
[04/18/2022-02:37:10] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/stack_Unsqueeze__1194:0 copy (Reformat)
[04/18/2022-02:37:10] [V] [TRT] Tactic: 1002 Time: 2.85798
[04/18/2022-02:37:10] [V] [TRT] Tactic: 0 Time: 1.80531
[04/18/2022-02:37:10] [V] [TRT] Fastest Tactic: 0 Time: 1.80531
[04/18/2022-02:37:10] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(8192,128,64:32,1,1) -> Float(524288,8192,128,2,1) ***************
[04/18/2022-02:37:10] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_21/3_dn_lvl_3/combine/stack_Unsqueeze__1194:0 copy (Reformat)
[04/18/2022-02:37:10] [V] [TRT] Tactic: 1002 Time: 2.42202
[04/18/2022-02:37:10] [V] [TRT] Tactic: 0 Time: 1.84512
[04/18/2022-02:37:10] [V] [TRT] Fastest Tactic: 0 Time: 1.84512
[04/18/2022-02:37:10] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:10] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:10] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:10] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1,1) -> Float(262144,4096,1,64,64) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1,1) -> Float(65536,1024,1:4,16,16) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1,1) -> Float(8192,128,64:32,1,1) ***************
[04/18/2022-02:37:10] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1,1) -> Float(262144,4096,1,64,64) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1,1) -> Float(65536,1024,1:4,16,16) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1,1) -> Float(8192,128,64:32,1,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,1,64,64) -> Float(262144,4096,64,1,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,1,64,64) -> Float(65536,1024,1:4,16,16) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,1,64,64) -> Float(8192,128,64:32,1,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,1:4,16,16) -> Float(262144,4096,64,1,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,1:4,16,16) -> Float(262144,4096,1,64,64) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,1:4,16,16) -> Float(8192,128,64:32,1,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(8192,128,64:32,1,1) -> Float(262144,4096,64,1,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(8192,128,64:32,1,1) -> Float(262144,4096,1,64,64) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(8192,128,64:32,1,1) -> Float(65536,1024,1:4,16,16) ***************
[04/18/2022-02:37:10] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(8192,4096:32,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(1:4,8192,128,2) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(8192,4096:32,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(1:4,8192,128,2) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(8192,4096:32,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(1:4,8192,128,2) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(1:4,8192,128,2) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(8192,4096:32,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(8192,4096:32,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(8192,4096:32,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(8192,4096:32,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(8192,4096:32,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:10] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:10] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:10] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:10] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(2048,1024:32,32,1) ***************
[04/18/2022-02:37:10] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:10] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:10] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(8192,4096:32,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(1:4,8192,128,2) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(8192,4096:32,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(1:4,8192,128,2) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(8192,4096:32,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(1:4,8192,128,2) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(1:4,8192,128,2) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(8192,4096:32,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(8192,4096:32,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(1:4,8192,128,2) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(8192,4096:32,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(1:4,8192,128,2) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(8192,4096:32,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(1:4,8192,128,2) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(1:4,8192,128,2) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(8192,4096:32,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:10] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:10] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(2048,1024:32,32,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(2048,1024:32,32,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning
Reformat: Float(16384,1:4,512,16) -> Float(2048,1024:32,32,1) *************** [04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1024,32,1) *************** [04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1,2048,64) *************** [04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:37:10] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(262144,1,4096,64) *************** [04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(262144,4096,64,1) *************** [04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,4096,64,1) *************** [04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,1,4096,64) *************** [04/18/2022-02:37:10] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(262144,1,4096,64) *************** [04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(262144,4096,64,1) *************** [04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(65536,1:4,1024,16) 
***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:10] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(196608,6144,192,3,1) ***************
[04/18/2022-02:37:10] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/stack_Unsqueeze__1177:0 copy (Reformat)
[04/18/2022-02:37:10] [V] [TRT] Tactic: 1002 Time: 2.84659
[04/18/2022-02:37:10] [V] [TRT] Tactic: 0 Time: 0.566016
[04/18/2022-02:37:10] [V] [TRT] Fastest Tactic: 0 Time: 0.566016
[04/18/2022-02:37:10] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,32,32) -> Float(196608,6144,192,3,1) ***************
[04/18/2022-02:37:10] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/stack_Unsqueeze__1177:0 copy (Reformat)
[04/18/2022-02:37:10] [V] [TRT] Tactic: 1002 Time: 0.699008
[04/18/2022-02:37:10] [V] [TRT] Tactic: 0 Time: 0.568064
[04/18/2022-02:37:10] [V] [TRT] Fastest Tactic: 0 Time: 0.568064
[04/18/2022-02:37:10] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,8,8) -> Float(196608,6144,192,3,1) ***************
[04/18/2022-02:37:10] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/stack_Unsqueeze__1177:0 copy (Reformat)
[04/18/2022-02:37:10] [V] [TRT] Tactic: 1002 Time: 0.6976
[04/18/2022-02:37:10] [V] [TRT] Tactic: 0 Time: 0.571136
[04/18/2022-02:37:10] [V] [TRT] Fastest Tactic: 0 Time: 0.571136
[04/18/2022-02:37:10] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:10] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,1,1) -> Float(196608,6144,192,3,1) ***************
[04/18/2022-02:37:10] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_20/3_dn_lvl_4/combine/stack_Unsqueeze__1177:0 copy (Reformat)
[04/18/2022-02:37:11] [V] [TRT] Tactic: 1002 Time: 0.764288
[04/18/2022-02:37:11] [V] [TRT] Tactic: 0 Time: 0.568704
[04/18/2022-02:37:11] [V] [TRT] Fastest Tactic: 0 Time: 0.568704
[04/18/2022-02:37:11] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:11] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(196608,6144,192,3,1) ***************
[04/18/2022-02:37:11] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/stack_Unsqueeze__1251:0 copy (Reformat)
[04/18/2022-02:37:11] [V] [TRT] Tactic: 1002 Time: 2.85658
[04/18/2022-02:37:11] [V] [TRT] Tactic: 0 Time: 0.582016
[04/18/2022-02:37:11] [V] [TRT] Fastest Tactic: 0 Time: 0.582016
[04/18/2022-02:37:11] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,32,32) -> Float(196608,6144,192,3,1) ***************
[04/18/2022-02:37:11] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/stack_Unsqueeze__1251:0 copy (Reformat)
[04/18/2022-02:37:11] [V] [TRT] Tactic: 1002 Time: 0.698368
[04/18/2022-02:37:11] [V] [TRT] Tactic: 0 Time: 0.95104
[04/18/2022-02:37:11] [V] [TRT] Fastest Tactic: 1002 Time: 0.698368
[04/18/2022-02:37:11] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,8,8) -> Float(196608,6144,192,3,1) ***************
[04/18/2022-02:37:11] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/stack_Unsqueeze__1251:0 copy (Reformat)
[04/18/2022-02:37:11] [V] [TRT] Tactic: 1002 Time: 0.699648
[04/18/2022-02:37:11] [V] [TRT] Tactic: 0 Time: 0.585216
[04/18/2022-02:37:11] [V] [TRT] Fastest Tactic: 0 Time: 0.585216
[04/18/2022-02:37:11] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,1,1) -> Float(196608,6144,192,3,1) ***************
[04/18/2022-02:37:11] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/stack_Unsqueeze__1251:0 copy (Reformat)
[04/18/2022-02:37:11] [V] [TRT] Tactic: 1002 Time: 0.71424
[04/18/2022-02:37:11] [V] [TRT] Tactic: 0 Time: 0.567808
[04/18/2022-02:37:11] [V] [TRT] Fastest Tactic: 0 Time: 0.567808
[04/18/2022-02:37:11] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:11] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(196608,6144,192,3,1) ***************
[04/18/2022-02:37:11] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/stack_Unsqueeze__1252:0 copy (Reformat)
[04/18/2022-02:37:11] [V] [TRT] Tactic: 1002 Time: 2.84941
[04/18/2022-02:37:11] [V] [TRT] Tactic: 0 Time: 0.568448
[04/18/2022-02:37:11] [V] [TRT] Fastest Tactic: 0 Time: 0.568448
[04/18/2022-02:37:11] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,32,32) -> Float(196608,6144,192,3,1) ***************
[04/18/2022-02:37:11] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/stack_Unsqueeze__1252:0 copy (Reformat)
[04/18/2022-02:37:11] [V] [TRT] Tactic: 1002 Time: 0.6976
[04/18/2022-02:37:11] [V] [TRT] Tactic: 0 Time: 0.650624
[04/18/2022-02:37:11] [V] [TRT] Fastest Tactic: 0 Time: 0.650624
[04/18/2022-02:37:11] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,8,8) -> Float(196608,6144,192,3,1) ***************
[04/18/2022-02:37:11] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/stack_Unsqueeze__1252:0 copy (Reformat)
[04/18/2022-02:37:11] [V] [TRT] Tactic: 1002 Time: 0.699136
[04/18/2022-02:37:11] [V] [TRT] Tactic: 0 Time: 1.70496
[04/18/2022-02:37:11] [V] [TRT] Fastest Tactic: 1002 Time: 0.699136
[04/18/2022-02:37:11] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,1,1) -> Float(196608,6144,192,3,1) ***************
[04/18/2022-02:37:11] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_22/3_up_lvl_4/combine/stack_Unsqueeze__1252:0 copy (Reformat)
[04/18/2022-02:37:11] [V] [TRT] Tactic: 1002 Time: 0.699008
[04/18/2022-02:37:11] [V] [TRT] Tactic: 0 Time: 0.567936
[04/18/2022-02:37:11] [V] [TRT] Fastest Tactic: 0 Time: 0.567936
[04/18/2022-02:37:11] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:11] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:11] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:11] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(65536,2048,1,32,32) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: 
Float(65536,2048,64,1,1) -> Float(16384,512,1:4,8,8) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(2048,64,64:32,1,1) *************** [04/18/2022-02:37:11] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(262144,1,4096,64) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(8192,4096:32,64,1) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(1:4,8192,128,2) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(262144,4096,64,1) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(8192,4096:32,64,1) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(1:4,8192,128,2) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,4096,64,1) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,1,4096,64) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(8192,4096:32,64,1) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(1:4,8192,128,2) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: 
Float(8192,4096:32,64,1) -> Float(262144,4096,64,1) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(262144,1,4096,64) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(1:4,8192,128,2) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(262144,4096,64,1) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(262144,1,4096,64) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(8192,4096:32,64,1) *************** [04/18/2022-02:37:11] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(262144,1,4096,64) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(8192,4096:32,64,1) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(1:4,8192,128,2) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(262144,4096,64,1) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: 
Float(262144,1,4096,64) -> Float(8192,4096:32,64,1) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(1:4,8192,128,2) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,4096,64,1) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,1,4096,64) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(8192,4096:32,64,1) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(1:4,8192,128,2) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(262144,4096,64,1) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(262144,1,4096,64) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(1:4,8192,128,2) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(262144,4096,64,1) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(262144,1,4096,64) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(8192,4096:32,64,1) *************** [04/18/2022-02:37:11] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: 
Float(262144,4096,64,1) -> Float(262144,1,4096,64) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(262144,4096,64,1) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,4096,64,1) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,1,4096,64) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(262144,4096,64,1) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(262144,1,4096,64) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(262144,4096,64,1) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(262144,1,4096,64) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:37:11] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(262144,1,4096,64) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: 
Float(262144,1,4096,64) -> Float(262144,4096,64,1) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,4096,64,1) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,1,4096,64) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(262144,4096,64,1) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(262144,1,4096,64) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(262144,4096,64,1) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(262144,1,4096,64) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:37:11] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(65536,2048,1,32,32) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(16384,512,1:4,8,8) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1,1) -> Float(2048,64,64:32,1,1) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,32,32) -> Float(65536,2048,64,1,1) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: 
Float(65536,2048,1,32,32) -> Float(16384,512,1:4,8,8) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,1,32,32) -> Float(2048,64,64:32,1,1) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,8,8) -> Float(65536,2048,64,1,1) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,8,8) -> Float(65536,2048,1,32,32) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,512,1:4,8,8) -> Float(2048,64,64:32,1,1) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,1,1) -> Float(65536,2048,64,1,1) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,1,1) -> Float(65536,2048,1,32,32) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(2048,64,64:32,1,1) -> Float(16384,512,1:4,8,8) *************** [04/18/2022-02:37:11] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(262144,1,4096,64) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(262144,4096,64,1) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,4096,64,1) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,1,4096,64) *************** [04/18/2022-02:37:11] [V] [TRT] =============== Computing 
reformatting costs [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(262144,1,4096,64) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(262144,4096,64,1) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(65536,1:4,1024,16) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,4096,64,1) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,1,4096,64) *************** [04/18/2022-02:37:11] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(65536,1,2048,32) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(16384,1:4,512,8) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(2048,2048:32,64,1) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(1:4,4096,128,2) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(65536,2048,64,1) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(16384,1:4,512,8) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(2048,2048:32,64,1) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(1:4,4096,128,2) *************** 
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(65536,2048,64,1) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(65536,1,2048,32) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(2048,2048:32,64,1) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(1:4,4096,128,2) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(65536,2048,64,1) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(65536,1,2048,32) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(16384,1:4,512,8) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(1:4,4096,128,2) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(1:4,4096,128,2) -> Float(65536,2048,64,1) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(1:4,4096,128,2) -> Float(65536,1,2048,32) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(1:4,4096,128,2) -> Float(16384,1:4,512,8) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(1:4,4096,128,2) -> Float(2048,2048:32,64,1) *************** [04/18/2022-02:37:11] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(65536,1,2048,32) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(16384,1:4,512,8) *************** [04/18/2022-02:37:11] [V] [TRT] 
*************** Autotuning Reformat: Float(65536,2048,64,1) -> Float(2048,2048:32,64,1) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(65536,2048,64,1) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(16384,1:4,512,8) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,32) -> Float(2048,2048:32,64,1) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(65536,2048,64,1) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(65536,1,2048,32) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,8) -> Float(2048,2048:32,64,1) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(65536,2048,64,1) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(65536,1,2048,32) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(2048,2048:32,64,1) -> Float(16384,1:4,512,8) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(1:4,4096,128,2) -> Float(65536,2048,64,1) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(1:4,4096,128,2) -> Float(65536,1,2048,32) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(1:4,4096,128,2) -> Float(16384,1:4,512,8) *************** [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(1:4,4096,128,2) -> Float(2048,2048:32,64,1) *************** [04/18/2022-02:37:11] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: 
Float(262144,4096,64,1) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(8192,4096:32,64,1) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(1:4,8192,128,2) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(8192,4096:32,64,1) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(1:4,8192,128,2) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(8192,4096:32,64,1) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(1:4,8192,128,2) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(1:4,8192,128,2) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(8192,4096:32,64,1) ***************
[04/18/2022-02:37:11] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(8192,4096:32,64,1) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(1:4,8192,128,2) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(8192,4096:32,64,1) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(1:4,8192,128,2) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(8192,4096:32,64,1) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(1:4,8192,128,2) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(1:4,8192,128,2) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(8192,4096:32,64,1) ***************
[04/18/2022-02:37:11] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:37:11] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:11] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8192,128,2) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:11] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:37:11] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:37:11] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:11] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,4096,64,1) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(262144,1,4096,64) -> Float(65536,1:4,1024,16) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,4096,64,1) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1:4,1024,16) -> Float(262144,1,4096,64) ***************
[04/18/2022-02:37:11] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:37:11] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:37:11] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:37:11] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(512,256:32,16,1) ***************
[04/18/2022-02:37:11] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:11] [V] [TRT] *************** Autotuning Reformat: Float(3317760,4096,64,1) -> Float(3317760,1,51840,810) ***************
[04/18/2022-02:37:11] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:37:11] [V] [TRT] Tactic: 1002 Time: 14.3965
[04/18/2022-02:37:12] [V] [TRT] Tactic: 0 Time: 15.2637
[04/18/2022-02:37:12] [V] [TRT] Fastest Tactic: 1002 Time: 14.3965
[04/18/2022-02:37:12] [V] [TRT] *************** Autotuning Reformat: Float(3317760,4096,64,1) -> Float(831488,1:4,12992,203) ***************
[04/18/2022-02:37:12] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:37:12] [V] [TRT] Tactic: 1002 Time: 14.9402
[04/18/2022-02:37:12] [V] [TRT] Tactic: 0 Time: 15.4227
[04/18/2022-02:37:12] [V] [TRT] Fastest Tactic: 1002 Time: 14.9402
[04/18/2022-02:37:12] [V] [TRT] *************** Autotuning Reformat: Float(3317760,4096,64,1) -> Float(106496,4096:32,64,1) ***************
[04/18/2022-02:37:12] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:37:12] [V] [TRT] Tactic: 1002 Time: 14.8975
[04/18/2022-02:37:13] [V] [TRT] Tactic: 0 Time: 16.8128
[04/18/2022-02:37:13] [V] [TRT] Fastest Tactic: 1002 Time: 14.8975
[04/18/2022-02:37:13] [V] [TRT] *************** Autotuning Reformat: Float(3317760,1,51840,810) -> Float(3317760,4096,64,1) ***************
[04/18/2022-02:37:13] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:37:13] [V] [TRT] Tactic: 1002 Time: 15.4153
[04/18/2022-02:37:13] [V] [TRT] Tactic: 0 Time: 15.5572
[04/18/2022-02:37:13] [V] [TRT] Fastest Tactic: 1002 Time: 15.4153
[04/18/2022-02:37:13] [V] [TRT] *************** Autotuning Reformat: Float(3317760,1,51840,810) -> Float(831488,1:4,12992,203) ***************
[04/18/2022-02:37:13] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:37:13] [V] [TRT] Tactic: 1002 Time: 15.1327
[04/18/2022-02:37:14] [V] [TRT] Tactic: 0 Time: 15.1558
[04/18/2022-02:37:14] [V] [TRT] Fastest Tactic: 1002 Time: 15.1327
[04/18/2022-02:37:14] [V] [TRT] *************** Autotuning Reformat: Float(3317760,1,51840,810) -> Float(106496,4096:32,64,1) ***************
[04/18/2022-02:37:14] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:37:14] [V] [TRT] Tactic: 1002 Time: 15.285
[04/18/2022-02:37:14] [V] [TRT] Tactic: 0 Time: 22.0622
[04/18/2022-02:37:14] [V] [TRT] Fastest Tactic: 1002 Time: 15.285
[04/18/2022-02:37:14] [V] [TRT] *************** Autotuning Reformat: Float(831488,1:4,12992,203) -> Float(3317760,4096,64,1) ***************
[04/18/2022-02:37:14] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:37:15] [V] [TRT] Tactic: 1002 Time: 15.3823
[04/18/2022-02:37:15] [V] [TRT] Tactic: 0 Time: 15.2931
[04/18/2022-02:37:15] [V] [TRT] Fastest Tactic: 0 Time: 15.2931
[04/18/2022-02:37:15] [V] [TRT] *************** Autotuning Reformat: Float(831488,1:4,12992,203) -> Float(3317760,1,51840,810) ***************
[04/18/2022-02:37:15] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:37:15] [V] [TRT] Tactic: 1002 Time: 15.0099
[04/18/2022-02:37:15] [V] [TRT] Tactic: 0 Time: 14.6496
[04/18/2022-02:37:15] [V] [TRT] Fastest Tactic: 0 Time: 14.6496
[04/18/2022-02:37:15] [V] [TRT] *************** Autotuning Reformat: Float(831488,1:4,12992,203) -> Float(106496,4096:32,64,1) ***************
[04/18/2022-02:37:15] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:37:16] [V] [TRT] Tactic: 1002 Time: 15.1322
[04/18/2022-02:37:16] [V] [TRT] Tactic: 0 Time: 21.5788
[04/18/2022-02:37:16] [V] [TRT] Fastest Tactic: 1002 Time: 15.1322
[04/18/2022-02:37:16] [V] [TRT] *************** Autotuning Reformat: Float(106496,4096:32,64,1) -> Float(3317760,4096,64,1) ***************
[04/18/2022-02:37:16] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:37:16] [V] [TRT] Tactic: 1002 Time: 14.6287
[04/18/2022-02:37:17] [V] [TRT] Tactic: 0 Time: 14.7796
[04/18/2022-02:37:17] [V] [TRT] Fastest Tactic: 1002 Time: 14.6287
[04/18/2022-02:37:17] [V] [TRT] *************** Autotuning Reformat: Float(106496,4096:32,64,1) -> Float(3317760,1,51840,810) ***************
[04/18/2022-02:37:17] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:37:17] [V] [TRT] Tactic: 1002 Time: 15.092
[04/18/2022-02:37:17] [V] [TRT] Tactic: 0 Time: 14.8086
[04/18/2022-02:37:17] [V] [TRT] Fastest Tactic: 0 Time: 14.8086
[04/18/2022-02:37:17] [V] [TRT] *************** Autotuning Reformat: Float(106496,4096:32,64,1) -> Float(831488,1:4,12992,203) ***************
[04/18/2022-02:37:17] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:37:17] [V] [TRT] Tactic: 1002 Time: 15.1798
[04/18/2022-02:37:18] [V] [TRT] Tactic: 0 Time: 15.3133
[04/18/2022-02:37:18] [V] [TRT] Fastest Tactic: 1002 Time: 15.1798
[04/18/2022-02:37:18] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(147456,4096,64,1) -> Float(147456,1,2304,36) ***************
[04/18/2022-02:37:18] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:37:18] [V] [TRT] Tactic: 1002 Time: 0.63616
[04/18/2022-02:37:18] [V] [TRT] Tactic: 0 Time: 0.65216
[04/18/2022-02:37:18] [V] [TRT] Fastest Tactic: 1002 Time: 0.63616
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(147456,4096,64,1) -> Float(36864,1:4,576,9) ***************
[04/18/2022-02:37:18] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:37:18] [V] [TRT] Tactic: 1002 Time: 1.42938
[04/18/2022-02:37:18] [V] [TRT] Tactic: 0 Time: 0.64192
[04/18/2022-02:37:18] [V] [TRT] Fastest Tactic: 0 Time: 0.64192
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(147456,4096,64,1) -> Float(8192,4096:32,64,1) ***************
[04/18/2022-02:37:18] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:37:18] [V] [TRT] Tactic: 1002 Time: 0.878464
[04/18/2022-02:37:18] [V] [TRT] Tactic: 0 Time: 1.29472
[04/18/2022-02:37:18] [V] [TRT] Fastest Tactic: 1002 Time: 0.878464
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(147456,1,2304,36) -> Float(147456,4096,64,1) ***************
[04/18/2022-02:37:18] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:37:18] [V] [TRT] Tactic: 1002 Time: 0.633088
[04/18/2022-02:37:18] [V] [TRT] Tactic: 0 Time: 1.28384
[04/18/2022-02:37:18] [V] [TRT] Fastest Tactic: 1002 Time: 0.633088
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(147456,1,2304,36) -> Float(36864,1:4,576,9) ***************
[04/18/2022-02:37:18] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:37:18] [V] [TRT] Tactic: 1002 Time: 0.624384
[04/18/2022-02:37:18] [V] [TRT] Tactic: 0 Time: 0.637056
[04/18/2022-02:37:18] [V] [TRT] Fastest Tactic: 1002 Time: 0.624384
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(147456,1,2304,36) -> Float(8192,4096:32,64,1) ***************
[04/18/2022-02:37:18] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:37:18] [V] [TRT] Tactic: 1002 Time: 0.94464
[04/18/2022-02:37:18] [V] [TRT] Tactic: 0 Time: 1.47878
[04/18/2022-02:37:18] [V] [TRT] Fastest Tactic: 1002 Time: 0.94464
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(36864,1:4,576,9) -> Float(147456,4096,64,1) ***************
[04/18/2022-02:37:18] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:37:18] [V] [TRT] Tactic: 1002 Time: 0.626944
[04/18/2022-02:37:18] [V] [TRT] Tactic: 0 Time: 1.07456
[04/18/2022-02:37:18] [V] [TRT] Fastest Tactic: 1002 Time: 0.626944
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(36864,1:4,576,9) -> Float(147456,1,2304,36) ***************
[04/18/2022-02:37:18] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:37:18] [V] [TRT] Tactic: 1002 Time: 1.40838
[04/18/2022-02:37:18] [V] [TRT] Tactic: 0 Time: 1.52614
[04/18/2022-02:37:18] [V] [TRT] Fastest Tactic: 1002 Time: 1.40838
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(36864,1:4,576,9) -> Float(8192,4096:32,64,1) ***************
[04/18/2022-02:37:18] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:37:18] [V] [TRT] Tactic: 1002 Time: 0.9408
[04/18/2022-02:37:18] [V] [TRT] Tactic: 0 Time: 1.96314
[04/18/2022-02:37:18] [V] [TRT] Fastest Tactic: 1002 Time: 0.9408
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(147456,4096,64,1) ***************
[04/18/2022-02:37:18] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:37:18] [V] [TRT] Tactic: 1002 Time: 0.693376
[04/18/2022-02:37:18] [V] [TRT] Tactic: 0 Time: 0.708992
[04/18/2022-02:37:18] [V] [TRT] Fastest Tactic: 1002 Time: 0.693376
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(147456,1,2304,36) ***************
[04/18/2022-02:37:18] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:37:18] [V] [TRT] Tactic: 1002 Time: 0.673664
[04/18/2022-02:37:18] [V] [TRT] Tactic: 0 Time: 0.677504
[04/18/2022-02:37:18] [V] [TRT] Fastest Tactic: 1002 Time: 0.673664
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(8192,4096:32,64,1) -> Float(36864,1:4,576,9) ***************
[04/18/2022-02:37:18] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd:0 -> ) (Reformat)
[04/18/2022-02:37:18] [V] [TRT] Tactic: 1002 Time: 0.685568
[04/18/2022-02:37:18] [V] [TRT] Tactic: 0 Time: 0.75968
[04/18/2022-02:37:18] [V] [TRT] Fastest Tactic: 1002 Time: 0.685568
[04/18/2022-02:37:18] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:37:18] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:37:18] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(2048,1024:32,32,1) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:37:18] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_1/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:37:18] [V] [TRT] Tactic: 1002 Time: 0.19584
[04/18/2022-02:37:18] [V] [TRT] Tactic: 0 Time: 0.117376
[04/18/2022-02:37:18] [V] [TRT] Fastest Tactic: 0 Time: 0.117376
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(2048,1024:32,32,1) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(2048,1024:32,32,1) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:37:18] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_1/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:37:18] [V] [TRT] Tactic: 1002 Time: 0.445184
[04/18/2022-02:37:18] [V] [TRT] Tactic: 0 Time: 0.096384
[04/18/2022-02:37:18] [V] [TRT] Fastest Tactic: 0 Time: 0.096384
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(2048,1024:32,32,1) ***************
[04/18/2022-02:37:18] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(2048,1024:32,32,1) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(2048,1024:32,32,1) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(2048,1024:32,32,1) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(1:4,2048,64,2) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(2048,1024:32,32,1) ***************
[04/18/2022-02:37:18] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:37:18] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(65536,1,2048,64) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(16384,1:4,512,16) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(65536,1024,32,1) ***************
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) ->
Float(16384,1:4,512,16) *************** [04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1024,32,1) *************** [04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1,2048,64) *************** [04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1024,32,1) *************** [04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1,2048,64) *************** [04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(65536,1024,32,1) *************** [04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(65536,1,2048,64) *************** [04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:37:18] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(16384,1,1024,64) *************** [04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(4096,1:4,256,16) *************** [04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(512,256:32,16,1) *************** [04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(16384,256,16,1) *************** [04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(4096,1:4,256,16) *************** [04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(512,256:32,16,1) *************** 
[04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,256,16,1) *************** [04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,1,1024,64) *************** [04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(512,256:32,16,1) *************** [04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(16384,256,16,1) *************** [04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(16384,1,1024,64) *************** [04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(4096,1:4,256,16) *************** [04/18/2022-02:37:18] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(65536,1,2048,64) *************** [04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(65536,1024,32,1) *************** [04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1024,32,1) *************** [04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1,2048,64) *************** [04/18/2022-02:37:18] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(65536,1,2048,64) *************** [04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> 
Float(16384,1:4,512,16) *************** [04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(65536,1024,32,1) *************** [04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1024,32,1) *************** [04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1,2048,64) *************** [04/18/2022-02:37:18] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(49152,3072,192,3,1) *************** [04/18/2022-02:37:18] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/stack_Unsqueeze__1161:0 copy (Reformat) [04/18/2022-02:37:18] [V] [TRT] Tactic: 1002 Time: 0.744192 [04/18/2022-02:37:18] [V] [TRT] Tactic: 0 Time: 0.033152 [04/18/2022-02:37:18] [V] [TRT] Fastest Tactic: 0 Time: 0.033152 [04/18/2022-02:37:18] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:18] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,16,16) -> Float(49152,3072,192,3,1) *************** [04/18/2022-02:37:18] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/stack_Unsqueeze__1161:0 copy (Reformat) [04/18/2022-02:37:18] [V] [TRT] Tactic: 1002 Time: 0.361088 [04/18/2022-02:37:19] [V] [TRT] Tactic: 0 Time: 0.036608 [04/18/2022-02:37:19] [V] [TRT] Fastest Tactic: 0 Time: 0.036608 [04/18/2022-02:37:19] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,4,4) -> Float(49152,3072,192,3,1) *************** [04/18/2022-02:37:19] [V] [TRT] 
--------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/stack_Unsqueeze__1161:0 copy (Reformat) [04/18/2022-02:37:19] [V] [TRT] Tactic: 1002 Time: 0.360192 [04/18/2022-02:37:19] [V] [TRT] Tactic: 0 Time: 0.037504 [04/18/2022-02:37:19] [V] [TRT] Fastest Tactic: 0 Time: 0.037504 [04/18/2022-02:37:19] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,1,1) -> Float(49152,3072,192,3,1) *************** [04/18/2022-02:37:19] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_19/3_dn_lvl_5/combine/stack_Unsqueeze__1161:0 copy (Reformat) [04/18/2022-02:37:19] [V] [TRT] Tactic: 1002 Time: 0.359552 [04/18/2022-02:37:19] [V] [TRT] Tactic: 0 Time: 0.043392 [04/18/2022-02:37:19] [V] [TRT] Fastest Tactic: 0 Time: 0.043392 [04/18/2022-02:37:19] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:19] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(49152,3072,192,3,1) *************** [04/18/2022-02:37:19] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/stack_Unsqueeze__1309:0 copy (Reformat) [04/18/2022-02:37:19] [V] [TRT] Tactic: 1002 Time: 0.746752 [04/18/2022-02:37:19] [V] [TRT] Tactic: 0 Time: 0.03328 [04/18/2022-02:37:19] [V] [TRT] Fastest Tactic: 0 Time: 0.03328 [04/18/2022-02:37:19] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,16,16) -> Float(49152,3072,192,3,1) *************** [04/18/2022-02:37:19] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/stack_Unsqueeze__1309:0 copy (Reformat) [04/18/2022-02:37:19] [V] 
[TRT] Tactic: 1002 Time: 0.35968 [04/18/2022-02:37:19] [V] [TRT] Tactic: 0 Time: 0.03648 [04/18/2022-02:37:19] [V] [TRT] Fastest Tactic: 0 Time: 0.03648 [04/18/2022-02:37:19] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,4,4) -> Float(49152,3072,192,3,1) *************** [04/18/2022-02:37:19] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/stack_Unsqueeze__1309:0 copy (Reformat) [04/18/2022-02:37:19] [V] [TRT] Tactic: 1002 Time: 0.360448 [04/18/2022-02:37:19] [V] [TRT] Tactic: 0 Time: 0.037376 [04/18/2022-02:37:19] [V] [TRT] Fastest Tactic: 0 Time: 0.037376 [04/18/2022-02:37:19] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,1,1) -> Float(49152,3072,192,3,1) *************** [04/18/2022-02:37:19] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/stack_Unsqueeze__1309:0 copy (Reformat) [04/18/2022-02:37:19] [V] [TRT] Tactic: 1002 Time: 0.360064 [04/18/2022-02:37:19] [V] [TRT] Tactic: 0 Time: 0.043136 [04/18/2022-02:37:19] [V] [TRT] Fastest Tactic: 0 Time: 0.043136 [04/18/2022-02:37:19] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:19] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(49152,3072,192,3,1) *************** [04/18/2022-02:37:19] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/stack_Unsqueeze__1310:0 copy (Reformat) [04/18/2022-02:37:19] [V] [TRT] Tactic: 1002 Time: 1.6672 [04/18/2022-02:37:19] [V] [TRT] Tactic: 0 Time: 0.03328 [04/18/2022-02:37:19] [V] [TRT] Fastest Tactic: 0 Time: 0.03328 [04/18/2022-02:37:19] [V] 
[TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,16,16) -> Float(49152,3072,192,3,1) *************** [04/18/2022-02:37:19] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/stack_Unsqueeze__1310:0 copy (Reformat) [04/18/2022-02:37:19] [V] [TRT] Tactic: 1002 Time: 0.361344 [04/18/2022-02:37:19] [V] [TRT] Tactic: 0 Time: 0.03648 [04/18/2022-02:37:19] [V] [TRT] Fastest Tactic: 0 Time: 0.03648 [04/18/2022-02:37:19] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,4,4) -> Float(49152,3072,192,3,1) *************** [04/18/2022-02:37:19] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/stack_Unsqueeze__1310:0 copy (Reformat) [04/18/2022-02:37:19] [V] [TRT] Tactic: 1002 Time: 0.360704 [04/18/2022-02:37:19] [V] [TRT] Tactic: 0 Time: 0.037376 [04/18/2022-02:37:19] [V] [TRT] Fastest Tactic: 0 Time: 0.037376 [04/18/2022-02:37:19] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,1,1) -> Float(49152,3072,192,3,1) *************** [04/18/2022-02:37:19] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_23/3_up_lvl_5/combine/stack_Unsqueeze__1310:0 copy (Reformat) [04/18/2022-02:37:19] [V] [TRT] Tactic: 1002 Time: 0.361088 [04/18/2022-02:37:19] [V] [TRT] Tactic: 0 Time: 0.043648 [04/18/2022-02:37:19] [V] [TRT] Fastest Tactic: 0 Time: 0.043648 [04/18/2022-02:37:19] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:19] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:19] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:19] [V] [TRT] 
=============== Computing reformatting costs [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(16384,1024,1,16,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(4096,256,1:4,4,4) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(1024,64,64:32,1,1) *************** [04/18/2022-02:37:19] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(65536,1,2048,64) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(2048,1024:32,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(65536,1024,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(2048,1024:32,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1024,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1,2048,64) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> 
Float(2048,1024:32,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1024,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1,2048,64) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(65536,1024,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(65536,1,2048,64) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(2048,1024:32,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(65536,1,2048,64) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(2048,1024:32,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(65536,1024,32,1) *************** 
[04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(2048,1024:32,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1024,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1,2048,64) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(2048,1024:32,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1024,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1,2048,64) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(65536,1024,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(65536,1,2048,64) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> 
Float(2048,1024:32,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(65536,1,2048,64) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(65536,1024,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1024,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1,2048,64) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1024,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1,2048,64) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(65536,1024,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(65536,1,2048,64) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:37:19] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(65536,1,2048,64) *************** [04/18/2022-02:37:19] [V] [TRT] *************** 
Autotuning Reformat: Float(65536,1024,32,1) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(65536,1024,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1024,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1,2048,64) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1024,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1,2048,64) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(65536,1024,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(65536,1,2048,64) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:37:19] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(16384,1024,1,16,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(4096,256,1:4,4,4) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1,1) -> Float(1024,64,64:32,1,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: 
Float(16384,1024,1,16,16) -> Float(16384,1024,64,1,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,16,16) -> Float(4096,256,1:4,4,4) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,1,16,16) -> Float(1024,64,64:32,1,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,4,4) -> Float(16384,1024,64,1,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,4,4) -> Float(16384,1024,1,16,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(4096,256,1:4,4,4) -> Float(1024,64,64:32,1,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,1,1) -> Float(16384,1024,64,1,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,1,1) -> Float(16384,1024,1,16,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(1024,64,64:32,1,1) -> Float(4096,256,1:4,4,4) *************** [04/18/2022-02:37:19] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(65536,1,2048,64) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(65536,1024,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1024,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: 
Float(16384,1:4,512,16) -> Float(65536,1,2048,64) *************** [04/18/2022-02:37:19] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(65536,1,2048,64) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(65536,1024,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1024,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1,2048,64) *************** [04/18/2022-02:37:19] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(16384,1,1024,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(4096,1:4,256,4) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(1024,1024:32,64,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(1:4,2048,128,2) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(16384,1024,64,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(4096,1:4,256,4) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(1024,1024:32,64,1) *************** [04/18/2022-02:37:19] [V] [TRT] 
*************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(1:4,2048,128,2) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(16384,1024,64,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(16384,1,1024,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(1024,1024:32,64,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(1:4,2048,128,2) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(16384,1024,64,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(16384,1,1024,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(4096,1:4,256,4) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(1:4,2048,128,2) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,128,2) -> Float(16384,1024,64,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,128,2) -> Float(16384,1,1024,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,128,2) -> Float(4096,1:4,256,4) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,128,2) -> Float(1024,1024:32,64,1) *************** [04/18/2022-02:37:19] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(16384,1,1024,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: 
Float(16384,1024,64,1) -> Float(4096,1:4,256,4) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1024,64,1) -> Float(1024,1024:32,64,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(16384,1024,64,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(4096,1:4,256,4) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,16) -> Float(1024,1024:32,64,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(16384,1024,64,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(16384,1,1024,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,4) -> Float(1024,1024:32,64,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(16384,1024,64,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(16384,1,1024,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(1024,1024:32,64,1) -> Float(4096,1:4,256,4) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,128,2) -> Float(16384,1024,64,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,128,2) -> Float(16384,1,1024,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,128,2) -> Float(4096,1:4,256,4) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,128,2) -> Float(1024,1024:32,64,1) *************** [04/18/2022-02:37:19] [V] [TRT] =============== 
Computing reformatting costs [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(65536,1,2048,64) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(2048,1024:32,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(65536,1024,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(2048,1024:32,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1024,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1,2048,64) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(2048,1024:32,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1024,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1,2048,64) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: 
Float(2048,1024:32,32,1) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(65536,1024,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(65536,1,2048,64) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(2048,1024:32,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(65536,1,2048,64) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(2048,1024:32,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(65536,1024,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(2048,1024:32,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> 
Float(65536,1024,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1,2048,64) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(2048,1024:32,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1024,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1,2048,64) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(1:4,2048,64,2) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(65536,1024,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(65536,1,2048,64) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(2048,1024:32,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(16384,1,1024,64) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(4096,1:4,256,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(16384,256,16,1) *************** 
[04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(4096,1:4,256,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,256,16,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,1,1024,64) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(16384,256,16,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(16384,1,1024,64) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(4096,1:4,256,16) *************** [04/18/2022-02:37:19] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(65536,1,2048,64) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(65536,1024,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1024,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1,2048,64) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1024,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1,2048,64) *************** [04/18/2022-02:37:19] [V] [TRT] 
*************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(65536,1024,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(65536,1,2048,64) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:37:19] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(65536,1,2048,64) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(65536,1024,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1024,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1,2048,64) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1024,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(65536,1,2048,64) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(65536,1024,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: 
Float(1:4,2048,64,2) -> Float(65536,1,2048,64) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(1:4,2048,64,2) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:37:19] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(16384,1,1024,64) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(4096,1:4,256,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(16384,256,16,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(4096,1:4,256,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,256,16,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,1,1024,64) *************** [04/18/2022-02:37:19] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(16384,1,1024,64) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(4096,1:4,256,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(16384,256,16,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(4096,1:4,256,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,256,16,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,1,1024,64) *************** [04/18/2022-02:37:19] [V] [TRT] =============== 
Computing reformatting costs [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(65536,1,2048,64) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(65536,1024,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1024,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1,2048,64) *************** [04/18/2022-02:37:19] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(65536,1,2048,64) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1024,32,1) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(65536,1024,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(65536,1,2048,64) -> Float(16384,1:4,512,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1024,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1:4,512,16) -> Float(65536,1,2048,64) *************** [04/18/2022-02:37:19] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(16384,1,1024,64) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning 
Reformat: Float(16384,256,16,1) -> Float(4096,1:4,256,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(16384,256,16,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(4096,1:4,256,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,256,16,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,1,1024,64) *************** [04/18/2022-02:37:19] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(16384,1,1024,64) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(4096,1:4,256,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(16384,256,16,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(4096,1:4,256,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,256,16,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,1,1024,64) *************** [04/18/2022-02:37:19] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(16384,256,16,1) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,256,16,1) *************** [04/18/2022-02:37:19] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> 
Float(4096,1,512,64) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1024,1:4,128,16) *************** [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(128,64:32,8,1) *************** [04/18/2022-02:37:19] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(829440,1024,32,1) -> Float(829440,1,25920,810) *************** [04/18/2022-02:37:19] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_1:0 -> ) (Reformat) [04/18/2022-02:37:19] [V] [TRT] Tactic: 1002 Time: 3.6192 [04/18/2022-02:37:19] [V] [TRT] Tactic: 0 Time: 4.0224 [04/18/2022-02:37:19] [V] [TRT] Fastest Tactic: 1002 Time: 3.6192 [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(829440,1024,32,1) -> Float(207872,1:4,6496,203) *************** [04/18/2022-02:37:19] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_1:0 -> ) (Reformat) [04/18/2022-02:37:19] [V] [TRT] Tactic: 1002 Time: 3.59219 [04/18/2022-02:37:19] [V] [TRT] Tactic: 0 Time: 3.58938 [04/18/2022-02:37:19] [V] [TRT] Fastest Tactic: 0 Time: 3.58938 [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(829440,1024,32,1) -> Float(26624,1024:32,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_1:0 -> ) (Reformat) [04/18/2022-02:37:19] [V] [TRT] Tactic: 1002 Time: 3.63162 [04/18/2022-02:37:19] [V] [TRT] Tactic: 0 Time: 3.65466 [04/18/2022-02:37:19] [V] [TRT] Fastest Tactic: 1002 Time: 3.63162 
[04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(829440,1,25920,810) -> Float(829440,1024,32,1) *************** [04/18/2022-02:37:19] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_1:0 -> ) (Reformat) [04/18/2022-02:37:19] [V] [TRT] Tactic: 1002 Time: 3.61766 [04/18/2022-02:37:19] [V] [TRT] Tactic: 0 Time: 3.71571 [04/18/2022-02:37:19] [V] [TRT] Fastest Tactic: 1002 Time: 3.61766 [04/18/2022-02:37:19] [V] [TRT] *************** Autotuning Reformat: Float(829440,1,25920,810) -> Float(207872,1:4,6496,203) *************** [04/18/2022-02:37:19] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_1:0 -> ) (Reformat) [04/18/2022-02:37:20] [V] [TRT] Tactic: 1002 Time: 3.54342 [04/18/2022-02:37:20] [V] [TRT] Tactic: 0 Time: 3.6704 [04/18/2022-02:37:20] [V] [TRT] Fastest Tactic: 1002 Time: 3.54342 [04/18/2022-02:37:20] [V] [TRT] *************** Autotuning Reformat: Float(829440,1,25920,810) -> Float(26624,1024:32,32,1) *************** [04/18/2022-02:37:20] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_1:0 -> ) (Reformat) [04/18/2022-02:37:20] [V] [TRT] Tactic: 1002 Time: 3.72147 [04/18/2022-02:37:20] [V] [TRT] Tactic: 0 Time: 4.89587 [04/18/2022-02:37:20] [V] [TRT] Fastest Tactic: 1002 Time: 3.72147 [04/18/2022-02:37:20] [V] [TRT] *************** Autotuning Reformat: Float(207872,1:4,6496,203) -> Float(829440,1024,32,1) *************** [04/18/2022-02:37:20] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_1:0 -> ) (Reformat) 
[04/18/2022-02:37:20] [V] [TRT] Tactic: 1002 Time: 4.0905 [04/18/2022-02:37:20] [V] [TRT] Tactic: 0 Time: 3.76512 [04/18/2022-02:37:20] [V] [TRT] Fastest Tactic: 0 Time: 3.76512 [04/18/2022-02:37:20] [V] [TRT] *************** Autotuning Reformat: Float(207872,1:4,6496,203) -> Float(829440,1,25920,810) *************** [04/18/2022-02:37:20] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_1:0 -> ) (Reformat) [04/18/2022-02:37:20] [V] [TRT] Tactic: 1002 Time: 4.19187 [04/18/2022-02:37:20] [V] [TRT] Tactic: 0 Time: 3.7408 [04/18/2022-02:37:20] [V] [TRT] Fastest Tactic: 0 Time: 3.7408 [04/18/2022-02:37:20] [V] [TRT] *************** Autotuning Reformat: Float(207872,1:4,6496,203) -> Float(26624,1024:32,32,1) *************** [04/18/2022-02:37:20] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_1:0 -> ) (Reformat) [04/18/2022-02:37:20] [V] [TRT] Tactic: 1002 Time: 4.22438 [04/18/2022-02:37:20] [V] [TRT] Tactic: 0 Time: 4.97818 [04/18/2022-02:37:20] [V] [TRT] Fastest Tactic: 1002 Time: 4.22438 [04/18/2022-02:37:20] [V] [TRT] *************** Autotuning Reformat: Float(26624,1024:32,32,1) -> Float(829440,1024,32,1) *************** [04/18/2022-02:37:20] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_1:0 -> ) (Reformat) [04/18/2022-02:37:20] [V] [TRT] Tactic: 1002 Time: 4.3561 [04/18/2022-02:37:20] [V] [TRT] Tactic: 0 Time: 3.78688 [04/18/2022-02:37:20] [V] [TRT] Fastest Tactic: 0 Time: 3.78688 [04/18/2022-02:37:20] [V] [TRT] *************** Autotuning Reformat: Float(26624,1024:32,32,1) -> Float(829440,1,25920,810) *************** [04/18/2022-02:37:20] [V] [TRT] --------------- Timing 
Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_1:0 -> ) (Reformat) [04/18/2022-02:37:21] [V] [TRT] Tactic: 1002 Time: 4.19699 [04/18/2022-02:37:21] [V] [TRT] Tactic: 0 Time: 4.14874 [04/18/2022-02:37:21] [V] [TRT] Fastest Tactic: 0 Time: 4.14874 [04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(26624,1024:32,32,1) -> Float(207872,1:4,6496,203) *************** [04/18/2022-02:37:21] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_1:0 -> ) (Reformat) [04/18/2022-02:37:21] [V] [TRT] Tactic: 1002 Time: 3.85408 [04/18/2022-02:37:21] [V] [TRT] Tactic: 0 Time: 3.77958 [04/18/2022-02:37:21] [V] [TRT] Fastest Tactic: 0 Time: 3.77958 [04/18/2022-02:37:21] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(36864,1024,32,1) -> Float(36864,1,1152,36) *************** [04/18/2022-02:37:21] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_1:0 -> ) (Reformat) [04/18/2022-02:37:21] [V] [TRT] Tactic: 1002 Time: 0.095872 [04/18/2022-02:37:21] [V] [TRT] Tactic: 0 Time: 0.058752 [04/18/2022-02:37:21] [V] [TRT] Fastest Tactic: 0 Time: 0.058752 [04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(36864,1024,32,1) -> Float(9216,1:4,288,9) *************** [04/18/2022-02:37:21] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_1:0 -> ) (Reformat) [04/18/2022-02:37:21] [V] [TRT] Tactic: 1002 Time: 0.094592 [04/18/2022-02:37:21] [V] [TRT] Tactic: 0 Time: 0.061952 [04/18/2022-02:37:21] [V] [TRT] 
Fastest Tactic: 0 Time: 0.061952 [04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(36864,1024,32,1) -> Float(2048,1024:32,32,1) *************** [04/18/2022-02:37:21] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_1:0 -> ) (Reformat) [04/18/2022-02:37:21] [V] [TRT] Tactic: 1002 Time: 0.085632 [04/18/2022-02:37:21] [V] [TRT] Tactic: 0 Time: 0.282624 [04/18/2022-02:37:21] [V] [TRT] Fastest Tactic: 1002 Time: 0.085632 [04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(36864,1,1152,36) -> Float(36864,1024,32,1) *************** [04/18/2022-02:37:21] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_1:0 -> ) (Reformat) [04/18/2022-02:37:21] [V] [TRT] Tactic: 1002 Time: 0.096512 [04/18/2022-02:37:21] [V] [TRT] Tactic: 0 Time: 0.0704 [04/18/2022-02:37:21] [V] [TRT] Fastest Tactic: 0 Time: 0.0704 [04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(36864,1,1152,36) -> Float(9216,1:4,288,9) *************** [04/18/2022-02:37:21] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_1:0 -> ) (Reformat) [04/18/2022-02:37:21] [V] [TRT] Tactic: 1002 Time: 0.093568 [04/18/2022-02:37:21] [V] [TRT] Tactic: 0 Time: 0.05696 [04/18/2022-02:37:21] [V] [TRT] Fastest Tactic: 0 Time: 0.05696 [04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(36864,1,1152,36) -> Float(2048,1024:32,32,1) *************** [04/18/2022-02:37:21] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_1:0 -> ) (Reformat) 
[04/18/2022-02:37:21] [V] [TRT] Tactic: 1002 Time: 0.082432 [04/18/2022-02:37:21] [V] [TRT] Tactic: 0 Time: 0.322816 [04/18/2022-02:37:21] [V] [TRT] Fastest Tactic: 1002 Time: 0.082432 [04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(9216,1:4,288,9) -> Float(36864,1024,32,1) *************** [04/18/2022-02:37:21] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_1:0 -> ) (Reformat) [04/18/2022-02:37:21] [V] [TRT] Tactic: 1002 Time: 0.104448 [04/18/2022-02:37:21] [V] [TRT] Tactic: 0 Time: 0.071552 [04/18/2022-02:37:21] [V] [TRT] Fastest Tactic: 0 Time: 0.071552 [04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(9216,1:4,288,9) -> Float(36864,1,1152,36) *************** [04/18/2022-02:37:21] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_1:0 -> ) (Reformat) [04/18/2022-02:37:21] [V] [TRT] Tactic: 1002 Time: 0.093184 [04/18/2022-02:37:21] [V] [TRT] Tactic: 0 Time: 0.058112 [04/18/2022-02:37:21] [V] [TRT] Fastest Tactic: 0 Time: 0.058112 [04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(9216,1:4,288,9) -> Float(2048,1024:32,32,1) *************** [04/18/2022-02:37:21] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_1:0 -> ) (Reformat) [04/18/2022-02:37:21] [V] [TRT] Tactic: 1002 Time: 0.084224 [04/18/2022-02:37:21] [V] [TRT] Tactic: 0 Time: 0.326912 [04/18/2022-02:37:21] [V] [TRT] Fastest Tactic: 1002 Time: 0.084224 [04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(36864,1024,32,1) *************** [04/18/2022-02:37:21] [V] [TRT] --------------- Timing Runner: Optimizer 
Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_1:0 -> ) (Reformat) [04/18/2022-02:37:21] [V] [TRT] Tactic: 1002 Time: 0.095616 [04/18/2022-02:37:21] [V] [TRT] Tactic: 0 Time: 0.071168 [04/18/2022-02:37:21] [V] [TRT] Fastest Tactic: 0 Time: 0.071168 [04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(36864,1,1152,36) *************** [04/18/2022-02:37:21] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_1:0 -> ) (Reformat) [04/18/2022-02:37:21] [V] [TRT] Tactic: 1002 Time: 0.092544 [04/18/2022-02:37:21] [V] [TRT] Tactic: 0 Time: 0.05696 [04/18/2022-02:37:21] [V] [TRT] Fastest Tactic: 0 Time: 0.05696 [04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(2048,1024:32,32,1) -> Float(9216,1:4,288,9) *************** [04/18/2022-02:37:21] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_1:0 -> ) (Reformat) [04/18/2022-02:37:21] [V] [TRT] Tactic: 1002 Time: 0.093056 [04/18/2022-02:37:21] [V] [TRT] Tactic: 0 Time: 0.067968 [04/18/2022-02:37:21] [V] [TRT] Fastest Tactic: 0 Time: 0.067968 [04/18/2022-02:37:21] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(16384,1,1024,64) *************** [04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(4096,1:4,256,16) *************** [04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(16384,256,16,1) *************** [04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(4096,1:4,256,16) 
*************** [04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,256,16,1) *************** [04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,1,1024,64) *************** [04/18/2022-02:37:21] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(16384,1,1024,64) *************** [04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(4096,1:4,256,16) *************** [04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(16384,256,16,1) *************** [04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(4096,1:4,256,16) *************** [04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,256,16,1) *************** [04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,1,1024,64) *************** [04/18/2022-02:37:21] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(16384,1,1024,64) *************** [04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(4096,1:4,256,16) *************** [04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(512,256:32,16,1) *************** [04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(1:4,512,32,2) *************** [04/18/2022-02:37:21] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_2/FusedBatchNormV3:0 -> ) (Reformat) 
[04/18/2022-02:37:21] [V] [TRT] Tactic: 1002 Time: 0.06336
[04/18/2022-02:37:21] [V] [TRT] Tactic: 0 Time: 0.034048
[04/18/2022-02:37:21] [V] [TRT] Fastest Tactic: 0 Time: 0.034048
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(512,256:32,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(512,256:32,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_2/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:37:21] [V] [TRT] Tactic: 1002 Time: 0.142464
[04/18/2022-02:37:21] [V] [TRT] Tactic: 0 Time: 0.03648
[04/18/2022-02:37:21] [V] [TRT] Fastest Tactic: 0 Time: 0.03648
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(512,256:32,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(512,256:32,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(512,256:32,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(512,256:32,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(512,256:32,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(128,64:32,8,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(128,64:32,8,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(128,64:32,8,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:21] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(12288,1536,192,3,1) ***************
[04/18/2022-02:37:21] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/stack_Unsqueeze__1145:0 copy (Reformat)
[04/18/2022-02:37:21] [V] [TRT] Tactic: 1002 Time: 0.206976
[04/18/2022-02:37:21] [V] [TRT] Tactic: 0 Time: 0.022144
[04/18/2022-02:37:21] [V] [TRT] Fastest Tactic: 0 Time: 0.022144
[04/18/2022-02:37:21] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,8,8) -> Float(12288,1536,192,3,1) ***************
[04/18/2022-02:37:21] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/stack_Unsqueeze__1145:0 copy (Reformat)
[04/18/2022-02:37:21] [V] [TRT] Tactic: 1002 Time: 0.193024
[04/18/2022-02:37:21] [V] [TRT] Tactic: 0 Time: 0.02176
[04/18/2022-02:37:21] [V] [TRT] Fastest Tactic: 0 Time: 0.02176
[04/18/2022-02:37:21] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,2,2) -> Float(12288,1536,192,3,1) ***************
[04/18/2022-02:37:21] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/stack_Unsqueeze__1145:0 copy (Reformat)
[04/18/2022-02:37:21] [V] [TRT] Tactic: 1002 Time: 0.19392
[04/18/2022-02:37:21] [V] [TRT] Tactic: 0 Time: 0.023168
[04/18/2022-02:37:21] [V] [TRT] Fastest Tactic: 0 Time: 0.023168
[04/18/2022-02:37:21] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,1,1) -> Float(12288,1536,192,3,1) ***************
[04/18/2022-02:37:21] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_18/3_dn_lvl_6/combine/stack_Unsqueeze__1145:0 copy (Reformat)
[04/18/2022-02:37:21] [V] [TRT] Tactic: 1002 Time: 0.19328
[04/18/2022-02:37:21] [V] [TRT] Tactic: 0 Time: 0.024192
[04/18/2022-02:37:21] [V] [TRT] Fastest Tactic: 0 Time: 0.024192
[04/18/2022-02:37:21] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:21] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(12288,1536,192,3,1) ***************
[04/18/2022-02:37:21] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/stack_Unsqueeze__1367:0 copy (Reformat)
[04/18/2022-02:37:21] [V] [TRT] Tactic: 1002 Time: 0.208
[04/18/2022-02:37:21] [V] [TRT] Tactic: 0 Time: 0.022272
[04/18/2022-02:37:21] [V] [TRT] Fastest Tactic: 0 Time: 0.022272
[04/18/2022-02:37:21] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,8,8) -> Float(12288,1536,192,3,1) ***************
[04/18/2022-02:37:21] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/stack_Unsqueeze__1367:0 copy (Reformat)
[04/18/2022-02:37:21] [V] [TRT] Tactic: 1002 Time: 0.191872
[04/18/2022-02:37:21] [V] [TRT] Tactic: 0 Time: 0.023424
[04/18/2022-02:37:21] [V] [TRT] Fastest Tactic: 0 Time: 0.023424
[04/18/2022-02:37:21] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,2,2) -> Float(12288,1536,192,3,1) ***************
[04/18/2022-02:37:21] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/stack_Unsqueeze__1367:0 copy (Reformat)
[04/18/2022-02:37:21] [V] [TRT] Tactic: 1002 Time: 0.193664
[04/18/2022-02:37:21] [V] [TRT] Tactic: 0 Time: 0.023168
[04/18/2022-02:37:21] [V] [TRT] Fastest Tactic: 0 Time: 0.023168
[04/18/2022-02:37:21] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,1,1) -> Float(12288,1536,192,3,1) ***************
[04/18/2022-02:37:21] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/stack_Unsqueeze__1367:0 copy (Reformat)
[04/18/2022-02:37:21] [V] [TRT] Tactic: 1002 Time: 0.19328
[04/18/2022-02:37:21] [V] [TRT] Tactic: 0 Time: 0.023808
[04/18/2022-02:37:21] [V] [TRT] Fastest Tactic: 0 Time: 0.023808
[04/18/2022-02:37:21] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:21] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(12288,1536,192,3,1) ***************
[04/18/2022-02:37:21] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/stack_Unsqueeze__1368:0 copy (Reformat)
[04/18/2022-02:37:21] [V] [TRT] Tactic: 1002 Time: 0.207488
[04/18/2022-02:37:21] [V] [TRT] Tactic: 0 Time: 0.02176
[04/18/2022-02:37:21] [V] [TRT] Fastest Tactic: 0 Time: 0.02176
[04/18/2022-02:37:21] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,8,8) -> Float(12288,1536,192,3,1) ***************
[04/18/2022-02:37:21] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/stack_Unsqueeze__1368:0 copy (Reformat)
[04/18/2022-02:37:21] [V] [TRT] Tactic: 1002 Time: 0.193024
[04/18/2022-02:37:21] [V] [TRT] Tactic: 0 Time: 0.022912
[04/18/2022-02:37:21] [V] [TRT] Fastest Tactic: 0 Time: 0.022912
[04/18/2022-02:37:21] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,2,2) -> Float(12288,1536,192,3,1) ***************
[04/18/2022-02:37:21] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/stack_Unsqueeze__1368:0 copy (Reformat)
[04/18/2022-02:37:21] [V] [TRT] Tactic: 1002 Time: 0.193024
[04/18/2022-02:37:21] [V] [TRT] Tactic: 0 Time: 0.023424
[04/18/2022-02:37:21] [V] [TRT] Fastest Tactic: 0 Time: 0.023424
[04/18/2022-02:37:21] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,1,1) -> Float(12288,1536,192,3,1) ***************
[04/18/2022-02:37:21] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_24/3_up_lvl_6/combine/stack_Unsqueeze__1368:0 copy (Reformat)
[04/18/2022-02:37:21] [V] [TRT] Tactic: 1002 Time: 0.192768
[04/18/2022-02:37:21] [V] [TRT] Tactic: 0 Time: 0.023936
[04/18/2022-02:37:21] [V] [TRT] Fastest Tactic: 0 Time: 0.023936
[04/18/2022-02:37:21] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:21] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:21] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:21] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(4096,512,1,8,8) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(1024,128,1:4,2,2) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(512,64,64:32,1,1) ***************
[04/18/2022-02:37:21] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(512,256:32,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(512,256:32,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(512,256:32,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(512,256:32,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(512,256:32,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(512,256:32,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(512,256:32,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(512,256:32,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(4096,512,1,8,8) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(1024,128,1:4,2,2) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1,1) -> Float(512,64,64:32,1,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,8,8) -> Float(4096,512,64,1,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,8,8) -> Float(1024,128,1:4,2,2) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,1,8,8) -> Float(512,64,64:32,1,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,2,2) -> Float(4096,512,64,1,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,2,2) -> Float(4096,512,1,8,8) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,128,1:4,2,2) -> Float(512,64,64:32,1,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,1,1) -> Float(4096,512,64,1,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,1,1) -> Float(4096,512,1,8,8) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,64,64:32,1,1) -> Float(1024,128,1:4,2,2) ***************
[04/18/2022-02:37:21] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(4096,1,512,8) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(1:4,1024,128,2) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(4096,512,64,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(1:4,1024,128,2) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(4096,512,64,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(4096,1,512,8) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(1:4,1024,128,2) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(4096,512,64,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(4096,1,512,8) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(1:4,1024,128,2) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,1024,128,2) -> Float(4096,512,64,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,1024,128,2) -> Float(4096,1,512,8) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,1024,128,2) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,1024,128,2) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:37:21] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(4096,1,512,8) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,512,64,1) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(4096,512,64,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,8) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(4096,512,64,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(4096,1,512,8) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,2) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(4096,512,64,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(4096,1,512,8) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,512:32,64,1) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,1024,128,2) -> Float(4096,512,64,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,1024,128,2) -> Float(4096,1,512,8) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,1024,128,2) -> Float(1024,1:4,128,2) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,1024,128,2) -> Float(512,512:32,64,1) ***************
[04/18/2022-02:37:21] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(512,256:32,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(4096,1:4,256,16) *************** [04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(512,256:32,16,1) *************** [04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(1:4,512,32,2) *************** [04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,256,16,1) *************** [04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,1,1024,64) *************** [04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(512,256:32,16,1) *************** [04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(1:4,512,32,2) *************** [04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(16384,256,16,1) *************** [04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(16384,1,1024,64) *************** [04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(4096,1:4,256,16) *************** [04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(1:4,512,32,2) *************** [04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(16384,256,16,1) *************** [04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(16384,1,1024,64) *************** [04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(4096,1:4,256,16) *************** [04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(512,256:32,16,1) *************** 
[04/18/2022-02:37:21] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(512,256:32,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(512,256:32,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(512,256:32,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(1:4,512,32,2) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(512,256:32,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:21] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,32,2) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:21] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:21] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,256,16,1) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(16384,1,1024,64) -> Float(4096,1:4,256,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,256,16,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1:4,256,16) -> Float(16384,1,1024,64) ***************
[04/18/2022-02:37:21] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:21] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:21] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:21] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(1024,1,256,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(256,1:4,64,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(32,16:32,4,1) ***************
[04/18/2022-02:37:21] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:21] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:21] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(128,64:32,8,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1:4,128,16,2) ***************
[04/18/2022-02:37:21] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_3/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:37:21] [V] [TRT] Tactic: 1002 Time: 0.032384
[04/18/2022-02:37:21] [V] [TRT] Tactic: 0 Time: 0.022144
[04/18/2022-02:37:21] [V] [TRT] Fastest Tactic: 0 Time: 0.022144
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(128,64:32,8,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(1:4,128,16,2) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(128,64:32,8,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(1:4,128,16,2) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(1:4,128,16,2) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,128,16,2) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:21] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_3/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:37:21] [V] [TRT] Tactic: 1002 Time: 0.058112
[04/18/2022-02:37:21] [V] [TRT] Tactic: 0 Time: 0.02432
[04/18/2022-02:37:21] [V] [TRT] Fastest Tactic: 0 Time: 0.02432
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,128,16,2) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,128,16,2) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,128,16,2) -> Float(128,64:32,8,1) ***************
[04/18/2022-02:37:21] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(128,64:32,8,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1:4,128,16,2) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(128,64:32,8,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(1:4,128,16,2) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(128,64:32,8,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(1:4,128,16,2) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(1:4,128,16,2) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,128,16,2) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,128,16,2) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,128,16,2) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,128,16,2) -> Float(128,64:32,8,1) ***************
[04/18/2022-02:37:21] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,128,16,2) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,128,16,2) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,128,16,2) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:21] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,128,16,2) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,128,16,2) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1:4,128,16,2) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:21] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(1024,1,256,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(256,1:4,64,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(32,16:32,4,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(1024,16,4,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(256,1:4,64,16) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(32,16:32,4,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,16,4,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,1,256,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(32,16:32,4,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(1024,16,4,1) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(1024,1,256,64) ***************
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(256,1:4,64,16) ***************
[04/18/2022-02:37:21] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1,1) -> Float(2048,512,128,2,1) ***************
[04/18/2022-02:37:21] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/stack_Unsqueeze__1424:0 copy (Reformat)
[04/18/2022-02:37:21] [V] [TRT] Tactic: 1002 Time: 0.08192
[04/18/2022-02:37:21] [V] [TRT] Tactic: 0 Time: 0.01792
[04/18/2022-02:37:21] [V] [TRT] Fastest Tactic: 0 Time: 0.01792
[04/18/2022-02:37:21] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,1,4,4) -> Float(2048,512,128,2,1) ***************
[04/18/2022-02:37:21] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/stack_Unsqueeze__1424:0 copy (Reformat)
[04/18/2022-02:37:21] [V] [TRT] Tactic: 1002 Time: 0.109184
[04/18/2022-02:37:21] [V] [TRT] Tactic: 0 Time: 0.018048
[04/18/2022-02:37:21] [V] [TRT] Fastest Tactic: 0 Time: 0.018048
[04/18/2022-02:37:21] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(256,64,1:4,1,1) -> Float(2048,512,128,2,1) ***************
[04/18/2022-02:37:21] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/stack_Unsqueeze__1424:0 copy (Reformat)
[04/18/2022-02:37:21] [V] [TRT] Tactic: 1002 Time: 0.109184
[04/18/2022-02:37:21] [V] [TRT] Tactic: 0 Time: 0.018304
[04/18/2022-02:37:21] [V] [TRT] Fastest Tactic: 0 Time: 0.018304
[04/18/2022-02:37:21] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:21] [V] [TRT] *************** Autotuning Reformat: Float(256,64,64:32,1,1) -> Float(2048,512,128,2,1) ***************
[04/18/2022-02:37:21] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/stack_Unsqueeze__1424:0 copy (Reformat)
[04/18/2022-02:37:21] [V] [TRT] Tactic: 1002 Time: 0.109184
[04/18/2022-02:37:22] [V] [TRT] Tactic: 0 Time: 0.020096
[04/18/2022-02:37:22] [V] [TRT] Fastest Tactic: 0 Time: 0.020096
[04/18/2022-02:37:22] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:22] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:22] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1,1) -> Float(2048,512,128,2,1) ***************
[04/18/2022-02:37:22] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/stack_Unsqueeze__1425:0 copy (Reformat)
[04/18/2022-02:37:22] [V] [TRT] Tactic: 1002 Time: 0.083584
[04/18/2022-02:37:22] [V] [TRT] Tactic: 0 Time: 0.018304
[04/18/2022-02:37:22] [V] [TRT] Fastest Tactic: 0 Time: 0.018304
[04/18/2022-02:37:22] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:22] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,1,4,4) -> Float(2048,512,128,2,1) ***************
[04/18/2022-02:37:22] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/stack_Unsqueeze__1425:0 copy (Reformat)
[04/18/2022-02:37:22] [V] [TRT] Tactic: 1002 Time: 0.110592
[04/18/2022-02:37:22] [V] [TRT] Tactic: 0 Time: 0.018048
[04/18/2022-02:37:22] [V] [TRT] Fastest Tactic: 0 Time: 0.018048
[04/18/2022-02:37:22] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:22] [V] [TRT] *************** Autotuning Reformat: Float(256,64,1:4,1,1) -> Float(2048,512,128,2,1) ***************
[04/18/2022-02:37:22] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/stack_Unsqueeze__1425:0 copy (Reformat)
[04/18/2022-02:37:22] [V] [TRT] Tactic: 1002 Time: 0.1088
[04/18/2022-02:37:22] [V] [TRT] Tactic: 0 Time: 0.018304
[04/18/2022-02:37:22] [V] [TRT] Fastest Tactic: 0 Time: 0.018304
[04/18/2022-02:37:22] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:22] [V] [TRT] *************** Autotuning Reformat: Float(256,64,64:32,1,1) -> Float(2048,512,128,2,1) ***************
[04/18/2022-02:37:22] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/combine/stack_Unsqueeze__1425:0 copy (Reformat)
[04/18/2022-02:37:22] [V] [TRT] Tactic: 1002 Time: 0.109696
[04/18/2022-02:37:22] [V] [TRT] Tactic: 0 Time: 0.019584
[04/18/2022-02:37:22] [V] [TRT] Fastest Tactic: 0 Time: 0.019584
[04/18/2022-02:37:22] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[04/18/2022-02:37:22] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:22] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:22] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:22] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1,1) -> Float(1024,256,1,4,4) ***************
[04/18/2022-02:37:22] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1,1) -> Float(256,64,1:4,1,1) ***************
[04/18/2022-02:37:22] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1,1) -> Float(256,64,64:32,1,1) ***************
[04/18/2022-02:37:22] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:22] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1,1) -> Float(1024,256,1,4,4) ***************
[04/18/2022-02:37:22] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1,1) -> Float(256,64,1:4,1,1) ***************
[04/18/2022-02:37:22] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1,1) -> Float(256,64,64:32,1,1) ***************
[04/18/2022-02:37:22] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,1,4,4) -> Float(1024,256,64,1,1) ***************
[04/18/2022-02:37:22] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,1,4,4) -> Float(256,64,1:4,1,1) ***************
[04/18/2022-02:37:22] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,1,4,4) -> Float(256,64,64:32,1,1) ***************
[04/18/2022-02:37:22] [V] [TRT] *************** Autotuning Reformat: Float(256,64,1:4,1,1) -> Float(1024,256,64,1,1) ***************
[04/18/2022-02:37:22] [V] [TRT] *************** Autotuning Reformat: Float(256,64,1:4,1,1) -> Float(1024,256,1,4,4) ***************
[04/18/2022-02:37:22] [V] [TRT] *************** Autotuning Reformat: Float(256,64,1:4,1,1) -> Float(256,64,64:32,1,1) ***************
[04/18/2022-02:37:22] [V] [TRT] *************** Autotuning Reformat: Float(256,64,64:32,1,1) -> Float(1024,256,64,1,1) ***************
[04/18/2022-02:37:22] [V] [TRT] *************** Autotuning Reformat: Float(256,64,64:32,1,1) -> Float(1024,256,1,4,4) ***************
[04/18/2022-02:37:22] [V] [TRT] *************** Autotuning Reformat: Float(256,64,64:32,1,1) -> Float(256,64,1:4,1,1) ***************
[04/18/2022-02:37:22] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:22] [V] [TRT] *************** Autotuning Reformat: Float(207360,256,16,1) -> Float(207360,1,12960,810) ***************
[04/18/2022-02:37:22] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_2:0 -> ) (Reformat)
[04/18/2022-02:37:22] [V] [TRT] Tactic: 1002 Time: 2.16256
[04/18/2022-02:37:22] [V] [TRT] Tactic: 0 Time: 0.909696
[04/18/2022-02:37:22] [V] [TRT] Fastest Tactic: 0 Time: 0.909696
[04/18/2022-02:37:22] [V] [TRT] *************** Autotuning Reformat: Float(207360,256,16,1) ->
Float(51968,1:4,3248,203) *************** [04/18/2022-02:37:22] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_2:0 -> ) (Reformat) [04/18/2022-02:37:22] [V] [TRT] Tactic: 1002 Time: 0.867072 [04/18/2022-02:37:22] [V] [TRT] Tactic: 0 Time: 1.06816 [04/18/2022-02:37:22] [V] [TRT] Fastest Tactic: 1002 Time: 0.867072 [04/18/2022-02:37:22] [V] [TRT] *************** Autotuning Reformat: Float(207360,256,16,1) -> Float(6656,256:32,16,1) *************** [04/18/2022-02:37:22] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_2:0 -> ) (Reformat) [04/18/2022-02:37:22] [V] [TRT] Tactic: 1002 Time: 0.969216 [04/18/2022-02:37:22] [V] [TRT] Tactic: 0 Time: 1.66029 [04/18/2022-02:37:22] [V] [TRT] Fastest Tactic: 1002 Time: 0.969216 [04/18/2022-02:37:22] [V] [TRT] *************** Autotuning Reformat: Float(207360,1,12960,810) -> Float(207360,256,16,1) *************** [04/18/2022-02:37:22] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_2:0 -> ) (Reformat) [04/18/2022-02:37:22] [V] [TRT] Tactic: 1002 Time: 0.955776 [04/18/2022-02:37:22] [V] [TRT] Tactic: 0 Time: 0.992 [04/18/2022-02:37:22] [V] [TRT] Fastest Tactic: 1002 Time: 0.955776 [04/18/2022-02:37:22] [V] [TRT] *************** Autotuning Reformat: Float(207360,1,12960,810) -> Float(51968,1:4,3248,203) *************** [04/18/2022-02:37:22] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_2:0 -> ) (Reformat) [04/18/2022-02:37:22] [V] [TRT] Tactic: 1002 Time: 0.982272 [04/18/2022-02:37:22] [V] [TRT] Tactic: 0 Time: 
0.88832 [04/18/2022-02:37:22] [V] [TRT] Fastest Tactic: 0 Time: 0.88832 [04/18/2022-02:37:22] [V] [TRT] *************** Autotuning Reformat: Float(207360,1,12960,810) -> Float(6656,256:32,16,1) *************** [04/18/2022-02:37:22] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_2:0 -> ) (Reformat) [04/18/2022-02:37:22] [V] [TRT] Tactic: 1002 Time: 0.956416 [04/18/2022-02:37:22] [V] [TRT] Tactic: 0 Time: 1.6928 [04/18/2022-02:37:22] [V] [TRT] Fastest Tactic: 1002 Time: 0.956416 [04/18/2022-02:37:22] [V] [TRT] *************** Autotuning Reformat: Float(51968,1:4,3248,203) -> Float(207360,256,16,1) *************** [04/18/2022-02:37:22] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_2:0 -> ) (Reformat) [04/18/2022-02:37:22] [V] [TRT] Tactic: 1002 Time: 0.892416 [04/18/2022-02:37:22] [V] [TRT] Tactic: 0 Time: 1.93459 [04/18/2022-02:37:22] [V] [TRT] Fastest Tactic: 1002 Time: 0.892416 [04/18/2022-02:37:22] [V] [TRT] *************** Autotuning Reformat: Float(51968,1:4,3248,203) -> Float(207360,1,12960,810) *************** [04/18/2022-02:37:22] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_2:0 -> ) (Reformat) [04/18/2022-02:37:22] [V] [TRT] Tactic: 1002 Time: 0.920448 [04/18/2022-02:37:22] [V] [TRT] Tactic: 0 Time: 0.88896 [04/18/2022-02:37:22] [V] [TRT] Fastest Tactic: 0 Time: 0.88896 [04/18/2022-02:37:22] [V] [TRT] *************** Autotuning Reformat: Float(51968,1:4,3248,203) -> Float(6656,256:32,16,1) *************** [04/18/2022-02:37:22] [V] [TRT] --------------- Timing Runner: Optimizer 
Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_2:0 -> ) (Reformat) [04/18/2022-02:37:22] [V] [TRT] Tactic: 1002 Time: 1.6599 [04/18/2022-02:37:22] [V] [TRT] Tactic: 0 Time: 1.28256 [04/18/2022-02:37:22] [V] [TRT] Fastest Tactic: 0 Time: 1.28256 [04/18/2022-02:37:22] [V] [TRT] *************** Autotuning Reformat: Float(6656,256:32,16,1) -> Float(207360,256,16,1) *************** [04/18/2022-02:37:22] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_2:0 -> ) (Reformat) [04/18/2022-02:37:22] [V] [TRT] Tactic: 1002 Time: 0.886272 [04/18/2022-02:37:22] [V] [TRT] Tactic: 0 Time: 0.9984 [04/18/2022-02:37:22] [V] [TRT] Fastest Tactic: 1002 Time: 0.886272 [04/18/2022-02:37:22] [V] [TRT] *************** Autotuning Reformat: Float(6656,256:32,16,1) -> Float(207360,1,12960,810) *************** [04/18/2022-02:37:22] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_2:0 -> ) (Reformat) [04/18/2022-02:37:22] [V] [TRT] Tactic: 1002 Time: 0.893056 [04/18/2022-02:37:22] [V] [TRT] Tactic: 0 Time: 0.928128 [04/18/2022-02:37:22] [V] [TRT] Fastest Tactic: 1002 Time: 0.893056 [04/18/2022-02:37:22] [V] [TRT] *************** Autotuning Reformat: Float(6656,256:32,16,1) -> Float(51968,1:4,3248,203) *************** [04/18/2022-02:37:22] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_2:0 -> ) (Reformat) [04/18/2022-02:37:22] [V] [TRT] Tactic: 1002 Time: 0.998784 [04/18/2022-02:37:22] [V] [TRT] Tactic: 0 Time: 1.14099 [04/18/2022-02:37:22] [V] [TRT] Fastest Tactic: 1002 Time: 0.998784 [04/18/2022-02:37:22] [V] [TRT] 
=============== Computing reformatting costs [04/18/2022-02:37:22] [V] [TRT] *************** Autotuning Reformat: Float(9216,256,16,1) -> Float(9216,1,576,36) *************** [04/18/2022-02:37:22] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_2:0 -> ) (Reformat) [04/18/2022-02:37:22] [V] [TRT] Tactic: 1002 Time: 0.054016 [04/18/2022-02:37:22] [V] [TRT] Tactic: 0 Time: 0.028672 [04/18/2022-02:37:22] [V] [TRT] Fastest Tactic: 0 Time: 0.028672 [04/18/2022-02:37:22] [V] [TRT] *************** Autotuning Reformat: Float(9216,256,16,1) -> Float(2304,1:4,144,9) *************** [04/18/2022-02:37:22] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_2:0 -> ) (Reformat) [04/18/2022-02:37:22] [V] [TRT] Tactic: 1002 Time: 0.05504 [04/18/2022-02:37:22] [V] [TRT] Tactic: 0 Time: 0.029056 [04/18/2022-02:37:22] [V] [TRT] Fastest Tactic: 0 Time: 0.029056 [04/18/2022-02:37:22] [V] [TRT] *************** Autotuning Reformat: Float(9216,256,16,1) -> Float(512,256:32,16,1) *************** [04/18/2022-02:37:22] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_2:0 -> ) (Reformat) [04/18/2022-02:37:22] [V] [TRT] Tactic: 1002 Time: 0.050176 [04/18/2022-02:37:22] [V] [TRT] Tactic: 0 Time: 0.091264 [04/18/2022-02:37:22] [V] [TRT] Fastest Tactic: 1002 Time: 0.050176 [04/18/2022-02:37:22] [V] [TRT] *************** Autotuning Reformat: Float(9216,1,576,36) -> Float(9216,256,16,1) *************** [04/18/2022-02:37:22] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_2:0 -> ) (Reformat) 
[04/18/2022-02:37:22] [V] [TRT] Tactic: 1002 Time: 0.057216 [04/18/2022-02:37:22] [V] [TRT] Tactic: 0 Time: 0.032256 [04/18/2022-02:37:22] [V] [TRT] Fastest Tactic: 0 Time: 0.032256 [04/18/2022-02:37:22] [V] [TRT] *************** Autotuning Reformat: Float(9216,1,576,36) -> Float(2304,1:4,144,9) *************** [04/18/2022-02:37:22] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_2:0 -> ) (Reformat) [04/18/2022-02:37:22] [V] [TRT] Tactic: 1002 Time: 0.05376 [04/18/2022-02:37:22] [V] [TRT] Tactic: 0 Time: 0.026112 [04/18/2022-02:37:22] [V] [TRT] Fastest Tactic: 0 Time: 0.026112 [04/18/2022-02:37:22] [V] [TRT] *************** Autotuning Reformat: Float(9216,1,576,36) -> Float(512,256:32,16,1) *************** [04/18/2022-02:37:22] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_2:0 -> ) (Reformat) [04/18/2022-02:37:22] [V] [TRT] Tactic: 1002 Time: 0.050432 [04/18/2022-02:37:22] [V] [TRT] Tactic: 0 Time: 0.102912 [04/18/2022-02:37:22] [V] [TRT] Fastest Tactic: 1002 Time: 0.050432 [04/18/2022-02:37:22] [V] [TRT] *************** Autotuning Reformat: Float(2304,1:4,144,9) -> Float(9216,256,16,1) *************** [04/18/2022-02:37:22] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_2:0 -> ) (Reformat) [04/18/2022-02:37:22] [V] [TRT] Tactic: 1002 Time: 0.056448 [04/18/2022-02:37:22] [V] [TRT] Tactic: 0 Time: 0.032256 [04/18/2022-02:37:22] [V] [TRT] Fastest Tactic: 0 Time: 0.032256 [04/18/2022-02:37:22] [V] [TRT] *************** Autotuning Reformat: Float(2304,1:4,144,9) -> Float(9216,1,576,36) *************** [04/18/2022-02:37:22] [V] [TRT] --------------- Timing Runner: Optimizer 
Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_2:0 -> ) (Reformat) [04/18/2022-02:37:22] [V] [TRT] Tactic: 1002 Time: 0.054144 [04/18/2022-02:37:22] [V] [TRT] Tactic: 0 Time: 0.026112 [04/18/2022-02:37:22] [V] [TRT] Fastest Tactic: 0 Time: 0.026112 [04/18/2022-02:37:22] [V] [TRT] *************** Autotuning Reformat: Float(2304,1:4,144,9) -> Float(512,256:32,16,1) *************** [04/18/2022-02:37:22] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_2:0 -> ) (Reformat) [04/18/2022-02:37:22] [V] [TRT] Tactic: 1002 Time: 0.05248 [04/18/2022-02:37:23] [V] [TRT] Tactic: 0 Time: 0.103168 [04/18/2022-02:37:23] [V] [TRT] Fastest Tactic: 1002 Time: 0.05248 [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(9216,256,16,1) *************** [04/18/2022-02:37:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_2:0 -> ) (Reformat) [04/18/2022-02:37:23] [V] [TRT] Tactic: 1002 Time: 0.05568 [04/18/2022-02:37:23] [V] [TRT] Tactic: 0 Time: 0.03136 [04/18/2022-02:37:23] [V] [TRT] Fastest Tactic: 0 Time: 0.03136 [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(512,256:32,16,1) -> Float(9216,1,576,36) *************** [04/18/2022-02:37:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_2:0 -> ) (Reformat) [04/18/2022-02:37:23] [V] [TRT] Tactic: 1002 Time: 0.055552 [04/18/2022-02:37:23] [V] [TRT] Tactic: 0 Time: 0.026624 [04/18/2022-02:37:23] [V] [TRT] Fastest Tactic: 0 Time: 0.026624 [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: 
Float(512,256:32,16,1) -> Float(2304,1:4,144,9) *************** [04/18/2022-02:37:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_2:0 -> ) (Reformat) [04/18/2022-02:37:23] [V] [TRT] Tactic: 1002 Time: 0.052992 [04/18/2022-02:37:23] [V] [TRT] Tactic: 0 Time: 0.028416 [04/18/2022-02:37:23] [V] [TRT] Fastest Tactic: 0 Time: 0.028416 [04/18/2022-02:37:23] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(4096,1,512,64) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1024,1:4,128,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(4096,64,8,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(1024,1:4,128,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,64,8,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,1,512,64) *************** [04/18/2022-02:37:23] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(4096,1,512,64) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1024,1:4,128,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(4096,64,8,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(1024,1:4,128,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: 
Float(1024,1:4,128,16) -> Float(4096,64,8,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,1,512,64) *************** [04/18/2022-02:37:23] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1) -> Float(1024,1,256,4) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1) -> Float(256,1:4,64,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1) -> Float(256,256:32,64,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1) -> Float(1:4,512,128,2) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,4) -> Float(1024,256,64,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,4) -> Float(256,1:4,64,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,4) -> Float(256,256:32,64,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,4) -> Float(1:4,512,128,2) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,1) -> Float(1024,256,64,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,1) -> Float(1024,1,256,4) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,1) -> Float(256,256:32,64,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,1) -> Float(1:4,512,128,2) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,256:32,64,1) -> Float(1024,256,64,1) *************** [04/18/2022-02:37:23] [V] [TRT] 
*************** Autotuning Reformat: Float(256,256:32,64,1) -> Float(1024,1,256,4) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,256:32,64,1) -> Float(256,1:4,64,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,256:32,64,1) -> Float(1:4,512,128,2) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,128,2) -> Float(1024,256,64,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,128,2) -> Float(1024,1,256,4) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,128,2) -> Float(256,1:4,64,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,128,2) -> Float(256,256:32,64,1) *************** [04/18/2022-02:37:23] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1) -> Float(1024,1,256,4) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1) -> Float(256,1:4,64,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,256,64,1) -> Float(256,256:32,64,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,4) -> Float(1024,256,64,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,4) -> Float(256,1:4,64,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,4) -> Float(256,256:32,64,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,1) -> Float(1024,256,64,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,1) -> Float(1024,1,256,4) *************** 
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,1) -> Float(256,256:32,64,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,256:32,64,1) -> Float(1024,256,64,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,256:32,64,1) -> Float(1024,1,256,4) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,256:32,64,1) -> Float(256,1:4,64,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,128,2) -> Float(1024,256,64,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,128,2) -> Float(1024,1,256,4) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,128,2) -> Float(256,1:4,64,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,512,128,2) -> Float(256,256:32,64,1) *************** [04/18/2022-02:37:23] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(4096,1,512,64) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1024,1:4,128,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(128,64:32,8,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1:4,128,16,2) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(4096,64,8,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(1024,1:4,128,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> 
Float(128,64:32,8,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(1:4,128,16,2) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,64,8,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,1,512,64) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(128,64:32,8,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(1:4,128,16,2) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,64,8,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,1,512,64) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(1024,1:4,128,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(1:4,128,16,2) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,128,16,2) -> Float(4096,64,8,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,128,16,2) -> Float(4096,1,512,64) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,128,16,2) -> Float(1024,1:4,128,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,128,16,2) -> Float(128,64:32,8,1) *************** [04/18/2022-02:37:23] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(4096,1,512,64) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: 
Float(4096,64,8,1) -> Float(1024,1:4,128,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(128,64:32,8,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1:4,128,16,2) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(4096,64,8,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(1024,1:4,128,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(128,64:32,8,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(1:4,128,16,2) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,64,8,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,1,512,64) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(128,64:32,8,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(1:4,128,16,2) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,64,8,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,1,512,64) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(1024,1:4,128,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(1:4,128,16,2) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,128,16,2) -> Float(4096,64,8,1) 
*************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,128,16,2) -> Float(4096,1,512,64) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,128,16,2) -> Float(1024,1:4,128,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,128,16,2) -> Float(128,64:32,8,1) *************** [04/18/2022-02:37:23] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(1024,1,256,64) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(256,1:4,64,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(1024,16,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(256,1:4,64,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,16,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,1,256,64) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(1024,16,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(1024,1,256,64) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(256,1:4,64,16) *************** [04/18/2022-02:37:23] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(4096,1,512,64) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1024,1:4,128,16) *************** [04/18/2022-02:37:23] [V] 
[TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,128,16,2) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,128,16,2) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,128,16,2) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:23] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,128,16,2) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,128,16,2) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,128,16,2) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:23] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(1024,1,256,64) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(256,1:4,64,16) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(1024,16,4,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(256,1:4,64,16) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,16,4,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,1,256,64) ***************
[04/18/2022-02:37:23] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(1024,1,256,64) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(256,1:4,64,16) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(1024,16,4,1) ***************
[04/18/2022-02:37:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/batchnorm/FusedBatchNormV3:0) (Reformat)
[04/18/2022-02:37:23] [V] [TRT] Tactic: 1002 Time: 0.0288
[04/18/2022-02:37:23] [V] [TRT] Tactic: 0 Time: 0.019072
[04/18/2022-02:37:23] [V] [TRT] Fastest Tactic: 0 Time: 0.019072
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(256,1:4,64,16) ***************
[04/18/2022-02:37:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/batchnorm/FusedBatchNormV3:0) (Reformat)
[04/18/2022-02:37:23] [V] [TRT] Tactic: 1002 Time: 0.048
[04/18/2022-02:37:23] [V] [TRT] Tactic: 0 Time: 0.018176
[04/18/2022-02:37:23] [V] [TRT] Fastest Tactic: 0 Time: 0.018176
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,16,4,1) ***************
[04/18/2022-02:37:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/batchnorm/FusedBatchNormV3:0) (Reformat)
[04/18/2022-02:37:23] [V] [TRT] Tactic: 1002 Time: 0.0288
[04/18/2022-02:37:23] [V] [TRT] Tactic: 0 Time: 0.018688
[04/18/2022-02:37:23] [V] [TRT] Fastest Tactic: 0 Time: 0.018688
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,1,256,64) ***************
[04/18/2022-02:37:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat( -> StatefulPartitionedCall/EfficientDet-D0/bifpn/node_25/3_up_lvl_7/post_combine/batchnorm/FusedBatchNormV3:0) (Reformat)
[04/18/2022-02:37:23] [V] [TRT] Tactic: 1002 Time: 0.047744
[04/18/2022-02:37:23] [V] [TRT] Tactic: 0 Time: 0.018432
[04/18/2022-02:37:23] [V] [TRT] Fastest Tactic: 0 Time: 0.018432
[04/18/2022-02:37:23] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:23] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:23] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(1024,1,256,64) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(256,1:4,64,16) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(1024,16,4,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(256,1:4,64,16) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,16,4,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,1,256,64) ***************
[04/18/2022-02:37:23] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(1024,1,256,64) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(256,1:4,64,16) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(1024,16,4,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(256,1:4,64,16) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,16,4,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,1,256,64) ***************
[04/18/2022-02:37:23] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(1024,1,256,64) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(256,1:4,64,16) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(1024,16,4,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(256,1:4,64,16) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,16,4,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,1,256,64) ***************
[04/18/2022-02:37:23] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(1024,1,256,64) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(256,1:4,64,16) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(1024,16,4,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(256,1:4,64,16) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,16,4,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,1,256,64) ***************
[04/18/2022-02:37:23] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(128,64:32,8,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1:4,128,16,2) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(128,64:32,8,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(1:4,128,16,2) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(128,64:32,8,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(1:4,128,16,2) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(1:4,128,16,2) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,128,16,2) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,128,16,2) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,128,16,2) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,128,16,2) -> Float(128,64:32,8,1) ***************
[04/18/2022-02:37:23] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(128,64:32,8,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1:4,128,16,2) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(128,64:32,8,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(1:4,128,16,2) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(128,64:32,8,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(1:4,128,16,2) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(1:4,128,16,2) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,128,16,2) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,128,16,2) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,128,16,2) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,128,16,2) -> Float(128,64:32,8,1) ***************
[04/18/2022-02:37:23] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,128,16,2) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,128,16,2) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,128,16,2) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:23] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,128,16,2) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,128,16,2) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,128,16,2) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:23] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:23] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,64,8,1) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(4096,1,512,64) -> Float(1024,1:4,128,16) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,64,8,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1:4,128,16) -> Float(4096,1,512,64) ***************
[04/18/2022-02:37:23] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(1024,1,256,64) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(256,1:4,64,16) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(32,16:32,4,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(1:4,32,8,2) ***************
[04/18/2022-02:37:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_4/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:37:23] [V] [TRT] Tactic: 1002 Time: 0.030464
[04/18/2022-02:37:23] [V] [TRT] Tactic: 0 Time: 0.019328
[04/18/2022-02:37:23] [V] [TRT] Fastest Tactic: 0 Time: 0.019328
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(1024,16,4,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(256,1:4,64,16) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(32,16:32,4,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(1:4,32,8,2) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,16,4,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,1,256,64) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(32,16:32,4,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1:4,32,8,2) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(1024,16,4,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(1024,1,256,64) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(256,1:4,64,16) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(1:4,32,8,2) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32,8,2) -> Float(1024,16,4,1) ***************
[04/18/2022-02:37:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/ClassPredictionTower/conv2d_0/BatchNorm/feature_4/FusedBatchNormV3:0 -> ) (Reformat)
[04/18/2022-02:37:23] [V] [TRT] Tactic: 1002 Time: 0.053376
[04/18/2022-02:37:23] [V] [TRT] Tactic: 0 Time: 0.018944
[04/18/2022-02:37:23] [V] [TRT] Fastest Tactic: 0 Time: 0.018944
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32,8,2) -> Float(1024,1,256,64) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32,8,2) -> Float(256,1:4,64,16) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32,8,2) -> Float(32,16:32,4,1) ***************
[04/18/2022-02:37:23] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(1024,1,256,64) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(256,1:4,64,16) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(32,16:32,4,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(1:4,32,8,2) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(1024,16,4,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(256,1:4,64,16) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(32,16:32,4,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(1:4,32,8,2) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,16,4,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,1,256,64) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(32,16:32,4,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1:4,32,8,2) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(1024,16,4,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(1024,1,256,64) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(256,1:4,64,16) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(1:4,32,8,2) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32,8,2) -> Float(1024,16,4,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32,8,2) -> Float(1024,1,256,64) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32,8,2) -> Float(256,1:4,64,16) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32,8,2) -> Float(32,16:32,4,1) ***************
[04/18/2022-02:37:23] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(1024,1,256,64) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(256,1:4,64,16) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(1024,16,4,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(256,1:4,64,16) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,16,4,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,1,256,64) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(1024,16,4,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(1024,1,256,64) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(256,1:4,64,16) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32,8,2) -> Float(1024,16,4,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32,8,2) -> Float(1024,1,256,64) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32,8,2) -> Float(256,1:4,64,16) ***************
[04/18/2022-02:37:23] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(1024,1,256,64) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(256,1:4,64,16) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(1024,16,4,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(256,1:4,64,16) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,16,4,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,1,256,64) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(1024,16,4,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(1024,1,256,64) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(256,1:4,64,16) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32,8,2) -> Float(1024,16,4,1) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32,8,2) -> Float(1024,1,256,64) ***************
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32,8,2) -> Float(256,1:4,64,16) ***************
[04/18/2022-02:37:23] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(51840,64,8,1) -> Float(51840,1,6480,810) ***************
[04/18/2022-02:37:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_3:0 -> ) (Reformat)
[04/18/2022-02:37:23] [V] [TRT] Tactic: 1002 Time: 0.098304
[04/18/2022-02:37:23] [V] [TRT] Tactic: 0 Time: 0.09856
[04/18/2022-02:37:23] [V] [TRT] Fastest Tactic: 1002 Time: 0.098304
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(51840,64,8,1) -> Float(12992,1:4,1624,203) ***************
[04/18/2022-02:37:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_3:0 -> ) (Reformat)
[04/18/2022-02:37:23] [V] [TRT] Tactic: 1002 Time: 0.07232
[04/18/2022-02:37:23] [V] [TRT] Tactic: 0 Time: 0.104832
[04/18/2022-02:37:23] [V] [TRT] Fastest Tactic: 1002 Time: 0.07232
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(51840,64,8,1) -> Float(1664,64:32,8,1) ***************
[04/18/2022-02:37:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_3:0 -> ) (Reformat)
[04/18/2022-02:37:23] [V] [TRT] Tactic: 1002 Time: 0.068736
[04/18/2022-02:37:23] [V] [TRT] Tactic: 0 Time: 0.237824
[04/18/2022-02:37:23] [V] [TRT] Fastest Tactic: 1002 Time: 0.068736
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(51840,1,6480,810) -> Float(51840,64,8,1) ***************
[04/18/2022-02:37:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_3:0 -> ) (Reformat)
[04/18/2022-02:37:23] [V] [TRT] Tactic: 1002 Time: 0.080896
[04/18/2022-02:37:23] [V] [TRT] Tactic: 0 Time: 0.099968
[04/18/2022-02:37:23] [V] [TRT] Fastest Tactic: 1002 Time: 0.080896
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(51840,1,6480,810) -> Float(12992,1:4,1624,203) ***************
[04/18/2022-02:37:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_3:0 -> ) (Reformat)
[04/18/2022-02:37:23] [V] [TRT] Tactic: 1002 Time: 0.07104
[04/18/2022-02:37:23] [V] [TRT] Tactic: 0 Time: 0.077952
[04/18/2022-02:37:23] [V] [TRT] Fastest Tactic: 1002 Time: 0.07104
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(51840,1,6480,810) -> Float(1664,64:32,8,1) ***************
[04/18/2022-02:37:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_3:0 -> ) (Reformat)
[04/18/2022-02:37:23] [V] [TRT] Tactic: 1002 Time: 0.067328
[04/18/2022-02:37:23] [V] [TRT] Tactic: 0 Time: 0.29696
[04/18/2022-02:37:23] [V] [TRT] Fastest Tactic: 1002 Time: 0.067328
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(12992,1:4,1624,203) -> Float(51840,64,8,1) ***************
[04/18/2022-02:37:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_3:0 -> ) (Reformat)
[04/18/2022-02:37:23] [V] [TRT] Tactic: 1002 Time: 0.078208
[04/18/2022-02:37:23] [V] [TRT] Tactic: 0 Time: 0.084224
[04/18/2022-02:37:23] [V] [TRT] Fastest Tactic: 1002 Time: 0.078208
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(12992,1:4,1624,203) -> Float(51840,1,6480,810) ***************
[04/18/2022-02:37:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_3:0 -> ) (Reformat)
[04/18/2022-02:37:23] [V] [TRT] Tactic: 1002 Time: 0.111616
[04/18/2022-02:37:23] [V] [TRT] Tactic: 0 Time: 0.076928
[04/18/2022-02:37:23] [V] [TRT] Fastest Tactic: 0 Time: 0.076928
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(12992,1:4,1624,203) -> Float(1664,64:32,8,1) ***************
[04/18/2022-02:37:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_3:0 -> ) (Reformat)
[04/18/2022-02:37:23] [V] [TRT] Tactic: 1002 Time: 0.063744
[04/18/2022-02:37:23] [V] [TRT] Tactic: 0 Time: 0.297344
[04/18/2022-02:37:23] [V] [TRT] Fastest Tactic: 1002 Time: 0.063744
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1664,64:32,8,1) -> Float(51840,64,8,1) ***************
[04/18/2022-02:37:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_3:0 -> ) (Reformat)
[04/18/2022-02:37:23] [V] [TRT] Tactic: 1002 Time: 0.075392
[04/18/2022-02:37:23] [V] [TRT] Tactic: 0 Time: 0.078848
[04/18/2022-02:37:23] [V] [TRT] Fastest Tactic: 1002 Time: 0.075392
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1664,64:32,8,1) -> Float(51840,1,6480,810) ***************
[04/18/2022-02:37:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_3:0 -> ) (Reformat)
[04/18/2022-02:37:23] [V] [TRT] Tactic: 1002 Time: 0.11008
[04/18/2022-02:37:23] [V] [TRT] Tactic: 0 Time: 0.075904
[04/18/2022-02:37:23] [V] [TRT] Fastest Tactic: 0 Time: 0.075904
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1664,64:32,8,1) -> Float(12992,1:4,1624,203) ***************
[04/18/2022-02:37:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_3:0 -> ) (Reformat)
[04/18/2022-02:37:23] [V] [TRT] Tactic: 1002 Time: 0.071936
[04/18/2022-02:37:23] [V] [TRT] Tactic: 0 Time: 0.108032
[04/18/2022-02:37:23] [V] [TRT] Fastest Tactic: 1002 Time: 0.071936
[04/18/2022-02:37:23] [V] [TRT] =============== Computing reformatting costs
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(2304,64,8,1) -> Float(2304,1,288,36) ***************
[04/18/2022-02:37:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_3:0 -> ) (Reformat)
[04/18/2022-02:37:23] [V] [TRT] Tactic: 1002 Time: 0.053632
[04/18/2022-02:37:23] [V] [TRT] Tactic: 0 Time: 0.02112
[04/18/2022-02:37:23] [V] [TRT] Fastest Tactic: 0 Time: 0.02112
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(2304,64,8,1) -> Float(576,1:4,72,9) ***************
[04/18/2022-02:37:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_3:0 -> ) (Reformat)
[04/18/2022-02:37:23] [V] [TRT] Tactic: 1002 Time: 0.054784
[04/18/2022-02:37:23] [V] [TRT] Tactic: 0 Time: 0.021888
[04/18/2022-02:37:23] [V] [TRT] Fastest Tactic: 0 Time: 0.021888
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(2304,64,8,1) -> Float(128,64:32,8,1) ***************
[04/18/2022-02:37:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_3:0 -> ) (Reformat)
[04/18/2022-02:37:23] [V] [TRT] Tactic: 1002 Time: 0.051072
[04/18/2022-02:37:23] [V] [TRT] Tactic: 0 Time: 0.033408
[04/18/2022-02:37:23] [V] [TRT] Fastest Tactic: 0 Time: 0.033408
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(2304,1,288,36) -> Float(2304,64,8,1) ***************
[04/18/2022-02:37:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_3:0 -> ) (Reformat)
[04/18/2022-02:37:23] [V] [TRT] Tactic: 1002 Time: 0.055936
[04/18/2022-02:37:23] [V] [TRT] Tactic: 0 Time: 0.019584
[04/18/2022-02:37:23] [V] [TRT] Fastest Tactic: 0 Time: 0.019584
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(2304,1,288,36) -> Float(576,1:4,72,9) ***************
[04/18/2022-02:37:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_3:0 -> ) (Reformat)
[04/18/2022-02:37:23] [V] [TRT] Tactic: 1002 Time: 0.05568
[04/18/2022-02:37:23] [V] [TRT] Tactic: 0 Time: 0.019456
[04/18/2022-02:37:23] [V] [TRT] Fastest Tactic: 0 Time: 0.019456
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(2304,1,288,36) -> Float(128,64:32,8,1) ***************
[04/18/2022-02:37:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_3:0 -> ) (Reformat)
[04/18/2022-02:37:23] [V] [TRT] Tactic: 1002 Time: 0.05056
[04/18/2022-02:37:23] [V] [TRT] Tactic: 0 Time: 0.034048
[04/18/2022-02:37:23] [V] [TRT] Fastest Tactic: 0 Time: 0.034048
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(576,1:4,72,9) -> Float(2304,64,8,1) ***************
[04/18/2022-02:37:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_3:0 -> ) (Reformat)
[04/18/2022-02:37:23] [V] [TRT] Tactic: 1002 Time: 0.054784
[04/18/2022-02:37:23] [V] [TRT] Tactic: 0 Time: 0.021504
[04/18/2022-02:37:23] [V] [TRT] Fastest Tactic: 0 Time: 0.021504
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(576,1:4,72,9) -> Float(2304,1,288,36) ***************
[04/18/2022-02:37:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_3:0 -> ) (Reformat)
[04/18/2022-02:37:23] [V] [TRT] Tactic: 1002 Time: 0.056064
[04/18/2022-02:37:23] [V] [TRT] Tactic: 0 Time: 0.019328
[04/18/2022-02:37:23] [V] [TRT] Fastest Tactic: 0 Time: 0.019328
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(576,1:4,72,9)
-> Float(128,64:32,8,1) *************** [04/18/2022-02:37:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_3:0 -> ) (Reformat) [04/18/2022-02:37:23] [V] [TRT] Tactic: 1002 Time: 0.0512 [04/18/2022-02:37:23] [V] [TRT] Tactic: 0 Time: 0.034432 [04/18/2022-02:37:23] [V] [TRT] Fastest Tactic: 0 Time: 0.034432 [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(2304,64,8,1) *************** [04/18/2022-02:37:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_3:0 -> ) (Reformat) [04/18/2022-02:37:23] [V] [TRT] Tactic: 1002 Time: 0.055424 [04/18/2022-02:37:23] [V] [TRT] Tactic: 0 Time: 0.022272 [04/18/2022-02:37:23] [V] [TRT] Fastest Tactic: 0 Time: 0.022272 [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(2304,1,288,36) *************** [04/18/2022-02:37:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_3:0 -> ) (Reformat) [04/18/2022-02:37:23] [V] [TRT] Tactic: 1002 Time: 0.054272 [04/18/2022-02:37:23] [V] [TRT] Tactic: 0 Time: 0.020608 [04/18/2022-02:37:23] [V] [TRT] Fastest Tactic: 0 Time: 0.020608 [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(128,64:32,8,1) -> Float(576,1:4,72,9) *************** [04/18/2022-02:37:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_3:0 -> ) (Reformat) [04/18/2022-02:37:23] [V] [TRT] Tactic: 1002 Time: 0.05376 [04/18/2022-02:37:23] [V] [TRT] Tactic: 0 Time: 0.019712 [04/18/2022-02:37:23] [V] [TRT] Fastest 
Tactic: 0 Time: 0.019712 [04/18/2022-02:37:23] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(1024,1,256,64) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(256,1:4,64,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(1024,16,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(256,1:4,64,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,16,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,1,256,64) *************** [04/18/2022-02:37:23] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(1024,1,256,64) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(256,1:4,64,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(1024,16,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(256,1:4,64,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,16,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,1,256,64) *************** [04/18/2022-02:37:23] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(1024,1,256,64) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: 
Float(1024,16,4,1) -> Float(256,1:4,64,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(32,16:32,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(1:4,32,8,2) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(1024,16,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(256,1:4,64,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(32,16:32,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(1:4,32,8,2) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,16,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,1,256,64) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(32,16:32,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1:4,32,8,2) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(1024,16,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(1024,1,256,64) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(256,1:4,64,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(1:4,32,8,2) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32,8,2) -> Float(1024,16,4,1) *************** [04/18/2022-02:37:23] 
[V] [TRT] *************** Autotuning Reformat: Float(1:4,32,8,2) -> Float(1024,1,256,64) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32,8,2) -> Float(256,1:4,64,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32,8,2) -> Float(32,16:32,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(1024,1,256,64) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(256,1:4,64,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(32,16:32,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(1:4,32,8,2) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(1024,16,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(256,1:4,64,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(32,16:32,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(1:4,32,8,2) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,16,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,1,256,64) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(32,16:32,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1:4,32,8,2) *************** [04/18/2022-02:37:23] [V] 
[TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(1024,16,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(1024,1,256,64) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(256,1:4,64,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(1:4,32,8,2) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32,8,2) -> Float(1024,16,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32,8,2) -> Float(1024,1,256,64) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32,8,2) -> Float(256,1:4,64,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32,8,2) -> Float(32,16:32,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(1024,1,256,64) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(256,1:4,64,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(1024,16,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(256,1:4,64,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,16,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,1,256,64) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(1024,16,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] 
*************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(1024,1,256,64) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(256,1:4,64,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32,8,2) -> Float(1024,16,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32,8,2) -> Float(1024,1,256,64) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32,8,2) -> Float(256,1:4,64,16) *************** [04/18/2022-02:37:23] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(1024,1,256,64) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(256,1:4,64,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(1024,16,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(256,1:4,64,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,16,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,1,256,64) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(1024,16,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(1024,1,256,64) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(256,1:4,64,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32,8,2) -> Float(1024,16,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] 
*************** Autotuning Reformat: Float(1:4,32,8,2) -> Float(1024,1,256,64) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32,8,2) -> Float(256,1:4,64,16) *************** [04/18/2022-02:37:23] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(1024,1,256,64) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(256,1:4,64,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(1024,16,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(256,1:4,64,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,16,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,1,256,64) *************** [04/18/2022-02:37:23] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(1024,1,256,64) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(256,1:4,64,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(1024,16,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(256,1:4,64,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,16,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,1,256,64) *************** [04/18/2022-02:37:23] [V] [TRT] =============== Computing reformatting costs 
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(1024,1,256,64) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(256,1:4,64,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(32,16:32,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(1:4,32,8,2) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(1024,16,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(256,1:4,64,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(32,16:32,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(1:4,32,8,2) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,16,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,1,256,64) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(32,16:32,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1:4,32,8,2) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(1024,16,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(1024,1,256,64) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(256,1:4,64,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: 
Float(32,16:32,4,1) -> Float(1:4,32,8,2) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32,8,2) -> Float(1024,16,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32,8,2) -> Float(1024,1,256,64) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32,8,2) -> Float(256,1:4,64,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32,8,2) -> Float(32,16:32,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(1024,1,256,64) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(256,1:4,64,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(32,16:32,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(1:4,32,8,2) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(1024,16,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(256,1:4,64,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(32,16:32,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(1:4,32,8,2) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,16,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,1,256,64) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: 
Float(256,1:4,64,16) -> Float(32,16:32,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1:4,32,8,2) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(1024,16,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(1024,1,256,64) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(256,1:4,64,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(1:4,32,8,2) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32,8,2) -> Float(1024,16,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32,8,2) -> Float(1024,1,256,64) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32,8,2) -> Float(256,1:4,64,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32,8,2) -> Float(32,16:32,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(1024,1,256,64) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(256,1:4,64,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(1024,16,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(256,1:4,64,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,16,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: 
Float(256,1:4,64,16) -> Float(1024,1,256,64) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(1024,16,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(1024,1,256,64) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(256,1:4,64,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32,8,2) -> Float(1024,16,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32,8,2) -> Float(1024,1,256,64) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32,8,2) -> Float(256,1:4,64,16) *************** [04/18/2022-02:37:23] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(1024,1,256,64) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(256,1:4,64,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(1024,16,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(256,1:4,64,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,16,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,1,256,64) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(1024,16,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(1024,1,256,64) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: 
Float(32,16:32,4,1) -> Float(256,1:4,64,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32,8,2) -> Float(1024,16,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32,8,2) -> Float(1024,1,256,64) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1:4,32,8,2) -> Float(256,1:4,64,16) *************** [04/18/2022-02:37:23] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(1024,1,256,64) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(256,1:4,64,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(1024,16,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(256,1:4,64,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,16,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,1,256,64) *************** [04/18/2022-02:37:23] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(1024,1,256,64) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,16,4,1) -> Float(256,1:4,64,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(1024,16,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(1024,1,256,64) -> Float(256,1:4,64,16) *************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,16,4,1) 
*************** [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(256,1:4,64,16) -> Float(1024,1,256,64) *************** [04/18/2022-02:37:23] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(12960,16,4,1) -> Float(12960,1,3240,810) *************** [04/18/2022-02:37:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_4:0 -> ) (Reformat) [04/18/2022-02:37:23] [V] [TRT] Tactic: 1002 Time: 0.081792 [04/18/2022-02:37:23] [V] [TRT] Tactic: 0 Time: 0.031872 [04/18/2022-02:37:23] [V] [TRT] Fastest Tactic: 0 Time: 0.031872 [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(12960,16,4,1) -> Float(3248,1:4,812,203) *************** [04/18/2022-02:37:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_4:0 -> ) (Reformat) [04/18/2022-02:37:23] [V] [TRT] Tactic: 1002 Time: 0.072448 [04/18/2022-02:37:23] [V] [TRT] Tactic: 0 Time: 0.032768 [04/18/2022-02:37:23] [V] [TRT] Fastest Tactic: 0 Time: 0.032768 [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(12960,16,4,1) -> Float(416,16:32,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_4:0 -> ) (Reformat) [04/18/2022-02:37:23] [V] [TRT] Tactic: 1002 Time: 0.070656 [04/18/2022-02:37:23] [V] [TRT] Tactic: 0 Time: 0.03712 [04/18/2022-02:37:23] [V] [TRT] Fastest Tactic: 0 Time: 0.03712 [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(12960,1,3240,810) -> Float(12960,16,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] 
--------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_4:0 -> ) (Reformat) [04/18/2022-02:37:23] [V] [TRT] Tactic: 1002 Time: 0.040576 [04/18/2022-02:37:23] [V] [TRT] Tactic: 0 Time: 0.027392 [04/18/2022-02:37:23] [V] [TRT] Fastest Tactic: 0 Time: 0.027392 [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(12960,1,3240,810) -> Float(3248,1:4,812,203) *************** [04/18/2022-02:37:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_4:0 -> ) (Reformat) [04/18/2022-02:37:23] [V] [TRT] Tactic: 1002 Time: 0.068352 [04/18/2022-02:37:23] [V] [TRT] Tactic: 0 Time: 0.030464 [04/18/2022-02:37:23] [V] [TRT] Fastest Tactic: 0 Time: 0.030464 [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(12960,1,3240,810) -> Float(416,16:32,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_4:0 -> ) (Reformat) [04/18/2022-02:37:23] [V] [TRT] Tactic: 1002 Time: 0.066944 [04/18/2022-02:37:23] [V] [TRT] Tactic: 0 Time: 0.048384 [04/18/2022-02:37:23] [V] [TRT] Fastest Tactic: 0 Time: 0.048384 [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(3248,1:4,812,203) -> Float(12960,16,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_4:0 -> ) (Reformat) [04/18/2022-02:37:23] [V] [TRT] Tactic: 1002 Time: 0.039808 [04/18/2022-02:37:23] [V] [TRT] Tactic: 0 Time: 0.029824 [04/18/2022-02:37:23] [V] [TRT] Fastest Tactic: 0 Time: 0.029824 
[04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(3248,1:4,812,203) -> Float(12960,1,3240,810) *************** [04/18/2022-02:37:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_4:0 -> ) (Reformat) [04/18/2022-02:37:23] [V] [TRT] Tactic: 1002 Time: 0.073856 [04/18/2022-02:37:23] [V] [TRT] Tactic: 0 Time: 0.030464 [04/18/2022-02:37:23] [V] [TRT] Fastest Tactic: 0 Time: 0.030464 [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(3248,1:4,812,203) -> Float(416,16:32,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_4:0 -> ) (Reformat) [04/18/2022-02:37:23] [V] [TRT] Tactic: 1002 Time: 0.066944 [04/18/2022-02:37:23] [V] [TRT] Tactic: 0 Time: 0.044416 [04/18/2022-02:37:23] [V] [TRT] Fastest Tactic: 0 Time: 0.044416 [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(416,16:32,4,1) -> Float(12960,16,4,1) *************** [04/18/2022-02:37:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_4:0 -> ) (Reformat) [04/18/2022-02:37:23] [V] [TRT] Tactic: 1002 Time: 0.03968 [04/18/2022-02:37:23] [V] [TRT] Tactic: 0 Time: 0.030208 [04/18/2022-02:37:23] [V] [TRT] Fastest Tactic: 0 Time: 0.030208 [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(416,16:32,4,1) -> Float(12960,1,3240,810) *************** [04/18/2022-02:37:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_4:0 -> ) (Reformat) [04/18/2022-02:37:23] [V] [TRT] 
Tactic: 1002 Time: 0.073216 [04/18/2022-02:37:23] [V] [TRT] Tactic: 0 Time: 0.030592 [04/18/2022-02:37:23] [V] [TRT] Fastest Tactic: 0 Time: 0.030592 [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(416,16:32,4,1) -> Float(3248,1:4,812,203) *************** [04/18/2022-02:37:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/ClassPredictor/BiasAdd_4:0 -> ) (Reformat) [04/18/2022-02:37:23] [V] [TRT] Tactic: 1002 Time: 0.066688 [04/18/2022-02:37:23] [V] [TRT] Tactic: 0 Time: 0.033152 [04/18/2022-02:37:23] [V] [TRT] Fastest Tactic: 0 Time: 0.033152 [04/18/2022-02:37:23] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:23] [V] [TRT] *************** Autotuning Reformat: Float(576,16,4,1) -> Float(576,1,144,36) *************** [04/18/2022-02:37:23] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_4:0 -> ) (Reformat) [04/18/2022-02:37:23] [V] [TRT] Tactic: 1002 Time: 0.054016 [04/18/2022-02:37:24] [V] [TRT] Tactic: 0 Time: 0.01792 [04/18/2022-02:37:24] [V] [TRT] Fastest Tactic: 0 Time: 0.01792 [04/18/2022-02:37:24] [V] [TRT] *************** Autotuning Reformat: Float(576,16,4,1) -> Float(144,1:4,36,9) *************** [04/18/2022-02:37:24] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_4:0 -> ) (Reformat) [04/18/2022-02:37:24] [V] [TRT] Tactic: 1002 Time: 0.053376 [04/18/2022-02:37:24] [V] [TRT] Tactic: 0 Time: 0.018048 [04/18/2022-02:37:24] [V] [TRT] Fastest Tactic: 0 Time: 0.018048 [04/18/2022-02:37:24] [V] [TRT] *************** Autotuning Reformat: Float(576,16,4,1) -> Float(32,16:32,4,1) *************** [04/18/2022-02:37:24] [V] [TRT] --------------- Timing 
Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_4:0 -> ) (Reformat) [04/18/2022-02:37:24] [V] [TRT] Tactic: 1002 Time: 0.052608 [04/18/2022-02:37:24] [V] [TRT] Tactic: 0 Time: 0.018176 [04/18/2022-02:37:24] [V] [TRT] Fastest Tactic: 0 Time: 0.018176 [04/18/2022-02:37:24] [V] [TRT] *************** Autotuning Reformat: Float(576,1,144,36) -> Float(576,16,4,1) *************** [04/18/2022-02:37:24] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_4:0 -> ) (Reformat) [04/18/2022-02:37:24] [V] [TRT] Tactic: 1002 Time: 0.029568 [04/18/2022-02:37:24] [V] [TRT] Tactic: 0 Time: 0.017792 [04/18/2022-02:37:24] [V] [TRT] Fastest Tactic: 0 Time: 0.017792 [04/18/2022-02:37:24] [V] [TRT] *************** Autotuning Reformat: Float(576,1,144,36) -> Float(144,1:4,36,9) *************** [04/18/2022-02:37:24] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_4:0 -> ) (Reformat) [04/18/2022-02:37:24] [V] [TRT] Tactic: 1002 Time: 0.04992 [04/18/2022-02:37:24] [V] [TRT] Tactic: 0 Time: 0.017664 [04/18/2022-02:37:24] [V] [TRT] Fastest Tactic: 0 Time: 0.017664 [04/18/2022-02:37:24] [V] [TRT] *************** Autotuning Reformat: Float(576,1,144,36) -> Float(32,16:32,4,1) *************** [04/18/2022-02:37:24] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_4:0 -> ) (Reformat) [04/18/2022-02:37:24] [V] [TRT] Tactic: 1002 Time: 0.049408 [04/18/2022-02:37:24] [V] [TRT] Tactic: 0 Time: 0.018176 [04/18/2022-02:37:24] [V] [TRT] Fastest Tactic: 0 Time: 0.018176 [04/18/2022-02:37:24] [V] [TRT] *************** Autotuning Reformat: 
Float(144,1:4,36,9) -> Float(576,16,4,1) *************** [04/18/2022-02:37:24] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_4:0 -> ) (Reformat) [04/18/2022-02:37:24] [V] [TRT] Tactic: 1002 Time: 0.029824 [04/18/2022-02:37:24] [V] [TRT] Tactic: 0 Time: 0.017664 [04/18/2022-02:37:24] [V] [TRT] Fastest Tactic: 0 Time: 0.017664 [04/18/2022-02:37:24] [V] [TRT] *************** Autotuning Reformat: Float(144,1:4,36,9) -> Float(576,1,144,36) *************** [04/18/2022-02:37:24] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_4:0 -> ) (Reformat) [04/18/2022-02:37:24] [V] [TRT] Tactic: 1002 Time: 0.05184 [04/18/2022-02:37:24] [V] [TRT] Tactic: 0 Time: 0.017792 [04/18/2022-02:37:24] [V] [TRT] Fastest Tactic: 0 Time: 0.017792 [04/18/2022-02:37:24] [V] [TRT] *************** Autotuning Reformat: Float(144,1:4,36,9) -> Float(32,16:32,4,1) *************** [04/18/2022-02:37:24] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_4:0 -> ) (Reformat) [04/18/2022-02:37:24] [V] [TRT] Tactic: 1002 Time: 0.048512 [04/18/2022-02:37:24] [V] [TRT] Tactic: 0 Time: 0.018176 [04/18/2022-02:37:24] [V] [TRT] Fastest Tactic: 0 Time: 0.018176 [04/18/2022-02:37:24] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(576,16,4,1) *************** [04/18/2022-02:37:24] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_4:0 -> ) (Reformat) [04/18/2022-02:37:24] [V] [TRT] Tactic: 1002 Time: 0.029824 [04/18/2022-02:37:24] [V] [TRT] Tactic: 0 Time: 0.01792 [04/18/2022-02:37:24] [V] 
[TRT] Fastest Tactic: 0 Time: 0.01792 [04/18/2022-02:37:24] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(576,1,144,36) *************** [04/18/2022-02:37:24] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_4:0 -> ) (Reformat) [04/18/2022-02:37:24] [V] [TRT] Tactic: 1002 Time: 0.050432 [04/18/2022-02:37:24] [V] [TRT] Tactic: 0 Time: 0.017408 [04/18/2022-02:37:24] [V] [TRT] Fastest Tactic: 0 Time: 0.017408 [04/18/2022-02:37:24] [V] [TRT] *************** Autotuning Reformat: Float(32,16:32,4,1) -> Float(144,1:4,36,9) *************** [04/18/2022-02:37:24] [V] [TRT] --------------- Timing Runner: Optimizer Reformat(StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/BoxPredictor/BiasAdd_4:0 -> ) (Reformat) [04/18/2022-02:37:24] [V] [TRT] Tactic: 1002 Time: 0.052096 [04/18/2022-02:37:24] [V] [TRT] Tactic: 0 Time: 0.018048 [04/18/2022-02:37:24] [V] [TRT] Fastest Tactic: 0 Time: 0.018048 [04/18/2022-02:37:24] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:24] [V] [TRT] *************** Autotuning Reformat: Float(3317760,90,1) -> Float(4419360,90,1) *************** [04/18/2022-02:37:24] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape:0 copy (Reformat) [04/18/2022-02:37:24] [V] [TRT] Tactic: 1002 Time: 13.972 [04/18/2022-02:37:24] [V] [TRT] Tactic: 0 Time: 15.1283 [04/18/2022-02:37:24] [V] [TRT] Fastest Tactic: 1002 Time: 13.972 [04/18/2022-02:37:24] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002 [04/18/2022-02:37:24] [V] [TRT] *************** Autotuning Reformat: Float(1,720,8) -> Float(4419360,90,1) *************** [04/18/2022-02:37:24] [V] [TRT] --------------- Timing Runner: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape:0 copy (Reformat) [04/18/2022-02:37:25] [V] [TRT] Tactic: 1002 Time: 21.7222 [04/18/2022-02:37:26] [V] [TRT] Tactic: 0 Time: 65.7638 [04/18/2022-02:37:26] [V] [TRT] Fastest Tactic: 1002 Time: 21.7222 [04/18/2022-02:37:26] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002 [04/18/2022-02:37:26] [V] [TRT] *************** Autotuning Reformat: Float(1:4,180,2) -> Float(4419360,90,1) *************** [04/18/2022-02:37:26] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape:0 copy (Reformat) [04/18/2022-02:37:26] [V] [TRT] Tactic: 1002 Time: 21.718 [04/18/2022-02:37:27] [V] [TRT] Tactic: 0 Time: 66.0723 [04/18/2022-02:37:27] [V] [TRT] Fastest Tactic: 1002 Time: 21.718 [04/18/2022-02:37:27] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002 [04/18/2022-02:37:27] [V] [TRT] *************** Autotuning Reformat: Float(3317760:32,90,1) -> Float(4419360,90,1) *************** [04/18/2022-02:37:27] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape:0 copy (Reformat) [04/18/2022-02:37:28] [V] [TRT] Tactic: 1002 Time: 21.7787 [04/18/2022-02:37:29] [V] [TRT] Tactic: 0 Time: 67.9749 [04/18/2022-02:37:29] [V] [TRT] Fastest Tactic: 1002 Time: 21.7787 [04/18/2022-02:37:29] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002 [04/18/2022-02:37:29] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:29] [V] [TRT] *************** Autotuning Reformat: Float(829440,90,1) -> Float(4419360,90,1) *************** [04/18/2022-02:37:29] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_1:0 copy (Reformat) [04/18/2022-02:37:29] [V] [TRT] Tactic: 1002 
Time: 3.53472 [04/18/2022-02:37:29] [V] [TRT] Tactic: 0 Time: 3.66016 [04/18/2022-02:37:29] [V] [TRT] Fastest Tactic: 1002 Time: 3.53472 [04/18/2022-02:37:29] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002 [04/18/2022-02:37:29] [V] [TRT] *************** Autotuning Reformat: Float(1,720,8) -> Float(4419360,90,1) *************** [04/18/2022-02:37:29] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_1:0 copy (Reformat) [04/18/2022-02:37:29] [V] [TRT] Tactic: 1002 Time: 5.34605 [04/18/2022-02:37:29] [V] [TRT] Tactic: 0 Time: 16.3744 [04/18/2022-02:37:29] [V] [TRT] Fastest Tactic: 1002 Time: 5.34605 [04/18/2022-02:37:29] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002 [04/18/2022-02:37:29] [V] [TRT] *************** Autotuning Reformat: Float(1:4,180,2) -> Float(4419360,90,1) *************** [04/18/2022-02:37:29] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_1:0 copy (Reformat) [04/18/2022-02:37:29] [V] [TRT] Tactic: 1002 Time: 5.92371 [04/18/2022-02:37:30] [V] [TRT] Tactic: 0 Time: 15.9201 [04/18/2022-02:37:30] [V] [TRT] Fastest Tactic: 1002 Time: 5.92371 [04/18/2022-02:37:30] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002 [04/18/2022-02:37:30] [V] [TRT] *************** Autotuning Reformat: Float(829440:32,90,1) -> Float(4419360,90,1) *************** [04/18/2022-02:37:30] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_1:0 copy (Reformat) [04/18/2022-02:37:30] [V] [TRT] Tactic: 1002 Time: 5.37882 [04/18/2022-02:37:30] [V] [TRT] Tactic: 0 Time: 17.1096 [04/18/2022-02:37:30] [V] [TRT] Fastest Tactic: 1002 Time: 5.37882 [04/18/2022-02:37:30] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002 
[04/18/2022-02:37:30] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:30] [V] [TRT] *************** Autotuning Reformat: Float(207360,90,1) -> Float(4419360,90,1) *************** [04/18/2022-02:37:30] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_2:0 copy (Reformat) [04/18/2022-02:37:30] [V] [TRT] Tactic: 1002 Time: 0.855424 [04/18/2022-02:37:30] [V] [TRT] Tactic: 0 Time: 0.883456 [04/18/2022-02:37:30] [V] [TRT] Fastest Tactic: 1002 Time: 0.855424 [04/18/2022-02:37:30] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002 [04/18/2022-02:37:30] [V] [TRT] *************** Autotuning Reformat: Float(1,720,8) -> Float(4419360,90,1) *************** [04/18/2022-02:37:30] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_2:0 copy (Reformat) [04/18/2022-02:37:30] [V] [TRT] Tactic: 1002 Time: 1.37882 [04/18/2022-02:37:30] [V] [TRT] Tactic: 0 Time: 4.07296 [04/18/2022-02:37:30] [V] [TRT] Fastest Tactic: 1002 Time: 1.37882 [04/18/2022-02:37:30] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002 [04/18/2022-02:37:30] [V] [TRT] *************** Autotuning Reformat: Float(1:4,180,2) -> Float(4419360,90,1) *************** [04/18/2022-02:37:30] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_2:0 copy (Reformat) [04/18/2022-02:37:30] [V] [TRT] Tactic: 1002 Time: 1.37792 [04/18/2022-02:37:31] [V] [TRT] Tactic: 0 Time: 4.00333 [04/18/2022-02:37:31] [V] [TRT] Fastest Tactic: 1002 Time: 1.37792 [04/18/2022-02:37:31] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002 [04/18/2022-02:37:31] [V] [TRT] *************** Autotuning Reformat: Float(207360:32,90,1) -> Float(4419360,90,1) *************** [04/18/2022-02:37:31] [V] [TRT] 
--------------- Timing Runner: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_2:0 copy (Reformat) [04/18/2022-02:37:31] [V] [TRT] Tactic: 1002 Time: 1.38982 [04/18/2022-02:37:31] [V] [TRT] Tactic: 0 Time: 4.56448 [04/18/2022-02:37:31] [V] [TRT] Fastest Tactic: 1002 Time: 1.38982 [04/18/2022-02:37:31] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002 [04/18/2022-02:37:31] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:31] [V] [TRT] *************** Autotuning Reformat: Float(51840,90,1) -> Float(4419360,90,1) *************** [04/18/2022-02:37:31] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_3:0 copy (Reformat) [04/18/2022-02:37:31] [V] [TRT] Tactic: 1002 Time: 0.06784 [04/18/2022-02:37:31] [V] [TRT] Tactic: 0 Time: 0.06336 [04/18/2022-02:37:31] [V] [TRT] Fastest Tactic: 0 Time: 0.06336 [04/18/2022-02:37:31] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:31] [V] [TRT] *************** Autotuning Reformat: Float(1,720,8) -> Float(4419360,90,1) *************** [04/18/2022-02:37:31] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_3:0 copy (Reformat) [04/18/2022-02:37:31] [V] [TRT] Tactic: 1002 Time: 0.365568 [04/18/2022-02:37:31] [V] [TRT] Tactic: 0 Time: 0.07744 [04/18/2022-02:37:31] [V] [TRT] Fastest Tactic: 0 Time: 0.07744 [04/18/2022-02:37:31] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:31] [V] [TRT] *************** Autotuning Reformat: Float(1:4,180,2) -> Float(4419360,90,1) *************** [04/18/2022-02:37:31] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_3:0 copy (Reformat) [04/18/2022-02:37:31] [V] 
[TRT] Tactic: 1002 Time: 0.364288 [04/18/2022-02:37:31] [V] [TRT] Tactic: 0 Time: 0.101632 [04/18/2022-02:37:31] [V] [TRT] Fastest Tactic: 0 Time: 0.101632 [04/18/2022-02:37:31] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:31] [V] [TRT] *************** Autotuning Reformat: Float(51840:32,90,1) -> Float(4419360,90,1) *************** [04/18/2022-02:37:31] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_3:0 copy (Reformat) [04/18/2022-02:37:31] [V] [TRT] Tactic: 1002 Time: 0.37312 [04/18/2022-02:37:31] [V] [TRT] Tactic: 0 Time: 1.01645 [04/18/2022-02:37:31] [V] [TRT] Fastest Tactic: 1002 Time: 0.37312 [04/18/2022-02:37:31] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002 [04/18/2022-02:37:31] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:31] [V] [TRT] *************** Autotuning Reformat: Float(12960,90,1) -> Float(4419360,90,1) *************** [04/18/2022-02:37:31] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_4:0 copy (Reformat) [04/18/2022-02:37:31] [V] [TRT] Tactic: 1002 Time: 0.03456 [04/18/2022-02:37:31] [V] [TRT] Tactic: 0 Time: 0.027136 [04/18/2022-02:37:31] [V] [TRT] Fastest Tactic: 0 Time: 0.027136 [04/18/2022-02:37:31] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:31] [V] [TRT] *************** Autotuning Reformat: Float(1,720,8) -> Float(4419360,90,1) *************** [04/18/2022-02:37:31] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_4:0 copy (Reformat) [04/18/2022-02:37:31] [V] [TRT] Tactic: 1002 Time: 0.128512 [04/18/2022-02:37:31] [V] [TRT] Tactic: 0 Time: 0.032256 [04/18/2022-02:37:31] [V] [TRT] Fastest Tactic: 0 Time: 0.032256 
[04/18/2022-02:37:31] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:31] [V] [TRT] *************** Autotuning Reformat: Float(1:4,180,2) -> Float(4419360,90,1) *************** [04/18/2022-02:37:31] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_4:0 copy (Reformat) [04/18/2022-02:37:31] [V] [TRT] Tactic: 1002 Time: 0.12608 [04/18/2022-02:37:31] [V] [TRT] Tactic: 0 Time: 0.03264 [04/18/2022-02:37:31] [V] [TRT] Fastest Tactic: 0 Time: 0.03264 [04/18/2022-02:37:31] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:31] [V] [TRT] *************** Autotuning Reformat: Float(12960:32,90,1) -> Float(4419360,90,1) *************** [04/18/2022-02:37:31] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalClassHead/Reshape_4:0 copy (Reformat) [04/18/2022-02:37:31] [V] [TRT] Tactic: 1002 Time: 0.12672 [04/18/2022-02:37:31] [V] [TRT] Tactic: 0 Time: 0.037504 [04/18/2022-02:37:31] [V] [TRT] Fastest Tactic: 0 Time: 0.037504 [04/18/2022-02:37:31] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:31] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:31] [V] [TRT] *************** Autotuning Reformat: Float(147456,4,1) -> Float(196416,4,1) *************** [04/18/2022-02:37:31] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape:0 copy (Reformat) [04/18/2022-02:37:31] [V] [TRT] Tactic: 1002 Time: 0.68864 [04/18/2022-02:37:31] [V] [TRT] Tactic: 0 Time: 0.633472 [04/18/2022-02:37:31] [V] [TRT] Fastest Tactic: 0 Time: 0.633472 [04/18/2022-02:37:31] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:31] [V] [TRT] *************** Autotuning Reformat: Float(1,32,8) -> 
Float(196416,4,1) *************** [04/18/2022-02:37:31] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape:0 copy (Reformat) [04/18/2022-02:37:31] [V] [TRT] Tactic: 1002 Time: 2.27853 [04/18/2022-02:37:31] [V] [TRT] Tactic: 0 Time: 2.82163 [04/18/2022-02:37:31] [V] [TRT] Fastest Tactic: 1002 Time: 2.27853 [04/18/2022-02:37:31] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002 [04/18/2022-02:37:31] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8,2) -> Float(196416,4,1) *************** [04/18/2022-02:37:31] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape:0 copy (Reformat) [04/18/2022-02:37:31] [V] [TRT] Tactic: 1002 Time: 0.952704 [04/18/2022-02:37:31] [V] [TRT] Tactic: 0 Time: 2.72614 [04/18/2022-02:37:31] [V] [TRT] Fastest Tactic: 1002 Time: 0.952704 [04/18/2022-02:37:31] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002 [04/18/2022-02:37:31] [V] [TRT] *************** Autotuning Reformat: Float(147456:32,4,1) -> Float(196416,4,1) *************** [04/18/2022-02:37:31] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape:0 copy (Reformat) [04/18/2022-02:37:31] [V] [TRT] Tactic: 1002 Time: 1.02541 [04/18/2022-02:37:31] [V] [TRT] Tactic: 0 Time: 3.43859 [04/18/2022-02:37:31] [V] [TRT] Fastest Tactic: 1002 Time: 1.02541 [04/18/2022-02:37:31] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002 [04/18/2022-02:37:31] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:31] [V] [TRT] *************** Autotuning Reformat: Float(36864,4,1) -> Float(196416,4,1) *************** [04/18/2022-02:37:31] [V] [TRT] --------------- Timing Runner: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_1:0 copy (Reformat) [04/18/2022-02:37:31] [V] [TRT] Tactic: 1002 Time: 0.054016 [04/18/2022-02:37:31] [V] [TRT] Tactic: 0 Time: 0.048768 [04/18/2022-02:37:31] [V] [TRT] Fastest Tactic: 0 Time: 0.048768 [04/18/2022-02:37:31] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:31] [V] [TRT] *************** Autotuning Reformat: Float(1,32,8) -> Float(196416,4,1) *************** [04/18/2022-02:37:31] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_1:0 copy (Reformat) [04/18/2022-02:37:31] [V] [TRT] Tactic: 1002 Time: 0.270208 [04/18/2022-02:37:31] [V] [TRT] Tactic: 0 Time: 0.059392 [04/18/2022-02:37:31] [V] [TRT] Fastest Tactic: 0 Time: 0.059392 [04/18/2022-02:37:31] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:31] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8,2) -> Float(196416,4,1) *************** [04/18/2022-02:37:31] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_1:0 copy (Reformat) [04/18/2022-02:37:31] [V] [TRT] Tactic: 1002 Time: 0.269312 [04/18/2022-02:37:31] [V] [TRT] Tactic: 0 Time: 0.061056 [04/18/2022-02:37:31] [V] [TRT] Fastest Tactic: 0 Time: 0.061056 [04/18/2022-02:37:31] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:31] [V] [TRT] *************** Autotuning Reformat: Float(36864:32,4,1) -> Float(196416,4,1) *************** [04/18/2022-02:37:31] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_1:0 copy (Reformat) [04/18/2022-02:37:31] [V] [TRT] Tactic: 1002 Time: 0.283904 [04/18/2022-02:37:31] [V] [TRT] Tactic: 0 Time: 0.425216 [04/18/2022-02:37:31] [V] 
[TRT] Fastest Tactic: 1002 Time: 0.283904 [04/18/2022-02:37:31] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 1002 [04/18/2022-02:37:31] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:31] [V] [TRT] *************** Autotuning Reformat: Float(9216,4,1) -> Float(196416,4,1) *************** [04/18/2022-02:37:31] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_2:0 copy (Reformat) [04/18/2022-02:37:32] [V] [TRT] Tactic: 1002 Time: 0.030464 [04/18/2022-02:37:32] [V] [TRT] Tactic: 0 Time: 0.024576 [04/18/2022-02:37:32] [V] [TRT] Fastest Tactic: 0 Time: 0.024576 [04/18/2022-02:37:32] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:32] [V] [TRT] *************** Autotuning Reformat: Float(1,32,8) -> Float(196416,4,1) *************** [04/18/2022-02:37:32] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_2:0 copy (Reformat) [04/18/2022-02:37:32] [V] [TRT] Tactic: 1002 Time: 0.093824 [04/18/2022-02:37:32] [V] [TRT] Tactic: 0 Time: 0.028544 [04/18/2022-02:37:32] [V] [TRT] Fastest Tactic: 0 Time: 0.028544 [04/18/2022-02:37:32] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:32] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8,2) -> Float(196416,4,1) *************** [04/18/2022-02:37:32] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_2:0 copy (Reformat) [04/18/2022-02:37:32] [V] [TRT] Tactic: 1002 Time: 0.093184 [04/18/2022-02:37:32] [V] [TRT] Tactic: 0 Time: 0.029568 [04/18/2022-02:37:32] [V] [TRT] Fastest Tactic: 0 Time: 0.029568 [04/18/2022-02:37:32] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:32] [V] [TRT] *************** Autotuning 
Reformat: Float(9216:32,4,1) -> Float(196416,4,1) *************** [04/18/2022-02:37:32] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_2:0 copy (Reformat) [04/18/2022-02:37:32] [V] [TRT] Tactic: 1002 Time: 0.092672 [04/18/2022-02:37:32] [V] [TRT] Tactic: 0 Time: 0.031744 [04/18/2022-02:37:32] [V] [TRT] Fastest Tactic: 0 Time: 0.031744 [04/18/2022-02:37:32] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:32] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:32] [V] [TRT] *************** Autotuning Reformat: Float(2304,4,1) -> Float(196416,4,1) *************** [04/18/2022-02:37:32] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_3:0 copy (Reformat) [04/18/2022-02:37:32] [V] [TRT] Tactic: 1002 Time: 0.025728 [04/18/2022-02:37:32] [V] [TRT] Tactic: 0 Time: 0.018816 [04/18/2022-02:37:32] [V] [TRT] Fastest Tactic: 0 Time: 0.018816 [04/18/2022-02:37:32] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:32] [V] [TRT] *************** Autotuning Reformat: Float(1,32,8) -> Float(196416,4,1) *************** [04/18/2022-02:37:32] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_3:0 copy (Reformat) [04/18/2022-02:37:32] [V] [TRT] Tactic: 1002 Time: 0.053632 [04/18/2022-02:37:32] [V] [TRT] Tactic: 0 Time: 0.021248 [04/18/2022-02:37:32] [V] [TRT] Fastest Tactic: 0 Time: 0.021248 [04/18/2022-02:37:32] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:32] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8,2) -> Float(196416,4,1) *************** [04/18/2022-02:37:32] [V] [TRT] --------------- Timing Runner: 
StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_3:0 copy (Reformat) [04/18/2022-02:37:32] [V] [TRT] Tactic: 1002 Time: 0.053632 [04/18/2022-02:37:32] [V] [TRT] Tactic: 0 Time: 0.022144 [04/18/2022-02:37:32] [V] [TRT] Fastest Tactic: 0 Time: 0.022144 [04/18/2022-02:37:32] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:32] [V] [TRT] *************** Autotuning Reformat: Float(2304:32,4,1) -> Float(196416,4,1) *************** [04/18/2022-02:37:32] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_3:0 copy (Reformat) [04/18/2022-02:37:32] [V] [TRT] Tactic: 1002 Time: 0.05568 [04/18/2022-02:37:32] [V] [TRT] Tactic: 0 Time: 0.022656 [04/18/2022-02:37:32] [V] [TRT] Fastest Tactic: 0 Time: 0.022656 [04/18/2022-02:37:32] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:32] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:32] [V] [TRT] *************** Autotuning Reformat: Float(576,4,1) -> Float(196416,4,1) *************** [04/18/2022-02:37:32] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_4:0 copy (Reformat) [04/18/2022-02:37:32] [V] [TRT] Tactic: 1002 Time: 0.0256 [04/18/2022-02:37:32] [V] [TRT] Tactic: 0 Time: 0.017664 [04/18/2022-02:37:32] [V] [TRT] Fastest Tactic: 0 Time: 0.017664 [04/18/2022-02:37:32] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:32] [V] [TRT] *************** Autotuning Reformat: Float(1,32,8) -> Float(196416,4,1) *************** [04/18/2022-02:37:32] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_4:0 copy (Reformat) [04/18/2022-02:37:32] [V] [TRT] Tactic: 1002 Time: 0.055296 
[04/18/2022-02:37:32] [V] [TRT] Tactic: 0 Time: 0.018304 [04/18/2022-02:37:32] [V] [TRT] Fastest Tactic: 0 Time: 0.018304 [04/18/2022-02:37:32] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:32] [V] [TRT] *************** Autotuning Reformat: Float(1:4,8,2) -> Float(196416,4,1) *************** [04/18/2022-02:37:32] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_4:0 copy (Reformat) [04/18/2022-02:37:32] [V] [TRT] Tactic: 1002 Time: 0.052864 [04/18/2022-02:37:32] [V] [TRT] Tactic: 0 Time: 0.018688 [04/18/2022-02:37:32] [V] [TRT] Fastest Tactic: 0 Time: 0.018688 [04/18/2022-02:37:32] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:32] [V] [TRT] *************** Autotuning Reformat: Float(576:32,4,1) -> Float(196416,4,1) *************** [04/18/2022-02:37:32] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/WeightSharedConvolutionalBoxPredictor/WeightSharedConvolutionalBoxHead/Reshape_4:0 copy (Reformat) [04/18/2022-02:37:32] [V] [TRT] Tactic: 1002 Time: 0.05312 [04/18/2022-02:37:32] [V] [TRT] Tactic: 0 Time: 0.019072 [04/18/2022-02:37:32] [V] [TRT] Fastest Tactic: 0 Time: 0.019072 [04/18/2022-02:37:32] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0 [04/18/2022-02:37:32] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:32] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:32] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:32] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:32] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:32] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:32] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:32] [V] [TRT] =============== Computing reformatting costs [04/18/2022-02:37:32] 
[V] [TRT] =============== Computing costs for [04/18/2022-02:37:32] [V] [TRT] *************** Autotuning format combination: -> Float(2,2,2,1,1) *************** [04/18/2022-02:37:32] [V] [TRT] =============== Computing costs for [04/18/2022-02:37:32] [V] [TRT] *************** Autotuning format combination: -> Float(2,2,2,1,1) *************** [04/18/2022-02:37:32] [V] [TRT] =============== Computing costs for [04/18/2022-02:37:32] [V] [TRT] *************** Autotuning format combination: -> Float(2,2,2,1,1) *************** [04/18/2022-02:37:32] [V] [TRT] =============== Computing costs for [04/18/2022-02:37:32] [V] [TRT] *************** Autotuning format combination: -> Float(2,2,2,1,1) *************** [04/18/2022-02:37:32] [V] [TRT] =============== Computing costs for [04/18/2022-02:37:32] [V] [TRT] *************** Autotuning format combination: -> Float(3,3,3,1,1) *************** [04/18/2022-02:37:32] [V] [TRT] =============== Computing costs for [04/18/2022-02:37:32] [V] [TRT] *************** Autotuning format combination: -> Float(3,3,3,1,1) *************** [04/18/2022-02:37:32] [V] [TRT] =============== Computing costs for [04/18/2022-02:37:32] [V] [TRT] *************** Autotuning format combination: -> Float(3,3,3,1,1) *************** [04/18/2022-02:37:32] [V] [TRT] =============== Computing costs for [04/18/2022-02:37:32] [V] [TRT] *************** Autotuning format combination: -> Float(2,2,2,1,1) *************** [04/18/2022-02:37:32] [V] [TRT] =============== Computing costs for [04/18/2022-02:37:32] [V] [TRT] *************** Autotuning format combination: -> Float(2,2,2,1,1) *************** [04/18/2022-02:37:32] [V] [TRT] =============== Computing costs for [04/18/2022-02:37:32] [V] [TRT] *************** Autotuning format combination: -> Float(2,2,2,1,1) *************** [04/18/2022-02:37:32] [V] [TRT] =============== Computing costs for [04/18/2022-02:37:32] [V] [TRT] *************** Autotuning format combination: -> Float(2,2,2,1,1) *************** 
[04/18/2022-02:37:32] [V] [TRT] =============== Computing costs for
[04/18/2022-02:37:32] [V] [TRT] *************** Autotuning format combination: -> Float(2,2,2,1,1) ***************
[04/18/2022-02:37:32] [V] [TRT] =============== Computing costs for
[04/18/2022-02:37:32] [V] [TRT] *************** Autotuning format combination: -> Float(3,3,3,1,1) ***************
[04/18/2022-02:37:32] [V] [TRT] =============== Computing costs for
[04/18/2022-02:37:32] [V] [TRT] *************** Autotuning format combination: -> Float(3,3,3,1,1) ***************
[04/18/2022-02:37:32] [V] [TRT] =============== Computing costs for
[04/18/2022-02:37:32] [V] [TRT] *************** Autotuning format combination: -> Float(3,3,3,1,1) ***************
[04/18/2022-02:37:32] [V] [TRT] =============== Computing costs for
[04/18/2022-02:37:32] [V] [TRT] *************** Autotuning format combination: -> Float(2,2,2,1,1) ***************
[04/18/2022-02:37:32] [V] [TRT] =============== Computing costs for
[04/18/2022-02:37:32] [V] [TRT] *************** Autotuning format combination: -> Float(2,2,2,1,1) ***************
[04/18/2022-02:37:32] [V] [TRT] =============== Computing costs for
[04/18/2022-02:37:32] [V] [TRT] *************** Autotuning format combination: -> Float(2,2,2,1,1) ***************
[04/18/2022-02:37:32] [V] [TRT] =============== Computing costs for
[04/18/2022-02:37:32] [V] [TRT] *************** Autotuning format combination: -> Float(2,2,2,1,1) ***************
[04/18/2022-02:37:32] [V] [TRT] =============== Computing costs for
[04/18/2022-02:37:32] [V] [TRT] *************** Autotuning format combination: -> Float(2,2,2,1,1) ***************
[04/18/2022-02:37:32] [V] [TRT] =============== Computing costs for
[04/18/2022-02:37:32] [V] [TRT] *************** Autotuning format combination: -> Float(3,3,3,1,1) ***************
[04/18/2022-02:37:32] [V] [TRT] =============== Computing costs for
[04/18/2022-02:37:32] [V] [TRT] *************** Autotuning format combination: -> Float(3,3,3,1,1) ***************
[04/18/2022-02:37:32] [V] [TRT] =============== Computing costs for
[04/18/2022-02:37:32] [V] [TRT] *************** Autotuning format combination: -> Float(3,3,3,1,1) ***************
[04/18/2022-02:37:32] [V] [TRT] =============== Computing costs for
[04/18/2022-02:37:32] [V] [TRT] *************** Autotuning format combination: -> Float(2,2,2,1,1) ***************
[04/18/2022-02:37:32] [V] [TRT] =============== Computing costs for
[04/18/2022-02:37:32] [V] [TRT] *************** Autotuning format combination: Float(786432,1536,3,1) -> Float(786432,262144,512,1) ***************
[04/18/2022-02:37:32] [V] [TRT] --------------- Timing Runner: preprocessor/transpose (Shuffle)
[04/18/2022-02:37:32] [V] [TRT] Tactic: 0 Time: 5.1913
[04/18/2022-02:37:32] [V] [TRT] Tactic: 1 Time: 12.8831
[04/18/2022-02:37:32] [V] [TRT] Fastest Tactic: 0 Time: 5.1913
[04/18/2022-02:37:32] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Shuffle Tactic: 0
[04/18/2022-02:37:32] [V] [TRT] *************** Autotuning format combination: Float(786432,1,1536,512) -> Float(786432,1,1536,3) ***************
[04/18/2022-02:37:32] [V] [TRT] --------------- Timing Runner: preprocessor/transpose (Shuffle)
[04/18/2022-02:37:32] [V] [TRT] Tactic: 0 Time: 3.83654
[04/18/2022-02:37:32] [V] [TRT] Tactic: 1 Time: 3.39392
[04/18/2022-02:37:32] [V] [TRT] Fastest Tactic: 1 Time: 3.39392
[04/18/2022-02:37:32] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Shuffle Tactic: 1
[04/18/2022-02:37:32] [V] [TRT] *************** Autotuning format combination: Float(196608,1:4,384,128) -> Float(262144,1:4,512,1) ***************
[04/18/2022-02:37:32] [V] [TRT] --------------- Timing Runner: preprocessor/transpose (Shuffle)
[04/18/2022-02:37:32] [V] [TRT] Tactic: 0 Time: 6.34662
[04/18/2022-02:37:32] [V] [TRT] Tactic: 1 Time: 4.04979
[04/18/2022-02:37:32] [V] [TRT] Fastest Tactic: 1 Time: 4.04979
[04/18/2022-02:37:32] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Shuffle Tactic: 1
[04/18/2022-02:37:32] [V] [TRT] *************** Autotuning format combination: Float(24576,1536:32,3,1) -> Float(262144,262144:32,512,1) ***************
[04/18/2022-02:37:32] [V] [TRT] --------------- Timing Runner: preprocessor/transpose (Shuffle)
[04/18/2022-02:37:36] [V] [TRT] Tactic: 0 Time: 191.129
[04/18/2022-02:37:36] [V] [TRT] Tactic: 1 Time: 20.3864
[04/18/2022-02:37:36] [V] [TRT] Fastest Tactic: 1 Time: 20.3864
[04/18/2022-02:37:36] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Shuffle Tactic: 1
[04/18/2022-02:37:36] [V] [TRT] =============== Computing costs for
[04/18/2022-02:37:36] [V] [TRT] *************** Autotuning format combination: Float(786432,262144,512,1) -> Float(786432,262144,512,1) ***************
[04/18/2022-02:37:36] [V] [TRT] --------------- Timing Runner: preprocessor/scale_value:0 + preprocessor/scale + preprocessor/mean_value:0 + preprocessor/mean (Scale)
[04/18/2022-02:37:36] [V] [TRT] Tactic: 0 Time: 3.36269
[04/18/2022-02:37:36] [V] [TRT] Fastest Tactic: 0 Time: 3.36269
[04/18/2022-02:37:36] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Scale Tactic: 0
[04/18/2022-02:37:36] [V] [TRT] *************** Autotuning format combination: Float(786432,1,1536,3) -> Float(786432,1,1536,3) ***************
[04/18/2022-02:37:36] [V] [TRT] --------------- Timing Runner: preprocessor/scale_value:0 + preprocessor/scale + preprocessor/mean_value:0 + preprocessor/mean (Scale)
[04/18/2022-02:37:36] [V] [TRT] Scale has no valid tactics for this config, skipping
[04/18/2022-02:37:36] [V] [TRT] *************** Autotuning format combination: Float(262144,1:4,512,1) -> Float(262144,1:4,512,1) ***************
[04/18/2022-02:37:36] [V] [TRT] --------------- Timing Runner: preprocessor/scale_value:0 + preprocessor/scale + preprocessor/mean_value:0 + preprocessor/mean (Scale)
[04/18/2022-02:37:36] [V] [TRT] Scale has no valid tactics for this config, skipping
[04/18/2022-02:37:36] [V] [TRT] =============== Computing costs for
[04/18/2022-02:37:36] [V] [TRT] *************** Autotuning format combination: Float(786432,262144,512,1) -> Float(2097152,65536,256,1) ***************
[04/18/2022-02:37:36] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D (CudaDepthwiseConvolution)
[04/18/2022-02:37:36] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping
[04/18/2022-02:37:36] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D (FusedConvActConvolution)
[04/18/2022-02:37:36] [V] [TRT] FusedConvActConvolution has no valid tactics for this config, skipping
[04/18/2022-02:37:36] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D (CudnnConvolution)
[04/18/2022-02:37:38] [V] [TRT] Tactic: 0 Time: 20.3702
[04/18/2022-02:37:38] [V] [TRT] Tactic: 1 Time: 19.4447
[04/18/2022-02:37:39] [V] [TRT] Tactic: 2 Time: 28.1263
[04/18/2022-02:37:40] [V] [TRT] Tactic: 5 Time: 68.0892
[04/18/2022-02:37:40] [V] [TRT] Tactic: 56 Time: 19.4675
[04/18/2022-02:37:41] [V] [TRT] Tactic: 57 Time: 19.4452
[04/18/2022-02:37:41] [V] [TRT] Tactic: 58 Time: 27.7678
[04/18/2022-02:37:42] [V] [TRT] Tactic: 61 Time: 66.1613
[04/18/2022-02:37:42] [V] [TRT] Tactic: 112 Time: 18.9274
[04/18/2022-02:37:43] [V] [TRT] Tactic: 113 Time: 19.4975
[04/18/2022-02:37:43] [V] [TRT] Tactic: 114 Time: 27.7551
[04/18/2022-02:37:44] [V] [TRT] Tactic: 117 Time: 67.3317
[04/18/2022-02:37:44] [V] [TRT] Fastest Tactic: 112 Time: 18.9274
[04/18/2022-02:37:44] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D (CaskConvolution)
[04/18/2022-02:37:44] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D Set Tactic Name: ampere_scudnn_128x64_relu_small_nn_v1 Tactic: 4549827808004681195
[04/18/2022-02:37:44] [V] [TRT] Tactic: 4549827808004681195 Time: 6.9015
[04/18/2022-02:37:44] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D Set Tactic Name: ampere_scudnn_128x128_relu_small_nn_v1 Tactic: 5779835512569528575
[04/18/2022-02:37:45] [V] [TRT] Tactic: 5779835512569528575 Time: 8.32294
[04/18/2022-02:37:45] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D Set Tactic Name: ampere_scudnn_128x128_relu_xregs_large_nn_v1 Tactic: 6053873026024413720
[04/18/2022-02:37:45] [V] [TRT] Tactic: 6053873026024413720 Time: 9.05306
[04/18/2022-02:37:45] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D Set Tactic Name: ampere_scudnn_128x64_relu_xregs_large_nn_v1 Tactic: 6767548733843469815
[04/18/2022-02:37:45] [V] [TRT] Tactic: 6767548733843469815 Time: 7.19731
[04/18/2022-02:37:45] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D Set Tactic Name: ampere_scudnn_128x32_relu_small_nn_v1 Tactic: -6313876406580483184
[04/18/2022-02:37:45] [V] [TRT] Tactic: -6313876406580483184 Time: 7.03846
[04/18/2022-02:37:45] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D Set Tactic Name: ampere_scudnn_128x128_relu_medium_nn_v1 Tactic: -1123676555321336786
[04/18/2022-02:37:45] [V] [TRT] Tactic: -1123676555321336786 Time: 8.76864
[04/18/2022-02:37:45] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D Set Tactic Name: ampere_scudnn_128x64_relu_medium_nn_v1 Tactic: -701551393537224327
[04/18/2022-02:37:45] [V] [TRT] Tactic: -701551393537224327 Time: 6.80845
[04/18/2022-02:37:45] [V] [TRT] Fastest Tactic: -701551393537224327 Time: 6.80845
[04/18/2022-02:37:45] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: -701551393537224327
[04/18/2022-02:37:45] [V] [TRT] *************** Autotuning format combination: Float(786432,1,1536,3) -> Float(2097152,1,8192,32) ***************
[04/18/2022-02:37:45] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D (CudnnConvolution)
[04/18/2022-02:37:45] [V] [TRT] CudnnConvolution has no valid tactics for this config, skipping
[04/18/2022-02:37:45] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D (CaskConvolution)
[04/18/2022-02:37:45] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D Set Tactic Name: ampere_scudnn_128x128_relu_exp_small_nhwc_tn_v1 Tactic: 5778138195697110003
[04/18/2022-02:37:46] [V] [TRT] Tactic: 5778138195697110003 Time: 8.95168
[04/18/2022-02:37:46] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D Set Tactic Name: ampere_scudnn_128x128_relu_exp_large_nhwc_tn_v1 Tactic: -3855385237722507464
[04/18/2022-02:37:46] [V] [TRT] Tactic: -3855385237722507464 Time: 9.0391
[04/18/2022-02:37:46] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D Set Tactic Name: ampere_scudnn_128x128_relu_exp_medium_nhwc_tn_v1 Tactic: -2809379259463049391
[04/18/2022-02:37:46] [V] [TRT] Tactic: -2809379259463049391 Time: 9.04192
[04/18/2022-02:37:46] [V] [TRT] Fastest Tactic: 5778138195697110003 Time: 8.95168
[04/18/2022-02:37:46] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: 5778138195697110003
[04/18/2022-02:37:46] [V] [TRT] *************** Autotuning format combination: Float(262144,1:4,512,1) -> Float(524288,1:4,2048,8) ***************
[04/18/2022-02:37:46] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D (CudnnConvolution)
[04/18/2022-02:37:46] [V] [TRT] CudnnConvolution has no valid tactics for this config, skipping
[04/18/2022-02:37:46] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D (CaskConvolution)
[04/18/2022-02:37:46] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x128x8_stage3_warpsize1x4x1_g1_ffma_t1r3s3 Tactic: 2086609538387166260
[04/18/2022-02:37:46] [V] [TRT] Tactic: 2086609538387166260 Time: 11.1949
[04/18/2022-02:37:46] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D Set Tactic Name: ampere_scudnn_128x64_sliced1x2_ldg4_relu_exp_small_nhwc_tn_v1 Tactic: 2860655430572478466
[04/18/2022-02:37:46] [V] [TRT] Tactic: 2860655430572478466 Time: 10.1235
[04/18/2022-02:37:46] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x128x8_stage3_warpsize1x4x1_g1_ffma Tactic: 3239733199291090177
[04/18/2022-02:37:46] [V] [TRT] Tactic: 3239733199291090177 Time: 11.1905
[04/18/2022-02:37:46] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D Set Tactic Name: ampere_scudnn_128x32_sliced1x4_ldg4_relu_exp_medium_nhwc_tn_v1 Tactic: 4474630279712975759
[04/18/2022-02:37:47] [V] [TRT] Tactic: 4474630279712975759 Time: 10.6612
[04/18/2022-02:37:47] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D Set Tactic Name: ampere_scudnn_128x32_sliced1x4_ldg4_relu_exp_small_nhwc_tn_v1 Tactic: 4479823862704990365
[04/18/2022-02:37:47] [V] [TRT] Tactic: 4479823862704990365 Time: 10.0015
[04/18/2022-02:37:47] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x256x8_stage3_warpsize1x4x1_g1_ffma Tactic: 4517590677127196184
[04/18/2022-02:37:47] [V] [TRT] Tactic: 4517590677127196184 Time: 17.9127
[04/18/2022-02:37:47] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x128x8_stage3_warpsize2x4x1_g1_ffma_t1r3s3 Tactic: 4634080872644479428
[04/18/2022-02:37:47] [V] [TRT] Tactic: 4634080872644479428 Time: 12.503
[04/18/2022-02:37:47] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D Set Tactic Name: ampere_scudnn_128x64_sliced1x2_ldg4_relu_exp_medium_nhwc_tn_v1 Tactic: 4696204239951173149
[04/18/2022-02:37:48] [V] [TRT] Tactic: 4696204239951173149 Time: 9.60794
[04/18/2022-02:37:48] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D Set Tactic Name: ampere_scudnn_128x128_relu_exp_small_nhwc_tn_v1 Tactic: 5778138195697110003
[04/18/2022-02:37:48] [V] [TRT] Tactic: 5778138195697110003 Time: 8.95846
[04/18/2022-02:37:48] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize256x128x8_stage3_warpsize2x4x1_g1_ffma Tactic: 6310198979346901507
[04/18/2022-02:37:48] [V] [TRT] Tactic: 6310198979346901507 Time: 12.783
[04/18/2022-02:37:48] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D Set Tactic Name: ampere_scudnn_128x128_ldg4_relu_exp_large_nhwc_tn_v1 Tactic: 7155825427510256858
[04/18/2022-02:37:48] [V] [TRT] Tactic: 7155825427510256858 Time: 8.74342
[04/18/2022-02:37:48] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_indexed_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x64x8_stage3_warpsize1x4x1_g1_ffma Tactic: 7222247112373541608
[04/18/2022-02:37:48] [V] [TRT] Tactic: 7222247112373541608 Time: 7.16851
[04/18/2022-02:37:48] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_indexed_f32f32_tf32f32_f32_nhwckrsc_nhwc_tilesize128x128x16_stage4_warpsize2x2x1_g1_tensor16x8x8 Tactic: 7342025736444949634
[04/18/2022-02:37:49] [V] [TRT] Tactic: 7342025736444949634 Time: 13.5167
[04/18/2022-02:37:49] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_indexed_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x128x8_stage3_warpsize2x4x1_g1_ffma Tactic: 7472640475524677095
[04/18/2022-02:37:49] [V] [TRT] Tactic: 7472640475524677095 Time: 12.9709
[04/18/2022-02:37:49] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_indexed_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x256x8_stage3_warpsize2x4x1_g1_ffma Tactic: 8498373915030836990
[04/18/2022-02:37:49] [V] [TRT] Tactic: 8498373915030836990 Time: 21.0537
[04/18/2022-02:37:49] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_indexed_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize256x64x8_stage3_warpsize1x4x1_g1_ffma Tactic: 8869697132622550639
[04/18/2022-02:37:49] [V] [TRT] Tactic: 8869697132622550639 Time: 12.0091
[04/18/2022-02:37:49] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D Set Tactic Name: ampere_scudnn_128x128_ldg4_relu_exp_medium_nhwc_tn_v1 Tactic: 8918020581761223752
[04/18/2022-02:37:50] [V] [TRT] Tactic: 8918020581761223752 Time: 8.64858
[04/18/2022-02:37:50] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize256x128x8_stage3_warpsize2x4x1_g1_ffma_t1r3s3 Tactic: -8937725997228636978
[04/18/2022-02:37:50] [V] [TRT] Tactic: -8937725997228636978 Time: 11.746
[04/18/2022-02:37:50] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x256x8_stage3_warpsize2x4x1_g1_ffma_t1r3s3 Tactic: -8833858409138163072
[04/18/2022-02:37:50] [V] [TRT] Tactic: -8833858409138163072 Time: 19.8065
[04/18/2022-02:37:50] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x64x8_stage3_warpsize1x4x1_g1_ffma_t1r3s3 Tactic: -7989138351613022500
[04/18/2022-02:37:50] [V] [TRT] Tactic: -7989138351613022500 Time: 7.80557
[04/18/2022-02:37:50] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_indexed_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize256x128x8_stage3_warpsize2x4x1_g1_ffma Tactic: -7872883691240863058
[04/18/2022-02:37:51] [V] [TRT] Tactic: -7872883691240863058 Time: 12.7789
[04/18/2022-02:37:51] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_tf32f32_f32_nhwckrsc_nhwc_tilesize128x128x16_stage4_warpsize2x2x1_g1_tensor16x8x8_t1r3s3 Tactic: -7377458734869418330
[04/18/2022-02:37:51] [V] [TRT] Tactic: -7377458734869418330 Time: 12.6154
[04/18/2022-02:37:51] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x128x8_stage3_warpsize2x4x1_g1_ffma Tactic: -6729618519651721910
[04/18/2022-02:37:51] [V] [TRT] Tactic: -6729618519651721910 Time: 12.6852
[04/18/2022-02:37:51] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize256x64x8_stage3_warpsize1x4x1_g1_ffma Tactic: -5893833996418445881
[04/18/2022-02:37:51] [V] [TRT] Tactic: -5893833996418445881 Time: 11.5487
[04/18/2022-02:37:51] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x256x8_stage3_warpsize2x4x1_g1_ffma Tactic: -5701562095007058349
[04/18/2022-02:37:52] [V] [TRT] Tactic: -5701562095007058349 Time: 20.0915
[04/18/2022-02:37:52] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x64x8_stage3_warpsize1x4x1_g1_ffma Tactic: -5685503422376017600
[04/18/2022-02:37:52] [V] [TRT] Tactic: -5685503422376017600 Time: 6.99059
[04/18/2022-02:37:52] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_indexed_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x64x8_stage3_warpsize1x4x1_g1_ffma Tactic: -5521125187060117489
[04/18/2022-02:37:52] [V] [TRT] Tactic: -5521125187060117489 Time: 7.88646
[04/18/2022-02:37:52] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_tf32f32_f32_nhwckrsc_nhwc_tilesize128x128x16_stage4_warpsize2x2x1_g1_tensor16x8x8 Tactic: -5457304872213719461
[04/18/2022-02:37:52] [V] [TRT] Tactic: -5457304872213719461 Time: 13.168
[04/18/2022-02:37:52] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D Set Tactic Name: ampere_scudnn_128x64_sliced1x2_ldg4_relu_exp_large_nhwc_tn_v1 Tactic: -4756382386362004279
[04/18/2022-02:37:52] [V] [TRT] Tactic: -4756382386362004279 Time: 10.0512
[04/18/2022-02:37:52] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x64x8_stage3_warpsize1x4x1_g1_ffma Tactic: -4615000974950361663
[04/18/2022-02:37:52] [V] [TRT] Tactic: -4615000974950361663 Time: 8.18778
[04/18/2022-02:37:52] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize256x64x8_stage3_warpsize1x4x1_g1_ffma_t1r3s3 Tactic: -4314913710375142296
[04/18/2022-02:37:53] [V] [TRT] Tactic: -4314913710375142296 Time: 9.9593
[04/18/2022-02:37:53] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D Set Tactic Name: ampere_scudnn_128x128_relu_exp_large_nhwc_tn_v1 Tactic: -3855385237722507464
[04/18/2022-02:37:53] [V] [TRT] Tactic: -3855385237722507464 Time: 9.53638
[04/18/2022-02:37:53] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x64x8_stage3_warpsize1x4x1_g1_ffma_t1r3s3 Tactic: -3697587361057948972
[04/18/2022-02:37:53] [V] [TRT] Tactic: -3697587361057948972 Time: 6.88883
[04/18/2022-02:37:53] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D Set Tactic Name: ampere_scudnn_128x128_relu_exp_medium_nhwc_tn_v1 Tactic: -2809379259463049391
[04/18/2022-02:37:53] [V] [TRT] Tactic: -2809379259463049391 Time: 9.00173
[04/18/2022-02:37:53] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x256x8_stage3_warpsize1x4x1_g1_ffma_t1r3s3 Tactic: -2747929399988666512
[04/18/2022-02:37:53] [V] [TRT] Tactic: -2747929399988666512 Time: 17.4267
[04/18/2022-02:37:53] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_indexed_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x256x8_stage3_warpsize1x4x1_g1_ffma Tactic: -1472061967969061456
[04/18/2022-02:37:54] [V] [TRT] Tactic: -1472061967969061456 Time: 18.416
[04/18/2022-02:37:54] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D Set Tactic Name: ampere_scudnn_128x128_ldg4_relu_exp_small_nhwc_tn_v1 Tactic: -504296718212024303
[04/18/2022-02:37:54] [V] [TRT] Tactic: -504296718212024303 Time: 8.6729
[04/18/2022-02:37:54] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_indexed_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x128x8_stage3_warpsize1x4x1_g1_ffma Tactic: -444093195553988951
[04/18/2022-02:37:54] [V] [TRT] Tactic: -444093195553988951 Time: 10.9363
[04/18/2022-02:37:54] [V] [TRT] Fastest Tactic: -3697587361057948972 Time: 6.88883
[04/18/2022-02:37:54] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: -3697587361057948972
[04/18/2022-02:37:54] [V] [TRT] =============== Computing costs for
[04/18/2022-02:37:54] [V] [TRT] *************** Autotuning format combination: Float(2097152,65536,256,1) -> Float(2097152,65536,256,1) ***************
[04/18/2022-02:37:54] [V] [TRT] --------------- Timing Runner: PWN(PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_activation/Sigmoid), StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_activation/mul) (PointWiseV2)
[04/18/2022-02:37:54] [V] [TRT] Tactic: 0 Time: 9.43386
[04/18/2022-02:37:55] [V] [TRT] Tactic: 1 Time: 9.24083
[04/18/2022-02:37:55] [V] [TRT] Tactic: 2 Time: 8.82816
[04/18/2022-02:37:55] [V] [TRT] Tactic: 3 Time: 9.0519
[04/18/2022-02:37:56] [V] [TRT] Tactic: 4 Time: 8.91725
[04/18/2022-02:37:56] [V] [TRT] Tactic: 5 Time: 8.86016
[04/18/2022-02:37:56] [V] [TRT] Tactic: 6 Time: 9.15981
[04/18/2022-02:37:57] [V] [TRT] Tactic: 7 Time: 8.96896
[04/18/2022-02:37:57] [V] [TRT] Tactic: 8 Time: 8.85645
[04/18/2022-02:37:57] [V] [TRT] Tactic: 9 Time: 8.87206
[04/18/2022-02:37:58] [V] [TRT] Tactic: 28 Time: 9.37626
[04/18/2022-02:37:58] [V] [TRT] Fastest Tactic: 2 Time: 8.82816
[04/18/2022-02:37:58] [V] [TRT] --------------- Timing Runner: PWN(PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_activation/Sigmoid), StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_activation/mul) (PointWise)
[04/18/2022-02:37:58] [V] [TRT] Tactic: 128 Time: 9.40557
[04/18/2022-02:37:58] [V] [TRT] Tactic: 256 Time: 9.34515
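The fused `PWN(PWN(.../stem_activation/Sigmoid), .../stem_activation/mul)` node being autotuned here is the element-wise swish/SiLU activation, `x * sigmoid(x)`, used throughout EfficientNet-style backbones; TensorRT has fused the TensorFlow `Sigmoid` and `mul` ops into a single point-wise kernel and is now timing candidate implementations for it. A minimal numerical sketch of the fused function (plain Python for illustration, not TensorRT code):

```python
import math

def swish(x: float) -> float:
    """x * sigmoid(x): the function computed by the fused PWN(Sigmoid, mul) node."""
    return x * (1.0 / (1.0 + math.exp(-x)))

print(swish(0.0))  # -> 0.0 (sigmoid(0) = 0.5, so 0 * 0.5 = 0)
```

Fusing the two ops matters because the activation is memory-bound: one kernel reads and writes the 8x256x256x32 stem tensor once instead of twice.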
[04/18/2022-02:37:58] [V] [TRT] Tactic: 512 Time: 9.12819
[04/18/2022-02:37:58] [V] [TRT] Tactic: -32 Time: 8.896
[04/18/2022-02:37:59] [V] [TRT] Tactic: -64 Time: 9.09632
[04/18/2022-02:37:59] [V] [TRT] Tactic: -128 Time: 9.14534
[04/18/2022-02:37:59] [V] [TRT] Fastest Tactic: -32 Time: 8.896
[04/18/2022-02:37:59] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: PointWiseV2 Tactic: 2
[04/18/2022-02:37:59] [V] [TRT] *************** Autotuning format combination: Float(2097152,1,8192,32) -> Float(2097152,1,8192,32) ***************
[04/18/2022-02:37:59] [V] [TRT] --------------- Timing Runner: PWN(PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_activation/Sigmoid), StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_activation/mul) (PointWiseV2)
[04/18/2022-02:37:59] [V] [TRT] Tactic: 0 Time: 9.41248
[04/18/2022-02:37:59] [V] [TRT] Tactic: 1 Time: 9.18848
[04/18/2022-02:37:59] [V] [TRT] Tactic: 2 Time: 8.90573
[04/18/2022-02:38:00] [V] [TRT] Tactic: 3 Time: 9.15341
[04/18/2022-02:38:00] [V] [TRT] Tactic: 4 Time: 10.0132
[04/18/2022-02:38:00] [V] [TRT] Tactic: 5 Time: 8.94899
[04/18/2022-02:38:00] [V] [TRT] Tactic: 6 Time: 9.08902
[04/18/2022-02:38:00] [V] [TRT] Tactic: 7 Time: 8.9449
[04/18/2022-02:38:00] [V] [TRT] Tactic: 8 Time: 8.96154
[04/18/2022-02:38:01] [V] [TRT] Tactic: 9 Time: 9.85549
[04/18/2022-02:38:01] [V] [TRT] Tactic: 28 Time: 9.31072
[04/18/2022-02:38:01] [V] [TRT] Fastest Tactic: 2 Time: 8.90573
[04/18/2022-02:38:01] [V] [TRT] --------------- Timing Runner: PWN(PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_activation/Sigmoid), StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_activation/mul) (PointWise)
[04/18/2022-02:38:01] [V] [TRT] Tactic: 128 Time: 9.43987
[04/18/2022-02:38:01] [V] [TRT] Tactic: 256 Time: 9.30304
[04/18/2022-02:38:01] [V] [TRT] Tactic: 512 Time: 9.30662
[04/18/2022-02:38:01] [V] [TRT] Tactic: -32 Time: 8.99123
[04/18/2022-02:38:02] [V] [TRT] Tactic: -64 Time: 9.06854
[04/18/2022-02:38:02] [V] [TRT] Tactic: -128 Time: 9.03386
[04/18/2022-02:38:02] [V] [TRT] Fastest Tactic: -32 Time: 8.99123
[04/18/2022-02:38:02] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: PointWiseV2 Tactic: 2
[04/18/2022-02:38:02] [V] [TRT] *************** Autotuning format combination: Float(524288,1:4,2048,8) -> Float(524288,1:4,2048,8) ***************
[04/18/2022-02:38:02] [V] [TRT] --------------- Timing Runner: PWN(PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_activation/Sigmoid), StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_activation/mul) (PointWiseV2)
[04/18/2022-02:38:02] [V] [TRT] Tactic: 0 Time: 10.9663
[04/18/2022-02:38:03] [V] [TRT] Tactic: 1 Time: 11.058
[04/18/2022-02:38:03] [V] [TRT] Tactic: 2 Time: 14.2638
[04/18/2022-02:38:03] [V] [TRT] Tactic: 3 Time: 10.9606
[04/18/2022-02:38:04] [V] [TRT] Tactic: 4 Time: 13.5039
[04/18/2022-02:38:04] [V] [TRT] Tactic: 5 Time: 15.1656
[04/18/2022-02:38:05] [V] [TRT] Tactic: 6 Time: 10.9649
[04/18/2022-02:38:05] [V] [TRT] Tactic: 7 Time: 13.3158
[04/18/2022-02:38:05] [V] [TRT] Tactic: 8 Time: 14.7849
[04/18/2022-02:38:06] [V] [TRT] Tactic: 9 Time: 15.5444
[04/18/2022-02:38:06] [V] [TRT] Tactic: 10 Time: 9.40506
[04/18/2022-02:38:07] [V] [TRT] Tactic: 11 Time: 9.47072
[04/18/2022-02:38:07] [V] [TRT] Tactic: 12 Time: 10.9169
[04/18/2022-02:38:07] [V] [TRT] Tactic: 13 Time: 9.92243
[04/18/2022-02:38:08] [V] [TRT] Tactic: 14 Time: 10.9472
[04/18/2022-02:38:08] [V] [TRT] Tactic: 15 Time: 13.7324
[04/18/2022-02:38:08] [V] [TRT] Tactic: 16 Time: 9.58477
[04/18/2022-02:38:09] [V] [TRT] Tactic: 17 Time: 11.0435
[04/18/2022-02:38:09] [V] [TRT] Tactic: 18 Time: 13.5962
[04/18/2022-02:38:10] [V] [TRT] Tactic: 19 Time: 14.4982
[04/18/2022-02:38:10] [V] [TRT] Tactic: 20 Time: 9.36794
[04/18/2022-02:38:10] [V] [TRT] Tactic: 21 Time: 9.31226
[04/18/2022-02:38:11] [V] [TRT] Tactic: 22 Time: 9.19923
[04/18/2022-02:38:11] [V] [TRT] Tactic: 23 Time: 9.16915
[04/18/2022-02:38:11] [V] [TRT] Tactic: 28 Time: 10.9234
[04/18/2022-02:38:12] [V] [TRT] Tactic: 29 Time: 9.31379
[04/18/2022-02:38:12] [V] [TRT] Tactic: 30 Time: 9.35718
[04/18/2022-02:38:12] [V] [TRT] Fastest Tactic: 23 Time: 9.16915
[04/18/2022-02:38:12] [V] [TRT] --------------- Timing Runner: PWN(PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_activation/Sigmoid), StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_activation/mul) (PointWise)
[04/18/2022-02:38:12] [V] [TRT] Tactic: 128 Time: 9.92512
[04/18/2022-02:38:12] [V] [TRT] Tactic: 256 Time: 9.85766
[04/18/2022-02:38:13] [V] [TRT] Tactic: 512 Time: 9.71814
[04/18/2022-02:38:13] [V] [TRT] Tactic: -32 Time: 9.19821
[04/18/2022-02:38:13] [V] [TRT] Tactic: -64 Time: 9.40608
[04/18/2022-02:38:13] [V] [TRT] Tactic: -128 Time: 9.62637
[04/18/2022-02:38:13] [V] [TRT] Fastest Tactic: -32 Time: 9.19821
[04/18/2022-02:38:13] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: PointWiseV2 Tactic: 23
[04/18/2022-02:38:13] [V] [TRT] *************** Autotuning format combination: Float(65536,65536:32,256,1) -> Float(65536,65536:32,256,1) ***************
[04/18/2022-02:38:13] [V] [TRT] --------------- Timing Runner: PWN(PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_activation/Sigmoid), StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_activation/mul) (PointWiseV2)
[04/18/2022-02:38:13] [V] [TRT] Tactic: 24 Time: 10.9572
[04/18/2022-02:38:14] [V] [TRT] Tactic: 25 Time: 11.0765
[04/18/2022-02:38:14] [V] [TRT] Tactic: 26 Time: 11.0436
[04/18/2022-02:38:15] [V] [TRT] Tactic: 27 Time: 11.0697
[04/18/2022-02:38:15] [V] [TRT] Tactic: 31 Time: 10.9523
[04/18/2022-02:38:15] [V] [TRT] Fastest Tactic: 31 Time: 10.9523
[04/18/2022-02:38:15] [V] [TRT] --------------- Timing Runner: PWN(PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_activation/Sigmoid), StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_activation/mul) (PointWise)
[04/18/2022-02:38:15] [V] [TRT] Tactic: 128 Time: 9.33043
[04/18/2022-02:38:15] [V] [TRT] Tactic: 256 Time: 9.36243
[04/18/2022-02:38:15] [V] [TRT] Tactic: 512 Time: 9.27411
[04/18/2022-02:38:16] [V] [TRT] Tactic: -32 Time: 8.8928
[04/18/2022-02:38:16] [V] [TRT] Tactic: -64 Time: 9.1232
[04/18/2022-02:38:16] [V] [TRT] Tactic: -128 Time: 9.13178
[04/18/2022-02:38:16] [V] [TRT] Fastest Tactic: -32 Time: 8.8928
[04/18/2022-02:38:16] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: PointWise Tactic: -32
[04/18/2022-02:38:16] [V] [TRT] *************** Autotuning format combination: Float(1:4,131072,512,2) -> Float(1:4,131072,512,2) ***************
[04/18/2022-02:38:16] [V] [TRT] --------------- Timing Runner: PWN(PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_activation/Sigmoid), StatefulPartitionedCall/EfficientDet-D0/functional_1/stem_activation/mul) (PointWiseV2)
[04/18/2022-02:38:16] [V] [TRT] Tactic: 0 Time: 10.9267
[04/18/2022-02:38:16] [V] [TRT] Tactic: 1 Time: 11.5141
[04/18/2022-02:38:17] [V] [TRT] Tactic: 2 Time: 14.2586
[04/18/2022-02:38:17] [V] [TRT] Tactic: 3 Time: 10.9574
[04/18/2022-02:38:17] [V] [TRT] Tactic: 4 Time: 13.9875
[04/18/2022-02:38:17] [V] [TRT] Tactic: 5 Time: 15.2581
[04/18/2022-02:38:18] [V] [TRT] Tactic: 6 Time: 10.9285
[04/18/2022-02:38:18] [V] [TRT] Tactic: 7 Time: 13.8467
[04/18/2022-02:38:18] [V] [TRT] Tactic: 8 Time: 14.8577
[04/18/2022-02:38:18] [V] [TRT] Tactic: 9 Time: 15.6178
[04/18/2022-02:38:18] [V] [TRT] Tactic: 10 Time: 9.74707
[04/18/2022-02:38:19] [V] [TRT] Tactic: 11 Time: 9.99206
[04/18/2022-02:38:19] [V] [TRT] Tactic: 12 Time: 11.3192
[04/18/2022-02:38:19] [V] [TRT] Tactic: 13 Time: 9.92589
[04/18/2022-02:38:20] [V] [TRT] Tactic: 14 Time: 10.9723
[04/18/2022-02:38:20] [V] [TRT] Tactic: 15 Time: 13.839
[04/18/2022-02:38:20] [V] [TRT] Tactic: 16 Time: 9.52026
[04/18/2022-02:38:20] [V] [TRT] Tactic: 17 Time: 11.5711
[04/18/2022-02:38:20] [V] [TRT] Tactic: 18 Time: 13.8273
[04/18/2022-02:38:21] [V] [TRT] Tactic: 19 Time: 14.4604
[04/18/2022-02:38:21] [V] [TRT] Tactic: 20 Time: 9.83961
[04/18/2022-02:38:21] [V] [TRT] Tactic: 21 Time: 9.0441
[04/18/2022-02:38:21] [V] [TRT] Tactic: 22 Time: 9.60768
[04/18/2022-02:38:21] [V] [TRT] Tactic: 23 Time: 9.22253
[04/18/2022-02:38:21] [V] [TRT] Tactic: 28 Time: 11.3523
[04/18/2022-02:38:22] [V] [TRT] Tactic: 29 Time: 9.8048
[04/18/2022-02:38:22] [V] [TRT] Tactic: 30 Time: 9.6649
[04/18/2022-02:38:22] [V] [TRT] Fastest Tactic: 21 Time: 9.0441
[04/18/2022-02:38:22] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: PointWiseV2 Tactic: 21
[04/18/2022-02:38:22] [V] [TRT] =============== Computing costs for
[04/18/2022-02:38:22] [V] [TRT] *************** Autotuning format combination: Float(2097152,65536,256,1) -> Float(2097152,65536,256,1) ***************
[04/18/2022-02:38:22] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_conv2d/depthwise (CudaDepthwiseConvolution)
[04/18/2022-02:38:22] [V] [TRT] Tactic: -1 Time: 9.91872
[04/18/2022-02:38:22] [V] [TRT] Fastest Tactic: -1 Time: 9.91872
[04/18/2022-02:38:22] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_conv2d/depthwise (CudnnConvolution)
[04/18/2022-02:38:22] [V] [TRT] Tactic: 0 Time: 18.558
[04/18/2022-02:38:23] [V] [TRT] Tactic: 1 Time: 18.7274
[04/18/2022-02:38:23] [V] [TRT] Tactic: 2 Time: 17.8035
[04/18/2022-02:38:35] [V] [TRT] Tactic: 5 Time: 741.543
[04/18/2022-02:38:46] [V] [TRT] Tactic: 6 Time: 721.607
[04/18/2022-02:38:47] [V] [TRT] Tactic: 56 Time: 18.7094
[04/18/2022-02:38:47] [V] [TRT] Tactic: 57 Time: 18.7992
[04/18/2022-02:38:47] [V] [TRT] Tactic: 58 Time: 17.8519
[04/18/2022-02:39:00] [V] [TRT] Tactic: 61 Time: 739.969
[04/18/2022-02:39:11] [V] [TRT] Tactic: 62 Time: 721.826
[04/18/2022-02:39:11] [V] [TRT] Tactic: 112 Time: 18.6373
[04/18/2022-02:39:12] [V] [TRT] Tactic: 113 Time: 19.6726
[04/18/2022-02:39:12] [V] [TRT] Tactic: 114 Time: 17.7887
[04/18/2022-02:39:24] [V] [TRT] Tactic: 117 Time: 737.744
[04/18/2022-02:39:37] [V] [TRT] Tactic: 118 Time: 790.855
[04/18/2022-02:39:37] [V] [TRT] Fastest Tactic: 114 Time: 17.7887
[04/18/2022-02:39:37] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_conv2d/depthwise (CaskConvolution)
[04/18/2022-02:39:37] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_conv2d/depthwise Set Tactic Name: ampere_scudnn_128x64_relu_small_nn_v1 Tactic: 4549827808004681195
[04/18/2022-02:39:38] [V] [TRT] Tactic: 4549827808004681195 Time: 69.8418
[04/18/2022-02:39:38] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_conv2d/depthwise Set Tactic Name: ampere_scudnn_128x128_relu_small_nn_v1 Tactic: 5779835512569528575
[04/18/2022-02:39:41] [V] [TRT] Tactic: 5779835512569528575 Time: 149.79
[04/18/2022-02:39:41] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_conv2d/depthwise Set Tactic Name: ampere_scudnn_128x128_relu_xregs_large_nn_v1 Tactic: 6053873026024413720
[04/18/2022-02:39:44] [V] [TRT] Tactic: 6053873026024413720 Time: 215.276
[04/18/2022-02:39:44] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_conv2d/depthwise Set Tactic Name: ampere_scudnn_128x64_relu_xregs_large_nn_v1 Tactic: 6767548733843469815
[04/18/2022-02:39:45] [V] [TRT] Tactic: 6767548733843469815 Time: 80.1367
[04/18/2022-02:39:45] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_conv2d/depthwise Set Tactic Name: ampere_scudnn_128x32_relu_small_nn_v1 Tactic: -6313876406580483184
[04/18/2022-02:39:46] [V] [TRT] Tactic: -6313876406580483184 Time: 35.0656
[04/18/2022-02:39:46] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_conv2d/depthwise Set Tactic Name: ampere_scudnn_128x128_relu_medium_nn_v1 Tactic: -1123676555321336786
[04/18/2022-02:39:48] [V] [TRT] Tactic: -1123676555321336786 Time: 
150.528 [04/18/2022-02:39:48] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_conv2d/depthwise Set Tactic Name: ampere_scudnn_128x64_relu_medium_nn_v1 Tactic: -701551393537224327 [04/18/2022-02:39:50] [V] [TRT] Tactic: -701551393537224327 Time: 70.8381 [04/18/2022-02:39:50] [V] [TRT] Fastest Tactic: -6313876406580483184 Time: 35.0656 [04/18/2022-02:39:50] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CudaDepthwiseConvolution Tactic: -1 [04/18/2022-02:39:50] [V] [TRT] *************** Autotuning format combination: Float(2097152,1,8192,32) -> Float(2097152,1,8192,32) *************** [04/18/2022-02:39:50] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_conv2d/depthwise (CudnnConvolution) [04/18/2022-02:39:50] [V] [TRT] CudnnConvolution has no valid tactics for this config, skipping [04/18/2022-02:39:50] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_conv2d/depthwise (CaskConvolution) [04/18/2022-02:39:50] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_conv2d/depthwise Set Tactic Name: ampere_scudnn_128x128_relu_exp_small_nhwc_tn_v1 Tactic: 5778138195697110003 [04/18/2022-02:39:54] [V] [TRT] Tactic: 5778138195697110003 Time: 298.287 [04/18/2022-02:39:54] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_conv2d/depthwise Set Tactic Name: ampere_scudnn_128x128_relu_exp_large_nhwc_tn_v1 Tactic: -3855385237722507464 [04/18/2022-02:39:59] [V] [TRT] Tactic: -3855385237722507464 Time: 302.727 [04/18/2022-02:39:59] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_conv2d/depthwise Set Tactic Name: ampere_scudnn_128x128_relu_exp_medium_nhwc_tn_v1 Tactic: -2809379259463049391 [04/18/2022-02:40:04] [V] [TRT] Tactic: -2809379259463049391 Time: 299.383 [04/18/2022-02:40:04] [V] 
[TRT] Fastest Tactic: 5778138195697110003 Time: 298.287 [04/18/2022-02:40:04] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: 5778138195697110003 [04/18/2022-02:40:04] [V] [TRT] *************** Autotuning format combination: Float(524288,1:4,2048,8) -> Float(524288,1:4,2048,8) *************** [04/18/2022-02:40:04] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_conv2d/depthwise (CudnnConvolution) [04/18/2022-02:40:04] [V] [TRT] CudnnConvolution has no valid tactics for this config, skipping [04/18/2022-02:40:04] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/depthwise_conv2d/depthwise (CaskConvolution) [04/18/2022-02:40:04] [V] [TRT] CaskConvolution has no valid tactics for this config, skipping [04/18/2022-02:40:04] [V] [TRT] =============== Computing costs for [04/18/2022-02:40:04] [V] [TRT] *************** Autotuning format combination: Float(2097152,65536,256,1) -> Float(2097152,65536,256,1) *************** [04/18/2022-02:40:04] [V] [TRT] *************** Autotuning format combination: Float(2097152,1,8192,32) -> Float(2097152,1,8192,32) *************** [04/18/2022-02:40:04] [V] [TRT] *************** Autotuning format combination: Float(524288,1:4,2048,8) -> Float(524288,1:4,2048,8) *************** [04/18/2022-02:40:04] [V] [TRT] *************** Autotuning format combination: Float(65536,65536:32,256,1) -> Float(65536,65536:32,256,1) *************** [04/18/2022-02:40:04] [V] [TRT] *************** Autotuning format combination: Float(1:4,131072,512,2) -> Float(1:4,131072,512,2) *************** [04/18/2022-02:40:04] [V] [TRT] =============== Computing costs for [04/18/2022-02:40:04] [V] [TRT] *************** Autotuning format combination: Float(2097152,65536,256,1) -> Float(32,1,1,1) *************** [04/18/2022-02:40:04] [V] [TRT] --------------- Timing Runner: 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_squeeze/Mean (TiledPooling) [04/18/2022-02:40:04] [V] [TRT] TiledPooling has no valid tactics for this config, skipping [04/18/2022-02:40:04] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_squeeze/Mean (CudnnPooling) [04/18/2022-02:40:04] [V] [TRT] Tactic: -1 Time: 3.95648 [04/18/2022-02:40:04] [V] [TRT] Fastest Tactic: -1 Time: 3.95648 [04/18/2022-02:40:04] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CudnnPooling Tactic: -1 [04/18/2022-02:40:04] [V] [TRT] =============== Computing costs for [04/18/2022-02:40:04] [V] [TRT] *************** Autotuning format combination: Float(32,1,1,1) -> Float(8,1,1,1) *************** [04/18/2022-02:40:04] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd (CudaDepthwiseConvolution) [04/18/2022-02:40:04] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping [04/18/2022-02:40:04] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd (FusedConvActConvolution) [04/18/2022-02:40:04] [V] [TRT] FusedConvActConvolution has no valid tactics for this config, skipping [04/18/2022-02:40:04] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd (CudnnConvolution) [04/18/2022-02:40:04] [V] [TRT] Tactic: 0 Time: 0.046464 [04/18/2022-02:40:04] [V] [TRT] Tactic: 1 Time: 0.033536 [04/18/2022-02:40:04] [V] [TRT] Tactic: 2 Time: 0.075904 [04/18/2022-02:40:04] [V] [TRT] Tactic: 4 Time: 0.281856 [04/18/2022-02:40:04] [V] [TRT] Tactic: 5 Time: 0.208768 [04/18/2022-02:40:04] [V] [TRT] Tactic: 56 Time: 0.046592 [04/18/2022-02:40:04] [V] [TRT] Tactic: 57 Time: 0.032512 [04/18/2022-02:40:04] [V] [TRT] Tactic: 58 Time: 0.075776 [04/18/2022-02:40:04] [V] 
[TRT] Tactic: 60 Time: 0.279168 [04/18/2022-02:40:04] [V] [TRT] Tactic: 61 Time: 0.20992 [04/18/2022-02:40:04] [V] [TRT] Tactic: 112 Time: 0.046848 [04/18/2022-02:40:04] [V] [TRT] Tactic: 113 Time: 0.055552 [04/18/2022-02:40:04] [V] [TRT] Tactic: 114 Time: 0.07616 [04/18/2022-02:40:04] [V] [TRT] Tactic: 116 Time: 0.2784 [04/18/2022-02:40:04] [V] [TRT] Tactic: 117 Time: 0.219776 [04/18/2022-02:40:04] [V] [TRT] Fastest Tactic: 57 Time: 0.032512 [04/18/2022-02:40:04] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd (CublasConvolution) [04/18/2022-02:40:05] [V] [TRT] Tactic: 0 Time: 0.036352 [04/18/2022-02:40:05] [V] [TRT] Tactic: 1 Time: 0.036352 [04/18/2022-02:40:05] [V] [TRT] Tactic: 2 Time: 0.03776 [04/18/2022-02:40:05] [V] [TRT] Tactic: 3 Time: 0.035328 [04/18/2022-02:40:05] [V] [TRT] Fastest Tactic: 3 Time: 0.035328 [04/18/2022-02:40:05] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd (CaskConvolution) [04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x64_relu_small_nn_v1 Tactic: 4549827808004681195 [04/18/2022-02:40:05] [V] [TRT] Tactic: 4549827808004681195 Time: 0.08064 [04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x128_relu_small_nn_v1 Tactic: 5779835512569528575 [04/18/2022-02:40:05] [V] [TRT] Tactic: 5779835512569528575 Time: 0.095232 [04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_tf32f32_f32_nchwkrsc_nchw_tilesize128x128x16_stage4_warpsize2x2x1_g1_tensor16x8x8_t1r1s1_aligna4_alignc4 Tactic: 9151672657204310840 [04/18/2022-02:40:05] [V] 
[TRT] Tactic: 9151672657204310840 Time: 0.144768 [04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x32_relu_interior_nn_v1 Tactic: -7491730084094677098 [04/18/2022-02:40:05] [V] [TRT] Tactic: -7491730084094677098 Time: 0.07744 [04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_tf32f32_f32_nchwkrsc_nchw_tilesize128x128x16_stage4_warpsize2x2x1_g1_tensor16x8x8_simple_t1r1s1_aligna4_alignc4 Tactic: -6622064180404051845 [04/18/2022-02:40:05] [V] [TRT] Tactic: -6622064180404051845 Time: 0.140928 [04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x32_relu_small_nn_v1 Tactic: -6313876406580483184 [04/18/2022-02:40:05] [V] [TRT] Tactic: -6313876406580483184 Time: 0.080512 [04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x128_relu_interior_nn_v1 Tactic: -6273689210331812572 [04/18/2022-02:40:05] [V] [TRT] Tactic: -6273689210331812572 Time: 0.095616 [04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: sm70_xmma_fprop_conv1x1_f32f32_f32_f32_nchwkcrs_nchw_simt_small_batch_bias_relu Tactic: -6194327789991425125 [04/18/2022-02:40:05] [V] [TRT] Tactic: -6194327789991425125 Time: 0.048512 [04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x64_relu_interior_nn_v1 Tactic: -4337126844824617177 [04/18/2022-02:40:05] [V] [TRT] Tactic: -4337126844824617177 Time: 0.081792 [04/18/2022-02:40:05] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x128_relu_medium_nn_v1 Tactic: -1123676555321336786 [04/18/2022-02:40:05] [V] [TRT] Tactic: -1123676555321336786 Time: 0.098432 [04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x64_relu_medium_nn_v1 Tactic: -701551393537224327 [04/18/2022-02:40:05] [V] [TRT] Tactic: -701551393537224327 Time: 0.085248 [04/18/2022-02:40:05] [V] [TRT] Fastest Tactic: -6194327789991425125 Time: 0.048512 [04/18/2022-02:40:05] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CudnnConvolution Tactic: 57 [04/18/2022-02:40:05] [V] [TRT] *************** Autotuning format combination: Float(32,1,32,32) -> Float(8,1,8,8) *************** [04/18/2022-02:40:05] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd (CudnnConvolution) [04/18/2022-02:40:05] [V] [TRT] CudnnConvolution has no valid tactics for this config, skipping [04/18/2022-02:40:05] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd (CublasConvolution) [04/18/2022-02:40:05] [V] [TRT] CublasConvolution has no valid tactics for this config, skipping [04/18/2022-02:40:05] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd (CaskConvolution) [04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize256x128x8_stage3_warpsize2x4x1_g1_ffma_t1r1s1 Tactic: 676988335020687107 [04/18/2022-02:40:05] [V] [TRT] Tactic: 676988335020687107 Time: 0.128 [04/18/2022-02:40:05] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x256x8_stage3_warpsize2x4x1_g1_ffma_t1r1s1 Tactic: 1149579359391877453 [04/18/2022-02:40:05] [V] [TRT] Tactic: 1149579359391877453 Time: 0.120448 [04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x128_relu_exp_interior_nhwc_tn_v1 Tactic: 1663866669559596164 [04/18/2022-02:40:05] [V] [TRT] Tactic: 1663866669559596164 Time: 0.083072 [04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x64x8_stage3_warpsize1x4x1_g1_ffma_t1r1s1 Tactic: 1995961315573863697 [04/18/2022-02:40:05] [V] [TRT] Tactic: 1995961315573863697 Time: 0.045824 [04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x64_sliced1x2_ldg4_relu_exp_small_nhwc_tn_v1 Tactic: 2860655430572478466 [04/18/2022-02:40:05] [V] [TRT] Tactic: 2860655430572478466 Time: 0.065792 [04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize256x64x8_stage3_warpsize1x4x1_g1_ffma_simple_t1r1s1 Tactic: 4232768147062126270 [04/18/2022-02:40:05] [V] [TRT] Tactic: 4232768147062126270 Time: 0.117248 [04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x32_sliced1x4_ldg4_relu_exp_medium_nhwc_tn_v1 Tactic: 4474630279712975759 [04/18/2022-02:40:05] [V] [TRT] Tactic: 4474630279712975759 Time: 0.059008 [04/18/2022-02:40:05] [V] 
[TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x32_sliced1x4_ldg4_relu_exp_small_nhwc_tn_v1 Tactic: 4479823862704990365 [04/18/2022-02:40:05] [V] [TRT] Tactic: 4479823862704990365 Time: 0.057344 [04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x64_sliced1x2_ldg4_relu_exp_medium_nhwc_tn_v1 Tactic: 4696204239951173149 [04/18/2022-02:40:05] [V] [TRT] Tactic: 4696204239951173149 Time: 0.066816 [04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x64x8_stage3_warpsize1x4x1_g1_ffma_t1r1s1 Tactic: 5061046663754203417 [04/18/2022-02:40:05] [V] [TRT] Tactic: 5061046663754203417 Time: 0.062336 [04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize256x64x8_stage3_warpsize1x4x1_g1_ffma_t1r1s1 Tactic: 5660369513040054181 [04/18/2022-02:40:05] [V] [TRT] Tactic: 5660369513040054181 Time: 0.121472 [04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x128_relu_exp_small_nhwc_tn_v1 Tactic: 5778138195697110003 [04/18/2022-02:40:05] [V] [TRT] Tactic: 5778138195697110003 Time: 0.084864 [04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x256x8_stage3_warpsize1x4x1_g1_ffma_t1r1s1 Tactic: 6002893715742835901 [04/18/2022-02:40:05] [V] [TRT] Tactic: 6002893715742835901 Time: 0.082304 [04/18/2022-02:40:05] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x128_ldg4_relu_exp_medium_nhwc_tn_v1 Tactic: 8918020581761223752 [04/18/2022-02:40:05] [V] [TRT] Tactic: 8918020581761223752 Time: 0.083584 [04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize256x128x8_stage3_warpsize2x4x1_g1_ffma_simple_t1r1s1 Tactic: 9016055318246906759 [04/18/2022-02:40:05] [V] [TRT] Tactic: 9016055318246906759 Time: 0.12736 [04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x128x8_stage3_warpsize1x4x1_g1_ffma_t1r1s1 Tactic: -7609160790790750215 [04/18/2022-02:40:05] [V] [TRT] Tactic: -7609160790790750215 Time: 0.056192 [04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x128x8_stage3_warpsize1x4x1_g1_ffma_simple_t1r1s1 Tactic: -7054781547842146201 [04/18/2022-02:40:05] [V] [TRT] Tactic: -7054781547842146201 Time: 0.054144 [04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x64x8_stage3_warpsize1x4x1_g1_ffma_simple_t1r1s1 Tactic: -6773414409150198858 [04/18/2022-02:40:05] [V] [TRT] Tactic: -6773414409150198858 Time: 0.044416 [04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: 
sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x64x8_stage3_warpsize1x4x1_g1_ffma_simple_t1r1s1 Tactic: -5980517219165853661 [04/18/2022-02:40:05] [V] [TRT] Tactic: -5980517219165853661 Time: 0.059392 [04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x256x8_stage3_warpsize1x4x1_g1_ffma_simple_t1r1s1 Tactic: -5910172158931405628 [04/18/2022-02:40:05] [V] [TRT] Tactic: -5910172158931405628 Time: 0.077824 [04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x64_sliced1x2_ldg4_relu_exp_interior_nhwc_tn_v1 Tactic: -5905193483742532701 [04/18/2022-02:40:05] [V] [TRT] Tactic: -5905193483742532701 Time: 0.065152 [04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x256x8_stage3_warpsize2x4x1_g1_ffma_simple_t1r1s1 Tactic: -4196636767445012021 [04/18/2022-02:40:05] [V] [TRT] Tactic: -4196636767445012021 Time: 0.117632 [04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x32_sliced1x4_ldg4_relu_exp_interior_nhwc_tn_v1 Tactic: -4035591156787122265 [04/18/2022-02:40:05] [V] [TRT] Tactic: -4035591156787122265 Time: 0.056704 [04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x128x8_stage3_warpsize2x4x1_g1_ffma_t1r1s1 Tactic: -3829074795144908279 [04/18/2022-02:40:05] [V] [TRT] Tactic: -3829074795144908279 Time: 0.083328 [04/18/2022-02:40:05] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x128_relu_exp_medium_nhwc_tn_v1 Tactic: -2809379259463049391 [04/18/2022-02:40:05] [V] [TRT] Tactic: -2809379259463049391 Time: 0.084224 [04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x128_ldg4_relu_exp_interior_nhwc_tn_v1 Tactic: -1985235291706575900 [04/18/2022-02:40:05] [V] [TRT] Tactic: -1985235291706575900 Time: 0.082048 [04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x128x8_stage3_warpsize2x4x1_g1_ffma_simple_t1r1s1 Tactic: -711510282315844248 [04/18/2022-02:40:05] [V] [TRT] Tactic: -711510282315844248 Time: 0.082688 [04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x128_ldg4_relu_exp_small_nhwc_tn_v1 Tactic: -504296718212024303 [04/18/2022-02:40:05] [V] [TRT] Tactic: -504296718212024303 Time: 0.083328 [04/18/2022-02:40:05] [V] [TRT] Fastest Tactic: -6773414409150198858 Time: 0.044416 [04/18/2022-02:40:05] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: -6773414409150198858 [04/18/2022-02:40:05] [V] [TRT] *************** Autotuning format combination: Float(8,1:4,8,8) -> Float(2,1:4,2,2) *************** [04/18/2022-02:40:05] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd (CudnnConvolution) [04/18/2022-02:40:05] [V] [TRT] CudnnConvolution has no valid tactics for this config, skipping [04/18/2022-02:40:05] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd 
(CublasConvolution) [04/18/2022-02:40:05] [V] [TRT] CublasConvolution has no valid tactics for this config, skipping [04/18/2022-02:40:05] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd (CaskConvolution) [04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize256x128x8_stage3_warpsize2x4x1_g1_ffma_t1r1s1 Tactic: 676988335020687107 [04/18/2022-02:40:05] [V] [TRT] Tactic: 676988335020687107 Time: 0.139904 [04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x256x8_stage3_warpsize2x4x1_g1_ffma_t1r1s1 Tactic: 1149579359391877453 [04/18/2022-02:40:05] [V] [TRT] Tactic: 1149579359391877453 Time: 0.12032 [04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_tf32f32_f32_nhwckrsc_nhwc_tilesize128x128x16_stage4_warpsize2x2x1_g1_tensor16x8x8_t1r1s1 Tactic: 1373022415249282411 [04/18/2022-02:40:05] [V] [TRT] Tactic: 1373022415249282411 Time: 0.062336 [04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x128_relu_exp_interior_nhwc_tn_v1 Tactic: 1663866669559596164 [04/18/2022-02:40:05] [V] [TRT] Tactic: 1663866669559596164 Time: 0.08192 [04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x64x8_stage3_warpsize1x4x1_g1_ffma_t1r1s1 Tactic: 1995961315573863697 [04/18/2022-02:40:05] [V] [TRT] 
Tactic: 1995961315573863697 Time: 0.04544 [04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x64_sliced1x2_ldg4_relu_exp_small_nhwc_tn_v1 Tactic: 2860655430572478466 [04/18/2022-02:40:05] [V] [TRT] Tactic: 2860655430572478466 Time: 0.067584 [04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize256x64x8_stage3_warpsize1x4x1_g1_ffma_simple_t1r1s1 Tactic: 4232768147062126270 [04/18/2022-02:40:05] [V] [TRT] Tactic: 4232768147062126270 Time: 0.134144 [04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x32_sliced1x4_ldg4_relu_exp_medium_nhwc_tn_v1 Tactic: 4474630279712975759 [04/18/2022-02:40:05] [V] [TRT] Tactic: 4474630279712975759 Time: 0.057088 [04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x32_sliced1x4_ldg4_relu_exp_small_nhwc_tn_v1 Tactic: 4479823862704990365 [04/18/2022-02:40:05] [V] [TRT] Tactic: 4479823862704990365 Time: 0.057472 [04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x64_sliced1x2_ldg4_relu_exp_medium_nhwc_tn_v1 Tactic: 4696204239951173149 [04/18/2022-02:40:05] [V] [TRT] Tactic: 4696204239951173149 Time: 0.06528 [04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x64x8_stage3_warpsize1x4x1_g1_ffma_t1r1s1 Tactic: 5061046663754203417 [04/18/2022-02:40:05] [V] [TRT] Tactic: 5061046663754203417 Time: 
0.06336
[04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize256x64x8_stage3_warpsize1x4x1_g1_ffma_t1r1s1 Tactic: 5660369513040054181
[04/18/2022-02:40:05] [V] [TRT] Tactic: 5660369513040054181 Time: 0.122624
[04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x128_relu_exp_small_nhwc_tn_v1 Tactic: 5778138195697110003
[04/18/2022-02:40:05] [V] [TRT] Tactic: 5778138195697110003 Time: 0.083712
[04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x256x8_stage3_warpsize1x4x1_g1_ffma_t1r1s1 Tactic: 6002893715742835901
[04/18/2022-02:40:05] [V] [TRT] Tactic: 6002893715742835901 Time: 0.096
[04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x128_ldg4_relu_exp_medium_nhwc_tn_v1 Tactic: 8918020581761223752
[04/18/2022-02:40:05] [V] [TRT] Tactic: 8918020581761223752 Time: 0.083456
[04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize256x128x8_stage3_warpsize2x4x1_g1_ffma_simple_t1r1s1 Tactic: 9016055318246906759
[04/18/2022-02:40:05] [V] [TRT] Tactic: 9016055318246906759 Time: 0.140288
[04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x128x8_stage3_warpsize1x4x1_g1_ffma_t1r1s1 Tactic: -7609160790790750215
[04/18/2022-02:40:05] [V] [TRT] Tactic: -7609160790790750215 Time: 0.05696
[04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_tf32f32_f32_nhwckrsc_nhwc_tilesize128x128x16_stage4_warpsize2x2x1_g1_tensor16x8x8_simple_t1r1s1 Tactic: -7067026478815706014
[04/18/2022-02:40:05] [V] [TRT] Tactic: -7067026478815706014 Time: 0.06144
[04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x128x8_stage3_warpsize1x4x1_g1_ffma_simple_t1r1s1 Tactic: -7054781547842146201
[04/18/2022-02:40:05] [V] [TRT] Tactic: -7054781547842146201 Time: 0.052736
[04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x64x8_stage3_warpsize1x4x1_g1_ffma_simple_t1r1s1 Tactic: -6773414409150198858
[04/18/2022-02:40:05] [V] [TRT] Tactic: -6773414409150198858 Time: 0.044672
[04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x64x8_stage3_warpsize1x4x1_g1_ffma_simple_t1r1s1 Tactic: -5980517219165853661
[04/18/2022-02:40:05] [V] [TRT] Tactic: -5980517219165853661 Time: 0.059392
[04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x256x8_stage3_warpsize1x4x1_g1_ffma_simple_t1r1s1 Tactic: -5910172158931405628
[04/18/2022-02:40:05] [V] [TRT] Tactic: -5910172158931405628 Time: 0.076416
[04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x64_sliced1x2_ldg4_relu_exp_interior_nhwc_tn_v1 Tactic: -5905193483742532701
[04/18/2022-02:40:05] [V] [TRT] Tactic: -5905193483742532701 Time: 0.063872
[04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x256x8_stage3_warpsize2x4x1_g1_ffma_simple_t1r1s1 Tactic: -4196636767445012021
[04/18/2022-02:40:05] [V] [TRT] Tactic: -4196636767445012021 Time: 0.119936
[04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x32_sliced1x4_ldg4_relu_exp_interior_nhwc_tn_v1 Tactic: -4035591156787122265
[04/18/2022-02:40:05] [V] [TRT] Tactic: -4035591156787122265 Time: 0.056064
[04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x128x8_stage3_warpsize2x4x1_g1_ffma_t1r1s1 Tactic: -3829074795144908279
[04/18/2022-02:40:05] [V] [TRT] Tactic: -3829074795144908279 Time: 0.083328
[04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x128_relu_exp_medium_nhwc_tn_v1 Tactic: -2809379259463049391
[04/18/2022-02:40:05] [V] [TRT] Tactic: -2809379259463049391 Time: 0.086016
[04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x128_ldg4_relu_exp_interior_nhwc_tn_v1 Tactic: -1985235291706575900
[04/18/2022-02:40:05] [V] [TRT] Tactic: -1985235291706575900 Time: 0.082432
[04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x128x8_stage3_warpsize2x4x1_g1_ffma_simple_t1r1s1 Tactic: -711510282315844248
[04/18/2022-02:40:05] [V] [TRT] Tactic: -711510282315844248 Time: 0.08192
[04/18/2022-02:40:05] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x128_ldg4_relu_exp_small_nhwc_tn_v1 Tactic: -504296718212024303
[04/18/2022-02:40:05] [V] [TRT] Tactic: -504296718212024303 Time: 0.08192
[04/18/2022-02:40:05] [V] [TRT] Fastest Tactic: -6773414409150198858 Time: 0.044672
[04/18/2022-02:40:05] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: -6773414409150198858
[04/18/2022-02:40:05] [V] [TRT] =============== Computing costs for
[04/18/2022-02:40:05] [V] [TRT] *************** Autotuning format combination: Float(8,1,1,1) -> Float(8,1,1,1) ***************
[04/18/2022-02:40:05] [V] [TRT] --------------- Timing Runner: PWN(PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_activation/Sigmoid), StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_activation/mul) (PointWiseV2)
[04/18/2022-02:40:05] [V] [TRT] Tactic: 0 Time: 0.016256
[04/18/2022-02:40:05] [V] [TRT] Tactic: 1 Time: 0.017408
[04/18/2022-02:40:05] [V] [TRT] Tactic: 2 Time: 0.016256
[04/18/2022-02:40:05] [V] [TRT] Tactic: 3 Time: 0.018688
[04/18/2022-02:40:05] [V] [TRT] Tactic: 4 Time: 0.017152
[04/18/2022-02:40:05] [V] [TRT] Tactic: 5 Time: 0.01664
[04/18/2022-02:40:05] [V] [TRT] Tactic: 6 Time: 0.02048
[04/18/2022-02:40:05] [V] [TRT] Tactic: 7 Time: 0.018176
[04/18/2022-02:40:05] [V] [TRT] Tactic: 8 Time: 0.01728
[04/18/2022-02:40:05] [V] [TRT] Tactic: 9 Time: 0.016384
[04/18/2022-02:40:05] [V] [TRT] Tactic: 28 Time: 0.016512
[04/18/2022-02:40:05] [V] [TRT] Fastest Tactic: 0 Time: 0.016256
[04/18/2022-02:40:05] [V] [TRT] --------------- Timing Runner: PWN(PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_activation/Sigmoid), StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_activation/mul) (PointWise)
[04/18/2022-02:40:05] [V] [TRT] Tactic: 128 Time: 0.022912
[04/18/2022-02:40:05] [V] [TRT] Tactic: 256 Time: 0.022144
[04/18/2022-02:40:05] [V] [TRT] Tactic: 512 Time: 0.023296
[04/18/2022-02:40:05] [V] [TRT] Tactic: -32 Time: 0.028416
[04/18/2022-02:40:05] [V] [TRT] Tactic: -64 Time: 0.02304
[04/18/2022-02:40:05] [V] [TRT] Tactic: -128 Time: 0.022528
[04/18/2022-02:40:05] [V] [TRT] Fastest Tactic: 256 Time: 0.022144
[04/18/2022-02:40:05] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: PointWiseV2 Tactic: 0
[04/18/2022-02:40:05] [V] [TRT] *************** Autotuning format combination: Float(8,1,8,8) -> Float(8,1,8,8) ***************
[04/18/2022-02:40:05] [V] [TRT] --------------- Timing Runner: PWN(PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_activation/Sigmoid), StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_activation/mul) (PointWiseV2)
[04/18/2022-02:40:05] [V] [TRT] Tactic: 0 Time: 0.016
[04/18/2022-02:40:05] [V] [TRT] Tactic: 1 Time: 0.017152
[04/18/2022-02:40:05] [V] [TRT] Tactic: 2 Time: 0.016128
[04/18/2022-02:40:05] [V] [TRT] Tactic: 3 Time: 0.018432
[04/18/2022-02:40:05] [V] [TRT] Tactic: 4 Time: 0.016768
[04/18/2022-02:40:05] [V] [TRT] Tactic: 5 Time: 0.01664
[04/18/2022-02:40:06] [V] [TRT] Tactic: 6 Time: 0.020352
[04/18/2022-02:40:06] [V] [TRT] Tactic: 7 Time: 0.018304
[04/18/2022-02:40:06] [V] [TRT] Tactic: 8 Time: 0.017408
[04/18/2022-02:40:06] [V] [TRT] Tactic: 9 Time: 0.016768
[04/18/2022-02:40:06] [V] [TRT] Tactic: 28 Time: 0.016128
[04/18/2022-02:40:06] [V] [TRT] Fastest Tactic: 0 Time: 0.016
[04/18/2022-02:40:06] [V] [TRT] --------------- Timing Runner: PWN(PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_activation/Sigmoid), StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_activation/mul) (PointWise)
[04/18/2022-02:40:06] [V] [TRT] Tactic: 128 Time: 0.02304
[04/18/2022-02:40:06] [V] [TRT] Tactic: 256 Time: 0.025088
[04/18/2022-02:40:06] [V] [TRT] Tactic: 512 Time: 0.021376
[04/18/2022-02:40:06] [V] [TRT] Tactic: -32 Time: 0.027776
[04/18/2022-02:40:06] [V] [TRT] Tactic: -64 Time: 0.023296
[04/18/2022-02:40:06] [V] [TRT] Tactic: -128 Time: 0.022912
[04/18/2022-02:40:06] [V] [TRT] Fastest Tactic: 512 Time: 0.021376
[04/18/2022-02:40:06] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: PointWiseV2 Tactic: 0
[04/18/2022-02:40:06] [V] [TRT] *************** Autotuning format combination: Float(2,1:4,2,2) -> Float(2,1:4,2,2) ***************
[04/18/2022-02:40:06] [V] [TRT] --------------- Timing Runner: PWN(PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_activation/Sigmoid), StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_activation/mul) (PointWiseV2)
[04/18/2022-02:40:06] [V] [TRT] Tactic: 0 Time: 0.017152
[04/18/2022-02:40:06] [V] [TRT] Tactic: 1 Time: 0.017536
[04/18/2022-02:40:06] [V] [TRT] Tactic: 2 Time: 0.019072
[04/18/2022-02:40:06] [V] [TRT] Tactic: 3 Time: 0.018048
[04/18/2022-02:40:06] [V] [TRT] Tactic: 4 Time: 0.020608
[04/18/2022-02:40:06] [V] [TRT] Tactic: 5 Time: 0.019456
[04/18/2022-02:40:06] [V] [TRT] Tactic: 6 Time: 0.01984
[04/18/2022-02:40:06] [V] [TRT] Tactic: 7 Time: 0.022656
[04/18/2022-02:40:06] [V] [TRT] Tactic: 8 Time: 0.021376
[04/18/2022-02:40:06] [V] [TRT] Tactic: 9 Time: 0.019968
[04/18/2022-02:40:06] [V] [TRT] Tactic: 10 Time: 0.016768
[04/18/2022-02:40:06] [V] [TRT] Tactic: 11 Time: 0.016896
[04/18/2022-02:40:06] [V] [TRT] Tactic: 12 Time: 0.016256
[04/18/2022-02:40:06] [V] [TRT] Tactic: 13 Time: 0.018048
[04/18/2022-02:40:06] [V] [TRT] Tactic: 14 Time: 0.01728
[04/18/2022-02:40:06] [V] [TRT] Tactic: 15 Time: 0.016384
[04/18/2022-02:40:06] [V] [TRT] Tactic: 16 Time: 0.021248
[04/18/2022-02:40:06] [V] [TRT] Tactic: 17 Time: 0.019072
[04/18/2022-02:40:06] [V] [TRT] Tactic: 18 Time: 0.017408
[04/18/2022-02:40:06] [V] [TRT] Tactic: 19 Time: 0.016512
[04/18/2022-02:40:06] [V] [TRT] Tactic: 20 Time: 0.015488
[04/18/2022-02:40:06] [V] [TRT] Tactic: 21 Time: 0.01664
[04/18/2022-02:40:06] [V] [TRT] Tactic: 22 Time: 0.017536
[04/18/2022-02:40:06] [V] [TRT] Tactic: 23 Time: 0.019968
[04/18/2022-02:40:06] [V] [TRT] Tactic: 28 Time: 0.015744
[04/18/2022-02:40:06] [V] [TRT] Tactic: 29 Time: 0.015744
[04/18/2022-02:40:06] [V] [TRT] Tactic: 30 Time: 0.015616
[04/18/2022-02:40:06] [V] [TRT] Fastest Tactic: 20 Time: 0.015488
[04/18/2022-02:40:06] [V] [TRT] --------------- Timing Runner: PWN(PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_activation/Sigmoid), StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_activation/mul) (PointWise)
[04/18/2022-02:40:06] [V] [TRT] Tactic: 128 Time: 0.020224
[04/18/2022-02:40:06] [V] [TRT] Tactic: 256 Time: 0.020352
[04/18/2022-02:40:06] [V] [TRT] Tactic: 512 Time: 0.021248
[04/18/2022-02:40:06] [V] [TRT] Tactic: -32 Time: 0.026112
[04/18/2022-02:40:06] [V] [TRT] Tactic: -64 Time: 0.021376
[04/18/2022-02:40:06] [V] [TRT] Tactic: -128 Time: 0.022912
[04/18/2022-02:40:06] [V] [TRT] Fastest Tactic: 128 Time: 0.020224
[04/18/2022-02:40:06] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: PointWiseV2 Tactic: 20
[04/18/2022-02:40:06] [V] [TRT] *************** Autotuning format combination: Float(1,1:32,1,1) -> Float(1,1:32,1,1) ***************
[04/18/2022-02:40:06] [V] [TRT] --------------- Timing Runner: PWN(PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_activation/Sigmoid), StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_activation/mul) (PointWiseV2)
[04/18/2022-02:40:06] [V] [TRT] Tactic: 24 Time: 0.018816
[04/18/2022-02:40:06] [V] [TRT] Tactic: 25 Time: 0.018688
[04/18/2022-02:40:07] [V] [TRT] Tactic: 26 Time: 0.021632
[04/18/2022-02:40:07] [V] [TRT] Tactic: 27 Time: 0.024832
[04/18/2022-02:40:07] [V] [TRT] Tactic: 31 Time: 0.017024
[04/18/2022-02:40:07] [V] [TRT] Fastest Tactic: 31 Time: 0.017024
[04/18/2022-02:40:07] [V] [TRT] --------------- Timing Runner: PWN(PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_activation/Sigmoid), StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_activation/mul) (PointWise)
[04/18/2022-02:40:07] [V] [TRT] Tactic: 128 Time: 0.020992
[04/18/2022-02:40:07] [V] [TRT] Tactic: 256 Time: 0.021504
[04/18/2022-02:40:07] [V] [TRT] Tactic: 512 Time: 0.02176
[04/18/2022-02:40:07] [V] [TRT] Tactic: -32 Time: 0.054016
[04/18/2022-02:40:07] [V] [TRT] Tactic: -64 Time: 0.034944
[04/18/2022-02:40:07] [V] [TRT] Tactic: -128 Time: 0.026368
[04/18/2022-02:40:07] [V] [TRT] Fastest Tactic: 128 Time: 0.020992
[04/18/2022-02:40:07] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: PointWiseV2 Tactic: 31
[04/18/2022-02:40:07] [V] [TRT] *************** Autotuning format combination: Float(1:4,2,2,2) -> Float(1:4,2,2,2) ***************
[04/18/2022-02:40:07] [V] [TRT] --------------- Timing Runner: PWN(PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_activation/Sigmoid), StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_reduce_activation/mul) (PointWiseV2)
[04/18/2022-02:40:07] [V] [TRT] Tactic: 0 Time: 0.015872
[04/18/2022-02:40:07] [V] [TRT] Tactic: 1 Time: 0.016256
[04/18/2022-02:40:07] [V] [TRT] Tactic: 2 Time: 0.018048
[04/18/2022-02:40:07] [V] [TRT] Tactic: 3 Time: 0.017024
[04/18/2022-02:40:07] [V] [TRT] Tactic: 4 Time: 0.020352
[04/18/2022-02:40:07] [V] [TRT] Tactic: 5 Time: 0.018304
[04/18/2022-02:40:07] [V] [TRT] Tactic: 6 Time: 0.02048
[04/18/2022-02:40:07] [V] [TRT] Tactic: 7 Time: 0.021888
[04/18/2022-02:40:07] [V] [TRT] Tactic: 8 Time: 0.020352
[04/18/2022-02:40:07] [V] [TRT] Tactic: 9 Time: 0.020096
[04/18/2022-02:40:07] [V] [TRT] Tactic: 10 Time: 0.015488
[04/18/2022-02:40:07] [V] [TRT] Tactic: 11 Time: 0.016384
[04/18/2022-02:40:07] [V] [TRT] Tactic: 12 Time: 0.016384
[04/18/2022-02:40:07] [V] [TRT] Tactic: 13 Time: 0.018048
[04/18/2022-02:40:07] [V] [TRT] Tactic: 14 Time: 0.017536
[04/18/2022-02:40:07] [V] [TRT] Tactic: 15 Time: 0.016384
[04/18/2022-02:40:07] [V] [TRT] Tactic: 16 Time: 0.020992
[04/18/2022-02:40:07] [V] [TRT] Tactic: 17 Time: 0.019968
[04/18/2022-02:40:07] [V] [TRT] Tactic: 18 Time: 0.01728
[04/18/2022-02:40:07] [V] [TRT] Tactic: 19 Time: 0.01664
[04/18/2022-02:40:07] [V] [TRT] Tactic: 20 Time: 0.01536
[04/18/2022-02:40:07] [V] [TRT] Tactic: 21 Time: 0.016512
[04/18/2022-02:40:07] [V] [TRT] Tactic: 22 Time: 0.017408
[04/18/2022-02:40:07] [V] [TRT] Tactic: 23 Time: 0.01984
[04/18/2022-02:40:07] [V] [TRT] Tactic: 28 Time: 0.015744
[04/18/2022-02:40:07] [V] [TRT] Tactic: 29 Time: 0.01536
[04/18/2022-02:40:07] [V] [TRT] Tactic: 30 Time: 0.015488
[04/18/2022-02:40:07] [V] [TRT] Fastest Tactic: 20 Time: 0.01536
[04/18/2022-02:40:07] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: PointWiseV2 Tactic: 20
[04/18/2022-02:40:07] [V] [TRT] =============== Computing costs for
[04/18/2022-02:40:07] [V] [TRT] *************** Autotuning format combination: Float(8,1,1,1) -> Float(32,1,1,1) ***************
[04/18/2022-02:40:07] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd (CudaDepthwiseConvolution)
[04/18/2022-02:40:07] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping
[04/18/2022-02:40:07] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd (FusedConvActConvolution)
[04/18/2022-02:40:07] [V] [TRT] FusedConvActConvolution has no valid tactics for this config, skipping
[04/18/2022-02:40:07] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd (CudnnConvolution)
[04/18/2022-02:40:07] [V] [TRT] Tactic: 0 Time: 0.038656
[04/18/2022-02:40:07] [V] [TRT] Tactic: 1 Time: 0.033536
[04/18/2022-02:40:07] [V] [TRT] Tactic: 2 Time: 0.042624
[04/18/2022-02:40:07] [V] [TRT] Tactic: 4 Time: 0.25344
[04/18/2022-02:40:07] [V] [TRT] Tactic: 5 Time: 0.188672
[04/18/2022-02:40:07] [V] [TRT] Tactic: 56 Time: 0.0384
[04/18/2022-02:40:07] [V] [TRT] Tactic: 57 Time: 0.032384
[04/18/2022-02:40:07] [V] [TRT] Tactic: 58 Time: 0.042624
[04/18/2022-02:40:07] [V] [TRT] Tactic: 60 Time: 0.25216
[04/18/2022-02:40:07] [V] [TRT] Tactic: 61 Time: 0.20544
[04/18/2022-02:40:07] [V] [TRT] Tactic: 112 Time: 0.0384
[04/18/2022-02:40:07] [V] [TRT] Tactic: 113 Time: 0.049024
[04/18/2022-02:40:07] [V] [TRT] Tactic: 114 Time: 0.042752
[04/18/2022-02:40:07] [V] [TRT] Tactic: 116 Time: 0.24832
[04/18/2022-02:40:07] [V] [TRT] Tactic: 117 Time: 0.201088
[04/18/2022-02:40:07] [V] [TRT] Fastest Tactic: 57 Time: 0.032384
[04/18/2022-02:40:07] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd (CublasConvolution)
[04/18/2022-02:40:07] [V] [TRT] Tactic: 0 Time: 0.034048
[04/18/2022-02:40:07] [V] [TRT] Tactic: 1 Time: 0.035072
[04/18/2022-02:40:07] [V] [TRT] Tactic: 2 Time: 0.034048
[04/18/2022-02:40:07] [V] [TRT] Tactic: 3 Time: 0.034304
[04/18/2022-02:40:07] [V] [TRT] Fastest Tactic: 0 Time: 0.034048
[04/18/2022-02:40:07] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd (CaskConvolution)
[04/18/2022-02:40:07] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x64_relu_small_nn_v1 Tactic: 4549827808004681195
[04/18/2022-02:40:07] [V] [TRT] Tactic: 4549827808004681195 Time: 0.060544
[04/18/2022-02:40:07] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x128_relu_small_nn_v1 Tactic: 5779835512569528575
[04/18/2022-02:40:07] [V] [TRT] Tactic: 5779835512569528575 Time: 0.06976
[04/18/2022-02:40:07] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_tf32f32_f32_nchwkrsc_nchw_tilesize128x128x16_stage4_warpsize2x2x1_g1_tensor16x8x8_t1r1s1_aligna4_alignc4 Tactic: 9151672657204310840
[04/18/2022-02:40:07] [V] [TRT] Tactic: 9151672657204310840 Time: 0.135168
[04/18/2022-02:40:07] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x32_relu_interior_nn_v1 Tactic: -7491730084094677098
[04/18/2022-02:40:07] [V] [TRT] Tactic: -7491730084094677098 Time: 0.054912
[04/18/2022-02:40:07] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_tf32f32_f32_nchwkrsc_nchw_tilesize128x128x16_stage4_warpsize2x2x1_g1_tensor16x8x8_simple_t1r1s1_aligna4_alignc4 Tactic: -6622064180404051845
[04/18/2022-02:40:07] [V] [TRT] Tactic: -6622064180404051845 Time: 0.130432
[04/18/2022-02:40:07] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x32_relu_small_nn_v1 Tactic: -6313876406580483184
[04/18/2022-02:40:07] [V] [TRT] Tactic: -6313876406580483184 Time: 0.05824
[04/18/2022-02:40:07] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x128_relu_interior_nn_v1 Tactic: -6273689210331812572
[04/18/2022-02:40:07] [V] [TRT] Tactic: -6273689210331812572 Time: 0.06784
[04/18/2022-02:40:07] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: sm70_xmma_fprop_conv1x1_f32f32_f32_f32_nchwkcrs_nchw_simt_small_batch_bias_relu Tactic: -6194327789991425125
[04/18/2022-02:40:07] [V] [TRT] Tactic: -6194327789991425125 Time: 0.06336
[04/18/2022-02:40:07] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x64_relu_interior_nn_v1 Tactic: -4337126844824617177
[04/18/2022-02:40:08] [V] [TRT] Tactic: -4337126844824617177 Time: 0.060032
[04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x128_relu_medium_nn_v1 Tactic: -1123676555321336786
[04/18/2022-02:40:08] [V] [TRT] Tactic: -1123676555321336786 Time: 0.06976
[04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x64_relu_medium_nn_v1 Tactic: -701551393537224327
[04/18/2022-02:40:08] [V] [TRT] Tactic: -701551393537224327 Time: 0.0608
[04/18/2022-02:40:08] [V] [TRT] Fastest Tactic: -7491730084094677098 Time: 0.054912
[04/18/2022-02:40:08] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CudnnConvolution Tactic: 57
[04/18/2022-02:40:08] [V] [TRT] *************** Autotuning format combination: Float(8,1,8,8) -> Float(32,1,32,32) ***************
[04/18/2022-02:40:08] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd (CudnnConvolution)
[04/18/2022-02:40:08] [V] [TRT] CudnnConvolution has no valid tactics for this config, skipping
[04/18/2022-02:40:08] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd (CublasConvolution)
[04/18/2022-02:40:08] [V] [TRT] CublasConvolution has no valid tactics for this config, skipping
[04/18/2022-02:40:08] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd (CaskConvolution)
[04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize256x128x8_stage3_warpsize2x4x1_g1_ffma_t1r1s1 Tactic: 676988335020687107
[04/18/2022-02:40:08] [V] [TRT] Tactic: 676988335020687107 Time: 0.07488
[04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x256x8_stage3_warpsize2x4x1_g1_ffma_t1r1s1 Tactic: 1149579359391877453
[04/18/2022-02:40:08] [V] [TRT] Tactic: 1149579359391877453 Time: 0.068864
[04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x128_relu_exp_interior_nhwc_tn_v1 Tactic: 1663866669559596164
[04/18/2022-02:40:08] [V] [TRT] Tactic: 1663866669559596164 Time: 0.055552
[04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x64x8_stage3_warpsize1x4x1_g1_ffma_t1r1s1 Tactic: 1995961315573863697
[04/18/2022-02:40:08] [V] [TRT] Tactic: 1995961315573863697 Time: 0.032256
[04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x64_sliced1x2_ldg4_relu_exp_small_nhwc_tn_v1 Tactic: 2860655430572478466
[04/18/2022-02:40:08] [V] [TRT] Tactic: 2860655430572478466 Time: 0.056448
[04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize256x64x8_stage3_warpsize1x4x1_g1_ffma_simple_t1r1s1 Tactic: 4232768147062126270
[04/18/2022-02:40:08] [V] [TRT] Tactic: 4232768147062126270 Time: 0.069888
[04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x32_sliced1x4_ldg4_relu_exp_medium_nhwc_tn_v1 Tactic: 4474630279712975759
[04/18/2022-02:40:08] [V] [TRT] Tactic: 4474630279712975759 Time: 0.056704
[04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x32_sliced1x4_ldg4_relu_exp_small_nhwc_tn_v1 Tactic: 4479823862704990365
[04/18/2022-02:40:08] [V] [TRT] Tactic: 4479823862704990365 Time: 0.055296
[04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x64_sliced1x2_ldg4_relu_exp_medium_nhwc_tn_v1 Tactic: 4696204239951173149
[04/18/2022-02:40:08] [V] [TRT] Tactic: 4696204239951173149 Time: 0.05568
[04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x64x8_stage3_warpsize1x4x1_g1_ffma_t1r1s1 Tactic: 5061046663754203417
[04/18/2022-02:40:08] [V] [TRT] Tactic: 5061046663754203417 Time: 0.040448
[04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize256x64x8_stage3_warpsize1x4x1_g1_ffma_t1r1s1 Tactic: 5660369513040054181
[04/18/2022-02:40:08] [V] [TRT] Tactic: 5660369513040054181 Time: 0.077568
[04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x128_relu_exp_small_nhwc_tn_v1 Tactic: 5778138195697110003
[04/18/2022-02:40:08] [V] [TRT] Tactic: 5778138195697110003 Time: 0.056064
[04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x256x8_stage3_warpsize1x4x1_g1_ffma_t1r1s1 Tactic: 6002893715742835901
[04/18/2022-02:40:08] [V] [TRT] Tactic: 6002893715742835901 Time: 0.05184
[04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x128_ldg4_relu_exp_medium_nhwc_tn_v1 Tactic: 8918020581761223752
[04/18/2022-02:40:08] [V] [TRT] Tactic: 8918020581761223752 Time: 0.05504
[04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize256x128x8_stage3_warpsize2x4x1_g1_ffma_simple_t1r1s1 Tactic: 9016055318246906759
[04/18/2022-02:40:08] [V] [TRT] Tactic: 9016055318246906759 Time: 0.070912
[04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x128x8_stage3_warpsize1x4x1_g1_ffma_t1r1s1 Tactic: -7609160790790750215
[04/18/2022-02:40:08] [V] [TRT] Tactic: -7609160790790750215 Time: 0.036992
[04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x128x8_stage3_warpsize1x4x1_g1_ffma_simple_t1r1s1 Tactic: -7054781547842146201
[04/18/2022-02:40:08] [V] [TRT] Tactic: -7054781547842146201 Time: 0.034816
[04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x64x8_stage3_warpsize1x4x1_g1_ffma_simple_t1r1s1 Tactic: -6773414409150198858
[04/18/2022-02:40:08] [V] [TRT] Tactic: -6773414409150198858 Time: 0.030848
[04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x64x8_stage3_warpsize1x4x1_g1_ffma_simple_t1r1s1 Tactic: -5980517219165853661
[04/18/2022-02:40:08] [V] [TRT] Tactic: -5980517219165853661 Time: 0.038016
[04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x256x8_stage3_warpsize1x4x1_g1_ffma_simple_t1r1s1 Tactic: -5910172158931405628
[04/18/2022-02:40:08] [V] [TRT] Tactic: -5910172158931405628 Time: 0.045824
[04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x64_sliced1x2_ldg4_relu_exp_interior_nhwc_tn_v1 Tactic: -5905193483742532701
[04/18/2022-02:40:08] [V] [TRT] Tactic: -5905193483742532701 Time: 0.056832
[04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x256x8_stage3_warpsize2x4x1_g1_ffma_simple_t1r1s1 Tactic: -4196636767445012021
[04/18/2022-02:40:08] [V] [TRT] Tactic: -4196636767445012021 Time: 0.064512
[04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x32_sliced1x4_ldg4_relu_exp_interior_nhwc_tn_v1 Tactic: -4035591156787122265
[04/18/2022-02:40:08] [V] [TRT] Tactic: -4035591156787122265 Time: 0.056832
[04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x128x8_stage3_warpsize2x4x1_g1_ffma_t1r1s1 Tactic: -3829074795144908279
[04/18/2022-02:40:08] [V] [TRT] Tactic: -3829074795144908279 Time: 0.04672
[04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x128_relu_exp_medium_nhwc_tn_v1 Tactic: -2809379259463049391
[04/18/2022-02:40:08] [V] [TRT] Tactic: -2809379259463049391 Time: 0.05632
[04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x128_ldg4_relu_exp_interior_nhwc_tn_v1 Tactic: -1985235291706575900
[04/18/2022-02:40:08] [V] [TRT] Tactic: -1985235291706575900 Time: 0.054784
[04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x128x8_stage3_warpsize2x4x1_g1_ffma_simple_t1r1s1 Tactic: -711510282315844248
[04/18/2022-02:40:08] [V] [TRT] Tactic: -711510282315844248 Time: 0.046464
[04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x128_ldg4_relu_exp_small_nhwc_tn_v1 Tactic: -504296718212024303
[04/18/2022-02:40:08] [V] [TRT] Tactic: -504296718212024303 Time: 0.05696
[04/18/2022-02:40:08] [V] [TRT] Fastest Tactic: -6773414409150198858 Time: 0.030848
[04/18/2022-02:40:08] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: -6773414409150198858
[04/18/2022-02:40:08] [V] [TRT] *************** Autotuning format combination: Float(2,1:4,2,2) -> Float(8,1:4,8,8) ***************
[04/18/2022-02:40:08] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd (CudnnConvolution)
[04/18/2022-02:40:08] [V] [TRT] CudnnConvolution has no valid tactics for this config, skipping
[04/18/2022-02:40:08] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd (CublasConvolution)
[04/18/2022-02:40:08] [V] [TRT] CublasConvolution has no valid tactics for this config, skipping
[04/18/2022-02:40:08] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd (CaskConvolution)
[04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize256x128x8_stage3_warpsize2x4x1_g1_ffma_t1r1s1 Tactic: 676988335020687107
[04/18/2022-02:40:08] [V] [TRT] Tactic: 676988335020687107 Time: 0.107776
[04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x256x8_stage3_warpsize2x4x1_g1_ffma_t1r1s1 Tactic: 1149579359391877453
[04/18/2022-02:40:08] [V] [TRT] Tactic: 1149579359391877453 Time: 0.078976
[04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_tf32f32_f32_nhwckrsc_nhwc_tilesize128x128x16_stage4_warpsize2x2x1_g1_tensor16x8x8_t1r1s1 Tactic: 1373022415249282411
[04/18/2022-02:40:08] [V] [TRT] Tactic: 1373022415249282411 Time: 0.060928
[04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x128_relu_exp_interior_nhwc_tn_v1 Tactic: 1663866669559596164
[04/18/2022-02:40:08] [V] [TRT] Tactic: 1663866669559596164 Time: 0.05568
[04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x64x8_stage3_warpsize1x4x1_g1_ffma_t1r1s1 Tactic: 1995961315573863697
[04/18/2022-02:40:08] [V] [TRT] Tactic: 1995961315573863697 Time: 0.03264
[04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x64_sliced1x2_ldg4_relu_exp_small_nhwc_tn_v1 Tactic: 2860655430572478466
[04/18/2022-02:40:08] [V] [TRT] Tactic: 2860655430572478466 Time: 0.054656
[04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize256x64x8_stage3_warpsize1x4x1_g1_ffma_simple_t1r1s1 Tactic: 4232768147062126270
[04/18/2022-02:40:08] [V] [TRT] Tactic: 4232768147062126270 Time: 0.087168
[04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x32_sliced1x4_ldg4_relu_exp_medium_nhwc_tn_v1 Tactic: 4474630279712975759
[04/18/2022-02:40:08] [V] [TRT] Tactic: 4474630279712975759 Time: 0.057088
[04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x32_sliced1x4_ldg4_relu_exp_small_nhwc_tn_v1 Tactic: 4479823862704990365
[04/18/2022-02:40:08] [V] [TRT] Tactic: 4479823862704990365 Time: 0.055936
[04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x64_sliced1x2_ldg4_relu_exp_medium_nhwc_tn_v1 Tactic: 4696204239951173149
[04/18/2022-02:40:08] [V] [TRT] Tactic: 4696204239951173149 Time: 0.055168
[04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x64x8_stage3_warpsize1x4x1_g1_ffma_t1r1s1 Tactic: 5061046663754203417
[04/18/2022-02:40:08] [V] [TRT] Tactic: 5061046663754203417 Time: 0.04096
[04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize256x64x8_stage3_warpsize1x4x1_g1_ffma_t1r1s1 Tactic: 5660369513040054181
[04/18/2022-02:40:08] [V] [TRT] Tactic: 5660369513040054181 Time: 0.077056
[04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x128_relu_exp_small_nhwc_tn_v1 Tactic: 5778138195697110003
[04/18/2022-02:40:08] [V] [TRT] Tactic: 5778138195697110003 Time: 0.055808
[04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x256x8_stage3_warpsize1x4x1_g1_ffma_t1r1s1 Tactic: 6002893715742835901
[04/18/2022-02:40:08] [V] [TRT] Tactic: 6002893715742835901 Time: 0.064384
[04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x128_ldg4_relu_exp_medium_nhwc_tn_v1 Tactic: 8918020581761223752
[04/18/2022-02:40:08] [V] [TRT] Tactic: 8918020581761223752 Time: 0.054656
[04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize256x128x8_stage3_warpsize2x4x1_g1_ffma_simple_t1r1s1 Tactic: 9016055318246906759
[04/18/2022-02:40:08] [V] [TRT] Tactic: 9016055318246906759 Time: 0.083712
[04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x128x8_stage3_warpsize1x4x1_g1_ffma_t1r1s1 Tactic: -7609160790790750215
[04/18/2022-02:40:08] [V] [TRT] Tactic: -7609160790790750215 Time: 0.03712
[04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_tf32f32_f32_nhwckrsc_nhwc_tilesize128x128x16_stage4_warpsize2x2x1_g1_tensor16x8x8_simple_t1r1s1 Tactic: -7067026478815706014
[04/18/2022-02:40:08] [V] [TRT] Tactic: -7067026478815706014 Time: 0.048512
[04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x128x8_stage3_warpsize1x4x1_g1_ffma_simple_t1r1s1 Tactic: -7054781547842146201
[04/18/2022-02:40:08] [V] [TRT] Tactic: -7054781547842146201 Time: 0.035072
[04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x64x8_stage3_warpsize1x4x1_g1_ffma_simple_t1r1s1 Tactic: -6773414409150198858
[04/18/2022-02:40:08] [V] [TRT] Tactic: -6773414409150198858 Time: 0.030592
[04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name:
sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x64x8_stage3_warpsize1x4x1_g1_ffma_simple_t1r1s1 Tactic: -5980517219165853661 [04/18/2022-02:40:08] [V] [TRT] Tactic: -5980517219165853661 Time: 0.038272 [04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x256x8_stage3_warpsize1x4x1_g1_ffma_simple_t1r1s1 Tactic: -5910172158931405628 [04/18/2022-02:40:08] [V] [TRT] Tactic: -5910172158931405628 Time: 0.045824 [04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x64_sliced1x2_ldg4_relu_exp_interior_nhwc_tn_v1 Tactic: -5905193483742532701 [04/18/2022-02:40:08] [V] [TRT] Tactic: -5905193483742532701 Time: 0.054016 [04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x256x8_stage3_warpsize2x4x1_g1_ffma_simple_t1r1s1 Tactic: -4196636767445012021 [04/18/2022-02:40:08] [V] [TRT] Tactic: -4196636767445012021 Time: 0.068992 [04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x32_sliced1x4_ldg4_relu_exp_interior_nhwc_tn_v1 Tactic: -4035591156787122265 [04/18/2022-02:40:08] [V] [TRT] Tactic: -4035591156787122265 Time: 0.054272 [04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x128x8_stage3_warpsize2x4x1_g1_ffma_t1r1s1 Tactic: -3829074795144908279 [04/18/2022-02:40:08] [V] [TRT] Tactic: -3829074795144908279 Time: 0.046976 [04/18/2022-02:40:08] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x128_relu_exp_medium_nhwc_tn_v1 Tactic: -2809379259463049391 [04/18/2022-02:40:08] [V] [TRT] Tactic: -2809379259463049391 Time: 0.060288 [04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x128_ldg4_relu_exp_interior_nhwc_tn_v1 Tactic: -1985235291706575900 [04/18/2022-02:40:08] [V] [TRT] Tactic: -1985235291706575900 Time: 0.054272 [04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x128x8_stage3_warpsize2x4x1_g1_ffma_simple_t1r1s1 Tactic: -711510282315844248 [04/18/2022-02:40:08] [V] [TRT] Tactic: -711510282315844248 Time: 0.04608 [04/18/2022-02:40:08] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_conv2d/BiasAdd Set Tactic Name: ampere_scudnn_128x128_ldg4_relu_exp_small_nhwc_tn_v1 Tactic: -504296718212024303 [04/18/2022-02:40:08] [V] [TRT] Tactic: -504296718212024303 Time: 0.054656 [04/18/2022-02:40:08] [V] [TRT] Fastest Tactic: -6773414409150198858 Time: 0.030592 [04/18/2022-02:40:08] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: -6773414409150198858 [04/18/2022-02:40:08] [V] [TRT] =============== Computing costs for [04/18/2022-02:40:08] [V] [TRT] *************** Autotuning format combination: Float(32,1,1,1), Float(2097152,65536,256,1) -> Float(2097152,65536,256,1) *************** [04/18/2022-02:40:08] [V] [TRT] --------------- Timing Runner: PWN(PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_activation/Sigmoid), StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_excite/mul) (PointWiseV2) [04/18/2022-02:40:09] [V] [TRT] Tactic: 0 Time: 9.81325 
[04/18/2022-02:40:09] [V] [TRT] Tactic: 1 Time: 9.1744 [04/18/2022-02:40:10] [V] [TRT] Tactic: 2 Time: 9.86125 [04/18/2022-02:40:10] [V] [TRT] Tactic: 3 Time: 9.58963 [04/18/2022-02:40:10] [V] [TRT] Tactic: 4 Time: 8.92838 [04/18/2022-02:40:11] [V] [TRT] Tactic: 5 Time: 8.83943 [04/18/2022-02:40:11] [V] [TRT] Tactic: 6 Time: 9.21254 [04/18/2022-02:40:11] [V] [TRT] Tactic: 7 Time: 8.84314 [04/18/2022-02:40:12] [V] [TRT] Tactic: 8 Time: 8.84326 [04/18/2022-02:40:12] [V] [TRT] Tactic: 9 Time: 8.9385 [04/18/2022-02:40:12] [V] [TRT] Tactic: 28 Time: 9.3408 [04/18/2022-02:40:12] [V] [TRT] Fastest Tactic: 5 Time: 8.83943 [04/18/2022-02:40:12] [V] [TRT] --------------- Timing Runner: PWN(PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_activation/Sigmoid), StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_excite/mul) (PointWise) [04/18/2022-02:40:13] [V] [TRT] Tactic: 128 Time: 9.30522 [04/18/2022-02:40:13] [V] [TRT] Tactic: 256 Time: 9.33107 [04/18/2022-02:40:13] [V] [TRT] Tactic: 512 Time: 9.3687 [04/18/2022-02:40:13] [V] [TRT] Tactic: -32 Time: 9.00723 [04/18/2022-02:40:13] [V] [TRT] Tactic: -64 Time: 9.10656 [04/18/2022-02:40:13] [V] [TRT] Tactic: -128 Time: 9.04998 [04/18/2022-02:40:13] [V] [TRT] Fastest Tactic: -32 Time: 9.00723 [04/18/2022-02:40:13] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: PointWiseV2 Tactic: 5 [04/18/2022-02:40:13] [V] [TRT] *************** Autotuning format combination: Float(32,1,32,32), Float(2097152,1,8192,32) -> Float(2097152,1,8192,32) *************** [04/18/2022-02:40:13] [V] [TRT] --------------- Timing Runner: PWN(PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_activation/Sigmoid), StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_excite/mul) (PointWiseV2) [04/18/2022-02:40:14] [V] [TRT] Tactic: 0 Time: 9.33581 [04/18/2022-02:40:14] [V] [TRT] Tactic: 1 Time: 9.26848 [04/18/2022-02:40:14] [V] [TRT] Tactic: 2 Time: 9.39904 
[04/18/2022-02:40:15] [V] [TRT] Tactic: 3 Time: 9.33824 [04/18/2022-02:40:15] [V] [TRT] Tactic: 4 Time: 9.2713 [04/18/2022-02:40:15] [V] [TRT] Tactic: 5 Time: 9.30573 [04/18/2022-02:40:16] [V] [TRT] Tactic: 6 Time: 9.26438 [04/18/2022-02:40:16] [V] [TRT] Tactic: 7 Time: 9.38675 [04/18/2022-02:40:16] [V] [TRT] Tactic: 8 Time: 9.37843 [04/18/2022-02:40:17] [V] [TRT] Tactic: 9 Time: 9.3943 [04/18/2022-02:40:17] [V] [TRT] Tactic: 28 Time: 9.25722 [04/18/2022-02:40:17] [V] [TRT] Fastest Tactic: 28 Time: 9.25722 [04/18/2022-02:40:17] [V] [TRT] --------------- Timing Runner: PWN(PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_activation/Sigmoid), StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_excite/mul) (PointWise) [04/18/2022-02:40:17] [V] [TRT] Tactic: 128 Time: 9.85203 [04/18/2022-02:40:17] [V] [TRT] Tactic: 256 Time: 9.34234 [04/18/2022-02:40:18] [V] [TRT] Tactic: 512 Time: 9.19053 [04/18/2022-02:40:18] [V] [TRT] Tactic: -32 Time: 10.0216 [04/18/2022-02:40:18] [V] [TRT] Tactic: -64 Time: 10.2405 [04/18/2022-02:40:18] [V] [TRT] Tactic: -128 Time: 9.90579 [04/18/2022-02:40:18] [V] [TRT] Fastest Tactic: 512 Time: 9.19053 [04/18/2022-02:40:18] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: PointWise Tactic: 512 [04/18/2022-02:40:18] [V] [TRT] *************** Autotuning format combination: Float(8,1:4,8,8), Float(524288,1:4,2048,8) -> Float(524288,1:4,2048,8) *************** [04/18/2022-02:40:18] [V] [TRT] --------------- Timing Runner: PWN(PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_activation/Sigmoid), StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_excite/mul) (PointWiseV2) [04/18/2022-02:40:19] [V] [TRT] Tactic: 0 Time: 11.6269 [04/18/2022-02:40:19] [V] [TRT] Tactic: 1 Time: 11.5096 [04/18/2022-02:40:19] [V] [TRT] Tactic: 2 Time: 11.3951 [04/18/2022-02:40:20] [V] [TRT] Tactic: 3 Time: 11.5914 [04/18/2022-02:40:20] [V] [TRT] Tactic: 4 Time: 10.3571 
[04/18/2022-02:40:20] [V] [TRT] Tactic: 5 Time: 11.7183 [04/18/2022-02:40:21] [V] [TRT] Tactic: 6 Time: 11.6431 [04/18/2022-02:40:21] [V] [TRT] Tactic: 7 Time: 10.2276 [04/18/2022-02:40:22] [V] [TRT] Tactic: 8 Time: 9.71072 [04/18/2022-02:40:22] [V] [TRT] Tactic: 9 Time: 9.19181 [04/18/2022-02:40:23] [V] [TRT] Tactic: 10 Time: 18.5199 [04/18/2022-02:40:23] [V] [TRT] Tactic: 11 Time: 12.3781 [04/18/2022-02:40:23] [V] [TRT] Tactic: 12 Time: 9.78816 [04/18/2022-02:40:24] [V] [TRT] Tactic: 13 Time: 9.55354 [04/18/2022-02:40:24] [V] [TRT] Tactic: 14 Time: 10.7713 [04/18/2022-02:40:25] [V] [TRT] Tactic: 15 Time: 9.38624 [04/18/2022-02:40:25] [V] [TRT] Tactic: 16 Time: 12.0027 [04/18/2022-02:40:25] [V] [TRT] Tactic: 17 Time: 9.6887 [04/18/2022-02:40:26] [V] [TRT] Tactic: 18 Time: 9.54138 [04/18/2022-02:40:26] [V] [TRT] Tactic: 19 Time: 9.65901 [04/18/2022-02:40:27] [V] [TRT] Tactic: 20 Time: 9.22509 [04/18/2022-02:40:27] [V] [TRT] Tactic: 21 Time: 9.15891 [04/18/2022-02:40:27] [V] [TRT] Tactic: 22 Time: 9.38355 [04/18/2022-02:40:28] [V] [TRT] Tactic: 23 Time: 9.11078 [04/18/2022-02:40:28] [V] [TRT] Tactic: 28 Time: 16.3589 [04/18/2022-02:40:28] [V] [TRT] Tactic: 29 Time: 10.4838 [04/18/2022-02:40:29] [V] [TRT] Tactic: 30 Time: 9.36013 [04/18/2022-02:40:29] [V] [TRT] Fastest Tactic: 23 Time: 9.11078 [04/18/2022-02:40:29] [V] [TRT] --------------- Timing Runner: PWN(PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_activation/Sigmoid), StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_excite/mul) (PointWise) [04/18/2022-02:40:29] [V] [TRT] Tactic: 128 Time: 9.168 [04/18/2022-02:40:29] [V] [TRT] Tactic: 256 Time: 9.74502 [04/18/2022-02:40:29] [V] [TRT] Tactic: 512 Time: 9.72736 [04/18/2022-02:40:29] [V] [TRT] Tactic: -32 Time: 10.3072 [04/18/2022-02:40:30] [V] [TRT] Tactic: -64 Time: 10.2865 [04/18/2022-02:40:30] [V] [TRT] Tactic: -128 Time: 10.2824 [04/18/2022-02:40:30] [V] [TRT] Fastest Tactic: 128 Time: 9.168 
[04/18/2022-02:40:30] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: PointWiseV2 Tactic: 23 [04/18/2022-02:40:30] [V] [TRT] *************** Autotuning format combination: Float(1,1:32,1,1), Float(65536,65536:32,256,1) -> Float(65536,65536:32,256,1) *************** [04/18/2022-02:40:30] [V] [TRT] --------------- Timing Runner: PWN(PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_activation/Sigmoid), StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_excite/mul) (PointWiseV2) [04/18/2022-02:40:30] [V] [TRT] Tactic: 24 Time: 16.3849 [04/18/2022-02:40:31] [V] [TRT] Tactic: 25 Time: 15.2342 [04/18/2022-02:40:31] [V] [TRT] Tactic: 26 Time: 15.4231 [04/18/2022-02:40:32] [V] [TRT] Tactic: 27 Time: 14.588 [04/18/2022-02:40:32] [V] [TRT] Tactic: 31 Time: 16.3715 [04/18/2022-02:40:32] [V] [TRT] Fastest Tactic: 27 Time: 14.588 [04/18/2022-02:40:32] [V] [TRT] --------------- Timing Runner: PWN(PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_activation/Sigmoid), StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_excite/mul) (PointWise) [04/18/2022-02:40:32] [V] [TRT] Tactic: 128 Time: 9.83565 [04/18/2022-02:40:32] [V] [TRT] Tactic: 256 Time: 9.87738 [04/18/2022-02:40:33] [V] [TRT] Tactic: 512 Time: 9.61997 [04/18/2022-02:40:33] [V] [TRT] Tactic: -32 Time: 10.3144 [04/18/2022-02:40:33] [V] [TRT] Tactic: -64 Time: 10.3735 [04/18/2022-02:40:33] [V] [TRT] Tactic: -128 Time: 9.9968 [04/18/2022-02:40:33] [V] [TRT] Fastest Tactic: 512 Time: 9.61997 [04/18/2022-02:40:33] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: PointWise Tactic: 512 [04/18/2022-02:40:33] [V] [TRT] *************** Autotuning format combination: Float(1:4,2,2,2), Float(1:4,131072,512,2) -> Float(1:4,131072,512,2) *************** [04/18/2022-02:40:33] [V] [TRT] --------------- Timing Runner: PWN(PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_expand_activation/Sigmoid), 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/se_excite/mul) (PointWiseV2) [04/18/2022-02:40:33] [V] [TRT] Tactic: 0 Time: 17.0008 [04/18/2022-02:40:34] [V] [TRT] Tactic: 1 Time: 19.7866 [04/18/2022-02:40:34] [V] [TRT] Tactic: 2 Time: 20.6683 [04/18/2022-02:40:35] [V] [TRT] Tactic: 3 Time: 22.2047 [04/18/2022-02:40:35] [V] [TRT] Tactic: 4 Time: 25.8275 [04/18/2022-02:40:35] [V] [TRT] Tactic: 5 Time: 20.4614 [04/18/2022-02:40:36] [V] [TRT] Tactic: 6 Time: 27.0833 [04/18/2022-02:40:36] [V] [TRT] Tactic: 7 Time: 29.994 [04/18/2022-02:40:37] [V] [TRT] Tactic: 8 Time: 28.9728 [04/18/2022-02:40:37] [V] [TRT] Tactic: 9 Time: 28.9806 [04/18/2022-02:40:38] [V] [TRT] Tactic: 10 Time: 15.1575 [04/18/2022-02:40:38] [V] [TRT] Tactic: 11 Time: 19.5005 [04/18/2022-02:40:38] [V] [TRT] Tactic: 12 Time: 16.6091 [04/18/2022-02:40:39] [V] [TRT] Tactic: 13 Time: 24.2355 [04/18/2022-02:40:39] [V] [TRT] Tactic: 14 Time: 21.2855 [04/18/2022-02:40:39] [V] [TRT] Tactic: 15 Time: 16.8006 [04/18/2022-02:40:40] [V] [TRT] Tactic: 16 Time: 33.7985 [04/18/2022-02:40:40] [V] [TRT] Tactic: 17 Time: 25.4033 [04/18/2022-02:40:41] [V] [TRT] Tactic: 18 Time: 23.4927 [04/18/2022-02:40:41] [V] [TRT] Tactic: 19 Time: 19.0977 [04/18/2022-02:40:41] [V] [TRT] Tactic: 20 Time: 13.7949 [04/18/2022-02:40:42] [V] [TRT] Tactic: 21 Time: 17.8468 [04/18/2022-02:40:42] [V] [TRT] Tactic: 22 Time: 22.7592 [04/18/2022-02:40:43] [V] [TRT] Tactic: 23 Time: 33.2102 [04/18/2022-02:40:43] [V] [TRT] Tactic: 28 Time: 16.3832 [04/18/2022-02:40:43] [V] [TRT] Tactic: 29 Time: 10.4536 [04/18/2022-02:40:43] [V] [TRT] Tactic: 30 Time: 9.35578 [04/18/2022-02:40:43] [V] [TRT] Fastest Tactic: 30 Time: 9.35578 [04/18/2022-02:40:43] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: PointWiseV2 Tactic: 30 [04/18/2022-02:40:43] [V] [TRT] =============== Computing costs for [04/18/2022-02:40:43] [V] [TRT] *************** Autotuning format combination: Float(2097152,65536,256,1) -> Float(1048576,65536,256,1) 
*************** [04/18/2022-02:40:43] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D (CudaDepthwiseConvolution) [04/18/2022-02:40:43] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping [04/18/2022-02:40:43] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D (FusedConvActConvolution) [04/18/2022-02:40:43] [V] [TRT] FusedConvActConvolution has no valid tactics for this config, skipping [04/18/2022-02:40:43] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D (CudnnConvolution) [04/18/2022-02:40:43] [V] [TRT] Tactic: 0 Time: 11.5887 [04/18/2022-02:40:44] [V] [TRT] Tactic: 1 Time: 11.7558 [04/18/2022-02:40:44] [V] [TRT] Tactic: 2 Time: 20.8182 [04/18/2022-02:40:45] [V] [TRT] Tactic: 4 Time: 74.5987 [04/18/2022-02:40:46] [V] [TRT] Tactic: 5 Time: 25.5135 [04/18/2022-02:40:46] [V] [TRT] Tactic: 56 Time: 11.0248 [04/18/2022-02:40:46] [V] [TRT] Tactic: 57 Time: 11.7691 [04/18/2022-02:40:46] [V] [TRT] Tactic: 58 Time: 20.8687 [04/18/2022-02:40:48] [V] [TRT] Tactic: 60 Time: 75.3356 [04/18/2022-02:40:48] [V] [TRT] Tactic: 61 Time: 25.9709 [04/18/2022-02:40:48] [V] [TRT] Tactic: 112 Time: 11.6389 [04/18/2022-02:40:49] [V] [TRT] Tactic: 113 Time: 11.69 [04/18/2022-02:40:49] [V] [TRT] Tactic: 114 Time: 20.7345 [04/18/2022-02:40:50] [V] [TRT] Tactic: 116 Time: 75.1283 [04/18/2022-02:40:51] [V] [TRT] Tactic: 117 Time: 25.5488 [04/18/2022-02:40:51] [V] [TRT] Fastest Tactic: 56 Time: 11.0248 [04/18/2022-02:40:51] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D (CublasConvolution) [04/18/2022-02:40:51] [V] [TRT] CublasConvolution has no valid tactics for this config, skipping [04/18/2022-02:40:51] [V] [TRT] --------------- 
Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D (CaskConvolution) [04/18/2022-02:40:51] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: ampere_scudnn_128x64_relu_small_nn_v1 Tactic: 4549827808004681195 [04/18/2022-02:40:51] [V] [TRT] Tactic: 4549827808004681195 Time: 7.32275 [04/18/2022-02:40:51] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: ampere_scudnn_128x128_relu_small_nn_v1 Tactic: 5779835512569528575 [04/18/2022-02:40:51] [V] [TRT] Tactic: 5779835512569528575 Time: 7.22214 [04/18/2022-02:40:51] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_tf32f32_f32_nchwkrsc_nchw_tilesize128x128x16_stage4_warpsize2x2x1_g1_tensor16x8x8_t1r1s1_aligna4_alignc4 Tactic: 9151672657204310840 [04/18/2022-02:40:51] [V] [TRT] Tactic: 9151672657204310840 Time: 9.9671 [04/18/2022-02:40:51] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: ampere_scudnn_128x32_relu_interior_nn_v1 Tactic: -7491730084094677098 [04/18/2022-02:40:51] [V] [TRT] Tactic: -7491730084094677098 Time: 6.96077 [04/18/2022-02:40:51] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_tf32f32_f32_nchwkrsc_nchw_tilesize128x128x16_stage4_warpsize2x2x1_g1_tensor16x8x8_simple_t1r1s1_aligna4_alignc4 Tactic: -6622064180404051845 [04/18/2022-02:40:51] [V] [TRT] Tactic: -6622064180404051845 Time: 9.4464 [04/18/2022-02:40:51] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: ampere_scudnn_128x32_relu_small_nn_v1 Tactic: -6313876406580483184 [04/18/2022-02:40:52] [V] [TRT] Tactic: 
-6313876406580483184 Time: 6.95066 [04/18/2022-02:40:52] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: ampere_scudnn_128x128_relu_interior_nn_v1 Tactic: -6273689210331812572 [04/18/2022-02:40:52] [V] [TRT] Tactic: -6273689210331812572 Time: 7.05178 [04/18/2022-02:40:52] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: ampere_scudnn_128x64_relu_interior_nn_v1 Tactic: -4337126844824617177 [04/18/2022-02:40:52] [V] [TRT] Tactic: -4337126844824617177 Time: 7.35258 [04/18/2022-02:40:52] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: ampere_scudnn_128x128_relu_medium_nn_v1 Tactic: -1123676555321336786 [04/18/2022-02:40:52] [V] [TRT] Tactic: -1123676555321336786 Time: 7.6608 [04/18/2022-02:40:52] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: ampere_scudnn_128x64_relu_medium_nn_v1 Tactic: -701551393537224327 [04/18/2022-02:40:52] [V] [TRT] Tactic: -701551393537224327 Time: 6.97843 [04/18/2022-02:40:52] [V] [TRT] Fastest Tactic: -6313876406580483184 Time: 6.95066 [04/18/2022-02:40:52] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: -6313876406580483184 [04/18/2022-02:40:52] [V] [TRT] *************** Autotuning format combination: Float(2097152,1,8192,32) -> Float(1048576,1,4096,16) *************** [04/18/2022-02:40:52] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D (CudnnConvolution) [04/18/2022-02:40:52] [V] [TRT] CudnnConvolution has no valid tactics for this config, skipping [04/18/2022-02:40:52] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D (CublasConvolution) [04/18/2022-02:40:52] [V] [TRT] 
CublasConvolution has no valid tactics for this config, skipping [04/18/2022-02:40:52] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D (CaskConvolution) [04/18/2022-02:40:52] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize256x128x8_stage3_warpsize2x4x1_g1_ffma_t1r1s1 Tactic: 676988335020687107 [04/18/2022-02:40:52] [V] [TRT] Tactic: 676988335020687107 Time: 7.4592 [04/18/2022-02:40:52] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x256x8_stage3_warpsize2x4x1_g1_ffma_t1r1s1 Tactic: 1149579359391877453 [04/18/2022-02:40:52] [V] [TRT] Tactic: 1149579359391877453 Time: 11.6869 [04/18/2022-02:40:52] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: ampere_scudnn_128x128_relu_exp_interior_nhwc_tn_v1 Tactic: 1663866669559596164 [04/18/2022-02:40:53] [V] [TRT] Tactic: 1663866669559596164 Time: 6.81011 [04/18/2022-02:40:53] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x64x8_stage3_warpsize1x4x1_g1_ffma_t1r1s1 Tactic: 1995961315573863697 [04/18/2022-02:40:53] [V] [TRT] Tactic: 1995961315573863697 Time: 6.78874 [04/18/2022-02:40:53] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: ampere_scudnn_128x64_sliced1x2_ldg4_relu_exp_small_nhwc_tn_v1 Tactic: 2860655430572478466 [04/18/2022-02:40:53] [V] [TRT] Tactic: 2860655430572478466 Time: 6.91507 [04/18/2022-02:40:53] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize256x64x8_stage3_warpsize1x4x1_g1_ffma_simple_t1r1s1 Tactic: 4232768147062126270 [04/18/2022-02:40:53] [V] [TRT] Tactic: 4232768147062126270 Time: 9.72838 [04/18/2022-02:40:53] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: ampere_scudnn_128x32_sliced1x4_ldg4_relu_exp_medium_nhwc_tn_v1 Tactic: 4474630279712975759 [04/18/2022-02:40:53] [V] [TRT] Tactic: 4474630279712975759 Time: 7.02387 [04/18/2022-02:40:53] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: ampere_scudnn_128x32_sliced1x4_ldg4_relu_exp_small_nhwc_tn_v1 Tactic: 4479823862704990365 [04/18/2022-02:40:53] [V] [TRT] Tactic: 4479823862704990365 Time: 7.39341 [04/18/2022-02:40:53] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: ampere_scudnn_128x64_sliced1x2_ldg4_relu_exp_medium_nhwc_tn_v1 Tactic: 4696204239951173149 [04/18/2022-02:40:54] [V] [TRT] Tactic: 4696204239951173149 Time: 7.39494 [04/18/2022-02:40:54] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x64x8_stage3_warpsize1x4x1_g1_ffma_t1r1s1 Tactic: 5061046663754203417 [04/18/2022-02:40:54] [V] [TRT] Tactic: 5061046663754203417 Time: 6.57344 [04/18/2022-02:40:54] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize256x64x8_stage3_warpsize1x4x1_g1_ffma_t1r1s1 Tactic: 5660369513040054181 [04/18/2022-02:40:54] [V] [TRT] Tactic: 5660369513040054181 Time: 9.74682 [04/18/2022-02:40:54] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: ampere_scudnn_128x128_relu_exp_small_nhwc_tn_v1 Tactic: 5778138195697110003 [04/18/2022-02:40:54] [V] [TRT] Tactic: 5778138195697110003 Time: 6.9225 [04/18/2022-02:40:54] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x256x8_stage3_warpsize1x4x1_g1_ffma_t1r1s1 Tactic: 6002893715742835901 [04/18/2022-02:40:54] [V] [TRT] Tactic: 6002893715742835901 Time: 10.2275 [04/18/2022-02:40:54] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: ampere_scudnn_128x128_ldg4_relu_exp_medium_nhwc_tn_v1 Tactic: 8918020581761223752 [04/18/2022-02:40:54] [V] [TRT] Tactic: 8918020581761223752 Time: 6.85709 [04/18/2022-02:40:54] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize256x128x8_stage3_warpsize2x4x1_g1_ffma_simple_t1r1s1 Tactic: 9016055318246906759 [04/18/2022-02:40:54] [V] [TRT] Tactic: 9016055318246906759 Time: 8.15514 [04/18/2022-02:40:54] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x128x8_stage3_warpsize1x4x1_g1_ffma_t1r1s1 Tactic: -7609160790790750215 [04/18/2022-02:40:55] [V] [TRT] Tactic: -7609160790790750215 Time: 6.73254 [04/18/2022-02:40:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x128x8_stage3_warpsize1x4x1_g1_ffma_simple_t1r1s1 Tactic: -7054781547842146201 [04/18/2022-02:40:55] [V] [TRT] Tactic: -7054781547842146201 Time: 6.64422 
[04/18/2022-02:40:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x64x8_stage3_warpsize1x4x1_g1_ffma_simple_t1r1s1 Tactic: -6773414409150198858
[04/18/2022-02:40:55] [V] [TRT] Tactic: -6773414409150198858 Time: 6.7625
[04/18/2022-02:40:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x64x8_stage3_warpsize1x4x1_g1_ffma_simple_t1r1s1 Tactic: -5980517219165853661
[04/18/2022-02:40:55] [V] [TRT] Tactic: -5980517219165853661 Time: 8.85734
[04/18/2022-02:40:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x256x8_stage3_warpsize1x4x1_g1_ffma_simple_t1r1s1 Tactic: -5910172158931405628
[04/18/2022-02:40:55] [V] [TRT] Tactic: -5910172158931405628 Time: 1.52269
[04/18/2022-02:40:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: ampere_scudnn_128x64_sliced1x2_ldg4_relu_exp_interior_nhwc_tn_v1 Tactic: -5905193483742532701
[04/18/2022-02:40:55] [V] [TRT] Tactic: -5905193483742532701 Time: 0.472064
[04/18/2022-02:40:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x256x8_stage3_warpsize2x4x1_g1_ffma_simple_t1r1s1 Tactic: -4196636767445012021
[04/18/2022-02:40:55] [V] [TRT] Tactic: -4196636767445012021 Time: 1.51846
[04/18/2022-02:40:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: ampere_scudnn_128x32_sliced1x4_ldg4_relu_exp_interior_nhwc_tn_v1 Tactic: -4035591156787122265
[04/18/2022-02:40:55] [V] [TRT] Tactic: -4035591156787122265 Time: 0.390912
[04/18/2022-02:40:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x128x8_stage3_warpsize2x4x1_g1_ffma_t1r1s1 Tactic: -3829074795144908279
[04/18/2022-02:40:55] [V] [TRT] Tactic: -3829074795144908279 Time: 1.0569
[04/18/2022-02:40:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: ampere_scudnn_128x128_relu_exp_medium_nhwc_tn_v1 Tactic: -2809379259463049391
[04/18/2022-02:40:55] [V] [TRT] Tactic: -2809379259463049391 Time: 0.676736
[04/18/2022-02:40:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: ampere_scudnn_128x128_ldg4_relu_exp_interior_nhwc_tn_v1 Tactic: -1985235291706575900
[04/18/2022-02:40:55] [V] [TRT] Tactic: -1985235291706575900 Time: 0.703232
[04/18/2022-02:40:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x128x8_stage3_warpsize2x4x1_g1_ffma_simple_t1r1s1 Tactic: -711510282315844248
[04/18/2022-02:40:55] [V] [TRT] Tactic: -711510282315844248 Time: 0.924672
[04/18/2022-02:40:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: ampere_scudnn_128x128_ldg4_relu_exp_small_nhwc_tn_v1 Tactic: -504296718212024303
[04/18/2022-02:40:55] [V] [TRT] Tactic: -504296718212024303 Time: 0.710784
[04/18/2022-02:40:55] [V] [TRT] Fastest Tactic: -4035591156787122265 Time: 0.390912
[04/18/2022-02:40:55] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: -4035591156787122265
[04/18/2022-02:40:55] [V] [TRT] *************** Autotuning format combination: Float(524288,1:4,2048,8) -> Float(262144,1:4,1024,4) ***************
[04/18/2022-02:40:55] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D (CudnnConvolution)
[04/18/2022-02:40:55] [V] [TRT] CudnnConvolution has no valid tactics for this config, skipping
[04/18/2022-02:40:55] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D (CublasConvolution)
[04/18/2022-02:40:55] [V] [TRT] CublasConvolution has no valid tactics for this config, skipping
[04/18/2022-02:40:55] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D (CaskConvolution)
[04/18/2022-02:40:55] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize256x128x8_stage3_warpsize2x4x1_g1_ffma_t1r1s1 Tactic: 676988335020687107
[04/18/2022-02:40:56] [V] [TRT] Tactic: 676988335020687107 Time: 0.819712
[04/18/2022-02:40:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x256x8_stage3_warpsize2x4x1_g1_ffma_t1r1s1 Tactic: 1149579359391877453
[04/18/2022-02:40:56] [V] [TRT] Tactic: 1149579359391877453 Time: 1.51744
[04/18/2022-02:40:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_tf32f32_f32_nhwckrsc_nhwc_tilesize128x128x16_stage4_warpsize2x2x1_g1_tensor16x8x8_t1r1s1 Tactic: 1373022415249282411
[04/18/2022-02:40:56] [V] [TRT] Tactic: 1373022415249282411 Time: 0.698752
[04/18/2022-02:40:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: ampere_scudnn_128x128_relu_exp_interior_nhwc_tn_v1 Tactic: 1663866669559596164
[04/18/2022-02:40:56] [V] [TRT] Tactic: 1663866669559596164 Time: 0.66496
[04/18/2022-02:40:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x64x8_stage3_warpsize1x4x1_g1_ffma_t1r1s1 Tactic: 1995961315573863697
[04/18/2022-02:40:56] [V] [TRT] Tactic: 1995961315573863697 Time: 0.506752
[04/18/2022-02:40:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: ampere_scudnn_128x64_sliced1x2_ldg4_relu_exp_small_nhwc_tn_v1 Tactic: 2860655430572478466
[04/18/2022-02:40:56] [V] [TRT] Tactic: 2860655430572478466 Time: 0.479488
[04/18/2022-02:40:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize256x64x8_stage3_warpsize1x4x1_g1_ffma_simple_t1r1s1 Tactic: 4232768147062126270
[04/18/2022-02:40:56] [V] [TRT] Tactic: 4232768147062126270 Time: 0.501888
[04/18/2022-02:40:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: ampere_scudnn_128x32_sliced1x4_ldg4_relu_exp_medium_nhwc_tn_v1 Tactic: 4474630279712975759
[04/18/2022-02:40:56] [V] [TRT] Tactic: 4474630279712975759 Time: 0.402944
[04/18/2022-02:40:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: ampere_scudnn_128x32_sliced1x4_ldg4_relu_exp_small_nhwc_tn_v1 Tactic: 4479823862704990365
[04/18/2022-02:40:56] [V] [TRT] Tactic: 4479823862704990365 Time: 0.396672
[04/18/2022-02:40:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: ampere_scudnn_128x64_sliced1x2_ldg4_relu_exp_medium_nhwc_tn_v1 Tactic: 4696204239951173149
[04/18/2022-02:40:56] [V] [TRT] Tactic: 4696204239951173149 Time: 0.423552
[04/18/2022-02:40:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x64x8_stage3_warpsize1x4x1_g1_ffma_t1r1s1 Tactic: 5061046663754203417
[04/18/2022-02:40:56] [V] [TRT] Tactic: 5061046663754203417 Time: 0.425472
[04/18/2022-02:40:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize256x64x8_stage3_warpsize1x4x1_g1_ffma_t1r1s1 Tactic: 5660369513040054181
[04/18/2022-02:40:56] [V] [TRT] Tactic: 5660369513040054181 Time: 0.47424
[04/18/2022-02:40:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: ampere_scudnn_128x128_relu_exp_small_nhwc_tn_v1 Tactic: 5778138195697110003
[04/18/2022-02:40:56] [V] [TRT] Tactic: 5778138195697110003 Time: 0.593664
[04/18/2022-02:40:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x256x8_stage3_warpsize1x4x1_g1_ffma_t1r1s1 Tactic: 6002893715742835901
[04/18/2022-02:40:56] [V] [TRT] Tactic: 6002893715742835901 Time: 1.14406
[04/18/2022-02:40:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: ampere_scudnn_128x128_ldg4_relu_exp_medium_nhwc_tn_v1 Tactic: 8918020581761223752
[04/18/2022-02:40:56] [V] [TRT] Tactic: 8918020581761223752 Time: 0.578808
[04/18/2022-02:40:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize256x128x8_stage3_warpsize2x4x1_g1_ffma_simple_t1r1s1 Tactic: 9016055318246906759
[04/18/2022-02:40:56] [V] [TRT] Tactic: 9016055318246906759 Time: 0.718208
[04/18/2022-02:40:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x128x8_stage3_warpsize1x4x1_g1_ffma_t1r1s1 Tactic: -7609160790790750215
[04/18/2022-02:40:56] [V] [TRT] Tactic: -7609160790790750215 Time: 0.638848
[04/18/2022-02:40:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_tf32f32_f32_nhwckrsc_nhwc_tilesize128x128x16_stage4_warpsize2x2x1_g1_tensor16x8x8_simple_t1r1s1 Tactic: -7067026478815706014
[04/18/2022-02:40:56] [V] [TRT] Tactic: -7067026478815706014 Time: 0.598528
[04/18/2022-02:40:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x128x8_stage3_warpsize1x4x1_g1_ffma_simple_t1r1s1 Tactic: -7054781547842146201
[04/18/2022-02:40:56] [V] [TRT] Tactic: -7054781547842146201 Time: 0.626048
[04/18/2022-02:40:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x64x8_stage3_warpsize1x4x1_g1_ffma_simple_t1r1s1 Tactic: -6773414409150198858
[04/18/2022-02:40:56] [V] [TRT] Tactic: -6773414409150198858 Time: 0.445056
[04/18/2022-02:40:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x64x8_stage3_warpsize1x4x1_g1_ffma_simple_t1r1s1 Tactic: -5980517219165853661
[04/18/2022-02:40:56] [V] [TRT] Tactic: -5980517219165853661 Time: 0.408576
[04/18/2022-02:40:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x256x8_stage3_warpsize1x4x1_g1_ffma_simple_t1r1s1 Tactic: -5910172158931405628
[04/18/2022-02:40:56] [V] [TRT] Tactic: -5910172158931405628 Time: 1.23494
[04/18/2022-02:40:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: ampere_scudnn_128x64_sliced1x2_ldg4_relu_exp_interior_nhwc_tn_v1 Tactic: -5905193483742532701
[04/18/2022-02:40:56] [V] [TRT] Tactic: -5905193483742532701 Time: 0.414976
[04/18/2022-02:40:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x256x8_stage3_warpsize2x4x1_g1_ffma_simple_t1r1s1 Tactic: -4196636767445012021
[04/18/2022-02:40:56] [V] [TRT] Tactic: -4196636767445012021 Time: 1.21178
[04/18/2022-02:40:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: ampere_scudnn_128x32_sliced1x4_ldg4_relu_exp_interior_nhwc_tn_v1 Tactic: -4035591156787122265
[04/18/2022-02:40:56] [V] [TRT] Tactic: -4035591156787122265 Time: 0.338048
[04/18/2022-02:40:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x128x8_stage3_warpsize2x4x1_g1_ffma_t1r1s1 Tactic: -3829074795144908279
[04/18/2022-02:40:56] [V] [TRT] Tactic: -3829074795144908279 Time: 0.827008
[04/18/2022-02:40:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: ampere_scudnn_128x128_relu_exp_medium_nhwc_tn_v1 Tactic: -2809379259463049391
[04/18/2022-02:40:56] [V] [TRT] Tactic: -2809379259463049391 Time: 0.62976
[04/18/2022-02:40:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: ampere_scudnn_128x128_ldg4_relu_exp_interior_nhwc_tn_v1 Tactic: -1985235291706575900
[04/18/2022-02:40:56] [V] [TRT] Tactic: -1985235291706575900 Time: 0.570496
[04/18/2022-02:40:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x128x8_stage3_warpsize2x4x1_g1_ffma_simple_t1r1s1 Tactic: -711510282315844248
[04/18/2022-02:40:56] [V] [TRT] Tactic: -711510282315844248 Time: 0.768128
[04/18/2022-02:40:56] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_0/block_0/project_conv2d/Conv2D Set Tactic Name: ampere_scudnn_128x128_ldg4_relu_exp_small_nhwc_tn_v1 Tactic: -504296718212024303
[04/18/2022-02:40:56] [V] [TRT] Tactic: -504296718212024303 Time: 0.554616
[04/18/2022-02:40:56] [V] [TRT] Fastest Tactic: -4035591156787122265 Time: 0.338048
[04/18/2022-02:40:56] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: -4035591156787122265
[04/18/2022-02:40:56] [V] [TRT] =============== Computing costs for
[04/18/2022-02:40:56] [V] [TRT] *************** Autotuning format combination: Float(1048576,65536,256,1) -> Float(6291456,65536,256,1) ***************
[04/18/2022-02:40:56] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 (CudaDepthwiseConvolution)
[04/18/2022-02:40:56] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping
[04/18/2022-02:40:56] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 (FusedConvActConvolution)
[04/18/2022-02:40:56] [V] [TRT] FusedConvActConvolution has no valid tactics for this config, skipping
[04/18/2022-02:40:56] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 (CudnnConvolution)
[04/18/2022-02:40:56] [V] [TRT] Tactic: 0 Time: 1.92653
[04/18/2022-02:40:56] [V] [TRT] Tactic: 1 Time: 1.54611
[04/18/2022-02:40:56] [V] [TRT] Tactic: 2 Time: 2.13427
[04/18/2022-02:40:56] [V] [TRT] Tactic: 4 Time: 7.97824
[04/18/2022-02:40:56] [V] [TRT] Tactic: 5 Time: 3.01222
[04/18/2022-02:40:56] [V] [TRT] Tactic: 56 Time: 1.89568
[04/18/2022-02:40:56] [V] [TRT] Tactic: 57 Time: 1.54099
[04/18/2022-02:40:57] [V] [TRT] Tactic: 58 Time: 2.04506
[04/18/2022-02:40:57] [V] [TRT] Tactic: 60 Time: 7.97133
[04/18/2022-02:40:57] [V] [TRT] Tactic: 61 Time: 2.99878
[04/18/2022-02:40:57] [V] [TRT] Tactic: 112 Time: 1.89965
[04/18/2022-02:40:57] [V] [TRT] Tactic: 113 Time: 1.71968
[04/18/2022-02:40:57] [V] [TRT] Tactic: 114 Time: 2.04339
[04/18/2022-02:40:57] [V] [TRT] Tactic: 116 Time: 8.18419
[04/18/2022-02:40:57] [V] [TRT] Tactic: 117 Time: 3.06726
[04/18/2022-02:40:57] [V] [TRT] Fastest Tactic: 57 Time: 1.54099
[04/18/2022-02:40:57] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 (CublasConvolution)
[04/18/2022-02:40:57] [V] [TRT] CublasConvolution has no valid tactics for this config, skipping
[04/18/2022-02:40:57] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 (CaskConvolution)
[04/18/2022-02:40:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: ampere_scudnn_128x64_relu_small_nn_v1 Tactic: 4549827808004681195
[04/18/2022-02:40:57] [V] [TRT] Tactic: 4549827808004681195 Time: 0.720512
[04/18/2022-02:40:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: ampere_scudnn_128x128_relu_small_nn_v1 Tactic: 5779835512569528575
[04/18/2022-02:40:57] [V] [TRT] Tactic: 5779835512569528575 Time: 0.735616
[04/18/2022-02:40:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_tf32f32_f32_nchwkrsc_nchw_tilesize128x128x16_stage4_warpsize2x2x1_g1_tensor16x8x8_t1r1s1_aligna4_alignc4 Tactic: 9151672657204310840
[04/18/2022-02:40:57] [V] [TRT] Tactic: 9151672657204310840 Time: 1.09632
[04/18/2022-02:40:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: ampere_scudnn_128x32_relu_interior_nn_v1 Tactic: -7491730084094677098
[04/18/2022-02:40:57] [V] [TRT] Tactic: -7491730084094677098 Time: 0.74816
[04/18/2022-02:40:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_tf32f32_f32_nchwkrsc_nchw_tilesize128x128x16_stage4_warpsize2x2x1_g1_tensor16x8x8_simple_t1r1s1_aligna4_alignc4 Tactic: -6622064180404051845
[04/18/2022-02:40:57] [V] [TRT] Tactic: -6622064180404051845 Time: 0.985472
[04/18/2022-02:40:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: ampere_scudnn_128x32_relu_small_nn_v1 Tactic: -6313876406580483184
[04/18/2022-02:40:57] [V] [TRT] Tactic: -6313876406580483184 Time: 0.754944
[04/18/2022-02:40:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: ampere_scudnn_128x128_relu_interior_nn_v1 Tactic: -6273689210331812572
[04/18/2022-02:40:57] [V] [TRT] Tactic: -6273689210331812572 Time: 0.71616
[04/18/2022-02:40:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: ampere_scudnn_128x64_relu_interior_nn_v1 Tactic: -4337126844824617177
[04/18/2022-02:40:57] [V] [TRT] Tactic: -4337126844824617177 Time: 0.701312
[04/18/2022-02:40:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: ampere_scudnn_128x128_relu_medium_nn_v1 Tactic: -1123676555321336786
[04/18/2022-02:40:57] [V] [TRT] Tactic: -1123676555321336786 Time: 0.730368
[04/18/2022-02:40:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: ampere_scudnn_128x64_relu_medium_nn_v1 Tactic: -701551393537224327
[04/18/2022-02:40:57] [V] [TRT] Tactic: -701551393537224327 Time: 0.718336
[04/18/2022-02:40:57] [V] [TRT] Fastest Tactic: -4337126844824617177 Time: 0.701312
[04/18/2022-02:40:57] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: -4337126844824617177
[04/18/2022-02:40:57] [V] [TRT] *************** Autotuning format combination: Float(1048576,1,4096,16) -> Float(6291456,1,24576,96) ***************
[04/18/2022-02:40:57] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 (CudnnConvolution)
[04/18/2022-02:40:57] [V] [TRT] CudnnConvolution has no valid tactics for this config, skipping
[04/18/2022-02:40:57] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 (CublasConvolution)
[04/18/2022-02:40:57] [V] [TRT] CublasConvolution has no valid tactics for this config, skipping
[04/18/2022-02:40:57] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 (CaskConvolution)
[04/18/2022-02:40:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize256x128x8_stage3_warpsize2x4x1_g1_ffma_t1r1s1 Tactic: 676988335020687107
[04/18/2022-02:40:57] [V] [TRT] Tactic: 676988335020687107 Time: 0.646912
[04/18/2022-02:40:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x256x8_stage3_warpsize2x4x1_g1_ffma_t1r1s1 Tactic: 1149579359391877453
[04/18/2022-02:40:57] [V] [TRT] Tactic: 1149579359391877453 Time: 0.946176
[04/18/2022-02:40:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: ampere_scudnn_128x128_relu_exp_interior_nhwc_tn_v1 Tactic: 1663866669559596164
[04/18/2022-02:40:57] [V] [TRT] Tactic: 1663866669559596164 Time: 0.590848
[04/18/2022-02:40:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x64x8_stage3_warpsize1x4x1_g1_ffma_t1r1s1 Tactic: 1995961315573863697
[04/18/2022-02:40:57] [V] [TRT] Tactic: 1995961315573863697 Time: 0.576768
[04/18/2022-02:40:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: ampere_scudnn_128x64_sliced1x2_ldg4_relu_exp_small_nhwc_tn_v1 Tactic: 2860655430572478466
[04/18/2022-02:40:57] [V] [TRT] Tactic: 2860655430572478466 Time: 0.77568
[04/18/2022-02:40:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize256x64x8_stage3_warpsize1x4x1_g1_ffma_simple_t1r1s1 Tactic: 4232768147062126270
[04/18/2022-02:40:57] [V] [TRT] Tactic: 4232768147062126270 Time: 0.735616
[04/18/2022-02:40:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: ampere_scudnn_128x32_sliced1x4_ldg4_relu_exp_medium_nhwc_tn_v1 Tactic: 4474630279712975759
[04/18/2022-02:40:57] [V] [TRT] Tactic: 4474630279712975759 Time: 1.10195
[04/18/2022-02:40:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: ampere_scudnn_128x32_sliced1x4_ldg4_relu_exp_small_nhwc_tn_v1 Tactic: 4479823862704990365
[04/18/2022-02:40:57] [V] [TRT] Tactic: 4479823862704990365 Time: 1.04742
[04/18/2022-02:40:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: ampere_scudnn_128x64_sliced1x2_ldg4_relu_exp_medium_nhwc_tn_v1 Tactic: 4696204239951173149
[04/18/2022-02:40:57] [V] [TRT] Tactic: 4696204239951173149 Time: 0.775424
[04/18/2022-02:40:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x64x8_stage3_warpsize1x4x1_g1_ffma_t1r1s1 Tactic: 5061046663754203417
[04/18/2022-02:40:57] [V] [TRT] Tactic: 5061046663754203417 Time: 0.645376
[04/18/2022-02:40:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize256x64x8_stage3_warpsize1x4x1_g1_ffma_t1r1s1 Tactic: 5660369513040054181
[04/18/2022-02:40:57] [V] [TRT] Tactic: 5660369513040054181 Time: 0.738176
[04/18/2022-02:40:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: ampere_scudnn_128x128_relu_exp_small_nhwc_tn_v1 Tactic: 5778138195697110003
[04/18/2022-02:40:57] [V] [TRT] Tactic: 5778138195697110003 Time: 0.616832
[04/18/2022-02:40:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x256x8_stage3_warpsize1x4x1_g1_ffma_t1r1s1 Tactic: 6002893715742835901
[04/18/2022-02:40:57] [V] [TRT] Tactic: 6002893715742835901 Time: 0.864768
[04/18/2022-02:40:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: ampere_scudnn_128x128_ldg4_relu_exp_medium_nhwc_tn_v1 Tactic: 8918020581761223752
[04/18/2022-02:40:57] [V] [TRT] Tactic: 8918020581761223752 Time: 0.60032
[04/18/2022-02:40:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize256x128x8_stage3_warpsize2x4x1_g1_ffma_simple_t1r1s1 Tactic: 9016055318246906759
[04/18/2022-02:40:57] [V] [TRT] Tactic: 9016055318246906759 Time: 0.685568
[04/18/2022-02:40:57] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x128x8_stage3_warpsize1x4x1_g1_ffma_t1r1s1 Tactic: -7609160790790750215
[04/18/2022-02:40:58] [V] [TRT] Tactic: -7609160790790750215 Time: 0.58112
[04/18/2022-02:40:58] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x128x8_stage3_warpsize1x4x1_g1_ffma_simple_t1r1s1 Tactic: -7054781547842146201
[04/18/2022-02:40:58] [V] [TRT] Tactic: -7054781547842146201 Time: 0.592
[04/18/2022-02:40:58] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x64x8_stage3_warpsize1x4x1_g1_ffma_simple_t1r1s1 Tactic: -6773414409150198858
[04/18/2022-02:40:58] [V] [TRT] Tactic: -6773414409150198858 Time: 0.622336
[04/18/2022-02:40:58] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x64x8_stage3_warpsize1x4x1_g1_ffma_simple_t1r1s1 Tactic: -5980517219165853661
[04/18/2022-02:40:58] [V] [TRT] Tactic: -5980517219165853661 Time: 0.6624
[04/18/2022-02:40:58] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x256x8_stage3_warpsize1x4x1_g1_ffma_simple_t1r1s1 Tactic: -5910172158931405628
[04/18/2022-02:40:58] [V] [TRT] Tactic: -5910172158931405628 Time: 0.8384
[04/18/2022-02:40:58] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: ampere_scudnn_128x64_sliced1x2_ldg4_relu_exp_interior_nhwc_tn_v1 Tactic: -5905193483742532701
[04/18/2022-02:40:58] [V] [TRT] Tactic: -5905193483742532701 Time: 0.793088
[04/18/2022-02:40:58] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x256x8_stage3_warpsize2x4x1_g1_ffma_simple_t1r1s1 Tactic: -4196636767445012021
[04/18/2022-02:40:58] [V] [TRT] Tactic: -4196636767445012021 Time: 1.00826
[04/18/2022-02:40:58] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: ampere_scudnn_128x32_sliced1x4_ldg4_relu_exp_interior_nhwc_tn_v1 Tactic: -4035591156787122265
[04/18/2022-02:40:58] [V] [TRT] Tactic: -4035591156787122265 Time: 1.01875
[04/18/2022-02:40:58] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x128x8_stage3_warpsize2x4x1_g1_ffma_t1r1s1 Tactic: -3829074795144908279
[04/18/2022-02:40:58] [V] [TRT] Tactic: -3829074795144908279 Time: 0.69056
[04/18/2022-02:40:58] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: ampere_scudnn_128x128_relu_exp_medium_nhwc_tn_v1 Tactic: -2809379259463049391
[04/18/2022-02:40:58] [V] [TRT] Tactic: -2809379259463049391 Time: 0.600576
[04/18/2022-02:40:58] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: ampere_scudnn_128x128_ldg4_relu_exp_interior_nhwc_tn_v1 Tactic: -1985235291706575900
[04/18/2022-02:40:58] [V] [TRT] Tactic: -1985235291706575900 Time: 0.590464
[04/18/2022-02:40:58] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x128x8_stage3_warpsize2x4x1_g1_ffma_simple_t1r1s1 Tactic: -711510282315844248
[04/18/2022-02:40:58] [V] [TRT] Tactic: -711510282315844248 Time: 0.662784
[04/18/2022-02:40:58] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: ampere_scudnn_128x128_ldg4_relu_exp_small_nhwc_tn_v1 Tactic: -504296718212024303
[04/18/2022-02:40:58] [V] [TRT] Tactic: -504296718212024303 Time: 0.591616
[04/18/2022-02:40:58] [V] [TRT] Fastest Tactic: 1995961315573863697 Time: 0.576768
[04/18/2022-02:40:58] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: 1995961315573863697
[04/18/2022-02:40:58] [V] [TRT] *************** Autotuning format combination: Float(262144,1:4,1024,4) -> Float(1572864,1:4,6144,24) ***************
[04/18/2022-02:40:58] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 (CudnnConvolution)
[04/18/2022-02:40:58] [V] [TRT] CudnnConvolution has no valid tactics for this config, skipping
[04/18/2022-02:40:58] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 (CublasConvolution)
[04/18/2022-02:40:58] [V] [TRT] CublasConvolution has no valid tactics for this config, skipping
[04/18/2022-02:40:58] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 (CaskConvolution)
[04/18/2022-02:40:58] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize256x128x8_stage3_warpsize2x4x1_g1_ffma_t1r1s1 Tactic: 676988335020687107
[04/18/2022-02:40:58] [V] [TRT] Tactic: 676988335020687107 Time: 0.651904
[04/18/2022-02:40:58] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x256x8_stage3_warpsize2x4x1_g1_ffma_t1r1s1 Tactic: 1149579359391877453
[04/18/2022-02:40:58] [V] [TRT] Tactic: 1149579359391877453 Time: 0.95872
[04/18/2022-02:40:58] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_tf32f32_f32_nhwckrsc_nhwc_tilesize128x128x16_stage4_warpsize2x2x1_g1_tensor16x8x8_t1r1s1 Tactic: 1373022415249282411
[04/18/2022-02:40:58] [V] [TRT] Tactic: 1373022415249282411 Time: 0.57792
[04/18/2022-02:40:58] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: ampere_scudnn_128x128_relu_exp_interior_nhwc_tn_v1 Tactic: 1663866669559596164
[04/18/2022-02:40:58] [V] [TRT] Tactic: 1663866669559596164 Time: 0.589056
[04/18/2022-02:40:58] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x64x8_stage3_warpsize1x4x1_g1_ffma_t1r1s1 Tactic: 1995961315573863697
[04/18/2022-02:40:58] [V] [TRT] Tactic: 1995961315573863697 Time: 0.573952
[04/18/2022-02:40:58] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: ampere_scudnn_128x64_sliced1x2_ldg4_relu_exp_small_nhwc_tn_v1 Tactic: 2860655430572478466 [04/18/2022-02:40:58] [V] [TRT]
Tactic: 2860655430572478466 Time: 0.766592 [04/18/2022-02:40:58] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize256x64x8_stage3_warpsize1x4x1_g1_ffma_simple_t1r1s1 Tactic: 4232768147062126270 [04/18/2022-02:40:58] [V] [TRT] Tactic: 4232768147062126270 Time: 0.736 [04/18/2022-02:40:58] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: ampere_scudnn_128x32_sliced1x4_ldg4_relu_exp_medium_nhwc_tn_v1 Tactic: 4474630279712975759 [04/18/2022-02:40:58] [V] [TRT] Tactic: 4474630279712975759 Time: 1.10118 [04/18/2022-02:40:58] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: ampere_scudnn_128x32_sliced1x4_ldg4_relu_exp_small_nhwc_tn_v1 Tactic: 4479823862704990365 [04/18/2022-02:40:58] [V] [TRT] Tactic: 4479823862704990365 Time: 1.09261 [04/18/2022-02:40:58] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: ampere_scudnn_128x64_sliced1x2_ldg4_relu_exp_medium_nhwc_tn_v1 Tactic: 4696204239951173149 [04/18/2022-02:40:58] [V] [TRT] Tactic: 4696204239951173149 Time: 0.775552 [04/18/2022-02:40:58] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: 
sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x64x8_stage3_warpsize1x4x1_g1_ffma_t1r1s1 Tactic: 5061046663754203417 [04/18/2022-02:40:58] [V] [TRT] Tactic: 5061046663754203417 Time: 0.64384 [04/18/2022-02:40:58] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize256x64x8_stage3_warpsize1x4x1_g1_ffma_t1r1s1 Tactic: 5660369513040054181 [04/18/2022-02:40:58] [V] [TRT] Tactic: 5660369513040054181 Time: 0.744832 [04/18/2022-02:40:58] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: ampere_scudnn_128x128_relu_exp_small_nhwc_tn_v1 Tactic: 5778138195697110003 [04/18/2022-02:40:58] [V] [TRT] Tactic: 5778138195697110003 Time: 0.595968 [04/18/2022-02:40:58] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x256x8_stage3_warpsize1x4x1_g1_ffma_t1r1s1 Tactic: 6002893715742835901 [04/18/2022-02:40:58] [V] [TRT] Tactic: 6002893715742835901 Time: 0.915456 [04/18/2022-02:40:58] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: ampere_scudnn_128x128_ldg4_relu_exp_medium_nhwc_tn_v1 Tactic: 8918020581761223752 [04/18/2022-02:40:58] [V] [TRT] Tactic: 8918020581761223752 Time: 0.595968 [04/18/2022-02:40:58] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize256x128x8_stage3_warpsize2x4x1_g1_ffma_simple_t1r1s1 Tactic: 9016055318246906759 [04/18/2022-02:40:58] [V] [TRT] Tactic: 9016055318246906759 Time: 0.688256 [04/18/2022-02:40:58] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x128x8_stage3_warpsize1x4x1_g1_ffma_t1r1s1 Tactic: -7609160790790750215 [04/18/2022-02:40:58] [V] [TRT] Tactic: -7609160790790750215 Time: 0.577664 [04/18/2022-02:40:58] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_tf32f32_f32_nhwckrsc_nhwc_tilesize128x128x16_stage4_warpsize2x2x1_g1_tensor16x8x8_simple_t1r1s1 Tactic: -7067026478815706014 [04/18/2022-02:40:58] [V] [TRT] Tactic: -7067026478815706014 Time: 0.57728 [04/18/2022-02:40:58] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x128x8_stage3_warpsize1x4x1_g1_ffma_simple_t1r1s1 Tactic: -7054781547842146201 [04/18/2022-02:40:58] [V] [TRT] Tactic: -7054781547842146201 Time: 0.576128 [04/18/2022-02:40:58] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x64x8_stage3_warpsize1x4x1_g1_ffma_simple_t1r1s1 Tactic: -6773414409150198858 [04/18/2022-02:40:58] [V] [TRT] Tactic: -6773414409150198858 Time: 0.575872 [04/18/2022-02:40:58] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x64x8_stage3_warpsize1x4x1_g1_ffma_simple_t1r1s1 Tactic: -5980517219165853661 [04/18/2022-02:40:58] [V] [TRT] Tactic: -5980517219165853661 Time: 0.600064 [04/18/2022-02:40:58] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize64x256x8_stage3_warpsize1x4x1_g1_ffma_simple_t1r1s1 Tactic: -5910172158931405628 [04/18/2022-02:40:58] [V] [TRT] Tactic: -5910172158931405628 Time: 0.877184 [04/18/2022-02:40:58] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: ampere_scudnn_128x64_sliced1x2_ldg4_relu_exp_interior_nhwc_tn_v1 Tactic: -5905193483742532701 [04/18/2022-02:40:58] [V] [TRT] Tactic: -5905193483742532701 Time: 0.835328 [04/18/2022-02:40:58] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: 
sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x256x8_stage3_warpsize2x4x1_g1_ffma_simple_t1r1s1 Tactic: -4196636767445012021 [04/18/2022-02:40:58] [V] [TRT] Tactic: -4196636767445012021 Time: 1.01722 [04/18/2022-02:40:58] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: ampere_scudnn_128x32_sliced1x4_ldg4_relu_exp_interior_nhwc_tn_v1 Tactic: -4035591156787122265 [04/18/2022-02:40:58] [V] [TRT] Tactic: -4035591156787122265 Time: 1.00621 [04/18/2022-02:40:58] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x128x8_stage3_warpsize2x4x1_g1_ffma_t1r1s1 Tactic: -3829074795144908279 [04/18/2022-02:40:58] [V] [TRT] Tactic: -3829074795144908279 Time: 0.681216 [04/18/2022-02:40:58] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: ampere_scudnn_128x128_relu_exp_medium_nhwc_tn_v1 Tactic: -2809379259463049391 [04/18/2022-02:40:58] [V] [TRT] Tactic: -2809379259463049391 Time: 0.597504 [04/18/2022-02:40:58] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: ampere_scudnn_128x128_ldg4_relu_exp_interior_nhwc_tn_v1 Tactic: -1985235291706575900 [04/18/2022-02:40:58] [V] [TRT] Tactic: -1985235291706575900 Time: 0.632064 [04/18/2022-02:40:58] [V] [TRT] 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: sm80_xmma_fprop_implicit_gemm_f32f32_f32f32_f32_nhwckrsc_nhwc_tilesize128x128x8_stage3_warpsize2x4x1_g1_ffma_simple_t1r1s1 Tactic: -711510282315844248 [04/18/2022-02:40:58] [V] [TRT] Tactic: -711510282315844248 Time: 0.708352 [04/18/2022-02:40:58] [V] [TRT] StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_conv2d/Conv2D + StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_bn/FusedBatchNormV3 Set Tactic Name: ampere_scudnn_128x128_ldg4_relu_exp_small_nhwc_tn_v1 Tactic: -504296718212024303 [04/18/2022-02:40:58] [V] [TRT] Tactic: -504296718212024303 Time: 0.600448 [04/18/2022-02:40:58] [V] [TRT] Fastest Tactic: 1995961315573863697 Time: 0.573952 [04/18/2022-02:40:58] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: 1995961315573863697 [04/18/2022-02:40:58] [V] [TRT] =============== Computing costs for [04/18/2022-02:40:58] [V] [TRT] *************** Autotuning format combination: Float(6291456,65536,256,1) -> Float(6291456,65536,256,1) *************** [04/18/2022-02:40:58] [V] [TRT] --------------- Timing Runner: PWN(PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_activation/Sigmoid), StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_activation/mul) (PointWiseV2) [04/18/2022-02:40:58] [V] [TRT] Tactic: 0 Time: 0.96576 [04/18/2022-02:40:58] [V] [TRT] Tactic: 1 Time: 0.992128 [04/18/2022-02:40:58] [V] [TRT] Tactic: 2 Time: 0.974848 [04/18/2022-02:40:58] [V] [TRT] Tactic: 3 Time: 0.96832 [04/18/2022-02:40:58] [V] [TRT] Tactic: 4 Time: 0.982144 [04/18/2022-02:40:58] [V] [TRT] Tactic: 5 Time: 0.973056 [04/18/2022-02:40:58] [V] [TRT] Tactic: 6 Time: 0.962304 [04/18/2022-02:40:58] [V] [TRT] Tactic: 7 Time: 1.00621 [04/18/2022-02:40:58] [V] [TRT] 
Tactic: 8 Time: 0.967424 [04/18/2022-02:40:58] [V] [TRT] Tactic: 9 Time: 0.979328 [04/18/2022-02:40:58] [V] [TRT] Tactic: 28 Time: 0.960384 [04/18/2022-02:40:58] [V] [TRT] Fastest Tactic: 28 Time: 0.960384 [04/18/2022-02:40:58] [V] [TRT] --------------- Timing Runner: PWN(PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_activation/Sigmoid), StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_activation/mul) (PointWise) [04/18/2022-02:40:59] [V] [TRT] Tactic: 128 Time: 1.08966 [04/18/2022-02:40:59] [V] [TRT] Tactic: 256 Time: 1.13549 [04/18/2022-02:40:59] [V] [TRT] Tactic: 512 Time: 1.19347 [04/18/2022-02:40:59] [V] [TRT] Tactic: -32 Time: 2.05952 [04/18/2022-02:40:59] [V] [TRT] Tactic: -64 Time: 1.21075 [04/18/2022-02:40:59] [V] [TRT] Tactic: -128 Time: 1.15584 [04/18/2022-02:40:59] [V] [TRT] Fastest Tactic: 128 Time: 1.08966 [04/18/2022-02:40:59] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: PointWiseV2 Tactic: 28 [04/18/2022-02:40:59] [V] [TRT] *************** Autotuning format combination: Float(6291456,1,24576,96) -> Float(6291456,1,24576,96) *************** [04/18/2022-02:40:59] [V] [TRT] --------------- Timing Runner: PWN(PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_activation/Sigmoid), StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_activation/mul) (PointWiseV2) [04/18/2022-02:40:59] [V] [TRT] Tactic: 0 Time: 0.967552 [04/18/2022-02:40:59] [V] [TRT] Tactic: 1 Time: 0.955776 [04/18/2022-02:40:59] [V] [TRT] Tactic: 2 Time: 0.97536 [04/18/2022-02:40:59] [V] [TRT] Tactic: 3 Time: 0.966784 [04/18/2022-02:40:59] [V] [TRT] Tactic: 4 Time: 1.0464 [04/18/2022-02:40:59] [V] [TRT] Tactic: 5 Time: 0.97216 [04/18/2022-02:40:59] [V] [TRT] Tactic: 6 Time: 0.966784 [04/18/2022-02:40:59] [V] [TRT] Tactic: 7 Time: 0.980992 [04/18/2022-02:40:59] [V] [TRT] Tactic: 8 Time: 0.967168 [04/18/2022-02:40:59] [V] [TRT] Tactic: 9 Time: 1.0144 [04/18/2022-02:40:59] [V] 
[TRT] Tactic: 28 Time: 0.965504 [04/18/2022-02:40:59] [V] [TRT] Fastest Tactic: 1 Time: 0.955776 [04/18/2022-02:40:59] [V] [TRT] --------------- Timing Runner: PWN(PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_activation/Sigmoid), StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_activation/mul) (PointWise) [04/18/2022-02:40:59] [V] [TRT] Tactic: 128 Time: 1.04077 [04/18/2022-02:40:59] [V] [TRT] Tactic: 256 Time: 1.1072 [04/18/2022-02:40:59] [V] [TRT] Tactic: 512 Time: 1.26746 [04/18/2022-02:40:59] [V] [TRT] Tactic: -32 Time: 2.04646 [04/18/2022-02:40:59] [V] [TRT] Tactic: -64 Time: 1.2128 [04/18/2022-02:40:59] [V] [TRT] Tactic: -128 Time: 1.16787 [04/18/2022-02:40:59] [V] [TRT] Fastest Tactic: 128 Time: 1.04077 [04/18/2022-02:40:59] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: PointWiseV2 Tactic: 1 [04/18/2022-02:40:59] [V] [TRT] *************** Autotuning format combination: Float(1572864,1:4,6144,24) -> Float(1572864,1:4,6144,24) *************** [04/18/2022-02:40:59] [V] [TRT] --------------- Timing Runner: PWN(PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_activation/Sigmoid), StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_activation/mul) (PointWiseV2) [04/18/2022-02:40:59] [V] [TRT] Tactic: 0 Time: 1.16301 [04/18/2022-02:40:59] [V] [TRT] Tactic: 1 Time: 1.1575 [04/18/2022-02:40:59] [V] [TRT] Tactic: 2 Time: 1.46701 [04/18/2022-02:40:59] [V] [TRT] Tactic: 3 Time: 1.1433 [04/18/2022-02:40:59] [V] [TRT] Tactic: 4 Time: 1.35974 [04/18/2022-02:40:59] [V] [TRT] Tactic: 5 Time: 1.5657 [04/18/2022-02:40:59] [V] [TRT] Tactic: 6 Time: 1.14406 [04/18/2022-02:40:59] [V] [TRT] Tactic: 7 Time: 1.34259 [04/18/2022-02:40:59] [V] [TRT] Tactic: 8 Time: 1.46406 [04/18/2022-02:40:59] [V] [TRT] Tactic: 9 Time: 1.53651 [04/18/2022-02:40:59] [V] [TRT] Tactic: 10 Time: 1.01466 [04/18/2022-02:40:59] [V] [TRT] Tactic: 11 Time: 1.03347 [04/18/2022-02:40:59] [V] [TRT] 
Tactic: 12 Time: 1.15776 [04/18/2022-02:40:59] [V] [TRT] Tactic: 13 Time: 1.03322 [04/18/2022-02:40:59] [V] [TRT] Tactic: 14 Time: 1.15789 [04/18/2022-02:40:59] [V] [TRT] Tactic: 15 Time: 1.43424 [04/18/2022-02:41:00] [V] [TRT] Tactic: 16 Time: 1.04154 [04/18/2022-02:41:00] [V] [TRT] Tactic: 17 Time: 1.14547 [04/18/2022-02:41:00] [V] [TRT] Tactic: 18 Time: 1.35117 [04/18/2022-02:41:00] [V] [TRT] Tactic: 19 Time: 1.47277 [04/18/2022-02:41:00] [V] [TRT] Tactic: 20 Time: 0.963584 [04/18/2022-02:41:00] [V] [TRT] Tactic: 21 Time: 0.9536 [04/18/2022-02:41:00] [V] [TRT] Tactic: 22 Time: 0.967168 [04/18/2022-02:41:00] [V] [TRT] Tactic: 23 Time: 0.962816 [04/18/2022-02:41:00] [V] [TRT] Tactic: 28 Time: 1.16275 [04/18/2022-02:41:00] [V] [TRT] Tactic: 29 Time: 1.01594 [04/18/2022-02:41:00] [V] [TRT] Tactic: 30 Time: 0.96512 [04/18/2022-02:41:00] [V] [TRT] Fastest Tactic: 21 Time: 0.9536 [04/18/2022-02:41:00] [V] [TRT] --------------- Timing Runner: PWN(PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_activation/Sigmoid), StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_activation/mul) (PointWise) [04/18/2022-02:41:00] [V] [TRT] Tactic: 128 Time: 1.05165 [04/18/2022-02:41:00] [V] [TRT] Tactic: 256 Time: 1.11757 [04/18/2022-02:41:00] [V] [TRT] Tactic: 512 Time: 1.25632 [04/18/2022-02:41:00] [V] [TRT] Tactic: -32 Time: 2.06246 [04/18/2022-02:41:00] [V] [TRT] Tactic: -64 Time: 1.21715 [04/18/2022-02:41:00] [V] [TRT] Tactic: -128 Time: 1.16058 [04/18/2022-02:41:00] [V] [TRT] Fastest Tactic: 128 Time: 1.05165 [04/18/2022-02:41:00] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: PointWiseV2 Tactic: 21 [04/18/2022-02:41:00] [V] [TRT] *************** Autotuning format combination: Float(196608,65536:32,256,1) -> Float(196608,65536:32,256,1) *************** [04/18/2022-02:41:00] [V] [TRT] --------------- Timing Runner: PWN(PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_activation/Sigmoid), 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_activation/mul) (PointWiseV2) [04/18/2022-02:41:00] [V] [TRT] Tactic: 24 Time: 1.16339 [04/18/2022-02:41:00] [V] [TRT] Tactic: 25 Time: 1.15597 [04/18/2022-02:41:00] [V] [TRT] Tactic: 26 Time: 1.15712 [04/18/2022-02:41:00] [V] [TRT] Tactic: 27 Time: 1.17875 [04/18/2022-02:41:00] [V] [TRT] Tactic: 31 Time: 1.16941 [04/18/2022-02:41:00] [V] [TRT] Fastest Tactic: 25 Time: 1.15597 [04/18/2022-02:41:00] [V] [TRT] --------------- Timing Runner: PWN(PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_activation/Sigmoid), StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_activation/mul) (PointWise) [04/18/2022-02:41:00] [V] [TRT] Tactic: 128 Time: 1.0409 [04/18/2022-02:41:00] [V] [TRT] Tactic: 256 Time: 1.11706 [04/18/2022-02:41:00] [V] [TRT] Tactic: 512 Time: 1.1959 [04/18/2022-02:41:00] [V] [TRT] Tactic: -32 Time: 2.03686 [04/18/2022-02:41:00] [V] [TRT] Tactic: -64 Time: 1.2128 [04/18/2022-02:41:00] [V] [TRT] Tactic: -128 Time: 1.16646 [04/18/2022-02:41:00] [V] [TRT] Fastest Tactic: 128 Time: 1.0409 [04/18/2022-02:41:00] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: PointWise Tactic: 128 [04/18/2022-02:41:00] [V] [TRT] *************** Autotuning format combination: Float(1:4,131072,512,2) -> Float(1:4,131072,512,2) *************** [04/18/2022-02:41:00] [V] [TRT] --------------- Timing Runner: PWN(PWN(StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_activation/Sigmoid), StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/expand_activation/mul) (PointWiseV2) [04/18/2022-02:41:00] [V] [TRT] Tactic: 0 Time: 1.15981 [04/18/2022-02:41:00] [V] [TRT] Tactic: 1 Time: 1.15482 [04/18/2022-02:41:00] [V] [TRT] Tactic: 2 Time: 1.45574 [04/18/2022-02:41:00] [V] [TRT] Tactic: 3 Time: 1.15418 [04/18/2022-02:41:00] [V] [TRT] Tactic: 4 Time: 1.40557 [04/18/2022-02:41:00] [V] [TRT] Tactic: 5 Time: 1.4985 
[04/18/2022-02:41:00] [V] [TRT] Tactic: 6 Time: 1.1488 [04/18/2022-02:41:00] [V] [TRT] Tactic: 7 Time: 1.35232 [04/18/2022-02:41:00] [V] [TRT] Tactic: 8 Time: 1.45459 [04/18/2022-02:41:00] [V] [TRT] Tactic: 9 Time: 1.53843 [04/18/2022-02:41:01] [V] [TRT] Tactic: 10 Time: 1.01632 [04/18/2022-02:41:01] [V] [TRT] Tactic: 11 Time: 1.04051 [04/18/2022-02:41:01] [V] [TRT] Tactic: 12 Time: 1.21677 [04/18/2022-02:41:01] [V] [TRT] Tactic: 13 Time: 1.06189 [04/18/2022-02:41:01] [V] [TRT] Tactic: 14 Time: 1.20294 [04/18/2022-02:41:01] [V] [TRT] Tactic: 15 Time: 1.4263 [04/18/2022-02:41:01] [V] [TRT] Tactic: 16 Time: 1.07469 [04/18/2022-02:41:01] [V] [TRT] Tactic: 17 Time: 1.14688 [04/18/2022-02:41:01] [V] [TRT] Tactic: 18 Time: 1.38112 [04/18/2022-02:41:01] [V] [TRT] Tactic: 19 Time: 1.47955 [04/18/2022-02:41:01] [V] [TRT] Tactic: 20 Time: 0.963584 [04/18/2022-02:41:01] [V] [TRT] Tactic: 21 Time: 0.955136 [04/18/2022-02:41:01] [V] [TRT] Tactic: 22 Time: 0.967168 [04/18/2022-02:41:01] [V] [TRT] Tactic: 23 Time: 0.976128 [04/18/2022-02:41:01] [V] [TRT] Tactic: 28 Time: 1.20269 [04/18/2022-02:41:01] [V] [TRT] Tactic: 29 Time: 1.0281 [04/18/2022-02:41:01] [V] [TRT] Tactic: 30 Time: 0.972928 [04/18/2022-02:41:01] [V] [TRT] Fastest Tactic: 21 Time: 0.955136 [04/18/2022-02:41:01] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: PointWiseV2 Tactic: 21 [04/18/2022-02:41:01] [V] [TRT] =============== Computing costs for [04/18/2022-02:41:01] [V] [TRT] *************** Autotuning format combination: Float(6291456,65536,256,1) -> Float(1572864,16384,128,1) *************** [04/18/2022-02:41:01] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_conv2d/depthwise (CudaDepthwiseConvolution) [04/18/2022-02:41:01] [V] [TRT] Tactic: -1 Time: 0.650368 [04/18/2022-02:41:01] [V] [TRT] Fastest Tactic: -1 Time: 0.650368 [04/18/2022-02:41:01] [V] [TRT] --------------- Timing Runner: 
StatefulPartitionedCall/EfficientDet-D0/functional_1/stack_1/block_0/depthwise_conv2d/depthwise (CudnnConvolution) [04/18/2022-02:41:01] [V] [TRT] Tactic: 0 Time: 1.9008 [04/18/2022-02:41:01] [V] [TRT] Tactic: 1 Time: 1.84243 [04/18/2022-02:41:01] [V] [TRT] Tactic: 2 Time: 1.83859
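The `Tactic: <id> Time: <t>` lines above follow a regular format, so the per-tactic timings for a layer can be pulled out of a verbose log with a short script. The sketch below is not part of trtexec; `fastest_tactic` is a hypothetical helper, shown only as one way to cross-check the "Fastest Tactic" summaries against the raw timings.

```python
import re

# Matches per-tactic timing entries such as:
#   [04/18/2022-02:40:58] [V] [TRT] Tactic: 1995961315573863697 Time: 0.573952
# "[TRT]\s+Tactic:" deliberately skips "Fastest Tactic: ..." summary lines and
# "Set Tactic Name: ..." lines, which have other text between [TRT] and "Tactic:".
TACTIC_RE = re.compile(r"\[TRT\]\s+Tactic:\s+(-?\d+)\s+Time:\s+([\d.]+)")

def fastest_tactic(log_text):
    """Return (tactic_id, time) of the lowest-latency timed tactic, or None."""
    timings = [(int(m.group(1)), float(m.group(2)))
               for m in TACTIC_RE.finditer(log_text)]
    return min(timings, key=lambda t: t[1]) if timings else None

sample = """\
[04/18/2022-02:40:58] [V] [TRT] Tactic: 676988335020687107 Time: 0.651904
[04/18/2022-02:40:58] [V] [TRT] Tactic: 1995961315573863697 Time: 0.573952
[04/18/2022-02:40:58] [V] [TRT] Tactic: -504296718212024303 Time: 0.600448
[04/18/2022-02:40:58] [V] [TRT] Fastest Tactic: 1995961315573863697 Time: 0.573952
"""
print(fastest_tactic(sample))  # (1995961315573863697, 0.573952)
```

On the three sample entries this reproduces the log's own summary: tactic 1995961315573863697 at 0.573952. To inspect a single layer, slice the log between its "Timing Runner:" header and the following "Fastest Tactic:" line before applying the regex.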