&&&& RUNNING TensorRT.trtexec # ./trtexec --onnx=/home/adascoe/Sainath/Model/DMS.model/DMS.onnx --explicitBatch --minShapes=conv2d_input:1x145x145x3 --optShapes=conv2d_input:16x145x145x3 --maxShapes=conv2d_input:145x145x145x3 --shapes=conv2d_input:5x145x145x3 --verbose
[03/20/2023-12:48:14] [I] === Model Options ===
[03/20/2023-12:48:14] [I] Format: ONNX
[03/20/2023-12:48:14] [I] Model: /home/adascoe/Sainath/Model/DMS.model/DMS.onnx
[03/20/2023-12:48:14] [I] Output:
[03/20/2023-12:48:14] [I] === Build Options ===
[03/20/2023-12:48:14] [I] Max batch: explicit
[03/20/2023-12:48:14] [I] Workspace: 16 MB
[03/20/2023-12:48:14] [I] minTiming: 1
[03/20/2023-12:48:14] [I] avgTiming: 8
[03/20/2023-12:48:14] [I] Precision: FP32
[03/20/2023-12:48:14] [I] Calibration:
[03/20/2023-12:48:14] [I] Safe mode: Disabled
[03/20/2023-12:48:14] [I] Save engine:
[03/20/2023-12:48:14] [I] Load engine:
[03/20/2023-12:48:14] [I] Inputs format: fp32:CHW
[03/20/2023-12:48:14] [I] Outputs format: fp32:CHW
[03/20/2023-12:48:14] [I] Input build shape: conv2d_input=1x145x145x3+16x145x145x3+145x145x145x3
[03/20/2023-12:48:14] [I] === System Options ===
[03/20/2023-12:48:14] [I] Device: 0
[03/20/2023-12:48:14] [I] DLACore:
[03/20/2023-12:48:14] [I] Plugins:
[03/20/2023-12:48:14] [I] === Inference Options ===
[03/20/2023-12:48:14] [I] Batch: Explicit
[03/20/2023-12:48:14] [I] Input inference shape: conv2d_input=5x145x145x3
[03/20/2023-12:48:14] [I] Iterations: 10 (200 ms warm up)
[03/20/2023-12:48:14] [I] Duration: 3s
[03/20/2023-12:48:14] [I] Sleep time: 0ms
[03/20/2023-12:48:14] [I] Streams: 1
[03/20/2023-12:48:14] [I] Spin-wait: Disabled
[03/20/2023-12:48:14] [I] Multithreading: Enabled
[03/20/2023-12:48:14] [I] CUDA Graph: Disabled
[03/20/2023-12:48:14] [I] Skip inference: Disabled
[03/20/2023-12:48:14] [I] Separate profiling: Disabled
[03/20/2023-12:48:14] [I] Consistency: Disabled
[03/20/2023-12:48:14] [I] === Reporting Options ===
[03/20/2023-12:48:14] [I] Verbose: Enabled
[03/20/2023-12:48:14] [I] Averages: 10 inferences
[03/20/2023-12:48:14] [I] Percentile: 99
[03/20/2023-12:48:14] [I] Dump output: Disabled
[03/20/2023-12:48:14] [I] Profile: Disabled
[03/20/2023-12:48:14] [I] Export timing to JSON file:
[03/20/2023-12:48:14] [I] Export profile to JSON file:
[03/20/2023-12:48:14] [I]
[03/20/2023-12:48:14] [I]
[03/20/2023-12:48:14] [V] [TRT] Plugin Creator registration succeeded - GridAnchor_TRT
[03/20/2023-12:48:14] [V] [TRT] Plugin Creator registration succeeded - NMS_TRT
[03/20/2023-12:48:14] [V] [TRT] Plugin Creator registration succeeded - Reorg_TRT
[03/20/2023-12:48:14] [V] [TRT] Plugin Creator registration succeeded - Region_TRT
[03/20/2023-12:48:14] [V] [TRT] Plugin Creator registration succeeded - Clip_TRT
[03/20/2023-12:48:14] [V] [TRT] Plugin Creator registration succeeded - LReLU_TRT
[03/20/2023-12:48:14] [V] [TRT] Plugin Creator registration succeeded - PriorBox_TRT
[03/20/2023-12:48:14] [V] [TRT] Plugin Creator registration succeeded - Normalize_TRT
[03/20/2023-12:48:14] [V] [TRT] Plugin Creator registration succeeded - RPROI_TRT
[03/20/2023-12:48:14] [V] [TRT] Plugin Creator registration succeeded - BatchedNMS_TRT
[03/20/2023-12:48:14] [V] [TRT] Plugin Creator registration succeeded - FlattenConcat_TRT
[03/20/2023-12:48:14] [V] [TRT] Plugin Creator registration succeeded - Split
[03/20/2023-12:48:14] [V] [TRT] Plugin Creator registration succeeded - InstanceNormalization_TRT
----------------------------------------------------------------
Input filename: /home/adascoe/Sainath/Model/DMS.model/DMS.onnx
ONNX IR version: 0.0.4
Opset version: 7
Producer name: tf2onnx
Producer version: 1.13.0 2c1db5
Domain:
Model version: 0
Doc string:
----------------------------------------------------------------
[03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:209: Adding network input: conv2d_input with dtype: float32, dimensions: (-1, 145, 145, 3)
[03/20/2023-12:48:14] [V] [TRT] ImporterContext.hpp:97: Registering tensor: conv2d_input for ONNX tensor: conv2d_input
[03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:91: Importing initializer: const_fold_opt__39
[03/20/2023-12:48:14]
[03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:91: Importing initializer: StatefulPartitionedCall/sequential/dense_1/MatMul/ReadVariableOp:0
[03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:91: Importing initializer: StatefulPartitionedCall/sequential/dense_1/BiasAdd/ReadVariableOp:0
[03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:91: Importing initializer: StatefulPartitionedCall/sequential/dense/MatMul/ReadVariableOp:0
[03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:91: Importing initializer: StatefulPartitionedCall/sequential/dense/BiasAdd/ReadVariableOp:0
[03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:91: Importing initializer: StatefulPartitionedCall/sequential/conv2d_3/Conv2D/ReadVariableOp:0
[03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:91: Importing initializer: StatefulPartitionedCall/sequential/conv2d_3/BiasAdd/ReadVariableOp:0
[03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:91: Importing initializer: StatefulPartitionedCall/sequential/conv2d_2/Conv2D/ReadVariableOp:0
[03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:91: Importing initializer: StatefulPartitionedCall/sequential/conv2d_2/BiasAdd/ReadVariableOp:0
[03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:91: Importing initializer: StatefulPartitionedCall/sequential/conv2d_1/Conv2D/ReadVariableOp:0
[03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:91: Importing initializer: StatefulPartitionedCall/sequential/conv2d_1/BiasAdd/ReadVariableOp:0
[03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:91: Importing initializer: StatefulPartitionedCall/sequential/conv2d/Conv2D/ReadVariableOp:0
[03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:91: Importing initializer: StatefulPartitionedCall/sequential/conv2d/BiasAdd/ReadVariableOp:0
[03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:108: Parsing node: StatefulPartitionedCall/sequential/conv2d/BiasAdd__6 [Transpose]
[03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:124: Searching for input: conv2d_input
[03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:130: StatefulPartitionedCall/sequential/conv2d/BiasAdd__6 [Transpose] inputs: [conv2d_input -> (-1, 145, 145, 3)],
[03/20/2023-12:48:14] [V] [TRT] ImporterContext.hpp:122: Registering layer: StatefulPartitionedCall/sequential/conv2d/BiasAdd__6 for ONNX node: StatefulPartitionedCall/sequential/conv2d/BiasAdd__6
[03/20/2023-12:48:14] [V] [TRT] ImporterContext.hpp:97: Registering tensor: StatefulPartitionedCall/sequential/conv2d/BiasAdd__6:0 for ONNX tensor: StatefulPartitionedCall/sequential/conv2d/BiasAdd__6:0
[03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:186: StatefulPartitionedCall/sequential/conv2d/BiasAdd__6 [Transpose] outputs: [StatefulPartitionedCall/sequential/conv2d/BiasAdd__6:0 -> (-1, 3, 145, 145)],
[03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:108: Parsing node: StatefulPartitionedCall/sequential/conv2d/BiasAdd [Conv]
[03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:124: Searching for input:
StatefulPartitionedCall/sequential/conv2d/BiasAdd__6:0 [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:124: Searching for input: StatefulPartitionedCall/sequential/conv2d/Conv2D/ReadVariableOp:0 [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:124: Searching for input: StatefulPartitionedCall/sequential/conv2d/BiasAdd/ReadVariableOp:0 [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:130: StatefulPartitionedCall/sequential/conv2d/BiasAdd [Conv] inputs: [StatefulPartitionedCall/sequential/conv2d/BiasAdd__6:0 -> (-1, 3, 145, 145)], [StatefulPartitionedCall/sequential/conv2d/Conv2D/ReadVariableOp:0 -> (256, 3, 3, 3)], [StatefulPartitionedCall/sequential/conv2d/BiasAdd/ReadVariableOp:0 -> (256)], [03/20/2023-12:48:14] [V] [TRT] builtin_op_importers.cpp:441: Convolution input dimensions: (-1, 3, 145, 145) [03/20/2023-12:48:14] [V] [TRT] builtin_op_importers.cpp:523: Using kernel: (3, 3), strides: (1, 1), padding: (0, 0), dilations: (1, 1), numOutputs: 256 [03/20/2023-12:48:14] [V] [TRT] builtin_op_importers.cpp:524: Convolution output dimensions: (-1, 256, 143, 143) [03/20/2023-12:48:14] [V] [TRT] ImporterContext.hpp:122: Registering layer: StatefulPartitionedCall/sequential/conv2d/BiasAdd for ONNX node: StatefulPartitionedCall/sequential/conv2d/BiasAdd [03/20/2023-12:48:14] [V] [TRT] ImporterContext.hpp:97: Registering tensor: StatefulPartitionedCall/sequential/conv2d/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/sequential/conv2d/BiasAdd:0 [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:186: StatefulPartitionedCall/sequential/conv2d/BiasAdd [Conv] outputs: [StatefulPartitionedCall/sequential/conv2d/BiasAdd:0 -> (-1, 256, 143, 143)], [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:108: Parsing node: StatefulPartitionedCall/sequential/conv2d/Relu [Relu] [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:124: Searching for input: StatefulPartitionedCall/sequential/conv2d/BiasAdd:0 [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:130: StatefulPartitionedCall/sequential/conv2d/Relu [Relu] inputs: [StatefulPartitionedCall/sequential/conv2d/BiasAdd:0 -> (-1, 256, 143, 143)], [03/20/2023-12:48:14] [V] [TRT] ImporterContext.hpp:122: Registering layer: StatefulPartitionedCall/sequential/conv2d/Relu for ONNX node: StatefulPartitionedCall/sequential/conv2d/Relu [03/20/2023-12:48:14] [V] [TRT] ImporterContext.hpp:97: Registering tensor: StatefulPartitionedCall/sequential/conv2d/Relu:0 for ONNX tensor: StatefulPartitionedCall/sequential/conv2d/Relu:0 [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:186: StatefulPartitionedCall/sequential/conv2d/Relu [Relu] outputs: [StatefulPartitionedCall/sequential/conv2d/Relu:0 -> (-1, 256, 143, 143)], [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:108: Parsing node: StatefulPartitionedCall/sequential/max_pooling2d/MaxPool [MaxPool] [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:124: Searching for input: StatefulPartitionedCall/sequential/conv2d/Relu:0 [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:130: StatefulPartitionedCall/sequential/max_pooling2d/MaxPool [MaxPool] inputs: [StatefulPartitionedCall/sequential/conv2d/Relu:0 -> (-1, 256, 143, 143)], [03/20/2023-12:48:14] [V] [TRT] ImporterContext.hpp:122: Registering layer: StatefulPartitionedCall/sequential/max_pooling2d/MaxPool for ONNX node: StatefulPartitionedCall/sequential/max_pooling2d/MaxPool [03/20/2023-12:48:14] [V] [TRT] ImporterContext.hpp:97: Registering tensor: StatefulPartitionedCall/sequential/max_pooling2d/MaxPool:0 for ONNX tensor: 
StatefulPartitionedCall/sequential/max_pooling2d/MaxPool:0 [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:186: StatefulPartitionedCall/sequential/max_pooling2d/MaxPool [MaxPool] outputs: [StatefulPartitionedCall/sequential/max_pooling2d/MaxPool:0 -> (-1, 256, 71, 71)], [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:108: Parsing node: StatefulPartitionedCall/sequential/conv2d_1/BiasAdd [Conv] [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:124: Searching for input: StatefulPartitionedCall/sequential/max_pooling2d/MaxPool:0 [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:124: Searching for input: StatefulPartitionedCall/sequential/conv2d_1/Conv2D/ReadVariableOp:0 [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:124: Searching for input: StatefulPartitionedCall/sequential/conv2d_1/BiasAdd/ReadVariableOp:0 [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:130: StatefulPartitionedCall/sequential/conv2d_1/BiasAdd [Conv] inputs: [StatefulPartitionedCall/sequential/max_pooling2d/MaxPool:0 -> (-1, 256, 71, 71)], [StatefulPartitionedCall/sequential/conv2d_1/Conv2D/ReadVariableOp:0 -> (128, 256, 3, 3)], [StatefulPartitionedCall/sequential/conv2d_1/BiasAdd/ReadVariableOp:0 -> (128)], [03/20/2023-12:48:14] [V] [TRT] builtin_op_importers.cpp:441: Convolution input dimensions: (-1, 256, 71, 71) [03/20/2023-12:48:14] [V] [TRT] builtin_op_importers.cpp:523: Using kernel: (3, 3), strides: (1, 1), padding: (0, 0), dilations: (1, 1), numOutputs: 128 [03/20/2023-12:48:14] [V] [TRT] builtin_op_importers.cpp:524: Convolution output dimensions: (-1, 128, 69, 69) [03/20/2023-12:48:14] [V] [TRT] ImporterContext.hpp:122: Registering layer: StatefulPartitionedCall/sequential/conv2d_1/BiasAdd for ONNX node: StatefulPartitionedCall/sequential/conv2d_1/BiasAdd [03/20/2023-12:48:14] [V] [TRT] ImporterContext.hpp:97: Registering tensor: StatefulPartitionedCall/sequential/conv2d_1/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/sequential/conv2d_1/BiasAdd:0 [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:186: StatefulPartitionedCall/sequential/conv2d_1/BiasAdd [Conv] outputs: [StatefulPartitionedCall/sequential/conv2d_1/BiasAdd:0 -> (-1, 128, 69, 69)], [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:108: Parsing node: StatefulPartitionedCall/sequential/conv2d_1/Relu [Relu] [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:124: Searching for input: StatefulPartitionedCall/sequential/conv2d_1/BiasAdd:0 [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:130: StatefulPartitionedCall/sequential/conv2d_1/Relu [Relu] inputs: [StatefulPartitionedCall/sequential/conv2d_1/BiasAdd:0 -> (-1, 128, 69, 69)], [03/20/2023-12:48:14] [V] [TRT] ImporterContext.hpp:122: Registering layer: StatefulPartitionedCall/sequential/conv2d_1/Relu for ONNX node: StatefulPartitionedCall/sequential/conv2d_1/Relu [03/20/2023-12:48:14] [V] [TRT] ImporterContext.hpp:97: Registering tensor: StatefulPartitionedCall/sequential/conv2d_1/Relu:0 for ONNX tensor: StatefulPartitionedCall/sequential/conv2d_1/Relu:0 [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:186: StatefulPartitionedCall/sequential/conv2d_1/Relu [Relu] outputs: [StatefulPartitionedCall/sequential/conv2d_1/Relu:0 -> (-1, 128, 69, 69)], [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:108: Parsing node: StatefulPartitionedCall/sequential/max_pooling2d_1/MaxPool [MaxPool] [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:124: Searching for input: StatefulPartitionedCall/sequential/conv2d_1/Relu:0 [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:130: 
StatefulPartitionedCall/sequential/max_pooling2d_1/MaxPool [MaxPool] inputs: [StatefulPartitionedCall/sequential/conv2d_1/Relu:0 -> (-1, 128, 69, 69)], [03/20/2023-12:48:14] [V] [TRT] ImporterContext.hpp:122: Registering layer: StatefulPartitionedCall/sequential/max_pooling2d_1/MaxPool for ONNX node: StatefulPartitionedCall/sequential/max_pooling2d_1/MaxPool [03/20/2023-12:48:14] [V] [TRT] ImporterContext.hpp:97: Registering tensor: StatefulPartitionedCall/sequential/max_pooling2d_1/MaxPool:0 for ONNX tensor: StatefulPartitionedCall/sequential/max_pooling2d_1/MaxPool:0 [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:186: StatefulPartitionedCall/sequential/max_pooling2d_1/MaxPool [MaxPool] outputs: [StatefulPartitionedCall/sequential/max_pooling2d_1/MaxPool:0 -> (-1, 128, 34, 34)], [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:108: Parsing node: StatefulPartitionedCall/sequential/conv2d_2/BiasAdd [Conv] [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:124: Searching for input: StatefulPartitionedCall/sequential/max_pooling2d_1/MaxPool:0 [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:124: Searching for input: StatefulPartitionedCall/sequential/conv2d_2/Conv2D/ReadVariableOp:0 [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:124: Searching for input: StatefulPartitionedCall/sequential/conv2d_2/BiasAdd/ReadVariableOp:0 [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:130: StatefulPartitionedCall/sequential/conv2d_2/BiasAdd [Conv] inputs: [StatefulPartitionedCall/sequential/max_pooling2d_1/MaxPool:0 -> (-1, 128, 34, 34)], [StatefulPartitionedCall/sequential/conv2d_2/Conv2D/ReadVariableOp:0 -> (64, 128, 3, 3)], [StatefulPartitionedCall/sequential/conv2d_2/BiasAdd/ReadVariableOp:0 -> (64)], [03/20/2023-12:48:14] [V] [TRT] builtin_op_importers.cpp:441: Convolution input dimensions: (-1, 128, 34, 34) [03/20/2023-12:48:14] [V] [TRT] builtin_op_importers.cpp:523: Using kernel: (3, 3), strides: (1, 1), padding: (0, 0), dilations: (1, 1), numOutputs: 64 [03/20/2023-12:48:14] [V] [TRT] builtin_op_importers.cpp:524: Convolution output dimensions: (-1, 64, 32, 32) [03/20/2023-12:48:14] [V] [TRT] ImporterContext.hpp:122: Registering layer: StatefulPartitionedCall/sequential/conv2d_2/BiasAdd for ONNX node: StatefulPartitionedCall/sequential/conv2d_2/BiasAdd [03/20/2023-12:48:14] [V] [TRT] ImporterContext.hpp:97: Registering tensor: StatefulPartitionedCall/sequential/conv2d_2/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/sequential/conv2d_2/BiasAdd:0 [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:186: StatefulPartitionedCall/sequential/conv2d_2/BiasAdd [Conv] outputs: [StatefulPartitionedCall/sequential/conv2d_2/BiasAdd:0 -> (-1, 64, 32, 32)], [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:108: Parsing node: StatefulPartitionedCall/sequential/conv2d_2/Relu [Relu] [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:124: Searching for input: StatefulPartitionedCall/sequential/conv2d_2/BiasAdd:0 [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:130: StatefulPartitionedCall/sequential/conv2d_2/Relu [Relu] inputs: [StatefulPartitionedCall/sequential/conv2d_2/BiasAdd:0 -> (-1, 64, 32, 32)], [03/20/2023-12:48:14] [V] [TRT] ImporterContext.hpp:122: Registering layer: StatefulPartitionedCall/sequential/conv2d_2/Relu for ONNX node: StatefulPartitionedCall/sequential/conv2d_2/Relu [03/20/2023-12:48:14] [V] [TRT] ImporterContext.hpp:97: Registering tensor: StatefulPartitionedCall/sequential/conv2d_2/Relu:0 for ONNX tensor: StatefulPartitionedCall/sequential/conv2d_2/Relu:0 [03/20/2023-12:48:14] [V] 
[TRT] ModelImporter.cpp:186: StatefulPartitionedCall/sequential/conv2d_2/Relu [Relu] outputs: [StatefulPartitionedCall/sequential/conv2d_2/Relu:0 -> (-1, 64, 32, 32)], [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:108: Parsing node: StatefulPartitionedCall/sequential/max_pooling2d_2/MaxPool [MaxPool] [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:124: Searching for input: StatefulPartitionedCall/sequential/conv2d_2/Relu:0 [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:130: StatefulPartitionedCall/sequential/max_pooling2d_2/MaxPool [MaxPool] inputs: [StatefulPartitionedCall/sequential/conv2d_2/Relu:0 -> (-1, 64, 32, 32)], [03/20/2023-12:48:14] [V] [TRT] ImporterContext.hpp:122: Registering layer: StatefulPartitionedCall/sequential/max_pooling2d_2/MaxPool for ONNX node: StatefulPartitionedCall/sequential/max_pooling2d_2/MaxPool [03/20/2023-12:48:14] [V] [TRT] ImporterContext.hpp:97: Registering tensor: StatefulPartitionedCall/sequential/max_pooling2d_2/MaxPool:0 for ONNX tensor: StatefulPartitionedCall/sequential/max_pooling2d_2/MaxPool:0 [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:186: StatefulPartitionedCall/sequential/max_pooling2d_2/MaxPool [MaxPool] outputs: [StatefulPartitionedCall/sequential/max_pooling2d_2/MaxPool:0 -> (-1, 64, 16, 16)], [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:108: Parsing node: StatefulPartitionedCall/sequential/conv2d_3/BiasAdd [Conv] [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:124: Searching for input: StatefulPartitionedCall/sequential/max_pooling2d_2/MaxPool:0 [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:124: Searching for input: StatefulPartitionedCall/sequential/conv2d_3/Conv2D/ReadVariableOp:0 [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:124: Searching for input: StatefulPartitionedCall/sequential/conv2d_3/BiasAdd/ReadVariableOp:0 [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:130: StatefulPartitionedCall/sequential/conv2d_3/BiasAdd [Conv] inputs: [StatefulPartitionedCall/sequential/max_pooling2d_2/MaxPool:0 -> (-1, 64, 16, 16)], [StatefulPartitionedCall/sequential/conv2d_3/Conv2D/ReadVariableOp:0 -> (32, 64, 3, 3)], [StatefulPartitionedCall/sequential/conv2d_3/BiasAdd/ReadVariableOp:0 -> (32)], [03/20/2023-12:48:14] [V] [TRT] builtin_op_importers.cpp:441: Convolution input dimensions: (-1, 64, 16, 16) [03/20/2023-12:48:14] [V] [TRT] builtin_op_importers.cpp:523: Using kernel: (3, 3), strides: (1, 1), padding: (0, 0), dilations: (1, 1), numOutputs: 32 [03/20/2023-12:48:14] [V] [TRT] builtin_op_importers.cpp:524: Convolution output dimensions: (-1, 32, 14, 14) [03/20/2023-12:48:14] [V] [TRT] ImporterContext.hpp:122: Registering layer: StatefulPartitionedCall/sequential/conv2d_3/BiasAdd for ONNX node: StatefulPartitionedCall/sequential/conv2d_3/BiasAdd [03/20/2023-12:48:14] [V] [TRT] ImporterContext.hpp:97: Registering tensor: StatefulPartitionedCall/sequential/conv2d_3/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/sequential/conv2d_3/BiasAdd:0 [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:186: StatefulPartitionedCall/sequential/conv2d_3/BiasAdd [Conv] outputs: [StatefulPartitionedCall/sequential/conv2d_3/BiasAdd:0 -> (-1, 32, 14, 14)], [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:108: Parsing node: StatefulPartitionedCall/sequential/conv2d_3/Relu [Relu] [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:124: Searching for input: StatefulPartitionedCall/sequential/conv2d_3/BiasAdd:0 [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:130: StatefulPartitionedCall/sequential/conv2d_3/Relu [Relu] inputs: 
[StatefulPartitionedCall/sequential/conv2d_3/BiasAdd:0 -> (-1, 32, 14, 14)], [03/20/2023-12:48:14] [V] [TRT] ImporterContext.hpp:122: Registering layer: StatefulPartitionedCall/sequential/conv2d_3/Relu for ONNX node: StatefulPartitionedCall/sequential/conv2d_3/Relu [03/20/2023-12:48:14] [V] [TRT] ImporterContext.hpp:97: Registering tensor: StatefulPartitionedCall/sequential/conv2d_3/Relu:0 for ONNX tensor: StatefulPartitionedCall/sequential/conv2d_3/Relu:0 [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:186: StatefulPartitionedCall/sequential/conv2d_3/Relu [Relu] outputs: [StatefulPartitionedCall/sequential/conv2d_3/Relu:0 -> (-1, 32, 14, 14)], [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:108: Parsing node: StatefulPartitionedCall/sequential/max_pooling2d_3/MaxPool [MaxPool] [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:124: Searching for input: StatefulPartitionedCall/sequential/conv2d_3/Relu:0 [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:130: StatefulPartitionedCall/sequential/max_pooling2d_3/MaxPool [MaxPool] inputs: [StatefulPartitionedCall/sequential/conv2d_3/Relu:0 -> (-1, 32, 14, 14)], [03/20/2023-12:48:14] [V] [TRT] ImporterContext.hpp:122: Registering layer: StatefulPartitionedCall/sequential/max_pooling2d_3/MaxPool for ONNX node: StatefulPartitionedCall/sequential/max_pooling2d_3/MaxPool [03/20/2023-12:48:14] [V] [TRT] ImporterContext.hpp:97: Registering tensor: StatefulPartitionedCall/sequential/max_pooling2d_3/MaxPool:0 for ONNX tensor: StatefulPartitionedCall/sequential/max_pooling2d_3/MaxPool:0 [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:186: StatefulPartitionedCall/sequential/max_pooling2d_3/MaxPool [MaxPool] outputs: [StatefulPartitionedCall/sequential/max_pooling2d_3/MaxPool:0 -> (-1, 32, 7, 7)], [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:108: Parsing node: StatefulPartitionedCall/sequential/max_pooling2d_3/MaxPool__36 [Transpose] [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:124: Searching for input: StatefulPartitionedCall/sequential/max_pooling2d_3/MaxPool:0 [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:130: StatefulPartitionedCall/sequential/max_pooling2d_3/MaxPool__36 [Transpose] inputs: [StatefulPartitionedCall/sequential/max_pooling2d_3/MaxPool:0 -> (-1, 32, 7, 7)], [03/20/2023-12:48:14] [V] [TRT] ImporterContext.hpp:122: Registering layer: StatefulPartitionedCall/sequential/max_pooling2d_3/MaxPool__36 for ONNX node: StatefulPartitionedCall/sequential/max_pooling2d_3/MaxPool__36 [03/20/2023-12:48:14] [V] [TRT] ImporterContext.hpp:97: Registering tensor: StatefulPartitionedCall/sequential/max_pooling2d_3/MaxPool__36:0 for ONNX tensor: StatefulPartitionedCall/sequential/max_pooling2d_3/MaxPool__36:0 [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:186: StatefulPartitionedCall/sequential/max_pooling2d_3/MaxPool__36 [Transpose] outputs: [StatefulPartitionedCall/sequential/max_pooling2d_3/MaxPool__36:0 -> (-1, 7, 7, 32)], [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:108: Parsing node: StatefulPartitionedCall/sequential/flatten/Reshape [Reshape] [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:124: Searching for input: StatefulPartitionedCall/sequential/max_pooling2d_3/MaxPool__36:0 [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:124: Searching for input: const_fold_opt__39 [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:130: StatefulPartitionedCall/sequential/flatten/Reshape [Reshape] inputs: [StatefulPartitionedCall/sequential/max_pooling2d_3/MaxPool__36:0 -> (-1, 7, 7, 32)], [const_fold_opt__39 -> (2)], 
[03/20/2023-12:48:14] [V] [TRT] ImporterContext.hpp:122: Registering layer: StatefulPartitionedCall/sequential/flatten/Reshape for ONNX node: StatefulPartitionedCall/sequential/flatten/Reshape [03/20/2023-12:48:14] [V] [TRT] ImporterContext.hpp:97: Registering tensor: StatefulPartitionedCall/sequential/flatten/Reshape:0 for ONNX tensor: StatefulPartitionedCall/sequential/flatten/Reshape:0 [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:186: StatefulPartitionedCall/sequential/flatten/Reshape [Reshape] outputs: [StatefulPartitionedCall/sequential/flatten/Reshape:0 -> (-1, 1568)], [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:108: Parsing node: StatefulPartitionedCall/sequential/dense/MatMul [MatMul] [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:124: Searching for input: StatefulPartitionedCall/sequential/flatten/Reshape:0 [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:124: Searching for input: StatefulPartitionedCall/sequential/dense/MatMul/ReadVariableOp:0 [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:130: StatefulPartitionedCall/sequential/dense/MatMul [MatMul] inputs: [StatefulPartitionedCall/sequential/flatten/Reshape:0 -> (-1, 1568)], [StatefulPartitionedCall/sequential/dense/MatMul/ReadVariableOp:0 -> (1568, 64)], [03/20/2023-12:48:14] [V] [TRT] ImporterContext.hpp:122: Registering layer: StatefulPartitionedCall/sequential/dense/MatMul for ONNX node: StatefulPartitionedCall/sequential/dense/MatMul [03/20/2023-12:48:14] [V] [TRT] ImporterContext.hpp:97: Registering tensor: StatefulPartitionedCall/sequential/dense/MatMul:0 for ONNX tensor: StatefulPartitionedCall/sequential/dense/MatMul:0 [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:186: StatefulPartitionedCall/sequential/dense/MatMul [MatMul] outputs: [StatefulPartitionedCall/sequential/dense/MatMul:0 -> (-1, 64)], [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:108: Parsing node: StatefulPartitionedCall/sequential/dense/BiasAdd [Add] [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:124: Searching for input: StatefulPartitionedCall/sequential/dense/MatMul:0 [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:124: Searching for input: StatefulPartitionedCall/sequential/dense/BiasAdd/ReadVariableOp:0 [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:130: StatefulPartitionedCall/sequential/dense/BiasAdd [Add] inputs: [StatefulPartitionedCall/sequential/dense/MatMul:0 -> (-1, 64)], [StatefulPartitionedCall/sequential/dense/BiasAdd/ReadVariableOp:0 -> (64)], [03/20/2023-12:48:14] [V] [TRT] ImporterContext.hpp:122: Registering layer: StatefulPartitionedCall/sequential/dense/BiasAdd for ONNX node: StatefulPartitionedCall/sequential/dense/BiasAdd [03/20/2023-12:48:14] [V] [TRT] ImporterContext.hpp:97: Registering tensor: StatefulPartitionedCall/sequential/dense/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/sequential/dense/BiasAdd:0 [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:186: StatefulPartitionedCall/sequential/dense/BiasAdd [Add] outputs: [StatefulPartitionedCall/sequential/dense/BiasAdd:0 -> (-1, 64)], [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:108: Parsing node: StatefulPartitionedCall/sequential/dense/Relu [Relu] [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:124: Searching for input: StatefulPartitionedCall/sequential/dense/BiasAdd:0 [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:130: StatefulPartitionedCall/sequential/dense/Relu [Relu] inputs: [StatefulPartitionedCall/sequential/dense/BiasAdd:0 -> (-1, 64)], [03/20/2023-12:48:14] [V] [TRT] ImporterContext.hpp:122: Registering layer: 
StatefulPartitionedCall/sequential/dense/Relu for ONNX node: StatefulPartitionedCall/sequential/dense/Relu [03/20/2023-12:48:14] [V] [TRT] ImporterContext.hpp:97: Registering tensor: StatefulPartitionedCall/sequential/dense/Relu:0 for ONNX tensor: StatefulPartitionedCall/sequential/dense/Relu:0 [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:186: StatefulPartitionedCall/sequential/dense/Relu [Relu] outputs: [StatefulPartitionedCall/sequential/dense/Relu:0 -> (-1, 64)], [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:108: Parsing node: StatefulPartitionedCall/sequential/dense_1/MatMul [MatMul] [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:124: Searching for input: StatefulPartitionedCall/sequential/dense/Relu:0 [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:124: Searching for input: StatefulPartitionedCall/sequential/dense_1/MatMul/ReadVariableOp:0 [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:130: StatefulPartitionedCall/sequential/dense_1/MatMul [MatMul] inputs: [StatefulPartitionedCall/sequential/dense/Relu:0 -> (-1, 64)], [StatefulPartitionedCall/sequential/dense_1/MatMul/ReadVariableOp:0 -> (64, 4)], [03/20/2023-12:48:14] [V] [TRT] ImporterContext.hpp:122: Registering layer: StatefulPartitionedCall/sequential/dense_1/MatMul for ONNX node: StatefulPartitionedCall/sequential/dense_1/MatMul [03/20/2023-12:48:14] [V] [TRT] ImporterContext.hpp:97: Registering tensor: StatefulPartitionedCall/sequential/dense_1/MatMul:0 for ONNX tensor: StatefulPartitionedCall/sequential/dense_1/MatMul:0 [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:186: StatefulPartitionedCall/sequential/dense_1/MatMul [MatMul] outputs: [StatefulPartitionedCall/sequential/dense_1/MatMul:0 -> (-1, 4)], [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:108: Parsing node: StatefulPartitionedCall/sequential/dense_1/BiasAdd [Add] [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:124: Searching for input: StatefulPartitionedCall/sequential/dense_1/MatMul:0 [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:124: Searching for input: StatefulPartitionedCall/sequential/dense_1/BiasAdd/ReadVariableOp:0 [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:130: StatefulPartitionedCall/sequential/dense_1/BiasAdd [Add] inputs: [StatefulPartitionedCall/sequential/dense_1/MatMul:0 -> (-1, 4)], [StatefulPartitionedCall/sequential/dense_1/BiasAdd/ReadVariableOp:0 -> (4)], [03/20/2023-12:48:14] [V] [TRT] ImporterContext.hpp:122: Registering layer: StatefulPartitionedCall/sequential/dense_1/BiasAdd for ONNX node: StatefulPartitionedCall/sequential/dense_1/BiasAdd [03/20/2023-12:48:14] [V] [TRT] ImporterContext.hpp:97: Registering tensor: StatefulPartitionedCall/sequential/dense_1/BiasAdd:0 for ONNX tensor: StatefulPartitionedCall/sequential/dense_1/BiasAdd:0 [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:186: StatefulPartitionedCall/sequential/dense_1/BiasAdd [Add] outputs: [StatefulPartitionedCall/sequential/dense_1/BiasAdd:0 -> (-1, 4)], [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:108: Parsing node: StatefulPartitionedCall/sequential/dense_1/Softmax [Softmax] [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:124: Searching for input: StatefulPartitionedCall/sequential/dense_1/BiasAdd:0 [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:130: StatefulPartitionedCall/sequential/dense_1/Softmax [Softmax] inputs: [StatefulPartitionedCall/sequential/dense_1/BiasAdd:0 -> (-1, 4)], [03/20/2023-12:48:14] [03/20/2023-12:48:14] [03/20/2023-12:48:14] [V] [TRT] ImporterContext.hpp:122: Registering layer: 
StatefulPartitionedCall/sequential/dense_1/Softmax for ONNX node: StatefulPartitionedCall/sequential/dense_1/Softmax [03/20/2023-12:48:14] [V] [TRT] ImporterContext.hpp:97: Registering tensor: dense_1_1 for ONNX tensor: dense_1 [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:186: StatefulPartitionedCall/sequential/dense_1/Softmax [Softmax] outputs: [dense_1 -> (-1, -1)], [03/20/2023-12:48:14] [V] [TRT] ModelImporter.cpp:500: Marking dense_1_1 as output: dense_1 ----- Parsing of ONNX model /home/adascoe/Sainath/Model/DMS.model/DMS.onnx is Done ---- [03/20/2023-12:48:14] [I] [TRT] [MemUsageSnapshot] Builder begin: CPU 212 MB, GPU 178 MB [03/20/2023-12:48:14] [V] [TRT] Applying generic optimizations to the graph for inference. [03/20/2023-12:48:14] [V] [TRT] Original: 29 layers [03/20/2023-12:48:14] [V] [TRT] After dead-layer removal: 29 layers [03/20/2023-12:48:14] [V] [TRT] After scale fusion: 29 layers [03/20/2023-12:48:14] [V] [TRT] Fusing StatefulPartitionedCall/sequential/conv2d/BiasAdd with StatefulPartitionedCall/sequential/conv2d/Relu [03/20/2023-12:48:14] [V] [TRT] Fusing StatefulPartitionedCall/sequential/conv2d_1/BiasAdd with StatefulPartitionedCall/sequential/conv2d_1/Relu [03/20/2023-12:48:14] [V] [TRT] Fusing StatefulPartitionedCall/sequential/conv2d_2/BiasAdd with StatefulPartitionedCall/sequential/conv2d_2/Relu [03/20/2023-12:48:14] [V] [TRT] Fusing StatefulPartitionedCall/sequential/conv2d_3/BiasAdd with StatefulPartitionedCall/sequential/conv2d_3/Relu [03/20/2023-12:48:14] [V] [TRT] Fusing StatefulPartitionedCall/sequential/max_pooling2d_3/MaxPool__36 with StatefulPartitionedCall/sequential/flatten/Reshape [03/20/2023-12:48:14] [V] [TRT] Fusing (Unnamed Layer* 19) [ElementWise] with StatefulPartitionedCall/sequential/dense/Relu [03/20/2023-12:48:14] [V] [TRT] Removing (Unnamed Layer* 33) [Shuffle] [03/20/2023-12:48:14] [V] [TRT] Removing (Unnamed Layer* 35) [Shuffle] [03/20/2023-12:48:14] [V] [TRT] After vertical fusions: 21 layers [03/20/2023-12:48:14] [V] [TRT] After final dead-layer removal: 21 layers [03/20/2023-12:48:14] [V] [TRT] After tensor merging: 21 layers [03/20/2023-12:48:14] [V] [TRT] After concat removal: 21 layers [03/20/2023-12:48:14] [V] [TRT] Graph construction and optimization completed in 0.000575748 seconds. 
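For reference, the parse and optimization log above shows the importer building the network with a dynamic batch dimension (conv2d_input = (-1, 145, 145, 3)). The same build that trtexec performs here can also be scripted with the TensorRT Python API. The following is a minimal sketch assuming TensorRT 7.x Python bindings (where builder.build_engine and config.max_workspace_size are still available); the input name and the min/opt/max shapes are taken from the command at the top, while the build_engine helper name and the 256 MB workspace value are illustrative choices, not something this log uses.

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.VERBOSE)

def build_engine(onnx_path="/home/adascoe/Sainath/Model/DMS.model/DMS.onnx"):
    builder = trt.Builder(TRT_LOGGER)
    # Explicit-batch network, matching the --explicitBatch flag used above.
    flags = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    network = builder.create_network(flags)
    parser = trt.OnnxParser(network, TRT_LOGGER)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("ONNX parse failed")

    config = builder.create_builder_config()
    # trtexec above runs with its 16 MB default workspace; 256 MB here is illustrative.
    config.max_workspace_size = 256 << 20

    # One optimization profile, mirroring --minShapes/--optShapes/--maxShapes.
    profile = builder.create_optimization_profile()
    profile.set_shape("conv2d_input",
                      (1, 145, 145, 3),    # min
                      (16, 145, 145, 3),   # opt
                      (145, 145, 145, 3))  # max
    config.add_optimization_profile(profile)

    return builder.build_engine(network, config)

At inference time the concrete input shape (for example the 5x145x145x3 passed via --shapes) would then be set on the execution context with set_binding_shape before execution.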
[03/20/2023-12:48:15] [I] [TRT] [MemUsageChange] Init cuDNN: CPU +763, GPU +336, now: CPU 976, GPU 514 (MB) [03/20/2023-12:48:15] [I] [TRT] [MemUsageChange] Init cuBlas: CPU +74, GPU +70, now: CPU 1050, GPU 584 (MB) [03/20/2023-12:48:15] [V] [TRT] Constructing optimization profile number 0 out of 1 [03/20/2023-12:48:15] [V] [TRT] *************** Autotuning format combination: -> Float(1,64) *************** [03/20/2023-12:48:15] [V] [TRT] *************** Autotuning format combination: -> Float(1) *************** [03/20/2023-12:48:15] [V] [TRT] *************** Autotuning format combination: -> Float(1,4) *************** [03/20/2023-12:48:15] [V] [TRT] *************** Autotuning format combination: -> Float(1) *************** [03/20/2023-12:48:15] [V] [TRT] *************** Autotuning format combination: Float(1,3,435,63075) -> Float(1,145,21025,63075) *************** [03/20/2023-12:48:15] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/sequential/conv2d/BiasAdd__6 (Shuffle) [03/20/2023-12:48:15] [V] [TRT] Tactic: 0 time 0.057312 [03/20/2023-12:48:15] [V] [TRT] Fastest Tactic: 0 Time: 0.057312 [03/20/2023-12:48:15] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/sequential/conv2d/BiasAdd__6 (CutensorShuffle) [03/20/2023-12:48:15] [V] [TRT] Tactic: 10003 time 1.07734 [03/20/2023-12:48:15] [V] [TRT] Fastest Tactic: 10003 Time: 1.07734 [03/20/2023-12:48:15] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Shuffle Tactic: 0 [03/20/2023-12:48:15] [V] [TRT] [03/20/2023-12:48:15] [V] [TRT] *************** Autotuning format combination: Float(1) -> Float(1,64) *************** [03/20/2023-12:48:15] [V] [TRT] --------------- Timing Runner: (Unnamed Layer* 18) [Shuffle] (Shuffle) [03/20/2023-12:48:15] [V] [TRT] Tactic: 0 time 0.006208 [03/20/2023-12:48:15] [V] [TRT] Fastest Tactic: 0 Time: 0.006208 [03/20/2023-12:48:15] [V] [TRT] --------------- Timing Runner: (Unnamed Layer* 18) [Shuffle] (CutensorShuffle) [03/20/2023-12:48:15] [V] [TRT] Tactic: 10002 time 0.013312 [03/20/2023-12:48:15] [V] [TRT] Fastest Tactic: 10002 Time: 0.013312 [03/20/2023-12:48:15] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Shuffle Tactic: 0 [03/20/2023-12:48:15] [V] [TRT] [03/20/2023-12:48:15] [V] [TRT] *************** Autotuning format combination: Float(1) -> Float(1,4) *************** [03/20/2023-12:48:15] [V] [TRT] --------------- Timing Runner: (Unnamed Layer* 24) [Shuffle] (Shuffle) [03/20/2023-12:48:15] [V] [TRT] Tactic: 0 time 0.006016 [03/20/2023-12:48:15] [V] [TRT] Fastest Tactic: 0 Time: 0.006016 [03/20/2023-12:48:15] [V] [TRT] --------------- Timing Runner: (Unnamed Layer* 24) [Shuffle] (CutensorShuffle) [03/20/2023-12:48:15] [V] [TRT] Tactic: 10002 time 0.008224 [03/20/2023-12:48:15] [V] [TRT] Fastest Tactic: 10002 Time: 0.008224 [03/20/2023-12:48:15] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Shuffle Tactic: 0 [03/20/2023-12:48:15] [V] [TRT] [03/20/2023-12:48:15] [V] [TRT] *************** Autotuning format combination: Float(1,145,21025,63075) -> Float(1,143,20449,5234944) *************** [03/20/2023-12:48:15] [V] [TRT] StatefulPartitionedCall/sequential/conv2d/BiasAdd + StatefulPartitionedCall/sequential/conv2d/Relu (scudnn) Set Tactic Name: volta_scudnn_128x128_relu_interior_nn_v1 [03/20/2023-12:48:15] [V] [TRT] StatefulPartitionedCall/sequential/conv2d/BiasAdd + StatefulPartitionedCall/sequential/conv2d/Relu (scudnn) Set Tactic Name: volta_scudnn_128x128_relu_medium_nn_v1 [03/20/2023-12:48:15] [V] [TRT] StatefulPartitionedCall/sequential/conv2d/BiasAdd + 
StatefulPartitionedCall/sequential/conv2d/Relu (scudnn) Set Tactic Name: volta_scudnn_128x32_relu_interior_nn_v1 [03/20/2023-12:48:15] [V] [TRT] StatefulPartitionedCall/sequential/conv2d/BiasAdd + StatefulPartitionedCall/sequential/conv2d/Relu (scudnn_winograd) Set Tactic Name: volta_scudnn_winograd_128x128_ldg1_ldg4_relu_tile148t_nt_v1 [03/20/2023-12:48:15] [V] [TRT] StatefulPartitionedCall/sequential/conv2d/BiasAdd + StatefulPartitionedCall/sequential/conv2d/Relu (scudnn) Set Tactic Name: volta_scudnn_128x64_relu_xregs_large_nn_v1 [03/20/2023-12:48:15] [V] [TRT] StatefulPartitionedCall/sequential/conv2d/BiasAdd + StatefulPartitionedCall/sequential/conv2d/Relu (scudnn) Set Tactic Name: volta_scudnn_128x128_relu_small_nn_v1 [03/20/2023-12:48:15] [V] [TRT] StatefulPartitionedCall/sequential/conv2d/BiasAdd + StatefulPartitionedCall/sequential/conv2d/Relu (scudnn) Set Tactic Name: volta_scudnn_128x128_relu_xregs_large_nn_v1 [03/20/2023-12:48:15] [V] [TRT] StatefulPartitionedCall/sequential/conv2d/BiasAdd + StatefulPartitionedCall/sequential/conv2d/Relu (scudnn) Set Tactic Name: volta_scudnn_128x64_relu_small_nn_v1 [03/20/2023-12:48:15] [V] [TRT] StatefulPartitionedCall/sequential/conv2d/BiasAdd + StatefulPartitionedCall/sequential/conv2d/Relu (scudnn) Set Tactic Name: volta_scudnn_128x64_relu_interior_nn_v1 [03/20/2023-12:48:15] [V] [TRT] StatefulPartitionedCall/sequential/conv2d/BiasAdd + StatefulPartitionedCall/sequential/conv2d/Relu (scudnn) Set Tactic Name: volta_scudnn_128x64_relu_medium_nn_v1 [03/20/2023-12:48:15] [V] [TRT] StatefulPartitionedCall/sequential/conv2d/BiasAdd + StatefulPartitionedCall/sequential/conv2d/Relu (scudnn) Set Tactic Name: volta_scudnn_128x32_relu_medium_nn_v1 [03/20/2023-12:48:15] [V] [TRT] StatefulPartitionedCall/sequential/conv2d/BiasAdd + StatefulPartitionedCall/sequential/conv2d/Relu (scudnn) Set Tactic Name: volta_scudnn_128x32_relu_small_nn_v1 [03/20/2023-12:48:15] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/sequential/conv2d/BiasAdd + StatefulPartitionedCall/sequential/conv2d/Relu (FusedConvActConvolution) [03/20/2023-12:48:15] [V] [TRT] FusedConvActConvolution has no valid tactics for this config, skipping [03/20/2023-12:48:15] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/sequential/conv2d/BiasAdd + StatefulPartitionedCall/sequential/conv2d/Relu (CaskConvolution) [03/20/2023-12:48:15] [V] [TRT] StatefulPartitionedCall/sequential/conv2d/BiasAdd + StatefulPartitionedCall/sequential/conv2d/Relu (scudnn) Set Tactic Name: volta_scudnn_128x128_relu_interior_nn_v1 [03/20/2023-12:48:15] [V] [TRT] Tactic: 1754569683116234317 time 2.83254 [03/20/2023-12:48:15] [V] [TRT] StatefulPartitionedCall/sequential/conv2d/BiasAdd + StatefulPartitionedCall/sequential/conv2d/Relu (scudnn) Set Tactic Name: volta_scudnn_128x128_relu_medium_nn_v1 [03/20/2023-12:48:15] [V] [TRT] Tactic: 1825138533642645384 time 2.86515 [03/20/2023-12:48:15] [V] [TRT] StatefulPartitionedCall/sequential/conv2d/BiasAdd + StatefulPartitionedCall/sequential/conv2d/Relu (scudnn) Set Tactic Name: volta_scudnn_128x32_relu_interior_nn_v1 [03/20/2023-12:48:15] [V] [TRT] Tactic: 2733356012094739613 time 2.86496 [03/20/2023-12:48:15] [V] [TRT] StatefulPartitionedCall/sequential/conv2d/BiasAdd + StatefulPartitionedCall/sequential/conv2d/Relu (scudnn_winograd) Set Tactic Name: volta_scudnn_winograd_128x128_ldg1_ldg4_relu_tile148t_nt_v1 [03/20/2023-12:48:15] [V] [TRT] Tactic: 2775507031594384867 time 8.44374 [03/20/2023-12:48:15] [V] [TRT] 
StatefulPartitionedCall/sequential/conv2d/BiasAdd + StatefulPartitionedCall/sequential/conv2d/Relu (scudnn) Set Tactic Name: volta_scudnn_128x64_relu_xregs_large_nn_v1 [03/20/2023-12:48:15] [V] [TRT] Tactic: 2842488832350522458 time 3.08534 [03/20/2023-12:48:15] [V] [TRT] StatefulPartitionedCall/sequential/conv2d/BiasAdd + StatefulPartitionedCall/sequential/conv2d/Relu (scudnn) Set Tactic Name: volta_scudnn_128x128_relu_small_nn_v1 [03/20/2023-12:48:15] [V] [TRT] Tactic: 3915320020053085238 time 2.86205 [03/20/2023-12:48:15] [V] [TRT] StatefulPartitionedCall/sequential/conv2d/BiasAdd + StatefulPartitionedCall/sequential/conv2d/Relu (scudnn) Set Tactic Name: volta_scudnn_128x128_relu_xregs_large_nn_v1 [03/20/2023-12:48:15] [V] [TRT] Tactic: 6448355332020552203 time 4.09395 [03/20/2023-12:48:15] [V] [TRT] StatefulPartitionedCall/sequential/conv2d/BiasAdd + StatefulPartitionedCall/sequential/conv2d/Relu (scudnn) Set Tactic Name: volta_scudnn_128x64_relu_small_nn_v1 [03/20/2023-12:48:15] [V] [TRT] Tactic: 6808617066150061604 time 2.8447 [03/20/2023-12:48:15] [V] [TRT] StatefulPartitionedCall/sequential/conv2d/BiasAdd + StatefulPartitionedCall/sequential/conv2d/Relu (scudnn) Set Tactic Name: volta_scudnn_128x64_relu_interior_nn_v1 [03/20/2023-12:48:15] [V] [TRT] Tactic: 9091006216302412844 time 2.80371 [03/20/2023-12:48:15] [V] [TRT] StatefulPartitionedCall/sequential/conv2d/BiasAdd + StatefulPartitionedCall/sequential/conv2d/Relu (scudnn) Set Tactic Name: volta_scudnn_128x64_relu_medium_nn_v1 [03/20/2023-12:48:15] [V] [TRT] Tactic: -8060443123034038864 time 2.88147 [03/20/2023-12:48:15] [V] [TRT] StatefulPartitionedCall/sequential/conv2d/BiasAdd + StatefulPartitionedCall/sequential/conv2d/Relu (scudnn) Set Tactic Name: volta_scudnn_128x32_relu_medium_nn_v1 [03/20/2023-12:48:15] [V] [TRT] Tactic: -4420849921117327522 time 3.39437 [03/20/2023-12:48:15] [V] [TRT] StatefulPartitionedCall/sequential/conv2d/BiasAdd + StatefulPartitionedCall/sequential/conv2d/Relu (scudnn) Set Tactic Name: volta_scudnn_128x32_relu_small_nn_v1 [03/20/2023-12:48:15] [V] [TRT] Tactic: -3946921629105938337 time 2.90614 [03/20/2023-12:48:15] [V] [TRT] Fastest Tactic: 9091006216302412844 Time: 2.80371 [03/20/2023-12:48:15] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/sequential/conv2d/BiasAdd + StatefulPartitionedCall/sequential/conv2d/Relu (CudaConvolution) [03/20/2023-12:48:15] [V] [TRT] Tactic: 0 time 11.8999 [03/20/2023-12:48:15] [V] [TRT] Tactic: 1 time 8.52989 [03/20/2023-12:48:15] [V] [TRT] Tactic: 2 skipped. Scratch requested: 320231424, available: 16777216 [03/20/2023-12:48:15] [V] [TRT] Tactic: 4 skipped. Scratch requested: 19615186944, available: 16777216 [03/20/2023-12:48:15] [V] [TRT] Tactic: 5 skipped. Scratch requested: 330270208, available: 16777216 [03/20/2023-12:48:16] [V] [TRT] Tactic: 6 time 11.8907 [03/20/2023-12:48:16] [I] [TRT] Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output. 
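Note on the warning just above: the skipped CudaConvolution tactics requested scratch buffers of roughly 300 MB up to nearly 20 GB, while this build is running with the 16 MB default workspace listed under Build Options. If the target has memory to spare, rerunning with a larger limit (trtexec's --workspace option, or max_workspace_size in the API sketch earlier) lets the builder time those tactics as well; whether that changes the winning tactic or the final latency would have to be confirmed by measurement.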
[03/20/2023-12:48:16] [V] [TRT] Fastest Tactic: 1 Time: 8.52989 [03/20/2023-12:48:16] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/sequential/conv2d/BiasAdd + StatefulPartitionedCall/sequential/conv2d/Relu (CudaDepthwiseConvolution) [03/20/2023-12:48:16] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping [03/20/2023-12:48:16] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: 9091006216302412844 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d/BiasAdd + StatefulPartitionedCall/sequential/conv2d/Relu (scudnn) Set Tactic Name: volta_scudnn_128x64_relu_interior_nn_v1 [03/20/2023-12:48:16] [V] [TRT] [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d/BiasAdd + StatefulPartitionedCall/sequential/conv2d/Relu (scudnn) Set Tactic Name: volta_scudnn_128x128_relu_interior_nn_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d/BiasAdd + StatefulPartitionedCall/sequential/conv2d/Relu (scudnn) Set Tactic Name: volta_scudnn_128x128_relu_medium_nn_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d/BiasAdd + StatefulPartitionedCall/sequential/conv2d/Relu (scudnn) Set Tactic Name: volta_scudnn_128x32_relu_interior_nn_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d/BiasAdd + StatefulPartitionedCall/sequential/conv2d/Relu (scudnn_winograd) Set Tactic Name: volta_scudnn_winograd_128x128_ldg1_ldg4_relu_tile148t_nt_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d/BiasAdd + StatefulPartitionedCall/sequential/conv2d/Relu (scudnn) Set Tactic Name: volta_scudnn_128x64_relu_xregs_large_nn_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d/BiasAdd + StatefulPartitionedCall/sequential/conv2d/Relu (scudnn) Set Tactic Name: volta_scudnn_128x128_relu_small_nn_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d/BiasAdd + StatefulPartitionedCall/sequential/conv2d/Relu (scudnn) Set Tactic Name: volta_scudnn_128x128_relu_xregs_large_nn_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d/BiasAdd + StatefulPartitionedCall/sequential/conv2d/Relu (scudnn) Set Tactic Name: volta_scudnn_128x64_relu_small_nn_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d/BiasAdd + StatefulPartitionedCall/sequential/conv2d/Relu (scudnn) Set Tactic Name: volta_scudnn_128x64_relu_interior_nn_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d/BiasAdd + StatefulPartitionedCall/sequential/conv2d/Relu (scudnn) Set Tactic Name: volta_scudnn_128x64_relu_medium_nn_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d/BiasAdd + StatefulPartitionedCall/sequential/conv2d/Relu (scudnn) Set Tactic Name: volta_scudnn_128x32_relu_medium_nn_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d/BiasAdd + StatefulPartitionedCall/sequential/conv2d/Relu (scudnn) Set Tactic Name: volta_scudnn_128x32_relu_small_nn_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d/BiasAdd + StatefulPartitionedCall/sequential/conv2d/Relu (scudnn) Set Tactic Name: volta_scudnn_128x64_relu_interior_nn_v1 [03/20/2023-12:48:16] [V] [TRT] *************** Autotuning format combination: Float(1,143,20449,5234944) -> Float(1,71,5041,1290496) *************** [03/20/2023-12:48:16] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/sequential/max_pooling2d/MaxPool 
(Pooling) [03/20/2023-12:48:16] [V] [TRT] Tactic: -1 time 1.56262 [03/20/2023-12:48:16] [V] [TRT] Fastest Tactic: -1 Time: 1.56262 [03/20/2023-12:48:16] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/sequential/max_pooling2d/MaxPool (TiledPooling) [03/20/2023-12:48:16] [V] [TRT] Tactic: 5505281 time 3.32394 [03/20/2023-12:48:16] [V] [TRT] Tactic: 5570817 time 2.7255 [03/20/2023-12:48:16] [V] [TRT] Tactic: 5636353 time 2.64579 [03/20/2023-12:48:16] [V] [TRT] Tactic: 5701889 time 2.6369 [03/20/2023-12:48:16] [V] [TRT] Tactic: 5767425 time 2.41232 [03/20/2023-12:48:16] [V] [TRT] Tactic: 5832961 time 2.42045 [03/20/2023-12:48:16] [V] [TRT] Tactic: 5898497 time 2.41978 [03/20/2023-12:48:16] [V] [TRT] Tactic: 5964033 time 2.42272 [03/20/2023-12:48:16] [V] [TRT] Tactic: 6029569 time 3.20243 [03/20/2023-12:48:16] [V] [TRT] Tactic: 6095105 time 3.08198 [03/20/2023-12:48:16] [V] [TRT] Tactic: 6160641 time 3.10022 [03/20/2023-12:48:16] [V] [TRT] Tactic: 6226177 time 3.10496 [03/20/2023-12:48:16] [V] [TRT] Tactic: 6291713 time 3.10749 [03/20/2023-12:48:16] [V] [TRT] Tactic: 6357249 time 3.10995 [03/20/2023-12:48:16] [V] [TRT] Tactic: 6422785 time 3.11654 [03/20/2023-12:48:16] [V] [TRT] Tactic: 6488321 time 3.11552 [03/20/2023-12:48:16] [V] [TRT] Fastest Tactic: 5767425 Time: 2.41232 [03/20/2023-12:48:16] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Pooling Tactic: -1 [03/20/2023-12:48:16] [V] [TRT] [03/20/2023-12:48:16] [V] [TRT] *************** Autotuning format combination: Float(1,71,5041,1290496) -> Float(1,69,4761,609408) *************** [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_1/BiasAdd + StatefulPartitionedCall/sequential/conv2d_1/Relu (scudnn) Set Tactic Name: volta_scudnn_128x128_relu_interior_nn_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_1/BiasAdd + StatefulPartitionedCall/sequential/conv2d_1/Relu (scudnn) Set Tactic Name: volta_scudnn_128x128_relu_medium_nn_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_1/BiasAdd + StatefulPartitionedCall/sequential/conv2d_1/Relu (scudnn) Set Tactic Name: volta_scudnn_128x32_relu_interior_nn_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_1/BiasAdd + StatefulPartitionedCall/sequential/conv2d_1/Relu (scudnn_winograd) Set Tactic Name: volta_scudnn_winograd_128x128_ldg1_ldg4_relu_tile148t_nt_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_1/BiasAdd + StatefulPartitionedCall/sequential/conv2d_1/Relu (scudnn) Set Tactic Name: volta_scudnn_128x64_relu_xregs_large_nn_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_1/BiasAdd + StatefulPartitionedCall/sequential/conv2d_1/Relu (scudnn) Set Tactic Name: volta_scudnn_128x128_relu_small_nn_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_1/BiasAdd + StatefulPartitionedCall/sequential/conv2d_1/Relu (scudnn) Set Tactic Name: volta_scudnn_128x128_relu_xregs_large_nn_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_1/BiasAdd + StatefulPartitionedCall/sequential/conv2d_1/Relu (scudnn) Set Tactic Name: volta_scudnn_128x64_relu_small_nn_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_1/BiasAdd + StatefulPartitionedCall/sequential/conv2d_1/Relu (scudnn) Set Tactic Name: volta_scudnn_128x64_relu_interior_nn_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_1/BiasAdd + StatefulPartitionedCall/sequential/conv2d_1/Relu 
(scudnn) Set Tactic Name: volta_scudnn_128x64_relu_medium_nn_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_1/BiasAdd + StatefulPartitionedCall/sequential/conv2d_1/Relu (scudnn) Set Tactic Name: volta_scudnn_128x32_relu_medium_nn_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_1/BiasAdd + StatefulPartitionedCall/sequential/conv2d_1/Relu (scudnn) Set Tactic Name: volta_scudnn_128x32_relu_small_nn_v1 [03/20/2023-12:48:16] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/sequential/conv2d_1/BiasAdd + StatefulPartitionedCall/sequential/conv2d_1/Relu (FusedConvActConvolution) [03/20/2023-12:48:16] [V] [TRT] FusedConvActConvolution has no valid tactics for this config, skipping [03/20/2023-12:48:16] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/sequential/conv2d_1/BiasAdd + StatefulPartitionedCall/sequential/conv2d_1/Relu (CaskConvolution) [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_1/BiasAdd + StatefulPartitionedCall/sequential/conv2d_1/Relu (scudnn) Set Tactic Name: volta_scudnn_128x128_relu_interior_nn_v1 [03/20/2023-12:48:16] [V] [TRT] Tactic: 1754569683116234317 time 5.70947 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_1/BiasAdd + StatefulPartitionedCall/sequential/conv2d_1/Relu (scudnn) Set Tactic Name: volta_scudnn_128x128_relu_medium_nn_v1 [03/20/2023-12:48:16] [V] [TRT] Tactic: 1825138533642645384 time 5.74282 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_1/BiasAdd + StatefulPartitionedCall/sequential/conv2d_1/Relu (scudnn) Set Tactic Name: volta_scudnn_128x32_relu_interior_nn_v1 [03/20/2023-12:48:16] [V] [TRT] Tactic: 2733356012094739613 time 7.93568 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_1/BiasAdd + StatefulPartitionedCall/sequential/conv2d_1/Relu (scudnn_winograd) Set Tactic Name: volta_scudnn_winograd_128x128_ldg1_ldg4_relu_tile148t_nt_v1 [03/20/2023-12:48:16] [V] [TRT] Tactic: 2775507031594384867 time 5.2079 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_1/BiasAdd + StatefulPartitionedCall/sequential/conv2d_1/Relu (scudnn) Set Tactic Name: volta_scudnn_128x64_relu_xregs_large_nn_v1 [03/20/2023-12:48:16] [V] [TRT] Tactic: 2842488832350522458 time 6.55936 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_1/BiasAdd + StatefulPartitionedCall/sequential/conv2d_1/Relu (scudnn) Set Tactic Name: volta_scudnn_128x128_relu_small_nn_v1 [03/20/2023-12:48:16] [V] [TRT] Tactic: 3915320020053085238 time 6.53312 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_1/BiasAdd + StatefulPartitionedCall/sequential/conv2d_1/Relu (scudnn) Set Tactic Name: volta_scudnn_128x128_relu_xregs_large_nn_v1 [03/20/2023-12:48:16] [V] [TRT] Tactic: 6448355332020552203 time 6.36864 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_1/BiasAdd + StatefulPartitionedCall/sequential/conv2d_1/Relu (scudnn) Set Tactic Name: volta_scudnn_128x64_relu_small_nn_v1 [03/20/2023-12:48:16] [V] [TRT] Tactic: 6808617066150061604 time 6.71334 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_1/BiasAdd + StatefulPartitionedCall/sequential/conv2d_1/Relu (scudnn) Set Tactic Name: volta_scudnn_128x64_relu_interior_nn_v1 [03/20/2023-12:48:16] [V] [TRT] Tactic: 9091006216302412844 time 6.57987 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_1/BiasAdd + 
StatefulPartitionedCall/sequential/conv2d_1/Relu (scudnn) Set Tactic Name: volta_scudnn_128x64_relu_medium_nn_v1 [03/20/2023-12:48:16] [V] [TRT] Tactic: -8060443123034038864 time 6.91811 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_1/BiasAdd + StatefulPartitionedCall/sequential/conv2d_1/Relu (scudnn) Set Tactic Name: volta_scudnn_128x32_relu_medium_nn_v1 [03/20/2023-12:48:16] [V] [TRT] Tactic: -4420849921117327522 time 9.24826 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_1/BiasAdd + StatefulPartitionedCall/sequential/conv2d_1/Relu (scudnn) Set Tactic Name: volta_scudnn_128x32_relu_small_nn_v1 [03/20/2023-12:48:16] [V] [TRT] Tactic: -3946921629105938337 time 8.04291 [03/20/2023-12:48:16] [V] [TRT] Fastest Tactic: 2775507031594384867 Time: 5.2079 [03/20/2023-12:48:16] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/sequential/conv2d_1/BiasAdd + StatefulPartitionedCall/sequential/conv2d_1/Relu (CudaConvolution) [03/20/2023-12:48:16] [V] [TRT] Tactic: 0 time 10.4385 [03/20/2023-12:48:16] [V] [TRT] Tactic: 1 time 7.35773 [03/20/2023-12:48:16] [V] [TRT] Tactic: 2 skipped. Scratch requested: 6362219520, available: 16777216 [03/20/2023-12:48:16] [V] [TRT] Tactic: 4 skipped. Scratch requested: 9304670208, available: 16777216 [03/20/2023-12:48:16] [V] [TRT] Tactic: 5 skipped. Scratch requested: 627294208, available: 16777216 [03/20/2023-12:48:16] [V] [TRT] Tactic: 6 time 5.9471 [03/20/2023-12:48:16] [V] [TRT] Fastest Tactic: 6 Time: 5.9471 [03/20/2023-12:48:16] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/sequential/conv2d_1/BiasAdd + StatefulPartitionedCall/sequential/conv2d_1/Relu (CudaDepthwiseConvolution) [03/20/2023-12:48:16] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping [03/20/2023-12:48:16] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: 2775507031594384867 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_1/BiasAdd + StatefulPartitionedCall/sequential/conv2d_1/Relu (scudnn_winograd) Set Tactic Name: volta_scudnn_winograd_128x128_ldg1_ldg4_relu_tile148t_nt_v1 [03/20/2023-12:48:16] [V] [TRT] [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_1/BiasAdd + StatefulPartitionedCall/sequential/conv2d_1/Relu (scudnn) Set Tactic Name: volta_scudnn_128x128_relu_interior_nn_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_1/BiasAdd + StatefulPartitionedCall/sequential/conv2d_1/Relu (scudnn) Set Tactic Name: volta_scudnn_128x128_relu_medium_nn_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_1/BiasAdd + StatefulPartitionedCall/sequential/conv2d_1/Relu (scudnn) Set Tactic Name: volta_scudnn_128x32_relu_interior_nn_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_1/BiasAdd + StatefulPartitionedCall/sequential/conv2d_1/Relu (scudnn_winograd) Set Tactic Name: volta_scudnn_winograd_128x128_ldg1_ldg4_relu_tile148t_nt_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_1/BiasAdd + StatefulPartitionedCall/sequential/conv2d_1/Relu (scudnn) Set Tactic Name: volta_scudnn_128x64_relu_xregs_large_nn_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_1/BiasAdd + StatefulPartitionedCall/sequential/conv2d_1/Relu (scudnn) Set Tactic Name: volta_scudnn_128x128_relu_small_nn_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_1/BiasAdd + 
StatefulPartitionedCall/sequential/conv2d_1/Relu (scudnn) Set Tactic Name: volta_scudnn_128x128_relu_xregs_large_nn_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_1/BiasAdd + StatefulPartitionedCall/sequential/conv2d_1/Relu (scudnn) Set Tactic Name: volta_scudnn_128x64_relu_small_nn_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_1/BiasAdd + StatefulPartitionedCall/sequential/conv2d_1/Relu (scudnn) Set Tactic Name: volta_scudnn_128x64_relu_interior_nn_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_1/BiasAdd + StatefulPartitionedCall/sequential/conv2d_1/Relu (scudnn) Set Tactic Name: volta_scudnn_128x64_relu_medium_nn_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_1/BiasAdd + StatefulPartitionedCall/sequential/conv2d_1/Relu (scudnn) Set Tactic Name: volta_scudnn_128x32_relu_medium_nn_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_1/BiasAdd + StatefulPartitionedCall/sequential/conv2d_1/Relu (scudnn) Set Tactic Name: volta_scudnn_128x32_relu_small_nn_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_1/BiasAdd + StatefulPartitionedCall/sequential/conv2d_1/Relu (scudnn_winograd) Set Tactic Name: volta_scudnn_winograd_128x128_ldg1_ldg4_relu_tile148t_nt_v1 [03/20/2023-12:48:16] [V] [TRT] *************** Autotuning format combination: Float(1,69,4761,609408) -> Float(1,34,1156,147968) *************** [03/20/2023-12:48:16] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/sequential/max_pooling2d_1/MaxPool (Pooling) [03/20/2023-12:48:16] [V] [TRT] Tactic: -1 time 0.190752 [03/20/2023-12:48:16] [V] [TRT] Fastest Tactic: -1 Time: 0.190752 [03/20/2023-12:48:16] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/sequential/max_pooling2d_1/MaxPool (TiledPooling) [03/20/2023-12:48:16] [V] [TRT] Tactic: 5505281 time 0.43664 [03/20/2023-12:48:16] [V] [TRT] Tactic: 5570817 time 0.294912 [03/20/2023-12:48:16] [V] [TRT] Tactic: 5636353 time 0.27696 [03/20/2023-12:48:16] [V] [TRT] Tactic: 5701889 time 0.26864 [03/20/2023-12:48:16] [V] [TRT] Tactic: 5767425 time 0.266336 [03/20/2023-12:48:16] [V] [TRT] Tactic: 5832961 time 0.264192 [03/20/2023-12:48:16] [V] [TRT] Tactic: 5898497 time 0.264192 [03/20/2023-12:48:16] [V] [TRT] Tactic: 5964033 time 0.270016 [03/20/2023-12:48:16] [V] [TRT] Tactic: 6029569 time 0.385024 [03/20/2023-12:48:16] [V] [TRT] Tactic: 6095105 time 0.360352 [03/20/2023-12:48:16] [V] [TRT] Tactic: 6160641 time 0.357088 [03/20/2023-12:48:16] [V] [TRT] Tactic: 6226177 time 0.359104 [03/20/2023-12:48:16] [V] [TRT] Tactic: 6291713 time 0.36032 [03/20/2023-12:48:16] [V] [TRT] Tactic: 6357249 time 0.361952 [03/20/2023-12:48:16] [V] [TRT] Tactic: 6422785 time 0.361024 [03/20/2023-12:48:16] [V] [TRT] Tactic: 6488321 time 0.364544 [03/20/2023-12:48:16] [V] [TRT] Fastest Tactic: 5832961 Time: 0.264192 [03/20/2023-12:48:16] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Pooling Tactic: -1 [03/20/2023-12:48:16] [V] [TRT] [03/20/2023-12:48:16] [V] [TRT] *************** Autotuning format combination: Float(1,34,1156,147968) -> Float(1,32,1024,65536) *************** [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_2/BiasAdd + StatefulPartitionedCall/sequential/conv2d_2/Relu (scudnn) Set Tactic Name: volta_scudnn_128x128_relu_interior_nn_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_2/BiasAdd + StatefulPartitionedCall/sequential/conv2d_2/Relu (scudnn) Set 
Tactic Name: volta_scudnn_128x128_relu_medium_nn_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_2/BiasAdd + StatefulPartitionedCall/sequential/conv2d_2/Relu (scudnn) Set Tactic Name: volta_scudnn_128x32_relu_interior_nn_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_2/BiasAdd + StatefulPartitionedCall/sequential/conv2d_2/Relu (scudnn_winograd) Set Tactic Name: volta_scudnn_winograd_128x128_ldg1_ldg4_relu_tile148t_nt_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_2/BiasAdd + StatefulPartitionedCall/sequential/conv2d_2/Relu (scudnn) Set Tactic Name: volta_scudnn_128x64_relu_xregs_large_nn_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_2/BiasAdd + StatefulPartitionedCall/sequential/conv2d_2/Relu (scudnn) Set Tactic Name: volta_scudnn_128x128_relu_small_nn_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_2/BiasAdd + StatefulPartitionedCall/sequential/conv2d_2/Relu (scudnn) Set Tactic Name: volta_scudnn_128x128_relu_xregs_large_nn_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_2/BiasAdd + StatefulPartitionedCall/sequential/conv2d_2/Relu (scudnn) Set Tactic Name: volta_scudnn_128x64_relu_small_nn_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_2/BiasAdd + StatefulPartitionedCall/sequential/conv2d_2/Relu (scudnn) Set Tactic Name: volta_scudnn_128x64_relu_interior_nn_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_2/BiasAdd + StatefulPartitionedCall/sequential/conv2d_2/Relu (scudnn) Set Tactic Name: volta_scudnn_128x64_relu_medium_nn_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_2/BiasAdd + StatefulPartitionedCall/sequential/conv2d_2/Relu (scudnn) Set Tactic Name: volta_scudnn_128x32_relu_medium_nn_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_2/BiasAdd + StatefulPartitionedCall/sequential/conv2d_2/Relu (scudnn) Set Tactic Name: volta_scudnn_128x32_relu_small_nn_v1 [03/20/2023-12:48:16] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/sequential/conv2d_2/BiasAdd + StatefulPartitionedCall/sequential/conv2d_2/Relu (FusedConvActConvolution) [03/20/2023-12:48:16] [V] [TRT] FusedConvActConvolution has no valid tactics for this config, skipping [03/20/2023-12:48:16] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/sequential/conv2d_2/BiasAdd + StatefulPartitionedCall/sequential/conv2d_2/Relu (CaskConvolution) [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_2/BiasAdd + StatefulPartitionedCall/sequential/conv2d_2/Relu (scudnn) Set Tactic Name: volta_scudnn_128x128_relu_interior_nn_v1 [03/20/2023-12:48:16] [V] [TRT] Tactic: 1754569683116234317 time 0.77824 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_2/BiasAdd + StatefulPartitionedCall/sequential/conv2d_2/Relu (scudnn) Set Tactic Name: volta_scudnn_128x128_relu_medium_nn_v1 [03/20/2023-12:48:16] [V] [TRT] Tactic: 1825138533642645384 time 0.781376 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_2/BiasAdd + StatefulPartitionedCall/sequential/conv2d_2/Relu (scudnn) Set Tactic Name: volta_scudnn_128x32_relu_interior_nn_v1 [03/20/2023-12:48:16] [V] [TRT] Tactic: 2733356012094739613 time 0.449024 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_2/BiasAdd + StatefulPartitionedCall/sequential/conv2d_2/Relu (scudnn_winograd) Set Tactic 
Name: volta_scudnn_winograd_128x128_ldg1_ldg4_relu_tile148t_nt_v1 [03/20/2023-12:48:16] [V] [TRT] Tactic: 2775507031594384867 time 0.252448 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_2/BiasAdd + StatefulPartitionedCall/sequential/conv2d_2/Relu (scudnn) Set Tactic Name: volta_scudnn_128x64_relu_xregs_large_nn_v1 [03/20/2023-12:48:16] [V] [TRT] Tactic: 2842488832350522458 time 0.412192 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_2/BiasAdd + StatefulPartitionedCall/sequential/conv2d_2/Relu (scudnn) Set Tactic Name: volta_scudnn_128x128_relu_small_nn_v1 [03/20/2023-12:48:16] [V] [TRT] Tactic: 3915320020053085238 time 0.768512 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_2/BiasAdd + StatefulPartitionedCall/sequential/conv2d_2/Relu (scudnn) Set Tactic Name: volta_scudnn_128x128_relu_xregs_large_nn_v1 [03/20/2023-12:48:16] [V] [TRT] Tactic: 6448355332020552203 time 0.794624 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_2/BiasAdd + StatefulPartitionedCall/sequential/conv2d_2/Relu (scudnn) Set Tactic Name: volta_scudnn_128x64_relu_small_nn_v1 [03/20/2023-12:48:16] [V] [TRT] Tactic: 6808617066150061604 time 0.424256 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_2/BiasAdd + StatefulPartitionedCall/sequential/conv2d_2/Relu (scudnn) Set Tactic Name: volta_scudnn_128x64_relu_interior_nn_v1 [03/20/2023-12:48:16] [V] [TRT] Tactic: 9091006216302412844 time 0.399872 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_2/BiasAdd + StatefulPartitionedCall/sequential/conv2d_2/Relu (scudnn) Set Tactic Name: volta_scudnn_128x64_relu_medium_nn_v1 [03/20/2023-12:48:16] [V] [TRT] Tactic: -8060443123034038864 time 0.431424 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_2/BiasAdd + StatefulPartitionedCall/sequential/conv2d_2/Relu (scudnn) Set Tactic Name: volta_scudnn_128x32_relu_medium_nn_v1 [03/20/2023-12:48:16] [V] [TRT] Tactic: -4420849921117327522 time 0.481792 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_2/BiasAdd + StatefulPartitionedCall/sequential/conv2d_2/Relu (scudnn) Set Tactic Name: volta_scudnn_128x32_relu_small_nn_v1 [03/20/2023-12:48:16] [V] [TRT] Tactic: -3946921629105938337 time 0.475008 [03/20/2023-12:48:16] [V] [TRT] Fastest Tactic: 2775507031594384867 Time: 0.252448 [03/20/2023-12:48:16] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/sequential/conv2d_2/BiasAdd + StatefulPartitionedCall/sequential/conv2d_2/Relu (CudaConvolution) [03/20/2023-12:48:16] [V] [TRT] Tactic: 0 time 0.792576 [03/20/2023-12:48:16] [V] [TRT] Tactic: 1 time 0.445952 [03/20/2023-12:48:16] [V] [TRT] Tactic: 2 skipped. Scratch requested: 684195840, available: 16777216 [03/20/2023-12:48:16] [V] [TRT] Tactic: 4 skipped. Scratch requested: 904298496, available: 16777216 [03/20/2023-12:48:16] [V] [TRT] Tactic: 5 skipped. 
Scratch requested: 278020096, available: 16777216 [03/20/2023-12:48:16] [V] [TRT] Tactic: 6 time 0.302496 [03/20/2023-12:48:16] [V] [TRT] Fastest Tactic: 6 Time: 0.302496 [03/20/2023-12:48:16] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/sequential/conv2d_2/BiasAdd + StatefulPartitionedCall/sequential/conv2d_2/Relu (CudaDepthwiseConvolution) [03/20/2023-12:48:16] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping [03/20/2023-12:48:16] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: 2775507031594384867 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_2/BiasAdd + StatefulPartitionedCall/sequential/conv2d_2/Relu (scudnn_winograd) Set Tactic Name: volta_scudnn_winograd_128x128_ldg1_ldg4_relu_tile148t_nt_v1 [03/20/2023-12:48:16] [V] [TRT] [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_2/BiasAdd + StatefulPartitionedCall/sequential/conv2d_2/Relu (scudnn) Set Tactic Name: volta_scudnn_128x128_relu_interior_nn_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_2/BiasAdd + StatefulPartitionedCall/sequential/conv2d_2/Relu (scudnn) Set Tactic Name: volta_scudnn_128x128_relu_medium_nn_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_2/BiasAdd + StatefulPartitionedCall/sequential/conv2d_2/Relu (scudnn) Set Tactic Name: volta_scudnn_128x32_relu_interior_nn_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_2/BiasAdd + StatefulPartitionedCall/sequential/conv2d_2/Relu (scudnn_winograd) Set Tactic Name: volta_scudnn_winograd_128x128_ldg1_ldg4_relu_tile148t_nt_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_2/BiasAdd + StatefulPartitionedCall/sequential/conv2d_2/Relu (scudnn) Set Tactic Name: volta_scudnn_128x64_relu_xregs_large_nn_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_2/BiasAdd + StatefulPartitionedCall/sequential/conv2d_2/Relu (scudnn) Set Tactic Name: volta_scudnn_128x128_relu_small_nn_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_2/BiasAdd + StatefulPartitionedCall/sequential/conv2d_2/Relu (scudnn) Set Tactic Name: volta_scudnn_128x128_relu_xregs_large_nn_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_2/BiasAdd + StatefulPartitionedCall/sequential/conv2d_2/Relu (scudnn) Set Tactic Name: volta_scudnn_128x64_relu_small_nn_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_2/BiasAdd + StatefulPartitionedCall/sequential/conv2d_2/Relu (scudnn) Set Tactic Name: volta_scudnn_128x64_relu_interior_nn_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_2/BiasAdd + StatefulPartitionedCall/sequential/conv2d_2/Relu (scudnn) Set Tactic Name: volta_scudnn_128x64_relu_medium_nn_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_2/BiasAdd + StatefulPartitionedCall/sequential/conv2d_2/Relu (scudnn) Set Tactic Name: volta_scudnn_128x32_relu_medium_nn_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_2/BiasAdd + StatefulPartitionedCall/sequential/conv2d_2/Relu (scudnn) Set Tactic Name: volta_scudnn_128x32_relu_small_nn_v1 [03/20/2023-12:48:16] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_2/BiasAdd + StatefulPartitionedCall/sequential/conv2d_2/Relu (scudnn_winograd) Set Tactic Name: volta_scudnn_winograd_128x128_ldg1_ldg4_relu_tile148t_nt_v1 [03/20/2023-12:48:16] [V] [TRT] *************** 
Autotuning format combination: Float(1,32,1024,65536) -> Float(1,16,256,16384) *************** [03/20/2023-12:48:16] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/sequential/max_pooling2d_2/MaxPool (Pooling) [03/20/2023-12:48:16] [V] [TRT] Tactic: -1 time 0.022528 [03/20/2023-12:48:16] [V] [TRT] Fastest Tactic: -1 Time: 0.022528 [03/20/2023-12:48:16] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/sequential/max_pooling2d_2/MaxPool (TiledPooling) [03/20/2023-12:48:17] [V] [TRT] Tactic: 5505281 time 0.04096 [03/20/2023-12:48:17] [V] [TRT] Tactic: 5570817 time 0.028448 [03/20/2023-12:48:17] [V] [TRT] Tactic: 5636353 time 0.02304 [03/20/2023-12:48:17] [V] [TRT] Tactic: 5701889 time 0.022496 [03/20/2023-12:48:17] [V] [TRT] Tactic: 5767425 time 0.022368 [03/20/2023-12:48:17] [V] [TRT] Tactic: 5832961 time 0.021376 [03/20/2023-12:48:17] [V] [TRT] Tactic: 5898497 time 0.022496 [03/20/2023-12:48:17] [V] [TRT] Tactic: 5964033 time 0.022304 [03/20/2023-12:48:17] [V] [TRT] Tactic: 6029569 time 0.025088 [03/20/2023-12:48:17] [V] [TRT] Tactic: 6095105 time 0.020992 [03/20/2023-12:48:17] [V] [TRT] Tactic: 6160641 time 0.021024 [03/20/2023-12:48:17] [V] [TRT] Tactic: 6226177 time 0.021056 [03/20/2023-12:48:17] [V] [TRT] Tactic: 6291713 time 0.021408 [03/20/2023-12:48:17] [V] [TRT] Tactic: 6357249 time 0.020992 [03/20/2023-12:48:17] [V] [TRT] Tactic: 6422785 time 0.022112 [03/20/2023-12:48:17] [V] [TRT] Tactic: 6488321 time 0.022112 [03/20/2023-12:48:17] [V] [TRT] Fastest Tactic: 6095105 Time: 0.020992 [03/20/2023-12:48:17] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: TiledPooling Tactic: 6095105 [03/20/2023-12:48:17] [V] [TRT] [03/20/2023-12:48:17] [V] [TRT] *************** Autotuning format combination: Float(1,16,256,16384) -> Float(1,14,196,6272) *************** [03/20/2023-12:48:17] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_3/BiasAdd + StatefulPartitionedCall/sequential/conv2d_3/Relu (scudnn) Set Tactic Name: volta_scudnn_128x128_relu_interior_nn_v1 [03/20/2023-12:48:17] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_3/BiasAdd + StatefulPartitionedCall/sequential/conv2d_3/Relu (scudnn) Set Tactic Name: volta_scudnn_128x128_relu_medium_nn_v1 [03/20/2023-12:48:17] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_3/BiasAdd + StatefulPartitionedCall/sequential/conv2d_3/Relu (scudnn) Set Tactic Name: volta_scudnn_128x32_relu_interior_nn_v1 [03/20/2023-12:48:17] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_3/BiasAdd + StatefulPartitionedCall/sequential/conv2d_3/Relu (scudnn_winograd) Set Tactic Name: volta_scudnn_winograd_128x128_ldg1_ldg4_relu_tile148t_nt_v1 [03/20/2023-12:48:17] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_3/BiasAdd + StatefulPartitionedCall/sequential/conv2d_3/Relu (scudnn) Set Tactic Name: volta_scudnn_128x64_relu_xregs_large_nn_v1 [03/20/2023-12:48:17] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_3/BiasAdd + StatefulPartitionedCall/sequential/conv2d_3/Relu (scudnn) Set Tactic Name: volta_scudnn_128x128_relu_small_nn_v1 [03/20/2023-12:48:17] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_3/BiasAdd + StatefulPartitionedCall/sequential/conv2d_3/Relu (scudnn) Set Tactic Name: volta_scudnn_128x128_relu_xregs_large_nn_v1 [03/20/2023-12:48:17] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_3/BiasAdd + StatefulPartitionedCall/sequential/conv2d_3/Relu (scudnn) Set Tactic Name: volta_scudnn_128x64_relu_small_nn_v1 [03/20/2023-12:48:17] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_3/BiasAdd + 
StatefulPartitionedCall/sequential/conv2d_3/Relu (scudnn) Set Tactic Name: volta_scudnn_128x64_relu_interior_nn_v1 [03/20/2023-12:48:17] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_3/BiasAdd + StatefulPartitionedCall/sequential/conv2d_3/Relu (scudnn) Set Tactic Name: volta_scudnn_128x64_relu_medium_nn_v1 [03/20/2023-12:48:17] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_3/BiasAdd + StatefulPartitionedCall/sequential/conv2d_3/Relu (scudnn) Set Tactic Name: volta_scudnn_128x32_relu_medium_nn_v1 [03/20/2023-12:48:17] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_3/BiasAdd + StatefulPartitionedCall/sequential/conv2d_3/Relu (scudnn) Set Tactic Name: volta_scudnn_128x32_relu_small_nn_v1 [03/20/2023-12:48:17] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/sequential/conv2d_3/BiasAdd + StatefulPartitionedCall/sequential/conv2d_3/Relu (FusedConvActConvolution) [03/20/2023-12:48:17] [V] [TRT] FusedConvActConvolution has no valid tactics for this config, skipping [03/20/2023-12:48:17] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/sequential/conv2d_3/BiasAdd + StatefulPartitionedCall/sequential/conv2d_3/Relu (CaskConvolution) [03/20/2023-12:48:17] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_3/BiasAdd + StatefulPartitionedCall/sequential/conv2d_3/Relu (scudnn) Set Tactic Name: volta_scudnn_128x128_relu_interior_nn_v1 [03/20/2023-12:48:17] [V] [TRT] Tactic: 1754569683116234317 time 0.102848 [03/20/2023-12:48:17] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_3/BiasAdd + StatefulPartitionedCall/sequential/conv2d_3/Relu (scudnn) Set Tactic Name: volta_scudnn_128x128_relu_medium_nn_v1 [03/20/2023-12:48:17] [V] [TRT] Tactic: 1825138533642645384 time 0.104448 [03/20/2023-12:48:17] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_3/BiasAdd + StatefulPartitionedCall/sequential/conv2d_3/Relu (scudnn) Set Tactic Name: volta_scudnn_128x32_relu_interior_nn_v1 [03/20/2023-12:48:17] [V] [TRT] Tactic: 2733356012094739613 time 0.075008 [03/20/2023-12:48:17] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_3/BiasAdd + StatefulPartitionedCall/sequential/conv2d_3/Relu (scudnn_winograd) Set Tactic Name: volta_scudnn_winograd_128x128_ldg1_ldg4_relu_tile148t_nt_v1 [03/20/2023-12:48:17] [V] [TRT] Tactic: 2775507031594384867 time 0.02224 [03/20/2023-12:48:17] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_3/BiasAdd + StatefulPartitionedCall/sequential/conv2d_3/Relu (scudnn) Set Tactic Name: volta_scudnn_128x64_relu_xregs_large_nn_v1 [03/20/2023-12:48:17] [V] [TRT] Tactic: 2842488832350522458 time 0.06544 [03/20/2023-12:48:17] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_3/BiasAdd + StatefulPartitionedCall/sequential/conv2d_3/Relu (scudnn) Set Tactic Name: volta_scudnn_128x128_relu_small_nn_v1 [03/20/2023-12:48:17] [V] [TRT] Tactic: 3915320020053085238 time 0.102944 [03/20/2023-12:48:17] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_3/BiasAdd + StatefulPartitionedCall/sequential/conv2d_3/Relu (scudnn) Set Tactic Name: volta_scudnn_128x128_relu_xregs_large_nn_v1 [03/20/2023-12:48:17] [V] [TRT] Tactic: 6448355332020552203 time 0.10496 [03/20/2023-12:48:17] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_3/BiasAdd + StatefulPartitionedCall/sequential/conv2d_3/Relu (scudnn) Set Tactic Name: volta_scudnn_128x64_relu_small_nn_v1 [03/20/2023-12:48:17] [V] [TRT] Tactic: 6808617066150061604 time 0.064064 [03/20/2023-12:48:17] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_3/BiasAdd + StatefulPartitionedCall/sequential/conv2d_3/Relu 
(scudnn) Set Tactic Name: volta_scudnn_128x64_relu_interior_nn_v1 [03/20/2023-12:48:17] [V] [TRT] Tactic: 9091006216302412844 time 0.059904 [03/20/2023-12:48:17] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_3/BiasAdd + StatefulPartitionedCall/sequential/conv2d_3/Relu (scudnn) Set Tactic Name: volta_scudnn_128x64_relu_medium_nn_v1 [03/20/2023-12:48:17] [V] [TRT] Tactic: -8060443123034038864 time 0.06624 [03/20/2023-12:48:17] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_3/BiasAdd + StatefulPartitionedCall/sequential/conv2d_3/Relu (scudnn) Set Tactic Name: volta_scudnn_128x32_relu_medium_nn_v1 [03/20/2023-12:48:17] [V] [TRT] Tactic: -4420849921117327522 time 0.059392 [03/20/2023-12:48:17] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_3/BiasAdd + StatefulPartitionedCall/sequential/conv2d_3/Relu (scudnn) Set Tactic Name: volta_scudnn_128x32_relu_small_nn_v1 [03/20/2023-12:48:17] [V] [TRT] Tactic: -3946921629105938337 time 0.07728 [03/20/2023-12:48:17] [V] [TRT] Fastest Tactic: 2775507031594384867 Time: 0.02224 [03/20/2023-12:48:17] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/sequential/conv2d_3/BiasAdd + StatefulPartitionedCall/sequential/conv2d_3/Relu (CudaConvolution) [03/20/2023-12:48:17] [V] [TRT] Tactic: 0 time 0.055808 [03/20/2023-12:48:17] [V] [TRT] Tactic: 1 time 0.081632 [03/20/2023-12:48:17] [V] [TRT] Tactic: 2 skipped. Scratch requested: 65479680, available: 16777216 [03/20/2023-12:48:17] [V] [TRT] Tactic: 4 skipped. Scratch requested: 26173440, available: 16777216 [03/20/2023-12:48:17] [V] [TRT] Tactic: 5 skipped. Scratch requested: 69517312, available: 16777216 [03/20/2023-12:48:17] [V] [TRT] Tactic: 6 time 0.030272 [03/20/2023-12:48:17] [V] [TRT] Fastest Tactic: 6 Time: 0.030272 [03/20/2023-12:48:17] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/sequential/conv2d_3/BiasAdd + StatefulPartitionedCall/sequential/conv2d_3/Relu (CudaDepthwiseConvolution) [03/20/2023-12:48:17] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping [03/20/2023-12:48:17] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CaskConvolution Tactic: 2775507031594384867 [03/20/2023-12:48:17] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_3/BiasAdd + StatefulPartitionedCall/sequential/conv2d_3/Relu (scudnn_winograd) Set Tactic Name: volta_scudnn_winograd_128x128_ldg1_ldg4_relu_tile148t_nt_v1 [03/20/2023-12:48:17] [V] [TRT] [03/20/2023-12:48:17] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_3/BiasAdd + StatefulPartitionedCall/sequential/conv2d_3/Relu (scudnn) Set Tactic Name: volta_scudnn_128x128_relu_interior_nn_v1 [03/20/2023-12:48:17] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_3/BiasAdd + StatefulPartitionedCall/sequential/conv2d_3/Relu (scudnn) Set Tactic Name: volta_scudnn_128x128_relu_medium_nn_v1 [03/20/2023-12:48:17] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_3/BiasAdd + StatefulPartitionedCall/sequential/conv2d_3/Relu (scudnn) Set Tactic Name: volta_scudnn_128x32_relu_interior_nn_v1 [03/20/2023-12:48:17] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_3/BiasAdd + StatefulPartitionedCall/sequential/conv2d_3/Relu (scudnn_winograd) Set Tactic Name: volta_scudnn_winograd_128x128_ldg1_ldg4_relu_tile148t_nt_v1 [03/20/2023-12:48:17] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_3/BiasAdd + StatefulPartitionedCall/sequential/conv2d_3/Relu (scudnn) Set Tactic Name: volta_scudnn_128x64_relu_xregs_large_nn_v1 [03/20/2023-12:48:17] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_3/BiasAdd + 
StatefulPartitionedCall/sequential/conv2d_3/Relu (scudnn) Set Tactic Name: volta_scudnn_128x128_relu_small_nn_v1 [03/20/2023-12:48:17] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_3/BiasAdd + StatefulPartitionedCall/sequential/conv2d_3/Relu (scudnn) Set Tactic Name: volta_scudnn_128x128_relu_xregs_large_nn_v1 [03/20/2023-12:48:17] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_3/BiasAdd + StatefulPartitionedCall/sequential/conv2d_3/Relu (scudnn) Set Tactic Name: volta_scudnn_128x64_relu_small_nn_v1 [03/20/2023-12:48:17] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_3/BiasAdd + StatefulPartitionedCall/sequential/conv2d_3/Relu (scudnn) Set Tactic Name: volta_scudnn_128x64_relu_interior_nn_v1 [03/20/2023-12:48:17] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_3/BiasAdd + StatefulPartitionedCall/sequential/conv2d_3/Relu (scudnn) Set Tactic Name: volta_scudnn_128x64_relu_medium_nn_v1 [03/20/2023-12:48:17] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_3/BiasAdd + StatefulPartitionedCall/sequential/conv2d_3/Relu (scudnn) Set Tactic Name: volta_scudnn_128x32_relu_medium_nn_v1 [03/20/2023-12:48:17] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_3/BiasAdd + StatefulPartitionedCall/sequential/conv2d_3/Relu (scudnn) Set Tactic Name: volta_scudnn_128x32_relu_small_nn_v1 [03/20/2023-12:48:17] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_3/BiasAdd + StatefulPartitionedCall/sequential/conv2d_3/Relu (scudnn_winograd) Set Tactic Name: volta_scudnn_winograd_128x128_ldg1_ldg4_relu_tile148t_nt_v1 [03/20/2023-12:48:17] [V] [TRT] *************** Autotuning format combination: Float(1,14,196,6272) -> Float(1,7,49,1568) *************** [03/20/2023-12:48:17] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/sequential/max_pooling2d_3/MaxPool (Pooling) [03/20/2023-12:48:17] [V] [TRT] Tactic: -1 time 0.004384 [03/20/2023-12:48:17] [V] [TRT] Fastest Tactic: -1 Time: 0.004384 [03/20/2023-12:48:17] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/sequential/max_pooling2d_3/MaxPool (TiledPooling) [03/20/2023-12:48:17] [V] [TRT] Tactic: 5505281 time 0.01024 [03/20/2023-12:48:17] [V] [TRT] Tactic: 5570817 time 0.00784 [03/20/2023-12:48:17] [V] [TRT] Tactic: 5636353 time 0.006656 [03/20/2023-12:48:17] [V] [TRT] Tactic: 5701889 time 0.006176 [03/20/2023-12:48:17] [V] [TRT] Tactic: 5767425 time 0.006144 [03/20/2023-12:48:17] [V] [TRT] Tactic: 5832961 time 0.0064 [03/20/2023-12:48:17] [V] [TRT] Tactic: 5898497 time 0.006016 [03/20/2023-12:48:17] [V] [TRT] Tactic: 5964033 time 0.006144 [03/20/2023-12:48:17] [V] [TRT] Tactic: 6029569 time 0.006688 [03/20/2023-12:48:17] [V] [TRT] Tactic: 6095105 time 0.005952 [03/20/2023-12:48:17] [V] [TRT] Tactic: 6160641 time 0.00464 [03/20/2023-12:48:17] [V] [TRT] Tactic: 6226177 time 0.004576 [03/20/2023-12:48:17] [V] [TRT] Tactic: 6291713 time 0.004608 [03/20/2023-12:48:17] [V] [TRT] Tactic: 6357249 time 0.004736 [03/20/2023-12:48:17] [V] [TRT] Tactic: 6422785 time 0.004608 [03/20/2023-12:48:17] [V] [TRT] Tactic: 6488321 time 0.004544 [03/20/2023-12:48:17] [V] [TRT] Fastest Tactic: 6488321 Time: 0.004544 [03/20/2023-12:48:17] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Pooling Tactic: -1 [03/20/2023-12:48:17] [V] [TRT] [03/20/2023-12:48:17] [V] [TRT] *************** Autotuning format combination: Float(1,7,49,1568) -> Float(1,1568) *************** [03/20/2023-12:48:17] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/sequential/max_pooling2d_3/MaxPool__36 + 
StatefulPartitionedCall/sequential/flatten/Reshape (Shuffle) [03/20/2023-12:48:17] [V] [TRT] Tactic: 0 time 0.004096 [03/20/2023-12:48:17] [V] [TRT] Fastest Tactic: 0 Time: 0.004096 [03/20/2023-12:48:17] [V] [TRT] --------------- Timing Runner: StatefulPartitionedCall/sequential/max_pooling2d_3/MaxPool__36 + StatefulPartitionedCall/sequential/flatten/Reshape (CutensorShuffle) [03/20/2023-12:48:17] [V] [TRT] Tactic: 10001 time 0.011264 [03/20/2023-12:48:17] [V] [TRT] Fastest Tactic: 10001 Time: 0.011264 [03/20/2023-12:48:17] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Shuffle Tactic: 0 [03/20/2023-12:48:17] [V] [TRT] [03/20/2023-12:48:17] [V] [TRT] *************** Autotuning format combination: Float(1,1568), Float(1,64) -> Float(1,64) *************** [03/20/2023-12:48:17] [V] [TRT] --------------- Timing Runner: (Unnamed Layer* 16) [Matrix Multiply] (MatrixMultiply) [03/20/2023-12:48:17] [V] [TRT] Tactic: 0 is the only option, timing skipped [03/20/2023-12:48:17] [V] [TRT] Fastest Tactic: 0 Time: 0 [03/20/2023-12:48:17] [V] [TRT] *************** Autotuning format combination: Float(1,64), Float(1,64) -> Float(1,64) *************** [03/20/2023-12:48:17] [V] [TRT] --------------- Timing Runner: (Unnamed Layer* 19) [ElementWise] + StatefulPartitionedCall/sequential/dense/Relu (ElementWise) [03/20/2023-12:48:17] [V] [TRT] Tactic: 1 time 0.003936 [03/20/2023-12:48:17] [V] [TRT] Tactic: 2 time 0.004096 [03/20/2023-12:48:17] [V] [TRT] Fastest Tactic: 1 Time: 0.003936 [03/20/2023-12:48:17] [V] [TRT] *************** Autotuning format combination: Float(1,64), Float(1,4) -> Float(1,4) *************** [03/20/2023-12:48:17] [V] [TRT] --------------- Timing Runner: (Unnamed Layer* 22) [Matrix Multiply] (MatrixMultiply) [03/20/2023-12:48:17] [V] [TRT] Tactic: 0 is the only option, timing skipped [03/20/2023-12:48:17] [V] [TRT] Fastest Tactic: 0 Time: 0 [03/20/2023-12:48:17] [V] [TRT] *************** Autotuning format combination: Float(1,4), Float(1,4) -> Float(1,4) *************** [03/20/2023-12:48:17] [V] [TRT] --------------- Timing Runner: (Unnamed Layer* 25) [ElementWise] (ElementWise) [03/20/2023-12:48:17] [V] [TRT] Tactic: 1 is the only option, timing skipped [03/20/2023-12:48:17] [V] [TRT] Fastest Tactic: 1 Time: 0 [03/20/2023-12:48:17] [V] [TRT] *************** Autotuning format combination: Float(1,4) -> Float(1,4) *************** [03/20/2023-12:48:17] [V] [TRT] --------------- Timing Runner: (Unnamed Layer* 34) [Softmax] (SoftMax) [03/20/2023-12:48:17] [V] [TRT] Tactic: 1001 time 0.004928 [03/20/2023-12:48:17] [V] [TRT] Fastest Tactic: 1001 Time: 0.004928 [03/20/2023-12:48:17] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: SoftMax Tactic: 1001 [03/20/2023-12:48:17] [V] [TRT] [03/20/2023-12:48:17] [V] [TRT] Formats and tactics selection completed in 1.5825 seconds. [03/20/2023-12:48:17] [V] [TRT] Builder timing cache: created 4 entries, 0 hit(s) [03/20/2023-12:48:17] [V] [TRT] After reformat layers: 21 layers [03/20/2023-12:48:17] [V] [TRT] Block size 3036267520 [03/20/2023-12:48:17] [V] [TRT] Block size 748487680 [03/20/2023-12:48:17] [V] [TRT] Block size 16777216 [03/20/2023-12:48:17] [V] [TRT] Block size 512 [03/20/2023-12:48:17] [V] [TRT] Block size 512 [03/20/2023-12:48:17] [V] [TRT] Total Activation Memory: 3801533440 [03/20/2023-12:48:17] [I] [TRT] Detected 1 inputs and 1 output network tensors. 
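
A note on the "Tactic: N skipped. Scratch requested: ..., available: 16777216" messages in the timing runs above: those CudaConvolution tactics asked for more scratch memory than the 16 MiB workspace the builder was given, so the autotuner never timed them. If the target GPU can spare the memory, raising the workspace limit lets those kernels compete as well, either with trtexec's --workspace=<MiB> option or, when building programmatically, through the builder config. The following is a minimal Python sketch of the programmatic route, assuming the TensorRT 7.x/8.x Python bindings; the file names, the 1 GiB workspace value, and the example min/opt/max batch sizes are illustrative placeholders, not values taken from this log.

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.VERBOSE)

builder = trt.Builder(TRT_LOGGER)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, TRT_LOGGER)

with open("DMS.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("failed to parse the ONNX model")

config = builder.create_builder_config()
# Give the autotuner more scratch space than the 16 MiB default, so the
# tactics reported as "skipped. Scratch requested: ..." can be timed too.
config.max_workspace_size = 1 << 30  # 1 GiB; adjust to what the GPU can spare

# The batch dimension is dynamic, so one optimization profile is required.
# The 145x145x3 spatial shape matches the layer dump; the batch range here
# is only an example and should match the intended deployment.
profile = builder.create_optimization_profile()
profile.set_shape("conv2d_input",
                  (1, 145, 145, 3),    # min
                  (8, 145, 145, 3),    # opt
                  (32, 145, 145, 3))   # max
config.add_optimization_profile(profile)

engine = builder.build_engine(network, config)
with open("DMS.engine", "wb") as f:
    f.write(engine.serialize())

With a larger workspace the "Formats and tactics selection" pass may pick different kernels than the Winograd tactic chosen here, so it is worth re-checking the reported layer times after changing the limit.
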
[03/20/2023-12:48:17] [V] [TRT] StatefulPartitionedCall/sequential/conv2d/BiasAdd + StatefulPartitionedCall/sequential/conv2d/Relu (scudnn) Set Tactic Name: volta_scudnn_128x64_relu_interior_nn_v1 [03/20/2023-12:48:17] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_1/BiasAdd + StatefulPartitionedCall/sequential/conv2d_1/Relu (scudnn_winograd) Set Tactic Name: volta_scudnn_winograd_128x128_ldg1_ldg4_relu_tile148t_nt_v1 [03/20/2023-12:48:17] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_2/BiasAdd + StatefulPartitionedCall/sequential/conv2d_2/Relu (scudnn_winograd) Set Tactic Name: volta_scudnn_winograd_128x128_ldg1_ldg4_relu_tile148t_nt_v1 [03/20/2023-12:48:17] [V] [TRT] StatefulPartitionedCall/sequential/conv2d_3/BiasAdd + StatefulPartitionedCall/sequential/conv2d_3/Relu (scudnn_winograd) Set Tactic Name: volta_scudnn_winograd_128x128_ldg1_ldg4_relu_tile148t_nt_v1 [03/20/2023-12:48:17] [I] [TRT] [MemUsageChange] Init cuDNN: CPU +0, GPU +8, now: CPU 1051, GPU 614 (MB) [03/20/2023-12:48:17] [I] [TRT] [MemUsageChange] Init cuBlas: CPU +0, GPU +8, now: CPU 1051, GPU 622 (MB) [03/20/2023-12:48:17] [V] [TRT] Engine generation completed in 2.35255 seconds. [03/20/2023-12:48:17] [V] [TRT] Engine Layer Information: [03/20/2023-12:48:17] [V] [TRT] Layer: StatefulPartitionedCall/sequential/conv2d/BiasAdd__6 (Shuffle), Tactic: 0, conv2d_input[Float(145,145,3)] -> StatefulPartitionedCall/sequential/conv2d/BiasAdd__6:0[Float(3,145,145)] [03/20/2023-12:48:17] [V] [TRT] Layer: (Unnamed Layer* 18) [Shuffle] (Shuffle), Tactic: 0, (Unnamed Layer* 17) [Constant]_output[Float()] -> (Unnamed Layer* 18) [Shuffle]_output[Float(64)] [03/20/2023-12:48:17] [V] [TRT] Layer: (Unnamed Layer* 24) [Shuffle] (Shuffle), Tactic: 0, (Unnamed Layer* 23) [Constant]_output[Float()] -> (Unnamed Layer* 24) [Shuffle]_output[Float(4)] [03/20/2023-12:48:17] [V] [TRT] Layer: StatefulPartitionedCall/sequential/conv2d/BiasAdd + StatefulPartitionedCall/sequential/conv2d/Relu (scudnn), Tactic: 9091006216302412844, StatefulPartitionedCall/sequential/conv2d/BiasAdd__6:0[Float(3,145,145)] -> StatefulPartitionedCall/sequential [03/20/2023-12:48:17] [V] [TRT] /conv2d/Relu:0[Float(256,143,143)] [03/20/2023-12:48:17] [V] [TRT] Layer: StatefulPartitionedCall/sequential/max_pooling2d/MaxPool (Pooling), Tactic: -1, StatefulPartitionedCall/sequential/conv2d/Relu:0[Float(256,143,143)] -> StatefulPartitionedCall/sequential/max_pooling2d/MaxPool:0[Float(256,71,71)] [03/20/2023-12:48:17] [V] [TRT] Layer: StatefulPartitionedCall/sequential/conv2d_1/BiasAdd + StatefulPartitionedCall/sequential/conv2d_1/Relu (scudnn_winograd), Tactic: 2775507031594384867, StatefulPartitionedCall/sequential/max_pooling2d/MaxPool:0[Float(256,71,71)] -> StatefulPartition [03/20/2023-12:48:17] [V] [TRT] edCall/sequential/conv2d_1/Relu:0[Float(128,69,69)] [03/20/2023-12:48:17] [V] [TRT] Layer: StatefulPartitionedCall/sequential/max_pooling2d_1/MaxPool (Pooling), Tactic: -1, StatefulPartitionedCall/sequential/conv2d_1/Relu:0[Float(128,69,69)] -> StatefulPartitionedCall/sequential/max_pooling2d_1/MaxPool:0[Float(128,34,34)] [03/20/2023-12:48:17] [V] [TRT] Layer: StatefulPartitionedCall/sequential/conv2d_2/BiasAdd + StatefulPartitionedCall/sequential/conv2d_2/Relu (scudnn_winograd), Tactic: 2775507031594384867, StatefulPartitionedCall/sequential/max_pooling2d_1/MaxPool:0[Float(128,34,34)] -> StatefulPartiti [03/20/2023-12:48:17] [V] [TRT] onedCall/sequential/conv2d_2/Relu:0[Float(64,32,32)] [03/20/2023-12:48:17] [V] [TRT] Layer: 
StatefulPartitionedCall/sequential/max_pooling2d_2/MaxPool (PoolingTiled), Tactic: 6095105, StatefulPartitionedCall/sequential/conv2d_2/Relu:0[Float(64,32,32)] -> StatefulPartitionedCall/sequential/max_pooling2d_2/MaxPool:0[Float(64,16,16)] [03/20/2023-12:48:17] [V] [TRT] Layer: StatefulPartitionedCall/sequential/conv2d_3/BiasAdd + StatefulPartitionedCall/sequential/conv2d_3/Relu (scudnn_winograd), Tactic: 2775507031594384867, StatefulPartitionedCall/sequential/max_pooling2d_2/MaxPool:0[Float(64,16,16)] -> StatefulPartitio [03/20/2023-12:48:17] [V] [TRT] nedCall/sequential/conv2d_3/Relu:0[Float(32,14,14)] [03/20/2023-12:48:17] [V] [TRT] Layer: StatefulPartitionedCall/sequential/max_pooling2d_3/MaxPool (Pooling), Tactic: -1, StatefulPartitionedCall/sequential/conv2d_3/Relu:0[Float(32,14,14)] -> StatefulPartitionedCall/sequential/max_pooling2d_3/MaxPool:0[Float(32,7,7)] [03/20/2023-12:48:17] [V] [TRT] Layer: StatefulPartitionedCall/sequential/max_pooling2d_3/MaxPool__36 + StatefulPartitionedCall/sequential/flatten/Reshape (Shuffle), Tactic: 0, StatefulPartitionedCall/sequential/max_pooling2d_3/MaxPool:0[Float(32,7,7)] -> StatefulPartitionedCall/sequent [03/20/2023-12:48:17] [V] [TRT] ial/flatten/Reshape:0[Float(1568)] [03/20/2023-12:48:17] [V] [TRT] Layer: (Unnamed Layer* 16) [Matrix Multiply] (MatrixMultiply), Tactic: 0, StatefulPartitionedCall/sequential/flatten/Reshape:0[Float(1568)], (Unnamed Layer* 15) [Constant]_output[Float(64)] -> StatefulPartitionedCall/sequential/dense/MatMul:0[Float(64)] [03/20/2023-12:48:17] [V] [TRT] Layer: (Unnamed Layer* 19) [ElementWise] + StatefulPartitionedCall/sequential/dense/Relu (ElementWise), Tactic: 1, StatefulPartitionedCall/sequential/dense/MatMul:0[Float(64)], (Unnamed Layer* 18) [Shuffle]_output[Float(64)] -> StatefulPartitionedCall/seq [03/20/2023-12:48:17] [V] [TRT] uential/dense/Relu:0[Float(64)] [03/20/2023-12:48:17] [V] [TRT] Layer: (Unnamed Layer* 22) [Matrix Multiply] (MatrixMultiply), Tactic: 0, StatefulPartitionedCall/sequential/dense/Relu:0[Float(64)], (Unnamed Layer* 21) [Constant]_output[Float(4)] -> StatefulPartitionedCall/sequential/dense_1/MatMul:0[Float(4)] [03/20/2023-12:48:17] [V] [TRT] Layer: (Unnamed Layer* 25) [ElementWise] (ElementWise), Tactic: 1, StatefulPartitionedCall/sequential/dense_1/MatMul:0[Float(4)], (Unnamed Layer* 24) [Shuffle]_output[Float(4)] -> StatefulPartitionedCall/sequential/dense_1/BiasAdd:0[Float(4)] [03/20/2023-12:48:17] [V] [TRT] Layer: (Unnamed Layer* 34) [Softmax] (SoftMax), Tactic: 1001, StatefulPartitionedCall/sequential/dense_1/BiasAdd:0[Float(4)] -> dense_1[Float(4)] [03/20/2023-12:48:17] [I] [TRT] [MemUsageSnapshot] Builder end: CPU 1050 MB, GPU 572 MB [03/20/2023-12:48:17] [I] [TRT] Loaded engine size: 4 MB [03/20/2023-12:48:17] [I] [TRT] [MemUsageSnapshot] deserializeCudaEngine begin: CPU 1059 MB, GPU 566 MB [03/20/2023-12:48:17] [I] [TRT] [MemUsageChange] Init cuDNN: CPU +0, GPU +10, now: CPU 1059, GPU 582 (MB) [03/20/2023-12:48:17] [I] [TRT] [MemUsageChange] Init cuBlas: CPU +0, GPU +8, now: CPU 1059, GPU 590 (MB) [03/20/2023-12:48:17] [V] [TRT] Deserialize required 4907 microseconds. [03/20/2023-12:48:17] [I] [TRT] [MemUsageSnapshot] deserializeCudaEngine end: CPU 1059 MB, GPU 572 MB [03/20/2023-12:48:17] [I] Engine built in 2.70084 sec. [03/20/2023-12:48:17] [I] [TRT] [MemUsageChange] Init cuDNN: CPU +0, GPU +10, now: CPU 1054, GPU 582 (MB) [03/20/2023-12:48:17] [I] [TRT] [MemUsageChange] Init cuBlas: CPU +1, GPU +8, now: CPU 1055, GPU 590 (MB)
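
As the tail of the log shows, the engine is rebuilt from the ONNX file on every run (about 2.7 s here) and then deserialized again just for the timing pass. For deployment it is usually more convenient to serialize the engine once, for example with trtexec's --saveEngine=<file> option (and --loadEngine=<file> on later runs), and load it from application code. Below is a minimal Python inference sketch, assuming the TensorRT 7.x/8.x bindings plus pycuda, an engine file named DMS.engine, and the I/O tensor names conv2d_input / dense_1 from the layer dump above; the batch size of 5 and the random input are placeholders.

import numpy as np
import pycuda.autoinit  # creates a CUDA context on import
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)

# Deserialize an engine that was previously saved to disk.
with open("DMS.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

context = engine.create_execution_context()

inp_idx = engine.get_binding_index("conv2d_input")  # input name from the log
out_idx = engine.get_binding_index("dense_1")       # output name from the log

# The batch dimension is dynamic, so the concrete input shape must be set
# before inference; it has to lie inside the engine's optimization profile.
context.set_binding_shape(inp_idx, (5, 145, 145, 3))

h_inp = np.random.rand(5, 145, 145, 3).astype(np.float32)
h_out = np.empty(tuple(context.get_binding_shape(out_idx)), dtype=np.float32)

d_inp = cuda.mem_alloc(h_inp.nbytes)
d_out = cuda.mem_alloc(h_out.nbytes)
stream = cuda.Stream()

bindings = [0] * engine.num_bindings
bindings[inp_idx] = int(d_inp)
bindings[out_idx] = int(d_out)

cuda.memcpy_htod_async(d_inp, h_inp, stream)
context.execute_async_v2(bindings, stream.handle)
cuda.memcpy_dtoh_async(h_out, d_out, stream)
stream.synchronize()

print(h_out.shape)  # expected (5, 4): four class scores per input image

set_binding_shape has to be called before execute_async_v2 because the output shape is only known once the input batch size is fixed; the (5, 4) comment follows from the dense_1[Float(4)] output reported in the engine layer information.
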