CUDA: 10.0
cuDNN: 7.6.5
TensorRT: 7.0.11
GPU: P4
Hi, I am converting Tacotron2 to TensorRT. I successfully generated the ONNX model and the engine, but when I run the Encoder model it outputs the following error:
|||| binding 0
|||| binding_is_input True
|||| get_binding_dtype DataType.INT32
|||| get_binding_name sequences
|||| get_binding_shape (-1, -1)
|||| get_binding_vectorized_dim -1
|||| binding 1
|||| binding_is_input True
|||| get_binding_dtype DataType.INT32
|||| get_binding_name sequence_lengths
|||| get_binding_shape (-1,)
|||| get_binding_vectorized_dim -1
|||| binding 2
|||| binding_is_input False
|||| get_binding_dtype DataType.FLOAT
|||| get_binding_name memory
|||| get_binding_shape (-1, -1, 512)
|||| get_binding_vectorized_dim -1
|||| binding 3
|||| binding_is_input False
|||| get_binding_dtype DataType.INT32
|||| get_binding_name lens
|||| get_binding_shape (-1,)
|||| get_binding_vectorized_dim -1
|||| binding 4
|||| binding_is_input False
|||| get_binding_dtype DataType.FLOAT
|||| get_binding_name processed_memory
|||| get_binding_shape (-1, -1, 128)
|||| get_binding_vectorized_dim -1
|||| all_binding_shapes_specified True
|||| all_shape_inputs_specified True
[TensorRT] ERROR: …/rtSafe/cuda/cudaConvolutionRunner.cpp (362) - Cudnn Error in execute: 7 (CUDNN_STATUS_MAPPING_ERROR)
[TensorRT] ERROR: FAILED_EXECUTION: std::exception
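For context: with an explicit-batch engine like the one dumped above, every -1 in an input binding shape has to be resolved to a concrete value (via the execution context) before execution, and each device buffer has to be sized for the resolved shape; executing with unset shapes or undersized buffers can surface as low-level cuDNN errors like the one above. A minimal pure-Python sketch of the shape-resolution and buffer-sizing arithmetic (the helper names here are hypothetical, not TensorRT API):

```python
from functools import reduce

# Bytes per element for the dtypes shown in the binding dump above.
DTYPE_SIZE = {"INT32": 4, "FLOAT": 4}

def resolve_shape(binding_shape, runtime_dims):
    """Substitute a concrete runtime value for each -1 (dynamic) dim.

    binding_shape: shape reported by the engine, e.g. (-1, -1, 512)
    runtime_dims:  values to consume, one per dynamic dim, in order
    """
    it = iter(runtime_dims)
    resolved = tuple(next(it) if d == -1 else d for d in binding_shape)
    if any(d <= 0 for d in resolved):
        raise ValueError(f"unresolved or invalid dims in {resolved}")
    return resolved

def buffer_bytes(shape, dtype):
    """Byte size of a contiguous buffer of `shape` elements of `dtype`."""
    return reduce(lambda a, b: a * b, shape, 1) * DTYPE_SIZE[dtype]

# Example: batch 1, sequence length 128 (matching the opt profile below).
memory_shape = resolve_shape((-1, -1, 512), [1, 128])
print(memory_shape, buffer_bytes(memory_shape, "FLOAT"))
# -> (1, 128, 512) 262144
```

The same arithmetic applies to every binding in the dump: each output buffer must be allocated for the shapes the context reports after all input shapes are set.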
Hi,
Could you please try the "trtexec" command to test the model? The "--verbose" mode will help you debug the issue.
"trtexec" is useful for benchmarking networks, and it makes the issue faster and easier to debug.
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
https://github.com/NVIDIA/TensorRT/blob/release/6.0/samples/opensource/trtexec/README.md
Thanks
Hi,
I ran the command:
trtexec --onnx=encoder.onnx --verbose --minShapes=sequences:1x4,sequence_lengths:1 --optShapes=sequences:1x128,sequence_lengths:1 --maxShapes=sequences:4x256,sequence_lengths:4
and got this result:
[01/13/2020-17:31:39] [I] === Model Options ===
[01/13/2020-17:31:39] [I] Format: ONNX
[01/13/2020-17:31:39] [I] Model: encoder.onnx
[01/13/2020-17:31:39] [I] Output:
[01/13/2020-17:31:39] [I] === Build Options ===
[01/13/2020-17:31:39] [I] Max batch: explicit
[01/13/2020-17:31:39] [I] Workspace: 16 MB
[01/13/2020-17:31:39] [I] minTiming: 1
[01/13/2020-17:31:39] [I] avgTiming: 8
[01/13/2020-17:31:39] [I] Precision: FP32
[01/13/2020-17:31:39] [I] Calibration:
[01/13/2020-17:31:39] [I] Safe mode: Disabled
[01/13/2020-17:31:39] [I] Save engine:
[01/13/2020-17:31:39] [I] Load engine:
[01/13/2020-17:31:39] [I] Inputs format: fp32:CHW
[01/13/2020-17:31:39] [I] Outputs format: fp32:CHW
[01/13/2020-17:31:39] [I] Input build shape: sequence_lengths=1+1+4
[01/13/2020-17:31:39] [I] Input build shape: sequences=1x4+1x128+4x256
[01/13/2020-17:31:39] [I] === System Options ===
[01/13/2020-17:31:39] [I] Device: 0
[01/13/2020-17:31:39] [I] DLACore:
[01/13/2020-17:31:39] [I] Plugins:
[01/13/2020-17:31:39] [I] === Inference Options ===
[01/13/2020-17:31:39] [I] Batch: Explicit
[01/13/2020-17:31:39] [I] Iterations: 10
[01/13/2020-17:31:39] [I] Duration: 3s (+ 200ms warm up)
[01/13/2020-17:31:39] [I] Sleep time: 0ms
[01/13/2020-17:31:39] [I] Streams: 1
[01/13/2020-17:31:39] [I] ExposeDMA: Disabled
[01/13/2020-17:31:39] [I] Spin-wait: Disabled
[01/13/2020-17:31:39] [I] Multithreading: Disabled
[01/13/2020-17:31:39] [I] CUDA Graph: Disabled
[01/13/2020-17:31:39] [I] Skip inference: Disabled
[01/13/2020-17:31:39] [I] Inputs:
[01/13/2020-17:31:39] [I] === Reporting Options ===
[01/13/2020-17:31:39] [I] Verbose: Enabled
[01/13/2020-17:31:39] [I] Averages: 10 inferences
[01/13/2020-17:31:39] [I] Percentile: 99
[01/13/2020-17:31:39] [I] Dump output: Disabled
[01/13/2020-17:31:39] [I] Profile: Disabled
[01/13/2020-17:31:39] [I] Export timing to JSON file:
[01/13/2020-17:31:39] [I] Export output to JSON file:
[01/13/2020-17:31:39] [I] Export profile to JSON file:
[01/13/2020-17:31:39] [I]
[01/13/2020-17:31:39] [V] [TRT] Plugin creator registration succeeded - ::GridAnchor_TRT
[01/13/2020-17:31:39] [V] [TRT] Plugin creator registration succeeded - ::NMS_TRT
[01/13/2020-17:31:39] [V] [TRT] Plugin creator registration succeeded - ::Reorg_TRT
[01/13/2020-17:31:39] [V] [TRT] Plugin creator registration succeeded - ::Region_TRT
[01/13/2020-17:31:39] [V] [TRT] Plugin creator registration succeeded - ::Clip_TRT
[01/13/2020-17:31:39] [V] [TRT] Plugin creator registration succeeded - ::LReLU_TRT
[01/13/2020-17:31:39] [V] [TRT] Plugin creator registration succeeded - ::PriorBox_TRT
[01/13/2020-17:31:39] [V] [TRT] Plugin creator registration succeeded - ::Normalize_TRT
[01/13/2020-17:31:39] [V] [TRT] Plugin creator registration succeeded - ::RPROI_TRT
[01/13/2020-17:31:39] [V] [TRT] Plugin creator registration succeeded - ::BatchedNMS_TRT
[01/13/2020-17:31:39] [V] [TRT] Plugin creator registration succeeded - ::FlattenConcat_TRT
[01/13/2020-17:31:39] [V] [TRT] Plugin creator registration succeeded - ::CropAndResize
[01/13/2020-17:31:39] [V] [TRT] Plugin creator registration succeeded - ::DetectionLayer_TRT
[01/13/2020-17:31:39] [V] [TRT] Plugin creator registration succeeded - ::Proposal
[01/13/2020-17:31:39] [V] [TRT] Plugin creator registration succeeded - ::ProposalLayer_TRT
[01/13/2020-17:31:39] [V] [TRT] Plugin creator registration succeeded - ::PyramidROIAlign_TRT
[01/13/2020-17:31:39] [V] [TRT] Plugin creator registration succeeded - ::ResizeNearest_TRT
[01/13/2020-17:31:39] [V] [TRT] Plugin creator registration succeeded - ::Split
[01/13/2020-17:31:39] [V] [TRT] Plugin creator registration succeeded - ::SpecialSlice_TRT
[01/13/2020-17:31:39] [V] [TRT] Plugin creator registration succeeded - ::InstanceNormalization_TRT
Input filename: encoder.onnx
ONNX IR version: 0.0.4
Opset version: 10
Producer name: pytorch
Producer version: 1.2
Domain:
Model version: 0
Doc string:
[01/13/2020-17:31:40] [V] [TRT] Plugin creator registration succeeded - ONNXTRT_NAMESPACE::GridAnchor_TRT
[01/13/2020-17:31:40] [V] [TRT] Plugin creator registration succeeded - ONNXTRT_NAMESPACE::NMS_TRT
[01/13/2020-17:31:40] [V] [TRT] Plugin creator registration succeeded - ONNXTRT_NAMESPACE::Reorg_TRT
[01/13/2020-17:31:40] [V] [TRT] Plugin creator registration succeeded - ONNXTRT_NAMESPACE::Region_TRT
[01/13/2020-17:31:40] [V] [TRT] Plugin creator registration succeeded - ONNXTRT_NAMESPACE::Clip_TRT
[01/13/2020-17:31:40] [V] [TRT] Plugin creator registration succeeded - ONNXTRT_NAMESPACE::LReLU_TRT
[01/13/2020-17:31:40] [V] [TRT] Plugin creator registration succeeded - ONNXTRT_NAMESPACE::PriorBox_TRT
[01/13/2020-17:31:40] [V] [TRT] Plugin creator registration succeeded - ONNXTRT_NAMESPACE::Normalize_TRT
[01/13/2020-17:31:40] [V] [TRT] Plugin creator registration succeeded - ONNXTRT_NAMESPACE::RPROI_TRT
[01/13/2020-17:31:40] [V] [TRT] Plugin creator registration succeeded - ONNXTRT_NAMESPACE::BatchedNMS_TRT
[01/13/2020-17:31:40] [V] [TRT] Plugin creator registration succeeded - ONNXTRT_NAMESPACE::FlattenConcat_TRT
[01/13/2020-17:31:40] [V] [TRT] Plugin creator registration succeeded - ONNXTRT_NAMESPACE::CropAndResize
[01/13/2020-17:31:40] [V] [TRT] Plugin creator registration succeeded - ONNXTRT_NAMESPACE::DetectionLayer_TRT
[01/13/2020-17:31:40] [V] [TRT] Plugin creator registration succeeded - ONNXTRT_NAMESPACE::Proposal
[01/13/2020-17:31:40] [V] [TRT] Plugin creator registration succeeded - ONNXTRT_NAMESPACE::ProposalLayer_TRT
[01/13/2020-17:31:40] [V] [TRT] Plugin creator registration succeeded - ONNXTRT_NAMESPACE::PyramidROIAlign_TRT
[01/13/2020-17:31:40] [V] [TRT] Plugin creator registration succeeded - ONNXTRT_NAMESPACE::ResizeNearest_TRT
[01/13/2020-17:31:40] [V] [TRT] Plugin creator registration succeeded - ONNXTRT_NAMESPACE::Split
[01/13/2020-17:31:40] [V] [TRT] Plugin creator registration succeeded - ONNXTRT_NAMESPACE::SpecialSlice_TRT
[01/13/2020-17:31:40] [V] [TRT] Plugin creator registration succeeded - ONNXTRT_NAMESPACE::InstanceNormalization_TRT
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:203: Adding network input: sequences with dtype: int32, dimensions: (-1, -1)
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:97: Registering tensor: sequences for ONNX tensor: sequences
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:203: Adding network input: sequence_lengths with dtype: int32, dimensions: (-1)
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:97: Registering tensor: sequence_lengths for ONNX tensor: sequence_lengths
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:90: Importing initializer: 247
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:90: Importing initializer: 288
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:90: Importing initializer: 289
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:90: Importing initializer: 290
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:90: Importing initializer: 291
[01/13/2020-17:31:40] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:90: Importing initializer: 292
[01/13/2020-17:31:40] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:90: Importing initializer: 293
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:90: Importing initializer: tacotron2.embedding.weight
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:90: Importing initializer: tacotron2.encoder.convolutions.0.0.conv.bias
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:90: Importing initializer: tacotron2.encoder.convolutions.0.0.conv.weight
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:90: Importing initializer: tacotron2.encoder.convolutions.0.1.bias
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:90: Importing initializer: tacotron2.encoder.convolutions.0.1.running_mean
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:90: Importing initializer: tacotron2.encoder.convolutions.0.1.running_var
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:90: Importing initializer: tacotron2.encoder.convolutions.0.1.weight
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:90: Importing initializer: tacotron2.encoder.convolutions.1.0.conv.bias
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:90: Importing initializer: tacotron2.encoder.convolutions.1.0.conv.weight
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:90: Importing initializer: tacotron2.encoder.convolutions.1.1.bias
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:90: Importing initializer: tacotron2.encoder.convolutions.1.1.running_mean
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:90: Importing initializer: tacotron2.encoder.convolutions.1.1.running_var
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:90: Importing initializer: tacotron2.encoder.convolutions.1.1.weight
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:90: Importing initializer: tacotron2.encoder.convolutions.2.0.conv.bias
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:90: Importing initializer: tacotron2.encoder.convolutions.2.0.conv.weight
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:90: Importing initializer: tacotron2.encoder.convolutions.2.1.bias
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:90: Importing initializer: tacotron2.encoder.convolutions.2.1.running_mean
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:90: Importing initializer: tacotron2.encoder.convolutions.2.1.running_var
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:90: Importing initializer: tacotron2.encoder.convolutions.2.1.weight
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:107: Parsing node: [Gather]
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: tacotron2.embedding.weight
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: sequences
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:129: [Gather] inputs: [tacotron2.embedding.weight → (148, 512)], [sequences → (-1, -1)],
[01/13/2020-17:31:40] [V] [TRT] builtin_op_importers.cpp:942: Using Gather axis: 0
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:122: Registering layer: (Unnamed Layer* 0) [Constant] for ONNX node:
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:97: Registering tensor: 86 for ONNX tensor: 86
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:180: [Gather] outputs: [86 → (-1, -1, 512)],
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:107: Parsing node: [Transpose]
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: 86
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:129: [Transpose] inputs: [86 → (-1, -1, 512)],
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:122: Registering layer: (Unnamed Layer* 2) [Shuffle] for ONNX node:
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:97: Registering tensor: 87 for ONNX tensor: 87
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:180: [Transpose] outputs: [87 → (-1, 512, -1)],
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:107: Parsing node: [Cast]
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: 87
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:129: [Cast] inputs: [87 → (-1, 512, -1)],
[01/13/2020-17:31:40] [V] [TRT] builtin_op_importers.cpp:315: Casting to type: float32
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:122: Registering layer: (Unnamed Layer* 3) [Identity] for ONNX node:
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:97: Registering tensor: 88 for ONNX tensor: 88
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:180: [Cast] outputs: [88 → (-1, 512, -1)],
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:107: Parsing node: [Conv]
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: 88
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: tacotron2.encoder.convolutions.0.0.conv.weight
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: tacotron2.encoder.convolutions.0.0.conv.bias
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:129: [Conv] inputs: [88 → (-1, 512, -1)], [tacotron2.encoder.convolutions.0.0.conv.weight → (512, 512, 5)], [tacotron2.encoder.convolutions.0.0.conv.bias → (512)],
[01/13/2020-17:31:40] [V] [TRT] builtin_op_importers.cpp:442: Convolution input dimensions: (-1, 512, -1)
[01/13/2020-17:31:40] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[01/13/2020-17:31:40] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[01/13/2020-17:31:40] [V] [TRT] onnx2trt_utils.cpp:1411: Original shape: (_, _, ), unsqueezing to: (, _, _, )
[01/13/2020-17:31:40] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[01/13/2020-17:31:40] [V] [TRT] onnx2trt_utils.cpp:1272: Original shape: (, _, _, ), squeezing to: (, _, )
[01/13/2020-17:31:40] [V] [TRT] builtin_op_importers.cpp:524: Using kernel: (5, 1), strides: (1, 1), padding: (2, 0), dilations: (1, 1), numOutputs: 512
[01/13/2020-17:31:40] [V] [TRT] builtin_op_importers.cpp:525: Convolution output dimensions: (-1, 512, -1, -1)
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:122: Registering layer: (Unnamed Layer* 4) [Shape] for ONNX node:
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:97: Registering tensor: 89 for ONNX tensor: 89
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:180: [Conv] outputs: [89 → (-1, -1, -1)],
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:107: Parsing node: [BatchNormalization]
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: 89
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: tacotron2.encoder.convolutions.0.1.weight
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: tacotron2.encoder.convolutions.0.1.bias
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: tacotron2.encoder.convolutions.0.1.running_mean
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: tacotron2.encoder.convolutions.0.1.running_var
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:129: [BatchNormalization] inputs: [89 → (-1, -1, -1)], [tacotron2.encoder.convolutions.0.1.weight → (512)], [tacotron2.encoder.convolutions.0.1.bias → (512)], [tacotron2.encoder.convolutions.0.1.running_mean → (512)], [tacotron2.encoder.convolutions.0.1.running_var → (512)],
[01/13/2020-17:31:40] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[01/13/2020-17:31:40] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[01/13/2020-17:31:40] [V] [TRT] onnx2trt_utils.cpp:1411: Original shape: (, _, ), unsqueezing to: (, _, _, )
[01/13/2020-17:31:40] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[01/13/2020-17:31:40] [V] [TRT] onnx2trt_utils.cpp:1272: Original shape: (, _, _, ), squeezing to: (, _, )
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:122: Registering layer: (Unnamed Layer* 15) [Shape] for ONNX node:
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:97: Registering tensor: 90 for ONNX tensor: 90
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:180: [BatchNormalization] outputs: [90 → (-1, -1, -1)],
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:107: Parsing node: [Relu]
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: 90
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:129: [Relu] inputs: [90 → (-1, -1, -1)],
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:122: Registering layer: (Unnamed Layer* 26) [Activation] for ONNX node:
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:97: Registering tensor: 91 for ONNX tensor: 91
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:180: [Relu] outputs: [91 → (-1, -1, -1)],
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:107: Parsing node: [Cast]
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: 91
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:129: [Cast] inputs: [91 → (-1, -1, -1)],
[01/13/2020-17:31:40] [V] [TRT] builtin_op_importers.cpp:315: Casting to type: float32
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:122: Registering layer: (Unnamed Layer* 27) [Identity] for ONNX node:
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:97: Registering tensor: 92 for ONNX tensor: 92
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:180: [Cast] outputs: [92 → (-1, -1, -1)],
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:107: Parsing node: [Conv]
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: 92
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: tacotron2.encoder.convolutions.1.0.conv.weight
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: tacotron2.encoder.convolutions.1.0.conv.bias
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:129: [Conv] inputs: [92 → (-1, -1, -1)], [tacotron2.encoder.convolutions.1.0.conv.weight → (512, 512, 5)], [tacotron2.encoder.convolutions.1.0.conv.bias → (512)],
[01/13/2020-17:31:40] [V] [TRT] builtin_op_importers.cpp:442: Convolution input dimensions: (-1, -1, -1)
[01/13/2020-17:31:40] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[01/13/2020-17:31:40] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[01/13/2020-17:31:40] [V] [TRT] onnx2trt_utils.cpp:1411: Original shape: (, _, ), unsqueezing to: (, _, _, )
[01/13/2020-17:31:40] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[01/13/2020-17:31:40] [V] [TRT] onnx2trt_utils.cpp:1272: Original shape: (, _, _, ), squeezing to: (, _, )
[01/13/2020-17:31:40] [V] [TRT] builtin_op_importers.cpp:524: Using kernel: (5, 1), strides: (1, 1), padding: (2, 0), dilations: (1, 1), numOutputs: 512
[01/13/2020-17:31:40] [V] [TRT] builtin_op_importers.cpp:525: Convolution output dimensions: (-1, 512, -1, -1)
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:122: Registering layer: (Unnamed Layer* 28) [Shape] for ONNX node:
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:97: Registering tensor: 93 for ONNX tensor: 93
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:180: [Conv] outputs: [93 → (-1, -1, -1)],
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:107: Parsing node: [BatchNormalization]
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: 93
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: tacotron2.encoder.convolutions.1.1.weight
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: tacotron2.encoder.convolutions.1.1.bias
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: tacotron2.encoder.convolutions.1.1.running_mean
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: tacotron2.encoder.convolutions.1.1.running_var
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:129: [BatchNormalization] inputs: [93 → (-1, -1, -1)], [tacotron2.encoder.convolutions.1.1.weight → (512)], [tacotron2.encoder.convolutions.1.1.bias → (512)], [tacotron2.encoder.convolutions.1.1.running_mean → (512)], [tacotron2.encoder.convolutions.1.1.running_var → (512)],
[01/13/2020-17:31:40] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[01/13/2020-17:31:40] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[01/13/2020-17:31:40] [V] [TRT] onnx2trt_utils.cpp:1411: Original shape: (, _, ), unsqueezing to: (, _, _, )
[01/13/2020-17:31:40] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[01/13/2020-17:31:40] [V] [TRT] onnx2trt_utils.cpp:1272: Original shape: (, _, _, ), squeezing to: (, _, )
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:122: Registering layer: (Unnamed Layer* 39) [Shape] for ONNX node:
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:97: Registering tensor: 94 for ONNX tensor: 94
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:180: [BatchNormalization] outputs: [94 → (-1, -1, -1)],
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:107: Parsing node: [Relu]
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: 94
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:129: [Relu] inputs: [94 → (-1, -1, -1)],
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:122: Registering layer: (Unnamed Layer* 50) [Activation] for ONNX node:
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:97: Registering tensor: 95 for ONNX tensor: 95
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:180: [Relu] outputs: [95 → (-1, -1, -1)],
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:107: Parsing node: [Cast]
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: 95
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:129: [Cast] inputs: [95 → (-1, -1, -1)],
[01/13/2020-17:31:40] [V] [TRT] builtin_op_importers.cpp:315: Casting to type: float32
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:122: Registering layer: (Unnamed Layer* 51) [Identity] for ONNX node:
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:97: Registering tensor: 96 for ONNX tensor: 96
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:180: [Cast] outputs: [96 → (-1, -1, -1)],
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:107: Parsing node: [Conv]
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: 96
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: tacotron2.encoder.convolutions.2.0.conv.weight
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: tacotron2.encoder.convolutions.2.0.conv.bias
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:129: [Conv] inputs: [96 → (-1, -1, -1)], [tacotron2.encoder.convolutions.2.0.conv.weight → (512, 512, 5)], [tacotron2.encoder.convolutions.2.0.conv.bias → (512)],
[01/13/2020-17:31:40] [V] [TRT] builtin_op_importers.cpp:442: Convolution input dimensions: (-1, -1, -1)
[01/13/2020-17:31:40] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[01/13/2020-17:31:40] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[01/13/2020-17:31:40] [V] [TRT] onnx2trt_utils.cpp:1411: Original shape: (, _, ), unsqueezing to: (, _, _, )
[01/13/2020-17:31:40] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[01/13/2020-17:31:40] [V] [TRT] onnx2trt_utils.cpp:1272: Original shape: (, _, _, ), squeezing to: (, _, )
[01/13/2020-17:31:40] [V] [TRT] builtin_op_importers.cpp:524: Using kernel: (5, 1), strides: (1, 1), padding: (2, 0), dilations: (1, 1), numOutputs: 512
[01/13/2020-17:31:40] [V] [TRT] builtin_op_importers.cpp:525: Convolution output dimensions: (-1, 512, -1, -1)
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:122: Registering layer: (Unnamed Layer* 52) [Shape] for ONNX node:
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:97: Registering tensor: 97 for ONNX tensor: 97
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:180: [Conv] outputs: [97 → (-1, -1, -1)],
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:107: Parsing node: [BatchNormalization]
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: 97
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: tacotron2.encoder.convolutions.2.1.weight
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: tacotron2.encoder.convolutions.2.1.bias
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: tacotron2.encoder.convolutions.2.1.running_mean
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: tacotron2.encoder.convolutions.2.1.running_var
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:129: [BatchNormalization] inputs: [97 → (-1, -1, -1)], [tacotron2.encoder.convolutions.2.1.weight → (512)], [tacotron2.encoder.convolutions.2.1.bias → (512)], [tacotron2.encoder.convolutions.2.1.running_mean → (512)], [tacotron2.encoder.convolutions.2.1.running_var → (512)],
[01/13/2020-17:31:40] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[01/13/2020-17:31:40] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[01/13/2020-17:31:40] [V] [TRT] onnx2trt_utils.cpp:1411: Original shape: (, _, ), unsqueezing to: (, _, _, )
[01/13/2020-17:31:40] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[01/13/2020-17:31:40] [V] [TRT] onnx2trt_utils.cpp:1272: Original shape: (, _, _, ), squeezing to: (, _, )
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:122: Registering layer: (Unnamed Layer* 63) [Shape] for ONNX node:
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:97: Registering tensor: 98 for ONNX tensor: 98
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:180: [BatchNormalization] outputs: [98 → (-1, -1, -1)],
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:107: Parsing node: [Relu]
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: 98
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:129: [Relu] inputs: [98 → (-1, -1, -1)],
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:122: Registering layer: (Unnamed Layer* 74) [Activation] for ONNX node:
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:97: Registering tensor: 99 for ONNX tensor: 99
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:180: [Relu] outputs: [99 → (-1, -1, -1)],
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:107: Parsing node: [Transpose]
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: 99
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:129: [Transpose] inputs: [99 → (-1, -1, -1)],
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:122: Registering layer: (Unnamed Layer* 75) [Shuffle] for ONNX node:
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:97: Registering tensor: 101 for ONNX tensor: 101
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:180: [Transpose] outputs: [101 → (-1, -1, -1)],
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:107: Parsing node: [Shape]
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: 101
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:129: [Shape] inputs: [101 → (-1, -1, -1)],
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:122: Registering layer: (Unnamed Layer* 76) [Shape] for ONNX node:
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:97: Registering tensor: 218 for ONNX tensor: 218
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:180: [Shape] outputs: [218 → (3)],
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:107: Parsing node: [Constant]
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:129: [Constant] inputs:
[01/13/2020-17:31:40] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:180: [Constant] outputs: [219 → ()],
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:107: Parsing node: [Gather]
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: 218
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: 219
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:129: [Gather] inputs: [218 → (3)], [219 → ()],
[01/13/2020-17:31:40] [V] [TRT] builtin_op_importers.cpp:942: Using Gather axis: 0
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:122: Registering layer: (Unnamed Layer* 77) [Constant] for ONNX node:
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:97: Registering tensor: 220 for ONNX tensor: 220
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:180: [Gather] outputs: [220 → ()],
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:107: Parsing node: [Unsqueeze]
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: 220
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:129: [Unsqueeze] inputs: [220 → ()],
[01/13/2020-17:31:40] [V] [TRT] onnx2trt_utils.cpp:1411: Original shape: (), unsqueezing to: (1)
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:122: Registering layer: (Unnamed Layer* 79) [Shuffle] for ONNX node:
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:97: Registering tensor: 221 for ONNX tensor: 221
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:180: [Unsqueeze] outputs: [221 → (1)],
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:107: Parsing node: [Constant]
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:129: [Constant] inputs:
[01/13/2020-17:31:40] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:180: [Constant] outputs: [222 → (1)],
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:107: Parsing node: [Concat]
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: 291
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: 221
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: 222
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:129: [Concat] inputs: [291 → (1)], [221 → (1)], [222 → (1)],
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:122: Registering layer: (Unnamed Layer* 80) [Constant] for ONNX node:
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:97: Registering tensor: 225 for ONNX tensor: 225
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:180: [Concat] outputs: [225 → (3)],
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:107: Parsing node: [ConstantOfShape]
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: 225
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:129: [ConstantOfShape] inputs: [225 → (3)],
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:122: Registering layer: (Unnamed Layer* 83) [Constant] for ONNX node:
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:97: Registering tensor: 226 for ONNX tensor: 226
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:180: [ConstantOfShape] outputs: [226 → (-1, -1, -1)],
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:107: Parsing node: [Shape]
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: 101
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:129: [Shape] inputs: [101 → (-1, -1, -1)],
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:122: Registering layer: (Unnamed Layer* 87) [Shape] for ONNX node:
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:97: Registering tensor: 227 for ONNX tensor: 227
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:180: [Shape] outputs: [227 → (3)],
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:107: Parsing node: [Constant]
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:129: [Constant] inputs:
[01/13/2020-17:31:40] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:180: [Constant] outputs: [228 → ()],
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:107: Parsing node: [Gather]
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: 227
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: 228
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:129: [Gather] inputs: [227 → (3)], [228 → ()],
[01/13/2020-17:31:40] [V] [TRT] builtin_op_importers.cpp:942: Using Gather axis: 0
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:122: Registering layer: (Unnamed Layer* 88) [Constant] for ONNX node:
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:97: Registering tensor: 229 for ONNX tensor: 229
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:180: [Gather] outputs: [229 → ()],
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:107: Parsing node: [Unsqueeze]
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: 229
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:129: [Unsqueeze] inputs: [229 → ()],
[01/13/2020-17:31:40] [V] [TRT] onnx2trt_utils.cpp:1411: Original shape: (), unsqueezing to: (1)
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:122: Registering layer: (Unnamed Layer* 90) [Shuffle] for ONNX node:
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:97: Registering tensor: 230 for ONNX tensor: 230
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:180: [Unsqueeze] outputs: [230 → (1)],
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:107: Parsing node: [Constant]
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:129: [Constant] inputs:
[01/13/2020-17:31:40] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:180: [Constant] outputs: [231 → (1)],
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:107: Parsing node: [Concat]
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: 292
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: 230
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: 231
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:129: [Concat] inputs: [292 → (1)], [230 → (1)], [231 → (1)],
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:122: Registering layer: (Unnamed Layer* 91) [Constant] for ONNX node:
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:97: Registering tensor: 234 for ONNX tensor: 234
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:180: [Concat] outputs: [234 → (3)],
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:107: Parsing node: [ConstantOfShape]
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: 234
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:129: [ConstantOfShape] inputs: [234 → (3)],
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:122: Registering layer: (Unnamed Layer* 94) [Constant] for ONNX node:
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:97: Registering tensor: 235 for ONNX tensor: 235
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:180: [ConstantOfShape] outputs: [235 → (-1, -1, -1)],
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:107: Parsing node: [LSTM]
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: 101
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: 288
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: 289
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: 290
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: 247
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: 226
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: 235
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:129: [LSTM] inputs: [101 → (-1, -1, -1)], [288 → (2, 1024, 512)], [289 → (2, 1024, 256)], [290 → (2, 2048)], [247 → (1)], [226 → (-1, -1, -1)], [235 → (-1, -1, -1)],
[01/13/2020-17:31:40] [V] [TRT] builtin_op_importers.cpp:1757: Bias shape is: (2, 2048)
[01/13/2020-17:31:40] [V] [TRT] builtin_op_importers.cpp:1761: Reshaping bias to: (2, 2, 1024)
[01/13/2020-17:31:40] [V] [TRT] builtin_op_importers.cpp:1766: After reduction, bias shape is: (2, 1, 1024)
[01/13/2020-17:31:40] [V] [TRT] builtin_op_importers.cpp:1775: numDirectionsTensor shape: (1)
[01/13/2020-17:31:40] [V] [TRT] builtin_op_importers.cpp:1779: hiddenSizeTensor shape: (1)
[01/13/2020-17:31:40] [V] [TRT] builtin_op_importers.cpp:1781: batchSizeTensor shape: (1)
[01/13/2020-17:31:40] [V] [TRT] builtin_op_importers.cpp:1788: Gate output rank (equal to initial hidden/cell state rank): (3)
[01/13/2020-17:31:40] [V] [TRT] builtin_op_importers.cpp:1802: Initial hidden state shape: (-1, -1, -1)
[01/13/2020-17:31:40] [V] [TRT] builtin_op_importers.cpp:1805: Initial cell state shape: (-1, -1, -1)
[01/13/2020-17:31:40] [V] [TRT] builtin_op_importers.cpp:1807: Entering Loop
[01/13/2020-17:31:40] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[01/13/2020-17:31:40] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[01/13/2020-17:31:40] [V] [TRT] onnx2trt_utils.cpp:1411: Original shape: (, ), unsqueezing to: (, _, )
[01/13/2020-17:31:40] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[01/13/2020-17:31:40] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[01/13/2020-17:31:40] [V] [TRT] onnx2trt_utils.cpp:1411: Original shape: (, ), unsqueezing to: (, _, _)
[01/13/2020-17:31:40] [V] [TRT] builtin_op_importers.cpp:1840: Input shape: (-1, -1, -1)
[01/13/2020-17:31:40] [V] [TRT] builtin_op_importers.cpp:1844: Hidden state shape: (-1, -1, -1)
[01/13/2020-17:31:40] [V] [TRT] builtin_op_importers.cpp:1848: Cell state shape: (-1, -1, -1)
[01/13/2020-17:31:40] [V] [TRT] builtin_op_importers.cpp:1856: X(t) * W^T → (-1, -1, 1024)
[01/13/2020-17:31:40] [V] [TRT] builtin_op_importers.cpp:1862: H(t-1) * R^T → (-1, -1, 1024)
[01/13/2020-17:31:40] [V] [TRT] builtin_op_importers.cpp:1869: intermediate(t) → (2, -1, 1024)
[01/13/2020-17:31:40] [V] [TRT] builtin_op_importers.cpp:1940: c(t) → (-1, -1, -1)
[01/13/2020-17:31:40] [V] [TRT] builtin_op_importers.cpp:1949: C(t) → (-1, -1, -1)
[01/13/2020-17:31:40] [V] [TRT] builtin_op_importers.cpp:1972: H(t) → (-1, -1, -1)
[01/13/2020-17:31:40] [V] [TRT] builtin_op_importers.cpp:1128: Concatenated output shape: (-1, -1, -1)
[01/13/2020-17:31:40] [E] [TRT] (Unnamed Layer* 158) [Slice]: slice size must be positive, size = [0,0,0]
[01/13/2020-17:31:40] [V] [TRT] builtin_op_importers.cpp:1131: Forward pass shape: ()
[01/13/2020-17:31:40] [E] [TRT] (Unnamed Layer* 159) [Slice]: slice size must be positive, size = [0,0,0]
[01/13/2020-17:31:40] [V] [TRT] builtin_op_importers.cpp:1137: Reverse pass shape: ()
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:122: Registering layer: (Unnamed Layer* 98) [Constant] for ONNX node:
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:97: Registering tensor: 236 for ONNX tensor: 236
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:97: Registering tensor: 237 for ONNX tensor: 237
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:97: Registering tensor: 238 for ONNX tensor: 238
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:180: [LSTM] outputs: [236 → (-1, -1, -1, -1)], [237 → (-1, -1, -1)], [238 → (-1, -1, -1)],
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:107: Parsing node: [Transpose]
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: 236
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:129: [Transpose] inputs: [236 → (-1, -1, -1, -1)],
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:122: Registering layer: (Unnamed Layer* 165) [Shuffle] for ONNX node:
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:97: Registering tensor: 239 for ONNX tensor: 239
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:180: [Transpose] outputs: [239 → (-1, -1, -1, -1)],
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:107: Parsing node: [Constant]
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:129: [Constant] inputs:
[01/13/2020-17:31:40] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:180: [Constant] outputs: [240 → (3)],
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:107: Parsing node: [Reshape]
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: 239
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: 240
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:129: [Reshape] inputs: [239 → (-1, -1, -1, -1)], [240 → (3)],
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:122: Registering layer: (Unnamed Layer* 166) [Shuffle] for ONNX node:
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:97: Registering tensor: 241 for ONNX tensor: 241
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:180: [Reshape] outputs: [241 → (-1, -1, -1)],
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:107: Parsing node: [Transpose]
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: 241
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:129: [Transpose] inputs: [241 → (-1, -1, -1)],
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:122: Registering layer: (Unnamed Layer* 167) [Shuffle] for ONNX node:
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:97: Registering tensor: memory_1 for ONNX tensor: memory
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:180: [Transpose] outputs: [memory → (-1, -1, -1)],
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:107: Parsing node: [Constant]
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:129: [Constant] inputs:
[01/13/2020-17:31:40] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:180: [Constant] outputs: [243 → ()],
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:107: Parsing node: [Mul]
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: sequence_lengths
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: 243
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:129: [Mul] inputs: [sequence_lengths → (-1)], [243 → ()],
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:122: Registering layer: (Unnamed Layer* 168) [Constant] for ONNX node:
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:97: Registering tensor: lens_1 for ONNX tensor: lens
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:180: [Mul] outputs: [lens → (-1)],
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:107: Parsing node: [MatMul]
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: memory
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:123: Searching for input: 293
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:129: [MatMul] inputs: [memory → (-1, -1, -1)], [293 → (512, 128)],
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:122: Registering layer: (Unnamed Layer* 171) [Constant] for ONNX node:
[01/13/2020-17:31:40] [V] [TRT] ImporterContext.hpp:97: Registering tensor: processed_memory_1 for ONNX tensor: processed_memory
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:180: [MatMul] outputs: [processed_memory → (-1, -1, 128)],
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:494: Marking memory_1 as output: memory
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:494: Marking processed_memory_1 as output: processed_memory
[01/13/2020-17:31:40] [V] [TRT] ModelImporter.cpp:494: Marking lens_1 as output: lens
[01/13/2020-17:31:40] [W] [TRT] Calling isShapeTensor before the entire network is constructed may result in an inaccurate result.
[01/13/2020-17:31:40] [W] [TRT] Calling isShapeTensor before the entire network is constructed may result in an inaccurate result.
----- Parsing of ONNX model encoder.onnx is Done ----
[01/13/2020-17:31:40] [V] [TRT] Applying generic optimizations to the graph for inference.
[01/13/2020-17:31:40] [V] [TRT] Original: 85 layers
[01/13/2020-17:31:40] [V] [TRT] After dead-layer removal: 85 layers
[01/13/2020-17:31:40] [V] [TRT] After Myelin optimization: 57 layers
[01/13/2020-17:31:40] [V] [TRT] After scale fusion: 57 layers
[01/13/2020-17:31:40] [V] [TRT] Fusing (Unnamed Layer* 83) [Constant] with (Unnamed Layer* 84) [Shuffle]
[01/13/2020-17:31:40] [V] [TRT] Fusing (Unnamed Layer* 94) [Constant] with (Unnamed Layer* 95) [Shuffle]
[01/13/2020-17:31:40] [V] [TRT] Fusing (Unnamed Layer* 100) [Constant] with (Unnamed Layer* 101) [Shuffle]
[01/13/2020-17:31:40] [V] [TRT] Fusing (Unnamed Layer* 14) [Shuffle] with (Unnamed Layer* 20) [Shuffle]
[01/13/2020-17:31:40] [V] [TRT] Removing (Unnamed Layer* 14) [Shuffle] + (Unnamed Layer* 20) [Shuffle]
[01/13/2020-17:31:40] [V] [TRT] Fusing (Unnamed Layer* 38) [Shuffle] with (Unnamed Layer* 44) [Shuffle]
[01/13/2020-17:31:40] [V] [TRT] Removing (Unnamed Layer* 38) [Shuffle] + (Unnamed Layer* 44) [Shuffle]
[01/13/2020-17:31:40] [V] [TRT] Fusing (Unnamed Layer* 62) [Shuffle] with (Unnamed Layer* 68) [Shuffle]
[01/13/2020-17:31:40] [V] [TRT] Removing (Unnamed Layer* 62) [Shuffle] + (Unnamed Layer* 68) [Shuffle]
[01/13/2020-17:31:40] [V] [TRT] Fusing (Unnamed Layer* 168) [Constant] with (Unnamed Layer* 169) [Shuffle]
[01/13/2020-17:31:40] [V] [TRT] Fusing (Unnamed Layer* 171) [Constant] with (Unnamed Layer* 172) [Shuffle]
[01/13/2020-17:31:40] [V] [TRT] Fusing (Unnamed Layer* 165) [Shuffle] with (Unnamed Layer* 166) [Shuffle]
[01/13/2020-17:31:40] [V] [TRT] Fusing (Unnamed Layer* 165) [Shuffle] + (Unnamed Layer* 166) [Shuffle] with (Unnamed Layer* 167) [Shuffle]
[01/13/2020-17:31:40] [V] [TRT] Fusing (Unnamed Layer* 10) [Convolution] with (Unnamed Layer* 21) [Scale]
[01/13/2020-17:31:40] [V] [TRT] Fusing (Unnamed Layer* 34) [Convolution] with (Unnamed Layer* 45) [Scale]
[01/13/2020-17:31:40] [V] [TRT] Fusing (Unnamed Layer* 58) [Convolution] with (Unnamed Layer* 69) [Scale]
[01/13/2020-17:31:40] [V] [TRT] After vertical fusions: 41 layers
[01/13/2020-17:31:40] [V] [TRT] After final dead-layer removal: 39 layers
[01/13/2020-17:31:40] [V] [TRT] After tensor merging: 39 layers
[01/13/2020-17:31:40] [V] [TRT] Eliminating concatenation (Unnamed Layer* 162) [Concatenation]
[01/13/2020-17:31:40] [V] [TRT] Generating copy for (Unnamed Layer* 160) [LoopOutput]_output to 236
[01/13/2020-17:31:40] [V] [TRT] Generating copy for (Unnamed Layer* 161) [LoopOutput]_output to 236
[01/13/2020-17:31:40] [V] [TRT] After concat removal: 40 layers
[01/13/2020-17:31:40] [V] [TRT] Graph construction and optimization completed in 0.0116339 seconds.
[01/13/2020-17:31:41] [V] [TRT] Constructing optimization profile number 0 out of 1
*************** Autotuning format combination: → Int32(1) ***************
[01/13/2020-17:31:41] [V] [TRT] --------------- Timing Runner: (Unnamed Layer* 161) [LoopOutput][HostToDeviceLogicalLen] (ShapeHostToDevice)
[01/13/2020-17:31:41] [V] [TRT] Tactic: 0 is the only option, timing skipped
[01/13/2020-17:31:41] [V] [TRT] Fastest Tactic: 0 Time: 0
[01/13/2020-17:31:41] [V] [TRT] *************** Autotuning format combination: Int32(1) → Int32() ***************
[01/13/2020-17:31:41] [V] [TRT] --------------- Timing Runner: (Unnamed Layer* 161) [LoopOutput][ShuffleLogicalLen] (Shuffle)
[01/13/2020-17:31:41] [V] [TRT] Tactic: 0 is the only option, timing skipped
[01/13/2020-17:31:41] [V] [TRT] Fastest Tactic: 0 Time: 0
[01/13/2020-17:31:41] [V] [TRT] *************** Autotuning format combination: → Int32(1) ***************
[01/13/2020-17:31:41] [V] [TRT] --------------- Timing Runner: (Unnamed Layer* 114) [Iterator][HostToDeviceLogicalLen] (ShapeHostToDevice)
[01/13/2020-17:31:41] [V] [TRT] Tactic: 0 is the only option, timing skipped
[01/13/2020-17:31:41] [V] [TRT] Fastest Tactic: 0 Time: 0
[01/13/2020-17:31:41] [V] [TRT] *************** Autotuning format combination: Int32(1) → Int32() ***************
[01/13/2020-17:31:41] [V] [TRT] --------------- Timing Runner: (Unnamed Layer* 114) [Iterator][ShuffleLogicalLen] (Shuffle)
[01/13/2020-17:31:41] [V] [TRT] Tactic: 0 is the only option, timing skipped
[01/13/2020-17:31:41] [V] [TRT] Fastest Tactic: 0 Time: 0
[01/13/2020-17:31:41] [V] [TRT] *************** Autotuning format combination: → Float(1,512) ***************
[01/13/2020-17:31:41] [V] [TRT] *************** Autotuning format combination: → Float(1,1,1) ***************
[01/13/2020-17:31:41] [V] [TRT] *************** Autotuning format combination: → Float(1,1,1) ***************
[01/13/2020-17:31:41] [V] [TRT] *************** Autotuning format combination: → Float(1,1024,2048) ***************
[01/13/2020-17:31:41] [V] [TRT] *************** Autotuning format combination: → Int32() ***************
[01/13/2020-17:31:41] [V] [TRT] *************** Autotuning format combination: Float(1,512), Int32(1,(# 1 (SHAPE sequences))) → Float(1,512,(* 512 (# 1 (SHAPE sequences)))) ***************
[01/13/2020-17:31:41] [V] [TRT] --------------- Timing Runner: (Unnamed Layer* 1) [Gather] (Gather)
[01/13/2020-17:31:41] [V] [TRT] Tactic: 1 time 0.018432
[01/13/2020-17:31:41] [V] [TRT] Tactic: 2 time 0.028672
[01/13/2020-17:31:41] [V] [TRT] Tactic: 3 time 0.074752
[01/13/2020-17:31:41] [V] [TRT] Tactic: 4 time 0.180224
[01/13/2020-17:31:41] [V] [TRT] Tactic: 6 time 0.195584
[01/13/2020-17:31:41] [V] [TRT] Fastest Tactic: 1 Time: 0.018432
[01/13/2020-17:31:41] [V] [TRT] *************** Autotuning format combination: Float(1,512,(* 512 (# 1 (SHAPE sequences)))) → Float(1,(# 1 (SHAPE sequences)),(* 512 (# 1 (SHAPE sequences)))) ***************
[01/13/2020-17:31:41] [V] [TRT] --------------- Timing Runner: (Unnamed Layer* 2) [Shuffle] (Shuffle)
[01/13/2020-17:31:41] [V] [TRT] Tactic: 0 is the only option, timing skipped
[01/13/2020-17:31:41] [V] [TRT] Fastest Tactic: 0 Time: 0
[01/13/2020-17:31:41] [V] [TRT] *************** Autotuning format combination: Float(1,1024,2048) → Float(1,1024,1024) ***************
[01/13/2020-17:31:41] [V] [TRT] --------------- Timing Runner: (Unnamed Layer* 102) [Reduce] (Reduce)
[01/13/2020-17:31:41] [V] [TRT] Tactic: 6 is the only option, timing skipped
[01/13/2020-17:31:41] [V] [TRT] Fastest Tactic: 6 Time: 0
[01/13/2020-17:31:41] [V] [TRT] *************** Autotuning format combination: Float(1,(# 1 (SHAPE sequences)),(* 512 (# 1 (SHAPE sequences)))) → Float(1,(# 1 (SHAPE sequences)),(* 512 (# 1 (SHAPE sequences)))) ***************
[01/13/2020-17:31:41] [V] [TRT] --------------- Timing Runner: (Unnamed Layer* 3) [Identity] (Reformat)
[01/13/2020-17:31:41] [V] [TRT] Tactic: 0 time 0.008192
[01/13/2020-17:31:41] [V] [TRT] Fastest Tactic: 0 Time: 0.008192
[01/13/2020-17:31:41] [V] [TRT] --------------- Timing Runner: (Unnamed Layer* 3) [Identity] (Cast)
[01/13/2020-17:31:41] [V] [TRT] Cast has no valid tactics for this config, skipping
[01/13/2020-17:31:41] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[01/13/2020-17:31:41] [V] [TRT]
[01/13/2020-17:31:41] [V] [TRT] *************** Autotuning format combination: Float(1,(# 1 (SHAPE sequences)),(* 512 (# 1 (SHAPE sequences)))) → Float(1,1,(# 1 (SHAPE sequences)),(* 512 (# 1 (SHAPE sequences)))) ***************
[01/13/2020-17:31:41] [V] [TRT] --------------- Timing Runner: (Unnamed Layer* 9) [Shuffle] (Shuffle)
[01/13/2020-17:31:41] [V] [TRT] Tactic: 0 is the only option, timing skipped
[01/13/2020-17:31:41] [V] [TRT] Fastest Tactic: 0 Time: 0
[01/13/2020-17:31:41] [V] [TRT] *************** Autotuning format combination: Float(1,1,(# 1 (SHAPE sequences)),(* 512 (# 1 (SHAPE sequences)))) → Float(1,1,(# 1 (SHAPE sequences)),(* 512 (# 1 (SHAPE sequences)))) ***************
[01/13/2020-17:31:41] [V] [TRT] --------------- Timing Runner: (Unnamed Layer* 10) [Convolution] + (Unnamed Layer* 21) [Scale] (LegacySASSConvolution)
[01/13/2020-17:31:41] [V] [TRT] Tactic: 0 time 0.5376
[01/13/2020-17:31:41] [V] [TRT] Fastest Tactic: 0 Time: 0.5376
[01/13/2020-17:31:41] [V] [TRT] --------------- Timing Runner: (Unnamed Layer* 10) [Convolution] + (Unnamed Layer* 21) [Scale] (FusedConvActConvolution)
[01/13/2020-17:31:41] [V] [TRT] FusedConvActConvolution has no valid tactics for this config, skipping
[01/13/2020-17:31:41] [V] [TRT] --------------- Timing Runner: (Unnamed Layer* 10) [Convolution] + (Unnamed Layer* 21) [Scale] (CaskConvolution)
[01/13/2020-17:31:41] [V] [TRT] CaskConvolution has no valid tactics for this config, skipping
[01/13/2020-17:31:41] [V] [TRT] --------------- Timing Runner: (Unnamed Layer* 10) [Convolution] + (Unnamed Layer* 21) [Scale] (CudaConvolution)
[01/13/2020-17:31:41] [V] [TRT] Tactic: 0 time 0.621568
[01/13/2020-17:31:41] [V] [TRT] Tactic: 1 time 0.485344
[01/13/2020-17:31:41] [V] [TRT] Tactic: 2 time 0.384
[01/13/2020-17:31:41] [V] [TRT] Tactic: 5 skipped. Scratch requested: 36766720, available: 16777216
[01/13/2020-17:31:41] [I] [TRT] Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
[01/13/2020-17:31:41] [V] [TRT] Fastest Tactic: 2 Time: 0.384
[01/13/2020-17:31:41] [V] [TRT] --------------- Timing Runner: (Unnamed Layer* 10) [Convolution] + (Unnamed Layer* 21) [Scale] (CudaDepthwiseConvolution)
[01/13/2020-17:31:41] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping
[01/13/2020-17:31:41] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CudaConvolution Tactic: 2
[01/13/2020-17:31:41] [V] [TRT]
[01/13/2020-17:31:41] [V] [TRT] *************** Autotuning format combination: Float(1,1,(# 1 (SHAPE sequences)),(* 512 (# 1 (SHAPE sequences)))) → Float(1,(# 1 (SHAPE sequences)),(* 512 (# 1 (SHAPE sequences)))) ***************
[01/13/2020-17:31:41] [V] [TRT] --------------- Timing Runner: (Unnamed Layer* 25) [Shuffle] (Shuffle)
[01/13/2020-17:31:41] [V] [TRT] Tactic: 0 is the only option, timing skipped
[01/13/2020-17:31:41] [V] [TRT] Fastest Tactic: 0 Time: 0
[01/13/2020-17:31:41] [V] [TRT] *************** Autotuning format combination: Float(1,(# 1 (SHAPE sequences)),(* 512 (# 1 (SHAPE sequences)))) → Float(1,(# 1 (SHAPE sequences)),(* 512 (# 1 (SHAPE sequences)))) ***************
[01/13/2020-17:31:41] [V] [TRT] --------------- Timing Runner: (Unnamed Layer* 26) [Activation] (Activation)
[01/13/2020-17:31:41] [V] [TRT] Tactic: 0 is the only option, timing skipped
[01/13/2020-17:31:41] [V] [TRT] Fastest Tactic: 0 Time: 0
[01/13/2020-17:31:41] [V] [TRT] *************** Autotuning format combination: Float(1,(# 1 (SHAPE sequences)),(* 512 (# 1 (SHAPE sequences)))) → Float(1,(# 1 (SHAPE sequences)),(* 512 (# 1 (SHAPE sequences)))) ***************
[01/13/2020-17:31:41] [V] [TRT] --------------- Timing Runner: (Unnamed Layer* 27) [Identity] (Reformat)
[01/13/2020-17:31:41] [V] [TRT] Tactic: 0 time 0.008192
[01/13/2020-17:31:41] [V] [TRT] Fastest Tactic: 0 Time: 0.008192
[01/13/2020-17:31:41] [V] [TRT] --------------- Timing Runner: (Unnamed Layer* 27) [Identity] (Cast)
[01/13/2020-17:31:41] [V] [TRT] Cast has no valid tactics for this config, skipping
[01/13/2020-17:31:41] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[01/13/2020-17:31:41] [V] [TRT]
[01/13/2020-17:31:41] [V] [TRT] *************** Autotuning format combination: Float(1,(# 1 (SHAPE sequences)),(* 512 (# 1 (SHAPE sequences)))) → Float(1,1,(# 1 (SHAPE sequences)),(* 512 (# 1 (SHAPE sequences)))) ***************
[01/13/2020-17:31:41] [V] [TRT] --------------- Timing Runner: (Unnamed Layer* 33) [Shuffle] (Shuffle)
[01/13/2020-17:31:41] [V] [TRT] Tactic: 0 is the only option, timing skipped
[01/13/2020-17:31:41] [V] [TRT] Fastest Tactic: 0 Time: 0
[01/13/2020-17:31:41] [V] [TRT] *************** Autotuning format combination: Float(1,1,(# 1 (SHAPE sequences)),(* 512 (# 1 (SHAPE sequences)))) → Float(1,1,(# 1 (SHAPE sequences)),(* 512 (# 1 (SHAPE sequences)))) ***************
[01/13/2020-17:31:41] [V] [TRT] --------------- Timing Runner: (Unnamed Layer* 34) [Convolution] + (Unnamed Layer* 45) [Scale] (LegacySASSConvolution)
[01/13/2020-17:31:41] [V] [TRT] Tactic: 0 time 0.536576
[01/13/2020-17:31:41] [V] [TRT] Fastest Tactic: 0 Time: 0.536576
[01/13/2020-17:31:41] [V] [TRT] --------------- Timing Runner: (Unnamed Layer* 34) [Convolution] + (Unnamed Layer* 45) [Scale] (FusedConvActConvolution)
[01/13/2020-17:31:41] [V] [TRT] FusedConvActConvolution has no valid tactics for this config, skipping
[01/13/2020-17:31:41] [V] [TRT] --------------- Timing Runner: (Unnamed Layer* 34) [Convolution] + (Unnamed Layer* 45) [Scale] (CaskConvolution)
[01/13/2020-17:31:41] [V] [TRT] CaskConvolution has no valid tactics for this config, skipping
[01/13/2020-17:31:41] [V] [TRT] --------------- Timing Runner: (Unnamed Layer* 34) [Convolution] + (Unnamed Layer* 45) [Scale] (CudaConvolution)
[01/13/2020-17:31:41] [V] [TRT] Tactic: 0 time 0.621568
[01/13/2020-17:31:41] [V] [TRT] Tactic: 1 time 0.483328
[01/13/2020-17:31:41] [V] [TRT] Tactic: 2 time 0.385024
[01/13/2020-17:31:41] [V] [TRT] Tactic: 5 skipped. Scratch requested: 36766720, available: 16777216
[01/13/2020-17:31:41] [V] [TRT] Fastest Tactic: 2 Time: 0.385024
[01/13/2020-17:31:41] [V] [TRT] --------------- Timing Runner: (Unnamed Layer* 34) [Convolution] + (Unnamed Layer* 45) [Scale] (CudaDepthwiseConvolution)
[01/13/2020-17:31:41] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping
[01/13/2020-17:31:41] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CudaConvolution Tactic: 2
[01/13/2020-17:31:41] [V] [TRT]
[01/13/2020-17:31:41] [V] [TRT] *************** Autotuning format combination: Float(1,1,(# 1 (SHAPE sequences)),(* 512 (# 1 (SHAPE sequences)))) → Float(1,(# 1 (SHAPE sequences)),(* 512 (# 1 (SHAPE sequences)))) ***************
[01/13/2020-17:31:41] [V] [TRT] --------------- Timing Runner: (Unnamed Layer* 49) [Shuffle] (Shuffle)
[01/13/2020-17:31:41] [V] [TRT] Tactic: 0 is the only option, timing skipped
[01/13/2020-17:31:41] [V] [TRT] Fastest Tactic: 0 Time: 0
[01/13/2020-17:31:41] [V] [TRT] *************** Autotuning format combination: Float(1,(# 1 (SHAPE sequences)),(* 512 (# 1 (SHAPE sequences)))) → Float(1,(# 1 (SHAPE sequences)),(* 512 (# 1 (SHAPE sequences)))) ***************
[01/13/2020-17:31:41] [V] [TRT] --------------- Timing Runner: (Unnamed Layer* 50) [Activation] (Activation)
[01/13/2020-17:31:41] [V] [TRT] Tactic: 0 is the only option, timing skipped
[01/13/2020-17:31:41] [V] [TRT] Fastest Tactic: 0 Time: 0
[01/13/2020-17:31:41] [V] [TRT] *************** Autotuning format combination: Float(1,(# 1 (SHAPE sequences)),(* 512 (# 1 (SHAPE sequences)))) → Float(1,(# 1 (SHAPE sequences)),(* 512 (# 1 (SHAPE sequences)))) ***************
[01/13/2020-17:31:41] [V] [TRT] --------------- Timing Runner: (Unnamed Layer* 51) [Identity] (Reformat)
[01/13/2020-17:31:41] [V] [TRT] Tactic: 0 time 0.007168
[01/13/2020-17:31:41] [V] [TRT] Fastest Tactic: 0 Time: 0.007168
[01/13/2020-17:31:41] [V] [TRT] --------------- Timing Runner: (Unnamed Layer* 51) [Identity] (Cast)
[01/13/2020-17:31:41] [V] [TRT] Cast has no valid tactics for this config, skipping
[01/13/2020-17:31:41] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: Reformat Tactic: 0
[01/13/2020-17:31:41] [V] [TRT]
[01/13/2020-17:31:41] [V] [TRT] *************** Autotuning format combination: Float(1,(# 1 (SHAPE sequences)),(* 512 (# 1 (SHAPE sequences)))) → Float(1,1,(# 1 (SHAPE sequences)),(* 512 (# 1 (SHAPE sequences)))) ***************
[01/13/2020-17:31:41] [V] [TRT] --------------- Timing Runner: (Unnamed Layer* 57) [Shuffle] (Shuffle)
[01/13/2020-17:31:41] [V] [TRT] Tactic: 0 is the only option, timing skipped
[01/13/2020-17:31:41] [V] [TRT] Fastest Tactic: 0 Time: 0
[01/13/2020-17:31:41] [V] [TRT] *************** Autotuning format combination: Float(1,1,(# 1 (SHAPE sequences)),(* 512 (# 1 (SHAPE sequences)))) → Float(1,1,(# 1 (SHAPE sequences)),(* 512 (# 1 (SHAPE sequences)))) ***************
[01/13/2020-17:31:41] [V] [TRT] --------------- Timing Runner: (Unnamed Layer* 58) [Convolution] + (Unnamed Layer* 69) [Scale] (LegacySASSConvolution)
[01/13/2020-17:31:41] [V] [TRT] Tactic: 0 time 0.536576
[01/13/2020-17:31:41] [V] [TRT] Fastest Tactic: 0 Time: 0.536576
[01/13/2020-17:31:41] [V] [TRT] --------------- Timing Runner: (Unnamed Layer* 58) [Convolution] + (Unnamed Layer* 69) [Scale] (FusedConvActConvolution)
[01/13/2020-17:31:41] [V] [TRT] FusedConvActConvolution has no valid tactics for this config, skipping
[01/13/2020-17:31:41] [V] [TRT] --------------- Timing Runner: (Unnamed Layer* 58) [Convolution] + (Unnamed Layer* 69) [Scale] (CaskConvolution)
[01/13/2020-17:31:41] [V] [TRT] CaskConvolution has no valid tactics for this config, skipping
[01/13/2020-17:31:41] [V] [TRT] --------------- Timing Runner: (Unnamed Layer* 58) [Convolution] + (Unnamed Layer* 69) [Scale] (CudaConvolution)
[01/13/2020-17:31:41] [V] [TRT] Tactic: 0 time 0.621568
[01/13/2020-17:31:41] [V] [TRT] Tactic: 1 time 0.482304
[01/13/2020-17:31:41] [V] [TRT] Tactic: 2 time 0.382976
[01/13/2020-17:31:41] [V] [TRT] Tactic: 5 skipped. Scratch requested: 36766720, available: 16777216
[01/13/2020-17:31:41] [V] [TRT] Fastest Tactic: 2 Time: 0.382976
[01/13/2020-17:31:41] [V] [TRT] --------------- Timing Runner: (Unnamed Layer* 58) [Convolution] + (Unnamed Layer* 69) [Scale] (CudaDepthwiseConvolution)
[01/13/2020-17:31:41] [V] [TRT] CudaDepthwiseConvolution has no valid tactics for this config, skipping
[01/13/2020-17:31:41] [V] [TRT] >>>>>>>>>>>>>>> Chose Runner Type: CudaConvolution Tactic: 2
[01/13/2020-17:31:41] [V] [TRT]
[01/13/2020-17:31:41] [V] [TRT] *************** Autotuning format combination: Float(1,1,(# 1 (SHAPE sequences)),(* 512 (# 1 (SHAPE sequences)))) → Float(1,(# 1 (SHAPE sequences)),(* 512 (# 1 (SHAPE sequences)))) ***************
[01/13/2020-17:31:41] [V] [TRT] --------------- Timing Runner: (Unnamed Layer* 73) [Shuffle] (Shuffle)
[01/13/2020-17:31:41] [V] [TRT] Tactic: 0 is the only option, timing skipped
[01/13/2020-17:31:41] [V] [TRT] Fastest Tactic: 0 Time:
Sorry, the info is too long; the final error message is:
[01/13/2020-17:31:41] [W] [TRT] Myelin graph with multiple dynamic values may have poor performance if they differ. Dynamic values are: (# 0 (SHAPE sequences)) (# 1 (SHAPE sequences))
[01/13/2020-17:31:51] [V] [TRT] Tactic: 0 skipped. Scratch requested: 420478976, available: 16777216
[01/13/2020-17:31:51] [V] [TRT] Fastest Tactic: -3360065831133338131 Time: 3.40282e+38
[01/13/2020-17:31:51] [E] [TRT] Internal error: could not find any implementation for node {(Unnamed Layer* 114) [Iterator],(Unnamed Layer* 113) [Iterator],(Unnamed Layer* 112) [TripLimit],(Unnamed Layer* 126) [Shuffle],(Unnamed Layer* 120) [Shuffle],(Unnamed Layer* 127) [Concatenation],(Unnamed Layer* 128) [Recurrence],(Unnamed Layer* 129) [Recurrence],(Unnamed Layer* 130) [Matrix Multiply],(Unnamed Layer* 131) [Matrix Multiply],(Unnamed Layer* 132) [ElementWise],(Unnamed Layer* 133) [ElementWise],(Unnamed Layer* 140) [Slice],(Unnamed Layer* 146) [Slice],(Unnamed Layer* 137) [Slice],(Unnamed Layer* 134) [Slice],(Unnamed Layer* 142) [Activation],(Unnamed Layer* 148) [Activation],(Unnamed Layer* 139) [Activation],(Unnamed Layer* 136) [Activation],(Unnamed Layer* 144) [ElementWise],(Unnamed Layer* 143) [ElementWise],(Unnamed Layer* 145) [ElementWise],(Unnamed Layer* 149) [Activation],(Unnamed Layer* 150) [ElementWise],(Unnamed Layer* 159) [Slice],(Unnamed Layer* 158) [Slice],(Unnamed Layer* 161) [LoopOutput],(Unnamed Layer* 160) [LoopOutput]}, try increasing the workspace size with IBuilder::setMaxWorkspaceSize()
[01/13/2020-17:31:51] [E] [TRT] …/builder/tacticOptimizer.cpp (1523) - OutOfMemory Error in computeCosts: 0
[01/13/2020-17:31:51] [E] Engine creation failed
[01/13/2020-17:31:51] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec # /usr/src/tensorrt/bin/trtexec --onnx=encoder.onnx --verbose --minShapes=sequences:1x4,sequence_lengths:1 --optShapes=sequences:1x128,sequence_lengths:1 --maxShapes=sequences:4x256,sequence_lengths:4
Hi,
A few points based on the logs:
- In TRT 7, ONNX parser supports full-dimensions mode only. Your network definition must be created with the explicitBatch flag set (when using ONNX parser).
--explicitBatch Use explicit batch sizes when building the engine (default = implicit)
- Since you are getting OutOfMemory Error, please try increasing the max workspace size:
--workspace=N Set workspace size in megabytes (default = 16)
- It seems some operations are not supported by ONNX-TRT; please refer to the link below for the supported operations. Any layer that is not supported needs to be replaced by a custom plugin.
https://github.com/onnx/onnx/blob/master/docs/Operators.md
Thanks
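For context on the numbers in the log above: trtexec takes --workspace in megabytes, while internally TensorRT works in bytes. A minimal sketch (plain Python arithmetic, no TensorRT required) showing why the default 16 MB workspace cannot satisfy the 420478976-byte scratch request from the log:

```python
def mb_to_bytes(mb):
    """Convert a trtexec-style workspace size in MB to bytes."""
    return mb << 20

default_workspace = mb_to_bytes(16)   # 16777216 bytes, the "available" value in the log
scratch_requested = 420478976         # bytes requested by the failing tactic

# Ceiling division: smallest --workspace value (in MB) that fits the request
needed_mb = -(-scratch_requested // (1 << 20))

print(default_workspace)   # 16777216
print(needed_mb)           # 401, so e.g. --workspace=512 leaves some headroom
```

So passing something like --workspace=512 to trtexec should clear the OutOfMemory error for this particular tactic.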
hi
Finally I generated the engine, but it is very strange: trtexec can run the generated engine, but running the engine from Python code reports an error.
my python code is like that:
def run_trt_engine(context, engine, tensors):
    print("start run trt engine")
    print(tensors.keys())
    bindings = [None] * engine.num_bindings
    for name in tensors.keys():
        idx = engine.get_binding_index(name)
        tensor = tensors.get(name)
        bindings[idx] = tensor.data_ptr()
        print("name:{}\tindex:{}".format(name, idx))
        # print(bindings[idx])
        if engine.binding_is_input(idx):
            if engine.is_shape_binding(idx) and is_shape_dynamic(context.get_shape(idx)):
                context.set_shape_input(idx, tensor)
            elif is_shape_dynamic(context.get_binding_shape(idx)):
                context.set_binding_shape(idx, tuple(tensor.shape))
    binding_info(engine, context)
    test = context.execute_async(bindings=bindings, stream_handle=1)
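(Side note: the snippet above calls an is_shape_dynamic helper that is not shown. A minimal guess at what it does, based on TensorRT's convention of reporting unknown-at-build-time dimensions as -1, as in the (-1, -1) binding shapes earlier in this thread:)

```python
def is_shape_dynamic(shape):
    # TensorRT reports dynamic dimensions as -1, e.g. the encoder's
    # "sequences" binding has shape (-1, -1) in the log above.
    return any(dim < 0 for dim in shape)

print(is_shape_dynamic((-1, -1)))  # True: both dims are dynamic
print(is_shape_dynamic((1, 4)))    # False: fully specified
```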
Hi,
Could you please share the script and model file along with the error log so we can better help?
Thanks
Hello, I'm new to Tacotron2. I was using it on Google Colab because I'm not tech-savvy enough to understand it on GitHub. I was enjoying this program, but then it suddenly kept giving me errors during training. I thought something was wrong on my end, so I re-transcribed all of my files, converted them to new ones, reformatted my PC with a new SSD, and so on. I just assumed the problem was on my end, but then I found that others are having the same problem. It seems it's no longer working in training mode. I think there's also a problem with the synthesis part, because it kept giving errors whenever I tried to redownload an already-trained model. Will this be fixed or patched anytime soon? I had a little project that I was very excited about making and now I'm wondering if it will ever be possible, lol. Thanks, I'd greatly appreciate an update.