==========================
=== Riva Speech Skills ===
==========================

NVIDIA Release 22.06 (build 40042668)
Riva Speech Server Version 2.3.0

Copyright (c) 2016-2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
Copyright (c) 2018-2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

https://developer.nvidia.com/tensorrt

Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES. All rights reserved.

This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license

To install Python sample dependencies, run /opt/tensorrt/python/python_setup.sh

To install the open-source samples corresponding to this TensorRT release version
run /opt/tensorrt/install_opensource.sh. To build the open source parsers, plugins,
and samples for current top-of-tree on master or a different branch, run
/opt/tensorrt/install_opensource.sh -b
See https://github.com/NVIDIA/TensorRT for more information.

[NeMo W 2022-07-13 10:47:50 optimizers:55] Apex was not found. Using the lamb or fused_adam optimizer will error out.
2022-07-13 10:47:50,600 [INFO] Writing Riva model repository to '/data/models/'...
2022-07-13 10:47:50,600 [INFO] The riva model repo target directory is /data/models/
2022-07-13 10:48:01,272 [INFO] Using obey-precision pass with fp16 TRT
2022-07-13 10:48:01,272 [INFO] Extract_binaries for nn -> /data/models/riva-trt-conformer-en-US-asr-offline-am-streaming/1
2022-07-13 10:48:01,272 [INFO] extracting {'onnx': ('nemo.collections.asr.models.ctc_bpe_models.EncDecCTCModelBPE', 'model_graph.onnx')} -> /data/models/riva-trt-conformer-en-US-asr-offline-am-streaming/1
2022-07-13 10:48:01,703 [INFO] Printing copied artifacts:
2022-07-13 10:48:01,703 [INFO] {'onnx': '/data/models/riva-trt-conformer-en-US-asr-offline-am-streaming/1/model_graph.onnx'}
2022-07-13 10:48:01,703 [INFO] Building TRT engine from ONNX file
[07/13/2022-10:48:04] [TRT] [W] parsers/onnx/onnx2trt_utils.cpp:364: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[07/13/2022-10:48:04] [TRT] [W] parsers/onnx/onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped
[07/13/2022-10:48:04] [TRT] [W] parsers/onnx/onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped
[07/13/2022-10:48:04] [TRT] [W] parsers/onnx/onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped
[07/13/2022-10:48:04] [TRT] [W] parsers/onnx/onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped
[07/13/2022-10:48:04] [TRT] [W] parsers/onnx/onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped
[07/13/2022-10:48:04] [TRT] [W] parsers/onnx/onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped
[07/13/2022-10:48:04] [TRT] [W] parsers/onnx/onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped
[07/13/2022-10:48:04] [TRT] [W] parsers/onnx/onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped
[07/13/2022-10:48:04] [TRT] [W] parsers/onnx/onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped
[07/13/2022-10:48:04] [TRT] [W] parsers/onnx/onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped
[07/13/2022-10:48:04] [TRT] [W] parsers/onnx/onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped
[07/13/2022-10:48:04] [TRT] [W] parsers/onnx/onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped
[07/13/2022-10:48:04] [TRT] [W] parsers/onnx/onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped
[07/13/2022-10:48:04] [TRT] [W] parsers/onnx/onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped
[07/13/2022-10:48:04] [TRT] [W] parsers/onnx/onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped
[07/13/2022-10:48:04] [TRT] [W] parsers/onnx/onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped
[07/13/2022-10:48:04] [TRT] [W] parsers/onnx/onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped
[07/13/2022-10:48:04] [TRT] [W] parsers/onnx/onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped
[07/13/2022-10:48:04] [TRT] [E] parsers/onnx/ModelImporter.cpp:780: While parsing node number 203 [Where -> "1137"]:
[07/13/2022-10:48:04] [TRT] [E] parsers/onnx/ModelImporter.cpp:781: --- Begin node ---
[07/13/2022-10:48:04] [TRT] [E] parsers/onnx/ModelImporter.cpp:782: input: "1135" input: "1136" input: "1134" output: "1137" name: "Where_301" op_type: "Where"
[07/13/2022-10:48:04] [TRT] [E] parsers/onnx/ModelImporter.cpp:783: --- End node ---
[07/13/2022-10:48:04] [TRT] [E] parsers/onnx/ModelImporter.cpp:785: ERROR: parsers/onnx/builtin_op_importers.cpp:4705 In function importWhere:
[8] Assertion failed: (x->getType() == y->getType() && x->getType() != nvinfer1::DataType::kBOOL) && "This version of TensorRT requires input x and y to have the same data type. BOOL is unsupported."
2022-07-13 10:48:04,571 [INFO] Mixed-precision net: 482 layers, 482 tensors, 0 outputs...
2022-07-13 10:48:04,577 [INFO] Mixed-precision net: 0 layers / 0 outputs fixed
[07/13/2022-10:48:04] [TRT] [E] 4: [network.cpp::validate::2633] Error Code 4: Internal Error (Network must have at least one output)
[07/13/2022-10:48:04] [TRT] [E] 2: [builder.cpp::buildSerializedNetwork::609] Error Code 2: Internal Error (Assertion enginePtr != nullptr failed.
)
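For reference only: the importWhere assertion above says the Where node's x and y inputs must have the same data type and must not be BOOL. A minimal inspection sketch (not part of riva-deploy; it assumes the onnx Python package is installed and reuses the model_graph.onnx path printed earlier in the log) that could list each Where node and the inferred element types of its inputs:

# Hypothetical inspection script -- not part of the Riva tooling.
import onnx
from onnx import shape_inference

MODEL = "/data/models/riva-trt-conformer-en-US-asr-offline-am-streaming/1/model_graph.onnx"

# Run shape inference so intermediate tensors carry a type where possible.
model = shape_inference.infer_shapes(onnx.load(MODEL))

# Map tensor names to element types from inputs, outputs, value_info and initializers.
dtypes = {}
for vi in list(model.graph.value_info) + list(model.graph.input) + list(model.graph.output):
    dtypes[vi.name] = onnx.TensorProto.DataType.Name(vi.type.tensor_type.elem_type)
for init in model.graph.initializer:
    dtypes[init.name] = onnx.TensorProto.DataType.Name(init.data_type)

# Print the input types of every Where node (e.g. Where_301 from the log above).
for node in model.graph.node:
    if node.op_type == "Where":
        print(node.name, [(name, dtypes.get(name, "?")) for name in node.input])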
2022-07-13 10:48:04,626 [INFO] Extract_binaries for featurizer -> /data/models/conformer-en-US-asr-offline-feature-extractor-streaming/1
2022-07-13 10:48:04,628 [INFO] Extract_binaries for vad -> /data/models/conformer-en-US-asr-offline-voice-activity-detector-ctc-streaming/1
2022-07-13 10:48:04,628 [INFO] extracting {'vocab_file': '/tmp/tmps75buu8h/riva_decoder_vocabulary.txt'} -> /data/models/conformer-en-US-asr-offline-voice-activity-detector-ctc-streaming/1
2022-07-13 10:48:04,628 [INFO] Extract_binaries for lm_decoder -> /data/models/conformer-en-US-asr-offline-ctc-decoder-cpu-streaming/1
2022-07-13 10:48:04,629 [INFO] extracting {'vocab_file': '/tmp/tmps75buu8h/riva_decoder_vocabulary.txt', 'tokenizer_model': ('nemo.collections.asr.models.ctc_bpe_models.EncDecCTCModelBPE', 'e06949b0b85a485e9f280ea6d19e5492_tokenizer.model')} -> /data/models/conformer-en-US-asr-offline-ctc-decoder-cpu-streaming/1
2022-07-13 10:48:04,629 [INFO] {'vocab_file': '/data/models/conformer-en-US-asr-offline-ctc-decoder-cpu-streaming/1/riva_decoder_vocabulary.txt', 'tokenizer_model': '/data/models/conformer-en-US-asr-offline-ctc-decoder-cpu-streaming/1/e06949b0b85a485e9f280ea6d19e5492_tokenizer.model'}
2022-07-13 10:48:04,630 [INFO] Extract_binaries for conformer-en-US-asr-offline -> /data/models/conformer-en-US-asr-offline/1
2022-07-13 10:48:04,630 [ERROR] Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/servicemaker/cli/deploy.py", line 100, in deploy_from_rmir
    generator.serialize_to_disk(
  File "/usr/local/lib/python3.8/dist-packages/servicemaker/triton/triton.py", line 433, in serialize_to_disk
    RivaConfigGenerator.serialize_to_disk(self, repo_dir, rmir, config_only, verbose, overwrite)
  File "/usr/local/lib/python3.8/dist-packages/servicemaker/triton/triton.py", line 312, in serialize_to_disk
    self.generate_config(version_dir, rmir)
  File "/usr/local/lib/python3.8/dist-packages/servicemaker/triton/asr.py", line 838, in generate_config
    'output_map': {nn._outputs[0].name: ctc_inp_key},
IndexError: list index out of range

[W] colored module is not installed, will not use colors when logging. To enable colors, please install the colored module: python3 -m pip install colored
[W] 'Shape tensor cast elision' routine failed with: None
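The final IndexError looks like a downstream symptom rather than a separate bug: the TensorRT engine build above failed, so the network ends up with no outputs and generate_config in asr.py dereferences an empty nn._outputs list. A rough sketch, assuming the tensorrt Python bindings shipped in this container and the same ONNX path, that would rerun just the ONNX parsing step and surface the same parser errors outside of riva-deploy:

# Hypothetical repro script -- not part of the Riva tooling. The path below is the
# one reported by riva-deploy; adjust it if your model repository differs.
import tensorrt as trt

MODEL = "/data/models/riva-trt-conformer-en-US-asr-offline-am-streaming/1/model_graph.onnx"

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
# Explicit-batch network definition, as required by the TensorRT ONNX parser.
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open(MODEL, "rb") as f:
    ok = parser.parse(f.read())

if not ok:
    # Expect the same importWhere assertion that appears in the riva-deploy log.
    for i in range(parser.num_errors):
        print(parser.get_error(i))
else:
    print("Parsed OK; outputs:", network.num_outputs)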