Hardware - NVIDIA GeForce RTX 2080
Hardware - Intel Core i7-9700K
Operating System - WSL with Ubuntu 20.04
nemo2riva - 2.11.0
nemo - 1.18.0rc0
I used this command to install NeMo:
BRANCH=main
!python -m pip install git+https://github.com/NVIDIA/NeMo.git@$BRANCH#egg=nemo_toolkit[all]
After fine-tuning a Conformer (stt_en_conformer_ctc_large_ls) model, I obtained a .nemo file that I am unable to convert to .riva with nemo2riva.
I ran the following command:
nemo2riva --out {riva_file_path} {nemo_file_path}
I followed the "Example: Kinyarwanda ASR using Mozilla Common Voice Dataset" tutorial from the NVIDIA NeMo docs and modified the config file, mainly by reducing the batch size.
ERROR: Export failed. Please make sure your NeMo model class (<class 'nemo.collections.asr.models.ctc_bpe_models.EncDecCTCModelBPE'>) has working export() and that you have the latest NeMo package installed with [all] dependencies.
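As the log below shows, before this error appears nemo2riva repeatedly halves max_dim (100000 → 50000 → 25000 → 12500) after CUDA OOMs on my 8 GB card. A sketch of what I could try to skip those retries, assuming the `--max-dim` option accepted by the nemo2riva CLI (the placeholder paths are the same as above):

```shell
# Hedged sketch: start the export with a smaller max_dim up front instead of
# letting nemo2riva halve it from its 100000 default after each CUDA OOM.
# {riva_file_path} and {nemo_file_path} are placeholders, as in the command above.
nemo2riva --max-dim 12500 --out {riva_file_path} {nemo_file_path}
```

This only shortens the OOM retry loop, though; in my case the final failure is a dtype error, not the OOM itself.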
Here is the full output:
[NeMo W 2023-05-30 09:48:51 optimizers:54] Apex was not found. Using the lamb or fused_adam optimizer will error out.
[NeMo W 2023-05-30 09:49:08 experimental:27] Module <class 'nemo.collections.asr.modules.audio_modules.SpectrogramToMultichannelFeatures'> is experimental, not ready for production and is not fully supported. Use at your own risk.
[NeMo W 2023-05-30 09:49:11 experimental:27] Module <class 'nemo.collections.tts.models.fastpitch_ssl.FastPitchModel_SSL'> is experimental, not ready for production and is not fully supported. Use at your own risk.
[NeMo W 2023-05-30 09:49:11 experimental:27] Module <class 'nemo.collections.common.tokenizers.text_to_speech.tts_tokenizers.IPATokenizer'> is experimental, not ready for production and is not fully supported. Use at your own risk.
[NeMo W 2023-05-30 09:49:11 experimental:27] Module <class 'nemo.collections.tts.models.radtts.RadTTSModel'> is experimental, not ready for production and is not fully supported. Use at your own risk.
[NeMo W 2023-05-30 09:49:16 experimental:27] Module <class 'nemo.collections.tts.models.ssl_tts.SSLDisentangler'> is experimental, not ready for production and is not fully supported. Use at your own risk.
[NeMo W 2023-05-30 09:49:16 experimental:27] Module <class 'nemo.collections.tts.models.vits.VitsModel'> is experimental, not ready for production and is not fully supported. Use at your own risk.
[NeMo I 2023-05-30 09:49:16 nemo2riva:38] Logging level set to 20
[NeMo I 2023-05-30 09:49:16 convert:35] Restoring NeMo model from 'models/finetuned/conformer_ctc_bpe_final_epoch100_tokenbpe.nemo'
GPU available: True (cuda), used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
[NeMo W 2023-05-30 09:49:17 nemo_logging:349] /mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/pytorch_lightning/trainer/setup.py:176: PossibleUserWarning: GPU available but not used. Set `accelerator` and `devices` using `Trainer(accelerator='gpu', devices=1)`.
      rank_zero_warn(
[NeMo I 2023-05-30 09:49:22 mixins:170] Tokenizer SentencePieceTokenizer initialized with 128 tokens
[NeMo W 2023-05-30 09:49:22 modelPT:161] If you intend to do training or fine-tuning, please call the ModelPT.setup_training_data() method and provide a valid configuration file to setup the train data loader.
Train config :
manifest_filepath: ./manifests/train_manifest_final.json
sample_rate: 16000
batch_size: 4
shuffle: true
num_workers: 16
pin_memory: true
max_duration: 30.0
min_duration: 0.1
is_tarred: false
tarred_audio_filepaths: null
shuffle_n: 2048
bucketing_strategy: synced_randomized
bucketing_batch_size: null
[NeMo W 2023-05-30 09:49:22 modelPT:168] If you intend to do validation, please call the ModelPT.setup_validation_data() or ModelPT.setup_multiple_validation_data() method and provide a valid configuration file to setup the validation data loader(s).
Validation config :
manifest_filepath: ./manifests/dev_manifest_final.json
sample_rate: 16000
batch_size: 4
shuffle: false
use_start_end_token: false
num_workers: 8
pin_memory: true
[NeMo W 2023-05-30 09:49:22 modelPT:174] Please call the ModelPT.setup_test_data() or ModelPT.setup_multiple_test_data() method and provide a valid configuration file to setup the test data loader(s).
Test config :
manifest_filepath: ./manifests/test_manifest_final.json
sample_rate: 16000
batch_size: 4
shuffle: false
use_start_end_token: false
num_workers: 8
pin_memory: true
[NeMo I 2023-05-30 09:49:22 features:291] PADDING: 0
ERROR: Call to cuInit results in CUDA_ERROR_NO_DEVICE
[NeMo I 2023-05-30 09:49:24 save_restore_connector:249] Model EncDecCTCModelBPE was successfully restored from /mnt/d/MAI/STT_PER_EV/NEMO/models/finetuned/conformer_ctc_bpe_final_epoch100_tokenbpe.nemo.
[NeMo I 2023-05-30 09:49:24 schema:148] Loaded schema file /mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/nemo2riva/validation_schemas/asr-scr-exported-encdecclsmodel.yaml for nemo.collections.asr.models.classification_models.EncDecClassificationModel
[NeMo I 2023-05-30 09:49:24 schema:148] Loaded schema file /mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/nemo2riva/validation_schemas/asr-stt-exported-encdecctcmodel.yaml for nemo.collections.asr.models.EncDecCTCModel
[NeMo I 2023-05-30 09:49:24 schema:148] Loaded schema file /mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/nemo2riva/validation_schemas/asr-stt-exported-encdectcmodelbpe.yaml for nemo.collections.asr.models.EncDecCTCModelBPE
[NeMo I 2023-05-30 09:49:24 schema:148] Loaded schema file /mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/nemo2riva/validation_schemas/nlp-isc-exported-bert.yaml for nemo.collections.nlp.models.IntentSlotClassificationModel
[NeMo I 2023-05-30 09:49:24 schema:148] Loaded schema file /mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/nemo2riva/validation_schemas/nlp-mt-exported-encdecmtmodel.yaml for nemo.collections.nlp.models.MTEncDecModel
[NeMo I 2023-05-30 09:49:24 schema:148] Loaded schema file /mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/nemo2riva/validation_schemas/nlp-mt-exported-megatronnmtmodel.yaml for nemo.collections.nlp.models.MegatronNMTModel
[NeMo I 2023-05-30 09:49:24 schema:148] Loaded schema file /mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/nemo2riva/validation_schemas/nlp-pc-exported-bert.yaml for nemo.collections.nlp.models.PunctuationCapitalizationModel
[NeMo I 2023-05-30 09:49:24 schema:148] Loaded schema file /mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/nemo2riva/validation_schemas/nlp-qa-exported-bert.yaml for nemo.collections.nlp.models.QAModel
[NeMo I 2023-05-30 09:49:24 schema:148] Loaded schema file /mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/nemo2riva/validation_schemas/nlp-tc-exported-bert.yaml for nemo.collections.nlp.models.TextClassificationModel
[NeMo I 2023-05-30 09:49:24 schema:148] Loaded schema file /mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/nemo2riva/validation_schemas/nlp-tkc-exported-bert.yaml for nemo.collections.nlp.models.TokenClassificationModel
[NeMo I 2023-05-30 09:49:24 schema:148] Loaded schema file /mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/nemo2riva/validation_schemas/tts-exported-fastpitchmodel.yaml for nemo.collections.tts.models.FastPitchModel
[NeMo I 2023-05-30 09:49:24 schema:148] Loaded schema file /mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/nemo2riva/validation_schemas/tts-exported-hifiganmodel.yaml for nemo.collections.tts.models.HifiGanModel
[NeMo I 2023-05-30 09:49:24 schema:148] Loaded schema file /mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/nemo2riva/validation_schemas/tts-exported-radttsmodel.yaml for nemo.collections.tts.models.RadTTSModel
[NeMo I 2023-05-30 09:49:24 schema:187] Found validation schema for nemo.collections.asr.models.EncDecCTCModelBPE at /mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/nemo2riva/validation_schemas/asr-stt-exported-encdectcmodelbpe.yaml
[NeMo I 2023-05-30 09:49:24 schema:216] Checking installed NeMo version … 1.18.0rc0 OK (>=1.1)
[NeMo I 2023-05-30 09:49:25 artifacts:59] Found model at ./model_weights.ckpt
INFO: Checking Nemo version for ConformerEncoder …
[NeMo I 2023-05-30 09:49:25 schema:216] Checking installed NeMo version … 1.18.0rc0 OK (>=1.7.0rc0)
[NeMo I 2023-05-30 09:49:25 artifacts:136] Retrieved artifacts: dict_keys(['052b4d9f4d2c45b8938f6197bb348105_tokenizer.vocab', '94a4475434b1450aad2638e1c2d7ebb9_vocab.txt', 'ddd360ec8832437bb5a8f6114fc91547_tokenizer.model', 'model_config.yaml'])
[NeMo I 2023-05-30 09:49:25 cookbook:71] Exporting model EncDecCTCModelBPE with config=ExportConfig(export_subnet=None, export_format='ONNX', export_file='model_graph.onnx', encryption=None, autocast=True, max_dim=100000)
[NeMo W 2023-05-30 09:49:26 nemo2riva:62] It looks like you're trying to export a ASR model with max_dim=100000. Export is failing due to CUDA OOM. Reducing max_dim to 50000 and trying again…
[NeMo I 2023-05-30 09:49:26 convert:35] Restoring NeMo model from 'models/finetuned/conformer_ctc_bpe_final_epoch100_tokenbpe.nemo'
GPU available: True (cuda), used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
[NeMo W 2023-05-30 09:49:26 nemo_logging:349] /mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/pytorch_lightning/trainer/setup.py:176: PossibleUserWarning: GPU available but not used. Set `accelerator` and `devices` using `Trainer(accelerator='gpu', devices=1)`.
      rank_zero_warn(
[NeMo I 2023-05-30 09:49:31 mixins:170] Tokenizer SentencePieceTokenizer initialized with 128 tokens
[NeMo W 2023-05-30 09:49:31 modelPT:161] If you intend to do training or fine-tuning, please call the ModelPT.setup_training_data() method and provide a valid configuration file to setup the train data loader.
Train config :
manifest_filepath: ./manifests/train_manifest_final.json
sample_rate: 16000
batch_size: 4
shuffle: true
num_workers: 16
pin_memory: true
max_duration: 30.0
min_duration: 0.1
is_tarred: false
tarred_audio_filepaths: null
shuffle_n: 2048
bucketing_strategy: synced_randomized
bucketing_batch_size: null
[NeMo W 2023-05-30 09:49:31 modelPT:168] If you intend to do validation, please call the ModelPT.setup_validation_data() or ModelPT.setup_multiple_validation_data() method and provide a valid configuration file to setup the validation data loader(s).
Validation config :
manifest_filepath: ./manifests/dev_manifest_final.json
sample_rate: 16000
batch_size: 4
shuffle: false
use_start_end_token: false
num_workers: 8
pin_memory: true
[NeMo W 2023-05-30 09:49:31 modelPT:174] Please call the ModelPT.setup_test_data() or ModelPT.setup_multiple_test_data() method and provide a valid configuration file to setup the test data loader(s).
Test config :
manifest_filepath: ./manifests/test_manifest_final.json
sample_rate: 16000
batch_size: 4
shuffle: false
use_start_end_token: false
num_workers: 8
pin_memory: true
[NeMo I 2023-05-30 09:49:31 features:291] PADDING: 0
[NeMo I 2023-05-30 09:49:33 save_restore_connector:249] Model EncDecCTCModelBPE was successfully restored from /mnt/d/MAI/STT_PER_EV/NEMO/models/finetuned/conformer_ctc_bpe_final_epoch100_tokenbpe.nemo.
[NeMo I 2023-05-30 09:49:33 schema:187] Found validation schema for nemo.collections.asr.models.EncDecCTCModelBPE at /mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/nemo2riva/validation_schemas/asr-stt-exported-encdectcmodelbpe.yaml
[NeMo I 2023-05-30 09:49:33 schema:216] Checking installed NeMo version … 1.18.0rc0 OK (>=1.1)
[NeMo I 2023-05-30 09:49:33 artifacts:59] Found model at ./model_weights.ckpt
INFO: Checking Nemo version for ConformerEncoder …
[NeMo I 2023-05-30 09:49:33 schema:216] Checking installed NeMo version … 1.18.0rc0 OK (>=1.7.0rc0)
[NeMo I 2023-05-30 09:49:33 artifacts:136] Retrieved artifacts: dict_keys(['052b4d9f4d2c45b8938f6197bb348105_tokenizer.vocab', '94a4475434b1450aad2638e1c2d7ebb9_vocab.txt', 'ddd360ec8832437bb5a8f6114fc91547_tokenizer.model', 'model_config.yaml'])
[NeMo I 2023-05-30 09:49:33 cookbook:71] Exporting model EncDecCTCModelBPE with config=ExportConfig(export_subnet=None, export_format='ONNX', export_file='model_graph.onnx', encryption=None, autocast=True, max_dim=50000)
[NeMo W 2023-05-30 09:49:38 nemo2riva:62] It looks like you're trying to export a ASR model with max_dim=50000. Export is failing due to CUDA OOM. Reducing max_dim to 25000 and trying again…
[NeMo I 2023-05-30 09:49:38 convert:35] Restoring NeMo model from 'models/finetuned/conformer_ctc_bpe_final_epoch100_tokenbpe.nemo'
GPU available: True (cuda), used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
[NeMo W 2023-05-30 09:49:39 nemo_logging:349] /mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/pytorch_lightning/trainer/setup.py:176: PossibleUserWarning: GPU available but not used. Set `accelerator` and `devices` using `Trainer(accelerator='gpu', devices=1)`.
      rank_zero_warn(
[NeMo I 2023-05-30 09:49:44 mixins:170] Tokenizer SentencePieceTokenizer initialized with 128 tokens
[NeMo W 2023-05-30 09:49:44 modelPT:161] If you intend to do training or fine-tuning, please call the ModelPT.setup_training_data() method and provide a valid configuration file to setup the train data loader.
Train config :
manifest_filepath: ./manifests/train_manifest_final.json
sample_rate: 16000
batch_size: 4
shuffle: true
num_workers: 16
pin_memory: true
max_duration: 30.0
min_duration: 0.1
is_tarred: false
tarred_audio_filepaths: null
shuffle_n: 2048
bucketing_strategy: synced_randomized
bucketing_batch_size: null
[NeMo W 2023-05-30 09:49:44 modelPT:168] If you intend to do validation, please call the ModelPT.setup_validation_data() or ModelPT.setup_multiple_validation_data() method and provide a valid configuration file to setup the validation data loader(s).
Validation config :
manifest_filepath: ./manifests/dev_manifest_final.json
sample_rate: 16000
batch_size: 4
shuffle: false
use_start_end_token: false
num_workers: 8
pin_memory: true
[NeMo W 2023-05-30 09:49:44 modelPT:174] Please call the ModelPT.setup_test_data() or ModelPT.setup_multiple_test_data() method and provide a valid configuration file to setup the test data loader(s).
Test config :
manifest_filepath: ./manifests/test_manifest_final.json
sample_rate: 16000
batch_size: 4
shuffle: false
use_start_end_token: false
num_workers: 8
pin_memory: true
[NeMo I 2023-05-30 09:49:44 features:291] PADDING: 0
[NeMo I 2023-05-30 09:49:46 save_restore_connector:249] Model EncDecCTCModelBPE was successfully restored from /mnt/d/MAI/STT_PER_EV/NEMO/models/finetuned/conformer_ctc_bpe_final_epoch100_tokenbpe.nemo.
[NeMo I 2023-05-30 09:49:46 schema:187] Found validation schema for nemo.collections.asr.models.EncDecCTCModelBPE at /mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/nemo2riva/validation_schemas/asr-stt-exported-encdectcmodelbpe.yaml
[NeMo I 2023-05-30 09:49:46 schema:216] Checking installed NeMo version … 1.18.0rc0 OK (>=1.1)
[NeMo I 2023-05-30 09:49:46 artifacts:59] Found model at ./model_weights.ckpt
INFO: Checking Nemo version for ConformerEncoder …
[NeMo I 2023-05-30 09:49:46 schema:216] Checking installed NeMo version … 1.18.0rc0 OK (>=1.7.0rc0)
[NeMo I 2023-05-30 09:49:46 artifacts:136] Retrieved artifacts: dict_keys(['052b4d9f4d2c45b8938f6197bb348105_tokenizer.vocab', '94a4475434b1450aad2638e1c2d7ebb9_vocab.txt', 'ddd360ec8832437bb5a8f6114fc91547_tokenizer.model', 'model_config.yaml'])
[NeMo I 2023-05-30 09:49:46 cookbook:71] Exporting model EncDecCTCModelBPE with config=ExportConfig(export_subnet=None, export_format='ONNX', export_file='model_graph.onnx', encryption=None, autocast=True, max_dim=25000)
[NeMo W 2023-05-30 09:49:46 nemo2riva:62] It looks like you're trying to export a ASR model with max_dim=25000. Export is failing due to CUDA OOM. Reducing max_dim to 12500 and trying again…
[NeMo I 2023-05-30 09:49:46 convert:35] Restoring NeMo model from 'models/finetuned/conformer_ctc_bpe_final_epoch100_tokenbpe.nemo'
GPU available: True (cuda), used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
[NeMo W 2023-05-30 09:49:47 nemo_logging:349] /mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/pytorch_lightning/trainer/setup.py:176: PossibleUserWarning: GPU available but not used. Set `accelerator` and `devices` using `Trainer(accelerator='gpu', devices=1)`.
      rank_zero_warn(
[NeMo I 2023-05-30 09:49:52 mixins:170] Tokenizer SentencePieceTokenizer initialized with 128 tokens
[NeMo W 2023-05-30 09:49:52 modelPT:161] If you intend to do training or fine-tuning, please call the ModelPT.setup_training_data() method and provide a valid configuration file to setup the train data loader.
Train config :
manifest_filepath: ./manifests/train_manifest_final.json
sample_rate: 16000
batch_size: 4
shuffle: true
num_workers: 16
pin_memory: true
max_duration: 30.0
min_duration: 0.1
is_tarred: false
tarred_audio_filepaths: null
shuffle_n: 2048
bucketing_strategy: synced_randomized
bucketing_batch_size: null
[NeMo W 2023-05-30 09:49:52 modelPT:168] If you intend to do validation, please call the ModelPT.setup_validation_data() or ModelPT.setup_multiple_validation_data() method and provide a valid configuration file to setup the validation data loader(s).
Validation config :
manifest_filepath: ./manifests/dev_manifest_final.json
sample_rate: 16000
batch_size: 4
shuffle: false
use_start_end_token: false
num_workers: 8
pin_memory: true
[NeMo W 2023-05-30 09:49:52 modelPT:174] Please call the ModelPT.setup_test_data() or ModelPT.setup_multiple_test_data() method and provide a valid configuration file to setup the test data loader(s).
Test config :
manifest_filepath: ./manifests/test_manifest_final.json
sample_rate: 16000
batch_size: 4
shuffle: false
use_start_end_token: false
num_workers: 8
pin_memory: true
[NeMo I 2023-05-30 09:49:52 features:291] PADDING: 0
[NeMo I 2023-05-30 09:49:54 save_restore_connector:249] Model EncDecCTCModelBPE was successfully restored from /mnt/d/MAI/STT_PER_EV/NEMO/models/finetuned/conformer_ctc_bpe_final_epoch100_tokenbpe.nemo.
[NeMo I 2023-05-30 09:49:54 schema:187] Found validation schema for nemo.collections.asr.models.EncDecCTCModelBPE at /mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/nemo2riva/validation_schemas/asr-stt-exported-encdectcmodelbpe.yaml
[NeMo I 2023-05-30 09:49:54 schema:216] Checking installed NeMo version … 1.18.0rc0 OK (>=1.1)
[NeMo I 2023-05-30 09:49:54 artifacts:59] Found model at ./model_weights.ckpt
INFO: Checking Nemo version for ConformerEncoder …
[NeMo I 2023-05-30 09:49:54 schema:216] Checking installed NeMo version … 1.18.0rc0 OK (>=1.7.0rc0)
[NeMo I 2023-05-30 09:49:54 artifacts:136] Retrieved artifacts: dict_keys(['052b4d9f4d2c45b8938f6197bb348105_tokenizer.vocab', '94a4475434b1450aad2638e1c2d7ebb9_vocab.txt', 'ddd360ec8832437bb5a8f6114fc91547_tokenizer.model', 'model_config.yaml'])
[NeMo I 2023-05-30 09:49:54 cookbook:71] Exporting model EncDecCTCModelBPE with config=ExportConfig(export_subnet=None, export_format='ONNX', export_file='model_graph.onnx', encryption=None, autocast=True, max_dim=12500)
[NeMo W 2023-05-30 09:49:54 nemo_logging:349] /mnt/d/MAI/STT_PER_EV/NEMO/NeMo/nemo/collections/asr/modules/conformer_encoder.py:466: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if seq_length > self.max_audio_length:
[NeMo W 2023-05-30 09:49:55 nemo_logging:349] /mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/torch/onnx/symbolic_opset9.py:2112: FutureWarning: 'torch.onnx.symbolic_opset9._cast_Bool' is deprecated in version 2.0 and will be removed in the future. Please Avoid using this function and create a Cast node instead.
return fn(g, to_cast_func(g, input, False), to_cast_func(g, other, False))
============= Diagnostic Run torch.onnx.export version 2.0.1+cu117 =============
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================
[NeMo E 2023-05-30 09:49:55 cookbook:122] ERROR: Export failed. Please make sure your NeMo model class (<class 'nemo.collections.asr.models.ctc_bpe_models.EncDecCTCModelBPE'>) has working export() and that you have the latest NeMo package installed with [all] dependencies.
Traceback (most recent call last):
  File "/mnt/d/MAI/STT_PER_EV/NEMO/env/bin/nemo2riva", line 8, in <module>
    sys.exit(nemo2riva())
  File "/mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/nemo2riva/cli/nemo2riva.py", line 49, in nemo2riva
    Nemo2Riva(args)
  File "/mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/nemo2riva/convert.py", line 83, in Nemo2Riva
    export_model(
  File "/mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/nemo2riva/cookbook.py", line 123, in export_model
    raise e
  File "/mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/nemo2riva/cookbook.py", line 81, in export_model
    _, descriptions = model.export(
  File "/mnt/d/MAI/STT_PER_EV/NEMO/NeMo/nemo/core/classes/exportable.py", line 113, in export
    out, descr, out_example = model._export(
  File "/mnt/d/MAI/STT_PER_EV/NEMO/NeMo/nemo/core/classes/exportable.py", line 220, in _export
    torch.onnx.export(
  File "/mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/torch/onnx/utils.py", line 506, in export
    _export(
  File "/mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/torch/onnx/utils.py", line 1548, in _export
    graph, params_dict, torch_out = _model_to_graph(
  File "/mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/torch/onnx/utils.py", line 1117, in _model_to_graph
    graph = _optimize_graph(
  File "/mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/torch/onnx/utils.py", line 665, in _optimize_graph
    graph = _C._jit_pass_onnx(graph, operator_export_type)
  File "/mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/torch/onnx/utils.py", line 1891, in _run_symbolic_function
    return symbolic_fn(graph_context, *inputs, **attrs)
  File "/mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/torch/onnx/symbolic_helper.py", line 306, in wrapper
    return fn(g, *args, **kwargs)
  File "/mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/torch/onnx/symbolic_opset14.py", line 79, in batch_norm
    return symbolic_helper._onnx_opset_unsupported_detailed(
  File "/mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/torch/onnx/symbolic_helper.py", line 657, in _onnx_opset_unsupported_detailed
    raise errors.SymbolicValueError(
torch.onnx.errors.SymbolicValueError: Unsupported: ONNX export of BatchNormalization in opset 14. All input tensors must have the same dtype. Turn off Autocast or export using opset version 15. Please try opset version 15. [Caused by the value 'input.51 defined in (%input.51 : Half(*, 512, *, strides=[1600000, 3125, 1], requires_grad=0, device=cuda:0) = onnx::Conv[dilations=[1], group=512, kernel_shape=[31], pads=[0, 0], strides=[1]](%1122, %1121, %1120), scope: nemo.collections.asr.models.ctc_bpe_models.EncDecCTCModelBPE::/nemo.collections.asr.parts.submodules.conformer_modules.ConformerLayer::layers.0/nemo.collections.asr.parts.submodules.conformer_modules.ConformerConvolution::conv/nemo.collections.asr.parts.submodules.causal_convs.CausalConv1D::depthwise_conv # /mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/torch/nn/modules/conv.py:309:0
)' (type 'Tensor') in the TorchScript graph. The containing node has kind 'onnx::Conv'.]
(node defined in /mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/torch/nn/modules/conv.py(309): _conv_forward
/mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/torch/nn/modules/conv.py(313): forward
/mnt/d/MAI/STT_PER_EV/NEMO/NeMo/nemo/collections/asr/parts/submodules/causal_convs.py(148): forward
/mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/torch/nn/modules/module.py(1488): _slow_forward
/mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/torch/nn/modules/module.py(1501): _call_impl
/mnt/d/MAI/STT_PER_EV/NEMO/NeMo/nemo/collections/asr/parts/submodules/conformer_modules.py(374): forward
/mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/torch/nn/modules/module.py(1488): _slow_forward
/mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/torch/nn/modules/module.py(1501): _call_impl
/mnt/d/MAI/STT_PER_EV/NEMO/NeMo/nemo/collections/asr/parts/submodules/conformer_modules.py(211): forward
/mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/torch/nn/modules/module.py(1488): _slow_forward
/mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/torch/nn/modules/module.py(1501): _call_impl
/mnt/d/MAI/STT_PER_EV/NEMO/NeMo/nemo/collections/asr/modules/conformer_encoder.py(619): forward_internal
/mnt/d/MAI/STT_PER_EV/NEMO/NeMo/nemo/collections/asr/modules/conformer_encoder.py(508): forward_for_export
/mnt/d/MAI/STT_PER_EV/NEMO/NeMo/nemo/collections/asr/models/asr_model.py(192): forward_for_export
/mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/torch/nn/modules/module.py(1488): _slow_forward
/mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/torch/nn/modules/module.py(1501): _call_impl
/mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/torch/jit/_trace.py(118): wrapper
/mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/torch/jit/_trace.py(127): forward
/mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/torch/nn/modules/module.py(1501): _call_impl
/mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/torch/jit/_trace.py(1268): _get_trace_graph
/mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/torch/onnx/utils.py(893): _trace_and_get_graph_from_model
/mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/torch/onnx/utils.py(989): _create_jit_graph
/mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/torch/onnx/utils.py(1113): _model_to_graph
/mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/torch/onnx/utils.py(1548): _export
/mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/torch/onnx/utils.py(506): export
/mnt/d/MAI/STT_PER_EV/NEMO/NeMo/nemo/core/classes/exportable.py(220): _export
/mnt/d/MAI/STT_PER_EV/NEMO/NeMo/nemo/core/classes/exportable.py(113): export
/mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/nemo2riva/cookbook.py(81): export_model
/mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/nemo2riva/convert.py(83): Nemo2Riva
/mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/nemo2riva/cli/nemo2riva.py(49): nemo2riva
/mnt/d/MAI/STT_PER_EV/NEMO/env/bin/nemo2riva(8):
)
Inputs:
#0: 1122 defined in (%1122 : Half(*, *, *, strides=[1615360, 3155, 1], requires_grad=0, device=cuda:0) = onnx::Cast[to=10](%input.47), scope: nemo.collections.asr.models.ctc_bpe_models.EncDecCTCModelBPE::/nemo.collections.asr.parts.submodules.conformer_modules.ConformerLayer::layers.0/nemo.collections.asr.parts.submodules.conformer_modules.ConformerConvolution::conv/nemo.collections.asr.parts.submodules.causal_convs.CausalConv1D::depthwise_conv # /mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/torch/nn/modules/conv.py:309:0
) (type 'Tensor')
#1: 1121 defined in (%1121 : Half(512, 1, 31, strides=[31, 31, 1], requires_grad=0, device=cuda:0) = onnx::Cast[to=10](%encoder.layers.0.conv.depthwise_conv.weight), scope: nemo.collections.asr.models.ctc_bpe_models.EncDecCTCModelBPE::/nemo.collections.asr.parts.submodules.conformer_modules.ConformerLayer::layers.0/nemo.collections.asr.parts.submodules.conformer_modules.ConformerConvolution::conv/nemo.collections.asr.parts.submodules.causal_convs.CausalConv1D::depthwise_conv # /mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/torch/nn/modules/conv.py:309:0
) (type 'Tensor')
#2: 1120 defined in (%1120 : Half(512, strides=[1], requires_grad=0, device=cuda:0) = onnx::Cast[to=10](%encoder.layers.0.conv.depthwise_conv.bias), scope: nemo.collections.asr.models.ctc_bpe_models.EncDecCTCModelBPE::/nemo.collections.asr.parts.submodules.conformer_modules.ConformerLayer::layers.0/nemo.collections.asr.parts.submodules.conformer_modules.ConformerConvolution::conv/nemo.collections.asr.parts.submodules.causal_convs.CausalConv1D::depthwise_conv # /mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/torch/nn/modules/conv.py:309:0
) (type 'Tensor')
Outputs:
#0: input.51 defined in (%input.51 : Half(*, 512, *, strides=[1600000, 3125, 1], requires_grad=0, device=cuda:0) = onnx::Conv[dilations=[1], group=512, kernel_shape=[31], pads=[0, 0], strides=[1]](%1122, %1121, %1120), scope: nemo.collections.asr.models.ctc_bpe_models.EncDecCTCModelBPE::/nemo.collections.asr.parts.submodules.conformer_modules.ConformerLayer::layers.0/nemo.collections.asr.parts.submodules.conformer_modules.ConformerConvolution::conv/nemo.collections.asr.parts.submodules.causal_convs.CausalConv1D::depthwise_conv # /mnt/d/MAI/STT_PER_EV/NEMO/env/lib/python3.8/site-packages/torch/nn/modules/conv.py:309:0
) (type 'Tensor')
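The final SymbolicValueError points at the root cause: with autocast=True, the traced graph mixes Half and float32 tensors at a BatchNormalization node, which ONNX opset 14 rejects, and the message itself suggests turning off Autocast or using opset 15. A sketch of a manual export that follows that advice via NeMo's own export() API, which I could try to confirm the model exports at all before going through nemo2riva again. The model path is mine; `onnx_opset_version` and `check_trace` are keyword arguments of NeMo's Exportable.export() in 1.18, but treat this as an untested workaround, not a confirmed fix:

```python
# Hedged workaround sketch: call NeMo's export() directly with ONNX opset 15,
# as the error message suggests, instead of nemo2riva's default settings.
# Exporting on CPU also sidesteps the CUDA autocast path that produces the
# Half/float32 mix in the first place.
import nemo.collections.asr as nemo_asr

model = nemo_asr.models.EncDecCTCModelBPE.restore_from(
    "models/finetuned/conformer_ctc_bpe_final_epoch100_tokenbpe.nemo"
)
model.eval()
model.to("cpu")  # avoid GPU autocast mixing Half inputs with float32 BN params
model.export(
    "model_graph.onnx",
    onnx_opset_version=15,  # opset 15 allows mixed-dtype BatchNormalization
    check_trace=False,
)
```

If the ONNX file comes out of this, the remaining question would be how to make nemo2riva itself use opset 15 (or disable autocast) so the .riva packaging step succeeds.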