Challenges in Fine-Tuning an Asian Language ASR Model with an Open-Source Dataset

Please provide the following information when requesting support.

Hardware - GPU: NVIDIA T4 (4x on the instance below)
Hardware - CPU: 48 vCPUs (g4dn.12xlarge)
Operating System: Ubuntu 22.04 running on an AWS EC2 g4dn.12xlarge instance

I recently followed a tutorial to fine-tune a Japanese (ja-JP) ASR model using the NeMo framework.
The fine-tuning improved the model’s accuracy on words from the training dataset, but the model seems to have lost its original ability to recognize words that are not in that dataset, which was not an issue before fine-tuning.

I suspect the issue stems from this part of the script, which may be restricting the model’s vocabulary to only the labels of the training dataset:
asr_model.cfg.labels = asr_model.cfg.train_ds.labels
asr_model.save_to("asr_customize_vocab.nemo")
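
One thing I was wondering: instead of replacing the label set outright, could I merge the pretrained model’s labels with the dataset’s labels so nothing the base model already knows gets dropped? Below is a rough, untested sketch of what I mean (assuming this is a character-based CTC model; I believe change_vocabulary is the documented API for swapping labels on those models, and as far as I understand it reinitializes the decoder, so it would presumably have to run before fine-tuning rather than at save time):

# Keep every label the pretrained model already knows, then append any
# new labels that appear only in my training dataset.
original_labels = list(asr_model.cfg.labels)
train_labels = list(asr_model.cfg.train_ds.labels)
merged_labels = original_labels + [l for l in train_labels if l not in original_labels]

# change_vocabulary rebuilds the decoder for the new label set, so this
# would need to happen before fine-tuning, not when saving the result.
asr_model.change_vocabulary(new_vocabulary=merged_labels)
asr_model.save_to("asr_customize_vocab.nemo")

Is that the right way to preserve the original vocabulary, or am I misunderstanding what the labels config controls?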

I’m looking for advice on adjusting the script so that the model can incrementally learn from the training dataset without losing its ability to recognize other words. Any suggestions on how to modify the script, or on a better way to approach this problem, would be greatly appreciated.
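
From what I have read, this kind of regression sounds like catastrophic forgetting, and a common mitigation is to mix some general-domain audio back into the fine-tuning data (often together with a lower learning rate). If that is the right direction, I could concatenate manifests as sketched below. The file names here are just placeholders for illustration, and I am relying on NeMo manifests being JSON-lines files:

import json

# Placeholder file names: general-domain Japanese data plus my custom set.
general_manifest = "general_ja_train_manifest.json"
custom_manifest = "custom_train_manifest.json"
combined_manifest = "combined_train_manifest.json"

# NeMo manifests are JSON-lines files (one JSON object per line), so a
# combined manifest is just the two files concatenated line by line.
with open(combined_manifest, "w", encoding="utf-8") as out_f:
    for path in (general_manifest, custom_manifest):
        with open(path, encoding="utf-8") as in_f:
            for line in in_f:
                line = line.strip()
                if line:
                    json.loads(line)  # sanity-check each entry parses as JSON
                    out_f.write(line + "\n")

I would then point asr_model.cfg.train_ds.manifest_filepath at the combined file before calling asr_model.setup_training_data(asr_model.cfg.train_ds). Would that be a reasonable approach here?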

Could anyone help with this? Thanks!