NVIDIA NeMo Intent model

I am trying to import the NeMo IntentSlotClassificationModel with this code:

from nemo.collections import nlp as nemo_nlp  # needed for the nemo_nlp.models lookup below

print(nemo_nlp.models.IntentSlotClassificationModel.list_available_models())
from nemo.collections.nlp.models import IntentSlotClassificationModel
nemo_intent = IntentSlotClassificationModel.from_pretrained("Joint_Intent_Slot_Assistant")

but I get this error:

[PretrainedModelInfo(
pretrained_model_name=Joint_Intent_Slot_Assistant,
description=This models is trained on https://github.com/xliuhw/NLU-Evaluation-Data dataset which includes 64 various intents and 55 slots. Final Intent accuracy is about 87%, Slot accuracy is about 89%.,
location=https://api.ngc.nvidia.com/v2/models/nvidia/nemonlpmodels/versions/1.0.0a5/files/Joint_Intent_Slot_Assistant.nemo
)]
[NeMo I 2021-08-21 13:38:16 cloud:56] Found existing object /root/.cache/torch/NeMo/NeMo_1.0.2/Joint_Intent_Slot_Assistant/7643e366af80f1bee32349aeeb92b7ca/Joint_Intent_Slot_Assistant.nemo.
[NeMo I 2021-08-21 13:38:16 cloud:62] Re-using file from: /root/.cache/torch/NeMo/NeMo_1.0.2/Joint_Intent_Slot_Assistant/7643e366af80f1bee32349aeeb92b7ca/Joint_Intent_Slot_Assistant.nemo
[NeMo I 2021-08-21 13:38:16 common:675] Instantiating model from pre-trained checkpoint
Using bos_token, but it is not set yet.
Using eos_token, but it is not set yet.
[NeMo W 2021-08-21 13:38:26 modelPT:138] If you intend to do training or fine-tuning, please call the ModelPT.setup_training_data() method and provide a valid configuration file to setup the train data loader.
Train config :
prefix: train
batch_size: 32
shuffle: true
num_samples: -1
num_workers: 2
drop_last: false
pin_memory: false

[NeMo W 2021-08-21 13:38:26 modelPT:145] If you intend to do validation, please call the ModelPT.setup_validation_data() or ModelPT.setup_multiple_validation_data() method and provide a valid configuration file to setup the validation data loader(s).
Validation config :
prefix: test
batch_size: 32
shuffle: false
num_samples: -1
num_workers: 2
drop_last: false
pin_memory: false

[NeMo W 2021-08-21 13:38:26 modelPT:1198] World size can only be set by PyTorch Lightning Trainer.
[NeMo W 2021-08-21 13:38:26 modelPT:198] You tried to register an artifact under config key=tokenizer.vocab_file but an artifact for it has already been registered.
[NeMo W 2021-08-21 13:38:26 nemo_logging:349] /usr/local/lib/python3.7/dist-packages/nemo/core/classes/modelPT.py:243: UserWarning: update_node() is deprecated, use OmegaConf.update(). (Since 2.0)
self.cfg.update_node(config_path, return_path)

Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertModel: ['cls.predictions.transform.dense.bias', 'cls.predictions.transform.dense.weight', 'cls.seq_relationship.weight', 'cls.predictions.decoder.weight', 'cls.predictions.bias', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.seq_relationship.bias']
- This IS expected if you are initializing BertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertEncoder: ['cls.predictions.transform.dense.bias', 'cls.predictions.transform.dense.weight', 'cls.seq_relationship.weight', 'cls.predictions.decoder.weight', 'cls.predictions.bias', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.seq_relationship.bias']
- This IS expected if you are initializing BertEncoder from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertEncoder from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).

RuntimeError                              Traceback (most recent call last)
<ipython-input-…> in <module>()
      2 import pytorch_lightning as pl
      3 from nemo.collections.nlp.models import IntentSlotClassificationModel
----> 4 nemo_intent = IntentSlotClassificationModel.from_pretrained("Joint_Intent_Slot_Assistant")

4 frames
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in load_state_dict(self, state_dict, strict)
   1405         if len(error_msgs) > 0:
   1406             raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
-> 1407                 self.__class__.__name__, "\n\t".join(error_msgs)))
   1408         return _IncompatibleKeys(missing_keys, unexpected_keys)
   1409

RuntimeError: Error(s) in loading state_dict for IntentSlotClassificationModel:
Missing key(s) in state_dict: "bert_model.embeddings.position_ids".
size mismatch for classifier.intent_mlp.layer2.weight: copying a param with shape torch.Size([64, 768]) from checkpoint, the shape in current model is torch.Size([1, 768]).
size mismatch for classifier.intent_mlp.layer2.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([1]).
size mismatch for classifier.slot_mlp.layer2.weight: copying a param with shape torch.Size([55, 768]) from checkpoint, the shape in current model is torch.Size([1, 768]).
size mismatch for classifier.slot_mlp.layer2.bias: copying a param with shape torch.Size([55]) from checkpoint, the shape in current model is torch.Size([1]).

Does someone know how to use it without training?
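
For reference, a minimal sketch of one thing worth trying: download the .nemo archive from the NGC location printed by list_available_models() and restore it with restore_from() instead of from_pretrained(). This is untested; restore_from() may rebuild the classifier heads the same way, but it at least separates the download from the restore step:

# Untested sketch: fetch the .nemo archive (URL taken from the
# PretrainedModelInfo output above) and restore it directly.
import urllib.request
from nemo.collections.nlp.models import IntentSlotClassificationModel

url = ("https://api.ngc.nvidia.com/v2/models/nvidia/nemonlpmodels/"
       "versions/1.0.0a5/files/Joint_Intent_Slot_Assistant.nemo")
urllib.request.urlretrieve(url, "Joint_Intent_Slot_Assistant.nemo")
nemo_intent = IntentSlotClassificationModel.restore_from("Joint_Intent_Slot_Assistant.nemo")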

Hi @benjamin.vollmers
Please refer to the link below for a sample and more details related to "Joint_Intent_Slot":

https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/joint_intent_slot.html
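
As a rough illustration of the inference pattern from that page (a sketch; predict_from_examples and the test_ds config argument come from the NeMo joint intent/slot tutorial notebook and may differ between versions):

# Sketch based on the NeMo joint intent/slot tutorial; the method name and
# the model.cfg.test_ds argument are assumptions and may vary across versions.
from nemo.collections.nlp.models import IntentSlotClassificationModel

model = IntentSlotClassificationModel.from_pretrained("Joint_Intent_Slot_Assistant")
queries = ["set an alarm for seven tomorrow", "what is the weather in Berlin"]
pred_intents, pred_slots = model.predict_from_examples(queries, model.cfg.test_ds)
print(pred_intents)
print(pred_slots)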

Thanks

Hello,
I know this page. I would like to download a pretrained model for this, because I run it on Google Colab, and that way I don't have to train it every time I start the notebook. On that page I haven't found anything about a pretrained version.

Hi @benjamin.vollmers
Below link might be helpful to find all pre-trained models.
https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/core/core.html#pretrained
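
For instance, every NeMo model class exposes the same listing used in the first line of the code above; the fields match the PretrainedModelInfo printout earlier in this thread:

# Query the available pretrained checkpoints programmatically.
from nemo.collections.nlp.models import IntentSlotClassificationModel

for info in IntentSlotClassificationModel.list_available_models():
    print(info.pretrained_model_name)
    print(info.location)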

Thanks

Hello,
I know this page, and as you can see in the code I sent, I have used it. For intent and slot classification I got this model:

Joint_Intent_Slot_Assistant

but I can't import it because of the error shown above. Now I want to know what this error is and how to fix it.
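
For what it's worth, the size mismatches in the traceback suggest the restored config builds the intent and slot heads with a single output each (shapes [1, 768] and [1]) while the checkpoint was trained with 64 intents and 55 slots. One way to check, assuming a .nemo file is a plain tar archive containing a model_config.yaml (the cache path is the one from the log above):

# Sketch: inspect the config packed inside the cached .nemo archive.
import tarfile

path = ("/root/.cache/torch/NeMo/NeMo_1.0.2/Joint_Intent_Slot_Assistant/"
        "7643e366af80f1bee32349aeeb92b7ca/Joint_Intent_Slot_Assistant.nemo")
with tarfile.open(path) as tar:  # auto-detects gzip vs. plain tar
    for member in tar.getmembers():
        print(member.name)  # e.g. model_config.yaml, model weights
        if member.name.endswith("model_config.yaml"):
            print(tar.extractfile(member).read().decode())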

Hi @benjamin.vollmers,
We are currently looking into this.
Please allow us some time.
Thanks!