TAO text_classification evaluate failing

Please provide the following information when requesting support.
• Hardware: T4 (on AWS EC2)
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc)
• TLT Version (Please run "tlt info --verbose" and share "docker_tag" here)

docker version
Client: Docker Engine - Community
 Version:           20.10.7
 API version:       1.41
 Go version:        go1.13.15
 Git commit:        f0df350
 Built:             Wed Jun 2 11:56:40 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.7
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       b0f5bc3
  Built:            Wed Jun 2 11:54:48 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.6
  GitCommit:        d71fcd7d8303cbf684402823e425e9dd2e99285d
 runc:
  Version:          1.0.0-rc95
  GitCommit:        b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

• Training spec file(If have, please share here)
• How to reproduce the issue? (This is for errors. Please share the command line and the detailed log here.)

(taoenv) ubuntu@ip-172-31-14-240:~/jarvis_quickstart_v1.0.0-b.1$ tao text_classification evaluate -e /specs/nlp/text_classification/evaluate.yaml -r /results/nlp/text_classification/evaluate -m /results/nlp/text_classification/train/checkpoints/trained-model.tlt -g 1 -k $KEY test_ds.file_path=/data/sst2/test.tsv test_ds.batch_size=32
2021-08-26 09:59:25,123 [INFO] root: Registry: ['nvcr.io']
2021-08-26 09:59:25,225 [WARNING] tlt.components.docker_handler.docker_handler:
Docker will run the commands as root. If you would like to retain your
local host permissions, please add the "user":"UID:GID" in the
DockerOptions portion of the "/home/ubuntu/.tao_mounts.json" file. You can obtain your
users UID and GID by using the "id -u" and "id -g" commands on the
terminal.
[NeMo W 2021-08-26 09:59:28 experimental:27] Module <class 'nemo.collections.nlp.modules.common.megatron.megatron_bert.MegatronBertEncoder'> is experimental, not ready for production and is not fully supported. Use at your own risk.
[NeMo W 2021-08-26 09:59:31 experimental:27] Module <class 'nemo.collections.nlp.modules.common.megatron.megatron_bert.MegatronBertEncoder'> is experimental, not ready for production and is not fully supported. Use at your own risk.
[NeMo I 2021-08-26 09:59:32 tlt_logging:20] Experiment configuration:
restore_from: /results/nlp/text_classification/train/checkpoints/trained-model.tlt
exp_manager:
  explicit_log_dir: /results/nlp/text_classification/evaluate
  exp_dir: null
  name: null
  version: null
  use_datetime_version: true
  resume_if_exists: false
  resume_past_end: false
  resume_ignore_no_checkpoint: false
  create_tensorboard_logger: false
  summary_writer_kwargs: null
  create_wandb_logger: false
  wandb_logger_kwargs: null
  create_checkpoint_callback: false
  checkpoint_callback_params:
    filepath: null
    monitor: val_loss
    verbose: true
    save_last: true
    save_top_k: 3
    save_weights_only: false
    mode: auto
    period: 1
    prefix: null
    postfix: .nemo
    save_best_model: false
  files_to_copy: null
trainer:
  logger: false
  checkpoint_callback: false
  callbacks: null
  default_root_dir: null
  gradient_clip_val: 0.0
  process_position: 0
  num_nodes: 1
  num_processes: 1
  gpus: 1
  auto_select_gpus: false
  tpu_cores: null
  log_gpu_memory: null
  progress_bar_refresh_rate: 1
  overfit_batches: 0.0
  track_grad_norm: -1
  check_val_every_n_epoch: 1
  fast_dev_run: false
  accumulate_grad_batches: 1
  max_epochs: 1000
  min_epochs: 1
  max_steps: null
  min_steps: null
  limit_train_batches: 1.0
  limit_val_batches: 1.0
  limit_test_batches: 1.0
  val_check_interval: 1.0
  flush_logs_every_n_steps: 100
  log_every_n_steps: 50
  accelerator: ddp
  sync_batchnorm: false
  precision: 32
  weights_summary: full
  weights_save_path: null
  num_sanity_val_steps: 2
  truncated_bptt_steps: null
  resume_from_checkpoint: null
  profiler: null
  benchmark: false
  deterministic: false
  reload_dataloaders_every_epoch: false
  auto_lr_find: false
  replace_sampler_ddp: true
  terminate_on_nan: false
  auto_scale_batch_size: false
  prepare_data_per_node: true
  amp_backend: native
  amp_level: O2
test_ds:
  file_path: /data/sst2/test.tsv
  batch_size: 32
  shuffle: false
  num_samples: -1
  num_workers: 3
  drop_last: false
  pin_memory: false
encryption_key: '***'

GPU available: True, used: True
GPU available: True, used: True
TPU available: None, using: 0 TPU cores
TPU available: None, using: 0 TPU cores
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
[NeMo W 2021-08-26 09:59:32 exp_manager:380] Exp_manager is logging to /results/nlp/text_classification/evaluate, but it already exists.
[NeMo I 2021-08-26 09:59:32 exp_manager:194] Experiments will be logged at /results/nlp/text_classification/evaluate
[NeMo W 2021-08-26 09:59:36 modelPT:193] Using /tmp/tmpzsinnlml/tokenizer.vocab_file instead of tokenizer.vocab_file.
Using bos_token, but it is not set yet.
Using eos_token, but it is not set yet.
[NeMo W 2021-08-26 09:59:36 modelPT:1202] World size can only be set by PyTorch Lightning Trainer.
[NeMo I 2021-08-26 09:59:48 text_classification_dataset:120] Read 1822 examples from /data/sst2/test.tsv.
[NeMo I 2021-08-26 09:59:48 text_classification_dataset:238] *** Example ***
[NeMo I 2021-08-26 09:59:48 text_classification_dataset:239] example 0: ['1784', 'full', 'frontal', 'is', 'the', 'antidote', 'for', 'soderbergh', 'fans', 'who', 'think', 'he', "'s", 'gone', 'too', 'commercial', 'since', 'his', 'two', 'oscar', 'nominated', 'films', 'in']
[NeMo I 2021-08-26 09:59:48 text_classification_dataset:240] subtokens: [CLS] 1784 full frontal is the anti ##dote for so ##der ##berg ##h fans who think he ' s gone too commercial since his two oscar nominated films in [SEP]
[NeMo I 2021-08-26 09:59:48 text_classification_dataset:241] input_ids: 101 16496 2440 19124 2003 1996 3424 23681 2005 2061 4063 4059 2232 4599 2040 2228 2002 1005 1055 2908 2205 3293 2144 2010 2048 7436 4222 3152 1999 102
[NeMo I 2021-08-26 09:59:48 text_classification_dataset:242] segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
[NeMo I 2021-08-26 09:59:48 text_classification_dataset:243] input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
[NeMo I 2021-08-26 09:59:48 text_classification_dataset:244] label: 2000
[NeMo I 2021-08-26 09:59:48 data_preprocessing:295] Some stats of the lengths of the sequences:
[NeMo I 2021-08-26 09:59:48 data_preprocessing:297] Min: 30 | Max: 30 | Mean: 30.0 | Median: 30.0
[NeMo I 2021-08-26 09:59:48 data_preprocessing:303] 75 percentile: 30.00
[NeMo I 2021-08-26 09:59:48 data_preprocessing:304] 99 percentile: 30.00
initializing ddp: GLOBAL_RANK: 0, MEMBER: 1/1
initializing ddp: GLOBAL_RANK: 0, MEMBER: 1/1
Added key: store_based_barrier_key:1 to store for rank: 0
Testing: 0it [00:00, ?it/s]/tmp/pip-req-build-_tx3iysr/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [0,0,0] Assertion t >= 0 && t < n_classes failed.
Traceback (most recent call last):
  File "/opt/conda/lib/python3.8/site-packages/hydra/_internal/utils.py", line 198, in run_and_report
    return func()
  File "/opt/conda/lib/python3.8/site-packages/hydra/_internal/utils.py", line 347, in <lambda>
    lambda: hydra.run(
  File "/opt/conda/lib/python3.8/site-packages/hydra/_internal/hydra.py", line 107, in run
    return run_job(
  File "/opt/conda/lib/python3.8/site-packages/hydra/core/utils.py", line 127, in run_job
    ret.return_value = task_function(task_cfg)
  File "/tlt-nemo/nlp/text_classification/scripts/evaluate.py", line 94, in main
  File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 754, in test
    results = self.__test_given_model(model, test_dataloaders)
  File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 819, in __test_given_model
    results = self.fit(model)
  File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 472, in fit
    results = self.accelerator_backend.train()
  File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/accelerators/ddp_accelerator.py", line 152, in train
    results = self.ddp_train(process_idx=self.task_idx, model=model)
  File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/accelerators/ddp_accelerator.py", line 307, in ddp_train
    results = self.train_or_test()
  File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 67, in train_or_test
    results = self.trainer.run_test()
  File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 661, in run_test
    eval_loop_results, _ = self.run_evaluation(test_mode=True)
  File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 608, in run_evaluation
    output = self.evaluation_loop.evaluation_step(test_mode, batch, batch_idx, dataloader_idx)
  File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 175, in evaluation_step
    output = self.trainer.accelerator_backend.test_step(args)
  File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/accelerators/ddp_accelerator.py", line 164, in test_step
    return self._step(args)
  File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/accelerators/ddp_accelerator.py", line 172, in _step
    output = self.trainer.model(*args)
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 881, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/overrides/data_parallel.py", line 182, in forward
    output = self.module.test_step(*inputs[0], **kwargs[0])
  File "/opt/conda/lib/python3.8/site-packages/nemo/collections/nlp/models/text_classification/text_classification_model.py", line 178, in test_step
    return self.validation_step(batch, batch_idx)
  File "/opt/conda/lib/python3.8/site-packages/nemo/collections/nlp/models/text_classification/text_classification_model.py", line 145, in validation_step
    tp, fn, fp, _ = self.classification_report(preds, labels)
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 881, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/metrics/metric.py", line 154, in forward
    self.update(*args, **kwargs)
  File "/opt/conda/lib/python3.8/site-packages/pytorch_lightning/metrics/metric.py", line 200, in wrapped_func
    return update(*args, **kwargs)
  File "/opt/conda/lib/python3.8/site-packages/nemo/collections/nlp/metrics/classification_report.py", line 94, in update
    TP.append((label_predicted == current_label)[label_predicted].sum())
RuntimeError: CUDA error: device-side assert triggered

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/tlt-nemo/nlp/text_classification/scripts/evaluate.py", line 101, in <module>
  File "/opt/conda/lib/python3.8/site-packages/nemo/core/config/hydra_runner.py", line 98, in wrapper
    _run_hydra(
  File "/opt/conda/lib/python3.8/site-packages/hydra/_internal/utils.py", line 346, in _run_hydra
    run_and_report(
  File "/opt/conda/lib/python3.8/site-packages/hydra/_internal/utils.py", line 237, in run_and_report
    assert mdl is not None
AssertionError
Exception ignored in: <function tqdm.__del__ at 0x7f54b2d8f550>
Traceback (most recent call last):
  File "/opt/conda/lib/python3.8/site-packages/tqdm/std.py", line 1150, in __del__
  File "/opt/conda/lib/python3.8/site-packages/tqdm/std.py", line 1363, in close
  File "/opt/conda/lib/python3.8/site-packages/tqdm/std.py", line 1542, in display
  File "/opt/conda/lib/python3.8/site-packages/tqdm/std.py", line 1153, in __repr__
  File "/opt/conda/lib/python3.8/site-packages/tqdm/std.py", line 1503, in format_dict
TypeError: cannot unpack non-iterable NoneType object
terminate called after throwing an instance of 'c10::Error'
  what(): CUDA error: device-side assert triggered
Exception raised from create_event_internal at ../c10/cuda/CUDACachingAllocator.cpp:716 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x6c (0x7f5504e7e44c in /opt/conda/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0xfa (0x7f5504e450b4 in /opt/conda/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #2: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0x987 (0x7f5504ebcf97 in /opt/conda/lib/python3.8/site-packages/torch/lib/libc10_cuda.so)
frame #3: c10::TensorImpl::release_resources() + 0x5c (0x7f5504e647dc in /opt/conda/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #4: <unknown function> + 0x8d2572 (0x7f554ccef572 in /opt/conda/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
frame #5: <unknown function> + 0x8d2605 (0x7f554ccef605 in /opt/conda/lib/python3.8/site-packages/torch/lib/libtorch_python.so)

frame #32: __libc_start_main + 0xf3 (0x7f5577d390b3 in /usr/lib/x86_64-linux-gnu/libc.so.6)

2021-08-26 09:59:51,373 [INFO] tlt.components.docker_handler.docker_handler: Stopping container.
(taoenv) ubuntu@ip-172-31-14-240:~/jarvis_quickstart_v1.0.0-b.1$ python3 --version
Python 3.6.9

Attaching the evaluate.yaml spec: evaluate.yaml (399 Bytes)

Can you run nvidia-smi and nvidia-smi -L?

nvidia-smi -L
GPU 0: Tesla T4 (UUID: GPU-9a3d7360-595d-cb85-a728-26f7058bc5c7)

nvidia-smi
Fri Aug 27 08:58:00 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.57.02    Driver Version: 470.57.02    CUDA Version: 11.4     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla T4            On   | 00000000:00:1E.0 Off |                    0 |
| N/A   32C    P8    10W /  70W |      0MiB / 15109MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

I am following
https://developer.nvidia.com/blog/building-and-deploying-conversational-ai-models-using-tao-toolkit/

I have a query.
For fine-tuning the downloaded domain classification model, I ran:

tao text_classification finetune \
    -e /specs/nlp/text_classification/finetune.yaml \
    -r /results/nlp/text_classification/finetune \
    -m /results/nlp-tc-trained-model.tao \
    -g 1 \
    -k $KEY \
    finetuning_ds.file_path=/data/my_domain_classification/train.tsv \
    validation_ds.file_path=/data/my_domain_classification/dev.tsv

Where do we get the train.tsv and dev.tsv files from?

I copied them from the previous steps in the link pasted above and placed them in the required path. Am I doing something wrong here?

ubuntu@ip-172-31-14-240:~/jarvis_quickstart_v1.0.0-b.1$ tao info --verbose
Configuration of the TAO Toolkit Instance

dockers:
nvidia/tao/tao-toolkit-tf:
docker_registry: nvcr.io
docker_tag: v3.21.08-py3
tasks:
1. augment
2. bpnet
3. classification
4. detectnet_v2
5. dssd
6. emotionnet
7. faster_rcnn
8. fpenet
9. gazenet
10. gesturenet
11. heartratenet
12. lprnet
13. mask_rcnn
14. multitask_classification
15. retinanet
16. ssd
17. unet
18. yolo_v3
19. yolo_v4
20. converter
nvidia/tao/tao-toolkit-pyt:
docker_registry: nvcr.io
docker_tag: v3.21.08-py3
tasks:
1. speech_to_text
2. speech_to_text_citrinet
3. text_classification
4. question_answering
5. token_classification
6. intent_slot_classification
7. punctuation_and_capitalization
nvidia/tao/tao-toolkit-lm:
docker_registry: nvcr.io
docker_tag: v3.21.08-py3
tasks:
1. n_gram
format_version: 1.0
toolkit_version: 3.21.08
published_date: 08/17/2021

See Text Classification - NVIDIA Docs: those are the training '.tsv' and validation '.tsv' files. Their format is described in Text Classification - NVIDIA Docs.
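
For reference, that format is one example per line: the sentence text, then a tab character, then an integer class label. A hypothetical two-line sample (with <TAB> standing in for a literal tab; the sentences and labels are placeholders, not your data):

hide new secretions from the parental units<TAB>0
the greatest musicians<TAB>1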

I suggest you download the Jupyter notebook and run again following its guide: Text Classification Notebook | NVIDIA NGC

Also, is your training dataset public? If yes, please share the link with us.

I am following exactly the link I shared earlier in this issue.
I took the dataset from SST-2.

https://developer.nvidia.com/blog/building-and-deploying-conversational-ai-models-using-tao-toolkit/

It seems the issue is with test.tsv: it needs to be converted to the specified format. Can we use the dataset_convert API to set the format for test.tsv?

I think so. See Text Classification - NVIDIA Docs:

If your dataset is stored in another format, you need to convert it to this format to use this model. There are conversion scripts available for the SST-2 and IMDB datasets to convert them from their original format to NeMo's format. To convert the original dataset to NeMo's format, you can use the dataset_convert utility, as in the following example:

tao text_classification dataset_convert \
    -e /specs/nlp/text_classification/dataset_convert.yaml \
    source_data_dir=SOURCE_PATH \
    target_data_dir=TARGET_PATH \
    dataset_name=sst2
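
For example, assuming the raw SST-2 download is mounted at /data/SST-2 and the converted files should land in /data/sst2 (illustrative paths; adjust them to your own .tao_mounts.json mapping, and add the -r results directory the CLI requires), the call might look like:

tao text_classification dataset_convert \
    -e /specs/nlp/text_classification/dataset_convert.yaml \
    -r /results/nlp/text_classification/dataset_convert \
    source_data_dir=/data/SST-2 \
    target_data_dir=/data/sst2 \
    dataset_name=sst2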

Thank you so much for the help, Morganh. Issue resolved.


With the same setup as mentioned in the issue above, I am getting this error when running the infer command.
Logs below:
tao text_classification infer -e /specs/nlp/text_classification/infer.yaml -r /results/nlp/text_classification/infer/ -m /results/nlp/text_classification/train/checkpoints/trained-model.tlt -g 1 -k $KEY
2021-09-01 09:40:27,870 [INFO] root: Registry: ['nvcr.io']
2021-09-01 09:40:27,971 [WARNING] tlt.components.docker_handler.docker_handler:
Docker will run the commands as root. If you would like to retain your
local host permissions, please add the "user":"UID:GID" in the
DockerOptions portion of the "/home/ubuntu/.tao_mounts.json" file. You can obtain your
users UID and GID by using the "id -u" and "id -g" commands on the
terminal.
[NeMo W 2021-09-01 09:40:31 experimental:27] Module <class 'nemo.collections.nlp.modules.common.megatron.megatron_bert.MegatronBertEncoder'> is experimental, not ready for production and is not fully supported. Use at your own risk.
[NeMo W 2021-09-01 09:40:34 experimental:27] Module <class 'nemo.collections.nlp.modules.common.megatron.megatron_bert.MegatronBertEncoder'> is experimental, not ready for production and is not fully supported. Use at your own risk.
[NeMo I 2021-09-01 09:40:35 tlt_logging:20] Experiment configuration:
restore_from: /results/nlp/text_classification/train/checkpoints/trained-model.tlt
exp_manager:
  task_name: infer
  explicit_log_dir: /results/nlp/text_classification/infer/
input_batch:
- by the end of no such thing the audience , like beatrice , has a watchful affection
  for the monster .
- director rob marshall went out gunning to make a great one .
- uneasy mishmash of styles and genres .
- I love exotic science fiction / fantasy movies but this one was very unpleasant
  to watch . Suggestions and images of child abuse , mutilated bodies (live or dead)
  , other gruesome scenes , plot holes , boring acting made this a regretable experience
  , The basic idea of entering another person's mind is not even new to the movies
  or TV (An Outer Limits episode was better at exploring this idea) . i gave it 4
  / 10 since some special effects were nice .
encryption_key: '*********'

[NeMo W 2021-09-01 09:40:35 exp_manager:26] Exp_manager is logging to /results/nlp/text_classification/infer/, but it already exists.
[NeMo W 2021-09-01 09:40:37 modelPT:193] Using /tmp/tmpr24pl3kf/tokenizer.vocab_file instead of tokenizer.vocab_file.
Using bos_token, but it is not set yet.
Using eos_token, but it is not set yet.
[NeMo W 2021-09-01 09:40:37 modelPT:1202] World size can only be set by PyTorch Lightning Trainer.
Traceback (most recent call last):
  File "/opt/conda/lib/python3.8/site-packages/hydra/_internal/utils.py", line 198, in run_and_report
    return func()
  File "/opt/conda/lib/python3.8/site-packages/hydra/_internal/utils.py", line 347, in <lambda>
    lambda: hydra.run(
  File "/opt/conda/lib/python3.8/site-packages/hydra/_internal/hydra.py", line 107, in run
    return run_job(
  File "/opt/conda/lib/python3.8/site-packages/hydra/core/utils.py", line 127, in run_job
    ret.return_value = task_function(task_cfg)
  File "/tlt-nemo/nlp/text_classification/scripts/infer.py", line 83, in main
  File "/opt/conda/lib/python3.8/posixpath.py", line 142, in basename
    p = os.fspath(p)
TypeError: expected str, bytes or os.PathLike object, not NoneType

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/tlt-nemo/nlp/text_classification/scripts/infer.py", line 113, in <module>
  File "/opt/conda/lib/python3.8/site-packages/nemo/core/config/hydra_runner.py", line 98, in wrapper
    _run_hydra(
  File "/opt/conda/lib/python3.8/site-packages/hydra/_internal/utils.py", line 346, in _run_hydra
    run_and_report(
  File "/opt/conda/lib/python3.8/site-packages/hydra/_internal/utils.py", line 237, in run_and_report
    assert mdl is not None
AssertionError
2021-09-01 09:40:47,348 [INFO] tlt.components.docker_handler.docker_handler: Stopping container.
(taoenv) ubuntu@ip-172-31-14-240:~/jarvis_quickstart_v1.0.0-b.1$ cd ~/tao
(taoenv) ubuntu@ip-172-31-14-240:~/tao$ ls
data results specs
(taoenv) ubuntu@ip-172-31-14-240:~/tao$ cd data
(taoenv) ubuntu@ip-172-31-14-240:~/tao/data$ ls
NLU-Evaluation-Data-master NLU-Evaluation-Data-processed SST-2 my_domain_classification sst2
(taoenv) ubuntu@ip-172-31-14-240:~/tao/data$ cd NLU-Evaluation-Data-processed/
(taoenv) ubuntu@ip-172-31-14-240:~/tao/data/NLU-Evaluation-Data-processed$ ls
dict.intents.csv dict.slots.csv intent_labels.csv slot_labels.csv test.tsv test_intent_stats.tsv test_slot_stats.tsv test_slots.tsv train.tsv train_intent_stats.tsv train_slot_stats.tsv train_slots.tsv
(taoenv) ubuntu@ip-172-31-14-240:~/tao/data/NLU-Evaluation-Data-processed$
(taoenv) ubuntu@ip-172-31-14-240:~/tao/data/NLU-Evaluation-Data-processed$
(taoenv) ubuntu@ip-172-31-14-240:~/tao/data/NLU-Evaluation-Data-processed$ cd ../../../
(taoenv) ubuntu@ip-172-31-14-240:~$ cd jarvis_quickstart_v1.0.0-b.1/
(taoenv) ubuntu@ip-172-31-14-240:~/jarvis_quickstart_v1.0.0-b.1$ tao text_classification infer -e /specs/nlp/text_classification/infer.yaml -r /results/nlp/text_classification/infer/ -m /results/nlp/text_classification/finetune/checkpoints/finetuned-model.tlt -g 1 -k $KEY
2021-09-01 09:57:17,349 [INFO] root: Registry: ['nvcr.io']
2021-09-01 09:57:17,448 [WARNING] tlt.components.docker_handler.docker_handler:
Docker will run the commands as root. If you would like to retain your
local host permissions, please add the "user":"UID:GID" in the
DockerOptions portion of the "/home/ubuntu/.tao_mounts.json" file. You can obtain your
users UID and GID by using the "id -u" and "id -g" commands on the
terminal.
[NeMo W 2021-09-01 09:57:20 experimental:27] Module <class 'nemo.collections.nlp.modules.common.megatron.megatron_bert.MegatronBertEncoder'> is experimental, not ready for production and is not fully supported. Use at your own risk.
[NeMo W 2021-09-01 09:57:24 experimental:27] Module <class 'nemo.collections.nlp.modules.common.megatron.megatron_bert.MegatronBertEncoder'> is experimental, not ready for production and is not fully supported. Use at your own risk.
[NeMo I 2021-09-01 09:57:24 tlt_logging:20] Experiment configuration:
restore_from: /results/nlp/text_classification/finetune/checkpoints/finetuned-model.tlt
exp_manager:
  task_name: infer
  explicit_log_dir: /results/nlp/text_classification/infer/
input_batch:
- by the end of no such thing the audience , like beatrice , has a watchful affection
  for the monster .
- director rob marshall went out gunning to make a great one .
- uneasy mishmash of styles and genres .
- I love exotic science fiction / fantasy movies but this one was very unpleasant
  to watch . Suggestions and images of child abuse , mutilated bodies (live or dead)
  , other gruesome scenes , plot holes , boring acting made this a regretable experience
  , The basic idea of entering another person's mind is not even new to the movies
  or TV (An Outer Limits episode was better at exploring this idea) . i gave it 4
  / 10 since some special effects were nice .
encryption_key: '********'

[NeMo W 2021-09-01 09:57:24 exp_manager:26] Exp_manager is logging to /results/nlp/text_classification/infer/, but it already exists.
[NeMo W 2021-09-01 09:57:27 modelPT:193] Using /tmp/tmpdzz4go9b/tokenizer.vocab_file instead of tokenizer.vocab_file.
Using bos_token, but it is not set yet.
Using eos_token, but it is not set yet.
[NeMo W 2021-09-01 09:57:28 modelPT:1202] World size can only be set by PyTorch Lightning Trainer.
Traceback (most recent call last):
  File "/opt/conda/lib/python3.8/site-packages/hydra/_internal/utils.py", line 198, in run_and_report
    return func()
  File "/opt/conda/lib/python3.8/site-packages/hydra/_internal/utils.py", line 347, in <lambda>
    lambda: hydra.run(
  File "/opt/conda/lib/python3.8/site-packages/hydra/_internal/hydra.py", line 107, in run
    return run_job(
  File "/opt/conda/lib/python3.8/site-packages/hydra/core/utils.py", line 127, in run_job
    ret.return_value = task_function(task_cfg)
  File "/tlt-nemo/nlp/text_classification/scripts/infer.py", line 83, in main
  File "/opt/conda/lib/python3.8/site-packages/omegaconf/dictconfig.py", line 317, in __getitem__
    self._format_and_raise(
  File "/opt/conda/lib/python3.8/site-packages/omegaconf/base.py", line 95, in _format_and_raise
    format_and_raise(
  File "/opt/conda/lib/python3.8/site-packages/omegaconf/_utils.py", line 629, in format_and_raise
    _raise(ex, cause)
  File "/opt/conda/lib/python3.8/site-packages/omegaconf/_utils.py", line 610, in _raise
    raise ex  # set end OC_CAUSE=1 for full backtrace
omegaconf.errors.ConfigKeyError: Key 'class_labels' is not in struct
    full_key: class_labels
    reference_type=Optional[Dict[Union[str, Enum], Any]]
    object_type=dict

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/tlt-nemo/nlp/text_classification/scripts/infer.py", line 113, in <module>
  File "/opt/conda/lib/python3.8/site-packages/nemo/core/config/hydra_runner.py", line 98, in wrapper
    _run_hydra(
  File "/opt/conda/lib/python3.8/site-packages/hydra/_internal/utils.py", line 346, in _run_hydra
    run_and_report(
  File "/opt/conda/lib/python3.8/site-packages/hydra/_internal/utils.py", line 237, in run_and_report
    assert mdl is not None
AssertionError
2021-09-01 09:57:38,209 [INFO] tlt.components.docker_handler.docker_handler: Stopping container.
(taoenv) ubuntu@ip-172-31-14-240:~/jarvis_quickstart_v1.0.0-b.1$ tao text_classification infer -e /specs/nlp/text_classification/infer.yaml -r /results/nlp/text_classification/infer/ -m /results/nlp/text_classification/train/checkpoints/trained-model.tlt -g 1 -k $KEY
2021-09-01 10:00:41,887 [INFO] root: Registry: ['nvcr.io']
2021-09-01 10:00:41,988 [WARNING] tlt.components.docker_handler.docker_handler:
Docker will run the commands as root. If you would like to retain your
local host permissions, please add the "user":"UID:GID" in the
DockerOptions portion of the "/home/ubuntu/.tao_mounts.json" file. You can obtain your
users UID and GID by using the "id -u" and "id -g" commands on the
terminal.
[NeMo W 2021-09-01 10:00:45 experimental:27] Module <class 'nemo.collections.nlp.modules.common.megatron.megatron_bert.MegatronBertEncoder'> is experimental, not ready for production and is not fully supported. Use at your own risk.
[NeMo W 2021-09-01 10:00:48 experimental:27] Module <class 'nemo.collections.nlp.modules.common.megatron.megatron_bert.MegatronBertEncoder'> is experimental, not ready for production and is not fully supported. Use at your own risk.
[NeMo I 2021-09-01 10:00:49 tlt_logging:20] Experiment configuration:
restore_from: /results/nlp/text_classification/train/checkpoints/trained-model.tlt
exp_manager:
  task_name: infer
  explicit_log_dir: /results/nlp/text_classification/infer/
input_batch:
- by the end of no such thing the audience , like beatrice , has a watchful affection
  for the monster .
- director rob marshall went out gunning to make a great one .
- uneasy mishmash of styles and genres .
- I love exotic science fiction / fantasy movies but this one was very unpleasant
  to watch . Suggestions and images of child abuse , mutilated bodies (live or dead)
  , other gruesome scenes , plot holes , boring acting made this a regretable experience
  , The basic idea of entering another person's mind is not even new to the movies
  or TV (An Outer Limits episode was better at exploring this idea) . i gave it 4
  / 10 since some special effects were nice .
encryption_key: '****'

[NeMo W 2021-09-01 10:00:49 exp_manager:26] Exp_manager is logging to /results/nlp/text_classification/infer/, but it already exists.
[NeMo W 2021-09-01 10:00:51 modelPT:193] Using /tmp/tmppcfdktmo/tokenizer.vocab_file instead of tokenizer.vocab_file.
Using bos_token, but it is not set yet.
Using eos_token, but it is not set yet.
[NeMo W 2021-09-01 10:00:51 modelPT:1202] World size can only be set by PyTorch Lightning Trainer.
Traceback (most recent call last):
  File "/opt/conda/lib/python3.8/site-packages/hydra/_internal/utils.py", line 198, in run_and_report
    return func()
  File "/opt/conda/lib/python3.8/site-packages/hydra/_internal/utils.py", line 347, in <lambda>
    lambda: hydra.run(
  File "/opt/conda/lib/python3.8/site-packages/hydra/_internal/hydra.py", line 107, in run
    return run_job(
  File "/opt/conda/lib/python3.8/site-packages/hydra/core/utils.py", line 127, in run_job
    ret.return_value = task_function(task_cfg)
  File "/tlt-nemo/nlp/text_classification/scripts/infer.py", line 83, in main
  File "/opt/conda/lib/python3.8/posixpath.py", line 142, in basename
    p = os.fspath(p)
TypeError: expected str, bytes or os.PathLike object, not NoneType

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/tlt-nemo/nlp/text_classification/scripts/infer.py", line 113, in <module>
  File "/opt/conda/lib/python3.8/site-packages/nemo/core/config/hydra_runner.py", line 98, in wrapper
    _run_hydra(
  File "/opt/conda/lib/python3.8/site-packages/hydra/_internal/utils.py", line 346, in _run_hydra
    run_and_report(
  File "/opt/conda/lib/python3.8/site-packages/hydra/_internal/utils.py", line 237, in run_and_report
    assert mdl is not None
AssertionError
2021-09-01 10:01:01,216 [INFO] tlt.components.docker_handler.docker_handler: Stopping container.
(taoenv) ubuntu@ip-172-31-14-240:~/jarvis_quickstart_v1.0.0-b.1$ echo $HOME
/home/ubuntu
(taoenv) ubuntu@ip-172-31-14-240:~/jarvis_quickstart_v1.0.0-b.1$ cd ../
(taoenv) ubuntu@ip-172-31-14-240:~$ tao text_classification infer -e /specs/nlp/text_classification/infer.yaml -r /results/nlp/text_classification/infer/ -m /results/nlp/text_classification/train/checkpoints/trained-model.tlt -g 1 -k $KEY
2021-09-01 10:04:48,173 [INFO] root: Registry: ['nvcr.io']
2021-09-01 10:04:48,282 [WARNING] tlt.components.docker_handler.docker_handler:
Docker will run the commands as root. If you would like to retain your
local host permissions, please add the "user":"UID:GID" in the
DockerOptions portion of the "/home/ubuntu/.tao_mounts.json" file. You can obtain your
users UID and GID by using the "id -u" and "id -g" commands on the
terminal.
[NeMo W 2021-09-01 10:04:51 experimental:27] Module <class 'nemo.collections.nlp.modules.common.megatron.megatron_bert.MegatronBertEncoder'> is experimental, not ready for production and is not fully supported. Use at your own risk.
[NeMo W 2021-09-01 10:04:54 experimental:27] Module <class 'nemo.collections.nlp.modules.common.megatron.megatron_bert.MegatronBertEncoder'> is experimental, not ready for production and is not fully supported. Use at your own risk.
[NeMo I 2021-09-01 10:04:55 tlt_logging:20] Experiment configuration:
restore_from: /results/nlp/text_classification/train/checkpoints/trained-model.tlt
exp_manager:
  task_name: infer
  explicit_log_dir: /results/nlp/text_classification/infer/
input_batch:
- by the end of no such thing the audience , like beatrice , has a watchful affection
  for the monster .
- director rob marshall went out gunning to make a great one .
- uneasy mishmash of styles and genres .
- I love exotic science fiction / fantasy movies but this one was very unpleasant
  to watch . Suggestions and images of child abuse , mutilated bodies (live or dead)
  , other gruesome scenes , plot holes , boring acting made this a regretable experience
  , The basic idea of entering another person's mind is not even new to the movies
  or TV (An Outer Limits episode was better at exploring this idea) . i gave it 4
  / 10 since some special effects were nice .
encryption_key: '***'

[NeMo W 2021-09-01 10:04:55 exp_manager:26] Exp_manager is logging to /results/nlp/text_classification/infer/, but it already exists.
[NeMo W 2021-09-01 10:04:57 modelPT:193] Using /tmp/tmp8yjsgse_/tokenizer.vocab_file instead of tokenizer.vocab_file.
Using bos_token, but it is not set yet.
Using eos_token, but it is not set yet.
[NeMo W 2021-09-01 10:04:57 modelPT:1202] World size can only be set by PyTorch Lightning Trainer.
Traceback (most recent call last):
  File "/opt/conda/lib/python3.8/site-packages/hydra/_internal/utils.py", line 198, in run_and_report
    return func()
  File "/opt/conda/lib/python3.8/site-packages/hydra/_internal/utils.py", line 347, in <lambda>
    lambda: hydra.run(
  File "/opt/conda/lib/python3.8/site-packages/hydra/_internal/hydra.py", line 107, in run
    return run_job(
  File "/opt/conda/lib/python3.8/site-packages/hydra/core/utils.py", line 127, in run_job
    ret.return_value = task_function(task_cfg)
  File "/tlt-nemo/nlp/text_classification/scripts/infer.py", line 83, in main
  File "/opt/conda/lib/python3.8/posixpath.py", line 142, in basename
    p = os.fspath(p)
TypeError: expected str, bytes or os.PathLike object, not NoneType

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/tlt-nemo/nlp/text_classification/scripts/infer.py", line 113, in <module>
  File "/opt/conda/lib/python3.8/site-packages/nemo/core/config/hydra_runner.py", line 98, in wrapper
    _run_hydra(
  File "/opt/conda/lib/python3.8/site-packages/hydra/_internal/utils.py", line 346, in _run_hydra
    run_and_report(
  File "/opt/conda/lib/python3.8/site-packages/hydra/_internal/utils.py", line 237, in run_and_report
    assert mdl is not None
AssertionError
2021-09-01 10:05:07,560 [INFO] tlt.components.docker_handler.docker_handler: Stopping container.
(taoenv) ubuntu@ip-172-31-14-240:~$
(taoenv) ubuntu@ip-172-31-14-240:~$ cd tao/
(taoenv) ubuntu@ip-172-31-14-240:~/tao$ tao text_classification infer -e /specs/nlp/text_classification/infer.yaml -r /results/nlp/text_classification/infer/ -m /results/nlp/text_classification/train/checkpoints/trained-model.tlt -g 1 -k $KEY
2021-09-01 10:10:21,368 [INFO] root: Registry: ['nvcr.io']
2021-09-01 10:10:21,470 [WARNING] tlt.components.docker_handler.docker_handler:
Docker will run the commands as root. If you would like to retain your
local host permissions, please add the "user":"UID:GID" in the
DockerOptions portion of the "/home/ubuntu/.tao_mounts.json" file. You can obtain your
users UID and GID by using the "id -u" and "id -g" commands on the
terminal.
[NeMo W 2021-09-01 10:10:24 experimental:27] Module <class 'nemo.collections.nlp.modules.common.megatron.megatron_bert.MegatronBertEncoder'> is experimental, not ready for production and is not fully supported. Use at your own risk.
[NeMo W 2021-09-01 10:10:28 experimental:27] Module <class 'nemo.collections.nlp.modules.common.megatron.megatron_bert.MegatronBertEncoder'> is experimental, not ready for production and is not fully supported. Use at your own risk.
[NeMo I 2021-09-01 10:10:28 tlt_logging:20] Experiment configuration:
restore_from: /results/nlp/text_classification/train/checkpoints/trained-model.tlt
exp_manager:
  task_name: infer
  explicit_log_dir: /results/nlp/text_classification/infer/
input_batch:
- by the end of no such thing the audience , like beatrice , has a watchful affection
  for the monster .
- director rob marshall went out gunning to make a great one .
- uneasy mishmash of styles and genres .
- I love exotic science fiction / fantasy movies but this one was very unpleasant
  to watch . Suggestions and images of child abuse , mutilated bodies (live or dead)
  , other gruesome scenes , plot holes , boring acting made this a regretable experience
  , The basic idea of entering another person's mind is not even new to the movies
  or TV (An Outer Limits episode was better at exploring this idea) . i gave it 4
  / 10 since some special effects were nice .
encryption_key: '*****'

[NeMo W 2021-09-01 10:10:28 exp_manager:26] Exp_manager is logging to /results/nlp/text_classification/infer/, but it already exists.
[NeMo W 2021-09-01 10:10:30 modelPT:193] Using /tmp/tmpal82qy6u/tokenizer.vocab_file instead of tokenizer.vocab_file.
Using bos_token, but it is not set yet.
Using eos_token, but it is not set yet.
[NeMo W 2021-09-01 10:10:30 modelPT:1202] World size can only be set by PyTorch Lightning Trainer.
Traceback (most recent call last):
  File "/opt/conda/lib/python3.8/site-packages/hydra/_internal/utils.py", line 198, in run_and_report
    return func()
  File "/opt/conda/lib/python3.8/site-packages/hydra/_internal/utils.py", line 347, in <lambda>
    lambda: hydra.run(
  File "/opt/conda/lib/python3.8/site-packages/hydra/_internal/hydra.py", line 107, in run
    return run_job(
  File "/opt/conda/lib/python3.8/site-packages/hydra/core/utils.py", line 127, in run_job
    ret.return_value = task_function(task_cfg)
  File "/tlt-nemo/nlp/text_classification/scripts/infer.py", line 83, in main
  File "/opt/conda/lib/python3.8/posixpath.py", line 142, in basename
    p = os.fspath(p)
TypeError: expected str, bytes or os.PathLike object, not NoneType

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/tlt-nemo/nlp/text_classification/scripts/infer.py", line 113, in <module>
  File "/opt/conda/lib/python3.8/site-packages/nemo/core/config/hydra_runner.py", line 98, in wrapper
    _run_hydra(
  File "/opt/conda/lib/python3.8/site-packages/hydra/_internal/utils.py", line 346, in _run_hydra
    run_and_report(
  File "/opt/conda/lib/python3.8/site-packages/hydra/_internal/utils.py", line 237, in run_and_report
    assert mdl is not None
AssertionError
2021-09-01 10:10:40,688 [INFO] tlt.components.docker_handler.docker_handler: Stopping container.
(taoenv) ubuntu@ip-172-31-14-240:~/tao$ tao text_classification infer -e /specs/nlp/text_classification/infer.yaml -r /results/nlp/text_classification/infer/ -m /results/nlp/text_classification/train/checkpoints/trained-model.tlt -g 1 -k $KEY^C
(taoenv) ubuntu@ip-172-31-14-240:~/tao$ tao text_classification infer -e /specs/nlp/text_classification/infer.yaml -m /results/nlp/text_classification/train/checkpoints/trained-model.tlt -k $KEY
2021-09-01 10:25:38,664 [INFO] root: Registry: ['nvcr.io']
2021-09-01 10:25:38,765 [WARNING] tlt.components.docker_handler.docker_handler:
Docker will run the commands as root. If you would like to retain your
local host permissions, please add the "user":"UID:GID" in the
DockerOptions portion of the "/home/ubuntu/.tao_mounts.json" file. You can obtain your
users UID and GID by using the "id -u" and "id -g" commands on the
terminal.
[NeMo W 2021-09-01 10:25:42 experimental:27] Module <class 'nemo.collections.nlp.modules.common.megatron.megatron_bert.MegatronBertEncoder'> is experimental, not ready for production and is not fully supported. Use at your own risk.
usage: text_classification [-h] -r RESULTS_DIR [-k KEY] [-e EXPERIMENT_SPEC_FILE] [-g GPUS] [-m RESUME_MODEL_WEIGHTS] [-o OUTPUT_SPECS_DIR] {dataset_convert,evaluate,export,finetune,infer,infer_onnx,train,download_specs}
text_classification: error: the following arguments are required: -r/--results_dir
2021-09-01 10:25:43,076 [INFO] tlt.components.docker_handler.docker_handler: Stopping container.
(taoenv) ubuntu@ip-172-31-14-240:~/tao$ tao text_classification infer -e /specs/nlp/text_classification/infer.yaml -r /results/nlp/text_classification/infer -m /results/nlp/text_classification/train/checkpoints/trained-model.tlt -g 1 -k $KEY
2021-09-01 10:27:38,950 [INFO] root: Registry: ['nvcr.io']
2021-09-01 10:27:39,055 [WARNING] tlt.components.docker_handler.docker_handler:
Docker will run the commands as root. If you would like to retain your
local host permissions, please add the "user":"UID:GID" in the
DockerOptions portion of the "/home/ubuntu/.tao_mounts.json" file. You can obtain your
users UID and GID by using the "id -u" and "id -g" commands on the
terminal.
[NeMo W 2021-09-01 10:27:42 experimental:27] Module <class 'nemo.collections.nlp.modules.common.megatron.megatron_bert.MegatronBertEncoder'> is experimental, not ready for production and is not fully supported. Use at your own risk.
[NeMo W 2021-09-01 10:27:45 experimental:27] Module <class 'nemo.collections.nlp.modules.common.megatron.megatron_bert.MegatronBertEncoder'> is experimental, not ready for production and is not fully supported. Use at your own risk.
[NeMo I 2021-09-01 10:27:46 tlt_logging:20] Experiment configuration:
restore_from: /results/nlp/text_classification/train/checkpoints/trained-model.tlt
exp_manager:
  task_name: infer
  explicit_log_dir: /results/nlp/text_classification/infer
input_batch:
- by the end of no such thing the audience , like beatrice , has a watchful affection
  for the monster .
- director rob marshall went out gunning to make a great one .
- uneasy mishmash of styles and genres .
- I love exotic science fiction / fantasy movies but this one was very unpleasant
  to watch . Suggestions and images of child abuse , mutilated bodies (live or dead)
  , other gruesome scenes , plot holes , boring acting made this a regretable experience
  , The basic idea of entering another person's mind is not even new to the movies
  or TV (An Outer Limits episode was better at exploring this idea) . i gave it 4
  / 10 since some special effects were nice .
encryption_key: '*****'

[NeMo W 2021-09-01 10:27:46 exp_manager:26] Exp_manager is logging to /results/nlp/text_classification/infer, but it already exists.
[NeMo W 2021-09-01 10:27:48 modelPT:193] Using /tmp/tmptv7d8fn6/tokenizer.vocab_file instead of tokenizer.vocab_file.
Using bos_token, but it is not set yet.
Using eos_token, but it is not set yet.
[NeMo W 2021-09-01 10:27:48 modelPT:1202] World size can only be set by PyTorch Lightning Trainer.
Traceback (most recent call last):
  File "/opt/conda/lib/python3.8/site-packages/hydra/_internal/utils.py", line 198, in run_and_report
    return func()
  File "/opt/conda/lib/python3.8/site-packages/hydra/_internal/utils.py", line 347, in <lambda>
    lambda: hydra.run(
  File "/opt/conda/lib/python3.8/site-packages/hydra/_internal/hydra.py", line 107, in run
    return run_job(
  File "/opt/conda/lib/python3.8/site-packages/hydra/core/utils.py", line 127, in run_job
    ret.return_value = task_function(task_cfg)
  File "/tlt-nemo/nlp/text_classification/scripts/infer.py", line 83, in main
  File "/opt/conda/lib/python3.8/posixpath.py", line 142, in basename
    p = os.fspath(p)
TypeError: expected str, bytes or os.PathLike object, not NoneType

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/tlt-nemo/nlp/text_classification/scripts/infer.py", line 113, in <module>
  File "/opt/conda/lib/python3.8/site-packages/nemo/core/config/hydra_runner.py", line 98, in wrapper
    _run_hydra(
  File "/opt/conda/lib/python3.8/site-packages/hydra/_internal/utils.py", line 346, in _run_hydra
    run_and_report(
  File "/opt/conda/lib/python3.8/site-packages/hydra/_internal/utils.py", line 237, in run_and_report
    assert mdl is not None
AssertionError
2021-09-01 10:27:58,400 [INFO] tlt.components.docker_handler.docker_handler: Stopping container.

infer.yaml:

# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
# TLT spec file for inference using a previously pretrained BERT model for a text classification task.

# "Simulate" user input: batch with four samples.
input_batch:
  - "by the end of no such thing the audience , like beatrice , has a watchful affection for the monster ."
  - "director rob marshall went out gunning to make a great one ."
  - "uneasy mishmash of styles and genres ."
  - "I love exotic science fiction / fantasy movies but this one was very unpleasant to watch . Suggestions and images of child abuse , mutilated bodies (live or dead) , other gruesome scenes , plot holes , boring acting made this a regretable experience , The basic idea of entering another person's mind is not even new to the movies or TV (An Outer Limits episode was better at exploring this idea) . i gave it 4 / 10 since some special effects were nice ."

Is it a new issue? If yes, please create a new forum topic. Thanks.

ok

I executed this API. It converts train.tsv and dev.tsv, but not test.tsv.
All three files are present in the SST-2 source directory, but in sst2, the destination directory, I get only train.tsv and dev.tsv.

I manually converted a sample of test.tsv (not the whole file), and the issue above was resolved.

Please help me with test.tsv.
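
Meanwhile, here is a small sanity-check script I put together (my own sketch, not part of TAO; the num_classes=2 default and the example path are assumptions for SST-2-style data). It flags lines of a .tsv that are not in the expected "text<TAB>integer label" form:

import sys

def check_tsv(path, num_classes=2):
    # Each line should be: sentence text, a tab, then an integer label in [0, num_classes).
    # Note: a GLUE-style header line such as "sentence<TAB>label" will be flagged as well.
    bad = 0
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            parts = line.rstrip("\n").split("\t")
            if len(parts) != 2 or not parts[1].isdigit() or int(parts[1]) >= num_classes:
                print(f"line {lineno} malformed: {line.rstrip()}")
                bad += 1
    print(f"{bad} malformed line(s) in {path}")
    return bad == 0

if __name__ == "__main__":
    # usage: python check_tsv.py /data/sst2/test.tsv
    sys.exit(0 if check_tsv(sys.argv[1]) else 1)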

Can you create a new forum topic for the above question? The original question is different from this one.