Curating Trillion-Token Datasets: Introducing NVIDIA NeMo Data Curator

Originally published at: https://developer.nvidia.com/blog/curating-trillion-token-datasets-introducing-nemo-data-curator/

The latest developments in large language model (LLM) scaling laws have shown that when scaling the number of model parameters, the number of tokens used for training should be scaled at the same rate. The Chinchilla and LLaMA models have validated these empirically derived laws and suggest that previous state-of-the-art models were under-trained with respect to the amount of pretraining data.
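
To make the scaling-law claim concrete, the sketch below estimates a compute-optimal token budget from a parameter count using the roughly 20-tokens-per-parameter ratio reported in the Chinchilla work. The ratio and the helper name are illustrative assumptions for this post, not part of NeMo Data Curator; the exact optimal ratio depends on the compute budget and data.

```python
def compute_optimal_tokens(num_parameters: float, tokens_per_parameter: float = 20.0) -> float:
    """Estimate a compute-optimal training token budget.

    The ~20 tokens-per-parameter ratio is a rough heuristic from the
    Chinchilla results; treat it as an order-of-magnitude guide only.
    """
    return num_parameters * tokens_per_parameter


if __name__ == "__main__":
    # Example budgets for a few common model sizes.
    for params in (7e9, 70e9, 175e9):
        tokens = compute_optimal_tokens(params)
        print(f"{params / 1e9:.0f}B parameters -> ~{tokens / 1e12:.2f}T training tokens")
```

Under this heuristic, even a 70B-parameter model calls for on the order of 1.4 trillion training tokens, which is exactly the scale of dataset that motivates the curation tooling described in the rest of this post.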