Dataset Decomposition: Faster LLM Training with Variable Sequence Length Curriculum


Large language models (LLMs) are commonly trained on datasets consisting of fixed-length token sequences. These datasets are created by randomly concatenating documents of various lengths and then chunking them into sequences of a predetermined target length. However, this method of concatenation can lead to cross-document attention within a sequence, which …
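
The concat-and-chunk construction described above is simple enough to sketch in a few lines. Below is a minimal Python illustration, not the paper's implementation: the function name, the whitespace tokenizer, and all variable names are ours for illustration. Documents are tokenized, shuffled, concatenated into one long token stream, and sliced into fixed-length sequences, so a single training sequence can straddle several unrelated documents, which is exactly the situation in which causal attention crosses document boundaries.

```python
import random

def concat_and_chunk(documents, target_length, tokenize):
    """Sketch of the standard fixed-length dataset construction:
    shuffle documents, concatenate their tokens into one stream,
    and slice the stream into sequences of exactly target_length
    tokens (any trailing partial chunk is dropped)."""
    random.shuffle(documents)
    stream = []
    for doc in documents:
        stream.extend(tokenize(doc))
    # A chunk boundary can fall mid-document, and a single chunk can
    # contain tokens from several documents, so attention computed
    # within a chunk may attend across document boundaries.
    return [
        stream[i : i + target_length]
        for i in range(0, len(stream) - target_length + 1, target_length)
    ]

# Toy usage with a whitespace "tokenizer" (illustrative only):
docs = ["the cat sat", "a much longer document about llms " * 4, "hi"]
chunks = concat_and_chunk(docs, target_length=8, tokenize=str.split)
# Every chunk has exactly 8 tokens, but tokens from different
# documents frequently land in the same chunk.
```

Because every output sequence has the same length regardless of the underlying document lengths, this pipeline is convenient for batching, but it bakes the cross-document mixing described above directly into the training data.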