Reuse, Don't Retrain: A Recipe for Continued Pretraining of Language Models
Abstract
Guidelines for designing data distributions and learning rate schedules improve continued pretraining of language models by 9% in average accuracy.
As language models have scaled both their number of parameters and their pretraining dataset sizes, the computational cost of pretraining has become intractable for all but the most well-resourced teams. This increasing cost makes it ever more important to be able to reuse a model after it has completed pretraining, allowing a model's abilities to improve further without training from scratch. In this work, we detail a set of guidelines that cover how to design efficacious data distributions and learning rate schedules for continued pretraining of language models. Applying these findings in a continued pretraining run on top of a well-trained 15B parameter model, we show a 9% improvement in average model accuracy compared to the baseline of continued training on the pretraining set. The resulting recipe provides a practical starting point for developing language models through reuse rather than retraining.
Community
Hello everyone,
I found the results very compelling, particularly the emphasis on efficiency. However, I am seeking some clarification on the specific structure of the Continued Pre-training (CPT) dataset. In Section 3.1.2, the paper mentions:
"The only new additional source of data is a set of question and answer (QA), alignment style examples... This set of QA data totals 2.8B tokens..."
However, a subsequent table indicates that the QA stage encompasses 50B tokens, following the initial 250B General Blend (GB) dataset. To ensure I have the correct understanding for my own implementation, could you clarify which of the following interpretations is correct?
Interpretation 1: The schedule consists of 250B tokens of General Blend data, followed by 50B tokens of entirely novel QA-formatted data.
Interpretation 2: The schedule consists of 250B tokens of General Blend data, followed by a 50B token QA stage. In this second stage, the data is a mix of 47.2B tokens from the original pre-training distribution and 2.8B tokens of the new QA examples.
If neither of these is correct, I would greatly appreciate any further detail you could provide on how that final 50B token block is composed.
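To make the question concrete, here is a rough sketch in plain Python of how I would lay out the token budgets under each interpretation. The stage and source names are my own placeholders rather than identifiers from the paper; only the token counts (250B, 50B, 2.8B) come from Section 3.1.2 and the table.

```python
# Token budgets quoted in the paper (Section 3.1.2 and the schedule table).
GB_TOKENS = 250e9        # General Blend (GB) stage
QA_STAGE_TOKENS = 50e9   # second (QA) stage, total
NEW_QA_TOKENS = 2.8e9    # new QA / alignment-style examples

# Interpretation 1: the entire 50B-token QA stage is novel QA-formatted data.
interpretation_1 = {
    "stage_1_general_blend": {"pretraining_distribution": GB_TOKENS},
    "stage_2_qa": {"new_qa_data": QA_STAGE_TOKENS},
}

# Interpretation 2: the QA stage mixes the 2.8B new QA tokens back into the
# original pretraining distribution to fill the 50B budget.
interpretation_2 = {
    "stage_1_general_blend": {"pretraining_distribution": GB_TOKENS},
    "stage_2_qa": {
        "pretraining_distribution": QA_STAGE_TOKENS - NEW_QA_TOKENS,  # 47.2B
        "new_qa_data": NEW_QA_TOKENS,                                 # 2.8B
    },
}

# Under interpretation 2, the new QA data would make up only ~5.6% of the
# second stage by tokens, which is why I want to confirm the composition.
qa_weight = NEW_QA_TOKENS / QA_STAGE_TOKENS  # 0.056
print(f"QA share of stage 2 (interpretation 2): {qa_weight:.1%}")
```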
Thank you for your time. I look forward to your response.
Best regards,
Tomás