The N+ Implementation Details of RLHF with PPO: A Case Study on TL;DR Summarization

This work is the first to openly reproduce the Reinforcement Learning from Human Feedback (RLHF) scaling behaviors reported in OpenAI's seminal TL;DR summarization work. We create an RLHF pipeline from scratch, enumerate over 20 key implementation details, and share the key insights gained during the reproduction. Our RLHF-trained Pythia models demonstrate significant …