Context-DPO: Aligning Language Models for Context-Faithfulness

Reliable responses from large language models (LLMs) require adherence to user instructions and retrieved information. While alignment techniques help LLMs align with human intentions and values, improving context-faithfulness through alignment remains underexplored. To address this, we propose Context-DPO, the first alignment method specifically designed to enhance LLMs' context-faithfulness. We introduce …
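
The abstract is cut off above, but the name points to a Direct Preference Optimization (DPO) objective trained on pairs of context-faithful versus context-unfaithful responses. As a minimal, illustrative sketch only (the function and tensor names below are our assumptions, not the paper's released code), such a loss could be written as:

```python
# Sketch of a DPO-style loss applied to context-faithfulness, assuming each
# prompt comes with a "chosen" response that follows the retrieved context and
# a "rejected" response that ignores it. All names here are hypothetical.
import torch
import torch.nn.functional as F

def context_dpo_loss(
    policy_chosen_logps: torch.Tensor,    # log pi_theta(y_faithful | context, x)
    policy_rejected_logps: torch.Tensor,  # log pi_theta(y_unfaithful | context, x)
    ref_chosen_logps: torch.Tensor,       # log pi_ref(y_faithful | context, x)
    ref_rejected_logps: torch.Tensor,     # log pi_ref(y_unfaithful | context, x)
    beta: float = 0.1,                    # strength of the KL-style constraint
) -> torch.Tensor:
    """Standard DPO objective over faithful/unfaithful response pairs."""
    # Log-ratio of the policy to the frozen reference model for each response.
    chosen_rewards = policy_chosen_logps - ref_chosen_logps
    rejected_rewards = policy_rejected_logps - ref_rejected_logps
    # Push the margin between faithful and unfaithful responses apart.
    logits = beta * (chosen_rewards - rejected_rewards)
    return -F.logsigmoid(logits).mean()
```

In this reading, the only change from vanilla DPO is where the preference pairs come from: the "preferred" completion is the one grounded in the provided context, so the optimization directly rewards context-faithfulness rather than generic human preference.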