CriticBench: Benchmarking LLMs for Critique-Correct Reasoning

The ability of Large Language Models (LLMs) to critique and refine their reasoning is crucial for their application in evaluation, feedback provision, and self-improvement. This paper introduces CriticBench, a comprehensive benchmark designed to assess LLMs' abilities to critique and rectify their reasoning across a variety of tasks. CriticBench encompasses five …