LLMs can Find Mathematical Reasoning Mistakes by Pedagogical Chain-of-Thought

Self-correction is emerging as a promising approach to mitigate the issue of hallucination in Large Language Models (LLMs). To facilitate effective self-correction, recent research has proposed mistake detection as its initial step. However, current literature suggests that LLMs often struggle with reliably identifying reasoning mistakes when using simplistic prompting strategies. …
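To make concrete what a "simplistic prompting strategy" for mistake detection might look like, here is a minimal sketch of a direct, single-prompt baseline. This is an illustration, not the paper's method; the helper `call_llm` is a hypothetical stand-in for whichever chat-completion client is used, and the prompt wording is an assumption.

```python
# Illustrative baseline: directly ask an LLM to locate the first wrong step
# in a multi-step math solution, with no pedagogical structure in the prompt.

def build_mistake_detection_prompt(question: str, steps: list[str]) -> str:
    """Assemble a plain mistake-detection prompt from a problem and its solution steps."""
    numbered = "\n".join(f"Step {i + 1}: {s}" for i, s in enumerate(steps))
    return (
        "Below is a math problem and a step-by-step solution.\n"
        f"Problem: {question}\n"
        f"{numbered}\n"
        "If the solution is correct, answer 'No mistake'. Otherwise, give the "
        "number of the first incorrect step and briefly explain the error."
    )

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for an actual chat-completion call.
    raise NotImplementedError

if __name__ == "__main__":
    prompt = build_mistake_detection_prompt(
        "What is 12 * (3 + 4)?",
        ["3 + 4 = 7", "12 * 7 = 74"],  # Step 2 contains an arithmetic error.
    )
    print(prompt)
```

The abstract's point is that baselines of this form, which rely on the model's unguided judgment, tend to miss reasoning mistakes; the proposed pedagogical chain-of-thought prompting is meant to address that gap.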