Beyond Imitation: Learning Key Reasoning Steps from Dual
Chain-of-Thoughts in Reasoning Distillation
As Large Language Models (LLMs) scale up and gain powerful Chain-of-Thoughts (CoTs) reasoning abilities, practical resource constraints drive efforts to distill these capabilities into more compact Smaller Language Models (SLMs). We find that CoTs consist mainly of simple reasoning forms, with a small proportion ($\approx 4.7\%$) of key reasoning steps …