Enhancing Uncertainty Modeling with Semantic Graph for Hallucination Detection

Large Language Models (LLMs) are prone to hallucinating non-factual or unfaithful statements, which undermines their application in real-world scenarios. Recent research focuses on uncertainty-based hallucination detection, which uses the output probabilities of LLMs to compute uncertainty and does not rely on external knowledge or frequent sampling from LLMs. However, …
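To illustrate the general idea behind uncertainty-based detection (this is a minimal sketch of a common baseline, the length-normalized negative log-likelihood over token probabilities, not the specific method proposed in this paper):

```python
import math

def sequence_uncertainty(token_probs):
    """Length-normalized negative log-likelihood of a generated sequence.

    token_probs: the model's probability for each generated token.
    Higher values indicate a less confident generation, which
    uncertainty-based detectors treat as a hallucination signal.
    """
    if not token_probs:
        raise ValueError("empty sequence")
    return -sum(math.log(p) for p in token_probs) / len(token_probs)

# A generation whose tokens the model assigned high probability
# scores lower (less uncertain) than one with low-probability tokens.
confident = sequence_uncertainty([0.95, 0.90, 0.92])
hesitant = sequence_uncertainty([0.40, 0.30, 0.50])
```

Because this score is computed directly from the model's own output distribution, it needs no external knowledge base and only a single forward pass, which is the appeal of the uncertainty-based family of detectors described above.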