Investigating and Addressing Hallucinations of LLMs in Tasks Involving Negation

Large Language Models (LLMs) have achieved remarkable performance across a wide variety of natural language tasks. However, they suffer from a critical limitation: 'hallucination' in their output. Recent research has focused on investigating and addressing this problem for a variety of tasks such as …