Examining Alignment of Large Language Models through Representative Heuristics: The Case of Political Stereotypes

Examining the alignment of large language models (LLMs) has become increasingly important, particularly when these systems fail to operate as intended. This study explores the challenge of aligning LLMs with human intentions and values, with a specific focus on their political inclinations. Previous research has highlighted LLMs' propensity to display political …