A Toolbox for Surfacing Health Equity Harms and Biases in Large Language Models

Large language models (LLMs) hold immense promise for serving complex health information needs, but they also have the potential to introduce harm and exacerbate health disparities. Reliably evaluating equity-related model failures is a critical step toward developing systems that promote health equity. In this work, we present resources and methodologies for …