Transparency Helps Reveal When Language Models Learn Meaning


Abstract

Many current NLP systems are built from language models trained to optimize unsupervised objectives on large amounts of raw text. Under what conditions might such a procedure acquire meaning? Our systematic experiments with synthetic data reveal that, with languages where all expressions have context-independent denotations (i.e., languages with strong …