A Neural Model of Adaptation in Reading

Type: Article

Publication Date: 2018-01-01

Citations: 49

DOI: https://doi.org/10.18653/v1/d18-1499

Abstract

It has been argued that humans rapidly adapt their lexical and syntactic expectations to match the statistics of the current linguistic context. We provide further support for this claim by showing that the addition of a simple adaptation mechanism to a neural language model improves our predictions of human reading times compared to a non-adaptive model. We analyze the performance of the model on controlled materials from psycholinguistic experiments and show that it adapts not only to lexical items but also to abstract syntactic structures.
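As a rough illustration of the mechanism the abstract describes, the sketch below shows one way an "adaptive" language model can be simulated: the model first computes per-word surprisal for a sentence, then takes a single gradient step on that same sentence before moving on, so its expectations track the local context. This is a minimal sketch, not the paper's implementation; the toy LSTM, vocabulary size, learning rate, and random "corpus" are all illustrative assumptions.

```python
# Minimal sketch of sentence-by-sentence adaptation in a neural LM.
# All hyperparameters and the toy data are illustrative assumptions,
# not the configuration used in the paper.
import torch
import torch.nn as nn

class LSTMLanguageModel(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):
        hidden, _ = self.lstm(self.embed(tokens))
        return self.out(hidden)  # logits over the next token

def surprisal_and_adapt(model, optimizer, sentence):
    """Return per-word surprisal (in bits) for `sentence`, then take one
    gradient step on it so the model adapts to the local context."""
    inputs, targets = sentence[:, :-1], sentence[:, 1:]
    log_probs = torch.log_softmax(model(inputs), dim=-1)
    # Surprisal of each observed word: -log2 p(w_t | w_<t)
    nll = -log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    surprisal = (nll / torch.log(torch.tensor(2.0))).detach()
    # Adaptation step: fine-tune on the sentence just read.
    optimizer.zero_grad()
    nll.mean().backward()
    optimizer.step()
    return surprisal

vocab_size = 1000
model = LSTMLanguageModel(vocab_size)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
corpus = [torch.randint(0, vocab_size, (1, 12)) for _ in range(3)]  # toy "sentences"
for sentence in corpus:
    print(surprisal_and_adapt(model, optimizer, sentence))
```

Note that surprisal is computed before the weight update, so each measurement reflects only the expectations the model formed from previously seen material, mirroring the incremental setting in which human reading times are collected.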

Locations

  • arXiv (Cornell University)
  • Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

Similar Works

  • A Neural Model of Adaptation in Reading (2018): Marten van Schijndel, Tal Linzen
  • Context Limitations Make Neural Language Models More Human-Like (2022): Tatsuki Kuribayashi, Yohei Oseki, Ana Brassard, Kentaro Inui
  • On the Predictive Power of Neural Language Models for Human Real-Time Comprehension Behavior (2020): Ethan Wilcox, Jon Gauthier, Jennifer Hu, Peng Qian, Roger Lévy
  • YanFarmerJaeger_EyeTrackingAdaptation (2019): Shaorong Yan, Thomas A. Farmer, T. Florian Jaeger
  • Humans and language models diverge when predicting repeating text (2023): Aditya R. Vaidya, Javier S. Turek, Alexander G. Huth
  • Syntactic prediction adaptation accounts for language processing and language learning (2021): Naomi Havron, Mireille Babineau, Anne‐Caroline Fiévet, Alex de Carvalho, Anne Christophe
  • A Targeted Assessment of Incremental Processing in Neural Language Models and Humans (2021): Ethan Wilcox, Pranali Vani, Roger Lévy
  • Prediction as a basis for skilled reading: insights from modern language models (2022): Benedetta Cevoli, Chris Watkins, Kathleen Rastle
  • Can training neural language models on a curriculum with developmentally plausible data improve alignment with human reading behavior? (2023): Aryaman Chobey, Oliver Smith, Anzi Wang, Grusha Prasad
  • Prediction as a Basis for Skilled Reading: Insights from Modern Language Models (2021): Benedetta Cevoli, Chris Watkins, Kathleen Rastle
  • Modeling Task Effects in Human Reading with Neural Network-based Attention (2018): Michael G. Hahn, Frank Keller
  • Modeling Task Effects in Human Reading with Neural Network-based Attention (2022): Michael G. Hahn, Frank Keller
  • Is In-Context Learning a Type of Gradient-Based Learning? Evidence from the Inverse Frequency Effect in Structural Priming (2024): Zhenghao Zhou, Robert Frank, R. Thomas McCoy
  • Probing Language Models from A Human Behavioral Perspective (2023): Xintong Wang, Xiaoyu Li, Xingshan Li, Chris Biemann
  • Surprisal does not explain syntactic disambiguation difficulty: evidence from a large-scale benchmark (2023): Kuan‐Jung Huang, Suhas Arehalli, Mari Kugemoto, Christian Muxica, Grusha Prasad, Brian Dillon, Tal Linzen
  • Reverse-Engineering the Reader (2024): Samuel Kiegeland, Ethan Wilcox, Afra Amini, David Robert Reich, Ryan Cotterell