Tal Linzen

All published works
Findings of the Second BabyLM Challenge: Sample-Efficient Pretraining on Developmentally Plausible Corpora (2024). Michael Y. Hu, Aaron Mueller, Candace Ross, Adina Williams, Tal Linzen, Chengxu Zhuang, Ryan Cotterell, Leshem Choshen, Alex Warstadt, Ethan Wilcox.
What Goes Into a LM Acceptability Judgment? Rethinking the Impact of Frequency and Length (2024). Lindia Tjuatja, Graham Neubig, Tal Linzen, Sophie Hao.
How Does Code Pretraining Affect Language Model Task Performance? (2024). Jackson Petty, Sjoerd van Steenkiste, Tal Linzen.
Testing learning hypotheses using neural networks by manipulating learning data (2024). Cara Su-Yi Leong, Tal Linzen.
[Call for Papers] The 2nd BabyLM Challenge: Sample-efficient pretraining on a developmentally plausible corpus (2024). Leshem Choshen, Ryan Cotterell, Michael Y. Hu, Tal Linzen, Aaron Mueller, Candace Ross, Alex Warstadt, Ethan Wilcox, Adina Williams, Chengxu Zhuang.
SPAWNing Structural Priming Predictions from a Cognitively Motivated Parser (2024). Grusha Prasad, Tal Linzen.
Can You Learn Semantics Through Next-Word Prediction? The Case of Entailment (2024). William Merrill, Zhaofeng Wu, Norihito Naka, Yoon Kim, Tal Linzen.
In-context Learning Generalizes, But Not Always Robustly: The Case of Syntax (2024). Aaron Mueller, Albert Webson, Jackson Petty, Tal Linzen.
A Systematic Comparison of Syllogistic Reasoning in Humans and Language Models (2024). Tiwalayo Eisape, Michael Tessler, Ishita Dasgupta, Fei Sha, Sjoerd van Steenkiste, Tal Linzen.
The Impact of Depth on Compositional Generalization in Transformer Language Models (2024). Jackson Petty, Sjoerd van Steenkiste, Ishita Dasgupta, Fei Sha, Dan Garrette, Tal Linzen.
Neural Networks Can Learn Patterns of Island-insensitivity in Norwegian (2023). Anastasia Kobzeva, Suhas Arehalli, Tal Linzen, Dave Kush.
Surprisal does not explain syntactic disambiguation difficulty: evidence from a large-scale benchmark (2023). Kuan‐Jung Huang, Suhas Arehalli, Mari Kugemoto, Christian Muxica, Grusha Prasad, Brian Dillon, Tal Linzen.
How Much Do Language Models Copy From Their Training Data? Evaluating Linguistic Novelty in Text Generation Using RAVEN (2023). R. Thomas McCoy, Paul Smolensky, Tal Linzen, Jianfeng Gao, Aslı Çelikyılmaz.
How poor is the stimulus? Evaluating hierarchical generalization in neural networks trained on child-directed speech (2023). Aditya Yedetore, Tal Linzen, Robert Frank, R. Thomas McCoy.
How to Plant Trees in Language Models: Data and Architectural Effects on the Emergence of Syntactic Inductive Biases (2023). Aaron Mueller, Tal Linzen.
Language Models Can Learn Exceptions to Syntactic Rules (2023). Cara Su-Yi Leong, Tal Linzen.
Do Language Models Refer? (2023). Matthew Mandelkern, Tal Linzen.
SLOG: A Structural Generalization Benchmark for Semantic Parsing (2023). Bingzhi Li, Lucia Donatelli, Alexander Koller, Tal Linzen, Yuekun Yao, Najoung Kim.
Verb Conjugation in Transformers Is Determined by Linear Encodings of Subject Number (2023). Sophie Hao, Tal Linzen.
A Language Model with Limited Memory Capacity Captures Interference in Human Sentence Processing (2023). William Timkey, Tal Linzen.
The Impact of Depth and Width on Transformer Language Model Generalization (2023). Jackson Petty, Sjoerd van Steenkiste, Ishita Dasgupta, Fei Sha, Dan Garrette, Tal Linzen.
A Systematic Comparison of Syllogistic Reasoning in Humans and Language Models (2023). Tiwalayo Eisape, Michael Tessler, Ishita Dasgupta, Fei Sha, Sjoerd van Steenkiste, Tal Linzen.
In-context Learning Generalizes, But Not Always Robustly: The Case of Syntax (2023). Aaron Mueller, Albert Webson, Jackson Petty, Tal Linzen.
LSTMs Can Learn Basic Wh- and Relative Clause Dependencies in Norwegian (2022). Anastasia Kobzeva, Suhas Arehalli, Tal Linzen, Dave Kush.
Syntactic Intervention cannot explain agreement attraction in English wh-questions (2022). Suhas Arehalli, Tal Linzen, Géraldine Legendre.
Improving Compositional Generalization with Latent Structure and Data Augmentation (2022). Linlu Qiu, Peter Shaw, Panupong Pasupat, Paweł Krzysztof Nowak, Tal Linzen, Fei Sha, Kristina Toutanova.
Coloring the Blank Slate: Pre-training Imparts a Hierarchical Inductive Bias to Sequence-to-sequence Models (2022). Aaron Mueller, Robert Frank, Tal Linzen, Luheng Wang, Sebastian Schuster.
When a sentence does not introduce a discourse entity, Transformer-based models still sometimes refer to it (2022). Sebastian Schuster, Tal Linzen.
Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models (2022). Aarohi Srivastava, Abhinav Rastogi, Abhishek S. Rao, Abu Awal Shoeb, Abubakar Abid, Adam Fisch, Adam R. Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al.
Evaluating Attribution in Dialogue Systems: The BEGIN Benchmark (2022). Nouha Dziri, Hannah Rashkin, Tal Linzen, David Reitter.
Entailment Semantics Can Be Extracted from an Ideal Language Model (2022). William Merrill, Alex Warstadt, Tal Linzen.
Syntactic Surprisal From Neural Models Predicts, But Underestimates, Human Processing Difficulty From Syntactic Ambiguities (2022). Suhas Arehalli, Brian Dillon, Tal Linzen.
Characterizing Verbatim Short-Term Memory in Neural Language Models (2022). Kristijan Armeni, Christopher Honey, Tal Linzen.
Causal Analysis of Syntactic Agreement Neurons in Multilingual Language Models (2022). Aaron Mueller, Yu Xia, Tal Linzen.
Uncontrolled Lexical Exposure Leads to Overestimation of Compositional Generalization in Pretrained Models (2022). Najoung Kim, Tal Linzen, Paul Smolensky.
Improving Compositional Generalization with Latent Structure and Data Augmentation (2021). Linlu Qiu, Peter Shaw, Panupong Pasupat, Paweł Krzysztof Nowak, Tal Linzen, Fei Sha, Kristina Toutanova.
How much do language models copy from their training data? Evaluating linguistic novelty in text generation using RAVEN (2021). R. Thomas McCoy, Paul Smolensky, Tal Linzen, Jianfeng Gao, Aslı Çelikyılmaz.
Learning to Generalize Compositionally by Transferring Across Semantic Parsing Tasks (2021). Zhu Wang, Peter Shaw, Tal Linzen, Fei Sha.
The Language Model Understood the Prompt was Ambiguous: Probing Syntactic Uncertainty Through Generation (2021). Laura Aina, Tal Linzen.
Rapid syntactic adaptation in self-paced reading: Detectable, but only with many participants (2021). Grusha Prasad, Tal Linzen.
The MultiBERTs: BERT Reproductions for Robustness Analysis (2021). Thibault Sellam, Steve Yadlowsky, Jason Wei, Naomi Saphra, Alexander D’Amour, Tal Linzen, Jasmijn Bastings, Iulia Turc, Jacob Eisenstein, Dipanjan Das, et al.
Single-Stage Prediction Models Do Not Explain the Magnitude of Syntactic Disambiguation Difficulty (2021). Marten van Schijndel, Tal Linzen.
Evaluating Groundedness in Dialogue Systems: The BEGIN Benchmark (2021). Nouha Dziri, Hannah Rashkin, Tal Linzen, David Reitter.
Does Putting a Linguist in the Loop Improve NLU Data Collection? (2021). Alicia Parrish, William C. Huang, Omar Agha, Soo-Hwan Lee, Nikita Nangia, Alex Warstadt, Karmanya Aggarwal, Emily Allaway, Tal Linzen, Samuel R. Bowman.
Counterfactual Interventions Reveal the Causal Effect of Relative Clause Representations on Agreement Prediction (2021). Shauli Ravfogel, Grusha Prasad, Tal Linzen, Yoav Goldberg.
Frequency Effects on Syntactic Rule Learning in Transformers (2021). Jason Wei, Dan Garrette, Tal Linzen, Ellie Pavlick.
NOPE: A Corpus of Naturally-Occurring Presuppositions in English (2021). Alicia Parrish, Sebastian Schuster, Alex Warstadt, Omar Agha, Soo-Hwan Lee, Zhuoye Zhao, Samuel R. Bowman, Tal Linzen.
Causal Analysis of Syntactic Agreement Mechanisms in Neural Language Models (2021). Matthew Finlayson, Aaron Mueller, Sebastian Gehrmann, Stuart M. Shieber, Tal Linzen, Yonatan Belinkov.
Evaluating Attribution in Dialogue Systems: The BEGIN Benchmark (2021). Nouha Dziri, Hannah Rashkin, Tal Linzen, David Reitter.
Syntactic Structure from Deep Learning (2020). Tal Linzen, Marco Baroni.
Single-stage prediction models do not explain the magnitude of syntactic disambiguation difficulty (2020). Marten van Schijndel, Tal Linzen.
Priming syntactic ambiguity resolution in children and adults (2020). Naomi Havron, Camila Scaff, M. Julia Carbajal, Tal Linzen, Axel Barrault, Anne Christophe.
Does Syntax Need to Grow on Trees? Sources of Hierarchical Inductive Bias in Sequence-to-Sequence Networks (2020). R. Thomas McCoy, Robert Frank, Tal Linzen.
Neural Language Models Capture Some, But Not All, Agreement Attraction Effects (2020). Suhas Arehalli, Tal Linzen.
Syntactic Data Augmentation Increases Robustness to Inference Heuristics (2020). Junghyun Min, R. Thomas McCoy, Dipanjan Das, Emily Pitler, Tal Linzen.
Cross-Linguistic Syntactic Evaluation of Word Prediction Models (2020). Aaron Mueller, Garrett Nicolai, Panayiota Petrou-Zeniou, Natalia Talmina, Tal Linzen.
How Can We Accelerate Progress Towards Human-like Linguistic Generalization? (2020). Tal Linzen.
Representations of Syntax [MASK] Useful: Effects of Constituency and Dependency Structure in Recursive LSTMs (2020). Michael A. Lepori, Tal Linzen, R. Thomas McCoy.
Universal linguistic inductive biases via meta-learning (2020). R. Thomas McCoy, Erin Grant, Paul Smolensky, Thomas L. Griffiths, Tal Linzen.
COGS: A Compositional Generalization Challenge Based on Semantic Interpretation (2020). Najoung Kim, Tal Linzen.
BERTs of a feather do not generalize together: Large variability in generalization across models with similar test set performance (2020). R. Thomas McCoy, Junghyun Min, Tal Linzen.
Discovering the Compositional Structure of Vector Representations with Role Learning Networks (2020). Paul Soulos, R. Thomas McCoy, Tal Linzen, Paul Smolensky.
Neural network surprisal predicts the existence but not the magnitude of human syntactic disambiguation difficulty (2019). Marten van Schijndel, Tal Linzen.
Discovering the Compositional Structure of Vector Representations with Role Learning Networks (2019). Paul Soulos, R. Thomas McCoy, Tal Linzen, Paul Smolensky.
Analyzing and interpreting neural networks for NLP: A report on the first BlackboxNLP workshop (2019). Afra Alishahi, Grzegorz Chrupała, Tal Linzen.
Rapid Syntactic Adaptation in Self-Paced Reading: Detectable, but Only With Many Participants (2019). Grusha Prasad, Tal Linzen.
Studying the Inductive Biases of RNNs with Synthetic Variations of Natural Languages (2019). Shauli Ravfogel, Yoav Goldberg, Tal Linzen.
Human few-shot learning of compositional instructions (2019). Brenden M. Lake, Tal Linzen, Marco Baroni.
Probing What Different NLP Tasks Teach Machines about Function Word Comprehension (2019). Najoung Kim, Roma Patel, Adam Poliak, Patrick Xia, Alex Wang, R. Thomas McCoy, Ian Tenney, Alexis Ross, Tal Linzen, Benjamin Van Durme.
Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference (2019). R. Thomas McCoy, Ellie Pavlick, Tal Linzen.
Quantity doesn’t buy quality syntax with neural language models (2019). Marten van Schijndel, Aaron Mueller, Tal Linzen.
Using Priming to Uncover the Organization of Syntactic Representations in Neural Language Models (2019). Grusha Prasad, Marten van Schijndel, Tal Linzen.
BERTs of a feather do not generalize together: Large variability in generalization across models with similar test set performance (2019). R. Thomas McCoy, Junghyun Min, Tal Linzen.
Non-entailed subsequences as a challenge for natural language inference (2018). R. Thomas McCoy, Tal Linzen.
What can linguistics and deep learning contribute to each other? (2018). Tal Linzen.
Colorless green recurrent networks dream hierarchically (2018). Kristina Gulordava, Piotr Bojanowski, Édouard Grave, Tal Linzen, Marco Baroni.
Revisiting the poverty of the stimulus: hierarchical generalization without a hierarchical bias in recurrent neural networks (2018). R. Thomas McCoy, Robert Frank, Tal Linzen.
Distinct patterns of syntactic agreement errors in recurrent networks and humans (2018). Tal Linzen, Brian Leonard.
Targeted Syntactic Evaluation of Language Models (2018). Rebecca Marvin, Tal Linzen.
Can Entropy Explain Successor Surprisal Effects in Reading? (2018). Marten van Schijndel, Tal Linzen.
RNNs Implicitly Implement Tensor Product Representations (2018). R. Thomas McCoy, Tal Linzen, Ewan Dunbar, Paul Smolensky.
Phonological (un)certainty weights lexical activation (2018). Laura Gwilliams, David Poeppel, Alec Marantz, Tal Linzen.
A Neural Model of Adaptation in Reading (2018). Marten van Schijndel, Tal Linzen.
Phonological (un)certainty weights lexical activation (2017). Laura Gwilliams, David Poeppel, Alec Marantz, Tal Linzen.
Exploring the Syntactic Abilities of RNNs with Multi-task Learning (2017). Émile Enguehard, Yoav Goldberg, Tal Linzen.
Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies (2016). Tal Linzen, Emmanuel Dupoux, Yoav Goldberg.
Issues in evaluating semantic spaces using word analogies (2016). Tal Linzen.
Commonly Cited References
Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies (2016). Tal Linzen, Emmanuel Dupoux, Yoav Goldberg. Referenced 35 times.
Colorless Green Recurrent Networks Dream Hierarchically (2018). Kristina Gulordava, Piotr Bojanowski, Édouard Grave, Tal Linzen, Marco Baroni. Referenced 32 times.
Targeted Syntactic Evaluation of Language Models (2018). Rebecca Marvin, Tal Linzen. Referenced 24 times.
Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference (2019). R. Thomas McCoy, Ellie Pavlick, Tal Linzen. Referenced 16 times.
What do RNN Language Models Learn about Filler–Gap Dependencies? (2018). Ethan Wilcox, Roger Lévy, Takashi Morita, Richard Futrell. Referenced 16 times.
Attention Is All You Need (2017). Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, Illia Polosukhin. Referenced 14 times.
A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference (2018). Adina Williams, Nikita Nangia, Samuel Bowman. Referenced 14 times.
Under the Hood: Using Diagnostic Classifiers to Investigate and Improve how Language Models Track Agreement Information (2018). Mario Giulianelli, Jacqueline Harding, Florian Mohnert, Dieuwke Hupkes, Willem Zuidema. Referenced 13 times.
Assessing BERT's Syntactic Abilities (2019). Yoav Goldberg. Referenced 13 times.
Revisiting the poverty of the stimulus: hierarchical generalization without a hierarchical bias in recurrent neural networks (2018). R. Thomas McCoy, Robert Frank, Tal Linzen. Referenced 12 times.
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer (2019). Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. Referenced 11 times.
Recurrent Neural Network Grammars (2016). Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, Noah A. Smith. Referenced 11 times.
Assessing BERT's Syntactic Abilities (2019). Yoav Goldberg. Referenced 10 times.
Attention Is All You Need (2017). Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, Illia Polosukhin. Referenced 10 times.
A Systematic Assessment of Syntactic Generalization in Neural Language Models (2020). Jennifer Hu, Jon Gauthier, Peng Qian, Ethan Wilcox, Roger Lévy. Referenced 10 times.
Deep Contextualized Word Representations (2018). Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, Luke Zettlemoyer. Referenced 9 times.
RoBERTa: A Robustly Optimized BERT Pretraining Approach (2019). Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov. Referenced 9 times.
Exploring the Syntactic Abilities of RNNs with Multi-task Learning (2017). Émile Enguehard, Yoav Goldberg, Tal Linzen. Referenced 9 times.
Neural language models as psycholinguistic subjects: Representations of syntactic state (2019). Richard Futrell, Ethan Wilcox, Takashi Morita, Peng Qian, Miguel Ballesteros, Roger Lévy. Referenced 9 times.
Quantity doesn’t buy quality syntax with neural language models (2019). Marten van Schijndel, Aaron Mueller, Tal Linzen. Referenced 9 times.
Exploring the Limits of Language Modeling (2016). Rafał Józefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu. Referenced 9 times.
BLiMP: The Benchmark of Linguistic Minimal Pairs for English (2020). Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mohananey, Wei Peng, Sheng‐Fu Wang, Samuel R. Bowman. Referenced 9 times.
Sequence to Sequence Learning with Neural Networks (2014). Ilya Sutskever, Oriol Vinyals, Quoc V. Le. Referenced 8 times.
Fine-grained Analysis of Sentence Embeddings Using Auxiliary Prediction Tasks (2016). Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, Yoav Goldberg. Referenced 8 times.
GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding (2018). Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel Bowman. Referenced 8 times.
Expectation-based syntactic comprehension (2007). Roger Lévy. Referenced 8 times.
A large annotated corpus for learning natural language inference (2015). Samuel R. Bowman, Gabor Angeli, Christopher Potts, Christopher D. Manning. Referenced 7 times.
Adam: A Method for Stochastic Optimization (2014). Diederik P. Kingma, Jimmy Ba. Referenced 7 times.
Compositionality Decomposed: How do Neural Networks Generalise? (2020). Dieuwke Hupkes, Verna Dankers, Mathijs Mul, Elia Bruni. Referenced 7 times.
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (2018). Jacob Devlin, Ming‐Wei Chang, Kenton Lee, Kristina Toutanova. Referenced 7 times.
Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation (2014). Kyunghyun Cho, Bart van Merriënboer, Çağlar Gülçehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, Yoshua Bengio. Referenced 7 times.
Compositional Generalization for Primitive Substitutions (2019). Yuanpeng Li, Liang Zhao, Jianyu Wang, Joel Hestness. Referenced 6 times.
Language Models are Few-Shot Learners (2020). T. B. Brown, Benjamin F. Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Referenced 6 times.
Learning Which Features Matter: RoBERTa Acquires a Preference for Linguistic Generalizations (Eventually) (2020). Alex Warstadt, Yian Zhang, Xiaocheng Li, Haokun Liu, Samuel R. Bowman. Referenced 6 times.
Syntactic Structure from Deep Learning (2020). Tal Linzen, Marco Baroni. Referenced 6 times.
COGS: A Compositional Generalization Challenge Based on Semantic Interpretation (2020). Najoung Kim, Tal Linzen. Referenced 6 times.
Learning to transduce with unbounded memory (2015). Edward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, Phil Blunsom. Referenced 5 times.
What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties (2018). Alexis Conneau, Germán Kruszewski, Guillaume Lample, Loïc Barrault, Marco Baroni. Referenced 5 times.
A Primer in BERTology: What We Know About How BERT Works (2020). Anna Rogers, Olga Kovaleva, Anna Rumshisky. Referenced 5 times.
Mechanisms for handling nested dependencies in neural-network language models and humans (2021). Yair Lakretz, Dieuwke Hupkes, Alessandra Vergallito, Marco Marelli, Marco Baroni, Stanislas Dehaene. Referenced 5 times.
What do you learn from context? Probing for sentence structure in contextualized word representations (2019). Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R. Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R. Bowman, Dipanjan Das, et al. Referenced 5 times.
Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks (2017). Brenden M. Lake, Marco Baroni. Referenced 5 times.
Can neural networks acquire a structural bias from raw linguistic data? (2020). Alex Warstadt, Samuel R. Bowman. Referenced 5 times.
Does Syntax Need to Grow on Trees? Sources of Hierarchical Inductive Bias in Sequence-to-Sequence Networks (2020). R. Thomas McCoy, Robert Frank, Tal Linzen. Referenced 5 times.
Stress Test Evaluation for Natural Language Inference (2018). Aakanksha Naik, Abhilasha Ravichander, Norman Sadeh, Carolyn Penstein Rosé, Graham Neubig. Referenced 5 times.
Can LSTM Learn to Capture Agreement? The Case of Basque (2018). Shauli Ravfogel, Yoav Goldberg, Francis M. Tyers. Referenced 5 times.
Evaluating Compositionality in Sentence Embeddings (2018). Ishita Dasgupta, Demi Guo, Andreas Stuhlmüller, Samuel J. Gershman, Noah D. Goodman. Referenced 5 times.
Revisiting the poverty of the stimulus: hierarchical generalization without a hierarchical bias in recurrent neural networks (2018). R. Thomas McCoy, Robert Frank, Tal Linzen. Referenced 5 times.
Collecting Diverse Natural Language Inference Problems for Sentence Representation Evaluation (2018). Adam Poliak, Aparajita Haldar, Rachel Rudinger, J. Edward Hu, Ellie Pavlick, Aaron Steven White, Benjamin Van Durme. Referenced 5 times.