Findings of the Second BabyLM Challenge: Sample-Efficient Pretraining on Developmentally Plausible Corpora (2024). Michael Y. Hu, Aaron Mueller, Candace Ross, Adina Williams, Tal Linzen, Chengxu Zhuang, Ryan Cotterell, Leshem Choshen, Alex Warstadt, Ethan Wilcox.
What Goes Into a LM Acceptability Judgment? Rethinking the Impact of Frequency and Length (2024). Lindia Tjuatja, Graham Neubig, Tal Linzen, Sophie Hao.
How Does Code Pretraining Affect Language Model Task Performance? (2024). Jackson Petty, Sjoerd van Steenkiste, Tal Linzen.
Testing learning hypotheses using neural networks by manipulating learning data (2024). Cara Su-Yi Leong, Tal Linzen.
[Call for Papers] The 2nd BabyLM Challenge: Sample-efficient pretraining on a developmentally plausible corpus (2024). Leshem Choshen, Ryan Cotterell, Michael Y. Hu, Tal Linzen, Aaron Mueller, Candace Ross, Alex Warstadt, Ethan Wilcox, Adina Williams, Chengxu Zhuang.
SPAWNing Structural Priming Predictions from a Cognitively Motivated Parser (2024). Grusha Prasad, Tal Linzen.
Can You Learn Semantics Through Next-Word Prediction? The Case of Entailment (2024). William Merrill, Zhaofeng Wu, Norihito Naka, Yoon Kim, Tal Linzen.
In-context Learning Generalizes, But Not Always Robustly: The Case of Syntax (2024). Aaron Mueller, Albert Webson, Jackson Petty, Tal Linzen.
A Systematic Comparison of Syllogistic Reasoning in Humans and Language Models (2024). Tiwalayo Eisape, Michael Tessler, Ishita Dasgupta, Fei Sha, Sjoerd van Steenkiste, Tal Linzen.
The Impact of Depth on Compositional Generalization in Transformer Language Models (2024). Jackson Petty, Sjoerd van Steenkiste, Ishita Dasgupta, Fei Sha, Dan Garrette, Tal Linzen.
Neural Networks Can Learn Patterns of Island-insensitivity in Norwegian (2023). Anastasia Kobzeva, Suhas Arehalli, Tal Linzen, Dave Kush.
Surprisal does not explain syntactic disambiguation difficulty: evidence from a large-scale benchmark (2023). Kuan-Jung Huang, Suhas Arehalli, Mari Kugemoto, Christian Muxica, Grusha Prasad, Brian Dillon, Tal Linzen.
How Much Do Language Models Copy From Their Training Data? Evaluating Linguistic Novelty in Text Generation Using RAVEN (2023). R. Thomas McCoy, Paul Smolensky, Tal Linzen, Jianfeng Gao, Aslı Çelikyılmaz.
How poor is the stimulus? Evaluating hierarchical generalization in neural networks trained on child-directed speech (2023). Aditya Yedetore, Tal Linzen, Robert Frank, R. Thomas McCoy.
How to Plant Trees in Language Models: Data and Architectural Effects on the Emergence of Syntactic Inductive Biases (2023). Aaron Mueller, Tal Linzen.
Language Models Can Learn Exceptions to Syntactic Rules (2023). Cara Su-Yi Leong, Tal Linzen.
Do Language Models Refer? (2023). Matthew Mandelkern, Tal Linzen.
SLOG: A Structural Generalization Benchmark for Semantic Parsing (2023). Bingzhi Li, Lucia Donatelli, Alexander Koller, Tal Linzen, Yuekun Yao, Najoung Kim.
Verb Conjugation in Transformers Is Determined by Linear Encodings of Subject Number (2023). Sophie Hao, Tal Linzen.
A Language Model with Limited Memory Capacity Captures Interference in Human Sentence Processing (2023). William Timkey, Tal Linzen.
The Impact of Depth and Width on Transformer Language Model Generalization (2023). Jackson Petty, Sjoerd van Steenkiste, Ishita Dasgupta, Fei Sha, Dan Garrette, Tal Linzen.
A Systematic Comparison of Syllogistic Reasoning in Humans and Language Models (2023). Tiwalayo Eisape, M. H. Tessler, Ishita Dasgupta, Fei Sha, Sjoerd van Steenkiste, Tal Linzen.
In-context Learning Generalizes, But Not Always Robustly: The Case of Syntax (2023). Aaron Mueller, Albert Webson, Jackson Petty, Tal Linzen.
LSTMs Can Learn Basic Wh- and Relative Clause Dependencies in Norwegian (2022). Anastasia Kobzeva, Suhas Arehalli, Tal Linzen, Dave Kush.
Syntactic Intervention cannot explain agreement attraction in English wh-questions (2022). Suhas Arehalli, Tal Linzen, Géraldine Legendre.
Improving Compositional Generalization with Latent Structure and Data Augmentation (2022). Linlu Qiu, Peter Shaw, Panupong Pasupat, Paweł Krzysztof Nowak, Tal Linzen, Fei Sha, Kristina Toutanova.
Coloring the Blank Slate: Pre-training Imparts a Hierarchical Inductive Bias to Sequence-to-sequence Models (2022). Aaron Mueller, Robert Frank, Tal Linzen, Luheng Wang, Sebastian Schuster.
When a sentence does not introduce a discourse entity, Transformer-based models still sometimes refer to it (2022). Sebastian Schuster, Tal Linzen.
Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models (2022). Aarohi Srivastava, Abhinav Rastogi, Abhishek S. Rao, Abu Awal Shoeb, Abubakar Abid, Adam Fisch, Adam R. Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al.
Evaluating Attribution in Dialogue Systems: The BEGIN Benchmark (2022). Nouha Dziri, Hannah Rashkin, Tal Linzen, David Reitter.
Entailment Semantics Can Be Extracted from an Ideal Language Model (2022). William Merrill, Alex Warstadt, Tal Linzen.
Syntactic Surprisal From Neural Models Predicts, But Underestimates, Human Processing Difficulty From Syntactic Ambiguities (2022). Suhas Arehalli, Brian Dillon, Tal Linzen.
Characterizing Verbatim Short-Term Memory in Neural Language Models (2022). Kristijan Armeni, Christopher Honey, Tal Linzen.
Causal Analysis of Syntactic Agreement Neurons in Multilingual Language Models (2022). Aaron Mueller, Yu Xia, Tal Linzen.
Uncontrolled Lexical Exposure Leads to Overestimation of Compositional Generalization in Pretrained Models (2022). Najoung Kim, Tal Linzen, Paul Smolensky.
Improving Compositional Generalization with Latent Structure and Data Augmentation (2021). Linlu Qiu, Peter Shaw, Panupong Pasupat, Paweł Krzysztof Nowak, Tal Linzen, Fei Sha, Kristina Toutanova.
How much do language models copy from their training data? Evaluating linguistic novelty in text generation using RAVEN (2021). R. Thomas McCoy, Paul Smolensky, Tal Linzen, Jianfeng Gao, Aslı Çelikyılmaz.
Learning to Generalize Compositionally by Transferring Across Semantic Parsing Tasks (2021). Wang Zhu, Peter Shaw, Tal Linzen, Fei Sha.
The Language Model Understood the Prompt was Ambiguous: Probing Syntactic Uncertainty Through Generation (2021). Laura Aina, Tal Linzen.
Rapid syntactic adaptation in self-paced reading: Detectable, but only with many participants (2021). Grusha Prasad, Tal Linzen.
The MultiBERTs: BERT Reproductions for Robustness Analysis (2021). Thibault Sellam, Steve Yadlowsky, Jason Wei, Naomi Saphra, Alexander D'Amour, Tal Linzen, Jasmijn Bastings, Iulia Turc, Jacob Eisenstein, Dipanjan Das, et al.
Single-Stage Prediction Models Do Not Explain the Magnitude of Syntactic Disambiguation Difficulty (2021). Marten van Schijndel, Tal Linzen.
Evaluating Groundedness in Dialogue Systems: The BEGIN Benchmark (2021). Nouha Dziri, Hannah Rashkin, Tal Linzen, David Reitter.
Does Putting a Linguist in the Loop Improve NLU Data Collection? (2021). Alicia Parrish, William C. Huang, Omar Agha, Soo-Hwan Lee, Nikita Nangia, Alex Warstadt, Karmanya Aggarwal, Emily Allaway, Tal Linzen, Samuel R. Bowman.
Counterfactual Interventions Reveal the Causal Effect of Relative Clause Representations on Agreement Prediction (2021). Shauli Ravfogel, Grusha Prasad, Tal Linzen, Yoav Goldberg.
Frequency Effects on Syntactic Rule Learning in Transformers (2021). Jason Wei, Dan Garrette, Tal Linzen, Ellie Pavlick.
NOPE: A Corpus of Naturally-Occurring Presuppositions in English (2021). Alicia Parrish, Sebastian Schuster, Alex Warstadt, Omar Agha, Soo-Hwan Lee, Zhuoye Zhao, Samuel R. Bowman, Tal Linzen.
Causal Analysis of Syntactic Agreement Mechanisms in Neural Language Models (2021). Matthew Finlayson, Aaron Mueller, Sebastian Gehrmann, Stuart M. Shieber, Tal Linzen, Yonatan Belinkov.
Syntactic Structure from Deep Learning (2020). Tal Linzen, Marco Baroni.
Single-stage prediction models do not explain the magnitude of syntactic disambiguation difficulty (2020). Marten van Schijndel, Tal Linzen.
Priming syntactic ambiguity resolution in children and adults (2020). Naomi Havron, Camila Scaff, M. Julia Carbajal, Tal Linzen, Axel Barrault, Anne Christophe.
Does Syntax Need to Grow on Trees? Sources of Hierarchical Inductive Bias in Sequence-to-Sequence Networks (2020). R. Thomas McCoy, Robert Frank, Tal Linzen.
Neural Language Models Capture Some, But Not All, Agreement Attraction Effects (2020). Suhas Arehalli, Tal Linzen.
Syntactic Data Augmentation Increases Robustness to Inference Heuristics (2020). Junghyun Min, R. Thomas McCoy, Dipanjan Das, Emily Pitler, Tal Linzen.
Cross-Linguistic Syntactic Evaluation of Word Prediction Models (2020). Aaron Mueller, Garrett Nicolai, Panayiota Petrou-Zeniou, Natalia Talmina, Tal Linzen.
How Can We Accelerate Progress Towards Human-like Linguistic Generalization? (2020). Tal Linzen.
Representations of Syntax [MASK] Useful: Effects of Constituency and Dependency Structure in Recursive LSTMs (2020). Michael A. Lepori, Tal Linzen, R. Thomas McCoy.
Universal linguistic inductive biases via meta-learning (2020). R. Thomas McCoy, Erin Grant, Paul Smolensky, Thomas L. Griffiths, Tal Linzen.
COGS: A Compositional Generalization Challenge Based on Semantic Interpretation (2020). Najoung Kim, Tal Linzen.
BERTs of a feather do not generalize together: Large variability in generalization across models with similar test set performance (2020). R. Thomas McCoy, Junghyun Min, Tal Linzen.
Discovering the Compositional Structure of Vector Representations with Role Learning Networks (2020). Paul Soulos, R. Thomas McCoy, Tal Linzen, Paul Smolensky.
Neural network surprisal predicts the existence but not the magnitude of human syntactic disambiguation difficulty (2019). Marten van Schijndel, Tal Linzen.
Discovering the Compositional Structure of Vector Representations with Role Learning Networks (2019). Paul Soulos, R. Thomas McCoy, Tal Linzen, Paul Smolensky.
Analyzing and interpreting neural networks for NLP: A report on the first BlackboxNLP workshop (2019). Afra Alishahi, Grzegorz Chrupała, Tal Linzen.
Rapid Syntactic Adaptation in Self-Paced Reading: Detectable, but Only With Many Participants (2019). Grusha Prasad, Tal Linzen.
Studying the Inductive Biases of RNNs with Synthetic Variations of Natural Languages (2019). Shauli Ravfogel, Yoav Goldberg, Tal Linzen.
Human few-shot learning of compositional instructions (2019). Brenden M. Lake, Tal Linzen, Marco Baroni.
Probing What Different NLP Tasks Teach Machines about Function Word Comprehension (2019). Najoung Kim, Roma Patel, Adam Poliak, Patrick Xia, Alex Wang, R. Thomas McCoy, Ian Tenney, Alexis Ross, Tal Linzen, Benjamin Van Durme.
Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference (2019). R. Thomas McCoy, Ellie Pavlick, Tal Linzen.
Quantity doesn't buy quality syntax with neural language models (2019). Marten van Schijndel, Aaron Mueller, Tal Linzen.
Using Priming to Uncover the Organization of Syntactic Representations in Neural Language Models (2019). Grusha Prasad, Marten van Schijndel, Tal Linzen.
BERTs of a feather do not generalize together: Large variability in generalization across models with similar test set performance (2019). R. Thomas McCoy, Junghyun Min, Tal Linzen.
Non-entailed subsequences as a challenge for natural language inference (2018). R. Thomas McCoy, Tal Linzen.
What can linguistics and deep learning contribute to each other? (2018). Tal Linzen.
Colorless green recurrent networks dream hierarchically (2018). Kristina Gulordava, Piotr Bojanowski, Édouard Grave, Tal Linzen, Marco Baroni.
Revisiting the poverty of the stimulus: hierarchical generalization without a hierarchical bias in recurrent neural networks (2018). R. Thomas McCoy, Robert Frank, Tal Linzen.
Distinct patterns of syntactic agreement errors in recurrent networks and humans (2018). Tal Linzen, Brian Leonard.
Targeted Syntactic Evaluation of Language Models (2018). Rebecca Marvin, Tal Linzen.
Can Entropy Explain Successor Surprisal Effects in Reading? (2018). Marten van Schijndel, Tal Linzen.
RNNs Implicitly Implement Tensor Product Representations (2018). R. Thomas McCoy, Tal Linzen, Ewan Dunbar, Paul Smolensky.
Phonological (un)certainty weights lexical activation (2018). Laura Gwilliams, David Poeppel, Alec Marantz, Tal Linzen.
A Neural Model of Adaptation in Reading (2018). Marten van Schijndel, Tal Linzen.
Phonological (un)certainty weights lexical activation (2017). Laura Gwilliams, David Poeppel, Alec Marantz, Tal Linzen.
Exploring the Syntactic Abilities of RNNs with Multi-task Learning (2017). Émile Enguehard, Yoav Goldberg, Tal Linzen.
Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies (2016). Tal Linzen, Emmanuel Dupoux, Yoav Goldberg.
Issues in evaluating semantic spaces using word analogies (2016). Tal Linzen.