Panayiota Petrou-Zeniou

All published works

Cross-Linguistic Syntactic Evaluation of Word Prediction Models (2020)
  Aaron Mueller, Garrett Nicolai, Panayiota Petrou-Zeniou, Natalia Talmina, Tal Linzen

Common Coauthors

Coauthor           Papers Together
Garrett Nicolai    2
Aaron Mueller      2
Natalia Talmina    2
Tal Linzen         2

Commonly Cited References (each referenced once in the works above)

Exploring the Limits of Language Modeling (2016)
  Rafał Józefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu

Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies (2016)
  Tal Linzen, Emmanuel Dupoux, Yoav Goldberg

Continuous multilinguality with language vectors (2017)
  Robert Östling, Jörg Tiedemann

Revisiting the poverty of the stimulus: hierarchical generalization without a hierarchical bias in recurrent neural networks (2018)
  R. Thomas McCoy, Robert Frank, Tal Linzen

How Grammatical is Character-level Neural Machine Translation? Assessing MT Quality with Contrastive Translation Pairs (2017)
  Rico Sennrich

Targeted Syntactic Evaluation of Language Models (2018)
  Rebecca Marvin, Tal Linzen

Can LSTM Learn to Capture Agreement? The Case of Basque (2018)
  Shauli Ravfogel, Yoav Goldberg, Francis M. Tyers

Assessing BERT's Syntactic Abilities (2019)
  Yoav Goldberg

Neural language models as psycholinguistic subjects: Representations of syntactic state (2019)
  Richard Futrell, Ethan Wilcox, Takashi Morita, Peng Qian, Miguel Ballesteros, Roger Lévy

Scalable Syntax-Aware Language Models Using Knowledge Distillation (2019)
  Adhiguna Kuncoro, Chris Dyer, Laura Rimell, Stephen Clark, Phil Blunsom

The Importance of Being Recurrent for Modeling Hierarchical Structure (2018)
  Ke Tran, Arianna Bisazza, Christof Monz

Verb Argument Structure Alternations in Word and Sentence Embeddings (2019)
  Katharina Kann, Alex Warstadt, Adina Williams, Samuel R. Bowman

What do RNN Language Models Learn about Filler-Gap Dependencies? (2018)
  Ethan Wilcox, Roger Lévy, Takashi Morita, Richard Futrell

Attention Is All You Need (2017)
  Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, Illia Polosukhin

Colorless Green Recurrent Networks Dream Hierarchically (2018)
  Kristina Gulordava, Piotr Bojanowski, Édouard Grave, Tal Linzen, Marco Baroni

Do Language Models Understand Anything? On the Ability of LSTMs to Understand Negative Polarity Items (2018)
  Jaap Jumelet, Dieuwke Hupkes

A Challenge Set Approach to Evaluating Machine Translation (2017)
  Pierre Isabelle, Colin Cherry, George Foster

Representation of Constituents in Neural Language Models: Coordination Phrase as a Case Study (2019)
  Aixiu An, Peng Qian, Ethan Wilcox, Roger Lévy

Quantity doesn't buy quality syntax with neural language models (2019)
  Marten van Schijndel, Aaron Mueller, Tal Linzen

Neural Network Acceptability Judgments (2019)
  Alex Warstadt, Amanpreet Singh, Samuel R. Bowman

BLiMP: The Benchmark of Linguistic Minimal Pairs for English (2020)
  Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mohananey, Wei Peng, Sheng-Fu Wang, Samuel R. Bowman

Understanding Cross-Lingual Syntactic Transfer in Multilingual Recurrent Neural Networks (2020)
  Prajit Dhar, Arianna Bisazza

Attribution Analysis of Grammatical Dependencies in LSTMs (2020)
  Yiding Hao

An Analysis of the Utility of Explicit Negative Examples to Improve the Syntactic Abilities of Neural Language Models (2020)
  Hiroshi Noji, Hiroya Takamura

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer (2019)
  Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu