Christopher Honey

Commonly Cited References
Each reference below is cited 1 time.

Learning to transduce with unbounded memory (2015). Edward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, Phil Blunsom.
Neural Machine Translation by Jointly Learning to Align and Translate (2014). Dzmitry Bahdanau, Kyunghyun Cho, Yoshua Bengio.
Using Fast Weights to Attend to the Recent Past (2016). Jimmy Ba, Geoffrey E. Hinton, Volodymyr Mnih, Joel Z. Leibo, Catalin Ionescu.
Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies (2016). Tal Linzen, Emmanuel Dupoux, Yoav Goldberg.
Targeted Syntactic Evaluation of Language Models (2018). Rebecca Marvin, Tal Linzen.
Neural language models as psycholinguistic subjects: Representations of syntactic state (2019). Richard Futrell, Ethan Wilcox, Takashi Morita, Peng Qian, Miguel Ballesteros, Roger Lévy.
The Curious Case of Neural Text Degeneration (2019). Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, Yejin Choi.
Colorless Green Recurrent Networks Dream Hierarchically (2018). Kristina Gulordava, Piotr Bojanowski, Édouard Grave, Tal Linzen, Marco Baroni.
Sharp Nearby, Fuzzy Far Away: How Neural Language Models Use Context (2018). Urvashi Khandelwal, He He, Peng Qi, Dan Jurafsky.
Incorporating Copying Mechanism in Sequence-to-Sequence Learning (2016). Jiatao Gu, Zhengdong Lu, Hang Li, Victor O. K. Li.
Scaling Laws for Neural Language Models (2020). Jared Kaplan, Sam McCandlish, Tom Henighan, T. B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, Dario Amodei.
Multi-scale Transformer Language Models (2020). Sandeep Subramanian, Ronan Collobert, Marc’Aurelio Ranzato, Y-Lan Boureau.
Syntactic Structure from Deep Learning (2020). Tal Linzen, Marco Baroni.
Mechanisms for handling nested dependencies in neural-network language models and humans (2021). Yair Lakretz, Dieuwke Hupkes, Alessandra Vergallito, Marco Marelli, Marco Baroni, Stanislas Dehaene.
What Context Features Can Transformer Language Models Use? (2021). Joe O’Connor, Jacob Andreas.
Linear Transformers Are Secretly Fast Weight Programmers (2021). Imanol Schlag, Kazuki Irie, Jürgen Schmidhuber.
Efficient Transformers: A Survey (2020). Yi Tay, Mostafa Dehghani, Dara Bahri, Donald Metzler.
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer (2019). Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu.
RNNs as psycholinguistic subjects: Syntactic state and grammatical dependency (2018). Richard Futrell, Ethan Wilcox, Takashi Morita, Roger Lévy.
Language Models are Few-Shot Learners (2020). T. B. Brown, Benjamin F. Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell.
In-context Learning and Induction Heads (2022). Catherine Olsson, Nelson Elhage, Neel Nanda, Nicholas Joseph, Nova DasSarma, Tom Henighan, Ben Mann, Amanda Askell, Yuntao Bai, Anna Chen.
Pointer Sentinel Mixture Models (2016). Stephen Merity, Caiming Xiong, James T. Bradbury, Richard Socher.
Regularizing and Optimizing LSTM Language Models (2017). Stephen Merity, Nitish Shirish Keskar, Richard Socher.
Cognitive Psychology for Deep Neural Networks: A Shape Bias Case Study (2017). Samuel B. Ritter, David G. T. Barrett, Adam Santoro, Matt Botvinick.
Attention Is All You Need (2017). Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, Illia Polosukhin.