Wojciech Gajewski

Commonly Cited References
Task-Oriented Query Reformulation with Reinforcement Learning (2017). Rodrigo Nogueira, Kyunghyun Cho. Referenced 2 times.
Ask the Right Questions: Active Question Reformulation with Reinforcement Learning (2018). Christian Buck, Jannis Bulian, Massimiliano Ciaramita, Wojciech Gajewski, Andrea Gesmundo, Neil Houlsby, Wei Wang. Referenced 2 times.
NewsQA: A Machine Comprehension Dataset (2017). Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman. Referenced 1 time.
SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine (2017). Matthew Dunn, Levent Sagun, Mike Higgins, V. Uğur Güney, Volkan Cirik, Kyunghyun Cho. Referenced 1 time.
Adversarial Examples for Evaluating Reading Comprehension Systems (2017). Robin Jia, Percy Liang. Referenced 1 time.
Discrete Autoencoders for Sequence Models (2018). Łukasz Kaiser, Samy Bengio. Referenced 1 time.
HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering (2018). Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W. Cohen, Ruslan Salakhutdinov, Christopher D. Manning. Referenced 1 time.
An Atari Model Zoo for Analyzing, Visualizing, and Comparing Deep Reinforcement Learning Agents (2018). Felipe Petroski Such, Vashisht Madhavan, Rosanne Liu, Rui Wang, Pablo Samuel Castro, Yulun Li, Jiale Zhi, Ludwig Schubert, Marc G. Bellemare, Jeff Clune. Referenced 1 time.
A BERT Baseline for the Natural Questions (2019). Chris Alberti, Kenton Lee, Michael Collins. Referenced 1 time.
The State of Sparsity in Deep Neural Networks (2019). Trevor Gale, Erich Elsen, Sara Hooker. Referenced 1 time.
Neural Speed Reading with Structural-Jump-LSTM (2019). Christian Hansen, Casper Worm Hansen, Stephen Alstrup, Jakob Grue Simonsen, Christina Lioma. Referenced 1 time.
Generating Long Sequences with Sparse Transformers (2019). Rewon Child, Scott Gray, Alec Radford, Ilya Sutskever. Referenced 1 time.
Adaptive Attention Span in Transformers (2019). Sainbayar Sukhbaatar, Édouard Grave, Piotr Bojanowski, Armand Joulin. Referenced 1 time.
Energy and Policy Considerations for Deep Learning in NLP (2019). Emma Strubell, Ananya Ganesh, Andrew McCallum. Referenced 1 time.
Episodic Memory Reader: Learning What to Remember for Question Answering from Streaming Data (2019). Moonsu Han, Minki Kang, Hyun-Woo Jung, Sung Ju Hwang. Referenced 1 time.
Synthetic QA Corpora Generation with Roundtrip Consistency (2019). Chris Alberti, Daniel Andor, Emily Pitler, Jacob Devlin, Michael Collins. Referenced 1 time.
Sequence to Sequence Learning with Neural Networks (2014). Ilya Sutskever, Oriol Vinyals, Quoc V. Le. Referenced 1 time.
Categorical Reparameterization with Gumbel-Softmax (2016). Eric Jang, Shixiang Gu, Ben Poole. Referenced 1 time.
Answering while Summarizing: Multi-task Learning for Multi-hop QA with Evidence Extraction (2019). Kosuke Nishida, Kyosuke Nishida, Masaaki Nagata, Atsushi Otsuka, Itsumi Saito, Hisako Asano, Junji Tomita. Referenced 1 time.
Reinforcement Learning with Unsupervised Auxiliary Tasks (2016). Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z. Leibo, David Silver, Koray Kavukcuoglu. Referenced 1 time.
Latent Retrieval for Weakly Supervised Open Domain Question Answering (2019). Kenton Lee, Ming-Wei Chang, Kristina Toutanova. Referenced 1 time.
Bidirectional Attention Flow for Machine Comprehension (2016). Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi. Referenced 1 time.
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer (2017). Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc V. Le, Geoffrey E. Hinton, Jeff Dean. Referenced 1 time.
Probing Neural Network Comprehension of Natural Language Arguments (2019). Timothy Niven, Hung-Yu Kao. Referenced 1 time.
Deal or No Deal? End-to-End Learning of Negotiation Dialogues (2017). Mike Lewis, Denis Yarats, Yann Dauphin, Devi Parikh, Dhruv Batra. Referenced 1 time.
Know What You Don't Know: Unanswerable Questions for SQuAD (2018). Pranav Rajpurkar, Robin Jia, Percy Liang. Referenced 1 time.
Mesh-TensorFlow: Deep Learning for Supercomputers (2018). Noam Shazeer, Youlong Cheng, Niki Parmar, Dustin Tran, Ashish Vaswani, Penporn Koanantakool, Peter Hawkins, HyoukJoong Lee, Mingsheng Hong, Cliff Young. Referenced 1 time.
Did the Model Understand the Question? (2018). Pramod Kaushik Mudrakarta, Ankur Taly, Mukund Sundararajan, Kedar Dhamdhere. Referenced 1 time.
SQuAD: 100,000+ Questions for Machine Comprehension of Text (2016). Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang. Referenced 1 time.
A Discourse-Aware Attention Model for Abstractive Summarization of Long Documents (2018). Arman Cohan, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Seokhwan Kim, Walter Chang, Nazli Goharian. Referenced 1 time.
The Fact Extraction and VERification (FEVER) Shared Task (2018). James Thorne, Andreas Vlachos, Oana Cocarascu, Christos Christodoulopoulos, Arpit Mittal. Referenced 1 time.
Neural Machine Translation by Jointly Learning to Align and Translate (2015). Dzmitry Bahdanau, Kyunghyun Cho, Yoshua Bengio. Referenced 1 time.
RoBERTa: A Robustly Optimized BERT Pretraining Approach (2019). Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov. Referenced 1 time.
Trick Me If You Can: Human-in-the-Loop Generation of Adversarial Examples for Question Answering (2019). Eric Wallace, Pedro Rodríguez, Shi Feng, Ikuya Yamada, Jordan Boyd-Graber. Referenced 1 time.
Revealing the Importance of Semantic Retrieval for Machine Reading at Scale (2019). Yixin Nie, Songhe Wang, Mohit Bansal. Referenced 1 time.
Enhancing the Locality and Breaking the Memory Bottleneck of Transformer on Time Series Forecasting (2019). Shiyang Li, Xiaoyong Jin, Yao Xuan, Xiyou Zhou, Wenhu Chen, Yu-Xiang Wang, Xifeng Yan. Referenced 1 time.
DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter (2019). Victor Sanh, Lysandre Debut, Julien Chaumond, Thomas Wolf. Referenced 1 time.
Gumbel-Matrix Routing for Flexible Multi-task Learning (2019). Krzysztof Maziarz, Efi Kokiopoulou, Andrea Gesmundo, Luciano Sbaiz, Gábor Bartók, Jesse Berent. Referenced 1 time.
Optimizing agent behavior over long time scales by transporting value (2019). Chia-Chun Hung, Timothy Lillicrap, Josh Abramson, Yan Wu, Mehdi Mirza, Federico Carnevale, Arun Ahuja, Greg Wayne. Referenced 1 time.
Efficient Content-Based Sparse Attention with Routing Transformers (2020). Aurko Roy, Mohammad Saffar, Ashish Vaswani, David Grangier. Referenced 1 time.
Q-BERT: Hessian Based Ultra Low Precision Quantization of BERT (2020). Sheng Shen, Zhen Dong, Jiayu Ye, Linjian Ma, Zhewei Yao, Amir Gholami, Michael W. Mahoney, Kurt Keutzer. Referenced 1 time.
Reformer: The Efficient Transformer (2020). Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya. Referenced 1 time.
Scaling Laws for Neural Language Models (2020). Jared Kaplan, Sam McCandlish, Tom Henighan, T. B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, Dario Amodei. Referenced 1 time.
Fully Quantized Transformer for Machine Translation (2019). Gabriele Prato, Ella Charlaix, Mehdi Rezagholizadeh. Referenced 1 time.
Successfully Applying the Stabilized Lottery Ticket Hypothesis to the Transformer Architecture (2020). Christopher Brix, Parnia Bahar, Hermann Ney. Referenced 1 time.
HAT: Hardware-Aware Transformers for Efficient Natural Language Processing (2020). Hanrui Wang, Zhanghao Wu, Zhijian Liu, Han Cai, Ligeng Zhu, Chuang Gan, Song Han. Referenced 1 time.
Language Models are Few-Shot Learners (2020). T. B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell. Referenced 1 time.
Interactive Machine Comprehension with Information Seeking Agents (2020). Xingdi Yuan, Jie Fu, Marc-Alexandre Côté, Yi Tay, Chris Pal, Adam Trischler. Referenced 1 time.
MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices (2020). Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, Denny Zhou. Referenced 1 time.