Amirreza Mirzaei
All published works

ComAlign: Compositional Alignment in Vision-Language Models (2024). Ali Abdollah, Ahmad Izadi, Armin Saghafian, Reza Vahidimajd, Mohammad Mozafari, Amirreza Mirzaei, Mohammadmahdi Samiei, Mahdieh Soleymani Baghshah.
Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks (2022). Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, Atharva Naik, Arjun Ashok, Arut Selvan Dhanasekaran, Anjana Arunkumar, David Stap.
Common Coauthors

Coauthors on 2 papers: Atharva Naik, Arut Selvan Dhanasekaran, Neeraj Varshney, Pegah Alipoormolabashi, Arjun Ashok, Sumanta Patro, David Stap, Ravsehaj Singh Puri, Rushang Karia, Maitreya Patel, Krima Doshi, Phani Rohitha Kaza, Shailaja Keyur Sampat, Kuntal Kumar Pal, Pulkit Verma, Savan Doshi, Swaroop Mishra, Ishan Purohit, Giannis Karamanolakis, Tanay Dixit, Kirby Kuznia, Anjana Arunkumar, Mirali Purohit, Ishani Mondal, Yizhong Wang, Siddhartha Mishra, Jacob Anderson, Mihir Parmar, Mehrad Moradshahi, Yeganeh Kordi, Eshaan Pathak.
Coauthors on 1 paper: Haizhi Gary Lai, Ali Abdollah, Hannaneh Hajishirzi, Ahmad Izadi, Chitta Baral, Sujan Reddy, Armin Saghafian, Daniel Khashabi, Mohammadmahdi Samiei, Noah A. Smith, Mohammad Mozafari, Sujan Reddy A, Xudong Shen, Haizhi Lai, Reza Vahidimajd, Mahdieh Soleymani Baghshah, Yejin Choi.
Commonly Cited References (each of the following works is referenced once)

A Joint Many-Task Model: Growing a Neural Network for Multiple NLP Tasks (2017). Kazuma Hashimoto, Caiming Xiong, Yoshimasa Tsuruoka, Richard Socher.
JFLEG: A Fluency Corpus and Benchmark for Grammatical Error Correction (2017). Courtney Napoles, Keisuke Sakaguchi, Joel Tetreault.
Deep Learning Scaling is Predictable, Empirically (2017). Joel Hestness, Sharan Narang, Newsha Ardalani, Gregory Diamos, Heewoo Jun, Hassan Kianinejad, Md. Mostofa Ali Patwary, Yang Yang, Yanqi Zhou.
Revisiting Unreasonable Effectiveness of Data in Deep Learning Era (2017). Chen Sun, Abhinav Shrivastava, Saurabh Singh, Abhinav Gupta.
Meta-Dataset: A Dataset of Datasets for Learning to Learn from Few Examples (2019). Eleni Triantafillou, Tyler Zhu, Vincent Dumoulin, Pascal Lamblin, Utku Evci, Kelvin Xu, Ross Goroshin, Carles Gelada, Kevin Swersky, Pierre-Antoine Manzagol.
WinoGrande: An Adversarial Winograd Schema Challenge at Scale (2020). Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, Yejin Choi.
ALFRED: A Benchmark for Interpreting Grounded Instructions for Everyday Tasks (2020). Mohit Shridhar, Jesse Thomason, Daniel Gordon, Yonatan Bisk, Winson Han, Roozbeh Mottaghi, Luke Zettlemoyer, Dieter Fox.
Understanding Points of Correspondence between Sentences for Abstractive Summarization (2020). Logan Lebanoff, John Muchovej, Franck Dernoncourt, Doo Soon Kim, Lidan Wang, Walter Chang, Fei Liu.
UNIFIEDQA: Crossing Format Boundaries with a Single QA System (2020). Daniel Khashabi, Sewon Min, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter E. Clark, Hannaneh Hajishirzi.
Learning from Task Descriptions (2020). Orion Weller, Nicholas Lourie, Matt Gardner, Matthew E. Peters.
Beat the AI: Investigating Adversarial Human Annotation for Reading Comprehension (2020). Max Bartolo, Alastair Roberts, Johannes Welbl, Sebastian Riedel, Pontus Stenetorp.
Author's Sentiment Prediction (2020). Mohaddeseh Bastan, Mahnaz Koupaee, Youngseo Son, Richard Sicoli, Niranjan Balasubramanian.
CrossFit: A Few-shot Learning Challenge for Cross-task Generalization in NLP (2021). Qinyuan Ye, Bill Yuchen Lin, Xiang Ren.
mT5: A Massively Multilingual Pre-trained Text-to-Text Transformer (2021). Linting Xue, Noah Constant, Adam P. Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel.
CaSiNo: A Corpus of Campsite Negotiation Dialogues for Automatic Negotiation Systems (2021). Kushal Chawla, Jaysa Ramirez, Rene Clever, Gale Lucas, Jonathan May, Jonathan Gratch.
Disfl-QA: A Benchmark Dataset for Understanding Disfluencies in Question Answering (2021). Aditya Gupta, Jiacheng Xu, Shyam Upadhyay, Diyi Yang, Manaal Faruqui.
Learning to Generate Task-Specific Adapters from Task Description (2021). Qinyuan Ye, Xiang Ren.
Adapting Language Models for Zero-shot Learning by Meta-tuning on Dataset and Prompt Collections (2021). Ruiqi Zhong, Kristy Lee, Zheng Zhang, Dan Klein.
Cross-Task Generalization via Natural Language Crowdsourcing Instructions (2022). Swaroop Mishra, Daniel Khashabi, Chitta Baral, Hannaneh Hajishirzi.
Reframing Instructional Prompts to GPTk's Language (2022). Swaroop Mishra, Daniel Khashabi, Chitta Baral, Yejin Choi, Hannaneh Hajishirzi.
Multitask Prompted Training Enables Zero-Shot Task Generalization (2021). Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja.
The Power of Scale for Parameter-Efficient Prompt Tuning (2021). Brian Lester, Rami Al-Rfou, Noah Constant.
One-Shot Learning from a Demonstration with Hierarchical Latent Language (2022). Nathaniel Weir, Xingdi Yuan, Marc-Alexandre Côté, Matthew Hausknecht, Romain Laroche, Ida Momennejad, Harm van Seijen, Benjamin Van Durme.
Training language models to follow instructions with human feedback (2022). Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray.
ExT5: Towards Extreme Multi-Task Scaling for Transfer Learning (2021). Vamsi Aribandi, Yi Tay, Tal Schuster, Jinfeng Rao, Huaixiu Zheng, Sanket Vaibhav Mehta, Honglei Zhuang, Vinh Q. Tran, Dara Bahri, Jianmo Ni.
Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models (2022). Aarohi Srivastava, Abhinav Rastogi, Abhishek S. Rao, Abu Awal Shoeb, Abubakar Abid, Adam Fisch, Adam R. Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso.
PromptSource: An Integrated Development Environment and Repository for Natural Language Prompts (2022). Stephen Bach, Victor Sanh, Zheng Yong, Albert Webson, Colin Raffel, Nihal V. Nayak, Abheesht Sharma, Taewoon Kim, M Saiful Bari, Thibault Févry.
FILM: Following Instructions in Language with Modular Methods (2021). So-Yeon Min, Devendra Singh Chaplot, Pradeep Ravikumar, Yonatan Bisk, Ruslan Salakhutdinov.
Finetuned Language Models Are Zero-Shot Learners (2021). Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, Quoc V. Le.
FLEX: Unifying Evaluation for Few-Shot NLP (2021). Jonathan Bragg, Arman Cohan, Kyle Lo, Iz Beltagy.
In-BoXBART: Get Instructions into Biomedical Multi-Task Learning (2022). Mihir Parmar, Swaroop Mishra, Mirali Purohit, Man Luo, Murad Mohammad, Chitta Baral.
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer (2019). Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu.
Language Models are Few-Shot Learners (2020). T. B. Brown, Benjamin F. Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell.
The Natural Language Decathlon: Multitask Learning as Question Answering (2018). Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, Richard Socher.
The Turking Test: Can Language Models Understand Instructions? (2020). Avia Efrat, Omer Levy.
MetaICL: Learning to Learn In Context (2021). Sewon Min, Michael Lewis, Luke Zettlemoyer, Hannaneh Hajishirzi.
ZeroPrompt: Scaling Prompt-Based Pretraining to 1,000 Tasks Improves Zero-Shot Generalization (2022). Hanwei Xu, Yujun Chen, Yulun Du, Nan Shao, Wang Yanggang, Haiyu Li, Zhilin Yang.
UnifiedSKG: Unifying and Multi-Tasking Structured Knowledge Grounding with Text-to-Text Language Models (2022). Tianbao Xie, Chen Wu, Peng Shi, Ruiqi Zhong, Torsten Scholak, Michihiro Yasunaga, Chien-Sheng Wu, Ming Zhong, Pengcheng Yin, Sida I. Wang.
Can language models learn from explanations in context? (2022). Andrew K. Lampinen, Ishita Dasgupta, Stephanie C. Y. Chan, Kory W. Mathewson, Mh Tessler, Antonia Creswell, James L. McClelland, Jane Wang, Felix Hill.