Veselin Stoyanov

All published works
Towards A Unified View of Sparse Feed-Forward Network in Pretraining Large Language Model (2023). Leo Z. Liu, Tim Dettmers, Xi Victoria Lin, Veselin Stoyanov, Xian Li.
bgGLUE: A Bulgarian General Language Understanding Evaluation Benchmark (2023). Momchil Hardalov, Pepa Atanasova, Todor Mihaylov, Galia Angelova, Kiril Simov, Petya Osenova, Veselin Stoyanov, Ivan Koychev, Preslav Nakov, Dragomir Radev.
Training Trajectories of Language Models Across Scales (2023). Mengzhou Xia, Mikel Artetxe, Chunting Zhou, Xi Victoria Lin, Ramakanth Pasunuru, Danqi Chen, Luke Zettlemoyer, Veselin Stoyanov.
Complementary Explanations for Effective In-Context Learning (2023). Xi Ye, Srinivasan Iyer, Aslı Çelikyılmaz, Veselin Stoyanov, Greg Durrett, Ramakanth Pasunuru.
PERFECT: Prompt-free and Efficient Few-shot Learning with Language Models (2022). Rabeeh Karimi Mahabadi, Luke Zettlemoyer, James Henderson, Marzieh Saeidi, Lambert Mathias, Veselin Stoyanov, Majid Yazdani.
Improving In-Context Few-Shot Learning via Self-Supervised Training (2022). Mingda Chen, Jingfei Du, Ramakanth Pasunuru, Todor Mihaylov, Srini Iyer, Veselin Stoyanov, Zornitsa Kozareva.
Efficient Large Scale Language Modeling with Mixtures of Experts (2022). Mikel Artetxe, Shruti Bhosale, Naman Goyal, Todor Mihaylov, Myle Ott, Sam Shleifer, Xi Victoria Lin, Jingfei Du, Srinivasan Iyer, Ramakanth Pasunuru.
ToKen: Task Decomposition and Knowledge Infusion for Few-Shot Hate Speech Detection (2022). Badr AlKhamissi, Faisal Ladhak, Srinivasan Iyer, Veselin Stoyanov, Zornitsa Kozareva, Xian Li, Pascale Fung, Lambert Mathias, Aslı Çelikyılmaz, Mona Diab.
On the Role of Bidirectionality in Language Model Pre-Training (2022). Mikel Artetxe, Jingfei Du, Naman Goyal, Luke Zettlemoyer, Veselin Stoyanov.
Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained Models (2022). Mengzhou Xia, Mikel Artetxe, Jingfei Du, Danqi Chen, Veselin Stoyanov.
Do Language Models Have Beliefs? Methods for Detecting, Updating, and Visualizing Model Beliefs (2021). Peter Hase, Mona Diab, Aslı Çelikyılmaz, Xian Li, Zornitsa Kozareva, Veselin Stoyanov, Mohit Bansal, Srinivasan Iyer.
Multi-task Retrieval for Knowledge-Intensive Tasks (2021). Jean Maillard, Vladimir Karpukhin, Fabio Petroni, Wen-tau Yih, Barlas Oğuz, Veselin Stoyanov, Gargi Ghosh.
Self-training Improves Pre-training for Natural Language Understanding (2021). Jingfei Du, Édouard Grave, Beliz Gunel, Vishrav Chaudhary, Onur Çelebi, Michael Auli, Veselin Stoyanov, Alexis Conneau.
Few-shot Learning with Multilingual Language Models (2021). Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du.
General Purpose Text Embeddings from Pre-trained Language Models for Scalable Inference (2020). Jingfei Du, Myle Ott, Haoran Li, Xing Zhou, Veselin Stoyanov.
BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension (2020). Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, Luke Zettlemoyer.
Unsupervised Cross-lingual Representation Learning at Scale (2020). Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Édouard Grave, Myle Ott, Luke Zettlemoyer, Veselin Stoyanov.
Emerging Cross-lingual Structure in Pretrained Language Models (2020). Alexis Conneau, Shijie Wu, Haoran Li, Luke Zettlemoyer, Veselin Stoyanov.
Conversational Semantic Parsing (2020). Armen Aghajanyan, Jean Maillard, Akshat Shrivastava, Keith Diedrick, M. Haeger, Haoran Li, Yashar Mehdad, Veselin Stoyanov, Anuj Kumar, Mike Lewis.
Pretrained Encyclopedia: Weakly Supervised Knowledge-Pretrained Language Model (2019). Wenhan Xiong, Jingfei Du, William Yang Wang, Veselin Stoyanov.
SemEval-2015 Task 10: Sentiment Analysis in Twitter (2019). Sara Rosenthal, Saif M. Mohammad, Preslav Nakov, Alan Ritter, Svetlana Kiritchenko, Veselin Stoyanov.
SemEval-2016 Task 4: Sentiment Analysis in Twitter (2019). Preslav Nakov, Alan Ritter, Sara Rosenthal, Fabrizio Sebastiani, Veselin Stoyanov.
Unsupervised Cross-lingual Representation Learning at Scale (2019). Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Édouard Grave, Myle Ott, Luke Zettlemoyer, Veselin Stoyanov.
Knowledge-Augmented Language Model and its Application to Unsupervised Named-Entity Recognition (2019). Angli Liu, Jingfei Du, Veselin Stoyanov.
RoBERTa: A Robustly Optimized BERT Pretraining Approach (2019). Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov.
Bridging the domain gap in cross-lingual document classification (2019). Guokun Lai, Barlas Oğuz, Yiming Yang, Veselin Stoyanov.
Emerging Cross-lingual Structure in Pretrained Language Models (2019). Shijie Wu, Alexis Conneau, Haoran Li, Luke Zettlemoyer, Veselin Stoyanov.
SemEval-2014 Task 9: Sentiment Analysis in Twitter (2019). Sara Brin Rosenthal, Preslav Nakov, Alan Ritter, Veselin Stoyanov.
SemEval-2013 Task 2: Sentiment Analysis in Twitter (2019). Preslav Nakov, Zornitsa Kozareva, Alan Ritter, Sara Brin Rosenthal, Veselin Stoyanov, Theresa Wilson.
XNLI: Evaluating Cross-lingual Sentence Representations (2018). Alexis Conneau, Guillaume Lample, Ruty Rinott, Adina Williams, Samuel R. Bowman, Holger Schwenk, Veselin Stoyanov.
Simple Fusion: Return of the Language Model (2018). Felix Stahlberg, James Cross, Veselin Stoyanov.
SemEval-2016 Task 4: Sentiment Analysis in Twitter (2016). Preslav Nakov, Alan Ritter, Sara Rosenthal, Fabrizio Sebastiani, Veselin Stoyanov.
SemEval-2015 Task 10: Sentiment Analysis in Twitter (2015). Sara Rosenthal, Preslav Nakov, Svetlana Kiritchenko, Saif M. Mohammad, Alan Ritter, Veselin Stoyanov.
SemEval-2014 Task 9: Sentiment Analysis in Twitter (2014). Sara Rosenthal, Alan Ritter, Preslav Nakov, Veselin Stoyanov.
Commonly Cited References
RoBERTa: A Robustly Optimized BERT Pretraining Approach (2019). Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov. Referenced 17 times.
Deep Contextualized Word Representations (2018). Matthew E. Peters, Mark E Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, Luke Zettlemoyer. Referenced 12 times.
Attention is All you Need (2017). Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, Illia Polosukhin. Referenced 11 times.
Cross-lingual Language Model Pretraining (2019). Guillaume Lample, Alexis Conneau. Referenced 9 times.
fairseq: A Fast, Extensible Toolkit for Sequence Modeling (2019). Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, Michael Auli. Referenced 8 times.
Language Models are Few-Shot Learners (2020). T. B. Brown, Benjamin F. Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell. Referenced 8 times.
SQuAD: 100,000+ Questions for Machine Comprehension of Text (2016). Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang. Referenced 8 times.
A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference (2018). Adina Williams, Nikita Nangia, Samuel Bowman. Referenced 8 times.
Adam: A Method for Stochastic Optimization (2014). Diederik P. Kingma, Jimmy Ba. Referenced 8 times.
Exploiting Similarities among Languages for Machine Translation (2013). Tomáš Mikolov, Quoc V. Le, Ilya Sutskever. Referenced 8 times.
Universal Language Model Fine-tuning for Text Classification (2018). Jeremy Howard, Sebastian Ruder. Referenced 7 times.
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (2018). Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova. Referenced 7 times.
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer (2019). Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. Referenced 7 times.
Attention Is All You Need (2017). Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, Illia Polosukhin. Referenced 6 times.
XLNet: Generalized Autoregressive Pretraining for Language Understanding (2019). Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le. Referenced 6 times.
GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding (2018). Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel Bowman. Referenced 6 times.
Neural Machine Translation of Rare Words with Subword Units (2016). Rico Sennrich, Barry Haddow, Alexandra Birch. Referenced 5 times.
XNLI: Evaluating Cross-lingual Sentence Representations (2018). Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel Bowman, Holger Schwenk, Veselin Stoyanov. Referenced 5 times.
BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension (2020). Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, Luke Zettlemoyer. Referenced 5 times.
A large annotated corpus for learning natural language inference (2015). Samuel R. Bowman, Gabor Angeli, Christopher Potts, Christopher D. Manning. Referenced 5 times.
Supervised Learning of Universal Sentence Representations from Natural Language Inference Data (2017). Alexis Conneau, Douwe Kiela, Holger Schwenk, Loïc Barrault, Antoine Bordes. Referenced 5 times.
PaLM: Scaling Language Modeling with Pathways (2022). Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann. Referenced 5 times.
OPT: Open Pre-trained Transformer Language Models (2022). Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin. Referenced 5 times.
Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books (2015). Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler. Referenced 5 times.
CCNet: Extracting High Quality Monolingual Datasets from Web Crawl Data (2019). Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, Armand Joulin, Édouard Grave. Referenced 5 times.
Neural Machine Translation by Jointly Learning to Align and Translate (2015). Dzmitry Bahdanau, Kyunghyun Cho, Yoshua Bengio. Referenced 4 times.
Neural Architectures for Named Entity Recognition (2016). Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, Chris Dyer. Referenced 4 times.
Deep contextualized word representations (2018). Matthew E. Peters, Mark E Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, Luke Zettlemoyer. Referenced 4 times.
Language Models as Knowledge Bases? (2019). Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander Miller. Referenced 4 times.
Efficient Large Scale Language Modeling with Mixtures of Experts (2022). Mikel Artetxe, Shruti Bhosale, Naman Goyal, Todor Mihaylov, Myle Ott, Sam Shleifer, Xi Victoria Lin, Jingfei Du, Srinivasan Iyer, Ramakanth Pasunuru. Referenced 4 times.
SemEval-2014 Task 9: Sentiment Analysis in Twitter (2014). Sara Rosenthal, Alan Ritter, Preslav Nakov, Veselin Stoyanov. Referenced 4 times.
Unsupervised Data Augmentation for Consistency Training (2019). Qizhe Xie, Zihang Dai, Eduard Hovy, Minh-Thang Luong, Quoc V. Le. Referenced 4 times.
Beto, Bentz, Becas: The Surprising Cross-Lingual Effectiveness of BERT (2019). Shijie Wu, Mark Dredze. Referenced 4 times.
Cross-Lingual Alignment of Contextual Word Embeddings, with Applications to Zero-shot Dependency Parsing (2019). Tal Schuster, Ori Ram, Regina Barzilay, Amir Globerson. Referenced 4 times.
Sequence to Sequence Learning with Neural Networks (2014). Ilya Sutskever, Oriol Vinyals, Quoc V. Le. Referenced 4 times.
Unicoder: A Universal Language Encoder by Pre-training with Multiple Cross-lingual Tasks (2019). Haoyang Huang, Yaobo Liang, Nan Duan, Ming Gong, Linjun Shou, Daxin Jiang, Ming Zhou. Referenced 4 times.
A Neural Knowledge Language Model (2016). Sungjin Ahn, Heeyoul Choi, Tanel Pärnamaa, Yoshua Bengio. Referenced 4 times.
Gaussian Error Linear Units (GELUs) (2016). Dan Hendrycks, Kevin Gimpel. Referenced 4 times.
ReCoRD: Bridging the Gap between Human and Machine Commonsense Reading Comprehension (2018). Sheng Zhang, Xiaodong Liu, Jun Liu, Jianfeng Gao, Kevin Duh, Benjamin Van Durme. Referenced 4 times.
Introduction to the CoNLL-2002 Shared Task: Language-Independent Named Entity Recognition (2002). Erik F. Tjong Kim Sang. Referenced 4 times.
Subword Regularization: Improving Neural Network Translation Models with Multiple Subword Candidates (2018). Taku Kudo. Referenced 3 times.
Teaching Machines to Read and Comprehend (2015). Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, Phil Blunsom. Referenced 3 times.
Multi-Task Deep Neural Networks for Natural Language Understanding (2019). Xiaodong Liu, Pengcheng He, Weizhu Chen, Jianfeng Gao. Referenced 3 times.
A Simple Method for Commonsense Reasoning (2018). Trieu H. Trinh, Quoc V. Le. Referenced 3 times.
Learned in translation: contextualized word vectors (2017). Bryan McCann, James Bradbury, Caiming Xiong, Richard Socher. Referenced 3 times.
FEVER: a Large-scale Dataset for Fact Extraction and VERification (2018). James Thorne, Andreas Vlachos, Christos Christodoulopoulos, Arpit Mittal. Referenced 3 times.
Improving Neural Machine Translation Models with Monolingual Data (2016). Rico Sennrich, Barry Haddow, Alexandra Birch. Referenced 3 times.
SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing (2018). Taku Kudo, John T. E. Richardson. Referenced 3 times.
Character-level Convolutional Networks for Text Classification (2015). Xiang Zhang, Junbo Zhao, Yann LeCun. Referenced 3 times.
Fine-tuned Language Models for Text Classification (2018). Jeremy Howard, Sebastian Ruder. Referenced 3 times.