Lan Jiang

Commonly Cited References
Title | Year | Authors | # of times referenced
GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding | 2018 | Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel Bowman | 2
FAQ-based Question Answering via Word Alignment | 2015 | Zhiguo Wang, Abraham Ittycheriah | 1
Explaining and Harnessing Adversarial Examples | 2014 | Ian Goodfellow, Jonathon Shlens, Christian Szegedy | 1
A Deep Architecture for Semantic Matching with Multiple Positional Sentence Representations | 2016 | Shengxian Wan, Yanyan Lan, Jiafeng Guo, Jun Xu, Liang Pang, Xueqi Cheng | 1
Text Matching as Image Recognition | 2016 | Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Shengxian Wan, Xueqi Cheng | 1
Entropy-SGD: biasing gradient descent into wide valleys | 2019 | Pratik Chaudhari, Anna Choromanska, Stefano Soatto, Yann LeCun, Carlo Baldassi, Christian Borgs, Jennifer Chayes, Levent Sagun, Riccardo Zecchina | 1
Bilateral Multi-Perspective Matching for Natural Language Sentences | 2017 | Zhiguo Wang, Wael Hamza, Radu Florian | 1
Enhanced LSTM for Natural Language Inference | 2017 | Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, Diana Inkpen | 1
Visualizing the Loss Landscape of Neural Nets | 2017 | Hao Li, Zheng Xu, Gavin Taylor, Christoph Studer, Tom Goldstein | 1
What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties | 2018 | Alexis Conneau, Germán Kruszewski, Guillaume Lample, Loïc Barrault, Marco Baroni | 1
Progress & Compress: A scalable framework for continual learning | 2018 | Jonathan Schwarz, Jelena Luketina, Wojciech Marian Czarnecki, Agnieszka Grabska-Barwińska, Yee Whye Teh, Razvan Pascanu, Raia Hadsell | 1
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding | 2018 | Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova | 1
Decoupled Weight Decay Regularization | 2017 | Ilya Loshchilov, Frank Hutter | 1
Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference | 2019 | Tom McCoy, Ellie Pavlick, Tal Linzen | 1
Probing Neural Network Comprehension of Natural Language Arguments | 2019 | Timothy Niven, Hung-Yu Kao | 1
Stress Test Evaluation for Natural Language Inference | 2018 | Aakanksha Naik, Abhilasha Ravichander, Norman Sadeh, Carolyn Penstein Rosé, Graham Neubig | 1
A Continuously Growing Dataset of Sentential Paraphrases | 2017 | Wuwei Lan, Siyu Qiu, Hua He, Wei Xu | 1
Multi-Perspective Relevance Matching with Hierarchical ConvNets for Social Media Search | 2019 | Jinfeng Rao, Wei Yang, Yuhao Zhang, Ferhan Türe, Jimmy Lin | 1
SQuAD: 100,000+ Questions for Machine Comprehension of Text | 2016 | Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang | 1
Adversarial Examples for Evaluating Reading Comprehension Systems | 2017 | Robin Jia, Percy Liang | 1
Explicit Inductive Bias for Transfer Learning with Convolutional Networks | 2018 | Xuhong Li, Yves Grandvalet, Franck Davoine | 1
RoBERTa: A Robustly Optimized BERT Pretraining Approach | 2019 | Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov | 1
Mixout: Effective Regularization to Finetune Large-scale Pretrained Language Models | 2019 | Cheolhyoung Lee, Kyunghyun Cho, Wanmo Kang | 1
Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment | 2020 | Di Jin, Zhijing Jin, Joey Tianyi Zhou, Peter Szolovits | 1
FreeLB: Enhanced Adversarial Training for Natural Language Understanding | 2019 | Chen Zhu, Yu Cheng, Zhe Gan, Siqi Sun, Tom Goldstein, Jingjing Liu | 1
An Investigation of Why Overparameterization Exacerbates Spurious Correlations | 2020 | Shiori Sagawa, Aditi Raghunathan, Pang Wei Koh, Percy Liang | 1
Adversarial NLI: A New Benchmark for Natural Language Understanding | 2020 | Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, Douwe Kiela | 1
SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization | 2020 | Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, Tuo Zhao | 1
Beyond Accuracy: Behavioral Testing of NLP Models with CheckList | 2020 | Marco Túlio Ribeiro, Tongshuang Wu, Carlos Guestrin, Sameer Singh | 1
Adversarial Training with Fast Gradient Projection Method against Synonym Substitution Based Text Attacks | 2021 | Xiaosen Wang, Yichen Yang, Yihe Deng, Kun He | 1
InfoBERT: Improving Robustness of Language Models from An Information Theoretic Perspective | 2020 | Boxin Wang, Shuohang Wang, Yu Cheng, Zhe Gan, Ruoxi Jia, Bo Li, Jingjing Liu | 1
Learning Which Features Matter: RoBERTa Acquires a Preference for Linguistic Generalizations (Eventually) | 2020 | Alex Warstadt, Yian Zhang, Xiaocheng Li, Haokun Liu, Samuel R. Bowman | 1
Recall and Learn: Fine-tuning Deep Pretrained Language Models with Less Forgetting | 2020 | Sanyuan Chen, Yutai Hou, Yiming Cui, Wanxiang Che, Ting Liu, Xiangzhan Yu | 1
Evaluating Models’ Local Decision Boundaries via Contrast Sets | 2020 | Matt Gardner, Yoav Artzi, Victoria Basmov, Jonathan Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi, Dheeru Dua, Yanai Elazar, Ananth Gottumukkala | 1
Revisiting Few-sample BERT Fine-tuning | 2021 | Tianyi Zhang, Felix Wu, Arzoo Katiyar, Kilian Q. Weinberger, Yoav Artzi | 1
Normal vs. Adversarial: Salience-based Analysis of Adversarial Samples for Relation Extraction | 2021 | Luoqiu Li, Xiang Chen, Zhen Bi, Xin Xie, Shumin Deng, Ningyu Zhang, Chuanqi Tan, Mosha Chen, Huajun Chen | 1
Achieving Model Robustness through Discrete Adversarial Training | 2021 | Maor Ivgi, Jonathan Berant | 1
SimCSE: Simple Contrastive Learning of Sentence Embeddings | 2021 | Tianyu Gao, Xingcheng Yao, Danqi Chen | 1
Using Adversarial Attacks to Reveal the Statistical Bias in Machine Reading Comprehension Models | 2021 | Jieyu Lin, Jiajie Zou, Nai Ding | 1
R-Drop: Regularized Dropout for Neural Networks | 2021 | Xiaobo Liang, Lijun Wu, Juntao Li, Yue Wang, Qi Meng, Tao Qin, Wei Chen, Min Zhang, Tie-Yan Liu | 1
Raise a Child in Large Language Model: Towards Effective and Generalizable Fine-tuning | 2021 | Runxin Xu, Fuli Luo, Zhiyuan Zhang, Chuanqi Tan, Baobao Chang, Songfang Huang, Fei Huang | 1
Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models | 2021 | Boxin Wang, Chejian Xu, Shuohang Wang, Zhe Gan, Yu Cheng, Jianfeng Gao, Ahmed Hassan Awadallah, Bo Li | 1
How Should Pre-Trained Language Models Be Fine-Tuned Towards Adversarial Robustness? | 2021 | Xinshuai Dong, Anh Tuan Luu, Min Lin, Shuicheng Yan, Hanwang Zhang | 1
SHIELD: Defending Textual Neural Networks against Multiple Black-Box Adversarial Attacks with Stochastic Multi-Expert Patcher | 2022 | Thai Le, Noseong Park, Dongwon Lee | 1
On Length Divergence Bias in Textual Matching Models | 2022 | Lan Jiang, Tianshu Lyu, Yankai Lin, Chong Meng, Xiaoyong Lyu, Dawei Yin | 1
Distance-Based Regularisation of Deep Networks for Fine-Tuning | 2020 | Henry Gouk, Timothy M. Hospedales, Massimiliano Pontil | 1
Qualitatively characterizing neural network optimization problems | 2014 | Ian Goodfellow, Oriol Vinyals, Andrew Saxe | 1
Attention Is All You Need | 2017 | Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, Illia Polosukhin | 1