Hao Yang


Commonly Cited References
Title | Year | Authors | Times referenced
Fairness through awareness | 2012 | Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, Richard S. Zemel | 1
Inherent Tradeoffs in the Fair Determination of Risk Scores | 2023 | Manish Raghavan | 1
Equality of Opportunity in Supervised Learning | 2016 | Moritz Hardt, Eric Price, Nathan Srebro | 1
Fairness in Criminal Justice Risk Assessments: The State of the Art | 2018 | Richard A. Berk, Hoda Heidari, Shahin Jabbari, Michael Kearns, Aaron Roth | 1
Beyond Parity: Fairness Objectives for Collaborative Filtering | 2017 | Sirui Yao, Bert Huang | 1
Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making | 2018 | Michael Veale, Max Van Kleek, Reuben Binns | 1
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding | 2018 | Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova | 1
How Do Fairness Definitions Fare? Examining Public Attitudes Towards Algorithmic Definitions of Fairness | 2018 | Nripsuta Ani Saxena, Karen Huang, Evan DeFilippis, Goran Radanović, David C. Parkes, Yang Liu | 1
Nuanced Metrics for Measuring Unintended Bias with Real Data for Text Classification | 2019 | Daniel Borkan, Lucas Dixon, Jeffrey Sorensen, Nithum Thain, Lucy Vasserman | 1
Metric Learning for Individual Fairness | 2019 | Christina Ilvento | 1
Evaluating Gender Bias in Machine Translation | 2019 | Gabriel Stanovsky, Noah A. Smith, Luke Zettlemoyer | 1
Counterfactual Fairness in Text Classification through Robustness | 2019 | Sahaj Garg, Vincent Perot, Nicole Limtiaco, Ankur Taly, Ed H., Alex Beutel | 1
Reducing Gender Bias in Abusive Language Detection | 2018 | Ji Ho Park, Jamin Shin, Pascale Fung | 1
Two-Player Games for Efficient Non-Convex Constrained Optimization | 2018 | Andrew Cotter, Heinrich Jiang, Karthik Sridharan | 1
A comparative study of fairness-enhancing interventions in machine learning | 2019 | Sorelle A. Friedler, Carlos Scheidegger, Suresh Venkatasubramanian, Sonam Choudhary, Evan P. Hamilton, Derek Roth | 1
Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments | 2017 | Alexandra Chouldechova | 1
The Woman Worked as a Babysitter: On Biases in Language Generation | 2019 | Emily Sheng, Kai-Wei Chang, Prem Natarajan, Nanyun Peng | 1
Debiasing Embeddings for Reduced Gender Bias in Text Classification | 2019 | Flavien Prost, Nithum Thain, Tolga Bolukbasi | 1
HuggingFace's Transformers: State-of-the-art Natural Language Processing | 2019 | Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clément Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz | 1
Training individually fair ML models with Sensitive Subspace Robustness | 2019 | Mikhail Yurochkin, Amanda Bower, Yuekai Sun | 1
Auditing ML Models for Individual Bias and Unfairness | 2020 | Songkai Xue, Mikhail Yurochkin, Yuekai Sun | 1
Do I Look Like a Criminal? Examining how Race Presentation Impacts Human Judgement of Recidivism | 2020 | Keri Mallari, Kori Inkpen, Paul Johns, Sarah Tan, Divya Ramesh, Ece Kamar | 1
Towards Understanding Gender Bias in Relation Extraction | 2020 | Andrew Gaut, Tony Sun, Shirlyn Tang, Yuxin Huang, Jing Qian, Mai ElSherief, Jieyu Zhao, Diba Mirza, Elizabeth Belding, Kai-Wei Chang | 1
Two Simple Ways to Learn Individual Fairness Metrics from Data | 2020 | Debarghya Mukherjee, Mikhail Yurochkin, Moulinath Banerjee, Yuekai Sun | 1
SenSeI: Sensitive Set Invariance for Enforcing Individual Fairness | 2020 | Mikhail Yurochkin, Yuekai Sun | 1
Improving Fairness in Machine Learning Systems: What Do Industry Practitioners Need? | 2019 | Kenneth Holstein, Jennifer Wortman Vaughan, Hal Daumé, Miro Dudík, Hanna Wallach | 1
UNQOVERing Stereotyping Biases via Underspecified Questions | 2020 | Tao Li, Daniel Khashabi, Tushar Khot, Ashish Sabharwal, Vivek Srikumar | 1
WILDS: A Benchmark of in-the-Wild Distribution Shifts | 2020 | Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard L. Phillips, Irena Gao | 1
Statistical inference for individual fairness | 2021 | Subha Maity, Songkai Xue, Mikhail Yurochkin, Yuekai Sun | 1
Detecting Race and Gender Bias in Visual Representation of AI on Web Search Engines | 2021 | Mykola Makhortykh, Aleksandra Urman, Roberto Ulloa | 1
Towards Involving End-users in Interactive Human-in-the-loop AI Fairness | 2022 | Yuri Nakao, Simone Stumpf, Subeida Ahmed, Aisha Naseer, Lorenzo Strappelli | 1
Assessing the Fairness of AI Systems: AI Practitioners' Processes, Challenges, and Needs for Support | 2022 | Michael Madaio, Lisa Egede, Hariharan Subramonyam, Jennifer Wortman Vaughan, Hanna Wallach | 1
Towards Responsible AI: A Design Space Exploration of Human-Centered Artificial Intelligence User Interfaces to Investigate Fairness | 2022 | Yuri Nakao, Lorenzo Strappelli, Simone Stumpf, Aisha Naseer, Daniele Regoli, Giulia Del Gamba | 1
Exploring How Machine Learning Practitioners (Try To) Use Fairness Toolkits | 2022 | Wesley Hanwen Deng, Manish Nagireddy, Michelle Seng Ah Lee, Jatinder Singh, Zhiwei Steven Wu, Kenneth Holstein, Haiyi Zhu | 1
Your fairness may vary: Pretrained language model fairness in toxic text classification | 2022 | Ioana Baldini, Dennis Wei, Karthikeyan Natesan Ramamurthy, Moninder Singh, Mikhail Yurochkin | 1
Post-processing for Individual Fairness | 2021 | Felix Petersen, Debarghya Mukherjee, Yuekai Sun, Mikhail Yurochkin | 1
Algorithm Fairness in AI for Medicine and Healthcare | 2021 | Richard J. Chen, Tiffany Chen, Jana Lipková, Judy J. Wang, Drew F. K. Williamson, Ming Y. Lu, Sharifa Sahai, Faisal Mahmood | 1
Designing Toxic Content Classification for a Diversity of Perspectives | 2021 | Dinesh Kumar, Patrick Gage Kelley, Sunny Consolvo, Joshua Mason, Elie Bursztein, Zakir Durumeric, Kurt Thomas, Michael Bailey | 1
Mathematical Notions vs. Human Perception of Fairness: A Descriptive Approach to Fairness for Machine Learning | 2019 | Megha Srivastava, Hoda Heidari, Andreas Krause | 1
Aequitas: A Bias and Fairness Audit Toolkit | 2018 | Pedro Saleiro, Benedict Kuester, Loren Hinkson, Jesse London, Abby Stevens, Ari Anisfeld, Kit T. Rodolfa, Rayid Ghani | 1
A Reductions Approach to Fair Classification | 2018 | Alekh Agarwal, Alina Beygelzimer, Miroslav Dudík, John Langford, Hanna Wallach | 1
On Measures of Biases and Harms in NLP | 2021 | Sunipa Dev, Emily Sheng, Jieyu Zhao, Aubrie Amstutz, Jiao Sun, Yu Hou, Mattie Sanseverino, Ji-In Kim, Akihiro Nishi, Nanyun Peng | 1
Gerrymandering individual fairness | 2023 | Tim Räz | 1