Automated directed fairness testing

Type: Preprint

Publication Date: 2018-08-20

Citations: 85

DOI: https://doi.org/10.1145/3238147.3238165


Abstract

Fairness is a critical trait in decision making. As machine-learning models are increasingly used in sensitive application domains (e.g., education and employment) for decision making, it is crucial that the decisions computed by such models are free of unintended bias. But how can we automatically validate the fairness of arbitrary machine-learning models? For a given machine-learning model and a set of sensitive input parameters, our Aequitas approach automatically discovers discriminatory inputs that highlight fairness violations. At the core of Aequitas are three novel strategies that employ probabilistic search over the input space with the objective of uncovering fairness violations. Our Aequitas approach leverages the inherent robustness property of common machine-learning models to design and implement scalable test-generation methodologies. An appealing feature of our generated test inputs is that they can be systematically added to the training set of the underlying model to improve its fairness. To this end, we design a fully automated module that is guaranteed to improve the fairness of the model. We implemented Aequitas and evaluated it on six state-of-the-art classifiers, including one that was designed with fairness in mind. We show that Aequitas effectively generates inputs that uncover fairness violations in all the subject classifiers and systematically improves the fairness of the respective models using the generated test inputs. In our evaluation, Aequitas generates up to 70% discriminatory inputs (w.r.t. the total number of inputs generated) and leverages these inputs to improve fairness by up to 94%.
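The core notion of a discriminatory input in this line of work is a pair of inputs that differ only in a sensitive parameter yet receive different model decisions. The sketch below is a minimal illustration of that idea using uniform random sampling; it is not the paper's implementation (Aequitas adds directed local search on top of a global sampling phase), and the toy biased model and parameter names are invented for the example.

```python
import random

def find_discriminatory_inputs(model, input_ranges, sensitive_param,
                               num_trials=1000, seed=0):
    """Randomly sample inputs; flag an input as discriminatory if
    changing only the sensitive parameter changes the model's decision.
    (Global random-sampling phase only; no directed local search.)"""
    rng = random.Random(seed)
    found = []
    for _ in range(num_trials):
        # Sample one input uniformly from the declared parameter ranges.
        x = {name: rng.randint(lo, hi)
             for name, (lo, hi) in input_ranges.items()}
        base = model(x)
        lo, hi = input_ranges[sensitive_param]
        for value in range(lo, hi + 1):
            if value == x[sensitive_param]:
                continue
            # Same input, sensitive parameter swapped.
            x2 = dict(x, **{sensitive_param: value})
            if model(x2) != base:
                found.append((x, x2))
                break
    return found

# Toy biased "model": the decision depends directly on the
# sensitive attribute, so discriminatory pairs are easy to find.
biased = lambda x: int(x["income"] > 50 and x["gender"] == 1)
pairs = find_discriminatory_inputs(
    biased, {"income": (0, 100), "gender": (0, 1)}, "gender")
```

Each returned pair can then be labeled consistently and fed back into the training set, which is the retraining idea the abstract describes.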

Locations

  • arXiv (Cornell University) - View - PDF
  • DataCite API - View

Similar Works

  • FairTTTS: A Tree Test Time Simulation Method for Fairness-Aware Classification (2025) — Nurit Cohen-Inger, Lior Rokach, Bracha Shapira, Seffi Cohen
  • Diversity Drives Fairness: Ensemble of Higher Order Mutants for Intersectional Fairness of Machine Learning Software (2024) — Zhenpeng Chen, Xinyue Li, Jie M. Zhang, Federica Sarro, Yan Liu
  • FAIREDU: A Multiple Regression-Based Method for Enhancing Fairness in Machine Learning Models for Educational Applications (2024) — Nga Pham, Minh Kha Do, T. Dai, Phạm Ngọc Hưng, Anh Nguyen‐Duc
  • Metrics and Methods for a Systematic Comparison of Fairness-Aware Machine Learning Algorithms (2020) — Gareth Jones, James M. Hickey, Pietro G. Di Stefano, Charanpal Dhanjal, Laura C. Stoddart, Vlasios Vasileiou
  • Certifying Robustness to Programmable Data Bias in Decision Trees (2021) — Anna P. Meyer, Aws Albarghouthi, Loris D’Antoni
  • Analyzing Fairness of Classification Machine Learning Model with Structured Dataset (2024) — Ahmed Nabih Zaki Rashed, Abdelkrim Kallich, Mohamed A. El-Tayeb
  • The Fragility of Fairness: Causal Sensitivity Analysis for Fair Machine Learning (2024) — Jake Fawkes, Nic Fishman, M. B. Andrews, Zachary C. Lipton
  • FETA: Fairness Enforced Verifying, Training, and Predicting Algorithms for Neural Networks (2022) — Kiarash Mohammadi, Aishwarya Sivaraman, Golnoosh Farnadi
  • FITNESS: A Causal De-correlation Approach for Mitigating Bias in Machine Learning Software (2023) — Ying Xiao, Shangwen Wang, Sicen Liu, Dingyuan Xue, Xiang Zhan, Yepang Liu
  • Augmented Fairness: An Interpretable Model Augmenting Decision-Makers' Fairness (2020) — Tong Wang, Maytal Saar‐Tsechansky
  • FairBalance: Improving Machine Learning Fairness on Multiple Sensitive Attributes With Data Balancing (2021) — Zhe Yu, Joymallya Chakraborty, Tim Menzies
  • Causality-Aided Trade-Off Analysis for Machine Learning Fairness (2023) — Zhenlan Ji, Pingchuan Ma, Shuai Wang, Yanhui Li
  • FairGridSearch: A Framework to Compare Fairness-Enhancing Models (2023) — Shih-Chi Ma, Tatiana Ermakova, Benjamin Fabian
  • Enhanced Fairness Testing via Generating Effective Initial Individual Discriminatory Instances (2022) — Minghua Ma, Tian Zhao, Max Hort, Federica Sarro, Hongyu Zhang, Qingwei Lin, Dongmei Zhang
  • Fix Fairness, Don’t Ruin Accuracy: Performance Aware Fairness Repair using AutoML (2023) — Giang Nguyen, Sumon Biswas, Hridesh Rajan