BriarPatches: Pixel-Space Interventions for Inducing Demographic Parity
Alexey A. Gritsenko, Alexander D’Amour, James Atwood, Yoni Halpern, D. Sculley
Type: Preprint
Publication Date: 2018-12-17
Citations: 1
Locations: arXiv (Cornell University)
Similar Works
When is Multicalibration Post-Processing Necessary? (2024). Dutch Hansen, Siddartha Devic, Preetum Nakkiran, Vatsal Sharan
Examining the Robustness of Homogeneity Bias to Hyperparameter Adjustments in GPT-4 (2025). Messi H. J. Lee
Enhancing Fairness of Visual Attribute Predictors (2022). T. Hänel, Nishant Kumar, Dmitrij Schlesinger, Mengze Li, Erdem Ünal, Abouzar Eslami, Stefan Gumhold
Shielded Representations: Protecting Sensitive Attributes Through Iterative Gradient-Based Projection (2023). Shadi Iskander, Kira Radinsky, Yonatan Belinkov
Conditional Learning of Fair Representations (2020). Han Zhao, Amanda Coston, Tameem Adel, Geoffrey J. Gordon
Nuisances via Negativa: Adjusting for Spurious Correlations via Data Augmentation (2022). Aahlad Puli, Nitish Joshi, He He, Rajesh Ranganath
Multiaccuracy: Black-Box Post-Processing for Fairness in Classification (2018). Michael P. Kim, Amirata Ghorbani, James Zou
DCAST: Diverse Class-Aware Self-Training Mitigates Selection Bias for Fairer Learning (2024). Yasin İlkağan Tepeli, Joana P. Gonçalves
Self-Paced Deep Regression Forests with Consideration on Underrepresented Examples (2020). Lili Pan, Shijie Ai, Yazhou Ren, Zenglin Xu
From Hope to Safety: Unlearning Biases of Deep Models via Gradient Penalization in Latent Space (2024). Maximilian Dreyer, Frederik Pahde, Christopher J. Anders, Wojciech Samek, Sebastian Lapuschkin
Controllable Guarantees for Fair Outcomes via Contrastive Information Estimation (2021). Umang Gupta, Aaron Ferber, Bistra Dilkina, Greg Ver Steeg
T-HITL Effectively Addresses Problematic Associations in Image Generation and Maintains Overall Visual Quality (2024). Susan B. Epstein, Li Chen, Alessandro Vecchiato, Ankit Jain
Optimising Equal Opportunity Fairness in Model Training (2022). Aili Shen, Xudong Han, Trevor Cohn, Timothy Baldwin, Lea Frermann
From Hope to Safety: Unlearning Biases of Deep Models via Gradient Penalization in Latent Space (2023). Maximilian Dreyer, Frederik Pahde, Christopher J. Anders, Wojciech Samek, Sebastian Lapuschkin
Flexibly Fair Representation Learning by Disentanglement (2019). Elliot Creager, David Madras, Jörn-Henrik Jacobsen, Marissa A. Weis, Kevin Swersky, Toniann Pitassi, Richard S. Zemel
Demographic bias in machine learning: measuring transference from dataset bias to model predictions (2024). Iris Dominguez-Catena
Works That Cite This (0)
Works Cited by This (4)
A kernel two-sample test (2012). Arthur Gretton, Karsten Borgwardt, Malte J. Rasch, Bernhard Schölkopf, Alexander J. Smola
Fairness through awareness (2012). Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, Richard S. Zemel
Data Decisions and Theoretical Implications when Adversarially Learning Fair Representations (2017). Alex Beutel, Ed H. Chi, Jilin Chen, Zhe Zhao
Learning Adversarially Fair and Transferable Representations (2018). David Madras, Elliot Creager, Toniann Pitassi, Richard S. Zemel