Oh the Prices You’ll See: Designing a Fair Exchange System to Mitigate Personalized Pricing

Type: Article
Publication Date: 2025-06-23
Citations: 0

Locations

  • arXiv (Cornell University)

Summary

The paper introduces a fair exchange system, denoted S, designed to empower consumers and mitigate the adverse effects of personalized pricing in online marketplaces. The system offers a consumer-driven answer to a well-known problem: individuals often do not realize that behavioral profiling causes them to pay higher prices, and they lack effective means to secure better deals.

The core innovation is a market mechanism that facilitates mutually beneficial transactions among consumers, leveraging existing personalized price disparities rather than fighting against them. In this system, a “lower-paying” consumer (one offered a better price by the market) can act as an “intermediary,” buying a good on behalf of a “higher-paying” consumer. The higher-paying consumer then pays the intermediary an agreed-upon price, from which the system takes a small fee (γ) to sustain itself. Both consumers can benefit: the higher-paying consumer acquires the good at a lower net cost than their original personalized offer, and the intermediary earns a profit.
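The arithmetic of a single exchange can be sketched as follows. This is a minimal illustration, not the paper's formal model: the prices, the fee rate, and the function name are made up here; the system fee is assumed to be a fraction gamma of the transaction price.

```python
def trade_outcome(p_low: float, p_high: float, p_t: float, gamma: float):
    """Return (intermediary_profit, buyer_savings, system_fee) for one trade.

    p_low:  personalized price offered to the lower-paying consumer
    p_high: personalized price offered to the higher-paying consumer
    p_t:    transaction price the two consumers agree on
    gamma:  proportional fee the system keeps from the transaction
    """
    fee = gamma * p_t                        # the system's cut
    intermediary_profit = p_t - fee - p_low  # received, minus fee, minus purchase cost
    buyer_savings = p_high - p_t             # paid p_t instead of the personalized p_high
    return intermediary_profit, buyer_savings, fee

profit, savings, fee = trade_outcome(p_low=80.0, p_high=120.0, p_t=100.0, gamma=0.05)
# With these illustrative numbers, the intermediary nets 15, the
# higher-paying consumer saves 20, and the system earns a fee of 5.
```

Any transaction price strictly between p_low / (1 - gamma) and p_high leaves both parties better off, which is what makes the mechanism mutually beneficial.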

Key innovations and findings include:

  1. A Consumer-Centric Exchange Model: Unlike prior work that focuses on sellers implementing fair pricing algorithms or consumers altering their behavior, this paper proposes an independent, third-party system that allows consumers to directly act and financially benefit. This shifts agency to the consumer side.
  2. Decentralized Negotiation for Fairness: The research meticulously compares centralized price-setting (where the system dictates transaction prices) with decentralized negotiation (where consumers agree on prices). A pivotal finding is that decentralized negotiation, modeled via the Nash bargaining solution, leads to significantly fairer outcomes and more active trading. This is because it allows individuals to account for their private utility functions and ensures both parties find the trade worthwhile, unlike centralized approaches which often lead to minimal transactions due to a lack of intermediary incentive.
  3. Feasibility of Fairness Targets: The study systematically evaluates different fairness objectives: minimizing mean net cost versus minimizing standard deviation of net cost, both at individual and group levels. It demonstrates that minimizing the mean net cost (both individually and for groups) is the most feasible and effective fairness target within this system, leading to substantial reductions (up to 66% for individuals and 69% for groups). Conversely, attempts to reduce the standard deviation of prices paid were largely unsuccessful, highlighting a trade-off between reducing average costs and ensuring uniform outcomes.
  4. The Counter-Intuitive Role of Price Dispersion: A crucial and counter-intuitive discovery is that a high initial price dispersion (i.e., a wide range of personalized prices offered by the market) is not merely tolerable but necessary and beneficial for the fair exchange system to be viable and financially sustainable. High dispersion provides greater opportunity for beneficial trades, leading to lower net prices for consumers and higher revenue for the system. This insight suggests that personalized pricing, typically viewed as unfair, can paradoxically be leveraged to improve fairness when an appropriate exchange mechanism is in place, acting as a “check against extreme personalization.”
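The decentralized negotiation of point 2 can be sketched as a symmetric Nash bargaining problem over the transaction price. This is a hedged toy version, not the paper's formulation: it assumes risk-neutral utilities, zero disagreement payoffs, and a proportional fee gamma charged to the intermediary; the numbers are illustrative.

```python
def nash_bargaining_price(p_low: float, p_high: float, gamma: float,
                          steps: int = 10_000) -> float:
    """Grid-search the transaction price maximizing the Nash product,
    i.e. (buyer surplus) * (intermediary surplus)."""
    best_p, best_prod = p_low, -1.0
    for i in range(steps + 1):
        p_t = p_low + (p_high - p_low) * i / steps
        buyer_surplus = p_high - p_t                # pays p_t instead of p_high
        interm_surplus = (1 - gamma) * p_t - p_low  # keeps p_t minus fee and cost
        if buyer_surplus >= 0 and interm_surplus >= 0:
            prod = buyer_surplus * interm_surplus
            if prod > best_prod:
                best_prod, best_p = prod, p_t
    return best_p

price = nash_bargaining_price(p_low=80.0, p_high=120.0, gamma=0.05)
# Under these assumptions the maximizer has the closed form
# ((1 - gamma) * p_high + p_low) / (2 * (1 - gamma)), roughly 102.11 here.
```

Because the negotiated price leaves both surpluses strictly positive, both parties have an incentive to transact, which is the mechanism behind the finding that decentralized negotiation produces more active trading than centrally set prices.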

This work builds upon several main prior ingredients:

  • Personalized Pricing and its Critique: The foundational problem is the widespread practice of personalized pricing in online marketplaces, and the existing literature that highlights its opaqueness and potential for unfairness.
  • Fairness in Machine Learning and Algorithms: The paper draws on established concepts of fairness from computational and ethical domains, particularly in defining metrics like mean and standard deviation across individuals and groups to quantify fairness outcomes.
  • Market Design and Game Theory: The system’s design is rooted in principles of market mechanisms, matching theory (e.g., assignment games), and economic concepts like utility functions and rational agents. The choice of Nash bargaining for decentralized price setting is a direct application of game theory.
  • Optimization Techniques: The modeling and simulation rely on robust optimization methods, specifically linear programming and mixed-integer quadratic programming, to determine optimal matchings and price settings under various constraints.
  • Agent-Based Simulation: The methodology employs agent-based simulations to evaluate the system’s performance and consumer behavior, drawing on an understanding of how simulated rational agents interact within a defined market structure.
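The matching step that the paper solves with linear and mixed-integer programs can be illustrated by a stdlib-only brute-force stand-in (names and numbers are invented here): enumerate perfect matchings of consumers and maximize the total gain from trade, i.e. the summed price gaps of matched pairs.

```python
from itertools import permutations

def max_total_gain(prices: list[float]) -> float:
    """Best achievable sum of price gaps over disjoint consumer pairs.

    Matching a pair (a, b) lets the lower payer buy for the higher payer,
    so the pair's gain from trade is the absolute price difference.
    """
    best = 0.0
    for perm in permutations(range(len(prices))):
        pairs = [(perm[i], perm[i + 1]) for i in range(0, len(prices) - 1, 2)]
        best = max(best, sum(abs(prices[a] - prices[b]) for a, b in pairs))
    return best

# Four consumers quoted 60, 100, 70, 120: pairing (60, 120) with (70, 100)
# yields the maximal total gain of 60 + 30 = 90.
gain = max_total_gain([60.0, 100.0, 70.0, 120.0])
```

Enumeration is factorial in the number of consumers; for realistic instances an LP/MIQP formulation or an assignment solver is the practical route, which is why the paper relies on those.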

In essence, the paper proposes a pragmatic, consumer-driven solution to a pressing issue in digital commerce, demonstrating its financial viability and revealing surprising insights into how personalized pricing disparities can be strategically utilized to foster greater fairness.

Many online marketplaces personalize prices based on consumer attributes. Since these prices are private, consumers will not realize if they spend more on a good than the lowest possible price, and cannot easily take action to get better prices. In this paper we introduce a system that takes advantage of personalized pricing so consumers can profit while improving fairness. Our system matches consumers for trading; the lower-paying consumer buys the good for the higher-paying consumer for some fee. We explore various modeling choices and fairness targets to determine which schema will leave consumers best off, while also earning revenue for the system itself. We show that when consumers individually negotiate the transaction price, they are able to achieve the most fair outcomes. Conversely, when transaction prices are centrally set, consumers are often unwilling to transact. Minimizing the average price paid by an individual or group is most profitable for the system, while achieving a 67% reduction in prices. We see that a high dispersion (or range) of original prices is necessary for our system to be viable. Higher dispersion can actually lead to increased consumer welfare, and act as a check against extreme personalization. Our results provide theoretical evidence that such a system could improve fairness for consumers while sustaining itself financially.
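The abstract's dispersion claim can be seen in a toy calculation. This is a sketch with made-up prices and a deliberately simple greedy pairing, not the paper's model: it pairs the lowest remaining payer with the highest remaining one and sums the resulting price gaps, the raw surplus available for trades to split between consumers and the system.

```python
def total_price_gap(prices: list[float]) -> float:
    """Greedily pair lowest payer with highest payer; sum the price gaps."""
    s = sorted(prices)
    total = 0.0
    while len(s) >= 2:
        low, high = s.pop(0), s.pop()
        total += high - low
    return total

# Zero dispersion leaves the exchange with nothing to work with...
no_spread = total_price_gap([100.0, 100.0, 100.0, 100.0])   # 0.0
# ...while wide dispersion creates surplus: (140 - 60) + (110 - 90) = 100.
wide_spread = total_price_gap([60.0, 90.0, 110.0, 140.0])
```

With identical prices no trade is profitable and the system earns nothing, while a wider spread funds both consumer savings and the system's fees, which is the sense in which high dispersion is necessary for viability.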
We study the interplay of fairness, welfare, and equity considerations in personalized pricing based on customer features. Sellers are increasingly able to conduct price personalization based on predictive modeling of demand conditional on covariates: setting customized interest rates, targeted discounts of consumer goods, and personalized subsidies of scarce resources with positive externalities like vaccines and bed nets. These different application areas may lead to different concerns around fairness, welfare, and equity on different objectives: price burdens on consumers, price envy, firm revenue, access to a good, equal access, and distributional consequences when the good in question further impacts downstream outcomes of interest. We conduct a comprehensive literature review in order to disentangle these different normative considerations and propose a taxonomy of different objectives with mathematical definitions. We focus on observational metrics that do not assume access to an underlying valuation distribution which is either unobserved due to binary feedback or ill-defined due to overriding behavioral concerns regarding interpreting revealed preferences. In the setting of personalized pricing for the provision of goods with positive benefits, we discuss how price optimization may provide unambiguous benefit by achieving a "triple bottom line": personalized pricing enables expanding access, which in turn may lead to gains in welfare due to heterogeneous utility, and improve revenue or budget utilization. We empirically demonstrate the potential benefits of personalized pricing in two settings: pricing subsidies for an elective vaccine, and the effects of personalized interest rates on downstream outcomes in microcredit.
Personalized pricing is a business strategy to charge different prices to individual consumers based on their characteristics and behaviors. It has become common practice in many industries nowadays due to the availability of a growing amount of highly granular consumer data. The discriminatory nature of personalized pricing has triggered heated debates among policymakers and academics on how to design regulation policies to balance market efficiency and equity. In this paper, we propose two sound policy instruments, i.e., capping the range of the personalized prices or their ratios. We investigate the optimal pricing strategy of a profit-maximizing monopoly under both regulatory constraints and the impact of imposing them on consumer surplus, producer surplus, and social welfare. We theoretically prove that both proposed constraints can help balance consumer surplus and producer surplus at the expense of total surplus for common demand distributions, such as uniform, logistic, and exponential distributions. Experiments on both simulation and real-world datasets demonstrate the correctness of these theoretical results. Our findings and insights shed light on regulatory policy design for the increasingly monopolized business in the digital era.
We consider a personalized pricing problem in which we have data consisting of feature information, historical pricing decisions, and binary realized demand. The goal is to perform off-policy evaluation for a new personalized pricing policy that maps features to prices. Methods based on inverse propensity weighting (including doubly robust methods) for off-policy evaluation may perform poorly when the logging policy has little exploration or is deterministic, which is common in pricing applications. Building on the balanced policy evaluation framework of Kallus (2018), we propose a new approach tailored to pricing applications. The key idea is to compute an estimate that minimizes the worst-case mean squared error or maximizes a worst-case lower bound on policy performance, where in both cases the worst-case is taken with respect to a set of possible revenue functions. We establish theoretical convergence guarantees and empirically demonstrate the advantage of our approach using a real-world pricing dataset.
Firms tracking consumer purchase information often use behavior-based pricing (BBP), i.e., price discriminate between consumers based on preferences revealed from purchase histories. However, behavioral research has shown that such pricing practices can lead to perceptions of unfairness when consumers are charged a higher price than other consumers for the same product. This paper studies the impact of consumers’ fairness concerns on firms’ behavior-based pricing strategy, profits, consumer surplus, and social welfare. Prior research shows that BBP often yields lower profits than profits without customer recognition or behavior-based price discrimination. By contrast, we find that firms’ profits from conducting BBP increase with consumers’ fairness concerns. When fairness concerns are sufficiently strong, practicing BBP is more profitable than without customer recognition. However, consumers’ fairness concerns decrease consumer surplus. In addition, when consumers’ fairness concerns are sufficiently strong, they reduce inefficient switching and improve social welfare. This paper was accepted by J. Miguel Villas-Boas, marketing.
In this note, we explore channel interactions in an information-intensive environment where the retailer can implement personalized pricing and the manufacturer can leverage both personalized pricing and entry into a direct distribution channel. We study whether a retailer can benefit from personalized pricing and how upstream personalized pricing or entry into a direct distribution channel affects the allocation of channel profit. We find that the retailer is worse off because of its own or upstream personalized pricing, even when the retailer is a monopoly. However, it may still be optimal for the retailer to embrace personalized pricing in order to reap the strategic benefit of deterring the manufacturer from selling direct and targeting end consumers.
Firms increasingly deploy algorithmic pricing approaches to determine what to charge for their goods and services. Algorithmic pricing can discriminate prices both dynamically over time and personally depending on individual consumer information. Although legal, the ethicality of such approaches needs to be examined as often they trigger moral concerns and sometimes outrage. In this research paper, we provide an overview and discussion of the ethical challenges germane to algorithmic pricing. As a basis for our discussion, we perform a systematic interpretative review of 315 related articles on dynamic and personalized pricing as well as pricing algorithms in general. We then use this review to define the term algorithmic pricing and map its key elements at the micro-, meso-, and macro levels from a business and marketing ethics perspective. Thus, we can identify morally ambivalent topics that call for deeper exploration by future research.
The widespread availability of behavioral data has led to the development of data-driven personalized pricing algorithms: sellers attempt to maximize their revenue by estimating the consumer's willingness-to-pay and pricing accordingly. Our objective is to develop algorithms that protect consumer interests against personalized pricing schemes. In this paper, we consider a consumer who learns more and more about a potential purchase across time, while simultaneously revealing more and more information about herself to a potential seller. We formalize a strategic consumer's purchasing decision when interacting with a seller who uses personalized pricing algorithms, and contextualize this problem among the existing literature in optimal stopping time theory and computational finance. We provide an algorithm that consumers can use to protect their own interests against personalized pricing algorithms. This algorithmic stopping method uses sample paths to train estimates of the optimal stopping time. To the best of our knowledge, this is one of the first works that provides computational methods for the consumer to maximize her utility when decision making under surveillance. We demonstrate the efficacy of the algorithmic stopping method using a numerical simulation, where the seller uses a Kalman filter to approximate the consumer's valuation and sets prices based on myopic expected revenue maximization. Compared to a myopic purchasing strategy, we demonstrate increased payoffs for the consumer in expectation.
This paper proposes a theory of pricing premised upon the assumptions that customers dislike unfair prices—those marked up steeply over cost—and that firms take these concerns into account when setting prices. Because they do not observe firms’ costs, customers must extract costs from prices. The theory assumes that customers infer less than rationally: When a price rises due to a cost increase, customers partially misattribute the higher price to a higher markup—which they find unfair. Firms anticipate this response and trim their price increases, which drives the passthrough of costs into prices below one: Prices are somewhat rigid. Embedded in a New Keynesian model as a replacement for the usual pricing frictions, our theory produces monetary nonneutrality: When monetary policy loosens and inflation rises, customers misperceive markups as higher and feel unfairly treated; firms mitigate this perceived unfairness by reducing their markups; in general equilibrium, employment rises. The theory also features a hybrid short-run Phillips curve, realistic impulse responses of output and employment to monetary and technology shocks, and an upward-sloping long-run Phillips curve.
The use of dynamic pricing by profit-maximizing firms gives rise to demand fairness concerns, measured by discrepancies in consumer groups' demand responses to a given pricing strategy. Notably, dynamic pricing may result in buyer distributions unreflective of those of the underlying population, which can be problematic in markets where fair representation is socially desirable. To address this, policy makers might leverage tools such as taxation and subsidy to adapt policy mechanisms dependent upon their social objective. In this paper, we explore the potential for AI methods to assist such intervention strategies. To this end, we design a basic simulated economy, wherein we introduce a dynamic social planner (SP) to generate corporate taxation schedules geared to incentivizing firms towards adopting fair pricing behaviours, and to use the collected tax budget to subsidize consumption among underrepresented groups. To cover a range of possible policy scenarios, we formulate our social planner's learning problem as a multi-armed bandit, a contextual bandit and finally as a full reinforcement learning (RL) problem, evaluating welfare outcomes from each case. To alleviate the difficulty in retaining meaningful tax rates that apply to less frequently occurring brackets, we introduce FairReplayBuffer, which ensures that our RL agent samples experiences uniformly across a discretized fairness space. We find that, upon deploying a learned tax and redistribution policy, social welfare improves on that of the fairness-agnostic baseline, approaches that of the analytically optimal fairness-aware baseline in the multi-armed and contextual bandit settings, and surpasses the latter by 13.19% in the full RL setting.
We design a coordination mechanism for truck drivers that uses pricing schemes to alleviate traffic congestion in a general transportation network. We consider the user heterogeneity in Value-Of-Time (VOT) by adopting a multi-class model with stochastic Origin-Destination (OD) demands for the truck drivers. A basic characteristic of the mechanism is that the coordinator asks the truck drivers to declare their desired OD pair, as well as their individual VOT from a set of $N$ available options, and guarantees that the resulting pricing scheme is Pareto-improving, i.e. every truck driver will be better off compared to the User Equilibrium (UE), and that every truck driver will have an incentive to truthfully declare his/her VOT, while leading to a revenue-neutral (budget-balanced) on average mechanism. We show that the Optimum Pricing Scheme (OPS) can be calculated by solving a nonconvex optimization problem. To achieve computational efficiency, we additionally propose an Approximately Optimum Pricing Scheme (AOPS) and we prove that it satisfies the aforementioned characteristics. Both pricing schemes are compared to the Congestion Pricing with Uniform Revenue Refunding (CPURR) scheme through extensive simulation experiments. We first show experimentally that, for a network with a single OD pair and two routes, CPURR does not provide a significantly better solution than the UE in terms of expected total monetary cost whenever the OD demand is stochastic. For the same network, we also show that the difference in the expected total monetary cost of truck drivers between the OPS and the CPURR solutions becomes larger as the difference between the distinct classes of VOT grows. Finally, the simulation results using the Sioux Falls network demonstrate that both OPS and AOPS consistently outperform CPURR in both expected total travel time and expected total monetary cost.
Contextual pricing strategies are prevalent in online retailing, where the seller adjusts prices based on products' attributes and buyers' characteristics. Although such strategies can enhance the seller's profits, they raise concerns about fairness when significant price disparities emerge among specific groups, such as gender or race. These disparities can lead to adverse perceptions of fairness among buyers and may even violate laws and regulations. In contrast, price differences can incentivize disadvantaged buyers to strategically manipulate their group identity to obtain a lower price. In this paper, we investigate contextual dynamic pricing with fairness constraints, taking into account buyers' strategic behaviors when their group status is private and unobservable from the seller. We propose a dynamic pricing policy that simultaneously achieves price fairness and discourages strategic behaviors. Our policy achieves an upper bound of $O(\sqrt{T}+H(T))$ regret over $T$ time horizons, where the term $H(T)$ arises from buyers' assessment of the fairness of the pricing policy based on their learned price difference. When buyers are able to learn the fairness of the price policy, this upper bound reduces to $O(\sqrt{T})$. We also prove an $\Omega(\sqrt{T})$ regret lower bound for any pricing policy under our problem setting. We support our findings with extensive experimental evidence, showcasing our policy's effectiveness. In our real data analysis, we observe the existence of price discrimination against race in loan applications even after accounting for other contextual information. Our proposed pricing policy demonstrates a significant improvement, achieving a 35.06% reduction in regret compared to the benchmark policy.
In incomplete market theory, the utility-based price and the indifference pricing have especially received much attention in pricing methods using utility function. This paper constructs the framework to unite these two methods and analyzes the relationship between them, using the setting of the exponential utility. Furthermore, we deduce the equilibrium price under the framework of the utility-based price.
Constant function market makers (CFMMs) are a popular decentralized exchange mechanism and have recently been the subject of much research, but major CFMMs give traders no privacy. Prior work proposes randomly splitting and shuffling trades to give some privacy to all users [chitra2022differential], or adding noise to the market state after each trade and charging a fixed 'privacy fee' to all traders [frongillo2018bounded]. In contrast, we propose a noisy CFMM mechanism where users specify personal privacy requirements and pay personalized fees. We show that the noise added for privacy protection creates additional arbitrage opportunities. We call a mechanism priceable if there exists a privacy fee that always matches the additional arbitrage loss in expectation. We show that a mechanism is priceable if and only if the noise added is zero-mean in the asset amount. We also show that priceability and setting the right fee are necessary for a mechanism to be truthful, and that this fee is inversely proportional to the CFMM's liquidity.
Nowadays, with the growth of applying multi-agent design paradigms in recent applications, the idea of using mental models to enable autonomous agents to collaborate is getting more attention. In this paper, an architecture is presented for sharing mental models through a transition from the agents' present mental states to new states that are as free of mutual conflict as possible. This architecture is inspired by a variety of mental models in humans and supports agents working in different simultaneous contexts. Our test bed for evaluating this architecture consists of complex scenarios that individual agents would likely be unable to resolve without sharing. We believe the proposed architecture would provide agents with the capability of generating more consistent behaviors between them.
We study fairness in classification, where individuals are classified, e.g., admitted to a university, and the goal is to prevent discrimination against individuals based on their membership in some group, while maintaining utility for the classifier (the university). The main conceptual contribution of this paper is a framework for fair classification comprising (1) a (hypothetical) task-specific metric for determining the degree to which individuals are similar with respect to the classification task at hand; (2) an algorithm for maximizing utility subject to the fairness constraint, that similar individuals are treated similarly. We also present an adaptation of our approach to achieve the complementary goal of "fair affirmative action," which guarantees statistical parity (i.e., the demographics of the set of individuals receiving any classification are the same as the demographics of the underlying population), while treating similar individuals as similarly as possible. Finally, we discuss the relationship of fairness to privacy: when fairness implies privacy, and how tools developed in the context of differential privacy may be applied to fairness.
Today, many e-commerce websites personalize their content, including Netflix (movie recommendations), Amazon (product suggestions), and Yelp (business reviews). In many cases, personalization provides advantages for users: for example, when a user searches for an ambiguous query such as "router," Amazon may be able to suggest the woodworking tool instead of the networking device. However, personalization on e-commerce sites may also be used to the user's disadvantage by manipulating the products shown (price steering) or by customizing the prices of products (price discrimination). Unfortunately, today, we lack the tools and techniques necessary to be able to detect such behavior.
Spectrum auctions allow a spectrum owner to allocate scarce spectrum resources quickly to the users that value them most. Previous solutions, while enabling reusability-driven and truthful spectrum allocation, are also … Spectrum auctions allow a spectrum owner to allocate scarce spectrum resources quickly to the users that value them most. Previous solutions, while enabling reusability-driven and truthful spectrum allocation, are also expected to provide collusion-resistance, price fairness for homogeneous channels, online auction with unknown and dynamic spectrum supply, and bounded system performance. Existing works, however, lack most of these desirable properties due to the inherent technically challenging nature in the spectrum auction design. In this paper, we focus on the problem of allocating idle channels to spectrum users with homogeneous demands in a setting where available channels are arriving in a dynamic and random order. Taking spectrum reusability into consideration, we first propose THEMIS-I: a novel and efficient spectrum auction algorithm that achieves fair pricing for homogeneous channels, online spectrum auction under dynamic spectrum supply, and a <inline-formula><tex-math notation="LaTeX">$\log$</tex-math></inline-formula> approximation to the optimal social welfare. To enhance the robustness of the system, we further propose THEMIS-II: a collusion-resistant design that can resist any number of coalition groups of small size while still possessing all the above desirable properties. We analytically show that THEMIS can achieve either truthfulness without collusion or <inline-formula> <tex-math notation="LaTeX">$t$</tex-math></inline-formula> -truthfulness tolerating a collusion group of size <inline-formula> <tex-math notation="LaTeX">$t$</tex-math></inline-formula> with high probability. 
To the best of our knowledge, we are the first to design truthful spectrum auctions that simultaneously enable collusion resistance and fair payments for homogeneous channels under dynamic spectrum supply. Experimental results show that THEMIS outperforms existing benchmarks by providing perfect fairness of pricing in both the non-colluding and colluding cases.
We study the welfare implications of personalized pricing, an extreme form of third-degree price discrimination implemented with machine learning for a large, digital firm. Using data from a unique randomized controlled pricing field experiment, we train a demand model and conduct inference about the effects of personalized pricing on firm and consumer surplus. In a second experiment, we validate our predictions in the field. The initial experiment reveals unexercised market power that allows the firm to raise its price optimally, generating a 55% increase in profits. Personalized pricing improves the firm's expected posterior profits by an additional 19%, relative to the optimized uniform price, and by 86%, relative to the firm's unoptimized status quo price. Turning to welfare effects on the demand side, total consumer surplus declines 23% under personalized pricing relative to uniform pricing, and 47% relative to the firm's unoptimized status quo price. However, over 60% of consumers benefit from lower prices under personalization, and total welfare can increase under standard inequity-averse welfare functions. Simulations with our demand estimates reveal a non-monotonic relationship between the granularity of the segmentation data and the total consumer surplus under personalization. These findings indicate a need for caution in the current public policy debate regarding data privacy and personalized pricing, insofar as some data restrictions may not per se improve consumer welfare.
Online shops could offer each website customer a different price. Such personalized pricing can lead to advanced forms of price discrimination based on individual characteristics of consumers, which may be provided, obtained, or assumed. An online shop can recognize customers, for instance through cookies, and categorize them as price-sensitive or price-insensitive. Subsequently, it can charge (presumed) price-insensitive people higher prices. This paper explores personalized pricing from a legal and an economic perspective. From an economic perspective, there are valid arguments in favour of price discrimination, but its effect on total consumer welfare is ambiguous. Irrespectively, many people regard personalized pricing as unfair or manipulative. The paper analyses how this dislike of personalized pricing may be linked to economic analysis and to other norms or values. Next, the paper examines whether European data protection law applies to personalized pricing. Data protection law applies if personal data are processed, and this paper argues that that is generally the case when prices are personalized. Data protection law requires companies to be transparent about the purpose of personal data processing, which implies that they must inform customers if they personalize prices. Subsequently, consumers have to give consent. If enforced, data protection law could thereby play a significant role in mitigating any adverse effects of personalized pricing. It could help to unearth how prevalent personalized pricing is and how people respond to transparency about it.
The hospitality industry is facing a major disruption as a consequence of Airbnb and similar peer-to-peer platforms. This empirical research analyses the impact of perceived quality in pricing differences according to seasonality. Upper-scale hotels show less difference comparing peak and low season prices than do middle-scale hotels. Airbnb landlords discriminate prices according to seasonality, but contrary to the hotels, there are generally no differences between weekday and weekend pricing. Perceived quality seems to have a lower effect on prices for Airbnb apartments compared to the hotel industry, suggesting the idea of two clearly different business models.
Recent discussion in the public sphere about algorithmic classification has involved tension between competing notions of what it means for a probabilistic classification to be fair to different groups. We formalize three fairness conditions that lie at the heart of these debates, and we prove that except in highly constrained special cases, there is no method that can satisfy these three conditions simultaneously. Moreover, even satisfying all three conditions approximately requires that the data lie in an approximate version of one of the constrained special cases identified by our theorem. These results suggest some of the ways in which key notions of fairness are incompatible with each other, and hence provide a framework for thinking about the trade-offs between them.
In federated learning (FL), data owners "share" their local data in a privacy-preserving manner in order to build a federated model, which in turn can be used to generate revenue for the participants. However, in FL involving business participants, they might incur significant costs if several competitors join the same federation. Furthermore, the training and commercialization of the models will take time, resulting in delays before the federation accumulates enough budget to pay back the participants. The issues of costs and temporary mismatch between contributions and rewards have not been addressed by existing payoff-sharing schemes. In this paper, we propose the Federated Learning Incentivizer (FLI) payoff-sharing scheme. The scheme dynamically divides a given budget in a context-aware manner among data owners in a federation by jointly maximizing the collective utility while minimizing the inequality among the data owners, in terms of the payoff gained by them and the waiting time for receiving payoff. Extensive experimental comparisons with five state-of-the-art payoff-sharing schemes show that FLI is the most attractive to high quality data owners and achieves the highest expected revenue for a data federation.
We analyze the welfare consequences of a monopolist having additional information about consumers' tastes, beyond the prior distribution; the additional information can be used to charge different prices to different segments of the market, i.e., carry out "third-degree price discrimination." We show that the segmentation and pricing induced by the additional information can achieve every combination of consumer and producer surplus such that: (i) consumer surplus is nonnegative, (ii) producer surplus is at least as high as profits under the uniform monopoly price, and (iii) total surplus does not exceed the surplus generated by efficient trade. (JEL D42, D83, L12)
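The three feasibility conditions above can be checked mechanically. The sketch below does so for a toy discrete value distribution with zero marginal cost; the distribution and function names are illustrative, not taken from the paper:

```python
# Feasible (consumer surplus, producer surplus) pairs under segmentation,
# per conditions (i)-(iii) above. Toy discrete value distribution, zero cost.

def uniform_monopoly_profit(values, probs):
    # Best single posted price: post some value v and sell to all buyers
    # whose value is at least v.
    return max(v * sum(q for w, q in zip(values, probs) if w >= v)
               for v in values)

def efficient_surplus(values, probs):
    # With zero marginal cost, efficient trade realizes every buyer's value.
    return sum(v * q for v, q in zip(values, probs))

def feasible(cs, ps, values, probs):
    """(i) CS >= 0, (ii) PS >= uniform monopoly profit,
    (iii) CS + PS <= efficient total surplus."""
    return (cs >= 0
            and ps >= uniform_monopoly_profit(values, probs)
            and cs + ps <= efficient_surplus(values, probs))

values, probs = [1.0, 2.0, 3.0], [0.5, 0.3, 0.2]
# Uniform profit = max(1*1.0, 2*0.5, 3*0.2) = 1.0; efficient surplus = 1.7,
# so e.g. (CS, PS) = (0.7, 1.0) lies inside the surplus triangle.
```

Any (CS, PS) pair passing `feasible` lies in the achievable region the result characterizes; the corners are the uniform-price outcome and the fully efficient, consumer-optimal segmentation.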
Based on energy demand, consumers can be broadly categorized into low energy consumers (LECs) and high energy consumers (HECs). HECs use heavy-load appliances, e.g., electric heaters and air conditioners, and LECs do not. Thus, HECs demand more energy than LECs. The usage of high-energy-consumption appliances by HECs leads to peak formation in various time intervals. Different pricing schemes, i.e., time of use (ToU), real-time pricing (RTP), inclined block rate (IBR), and critical peak pricing (CPP), have been proposed previously. In ToU, an energy tariff is divided into three blocks, i.e., on-peak (high rates), off-peak (low rates), and mid-peak (between on-peak and off-peak rates) hours, and these rates are applied to all electricity users without distinction. The high energy demand by HECs causes the high peak formation; thus, higher rates should be applied only to HECs rather than to all consumers, which is not the case in existing billing mechanisms. LECs are also charged higher rates in on-peak intervals, and this billing mechanism is unjustified. Thus, in this paper, a fair pricing scheme (FPS) based on power demand forecasting is developed to reduce the extra bills of LECs. First, we developed a machine-learning-based electricity load forecasting method, i.e., an extreme learning machine (ELM), in order to differentiate LECs and HECs. With the proposed FPS, electricity cost calculations for LECs and HECs are based on actual energy consumption; thus, LECs do not subsidize HECs.
Simulations were conducted to evaluate the performance of the proposed FPS mechanism, and the results demonstrate that LECs can reduce electricity cost by up to 11.0075%, while HECs are charged relatively more than under previous pricing schemes as a penalty for their contribution to on-peak formation. As a result, a fairer system is realized, and the total revenue of the utility company is assured.
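The billing idea above can be sketched in a few lines: classify each consumer by forecasted on-peak demand, then apply an on-peak surcharge only to HECs. The paper uses an ELM forecast for classification; here a fixed demand threshold stands in for it, and the rates and threshold are made-up illustrative values:

```python
# Sketch of an FPS-style tariff: on-peak surcharges apply only to consumers
# classified as HECs. Threshold and rates are illustrative assumptions.

THRESHOLD_KWH = 10.0      # forecasted on-peak demand above this marks an HEC
BASE_RATE = 0.10          # $/kWh, charged to everyone off-peak
ONPEAK_SURCHARGE = 0.05   # extra $/kWh on-peak, HECs only

def classify(forecast_kwh):
    """Stand-in for the ELM-based forecast classifier."""
    return "HEC" if forecast_kwh > THRESHOLD_KWH else "LEC"

def bill(onpeak_kwh, offpeak_kwh, forecast_kwh):
    # LECs pay the base rate in every interval; HECs pay a surcharge on-peak.
    surcharge = ONPEAK_SURCHARGE if classify(forecast_kwh) == "HEC" else 0.0
    return onpeak_kwh * (BASE_RATE + surcharge) + offpeak_kwh * BASE_RATE

# Two consumers with identical usage but different forecasts:
# bill(5, 20, forecast_kwh=6)  -> LEC pays 5*0.10 + 20*0.10 = 2.50
# bill(5, 20, forecast_kwh=15) -> HEC pays 5*0.15 + 20*0.10 = 2.75
```

The point of the scheme is visible in the last two lines: the LEC no longer cross-subsidizes the HEC's contribution to the peak.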
We study the interplay of fairness, welfare, and equity considerations in personalized pricing based on customer features. Sellers are increasingly able to conduct price personalization based on predictive modeling of demand conditional on covariates: setting customized interest rates, targeted discounts of consumer goods, and personalized subsidies of scarce resources with positive externalities like vaccines and bed nets. These different application areas may lead to different concerns around fairness, welfare, and equity on different objectives: price burdens on consumers, price envy, firm revenue, access to a good, equal access, and distributional consequences when the good in question further impacts downstream outcomes of interest. We conduct a comprehensive literature review in order to disentangle these different normative considerations and propose a taxonomy of different objectives with mathematical definitions. We focus on observational metrics that do not assume access to an underlying valuation distribution which is either unobserved due to binary feedback or ill-defined due to overriding behavioral concerns regarding interpreting revealed preferences. In the setting of personalized pricing for the provision of goods with positive benefits, we discuss how price optimization may provide unambiguous benefit by achieving a "triple bottom line": personalized pricing enables expanding access, which in turn may lead to gains in welfare due to heterogeneous utility, and improve revenue or budget utilization. We empirically demonstrate the potential benefits of personalized pricing in two settings: pricing subsidies for an elective vaccine, and the effects of personalized interest rates on downstream outcomes in microcredit.
As machine learning is increasingly used to inform consequential decision-making (e.g., pre-trial bail and loan approval), it becomes important to explain how the system arrived at its decision, and also suggest actions to achieve a favorable decision. Counterfactual explanations ("how the world would have (had) to be different for a desirable outcome to occur") aim to satisfy these criteria. Existing works have primarily focused on designing algorithms to obtain counterfactual explanations for a wide range of settings. However, it has largely been overlooked that ultimately, one of the main objectives is to allow people to act rather than just understand. In layman's terms, counterfactual explanations inform an individual where they need to get to, but not how to get there. In this work, we rely on causal reasoning to caution against the use of counterfactual explanations as a recommendable set of actions for recourse. Instead, we propose a shift of paradigm from recourse via nearest counterfactual explanations to recourse through minimal interventions, shifting the focus from explanations to interventions.
Federated learning is a setting where agents, each with access to their own data source, combine models learned from local data to create a global model. If agents are drawing their data from different distributions, though, federated learning might produce a biased global model that is not optimal for each agent. This means that agents face a fundamental question: should they join the global model or stay with their local model? In this work, we show how this situation can be naturally analyzed through the framework of coalitional game theory. Motivated by these considerations, we propose the following game: there are heterogeneous players with different model parameters governing their data distribution and different amounts of data they have noisily drawn from their own distribution. Each player's goal is to obtain a model with minimal expected mean squared error (MSE) on their own distribution. They have a choice of fitting a model based solely on their own data, or combining their learned parameters with those of some subset of the other players. Combining models reduces the variance component of their error through access to more data, but increases the bias because of the heterogeneity of distributions. In this work, we derive exact expected MSE values for problems in linear regression and mean estimation. We use these values to analyze the resulting game in the framework of hedonic game theory; we study how players might divide into coalitions, where each set of players within a coalition jointly constructs a single model. In a case with arbitrarily many players that each have either a "small" or "large" amount of data, we constructively show that there always exists a stable partition of players into coalitions.
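The bias-variance trade-off described above is easiest to see in the mean-estimation case. Below is a minimal sketch of the standard closed forms for a two-player version (not verbatim from the paper): pooling lowers the variance term but adds a squared bias that grows with the gap between the players' true means:

```python
# Two players draw noisy samples (noise variance sigma2) from distributions
# with true means theta1 and theta2; delta = theta2 - theta1.
# Player 1 compares its local sample mean against the pooled mean.

def local_mse(sigma2, n1):
    # Player 1's own sample mean is unbiased; MSE is pure variance.
    return sigma2 / n1

def pooled_mse(sigma2, n1, n2, delta):
    # Pooling n1 + n2 points shrinks variance to sigma2 / (n1 + n2), but the
    # pooled mean is pulled toward theta2 by the weight n2 / (n1 + n2),
    # contributing a squared bias of (n2 / (n1 + n2) * delta)**2.
    n = n1 + n2
    return sigma2 / n + (n2 / n * delta) ** 2

# With sigma2 = 1, n1 = 10, n2 = 90: local MSE = 0.1. Pooling helps only when
# heterogeneity is small, e.g. pooled_mse(1, 10, 90, 0.2) = 0.0424 < 0.1,
# while pooled_mse(1, 10, 90, 1.0) = 0.82 > 0.1.
```

Whether a player prefers a coalition thus reduces to comparing these two quantities, which is exactly the kind of payoff comparison the hedonic-game analysis builds on.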
Federated learning is an emerging framework that builds centralized machine learning models with training data distributed across multiple devices. Most of the previous works about federated learning focus on privacy protection and communication cost reduction. However, how to achieve fairness in federated learning is underexplored and challenging, especially when the testing data distribution is different from the training distribution or even unknown. Introducing simple fairness constraints on the centralized model cannot achieve model fairness on unknown testing data. In this paper, we develop a fairness-aware agnostic federated learning framework (AgnosticFair) to deal with the challenge of unknown testing distribution. We use kernel reweighing functions to assign a reweighing value to each training sample in both the loss function and the fairness constraint. Therefore, the centralized model built from AgnosticFair can achieve high accuracy and fairness guarantees on unknown testing data. Moreover, the built model can be directly applied to local sites as it guarantees fairness on local data distributions. To our best knowledge, this is the first work to achieve fairness in federated learning. Experimental results on two real datasets demonstrate the effectiveness in terms of both utility and fairness under data shift scenarios.
Training ML models which are fair across different demographic groups is of critical importance due to the increased integration of ML in crucial decision-making scenarios such as healthcare and recruitment. Federated learning has been viewed as a promising solution for collaboratively training machine learning models among multiple parties while maintaining their local data privacy. However, federated learning also poses new challenges in mitigating the potential bias against certain populations (e.g., demographic groups), as this typically requires centralized access to the sensitive information (e.g., race, gender) of each datapoint. Motivated by the importance and challenges of group fairness in federated learning, in this work, we propose FairFed, a novel algorithm for fairness-aware aggregation to enhance group fairness in federated learning. Our proposed approach is server-side and agnostic to the applied local debiasing thus allowing for flexible use of different local debiasing methods across clients. We evaluate FairFed empirically versus common baselines for fair ML and federated learning and demonstrate that it provides fairer models, particularly under highly heterogeneous data distributions across clients. We also demonstrate the benefits of FairFed in scenarios involving naturally distributed real-life data collected from different geographical locations or departments within an organization.
This chapter discusses the morality of online price discrimination. Price discrimination is a widespread type of market behaviour and it occurs, roughly, when a seller systematically charges different prices for the same product when it is offered to different groups of customers. Price discrimination occurs both online and offline, but some find the practice particularly suspicious when deployed in online markets. This asymmetry, we argue, calls for an explanation. In the chapter, we define price discrimination and review a number of explanations of why, and when, price discrimination is morally objectionable. We argue that online price discrimination will often prove more problematic than its offline counterpart, but also that neither practice is necessarily morally wrong.
Price discrimination strategies, which offer different prices to customers based on differences in their valuations, have become common practice. Although it allows sellers to increase their profits, it also raises several concerns in terms of fairness (e.g., by charging higher prices (or denying access) to protected minorities in case they have higher (or lower) valuations than the general population). This topic has received extensive attention from media, industry, and regulatory agencies. In this paper, we consider the problem of setting prices for different groups under fairness constraints. We first propose four definitions: fairness in price, demand, consumer surplus, and no-purchase valuation. We prove that satisfying more than one of these fairness constraints is impossible even under simple settings. We then analyze the pricing strategy of a profit-maximizing seller and the impact of imposing fairness on the seller's profit, consumer surplus, and social welfare. Under a linear demand model, we find that imposing a small amount of price fairness increases social welfare, whereas too much price fairness may result in lower welfare relative to imposing no fairness. On the other hand, imposing fairness in demand or consumer surplus always decreases social welfare. Finally, no-purchase valuation fairness always increases social welfare. We observe similar patterns numerically under several extensions and for other common demand models. Our results and insights provide a first step in understanding the impact of imposing fairness in the context of discriminatory pricing.
Personalized pricing is a business strategy to charge different prices to individual consumers based on their characteristics and behaviors. It has become common practice in many industries nowadays due to the availability of a growing amount of highly granular consumer data. The discriminatory nature of personalized pricing has triggered heated debates among policymakers and academics on how to design regulation policies to balance market efficiency and equity. In this paper, we propose two sound policy instruments, i.e., capping the range of the personalized prices or their ratios. We investigate the optimal pricing strategy of a profit-maximizing monopoly under both regulatory constraints and the impact of imposing them on consumer surplus, producer surplus, and social welfare. We theoretically prove that both proposed constraints can help balance consumer surplus and producer surplus at the expense of total surplus for common demand distributions, such as uniform, logistic, and exponential distributions. Experiments on both simulation and real-world datasets demonstrate the correctness of these theoretical results. Our findings and insights shed light on regulatory policy design for the increasingly monopolized business in the digital era.
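The two instruments above (a range cap and a ratio cap on personalized prices) can be sketched as simple projections of a proposed price vector. Anchoring the cap at the lowest personalized price is my own illustrative choice; a regulated seller would re-optimize rather than merely clip:

```python
# Two cap-style constraints on a vector of personalized prices.
# Clipping toward the minimum price is one simple enforcement rule.

def apply_range_cap(prices, cap):
    """Enforce max(p) - min(p) <= cap by clipping high prices down
    toward the lowest personalized price."""
    lo = min(prices)
    return [min(p, lo + cap) for p in prices]

def apply_ratio_cap(prices, k):
    """Enforce max(p) / min(p) <= k the same way (k >= 1)."""
    lo = min(prices)
    return [min(p, lo * k) for p in prices]

# apply_range_cap([4, 7, 12], cap=5)  -> [4, 7, 9]
# apply_ratio_cap([4, 7, 12], k=2.0)  -> [4, 7, 8]
```

Both caps compress the price distribution from above, which is the mechanism behind the paper's finding that they transfer surplus from producer to consumer at some cost in total surplus.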
Julio J. Rotemberg (Harvard Business School), "Fair Pricing," Journal of the European Economic Association, Volume 9, Issue 5, 1 October 2011, Pages 952–981. https://doi.org/10.1111/j.1542-4774.2011.01036.x
Automated data-driven decision making systems are increasingly being used to assist, or even replace humans in many settings. These systems function by learning from historical decisions, often taken by humans. In order to maximize the utility of these systems (or, classifiers), their training involves minimizing the errors (or, misclassifications) over the given historical data. However, it is quite possible that the optimally trained classifier makes decisions for people belonging to different social groups with different misclassification rates (e.g., misclassification rates for females are higher than for males), thereby placing these groups at an unfair disadvantage. To account for and avoid such unfairness, in this paper, we introduce a new notion of unfairness, disparate mistreatment, which is defined in terms of misclassification rates. We then propose intuitive measures of disparate mistreatment for decision boundary-based classifiers, which can be easily incorporated into their formulation as convex-concave constraints. Experiments on synthetic as well as real world datasets show that our methodology is effective at avoiding disparate mistreatment, often at a small cost in terms of accuracy.
In this paper we propose GIFAIR-FL: a framework that imposes Group and Individual FAIRness on Federated Learning settings. By adding a regularization term, our algorithm penalizes the spread in the loss of client groups to drive the optimizer to fair solutions. Our framework GIFAIR-FL can accommodate both global and personalized settings. Theoretically, we show convergence in non-convex and strongly convex settings. Our convergence guarantees hold for both i.i.d. and non-i.i.d. data. To demonstrate the empirical performance of our algorithm, we apply our method to image classification and text prediction tasks. Compared to existing algorithms, our method shows improved fairness results while retaining superior or similar prediction accuracy.
In many real-world situations, data is distributed across multiple self-interested agents. These agents can collaborate to build a machine learning model based on data from multiple agents, potentially reducing the error each experiences. However, sharing models in this way raises questions of fairness: to what extent can the error experienced by one agent be significantly lower than the error experienced by another agent in the same coalition? In this work, we consider two notions of fairness that each may be appropriate in different circumstances: egalitarian fairness (which aims to bound how dissimilar error rates can be) and proportional fairness (which aims to reward players for contributing more data). We similarly consider two common methods of model aggregation, one where a single model is created for all agents (uniform), and one where an individualized model is created for each agent. For egalitarian fairness, we obtain a tight multiplicative bound on how widely error rates can diverge between agents collaborating (which holds for both aggregation methods). For proportional fairness, we show that the individualized aggregation method always gives a small player error that is upper bounded by proportionality. For uniform aggregation, we show that this upper bound is guaranteed for any individually rational coalition (where no player wishes to leave to do local learning).
In many online markets we "shop alone" — there is no way for us to know the prices other consumers paid for the same goods. Could this lack of price transparency lead to differential pricing? To answer this question, we present a generalized framework to audit online markets for differential pricing using automated agents. Consensus is a key idea in our work: for a successful black-box audit, both the experimenter and seller must agree on the agents' attributes. We audit two competitive online travel markets on kayak.com (flight and hotel markets) and construct queries representative of the demand for goods. Crucially, we assume ignorance of the sellers' pricing mechanisms while conducting these audits. We conservatively implement consensus with nine distinct profiles based on behavior, not demographics. We use a structural causal model for price differences and estimate model parameters using Bayesian inference. We can unambiguously show that many sellers (but not all) demonstrate behavior-driven differential pricing. In both the flight and hotel markets, some profiles are substantially more likely to see a worse price than the best-performing profile. While the control profile (with no browsing history) was on average offered the best prices in the flight market, surprisingly, other profiles outperformed the control in the hotel market. The price difference between any pair of profiles occurring by chance is $0.44 in the flight market and $0.09 for hotels. However, the expected loss of welfare for any profile when compared to the best profile can be as much as $6.00 for flights and $3.00 for hotels (i.e., 15× and 33× the price difference by chance, respectively).
This illustrates the need for new market designs or policies that encourage transparency in order to overcome differential pricing practices.
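The core comparison in such an audit can be sketched compactly: given prices offered to different agent profiles on identical queries, estimate each profile's average overpayment relative to the best offer seen for the same query. The profile names and prices below are made up for illustration:

```python
# Minimal audit-style comparison: mean welfare loss per profile versus the
# best price observed for the same query. Data is illustrative only.
from collections import defaultdict

def welfare_loss(observations):
    """observations: list of (profile, query_id, price).
    Returns each profile's mean overpayment vs. the best same-query offer."""
    best = {}
    for _, q, price in observations:
        best[q] = min(price, best.get(q, float("inf")))
    totals, counts = defaultdict(float), defaultdict(int)
    for profile, q, price in observations:
        totals[profile] += price - best[q]
        counts[profile] += 1
    return {p: totals[p] / counts[p] for p in totals}

obs = [("control", "q1", 100.0), ("heavy_browser", "q1", 106.0),
       ("control", "q2", 80.0),  ("heavy_browser", "q2", 80.0)]
# welfare_loss(obs) -> {"control": 0.0, "heavy_browser": 3.0}
```

The paper's actual analysis goes further, fitting a structural causal model with Bayesian inference to separate behavior-driven price differences from noise; the sketch above is only the descriptive first step.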
A seller is pricing identical copies of a good to a stream of unit-demand buyers. Each buyer has a value on the good as his private information. The seller only knows the empirical value distribution of the buyer population and chooses the revenue-optimal price. We consider a widely studied third-degree price discrimination model where an information intermediary with perfect knowledge of the arriving buyer's value sends a signal to the seller, hence changing the seller's posterior and inducing the seller to set a personalized posted price. Prior work of Bergemann, Brooks, and Morris (American Economic Review, 2015) has shown the existence of a signaling scheme that preserves seller revenue, while always selling the item, hence maximizing consumer surplus. In a departure from prior work, we ask whether the consumer surplus generated is fairly distributed among buyers with different values. To this end, we aim to maximize functions of buyers' welfare that reward more balanced surplus allocations.
In many prediction problems, the predictive model affects the distribution of the prediction target. This phenomenon is known as performativity and is often caused by the behavior of individuals with vested interests in the outcome of the predictive model. Although performativity is generally problematic because it manifests as distribution shifts, we develop algorithmic fairness practices that leverage performativity to achieve stronger group fairness guarantees in social classification problems (compared to what is achievable in non-performative settings). In particular, we leverage the policymaker's ability to steer the population to remedy inequities in the long term. A crucial benefit of this approach is that it is possible to resolve the incompatibilities between conflicting group fairness definitions.
Advertising funds a number of services that play a major role in our everyday online experiences, from social networking, to maps, search, and news. As the power and reach of advertising platforms grow, so do the concerns about the potential for discrimination associated with targeted advertising. However, despite our ever-improving ability to measure and describe instances of unfair distribution of high-stakes ads—such as employment, housing, or credit—we lack the tools to model and predict the extent to which alternative systems could address such problems. In this paper, we simulate an ad distribution system to model the effects that enforcing popularly proposed fairness approaches would have on the utility of the advertising platforms and their users. We show that in many realistic scenarios, achieving statistical parity would come at a much higher utility cost to platforms than enforcing predictive parity or equality of opportunity. Additionally, we identify a tradeoff between different notions of fairness, i.e., enforcing one criterion leads to worse outcomes with respect to other criteria. We further describe how pursuing fairness in situations where one group of users is more expensive to advertise to is likely to result in "leveling down" effects, i.e., not benefiting any group of users. We show that these negative effects can be prevented by ensuring that it is the platforms that carry the cost of fairness rather than passing it on to their users or advertisers. Overall, our findings contribute to ongoing discussions on fair ad delivery.
We show that fairness is not satisfied by default, that limiting targeting options is not sufficient to address potential discrimination and bias in online ad delivery, and that choices made by regulators and platforms may backfire if potential side-effects are not properly considered.
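The criteria this paper compares can be written as simple gap metrics over a binary ad-delivery decision. A minimal sketch, with function names and example data invented for illustration (this is not the paper's simulator):

```python
# Illustrative fairness metrics for a binary "ad shown" decision across
# two groups "A" and "B".

def statistical_parity_gap(shown, group):
    """|P(shown | group A) - P(shown | group B)| over all users."""
    rate = lambda g: sum(s for s, grp in zip(shown, group) if grp == g) / group.count(g)
    return abs(rate("A") - rate("B"))

def equal_opportunity_gap(shown, qualified, group):
    """The same gap, restricted to qualified users (true-positive rates)."""
    def rate(g):
        picked = [s for s, q, grp in zip(shown, qualified, group) if grp == g and q]
        return sum(picked) / len(picked)
    return abs(rate("A") - rate("B"))
```

Enforcing one metric to zero generally moves the other, which is the tradeoff between fairness notions the abstract identifies.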
Group fairness is achieved by equalising prediction distributions between protected sub-populations; individual fairness requires treating similar individuals alike. These two objectives, however, are incompatible when a scoring model is calibrated through discontinuous probability functions, where individuals can be randomly assigned an outcome determined by a fixed probability. This procedure may provide two similar individuals from the same protected group with classification odds that are disparately different – a clear violation of individual fairness. Assigning unique odds to each protected sub-population may also prevent members of one sub-population from ever receiving the chances of a positive outcome available to individuals from another sub-population, which we argue is another type of unfairness called individual odds. We reconcile all this by constructing continuous probability functions between group thresholds that are constrained by their Lipschitz constant. Our solution preserves the model's predictive power, individual fairness and robustness while ensuring group fairness.
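The key object here is a continuous acceptance-probability function whose slope is capped, so similar scores receive similar odds. A minimal sketch of that idea (the linear form and parameter names are assumptions for illustration, not the authors' exact construction):

```python
# Illustrative sketch: a continuous acceptance probability over scores,
# rising linearly between a lower and an upper threshold, with the slope
# capped by a Lipschitz constant L so nearby scores get nearby odds.

def acceptance_probability(score, lo, hi, L=10.0):
    """P(accept) is 0 below `lo`, 1 at or above the point where the
    capped-slope line reaches 1, and linear in between."""
    if score <= lo:
        return 0.0
    slope = min(1.0 / (hi - lo), L)  # Lipschitz constraint on the function
    return min(1.0, (score - lo) * slope)
```

A discontinuous (step-function) calibration is the degenerate case of an unbounded slope, which is exactly what the Lipschitz constraint rules out.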
Recent conversations in the algorithmic fairness literature have raised several concerns with standard conceptions of fairness. First, constraining predictive algorithms to satisfy fairness benchmarks may sometimes lead to non-optimal outcomes for disadvantaged groups. Second, technical interventions are often ineffective by themselves, especially when divorced from an understanding of structural processes that generate social inequality. Inspired by both these critiques, we construct a common decision-making model, using mortgage loans as a running example. We show that under some conditions, any choice of decision threshold will inevitably perpetuate existing disparities in financial stability unless one deviates from the Pareto optimal policy. This confirms the intuition that technical interventions, such as fairness constraints, often do not sufficiently address persistent underlying inequities. Then, we model the effects of three different types of interventions: (1) policy changes in the algorithm's decision threshold, and external changes to parameters that govern the downstream effects of late payment for (2) the whole population or (3) disadvantaged subgroups. We show how different interventions are recommended depending on the difficulty of enacting structural change upon external parameters and depending on the policymaker's preferences for equity or efficiency. Counterintuitively, we demonstrate that preferences for efficiency over equity may sometimes lead to recommendations for interventions that target the under-resourced group alone. Finally, we simulate the effects of interventions on a dataset that combines HMDA and Fannie Mae loan data.
This research highlights the ways that structural inequality can be perpetuated by seemingly unbiased decision mechanisms, and it shows that in many situations, technical solutions must be paired with external, context-aware interventions to enact social change.
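The dynamic at the heart of this model can be caricatured in a few lines (the update rule and its `lift` parameter are invented for illustration, not the paper's calibrated model): approval above a threshold improves an applicant's future position, rejection leaves it unchanged, so a fixed threshold carries existing score gaps forward round after round.

```python
# Toy feedback loop: approved applicants (score >= threshold) gain a small
# boost to their next-round score; rejected applicants stay where they are.
# Group gaps that start below the threshold therefore persist.

def next_scores(scores, threshold, lift=0.1):
    return [s + lift if s >= threshold else s for s in scores]
```

Changing the threshold corresponds to intervention (1) in the abstract; changing `lift` for everyone or for a subgroup corresponds to interventions (2) and (3).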