We study the problem of fairly allocating indivisible goods and focus on the classic fairness notion of proportionality. The indivisibility of the goods has long been known to pose highly non-trivial obstacles to achieving fairness, and a vibrant line of research has aimed to circumvent them using appropriate notions of approximate fairness. Recent work has established that even approximate versions of proportionality (PROPx) may be impossible to achieve, even for small instances, while the best known achievable approximations (PROP1) are much weaker. We introduce the notion of proportionality up to the maximin item (PROPm) and show how to reach an allocation satisfying this notion for any instance involving up to five agents with additive valuations. PROPm provides a well-motivated middle ground between PROP1 and PROPx, while also capturing some elements of the well-studied maximin share (MMS) benchmark, another relaxation of proportionality that has attracted a lot of attention.
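To make the PROPm condition concrete, here is a minimal property checker for additive valuations. It is a sketch based on the definition suggested by the abstract rather than code from the paper: for each agent i, the maximin item value d_i is the largest, over the other agents' bundles, of the smallest item value in that bundle (from i's perspective), and the allocation is PROPm if every agent's bundle value plus d_i reaches her proportional share.

```python
# Minimal PROPm checker for additive valuations -- a definitional sketch,
# not code from the paper.  Allocation is PROPm if, for every agent i,
# v_i(X_i) + d_i >= v_i(all goods) / n, where d_i is the maximin item value.

def is_propm(valuations, allocation):
    """valuations[i][g]: agent i's additive value for good g.
    allocation[j]: list of goods assigned to agent j."""
    n = len(valuations)
    for i, v in enumerate(valuations):
        total = sum(v)                               # v_i(M)
        own = sum(v[g] for g in allocation[i])       # v_i(X_i)
        # maximin item: over the other (non-empty) bundles, the largest
        # "smallest item value" from agent i's perspective
        d_i = max(
            (min(v[g] for g in allocation[j]) for j in range(n)
             if j != i and allocation[j]),
            default=0,
        )
        if own + d_i < total / n - 1e-9:             # tolerance for floats
            return False
    return True

# Toy example: 2 agents, 3 goods.
vals = [[6, 2, 1], [3, 3, 3]]
alloc = [[0], [1, 2]]          # agent 0 gets good 0, agent 1 gets the rest
print(is_propm(vals, alloc))   # True for this instance
```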
Deterministic Budget-Feasible Clock Auctions. Eric Balkanski, Pranav Garimidi, Vasilis Gkatzelis, Daniel Schoepflin, and Xizhi Tan. Proceedings of the 2022 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 2940-2963. DOI: https://doi.org/10.1137/1.9781611977073.114
We revisit the well-studied problem of budget-feasible procurement, where a buyer with a strict budget constraint seeks to acquire services from a group of strategic providers (the sellers). During the last decade, several strategyproof budget-feasible procurement auctions have been proposed, aiming to maximize the value of the buyer, while eliciting each seller's true cost for providing their service. These solutions predominantly take the form of randomized sealed-bid auctions: they ask the sellers to report their private costs and then use randomization to determine which subset of services will be procured and how much each of the chosen providers will be paid, ensuring that the total payment does not exceed the buyer's budget. Our main result in this paper is a novel method for designing budget-feasible auctions, leading to solutions that outperform the previously proposed auctions in multiple ways. First, our solutions take the form of descending clock auctions, and thus satisfy a list of very appealing properties, such as obvious strategyproofness, group strategyproofness, transparency, and unconditional winner privacy; this makes these auctions much more likely to be used in practice. Second, in contrast to previous results that heavily depend on randomization, our auctions are deterministic. As a result, we provide an affirmative answer to one of the main open questions in this literature, asking whether a deterministic strategyproof auction can achieve a constant approximation when the buyer's valuation function is submodular over the set of services. In addition to this, we also provide the first deterministic budget-feasible auction that matches the approximation bound of the best-known randomized auction for the class of subadditive valuations. Finally, using our method, we improve the best-known approximation factor for monotone submodular valuations, which has been the focus of most of the prior work.
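The descending clock auction format referenced here can be sketched in a few lines. The code below is only the generic clock skeleton under assumed simplifications (a single uniform price decrement and simulated truthful exits); it is not the paper's auction, and it ignores the buyer's valuation entirely.

```python
# Generic skeleton of a descending clock auction for budget-feasible
# procurement -- an illustrative sketch, NOT the paper's specific auction.
# Each active seller faces a personal price clock that only descends;
# staying in while the offered price exceeds one's cost (and exiting once
# it drops below) is the "obvious" strategy behind obvious strategyproofness.

def descending_clock_auction(costs, budget, start_price, decrement=1.0):
    """costs: dict seller -> private cost (used here only to simulate
    truthful exit decisions).  Returns (winners, payments)."""
    clocks = {s: start_price for s in costs}          # current price offers
    active = set(costs)
    # keep lowering offers until the surviving offers fit within the budget
    while active and sum(clocks[s] for s in active) > budget:
        s = max(active, key=lambda x: clocks[x])      # lower the highest clock
        clocks[s] -= decrement
        if clocks[s] < costs[s]:                      # offer fell below cost
            active.remove(s)
    winners = set(active)
    payments = {s: clocks[s] for s in winners}        # pay the final clock price
    return winners, payments

winners, payments = descending_clock_auction(
    {"a": 3.0, "b": 5.0, "c": 9.0}, budget=10.0, start_price=12.0)
print(winners, payments)   # total payment never exceeds the budget
```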
Lately, Non-Fungible Tokens (NFTs), i.e., uniquely discernible assets on a blockchain, have skyrocketed in popularity by reaching a broad audience. However, the typical NFT auctioning procedures are conducted in various ad hoc ways, while mostly ignoring the context that the blockchain provides: new possibilities, but at the same time new challenges, in auction design. One of the main targets of this work is to shed light on the vastly unexplored design space of NFT Auction Mechanisms, especially in those characteristics that fundamentally differ from traditional and more contemporary forms of auctions. We focus on the case in which bidders have a valuation for the auctioned NFT, i.e., what we term the single-item NFT auction case. In this setting, we formally define an NFT Auction Mechanism, state the properties that we would ideally like a perfect mechanism to satisfy (broadly known as incentive compatibility and collusion resistance), and prove that it is impossible to have such a perfect mechanism. Since such an all-powerful protocol is unattainable, we move on to consider relaxed notions of those properties that we may desire the protocol to satisfy, as a trade-off between implementability and economic guarantees. Specifically, we define the notion of an equilibrium-truthful auction, where neither the seller nor the bidders can improve their utility by acting non-truthfully, so long as the counter-party acts truthfully. We also define asymptotically second-price auctions, in which the seller asymptotically loses no revenue in comparison to the theoretically optimal (static) second-price sealed-bid auction, when the bidders' valuations are drawn independently from some distribution. We explain why these two are very desirable properties for an auction mechanism to enjoy, and construct the first known NFT Auction Mechanism that provably possesses such formal guarantees.
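For reference, the static benchmark mentioned in the last sentences is just the textbook second-price sealed-bid rule; the snippet below states it explicitly. It is shown only as the benchmark, not the NFT mechanism the paper constructs.

```python
# Textbook second-price sealed-bid auction: highest bid wins, the winner
# pays the second-highest bid.  Reference benchmark only.

def second_price(bids):
    """bids: dict bidder -> bid.  Returns (winner, price paid)."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner = ranked[0]
    price = bids[ranked[1]] if len(ranked) > 1 else 0.0
    return winner, price

print(second_price({"alice": 5.0, "bob": 8.0, "carol": 6.5}))  # ('bob', 6.5)
```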
We study the classic problem of fairly allocating a set of indivisible goods among a group of agents, and focus on the notion of approximate proportionality known as PROPm. Prior work showed that there exists an allocation that satisfies this notion of fairness for instances involving up to five agents, but fell short of proving that this is true in general. We extend this result to show that a PROPm allocation is guaranteed to exist for all instances, independent of the number of agents or goods. Our proof is constructive: it provides an algorithm that computes such an allocation and, unlike prior work, runs in time polynomial in both the number of agents and the number of goods.
The incentive-compatibility properties of blockchain transaction fee mechanisms have been investigated with *passive* block producers that are motivated purely by the net rewards earned at the consensus layer. This paper introduces a model of *active* block producers that have their own private valuations for blocks (representing, for example, additional value derived from the application layer). The block producer surplus in our model can be interpreted as one of the more common colloquial meanings of the term "MEV." The main results of this paper show that transaction fee mechanism design is fundamentally more difficult with active block producers than with passive ones: with active block producers, no non-trivial or approximately welfare-maximizing transaction fee mechanism can be incentive-compatible for both users and block producers. These results can be interpreted as a mathematical justification for the current interest in augmenting transaction fee mechanisms with additional components such as order flow auctions, block producer competition, trusted hardware, or cryptographic techniques.
The goal of this paper is to rigorously interrogate conventional wisdom about centralization in block-building (due to, e.g., MEV and private order flow) and the outsourcing of block-building by validators to specialists (i.e., proposer-builder separation): 1. Does heterogeneity in skills and knowledge across block producers inevitably lead to centralization? 2. Does proposer-builder separation eliminate heterogeneity and preserve decentralization among proposers? This paper develops mathematical models and results that offer answers to these questions: 1. In a game-theoretic model with endogenous staking, heterogeneous block producer rewards, and staking costs, we quantify the extent to which heterogeneous rewards lead to concentration in the equilibrium staking distribution. 2. In a stochastic model in which heterogeneous block producers repeatedly reinvest rewards into staking, we quantify, as a function of the block producer heterogeneity, the rate at which stake concentrates on the most sophisticated block producers. 3. In a model with heterogeneous proposers and specialized builders, we quantify, as a function of the competitiveness of the builder ecosystem, the extent to which proposer-builder separation reduces the heterogeneity in rewards across different proposers. Our models and results take advantage of connections to contest design, Pólya urn processes, and auction theory.
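A toy simulation in the spirit of the second model above (a Pólya-urn-like reinvestment process) is sketched below; the selection rule, the skill multipliers, and all parameters are illustrative assumptions, not the paper's model.

```python
# Toy reinvestment dynamics in the spirit of the stochastic model above
# (a Polya-urn-like process).  All modeling choices here are illustrative
# assumptions, not the paper's.
import random

def simulate(stakes, skill, rounds=100_000, reward=1.0, seed=0):
    """stakes[i]: initial stake of producer i; skill[i]: reward multiplier
    capturing how effectively producer i extracts value per block."""
    rng = random.Random(seed)
    stakes = list(stakes)
    for _ in range(rounds):
        # one block per round, proposed with probability proportional to stake
        winner = rng.choices(range(len(stakes)), weights=stakes)[0]
        # the reward is scaled by the producer's sophistication and reinvested
        stakes[winner] += reward * skill[winner]
    total = sum(stakes)
    return [s / total for s in stakes]

# Two producers start with equal stake; producer 1 is 10% more sophisticated.
print(simulate([100.0, 100.0], skill=[1.0, 1.1]))  # stake share drifts toward 1
```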
In a typical decentralized autonomous organization (DAO), people organize themselves into a group that is programmatically managed. DAOs can act as bidders in auctions, with a DAO's bid treated by the auctioneer as if it had been submitted by an individual, without regard to the internal structure of the DAO. We study auctions in which the bidders are DAOs. More precisely, we consider the design of two-level auctions in which the "participants" are groups of bidders rather than individuals. Bidders form DAOs to pool resources, but must then also negotiate the terms by which the DAO's winnings are shared. We model the outcome of a DAO's negotiations by an aggregation function (which aggregates DAO members' bids into a single group bid), and a budget-balanced cost-sharing mechanism (that determines DAO members' access to the DAO's allocation and distributes the total payment demanded from the DAO to its members). We pursue two-level mechanisms that are incentive-compatible (with truthful bidding a dominant strategy for members of each DAO) and approximately welfare-optimal. We prove that, even in the case of a single-item auction, incentive-compatible welfare maximization is not possible: no matter what outer mechanism and what cost-sharing mechanisms the DAOs use, the welfare of the resulting two-level mechanism can be a factor of $\approx \ln n$ less than optimal. We complement this lower bound with a natural two-level mechanism that achieves a matching approximate welfare guarantee. Our upper bound also extends to multi-item auctions where individuals have additive valuations. Finally, we show that our positive results cannot be extended much further: Even in multi-item settings with unit-demand bidders, truthful two-level mechanisms form a highly restricted class and as a consequence cannot guarantee any non-trivial approximation of the maximum social welfare.
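The two-level structure can be made concrete with a small toy instance. The design choices below (sum aggregation, a second-price outer auction, and proportional cost sharing) are assumptions made only for illustration; they are not the mechanisms the paper analyzes or the ones achieving its bounds.

```python
# Toy instance of the two-level setup: DAOs aggregate member bids, the outer
# auction treats each DAO as one bidder, and the winning DAO splits its
# payment with a budget-balanced proportional cost-sharing rule.  All three
# design choices are illustrative assumptions.

def two_level_single_item(daos):
    """daos: dict dao_name -> dict member -> bid."""
    # Level 1: each DAO aggregates its members' bids into one group bid.
    group_bids = {name: sum(members.values()) for name, members in daos.items()}
    # Level 2: the outer auction is a second-price auction over group bids.
    ranked = sorted(group_bids, key=group_bids.get, reverse=True)
    winner = ranked[0]
    price = group_bids[ranked[1]] if len(ranked) > 1 else 0.0
    # Inside the winning DAO: share the payment in proportion to members' bids.
    members = daos[winner]
    total = sum(members.values())
    shares = {m: price * b / total for m, b in members.items()}
    return winner, price, shares

print(two_level_single_item({
    "dao_A": {"u1": 4.0, "u2": 3.0},
    "dao_B": {"u3": 5.0},
}))   # dao_A wins, pays 5.0, split 20/7 and 15/7 between u1 and u2
```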
We study the problem of allocating a set of indivisible goods among a set of agents in a fair and efficient manner. An allocation is said to be fair if it is envy-free up to one good (EF1), which means that each agent prefers its own bundle over the bundle of any other agent up to the removal of one good. In addition, an allocation is deemed efficient if it satisfies Pareto efficiency. While each of these well-studied properties is easy to achieve separately, achieving them together is far from obvious. Recently, Caragiannis et al. (2016) established the surprising result that when agents have additive valuations for the goods, there always exists an allocation that simultaneously satisfies these two seemingly incompatible properties. Specifically, they showed that an allocation that maximizes the Nash social welfare objective is both EF1 and Pareto efficient. However, the problem of maximizing Nash social welfare is NP-hard. As a result, this approach does not provide an efficient algorithm for finding a fair and efficient allocation. In this paper, we bypass this barrier, and develop a pseudopolynomial time algorithm for finding allocations that are EF1 and Pareto efficient; in particular, when the valuations are bounded, our algorithm finds such an allocation in polynomial time. Furthermore, we establish a stronger existence result compared to Caragiannis et al. (2016): For additive valuations, there always exists an allocation that is EF1 and fractionally Pareto efficient. Another key contribution of our work is to show that our algorithm provides a polynomial-time 1.45-approximation to the Nash social welfare objective. This improves upon the best known approximation ratio for this problem (namely, the 2-approximation algorithm of Cole et al., 2017), and also matches the lower bound on the integrality gap of the convex program of Cole et al. (2017). Unlike many of the existing approaches, our algorithm is completely combinatorial, and relies on constructing integral Fisher markets wherein specific equilibria are not only efficient, but also fair.
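The EF1 condition defined in this abstract is easy to check directly for additive valuations; the sketch below is only a property checker, and has nothing to do with the paper's pseudopolynomial algorithm or the Nash-welfare machinery.

```python
# Direct check of the EF1 condition for additive valuations: agent i does
# not envy agent j after removing some single good from j's bundle.

def is_ef1(valuations, allocation):
    """valuations[i][g]: agent i's value for good g; allocation[j]: goods of j."""
    n = len(valuations)
    for i in range(n):
        v = valuations[i]
        own = sum(v[g] for g in allocation[i])
        for j in range(n):
            if j == i or not allocation[j]:
                continue
            other = sum(v[g] for g in allocation[j])
            best_removal = max(v[g] for g in allocation[j])
            if own < other - best_removal - 1e-9:   # envy survives every removal
                return False
    return True

vals = [[5, 1, 3], [2, 4, 4]]
print(is_ef1(vals, [[0], [1, 2]]))   # True: agent 0 values {1,2} at 4, and 4-3 <= 5
```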
We generalize the classic problem of fairly allocating indivisible goods to the problem of fair public decision making, in which a decision must be made on several social issues simultaneously, and, unlike the classic setting, a decision can provide positive utility to multiple players. We extend the popular fairness notion of proportionality (which cannot, in general, be guaranteed) to our more general setting, and introduce three novel relaxations (proportionality up to one issue, round robin share, and pessimistic proportional share) that are also interesting in the classic goods allocation setting. We show that the Maximum Nash Welfare solution, which is known to satisfy appealing fairness properties in the classic setting, satisfies or approximates all three relaxations in our framework. We also provide polynomial time algorithms and hardness results for finding allocations satisfying these axioms, with or without insisting on Pareto optimality.
The goal of fair division is to distribute resources among competing players in a “fair” way. Envy-freeness is the most extensively studied fairness notion in fair division. Envy-free allocations do not always exist with indivisible goods, motivating the study of relaxed versions of envy-freeness. We study the envy-freeness up to any good (EFX) property, which states that no player prefers the bundle of another player following the removal of any single good, and prove the first general results about this property. We use the leximin solution to show existence of EFX allocations in several contexts, sometimes in conjunction with Pareto optimality. For two players with valuations obeying a mild assumption, one of these results provides stronger guarantees than the currently deployed algorithm on Spliddit, a popular fair division website. Unfortunately, finding the leximin solution can require exponential time. We show that this is necessary by proving an exponential lower bound on the number of value queries needed to identify an EFX allocation, even for two players with identical valuations. We consider both additive and more general valuations, and our work suggests that there is a rich landscape of problems to explore in the fair division of indivisible goods with different classes of player valuations.
A Little Charity Guarantees Almost Envy-Freeness. Bhaskar Ray Chaudhury, Telikepalli Kavitha, Kurt Mehlhorn, and Alkmini Sgouritsa. Proceedings of the 2020 ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 2658-2672. DOI: https://doi.org/10.1137/1.9781611975994.162
Fair division of indivisible goods is a very well-studied problem. The goal of this problem is to distribute m goods to n agents in a "fair" manner, where every agent has a valuation for each subset of goods. We assume general valuations. Envy-freeness is the most extensively studied notion of fairness. However, envy-free allocations do not always exist when goods are indivisible. The notion of fairness we consider here is "envy-freeness up to any good" (EFX), where no agent envies another agent after the removal of any single good from the other agent's bundle. It is not known if such an allocation always exists even when n = 3. We show there is always a partition of the set of goods into n + 1 subsets (X1, …, Xn, P) where, for i ∈ [n], Xi is the bundle allocated to agent i and the set P is unallocated (or donated to charity), such that we have: (1) envy-freeness up to any good, (2) no agent values P higher than her own bundle, and (3) fewer than n goods go to charity, i.e., |P| < n (typically m ≫ n). Our proof is constructive. When agents have additive valuations and |P| is large (i.e., when |P| is close to n), our allocation also has a good maximin share (MMS) guarantee. Moreover, a minor variant of our algorithm also shows the existence of an allocation which is 4/7 groupwise maximin share (GMMS): this is a notion of fairness stronger than MMS. This improves upon the current best bound of 1/2 known for an approximate GMMS allocation.
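The three guarantees can be verified mechanically for a given partition. The checker below assumes additive valuations purely to keep the sketch short (the paper's result holds for general valuations), and it is not the allocation algorithm itself.

```python
# Checks guarantees (1)-(3) above for a partition (X_1, ..., X_n, P),
# using additive valuations to keep the sketch short.

def check_charity_allocation(valuations, bundles, pool):
    """bundles[i]: goods of agent i; pool: unallocated goods P."""
    n = len(valuations)
    for i in range(n):
        v = valuations[i]
        own = sum(v[g] for g in bundles[i])
        # (1) EFX: no envy toward any other bundle after removing ANY single good
        for j in range(n):
            if j == i:
                continue
            for g in bundles[j]:
                if own < sum(v[h] for h in bundles[j]) - v[g] - 1e-9:
                    return False
        # (2) no agent values the charity pool above her own bundle
        if own < sum(v[g] for g in pool) - 1e-9:
            return False
    # (3) fewer than n goods go to charity
    return len(pool) < n

vals = [[4, 4, 1], [3, 3, 3]]
print(check_charity_allocation(vals, [[0], [1]], pool=[2]))   # True
```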
We study the problem of a budget-limited buyer who wants to buy a set of items, each from a different seller, to maximize her value. The budget-feasible mechanism design problem requires the design of a mechanism that incentivizes the sellers to truthfully report their costs and maximizes the buyer's value while guaranteeing that the total payment does not exceed her budget. Such budget-feasible mechanisms can model a buyer in a crowdsourcing market interested in recruiting a set of workers (sellers) to accomplish a task for her. This budget-feasible mechanism design problem was introduced by Singer in 2010. We consider the general case where the buyer's valuation is a monotone submodular function. There are a number of truthful mechanisms known for this problem. We offer two general frameworks for simple mechanisms, and by combining these frameworks, we significantly improve on the best known results, while also simplifying the analysis. For example, we improve the approximation guarantee for the general monotone submodular case from 7.91 to 5 and for the case of large markets (where each individual item has negligible value) from 3 to 2.58. More generally, given an $r$-approximation algorithm for the optimization problem (ignoring incentives), our mechanism is an $(r+1)$-approximation mechanism for large markets, an improvement from $2r^2$. We also provide a mechanism without the large market assumption, where we achieve a $(4r+1)$-approximation guarantee. We also show how our results can be used for the problem of a principal hiring in a crowdsourcing market to select a set of tasks subject to a total budget.
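To make the "$r$-approximation algorithm for the optimization problem (ignoring incentives)" concrete, here is the standard marginal-value-per-cost greedy with a best-single-item safeguard for budgeted monotone submodular maximization. It is only the non-strategic subroutine, not a truthful mechanism, and the exact approximation factor achieved depends on the greedy variant used.

```python
# Standard greedy for the NON-STRATEGIC budgeted monotone submodular
# maximization problem: repeatedly add the affordable item with the best
# marginal value per cost, then compare against the best single item.

def greedy_budgeted(items, costs, value, budget):
    """items: iterable of sellers; costs[i]: cost; value(S): monotone
    submodular set function on frozensets of items."""
    chosen, remaining = frozenset(), set(items)
    while True:
        best, best_ratio = None, 0.0
        spent = sum(costs[j] for j in chosen)
        for i in remaining:
            if costs[i] > budget - spent:
                continue
            gain = value(chosen | {i}) - value(chosen)
            if costs[i] > 0 and gain / costs[i] > best_ratio:
                best, best_ratio = i, gain / costs[i]
        if best is None:
            break
        chosen |= {best}
        remaining.remove(best)
    # safeguard: the per-cost greedy alone can be arbitrarily bad, so also
    # consider the best affordable single item
    singles = [i for i in items if costs[i] <= budget]
    if singles:
        top = max(singles, key=lambda i: value(frozenset({i})))
        if value(frozenset({top})) > value(chosen):
            return frozenset({top})
    return chosen

# Toy coverage-style valuation: value = number of distinct skills covered.
skills = {"a": {1, 2}, "b": {2, 3}, "c": {4}}
cov = lambda S: len(set().union(*(skills[i] for i in S))) if S else 0
print(greedy_budgeted(skills, {"a": 2, "b": 2, "c": 1}, cov, budget=3))
```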
In this paper, we show a tight approximation guarantee for budget-feasible mechanisms with an additive buyer. We propose a new simple randomized mechanism with approximation ratio of $2$, improving the previous best known result of $3$. Our bound is tight with respect to either the optimal offline benchmark, or its fractional relaxation. We also present a simple deterministic mechanism with the tight approximation guarantee of $3$ against the fractional optimum, improving the best known result of $(2+ \sqrt{2})$ for the weaker integral benchmark.
We consider the problem of allocating indivisible goods fairly among n agents who have additive and submodular valuations for the goods. Our fairness guarantees are in terms of the maximin share, which is defined to be the maximum value that an agent can ensure for herself, if she were to partition the goods into n bundles, and then receive a minimum valued bundle. Since maximin fair allocations (i.e., allocations in which each agent gets at least her maximin share) do not always exist, prior work has focused on approximation results that aim to find allocations in which the value of the bundle allocated to each agent is (multiplicatively) as close to her maximin share as possible. In particular, Procaccia and Wang (2014) along with Amanatidis et al. (2015) have shown that under additive valuations, a 2/3-approximate maximin fair allocation always exists and can be found in polynomial time. We complement these results by developing a simple and efficient algorithm that achieves the same approximation guarantee. Furthermore, we initiate the study of approximate maximin fair division under submodular valuations. Specifically, we show that when the valuations of the agents are nonnegative, monotone, and submodular, then a 0.21-approximate maximin fair allocation is guaranteed to exist. In fact, we show that such an allocation can be efficiently found by using a simple round-robin algorithm. A technical contribution of the article is to analyze the performance of this combinatorial algorithm by employing the concept of multilinear extensions.
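The maximin share itself is easy to state in code. The brute-force computation below follows the definition quoted above for additive valuations; it enumerates all partitions, so it is a definitional sketch for tiny instances only, not the approximation algorithm from the paper.

```python
# Brute-force maximin share (MMS) for one agent with additive valuations:
# the best value she can guarantee if she partitions the goods into n
# bundles and then receives the worst one.  Exponential in the number of goods.
from itertools import product

def maximin_share(values, n):
    """values[g]: the agent's value for good g; n: number of agents."""
    best = 0
    for assignment in product(range(n), repeat=len(values)):
        bundles = [0] * n
        for g, b in enumerate(assignment):
            bundles[b] += values[g]
        best = max(best, min(bundles))
    return best

print(maximin_share([5, 4, 3, 3, 1], n=2))   # 8, e.g. {5,3} vs {4,3,1}
```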
We investigate the query complexity of the fair allocation of indivisible goods. For two agents with arbitrary monotonic valuations, we design an algorithm that computes an allocation satisfying envy-freeness up to one good (EF1), a relaxation of envy-freeness, using a logarithmic number of queries. We show that the logarithmic query complexity bound also holds for three agents with additive valuations. These results suggest that it is possible to fairly allocate goods in practice even when the number of goods is extremely large. By contrast, we prove that computing an allocation satisfying envy-freeness and another of its relaxations, envy-freeness up to any good (EFX), requires a linear number of queries even when there are only two agents with identical additive valuations.
Several relaxations of envy-freeness, tailored to fair division in settings with indivisible goods, have been introduced within the last decade. Due to the lack of general existence results for most of these concepts, great attention has been paid to establishing approximation guarantees. In this work, we propose a simple algorithm that is universally fair in the sense that it returns allocations that have good approximation guarantees with respect to four such fairness notions at once. In particular, this is the first algorithm achieving a (φ−1)-approximation of envy-freeness up to any good (EFX) and a 2/(φ+2)-approximation of groupwise maximin share fairness (GMMS), where φ is the golden ratio. The best known approximation factor, in polynomial time, for either one of these fairness notions prior to this work was 1/2. Moreover, the returned allocation achieves envy-freeness up to one good (EF1) and a 2/3-approximation of pairwise maximin share fairness (PMMS). While EFX is our primary focus, we also exhibit how to fine-tune our algorithm and further improve the guarantees for GMMS or PMMS. Finally, we show that GMMS allocations (and thus PMMS and EFX allocations) always exist when the number of goods does not exceed the number of agents by more than two.
We consider a monopolist seller with n heterogeneous items, facing a single buyer. The buyer has a value for each item drawn independently according to (non-identical) distributions, and her value for a set of items is additive. The seller aims to maximize his revenue. We suggest using the a priori better of two simple pricing methods: selling the items separately, each at its optimal price, and bundling together, in which the entire set of items is sold as one bundle at its optimal price. We show that for any distribution, this mechanism achieves a constant-factor approximation to the optimal revenue. Beyond its simplicity, this is the first computationally tractable mechanism to obtain a constant-factor approximation for this multi-parameter problem. We additionally discuss extensions to multiple buyers and to valuations that are correlated across items.
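The "better of the two" rule is straightforward to estimate by sampling. The sketch below compares the empirically optimal separate posted prices against the empirically optimal grand-bundle price; the distributions, sample size, and choice of candidate prices are illustrative assumptions, not the paper's analysis.

```python
# Monte-Carlo comparison of "sell separately" vs "sell the grand bundle",
# keeping the larger of the two estimated revenues.
import random

def best_posted_price_revenue(samples):
    """Best p * Pr[value >= p] against the empirical distribution; the
    optimum is attained at one of the sampled values."""
    xs = sorted(samples)
    n = len(xs)
    # if the price is xs[k], at least n - k of the samples are >= it
    return max(xs[k] * (n - k) / n for k in range(n))

def srev_vs_brev(item_samplers, n_samples=20_000, seed=0):
    rng = random.Random(seed)
    draws = [[sample(rng) for _ in range(n_samples)] for sample in item_samplers]
    srev = sum(best_posted_price_revenue(col) for col in draws)        # separate
    brev = best_posted_price_revenue([sum(v) for v in zip(*draws)])    # bundle
    return max(srev, brev), srev, brev

# Two items: one uniform on [0, 1], one with a heavier (truncated) tail.
uniform = lambda rng: rng.random()
heavy = lambda rng: min(1.0 / max(rng.random(), 1e-3), 20.0)
print(srev_vs_brev([uniform, heavy]))
```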
We study the problem of fair allocation of m indivisible items among n agents with additive valuations, using the popular notion of maximin share (MMS) as our measure of fairness. An MMS allocation provides each agent a bundle worth at least her maximin share. While it is known that such an allocation need not exist [5, 7], a series of remarkable works [1-3, 6, 7] provided 2/3-approximation algorithms in which each agent receives a bundle worth at least 2/3 times her maximin share. More recently, [4] showed the existence of 3/4-MMS allocations and a PTAS to find a (3/4 − ε)-MMS allocation. Most of the previous works utilize intricate algorithms and require agents' approximate MMS values, which are computationally expensive to obtain.
Demand for blockchains such as Bitcoin and Ethereum is far larger than supply, necessitating a mechanism that selects a subset of transactions to include "on-chain" from the pool of all pending transactions. EIP-1559 is a proposal to make several tightly coupled changes to the Ethereum blockchain's transaction fee mechanism, including the introduction of variable-size blocks and a burned base fee that rises and falls with demand. These changes are slated for deployment in Ethereum's "London fork," scheduled for late summer 2021, at which point it will be the biggest economic change made to a major blockchain to date. The first goal of this paper is to formalize the problem of designing a transaction fee mechanism, taking into account the many idiosyncrasies of the blockchain setting (ranging from off-chain collusion between miners and users to the ease of money-burning). The second goal is to situate the specific mechanism proposed in EIP-1559 in this framework and rigorously interrogate its game-theoretic properties. The third goal is to suggest competing designs that offer alternative sets of trade-offs. The final goal is to highlight research opportunities for the EC community that could help shape the future of blockchain transaction fee mechanisms.
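A rough sketch of the base fee update rule at the heart of EIP-1559, with constants taken from the public specification as we understand it; treat this as illustrative rather than a consensus-accurate implementation.

# Sketch of the EIP-1559 base fee adjustment: the base fee rises when the previous
# block was fuller than the gas target and falls when it was emptier.
# Constants follow the public EIP-1559 specification as we understand it.
ELASTICITY_MULTIPLIER = 2            # blocks may use up to 2x the gas target
BASE_FEE_MAX_CHANGE_DENOMINATOR = 8  # at most a 1/8 change per block

def next_base_fee(parent_base_fee: int, parent_gas_used: int, parent_gas_limit: int) -> int:
    gas_target = parent_gas_limit // ELASTICITY_MULTIPLIER
    if parent_gas_used == gas_target:
        return parent_base_fee
    delta = parent_gas_used - gas_target
    change = parent_base_fee * abs(delta) // gas_target // BASE_FEE_MAX_CHANGE_DENOMINATOR
    if delta > 0:
        return parent_base_fee + max(change, 1)   # increase by at least 1 wei
    return parent_base_fee - change

# Example: a completely full block (2x the target) raises the base fee by 1/8.
print(next_base_fee(100_000_000_000, 30_000_000, 30_000_000))  # ~112.5 gwei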
We study the problem of computing maximin share guarantees, a recently introduced fairness notion. Given a set of $n$ agents and a set of goods, the maximin share of a single agent is the best that she can guarantee to herself if she is allowed to partition the goods in any way she prefers into $n$ bundles and then receive her least desirable bundle. The objective in our problem is then to find a partition so that each agent is guaranteed her maximin share. In settings with indivisible goods, such allocations are not guaranteed to exist, so we resort to approximation algorithms. Our main result is a $2/3$-approximation that runs in polynomial time for any number of agents. This improves upon the algorithm of Procaccia and Wang, which also produces a $2/3$-approximation but runs in polynomial time only for a constant number of agents. To achieve this, we redesign certain parts of their algorithm. Furthermore, motivated by the apparent difficulty, both theoretically and experimentally, of finding lower bounds on the existence of approximate solutions, we undertake a probabilistic analysis. We prove that in randomly generated instances, with high probability there exists a maximin share allocation. This can be seen as a justification of the experimental evidence reported in relevant works. Finally, we provide further positive results for two special cases that arise from previous works. The first is the intriguing case of $3$ agents, for which it is already known that exact maximin share allocations do not always exist (contrary to the case of $2$ agents). We provide a $7/8$-approximation algorithm, improving the previously known bound of $3/4$. The second case is when all item values belong to $\{0, 1, 2\}$, extending the $\{0, 1\}$ setting studied by Bouveret and Lemaître. We obtain an exact algorithm for any number of agents in this case.
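To make the definition concrete, here is a brute-force sketch that computes a single agent's maximin share by enumerating all partitions of the goods into $n$ bundles; this is only feasible for tiny instances, and the numbers are hypothetical.

# Brute-force sketch: an agent's maximin share is the best worst-bundle value she can
# secure by partitioning all goods into n bundles herself and keeping the worst bundle.
from itertools import product

def maximin_share(values, n):
    """values[g]: the agent's (additive) value for good g; n: number of agents/bundles."""
    best, m = 0, len(values)
    # Enumerate all assignments of goods to the n bundles (n**m of them).
    for assignment in product(range(n), repeat=m):
        bundles = [0] * n
        for good, bundle in enumerate(assignment):
            bundles[bundle] += values[good]
        best = max(best, min(bundles))
    return best

# Hypothetical instance: 3 agents, 5 goods.
print(maximin_share([6, 5, 4, 3, 2], 3))  # 6, e.g. via the partition {6}, {5, 2}, {4, 3}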
We consider the problem of budget feasible mechanism design proposed by Singer, but in a Bayesian setting. A principal has a public value for hiring a subset of the agents and a budget, while the agents have private costs for being hired. We consider both additive and submodular value functions of the principal. We show that there are simple, practical, ex post budget balanced posted pricing mechanisms that approximate the value obtained by the Bayesian optimal mechanism that is budget balanced only in expectation. A main motivating application for this work is crowdsourcing, e.g., on Mechanical Turk, where workers are drawn from a large population and posted pricing is standard. Our analysis methods relate to contention resolution schemes in submodular optimization of Vondrák et al. and the correlation gap analysis of Yan.
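A minimal sketch of the posted-pricing flavor of mechanism discussed here, in a toy additive setting: workers are approached in sequence and offered a fixed price, and an offer is only made while the remaining budget covers it. The price and the instance are hypothetical and this is not the paper's optimal pricing.

# Sketch of a sequential posted-price, budget-feasible procurement rule.
# Accepting a posted price is trivially truthful: a worker accepts iff cost <= price.
def posted_price_procurement(workers, price, budget):
    """workers: list of (value_to_buyer, private_cost); returns hired indices and total payment."""
    hired, spent = [], 0
    for idx, (value, cost) in enumerate(workers):
        if spent + price > budget:
            break                      # the budget could not cover another acceptance
        if cost <= price:              # worker accepts the take-it-or-leave-it offer
            hired.append(idx)
            spent += price
    return hired, spent

# Hypothetical instance: unit values, posted price 3, budget 10.
workers = [(1, 2), (1, 5), (1, 1), (1, 3), (1, 4)]
print(posted_price_procurement(workers, price=3, budget=10))  # ([0, 2, 3], 9)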
In this paper we consider a mechanism design problem in the context of large-scale crowdsourcing markets such as Amazon's Mechanical Turk mturk, ClickWorker clickworker, CrowdFlower crowdflower. In these markets, there is a requester who wants to hire workers to accomplish some tasks. Each worker is assumed to give some utility to the requester on getting hired. Moreover each worker has a minimum cost that he wants to get paid for getting hired. This minimum cost is assumed to be private information of the workers. The question then is -- if the requester has a limited budget, how to design a direct revelation mechanism that picks the right set of workers to hire in order to maximize the requester's utility? We note that although the previous work (Singer (2010) chen et al. (2011)) has studied this problem, a crucial difference in which we deviate from earlier work is the notion of large-scale markets that we introduce in our model. Without the large market assumption, it is known that no mechanism can achieve a competitive ratio better than 0.414 and 0.5 for deterministic and randomized mechanisms respectively (while the best known deterministic and randomized mechanisms achieve an approximation ratio of 0.292 and 0.33 respectively). In this paper, we design a budget-feasible mechanism for large markets that achieves a competitive ratio of 1 - 1/e ≃ 0.63. Our mechanism can be seen as a generalization of an alternate way to look at the proportional share mechanism, which is used in all the previous works so far on this problem. Interestingly, we can also show that our mechanism is optimal by showing that no truthful mechanism can achieve a factor better than 1 - 1/e, thus, fully resolving this setting. Finally we consider the more general case of submodular utility functions and give new and improved mechanisms for the case when the market is large.
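As a rough illustration of the proportional-share idea referenced above, here is a sketch of the allocation rule only, for the additive case: workers are sorted by cost per unit of value, and a worker is kept as long as her reported cost stays below her proportional share of the budget. The threshold payments that make the full mechanism truthful, and the paper's large-market refinements, are omitted; the instance is hypothetical.

# Sketch of a proportional-share style allocation rule (additive valuations).
def proportional_share_allocation(workers, budget):
    """workers: list of (value, reported_cost); returns indices of selected workers."""
    order = sorted(range(len(workers)), key=lambda i: workers[i][1] / workers[i][0])
    selected, total_value = [], 0.0
    for i in order:
        value, cost = workers[i]
        # Keep worker i only if her cost is at most her proportional share of the budget.
        if cost <= value * budget / (total_value + value):
            selected.append(i)
            total_value += value
        else:
            break
    return selected

workers = [(10, 2), (8, 3), (5, 4), (2, 3)]   # hypothetical (value, cost) pairs
print(proportional_share_allocation(workers, budget=10))  # [0, 1]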
We consider the problem of conducting a survey with the goal of obtaining an unbiased estimator of some population statistic when individuals have unknown costs (drawn from a known prior) for participating in the survey. Individuals must be compensated for their participation and are strategic agents, and so the payment scheme must incentivize truthful behavior. We derive optimal truthful mechanisms for this problem for the two goals of minimizing the variance of the estimator given a fixed budget, and minimizing the expected cost of the survey given a fixed variance goal.
We study a novel class of mechanism design problems in which the outcomes are constrained by the payments. This basic class of mechanism design problems captures many common economic situations, and yet it has not been studied, to our knowledge, in the past. We focus on the case of procurement auctions in which sellers have private costs, and the auctioneer aims to maximize a utility function on subsets of items, under the constraint that the sum of the payments provided by the mechanism does not exceed a given budget. Standard mechanism design ideas such as the VCG mechanism and its variants are not applicable here. We show that, for general functions, the budget constraint can render mechanisms arbitrarily bad in terms of the utility of the buyer. However, our main result shows that for the important class of submodular functions, a bounded approximation ratio is achievable. Better approximation results are obtained for subclasses of the submodular functions. We explore the space of budget feasible mechanisms in other domains and give a characterization under more restricted conditions.
We study combinatorial procurement auctions, where a buyer with a valuation function v and budget B wishes to buy a set of items. Each item i has a cost $c_i$, and the buyer is interested in a set S that maximizes v(S) subject to $\sum_{i \in S} c_i \leq B$. Special cases of combinatorial procurement auctions are well-studied problems from submodular optimization. In particular, when the costs are all equal (a cardinality constraint), a classic result by Nemhauser et al. shows that the greedy algorithm provides an $e/(e-1)$-approximation.
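For reference, the greedy algorithm behind the Nemhauser et al. bound repeatedly adds the item with the largest marginal gain until the cardinality budget k is exhausted. A generic sketch follows; the coverage-style objective is hypothetical (coverage functions are monotone and submodular).

# Sketch of the classic greedy algorithm for maximizing a monotone submodular
# function under a cardinality constraint.
def greedy(items, value_fn, k):
    """Pick up to k items, each round adding the item with the largest marginal gain."""
    chosen = set()
    for _ in range(k):
        best_item, best_gain = None, 0
        for item in items:
            if item in chosen:
                continue
            gain = value_fn(chosen | {item}) - value_fn(chosen)
            if gain > best_gain:
                best_item, best_gain = item, gain
        if best_item is None:
            break
        chosen.add(best_item)
    return chosen

# Hypothetical coverage objective: the value of a set of items is the number of elements covered.
coverage = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6}, "d": {1, 6}}
value_fn = lambda S: len(set().union(*(coverage[i] for i in S))) if S else 0
print(greedy(coverage.keys(), value_fn, k=2))  # e.g. {'a', 'c'}, covering all six elements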
Budget feasible mechanisms, recently initiated by Singer (FOCS 2010), extend algorithmic mechanism design problems to a realistic setting with a budget constraint. We consider the problem of designing truthful budget feasible mechanisms for monotone submodular functions: we give a randomized mechanism with an approximation ratio of 7.91 (improving on the previous best-known result of 233.83), and a deterministic mechanism with an approximation ratio of 8.34. We also study the knapsack problem, which is a special case of a submodular function, and give a (2 + √2)-approximation deterministic mechanism (improving on the previous best-known result of 5) and a 3-approximation randomized mechanism. We provide similar results for an extended knapsack problem with heterogeneous items, where items are divided into groups and one can pick at most one item from each group. Finally, we show a lower bound of 1 + √2 on the approximation ratio of deterministic mechanisms and 2 for randomized mechanisms, for knapsack as well as for general monotone submodular functions. Our lower bounds are unconditional and do not rely on any computational or complexity assumptions.
We consider a monopolist seller facing a single buyer with additive valuations over n heterogeneous, independent items. It is known that in this important setting optimal mechanisms may require randomization [12], use menus of infinite size [9], and may be computationally intractable [8]. This has sparked recent interest in finding simple mechanisms that obtain reasonable approximations to the optimal revenue [10, 15, 3]. In this work we attempt to find the optimal simple mechanism.