In this article, we will formalize the method of dual fitting and the idea of factor-revealing LP. This combination is used to design and analyze two greedy algorithms for the metric uncapacitated facility location problem. Their approximation factors are 1.861 and 1.61, with running times of O(m log m) and O(n^3), respectively, where n is the total number of vertices and m is the number of edges in the underlying complete bipartite graph between cities and facilities. The algorithms are used to improve recent results for several variants of the problem.
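For reference, the standard objective of the metric uncapacitated facility location problem, written here in notation of our own choosing (facility set $F$ with opening costs $f_i$, city set $C$, connection costs $c_{ij}$ forming a metric), is

$$\min_{\emptyset \neq S \subseteq F} \; \sum_{i \in S} f_i \;+\; \sum_{j \in C} \min_{i \in S} c_{ij},$$

i.e., the algorithms above choose which facilities to open and connect every city to its nearest open facility.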
In the usual models of cooperative game theory, the outcome of a coalition formation process is either the grand coalition or a coalition structure that consists of disjoint coalitions. However, in many domains where coalitions are associated with tasks, an agent may be involved in executing more than one task, and thus may distribute his resources among several coalitions. To tackle such scenarios, we introduce a model for cooperative games with overlapping coalitions, or overlapping coalition formation (OCF) games. We then explore the issue of stability in this setting. In particular, we introduce a notion of the core, which generalizes the corresponding notion in the traditional (non-overlapping) scenario. Then, under some quite general conditions, we characterize the elements of the core, and show that any element of the core maximizes the social welfare. We also introduce a concept of balancedness for overlapping coalitional games, and use it to characterize coalition structures that can be extended to elements of the core. Finally, we generalize the notion of convexity to our setting, and show that under some natural assumptions convex games have a non-empty core. Moreover, we introduce two alternative notions of stability in OCF games that allow a wider range of deviations, and explore the relationships among the corresponding definitions of the core, as well as the classic (non-overlapping) core and the Aubin core. We illustrate the general properties of the three cores, and also study them from a computational perspective, thus obtaining additional insights into their fundamental structure.
We study the problem of computing maximin share guarantees, a recently introduced fairness notion. Given a set of $n$ agents and a set of goods, the maximin share of a single agent is the best she can guarantee for herself if she were allowed to partition the goods into $n$ bundles in any way she prefers and then receive her least desirable bundle. The objective in our problem is then to find a partition so that each agent is guaranteed her maximin share. In settings with indivisible goods, such allocations are not guaranteed to exist, so we resort to approximation algorithms. Our main result is a $2/3$-approximation that runs in polynomial time for any number of agents. This improves upon the algorithm of Procaccia and Wang, which also produces a $2/3$-approximation but runs in polynomial time only for a constant number of agents. To achieve this, we redesign certain parts of their algorithm. Furthermore, motivated by the apparent difficulty, both theoretically and experimentally, of establishing lower bounds on the existence of approximate solutions, we undertake a probabilistic analysis. We prove that in randomly generated instances, with high probability there exists a maximin share allocation. This can be seen as a justification of the experimental evidence reported in relevant works. Finally, we provide further positive results for two special cases that arise from previous works. The first one is the intriguing case of $3$ agents, for which it is already known that exact maximin share allocations do not always exist (contrary to the case of $2$ agents). We provide a $7/8$-approximation algorithm, improving the previously known result of $3/4$. The second case is when all item values belong to $\{0, 1, 2\}$, extending the $\{0, 1\}$ setting studied in Bouveret and Lema\^itre. We obtain an exact algorithm for any number of agents in this case.
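In symbols, and under the additive-valuations convention standard in this literature (an assumption on our part), the maximin share of agent $i$ with valuation $v_i$ over the set of goods $M$ is

$$\mathrm{MMS}_i \;=\; \max_{(B_1, \ldots, B_n) \in \Pi_n(M)} \; \min_{1 \le j \le n} v_i(B_j),$$

where $\Pi_n(M)$ is the set of partitions of $M$ into $n$ bundles. An allocation is then a $\rho$-approximate maximin share allocation if every agent $i$ receives a bundle she values at least $\rho \cdot \mathrm{MMS}_i$.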
We study a fair division problem with indivisible items, namely the computation of maximin share allocations. Given a set of $n$ players, the maximin share of a single player is the best she can guarantee for herself if she were to partition the items into $n$ bundles in any way she prefers and then receive her least desirable bundle. The objective then is to find an allocation so that each player is guaranteed her maximin share. Previous works have studied this problem mostly algorithmically, providing constant factor approximation algorithms. In this work we embark on a mechanism design approach and investigate the existence of truthful mechanisms. We propose three models regarding the information that the mechanism attempts to elicit from the players, based on the cardinal and ordinal representation of preferences. We establish positive and negative (impossibility) results for each model and highlight the limitations imposed by truthfulness on the approximability of the problem. Finally, we pay particular attention to the case of two players, which already leads to challenging questions.
The family of Groves mechanisms, which includes the well-known VCG mechanism (also known as the Clarke mechanism), is a family of efficient and strategy-proof mechanisms. Unfortunately, the Groves mechanisms are generally not budget balanced. That is, under such mechanisms, payments may flow into or out of the system of the agents, resulting in deficits or reduced utilities for the agents. We consider the following problem: within the family of Groves mechanisms, we want to identify mechanisms that give the agents the highest utilities, under the constraint that these mechanisms must never incur deficits. We adopt a prior-free approach. We introduce two general measures for comparing mechanisms in prior-free settings. We say that a non-deficit Groves mechanism M individually dominates another non-deficit Groves mechanism M' if for every type profile, every agent's utility under M is no less than that under M', and this holds with strict inequality for at least one type profile and one agent. We say that a non-deficit Groves mechanism M collectively dominates another non-deficit Groves mechanism M' if for every type profile, the agents' total utility under M is no less than that under M', and this holds with strict inequality for at least one type profile. The above definitions induce two partial orders on non-deficit Groves mechanisms. We study the maximal elements corresponding to these two partial orders, which we call the individually undominated mechanisms and the collectively undominated mechanisms, respectively.
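Writing $u_i^M(\theta)$ for agent $i$'s utility under mechanism $M$ at type profile $\theta$ (notation we introduce here for illustration), the two comparison relations read:

$$M \text{ individually dominates } M' \iff \forall \theta \ \forall i: \ u_i^M(\theta) \ge u_i^{M'}(\theta), \ \text{with strict inequality for some } (\theta, i);$$
$$M \text{ collectively dominates } M' \iff \forall \theta: \ \textstyle\sum_i u_i^M(\theta) \ge \sum_i u_i^{M'}(\theta), \ \text{with strict inequality for some } \theta.$$

Note that individual dominance implies collective dominance, so every collectively undominated mechanism is also individually undominated.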
We study the mechanism design problem of allocating a set of indivisible items without monetary transfers. Despite the vast literature on this very standard model, it still remains unclear what truthful mechanisms look like. We focus on the case of two players with additive valuation functions and our purpose is twofold. First, our main result provides a complete characterization of truthful mechanisms that allocate all the items to the players. Our characterization reveals an interesting structure underlying all truthful mechanisms, showing that they can be decomposed into two components: a selection part where players pick their best subset among prespecified choices determined by the mechanism, and an exchange part where players are offered the chance to exchange certain subsets if it is favorable to do so. In the remainder of the paper, we apply our main result and derive several consequences on the design of mechanisms with fairness guarantees. We consider various notions of fairness (indicatively, maximin share guarantees and envy-freeness up to one item) and provide tight bounds for their approximability. Our work settles some of the open problems in this agenda, and we conclude by discussing possible extensions to more players.
We introduce a new threshold model of social networks, in which the nodes influenced by their neighbours can adopt one out of several alternatives. We characterize social networks for which adoption of a product by the whole network is possible (respectively necessary) and the ones for which a unique outcome is guaranteed. These characterizations directly yield polynomial time algorithms that allow us to determine whether a given social network satisfies one of the above properties. We also study algorithmic questions for networks without unique outcomes. We show that the problem of determining whether a final network exists in which all nodes adopted some product is NP-complete. In turn, we also resolve the complexity of the problems of determining whether a given node adopts some (respectively, a given) product in some (respectively, all) network(s). Further, we show that the problem of computing the minimum possible spread of a product is NP-hard to approximate with an approximation ratio better than Ω(n), in contrast to the maximum spread, which is efficiently computable. Finally, we clarify that some of the above problems can be solved in polynomial time when there are only two products.
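Below is a minimal simulation sketch of one such adoption process. The weighted-influence rule, the irreversibility of adoption, the scheduling of updates, and the way a node chooses among several qualifying products are all illustrative assumptions of ours; the abstract does not fix these details.

```python
# Hypothetical round-based dynamics for a multi-product threshold model.
def adoption_dynamics(neighbors, weight, theta, products, seeds):
    """neighbors[v]: nodes whose choices influence v;
    weight[(u, v)]: influence of u on v; theta[v]: adoption threshold of v;
    seeds: dict mapping early adopters to their product."""
    state = dict(seeds)  # node -> adopted product
    changed = True
    while changed:  # iterate until no node can adopt anything new
        changed = False
        for v in neighbors:
            if v in state:
                continue  # adoption is irreversible in this sketch
            for p in products:
                support = sum(weight[(u, v)] for u in neighbors[v]
                              if state.get(u) == p)
                if support >= theta[v]:
                    state[v] = p  # adopt the first qualifying product
                    changed = True
                    break
    return state
```

Different tie-breaking and scheduling choices can produce different final networks, which is exactly why the questions about some versus all final networks arise above.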
In fair division problems with indivisible goods it is well known that one cannot have any guarantees for the classic fairness notions of envy-freeness and proportionality. As a result, several relaxations have been introduced, most of them in quite recent works. We focus on four such notions, namely envy-freeness up to one good (EF1), envy-freeness up to any good (EFX), maximin share fairness (MMS), and pairwise maximin share fairness (PMMS). Since obtaining these relaxations also turns out to be problematic in several scenarios, approximate versions of them have been considered. In this work, we investigate further the connections between the four notions mentioned above and their approximate versions. We establish several tight, or almost tight, results concerning the approximation quality that any of these notions guarantees for the others, providing an almost complete picture of this landscape. Some of our findings reveal interesting and surprising consequences regarding the power of these notions, e.g., PMMS and EFX provide the same worst-case guarantee for MMS, despite PMMS being a strictly stronger notion than EFX. We believe such implications provide further insight on the quality of approximately fair solutions.
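For concreteness, with additive valuations $v_i$ and an allocation $(A_1, \ldots, A_n)$, the two envy-based relaxations can be stated as follows (the formalization below is ours):

$$\text{EF1:} \quad \forall i, j \text{ with } A_j \neq \emptyset, \ \exists g \in A_j: \ v_i(A_i) \ge v_i(A_j \setminus \{g\});$$
$$\text{EFX:} \quad \forall i, j, \ \forall g \in A_j: \ v_i(A_i) \ge v_i(A_j \setminus \{g\}).$$

EFX strengthens EF1 by requiring the guarantee to hold after removing any good from the envied bundle, rather than some good.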
Several relaxations of envy-freeness, tailored to fair division in settings with indivisible goods, have been introduced within the last decade. Due to the lack of general existence results for most of these concepts, great attention has been paid to establishing approximation guarantees. In this work, we propose a simple algorithm that is universally fair in the sense that it returns allocations that have good approximation guarantees with respect to four such fairness notions at once. In particular, this is the first algorithm achieving a (φ−1)-approximation of envy-freeness up to any good (EFX) and a 2/(φ+2)-approximation of groupwise maximin share fairness (GMMS), where φ is the golden ratio. The best known approximation factor, in polynomial time, for either one of these fairness notions prior to this work was 1/2. Moreover, the returned allocation achieves envy-freeness up to one good (EF1) and a 2/3-approximation of pairwise maximin share fairness (PMMS). While EFX is our primary focus, we also exhibit how to fine-tune our algorithm and improve further the guarantees for GMMS or PMMS. Finally, we show that GMMS allocations, and thus PMMS and EFX allocations, always exist when the number of goods does not exceed the number of agents by more than two.
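To put the constants in perspective: since φ = (1 + √5)/2 ≈ 1.618, the two new guarantees evaluate to φ − 1 ≈ 0.618 for EFX and 2/(φ + 2) ≈ 0.553 for GMMS, both strictly above the previous best factor of 1/2.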
Recently, we introduced in arXiv:1105.2434 a model for product adoption in social networks with multiple products, where the agents, influenced by their neighbours, can adopt one out of several alternatives. We identify and analyze here four types of paradoxes that can arise in these networks. To this end, we use social network games that we recently introduced in arXiv:1202.2209. These paradoxes shed light on possible inefficiencies arising when one modifies the sets of products available to the agents forming a social network. One of the paradoxes corresponds to the well-known Braess paradox in congestion games and shows that by adding more choices to a node, the network may end up in a situation that is worse for everybody. We exhibit a dual version of this, where removing available choices from someone can eventually make everybody better off. The other paradoxes that we identify show that adding or removing a product from the choice set of some node may lead to permanent instability. Finally, we also identify conditions under which some of these paradoxes cannot arise.
We introduce a new threshold model of social networks, in which the nodes influenced by their neighbours can adopt one out of several alternatives. We characterize the graphs for which adoption of a product by the whole network is possible (respectively necessary) and the ones for which a unique outcome is guaranteed. These characterizations directly yield polynomial time algorithms that allow us to determine whether a given social network satisfies one of the above properties.
We also study algorithmic questions for networks without unique outcomes. We show that the problem of computing the minimum possible spread of a product is NP-hard to approximate with an approximation ratio better than $\Omega(n)$, in contrast to the maximum spread, which is efficiently computable. We then move on to questions regarding the behavior of a node with respect to adopting some (resp. a given) product. We show that the problem of determining whether a given node has to adopt some (resp. a given) product in all final networks is co-NP-complete.
We focus on a generalization of the classic Minisum approval voting rule, introduced by Barrot and Lang (2016), and referred to as Conditional Minisum (CMS), for multi-issue elections with preferential dependencies. Under this rule, voters are allowed to declare dependencies between different issues, but the price we have to pay for this higher level of expressiveness is that we end up with a computationally hard rule. Motivated by this, we first focus on finding special cases that admit efficient algorithms for CMS. Our main result in this direction is that we identify bounded treewidth (of an appropriate graph emerging from the provided ballots) as the necessary and sufficient condition for exact polynomial-time algorithms, under common complexity assumptions. We then move to the design of approximation algorithms. For the (still hard) case of binary issues, we identify natural restrictions on the voters' ballots, under which we provide the first multiplicative approximation algorithms for the problem. The restrictions involve upper bounds on the number of dependencies an issue can have on the others and on the number of alternatives per issue that a voter can approve. Finally, we also investigate the complexity of problems related to the strategic control of conditional approval elections by adding or deleting either voters or alternatives, and we show that in most variants of these problems, CMS is computationally resistant against control. Overall, we conclude that CMS can be viewed as a solution that achieves a satisfactory tradeoff between expressiveness and computational efficiency when we have a limited number of dependencies among issues, while at the same time exhibiting sufficient resistance to control.
We study two standard multi-unit auction formats for allocating multiple units of a single good to multi-demand bidders. The first one is the Discriminatory Auction, which charges every winner his winning bids. The second is the Uniform Price Auction, which determines a uniform price to be paid per unit. Variants of both formats find applications ranging from the allocation of state bonds to investors, to online sales over the internet, facilitated by popular online brokers. For these formats, we consider two bidding interfaces: (i) standard bidding, which is most prevalent in the scientific literature, and (ii) uniform bidding, which is more popular in practice. In this work, we evaluate the economic inefficiency of both multi-unit auction formats for both bidding interfaces, by means of upper and lower bounds on the Price of Anarchy for pure Nash equilibria and mixed Bayes-Nash equilibria. Our developments improve significantly upon bounds that have been obtained recently in [Markakis, Telelis, ToCS 2014] and [Syrgkanis, Tardos, STOC 2013] for submodular valuation functions. Moreover, we consider for the first time bidders with subadditive valuation functions for these auction formats. Our results signify that these auctions are nearly efficient, which provides further justification for their use in practice.
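As a reminder of the benchmark used here (stated in our notation): for welfare maximization, the Price of Anarchy of an auction format over a class of instances is

$$\mathrm{PoA} \;=\; \sup_{\text{instances}} \; \frac{\mathrm{SW}(\mathrm{OPT})}{\inf_{\mathbf{b} \in \mathrm{Eq}} \mathrm{SW}(\mathbf{b})},$$

the worst-case ratio of the optimal social welfare to the welfare attained at the worst equilibrium (pure Nash or Bayes-Nash, depending on the setting). A PoA close to 1 is what "nearly efficient" refers to above.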
We consider the classic cake-cutting problem of producing envy-free allocations, restricted to the case of four agents. The problem asks for a partition of the cake among the four agents, so that every agent finds her piece at least as valuable as every other agent's piece. The problem has had an interesting history so far. Although the case of three agents is solvable with fewer than 15 queries, for four agents no bounded procedure was known until the recent breakthroughs of Aziz and Mackenzie (STOC 2016, FOCS 2016). The main drawback of these new algorithms, however, is that they are quite complicated and have a very high query complexity. With four agents, the number of queries required is close to 600. In this work we provide an improved algorithm for four agents, which reduces the current complexity by a factor of 3.4. Our algorithm builds on the approach of Aziz and Mackenzie (STOC 2016) by incorporating new insights and simplifying several steps. Overall, this yields an easier-to-grasp procedure with lower complexity.
Since the celebrated PPAD-completeness result for Nash equilibria in bimatrix games, a long line of research has focused on polynomial-time algorithms that compute ε-approximate Nash equilibria. Finding the best possible approximation guarantee achievable in polynomial time has been a fundamental and non-trivial pursuit toward settling the complexity of approximate equilibria. Despite a significant amount of effort, the algorithm of Tsaknakis and Spirakis [38], with an approximation guarantee of (0.3393+δ), remains the state of the art over the last 15 years. In this paper, we propose a new refinement of the Tsaknakis-Spirakis algorithm, resulting in a polynomial-time algorithm that computes a \((\frac{1}{3}+\delta)\)-Nash equilibrium, for any constant δ > 0. The main idea of our approach is to go beyond the use of convex combinations of primal and dual strategies, as defined in the optimization framework of [38], and enrich the pool of strategies from which we build the strategy profiles that we output in certain bottleneck cases of the algorithm.
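For completeness, the notion in question, stated in our notation with payoffs normalized to [0,1]: a pair of mixed strategies $(x, y)$ for row and column payoff matrices $R$ and $C$ is an ε-approximate Nash equilibrium if

$$x^\top R y \;\ge\; \tilde{x}^\top R y - \varepsilon \quad \forall \tilde{x}, \qquad x^\top C y \;\ge\; x^\top C \tilde{y} - \varepsilon \quad \forall \tilde{y},$$

i.e., no player can improve her expected payoff by more than ε through a unilateral deviation.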
We are interested in mechanisms that maximize social welfare. In [1] this problem was studied for multi-unit auctions with unit demand bidders and for the public project problem, and in each case social welfare undominated mechanisms in the class of feasible and incentive compatible mechanisms were identified. One way to improve upon these optimality results is by allowing the players to move sequentially. With this in mind, we study here sequential versions of two feasible Groves mechanisms used for single-item auctions: the Vickrey auction and the Bailey-Cavallo mechanism. Because of the absence of dominant strategies in this sequential setting, we focus on a weaker concept of an optimal strategy. For each mechanism we introduce natural optimal strategies and observe that these strategies exhibit different behaviour across the two mechanisms. However, we then show that among all optimal strategies, the one we introduce for each mechanism maximizes the social welfare when each player follows it. The resulting social welfare can be larger than the one obtained in the simultaneous setting. Finally, we show that, when interpreting both mechanisms as simultaneous ones, the vectors of the proposed strategies form a Pareto optimal Nash equilibrium in the class of optimal strategies.
We study a revenue maximization problem in the context of social networks. Namely, we consider a model introduced by Alon, Mansour, and Tennenholtz (EC 2013) that captures inequity aversion, i.e., prices offered to neighboring vertices should not be significantly different. We first provide approximation algorithms for a natural class of instances, referred to as the class of single-value revenue functions. Our results improve on the current state of the art, especially when the number of distinct prices is small. This applies, for example, to settings where the seller will only consider a fixed number of discount types or special offers. We then resolve one of the open questions posed in Alon et al., by establishing APX-hardness for the problem. Surprisingly, we further show that the problem is NP-complete even when the price differences are allowed to be large, or even when the number of allowed distinct prices is as small as three. Finally, we provide some extensions of the model, regarding either the allowed set of prices or the demand type of the clients.
We study a class of procurement auctions with a budget constraint, where an auctioneer is interested in buying resources or services from a set of agents. Ideally, the auctioneer would like to select a subset of the resources so as to maximize his valuation function, without exceeding a given budget. As the resources are owned by strategic agents, however, our overall goal is to design mechanisms that are truthful, budget-feasible, and obtain a good approximation to the optimal value. Budget-feasibility creates additional challenges, making several approaches inapplicable in this setting. Previous results on budget-feasible mechanisms have considered mostly monotone valuation functions. In this work, we mainly focus on symmetric submodular valuations, a prominent class of non-monotone submodular functions that includes cut functions. We begin with a purely algorithmic result, obtaining a $\frac{2e}{e-1}$-approximation for maximizing symmetric submodular functions under a budget constraint. We view this as a standalone result of independent interest, as it is the best known factor achieved by a deterministic algorithm. We then proceed to propose truthful, budget-feasible mechanisms (both deterministic and randomized), paying particular attention to the Budgeted Max Cut problem. Our results significantly improve the known approximation ratios for these objectives, while establishing polynomial running time for cases where only exponential mechanisms were known. At the heart of our approach lies an appropriate combination of local search algorithms with results for monotone submodular valuations, applied to the derived local optima.
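To recall the constraint that drives the difficulty (in our notation): a mechanism that selects a set $S$ of agents and pays each selected agent $i$ an amount $p_i$ is budget-feasible if, on every input,

$$\sum_{i \in S} p_i \;\le\; B,$$

where $B$ is the auctioneer's budget, while truthfulness requires that no agent can increase her utility $p_i - c_i$ by misreporting her true cost $c_i$. The tension is that truthful payments typically must exceed the reported costs, so bounding the total cost of the selected set does not by itself bound the total payment.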
We give an algorithm for learning symmetric $k$-juntas (Boolean functions of $n$ Boolean variables which depend only on an unknown set of $k$ of these variables) in the PAC model under the uniform distribution, which runs in time $n^{O(k/\log k)}$. Our bound is obtained by proving the following result: every symmetric Boolean function on $k$ variables, except for the parity and the constant functions, has a non-zero Fourier coefficient of order at least 1 and at most $O(k/\log k)$. This improves the previously best known bound of $(3/31)k$, and provides the first $n^{o(k)}$-time algorithm for learning symmetric juntas.
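The connection to learning, roughly paraphrased (the details here are ours): from uniform samples one can estimate all $n^{O(t)}$ Fourier coefficients $\hat{f}(S)$ with $1 \le |S| \le t$, and any $S$ with $\hat{f}(S) \neq 0$ consists of relevant variables of the junta; a structural guarantee with $t = O(k/\log k)$ therefore lets the learner uncover relevant variables, and hence the junta, in time $n^{O(k/\log k)}$.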
We present a systematic study of Plurality elections with strategic voters who, in addition to having preferences over election winners, have secondary preferences, which govern their behavior when their vote cannot affect the election outcome. Specifically, we study two models that have been recently considered in the literature: lazy voters, who prefer to abstain when they are not pivotal, and truth-biased voters, who prefer to vote truthfully when they are not pivotal. We extend prior work by investigating the behavior of both lazy and truth-biased voters under different tie-breaking rules (lexicographic rule, random voter rule, random candidate rule). Two of these six combinations of secondary preferences and a tie-breaking rule have been studied in prior work. In order to understand the impact of different secondary preferences and tie-breaking rules on the election outcomes, we study the remaining four combinations. We characterize pure Nash equilibria (PNE) of the resulting strategic games and study the complexity of related computational problems. Our results extend to settings where some of the voters may be non-strategic.
We study a type of reverse (procurement) auction problem in the presence of budget constraints. The general algorithmic problem is to purchase a set of resources, which come at a cost, so as not to exceed a given budget and at the same time maximize a given valuation function. This framework captures the budgeted version of several well known optimization problems, and when the resources are owned by strategic agents the goal is to design truthful and budget feasible mechanisms, i.e., elicit the true cost of the resources and ensure the payments of the mechanism do not exceed the budget. Budget feasibility introduces more challenges in mechanism design, and we study instantiations of this problem for certain classes of submodular and XOS valuation functions. We first obtain mechanisms with an improved approximation ratio for weighted coverage valuations, a special class of submodular functions that has already attracted attention in previous works. We then provide a general scheme for designing randomized and deterministic polynomial time mechanisms for a class of XOS problems. This class contains problems whose feasible set forms an independence system (a more general structure than matroids), and some representative problems include, among others, finding maximum weighted matchings, maximum weighted matroid members, and maximum weighted 3D-matchings. For most of these problems, only randomized mechanisms with very high approximation ratios were known prior to our results.
In many multiagent scenarios, agents distribute resources, such as time or energy, among several tasks. Once the tasks are completed and profits are generated, the task payoffs must be divided among the agents in some reasonable manner. Cooperative games with overlapping coalitions (OCF games) are a recent framework proposed by Chalkiadakis et al. (2010), generalizing classic cooperative games to the case where agents may belong to more than one coalition. Having formed overlapping coalitions and divided profits, some agents may feel dissatisfied with their share of the profits, and would like to deviate from the given outcome. However, deviation in OCF games is a complicated matter: agents may decide to withdraw only some of their weight from some of the coalitions they belong to; that is, even after deviation, it is possible that agents will still be involved in tasks with non-deviators. This means that the desirability of a deviation, and the stability of formed coalitions, is to a great extent determined by the reaction of non-deviators. In this work, we explore algorithmic aspects of OCF games, focusing on the core. We study the problem of deciding whether the core of an OCF game is non-empty and whether a core payoff division can be found in polynomial time; moreover, we identify conditions that ensure that the problem admits polynomial time algorithms. Finally, we introduce and study a natural class of OCF games, Linear Bottleneck Games. Interestingly, we show that such games always have a non-empty core, even assuming a highly lenient reaction to deviations.
Since the seminal PPAD-completeness result for computing a Nash equilibrium even in two-player games, an important line of research has focused on relaxations achievable in polynomial time. In this paper, we consider the notion of ε-well-supported Nash equilibrium, where ε ∈ [0,1] corresponds to the approximation guarantee. Put simply, in an ε-well-supported equilibrium, every player chooses with positive probability actions that are within ε of the maximum achievable payoff against the other player's strategy. Ever since the initial approximation guarantee of 2/3 for well-supported equilibria, which was established more than a decade ago, the progress on this problem has been extremely slow and incremental. Notably, the small improvements to 0.6608, and finally to 0.6528, were achieved by algorithms of growing complexity. Our main result is a simple and intuitive algorithm that improves the approximation guarantee to 1/2. Our algorithm is based on linear programming and in particular on exploiting suitably defined zero-sum games that arise from the payoff matrices of the two players. As a byproduct, we show how to achieve the same approximation guarantee in a query-efficient way. (The full version of the paper can be accessed at https://arxiv.org/abs/2207.07007.)
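In symbols (our formalization, with payoffs normalized to [0,1]): a pair of mixed strategies $(x, y)$ for payoff matrices $R$ and $C$ is an ε-well-supported Nash equilibrium if every pure strategy in the support of each player is ε-optimal, i.e.,

$$x_i > 0 \;\Rightarrow\; (Ry)_i \ge \max_k (Ry)_k - \varepsilon, \qquad y_j > 0 \;\Rightarrow\; (x^\top C)_j \ge \max_k (x^\top C)_k - \varepsilon.$$

This is more demanding than an ε-approximate Nash equilibrium, which only constrains the expected payoff of the mixed strategy as a whole; every ε-well-supported equilibrium is also ε-approximate, but not vice versa.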
We study mechanism design for combinatorial cost sharing. Imagine that multiple items or services are available to be shared among a set of interested agents. The outcome of a mechanism in this setting consists of an assignment, determining for each item the set of players who are granted service, together with respective payments. Although there are several works studying specialized versions of such problems, there had been almost no progress for general combinatorial cost sharing domains until recently (Dobzinski and Ovadia, 2017). The main goal of our work is to further understand the interplay between budget balance and social cost approximation in such domains. Towards this, we provide a refinement of cross-monotonicity (trace-monotonicity) that is applicable to iterative mechanisms. The trace here refers to the order in which players become finalized. On top of this, we also provide two parameterizations of cost functions which capture the behavior of their average cost-shares. Based on our trace-monotonicity property, we design a scheme of ascending cost sharing mechanisms which is applicable to the combinatorial cost sharing setting with symmetric submodular valuations. Using our first cost function parameterization, we identify conditions under which our mechanism is weakly group-strategyproof, $O(1)$-budget-balanced and $O(H_n)$-approximate with respect to the social cost. Finally, we consider general valuation functions and exploit our second parameterization to derive a more fine-grained analysis of the Sequential Mechanism introduced by Moulin. This mechanism is budget balanced by construction, but in general only guarantees a poor social cost approximation of $n$. We identify conditions under which the mechanism achieves improved social cost approximation guarantees.
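For context (our formalization): a cost-sharing scheme $\xi$ is cross-monotone if, for every player $i$ and sets of players $S \subseteq T$ with $i \in S$,

$$\xi_i(T) \;\le\; \xi_i(S),$$

that is, a player's cost share can only decrease as more players are served. Trace-monotonicity, as described above, adapts this requirement to the order (the trace) in which an iterative mechanism finalizes players.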
Our work focuses on extra gradient learning algorithms for finding Nash equilibria in bilinear zero-sum games. The proposed method, which can be formally considered as a variant of Optimistic Mirror Descent (Mertikopoulos et al., 2019), uses a large learning rate for the intermediate gradient step, which essentially leads to computing (approximate) best response strategies against the profile of the previous iteration. Although counter-intuitive at first sight, due to the unusually large intermediate learning step for an iterative algorithm, we prove that the method guarantees last-iterate convergence to an equilibrium. In particular, we show that the algorithm first reaches an $\eta^{1/\rho}$-approximate Nash equilibrium, with $\rho > 1$, by decreasing the Kullback-Leibler divergence of each iterate by at least $\Omega(\eta^{1+\frac{1}{\rho}})$, for a sufficiently small learning rate $\eta$, until the method becomes a contracting map and converges to the exact equilibrium. Furthermore, we perform experimental comparisons with the optimistic variant of the multiplicative weights update method of Daskalakis and Panageas (2019), and show that our algorithm has significant practical potential, since it offers substantial gains in terms of accelerated convergence.
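The following is a minimal Python sketch of this kind of update for a bilinear zero-sum game $\min_x \max_y x^\top A y$ over the simplices; the entropic (multiplicative-weights) form of the mirror step and all parameter values are our own illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def extragradient_mwu(A, eta=0.01, eta_mid=100.0, T=5000):
    """Extra-gradient / OMD-style dynamics for min_x max_y x^T A y,
    with multiplicative-weights (entropic) steps on the simplices.
    The large intermediate rate eta_mid makes the midpoint an
    (approximate) best response to the previous iterate."""
    n, m = A.shape
    x, y = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    for _ in range(T):
        # Intermediate step with a deliberately large learning rate.
        x_mid = x * np.exp(-eta_mid * (A @ y)); x_mid /= x_mid.sum()
        y_mid = y * np.exp(eta_mid * (A.T @ x)); y_mid /= y_mid.sum()
        # Actual update: small step, gradients evaluated at the midpoint.
        x = x * np.exp(-eta * (A @ y_mid)); x /= x.sum()
        y = y * np.exp(eta * (A.T @ x_mid)); y /= y.sum()
    return x, y

# Example: matching pennies; the unique equilibrium is (1/2, 1/2) for each player.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
print(extragradient_mwu(A))
```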
In recent years, the study of various models and questions related to Liquid Democracy has been of growing interest among the community of Computational Social Choice. A concern that has been raised is that the current academic literature focuses solely on static inputs, overlooking a key characteristic of Liquid Democracy: the right of a voter to change her mind as time goes by, regarding her choice of whether to vote herself or delegate her vote to other participants, until the final voting deadline. In real life, a period of extended deliberation preceding the election day motivates voters to adapt their behaviour over time, either based on observations of the remaining electorate or on information acquired about the topic at hand. By adding a temporal dimension to Liquid Democracy, such adaptations can increase the number of possible delegation paths and reduce the loss of votes due to delegation cycles or delegation paths towards abstaining agents, ultimately enhancing participation. Our work takes a first step towards integrating a time horizon into decision-making problems in Liquid Democracy systems. Our approach, via a computational complexity analysis, exploits concepts and tools from temporal graph theory, which turn out to be convenient for our framework.
We make two observations regarding the invertibility of Keller maps, i.e., polynomial maps for which the determinant of their Jacobian matrix is identically equal to 1. In our first result, we show that if P is an n-dimensional Keller map, defined over any extension of Q, then P has a polynomial inverse if and only if the range of P contains the cartesian product of n universal Hilbert sets. In our second result, we show that if P is a 2-dimensional Keller map, defined over an algebraic number field K, then P is invertible on a set that contains almost all rational integers of K.
We present our results on Uniform Price Auctions, one of the standard sealed-bid multi-unit auction formats, for selling multiple identical units of a single good to multi-demand bidders. Contrary to the truthful and economically efficient multi-unit Vickrey auction, the Uniform Price Auction encourages strategic bidding and is socially inefficient in general. The uniform pricing rule is, however, widely popular, owing to its appeal to the natural expectation that identical items should be identically priced. In this work we study equilibria of the Uniform Price Auction for bidders with (symmetric) submodular valuation functions over the number of units that they win. We investigate pure Nash equilibria of the auction in undominated strategies; we produce a characterization of these equilibria that allows us to prove that a fraction 1-1/e of the optimum social welfare is always recovered in any undominated pure Nash equilibrium, and this bound is essentially tight. Subsequently, we study the auction under the incomplete information setting and prove a bound of 4-2/k on the economic inefficiency of (mixed) Bayes-Nash equilibria that are supported by undominated strategies.
A common objective in mechanism design is to choose the outcome (for example, allocation of resources) that maximizes the sum of the agents' valuations, without introducing incentives for agents to misreport their preferences. The class of Groves mechanisms achieves this; however, these mechanisms require the agents to make payments, thereby reducing the agents' total welfare. In this paper we introduce a measure for comparing two mechanisms with respect to the final welfare they generate. This measure induces a partial order on mechanisms and we study the question of finding minimal elements with respect to this partial order. In particular, we say a non-deficit Groves mechanism is welfare undominated if there exists no other non-deficit Groves mechanism that always has a smaller or equal sum of payments. We focus on two domains: (i) auctions with multiple identical units and unit-demand bidders, and (ii) mechanisms for public project problems. In the first domain we analytically characterize all welfare undominated Groves mechanisms that are anonymous and have linear payment functions, by showing that the family of optimal-in-expectation linear redistribution mechanisms, which were introduced in [6] and include the Bailey-Cavallo mechanism [1,2], coincides with the family of welfare undominated Groves mechanisms that are anonymous and linear in the setting we study. In the second domain we show that the classic VCG (Clarke) mechanism is welfare undominated for the class of public project problems with equal participation costs, but is not undominated for a more general class.
We focus on the design of algorithms for finding equilibria in 2-player zero-sum games. Although it is well known that such problems can be solved by a single linear program, there has been a surge of interest in recent years for simpler algorithms, motivated in part by applications in machine learning. Our work proposes such a method, inspired by the observation that the duality gap (a standard metric for evaluating convergence in min-max optimization problems) is a convex function for bilinear zero-sum games. To this end, we analyze a descent-based approach, variants of which have also been used as a subroutine in a series of algorithms for approximating Nash equilibria in general non-zero-sum games. In particular, we study a steepest descent approach, by finding the direction that minimizes the directional derivative of the duality gap function. Our main theoretical result is that the derived algorithms achieve a geometric decrease in the duality gap and improved complexity bounds until we reach an approximate equilibrium. Finally, we complement this with an experimental evaluation, which provides promising findings. Our algorithm is comparable with (and in some cases outperforms) some of the standard approaches for solving zero-sum games, such as OGDA (Optimistic Gradient Descent/Ascent), even with thousands of available strategies per player.
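Concretely (in our notation): for a bilinear zero-sum game in which the row player minimizes $x^\top A y$ over the simplex $\Delta_n$ and the column player maximizes it over $\Delta_m$, the duality gap of a profile $(x, y)$ is

$$\mathrm{DG}(x, y) \;=\; \max_{y' \in \Delta_m} x^\top A y' \;-\; \min_{x' \in \Delta_n} x'^\top A y \;\ge\; 0.$$

Each term is a pointwise maximum of linear functions of $(x, y)$, so $\mathrm{DG}$ is convex, and it equals zero exactly at the Nash equilibria; this is the convexity observation the approach above exploits.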
This work examines the Conditional Approval Framework for elections involving multiple interdependent issues, specifically focusing on the Conditional Minisum Approval Voting Rule. We first conduct a detailed analysis of the computational complexity of this rule, demonstrating that no approach can significantly outperform the brute-force algorithm under common computational complexity assumptions and various natural input restrictions. In response, we propose two practical restrictions (the first in the literature) that make the problem computationally tractable and show that these restrictions are essentially tight. Overall, this work provides a clear picture of the tractability landscape of the problem, contributing to a comprehensive understanding of the complications introduced by conditional ballots and indicating that conditional approval voting can be applied in practice, albeit under specific conditions.
In this paper, we investigate the impact of reward schemes and committee sizes motivated by governance systems over blockchain communities. We introduce a model for elections with a binary outcome space where there is a ground truth (i.e., a "correct" outcome), and where stakeholders can only choose to delegate their voting power to a set of delegation representatives (DReps). Moreover, the effort (cost) invested by each DRep positively influences both (i) her ability to vote correctly and (ii) the total delegation that she attracts, thereby increasing her voting power. This model constitutes the natural counterpart of delegated proof-of-stake (PoS) protocols, where delegated stakes are used to elect the block builders. As a way to motivate the representatives to exert effort, a reward scheme can be used based on the delegation attracted by each DRep. We analyze both the game-theoretic aspects and the optimization counterpart of this model. Our primary focus is on selecting a committee that maximizes the probability of reaching the correct outcome, given a fixed monetary budget allocated for rewarding the delegates. Our findings provide insights into the design of effective reward mechanisms and optimal committee structures (i.e., how many DReps are enough) in these PoS-like governance systems.
Multi-winner approval-based voting has received considerable attention recently. A voting rule in this setting takes as input ballots in which each agent approves a subset of the available alternatives and outputs a committee of alternatives of given size $k$. We consider the scenario when a coalition of agents can act strategically and alter their ballots so that the new outcome is strictly better for a coalition member and at least as good for anyone else in the coalition. Voting rules that are robust against this strategic behaviour are called strongly group-strategyproof. We prove that, for $k \in \{1, 2, \ldots, m-2\}$, strongly group-strategyproof multi-winner approval-based voting rules which furthermore satisfy the minimum efficiency requirement of unanimity do not exist, where $m$ is the number of available alternatives. Our proof builds a connection to single-winner voting with ranking-based ballots and exploits the infamous Gibbard-Satterthwaite theorem to reach the desired impossibility result. Our result has implications for paradigmatic problems from the area of approximate mechanism design without money and indicates that strongly group-strategyproof mechanisms for minimax approval voting, variants of facility location, and classification can only have an unbounded approximation ratio.
.Since the seminal PPAD-completeness result for computing a Nash equilibrium even in two-player games, an important line of research has focused on relaxations achievable in polynomial time. In this paper, …
.Since the seminal PPAD-completeness result for computing a Nash equilibrium even in two-player games, an important line of research has focused on relaxations achievable in polynomial time. In this paper, we consider the notion of an \(\varepsilon\) -well-supported Nash equilibrium, where \(\varepsilon \in [0,1]\) corresponds to the approximation guarantee. Put simply, in an \(\varepsilon\) -well-supported equilibrium, every player chooses with positive probability actions that are within \(\varepsilon\) of the maximum achievable payoff against the other player's strategy. Ever since the initial approximation guarantee of 2/3 for well-supported equilibria, which was established more than a decade ago, the progress on this problem has been extremely slow and incremental. Notably, the small improvements to 0.6608, and finally to 0.6528, were achieved by algorithms of growing complexity. Our main result is a simple and intuitive algorithm that improves the approximation guarantee to 1/2. Our algorithm is based on linear programming and in particular on exploiting suitably defined zero-sum games that arise from the payoff matrices of the two players. As a byproduct, we show how to achieve the same approximation guarantee in a query-efficient way.KeywordsNash equilibriawell-supported Nash equilibriaquery complexitybimatrix gamesMSC codes68Q2591A05
Since the celebrated PPAD-completeness result for Nash equilibria in bimatrix games, a long line of research has focused on polynomial-time algorithms that compute ε-approximate Nash equilibria. Finding the best possible approximation guarantee achievable in polynomial time has been a fundamental and non-trivial pursuit in settling the complexity of approximate equilibria. Despite a significant amount of effort, the algorithm of Tsaknakis and Spirakis [38], with an approximation guarantee of (0.3393+δ), has remained the state of the art over the last 15 years. In this paper, we propose a new refinement of the Tsaknakis-Spirakis algorithm, resulting in a polynomial-time algorithm that computes a \((\frac{1}{3}+\delta)\)-Nash equilibrium, for any constant δ > 0. The main idea of our approach is to go beyond the use of convex combinations of primal and dual strategies, as defined in the optimization framework of [38], and to enrich the pool of strategies from which we build the strategy profiles that we output in certain bottleneck cases of the algorithm.
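For reference, the (weaker) notion targeted here, with payoff matrices $R$ and $C$ normalized to $[0,1]$ (our notation): a strategy pair $(x, y)$ is an $\varepsilon$-approximate Nash equilibrium if neither player can gain more than $\varepsilon$ by deviating, i.e.,
\[
x^{\top} R y \;\ge\; \max_{x'} x'^{\top} R y - \varepsilon,
\qquad
x^{\top} C y \;\ge\; \max_{y'} x^{\top} C y' - \varepsilon .
\]
Unlike a well-supported equilibrium, this only constrains the expected payoffs, not every action in the support.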
Budget-feasible procurement has been a major paradigm in mechanism design since its introduction by Singer (2010). An auctioneer (buyer) with a strict budget constraint is interested in buying goods or services from a group of strategic agents (sellers). In many scenarios it makes sense to allow the auctioneer to only partially buy what an agent offers, e.g., an agent might have multiple copies of an item to sell, they might offer multiple levels of a service, or they may be available to perform a task for any fraction of a specified time interval. Nevertheless, the focus of the related literature has been on settings where each agent's services are either fully acquired or not at all. The main reason for this is that in settings with partial allocations like the ones mentioned, there are strong inapproximability results. In this work, under the mild assumption of being able to afford each agent entirely, we are able to circumvent such results. We design a polynomial-time, deterministic, truthful, budget-feasible $(2+\sqrt{3})$-approximation mechanism for the setting where each agent offers multiple levels of service and the auctioneer has a discrete separable concave valuation function. We then use this result to design a deterministic, truthful and budget-feasible mechanism for the setting where any fraction of a service can be acquired and the auctioneer's valuation function is separable concave (i.e., the sum of concave functions). The approximation ratio of this mechanism depends on how `nice' the concave functions are, and is $O(1)$ for valuation functions that are sums of $O(1)$-regular functions (e.g., functions like $\log(1+x)$). For the special case of a linear valuation function, we improve the best known approximation ratio for the problem from $1+\phi$ (by Klumper & Sch\"afer (2022)) to $2$. This establishes a separation between this setting and its indivisible counterpart.
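In standard notation (ours, not necessarily the paper's), a separable concave valuation over fractional allocations, with $x_i \in [0,1]$ the acquired fraction of agent $i$'s service, has the form
\[
v(x_1,\dots,x_k) \;=\; \sum_{i=1}^{k} v_i(x_i), \qquad v_i \text{ concave},
\]
with $v_i(x) = \log(1+x)$, mentioned above, as a typical example.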
In recent years, the study of various models and questions related to Liquid Democracy has been of growing interest among the community of Computational Social Choice. A concern that has been raised is that the current academic literature focuses solely on static inputs, concealing a key characteristic of Liquid Democracy: a voter's right to change her mind as time goes by, regarding her choice of whether to vote herself or to delegate her vote to other participants, up until the final voting deadline. In real life, a period of extended deliberation preceding election day motivates voters to adapt their behaviour over time, based either on observations of the remaining electorate or on information acquired about the topic at hand. By adding a temporal dimension to Liquid Democracy, such adaptations can increase the number of possible delegation paths and reduce the loss of votes due to delegation cycles or delegation paths towards abstaining agents, ultimately enhancing participation. Our work takes a first step towards integrating a time horizon into decision-making problems in Liquid Democracy systems. Our approach, via a computational complexity analysis, exploits concepts and tools from temporal graph theory, which turn out to be convenient for our framework.
Our work studies the fair allocation of indivisible items to a set of agents, and falls within the scope of establishing improved approximation guarantees. It is well known by now that the classic solution concepts in fair division, such as envy-freeness and proportionality, fail to exist in the presence of indivisible items. Unfortunately, the lack of existence remains unresolved even for some relaxations of envy-freeness, most notably for the notion of EFX, which has attracted significant attention in the relevant literature. This in turn has motivated the quest for approximation algorithms, resulting in the currently best known $(\phi-1)$-approximation guarantee by Amanatidis et al. (2020), where $\phi$ is the golden ratio. So far, it has been notoriously hard to obtain any further advancements beyond this factor. Our main contribution is that we achieve better approximations for certain special cases, where the agents agree on their perception of some items in terms of their worth. In particular, we first provide an algorithm with a $2/3$-approximation when the agents agree on which items are the top $n$ (but not necessarily on their exact ranking), with $n$ being the number of agents. To do so, we also study a general framework that can be of independent interest for obtaining further guarantees.
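As a reminder of the notion being approximated (standard definitions): an allocation $(A_1,\dots,A_n)$ is EFX if for all agents $i, j$ and every good $g \in A_j$,
\[
v_i(A_i) \;\ge\; v_i(A_j \setminus \{g\}),
\]
and it is $\alpha$-EFX if $v_i(A_i) \ge \alpha \cdot v_i(A_j \setminus \{g\})$; the results above concern $\alpha = \phi - 1 \approx 0.618$ and $\alpha = 2/3$.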
We focus on a generalization of the classic Minisum approval voting rule, introduced by Barrot and Lang (2016), and referred to as Conditional Minisum (CMS), for multi-issue elections with preferential dependencies. Under this rule, voters are allowed to declare dependencies between different issues, but the price we have to pay for this higher level of expressiveness is that we end up with a computationally hard rule. Motivated by this, we first focus on finding special cases that admit efficient algorithms for CMS. Our main result in this direction is that we identify the condition of bounded treewidth (of an appropriate graph, emerging from the provided ballots) as the necessary and sufficient condition for exact polynomial algorithms, under common complexity assumptions. We then move to the design of approximation algorithms. For the (still hard) case of binary issues, we identify restrictions, under which we provide the first multiplicative approximation algorithms for the problem. The restrictions involve upper bounds on the number of dependencies an issue can have on the others and lower bounds on the cost of the optimal solution. Finally, we also investigate the complexity of problems related to the strategic control of conditional approval elections by adding or deleting either voters or alternatives and we show that in most variants of these problems, CMS is computationally resistant against control. Overall, we conclude that CMS can be viewed as a solution with a satisfactory tradeoff between expressiveness and computational efficiency, when we have a limited number of dependencies among issues, while at the same time exhibiting sufficient resistance to control.
We study game-theoretic models for capturing participation in blockchain systems. Permissionless blockchains can be naturally viewed as games, where a set of potentially interested users is faced with the dilemma of whether to engage with the protocol or not. Engagement here implies that the user will be asked to complete certain tasks whenever they are selected to contribute (typically according to some stochastic process) and will be rewarded if they choose to do so. Apart from the basic dilemma of engaging or not, even more strategic considerations arise in settings where users may be able to declare participation and then retract before completing their tasks (while still being able to receive rewards), or are rewarded independently of whether they contribute. Such variations occur naturally in the blockchain setting due to the complexity of tracking ``on-chain'' the behavior of the participants. We capture these participation considerations by offering a series of models that enable us to reason about the basic dilemma, the case where retraction effects influence the outcome, and the case where payments are given universally, irrespective of the stochastic process. In all cases we provide characterization results or necessary conditions on the structure of Nash equilibria. Our findings reveal that appropriate reward mechanisms can be used to stimulate participation and avoid the negative effects of free riding; these results are in line with, but can also inform, real-world blockchain system deployments.
Our work focuses on extra gradient learning algorithms for finding Nash equilibria in bilinear zero-sum games. The proposed method, which can be formally considered as a variant of Optimistic Mirror Descent \cite{DBLP:conf/iclr/MertikopoulosLZ19}, uses a large learning rate for the intermediate gradient step, which essentially leads to computing (approximate) best response strategies against the profile of the previous iteration. Although counter-intuitive at first sight, due to the unusually large intermediate learning step for an iterative algorithm, we prove that the method guarantees last-iterate convergence to an equilibrium. In particular, we show that the algorithm first reaches an $\eta^{1/\rho}$-approximate Nash equilibrium, with $\rho > 1$, by decreasing the Kullback-Leibler divergence of each iterate by at least $\Omega(\eta^{1+\frac{1}{\rho}})$, for a sufficiently small learning rate $\eta$, until the method becomes a contracting map and converges to the exact equilibrium. Furthermore, we perform experimental comparisons with the optimistic variant of the multiplicative weights update method of \cite{Daskalakis2019LastIterateCZ}, and show that our algorithm has significant practical potential, since it offers substantial gains in terms of accelerated convergence.
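A minimal NumPy sketch of this kind of two-step (extra gradient) multiplicative-weights dynamics; the step sizes, horizon, and the matching-pennies test game are hypothetical choices for illustration, not the paper's exact parameters.

import numpy as np

def softmax(u):
    z = u - u.max()          # numerically stable
    e = np.exp(z)
    return e / e.sum()

def extragradient_mwu(A, T=5000, eta=0.05, eta_mid=10.0):
    # Row player maximizes x^T A y, column player minimizes it.
    # The large intermediate rate eta_mid makes the intermediate iterate an
    # approximate best response to the previous profile, as described above.
    m, n = A.shape
    logx, logy = np.zeros(m), np.zeros(n)
    for _ in range(T):
        x, y = softmax(logx), softmax(logy)
        x_mid = softmax(logx + eta_mid * (A @ y))      # ~ best response to y
        y_mid = softmax(logy - eta_mid * (A.T @ x))    # ~ best response to x
        logx += eta * (A @ y_mid)                      # update with gradients at the lookahead point
        logy -= eta * (A.T @ x_mid)
    return softmax(logx), softmax(logy)

# Matching pennies: the unique equilibrium is uniform for both players.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
x, y = extragradient_mwu(A)
print(np.round(x, 3), np.round(y, 3))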
We propose a novel variant of the \emph{multiplicative weights update method} with forward-looking best-response strategies that guarantees last-iterate convergence for \emph{zero-sum games} with a unique \emph{Nash equilibrium}. In particular, we show that the proposed algorithm converges to an $\eta^{1/\rho}$-approximate Nash equilibrium, with $\rho > 1$, by decreasing the Kullback-Leibler divergence of each iterate by a rate of at least $\Omega(\eta^{1+\frac{1}{\rho}})$, for a sufficiently small learning rate $\eta$. When our method enters a sufficiently small neighborhood of the solution, it becomes a contraction and converges to the Nash equilibrium of the game. Furthermore, we perform an experimental comparison with the recently proposed optimistic variant of the multiplicative weights update method of \cite{Daskalakis2019LastIterateCZ}, which has also been proved to attain last-iterate convergence. Our findings reveal that our algorithm offers substantial gains both in terms of the convergence rate and the region of contraction, relative to the previous approach.
We focus on a generalization of the classic Minisum approval voting rule, introduced by Barrot and Lang (2016), and referred to as Conditional Minisum (CMS), for multi-issue elections with preferential dependencies. Under this rule, voters are allowed to declare dependencies between different issues, but the price we have to pay for this higher level of expressiveness is that we end up with a computationally hard rule. Motivated by this, we first focus on finding special cases that admit efficient algorithms for CMS. Our main result in this direction is that we identify the condition of bounded treewidth (of an appropriate graph, emerging from the provided ballots) as the necessary and sufficient condition for exact polynomial algorithms, under common complexity assumptions. We then move to the design of approximation algorithms. For the (still hard) case of binary issues, we identify natural restrictions on the voters' ballots, under which we provide the first multiplicative approximation algorithms for the problem. The restrictions involve upper bounds on the number of dependencies an issue can have on the others and on the number of alternatives per issue that a voter can approve. Finally, we also investigate the complexity of problems related to the strategic control of conditional approval elections by adding or deleting either voters or alternatives and we show that in most variants of these problems, CMS is computationally resistant against control. Overall, we conclude that CMS can be viewed as a solution that achieves a satisfactory tradeoff between expressiveness and computational efficiency, when we have a limited number of dependencies among issues, while at the same time exhibiting sufficient resistance to control.
Several relaxations of envy-freeness, tailored to fair division in settings with indivisible goods, have been introduced within the last decade. Due to the lack of general existence results for most of these concepts, great attention has been paid to establishing approximation guarantees. In this work, we propose a simple algorithm that is universally fair in the sense that it returns allocations that have good approximation guarantees with respect to four such fairness notions at once. In particular, this is the first algorithm achieving a (φ−1)-approximation of envy-freeness up to any good (EFX) and a 2/(φ+2)-approximation of groupwise maximin share fairness (GMMS), where φ is the golden ratio. The best known approximation factor, in polynomial time, for either one of these fairness notions prior to this work was 1/2. Moreover, the returned allocation achieves envy-freeness up to one good (EF1) and a 2/3-approximation of pairwise maximin share fairness (PMMS). While EFX is our primary focus, we also exhibit how to fine-tune our algorithm and improve further the guarantees for GMMS or PMMS. Finally, we show that GMMS—and thus PMMS and EFX—allocations always exist when the number of goods does not exceed the number of agents by more than two.
We study mechanism design for combinatorial cost sharing models. Imagine that multiple items or services are available to be shared among a set of interested agents. The outcome of a mechanism in this setting consists of an assignment, determining for each item the set of players who are granted service, together with respective payments. Although there are several works studying specialized versions of such problems, there has been almost no progress for general combinatorial cost sharing domains until recently [7]. Still, many questions about the interplay between strategyproofness, cost recovery and economic efficiency remain unanswered. The main goal of our work is to further understand this interplay in terms of budget balance and social cost approximation. Towards this, we provide a refinement of cross-monotonicity (which we term trace-monotonicity) that is applicable to iterative mechanisms. The trace here refers to the order in which players become finalized. On top of this, we also provide two parameterizations (complementary to a certain extent) of cost functions which capture the behavior of their average cost-shares. Based on our trace-monotonicity property, we design a scheme of ascending cost sharing mechanisms which is applicable to the combinatorial cost sharing setting with symmetric submodular valuations. Using our first cost function parameterization, we identify conditions under which our mechanism is weakly group-strategyproof, O(1)-budget-balanced and O(H_n)-approximate with respect to the social cost. Further, we show that our mechanism is budget-balanced and H_n-approximate if both the valuations and the cost functions are symmetric submodular; given existing impossibility results, this is best possible. Finally, we consider general valuation functions and exploit our second parameterization to derive a more fine-grained analysis of the Sequential Mechanism introduced by Moulin. This mechanism is budget balanced by construction, but in general only guarantees a poor social cost approximation of n. We identify conditions under which the mechanism achieves improved social cost approximation guarantees. In particular, we derive improved mechanisms for fundamental cost sharing problems, including Vertex Cover and Set Cover.
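For reference, the standard cross-monotonicity property that trace-monotonicity refines: a cost-sharing scheme $\xi$ is cross-monotonic if a player's cost share can only decrease as the serviced set grows, i.e.,
\[
i \in S \subseteq T \;\Longrightarrow\; \xi_i(S) \;\ge\; \xi_i(T) .
\]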
We consider the classic cake-cutting problem of producing envy-free allocations, restricted to the case of four agents. The problem asks for a partition of the cake among four agents, so that every agent finds her piece at least as valuable as every other agent's piece. The problem has had an interesting history so far. Although the case of three agents is solvable with fewer than 15 queries, for four agents no bounded procedure was known until the recent breakthroughs of Aziz and Mackenzie (STOC 2016, FOCS 2016). The main drawback of these new algorithms, however, is that they are quite complicated, with a very high query complexity. With four agents, the number of queries required is close to 600. In this work we provide an improved algorithm for four agents, which reduces the current complexity by a factor of 3.4. Our algorithm builds on the approach of Aziz and Mackenzie (STOC 2016) by incorporating new insights and simplifying several steps. Overall, this yields an easier-to-grasp procedure with lower complexity.
In fair division problems with indivisible goods it is well known that one cannot have any guarantees for the classic fairness notions of envy-freeness and proportionality. As a result, several relaxations have been introduced, most of which in quite recent works. We focus on four such notions, namely envy-freeness up to one good (EF1), envy-freeness up to any good (EFX), maximin share fairness (MMS), and pairwise maximin share fairness (PMMS). Since obtaining these relaxations also turns out to be problematic in several scenarios, approximate versions of them have been considered. In this work, we investigate further the connections between the four notions mentioned above and their approximate versions. We establish several tight, or almost tight, results concerning the approximation quality that any of these notions guarantees for the others, providing an almost complete picture of this landscape. Some of our findings reveal interesting and surprising consequences regarding the power of these notions, e.g., PMMS and EFX provide the same worst-case guarantee for MMS, despite PMMS being a strictly stronger notion than EFX. We believe such implications provide further insight on the quality of approximately fair solutions.
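A small self-contained checker for the first two of these notions under additive valuations (the function name and data layout are illustrative; for simplicity, EFX is checked by removing any single good):

def ef1_efx_status(valuations, allocation):
    # valuations[i][g]: agent i's (nonnegative) value for good g
    # allocation[i]: list of goods held by agent i
    n = len(allocation)
    value = lambda a, bundle: sum(valuations[a][g] for g in bundle)
    ef1 = efx = True
    for i in range(n):
        for j in range(n):
            if i == j or not allocation[j]:
                continue
            own = value(i, allocation[i])
            # value of agent j's bundle after removing each single good g
            drops = [value(i, allocation[j]) - valuations[i][g] for g in allocation[j]]
            if own < min(drops):   # removing even the best good does not remove envy
                ef1 = False
            if own < max(drops):   # removing the least valuable good must suffice
                efx = False
    return ef1, efx

vals = [[5, 3, 1], [2, 4, 4]]                # two agents, three goods
print(ef1_efx_status(vals, [[0], [1, 2]]))   # (True, True): envy-free, hence EF1 and EFX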
We study the mechanism design problem of allocating a set of indivisible items without monetary transfers. Despite the vast literature on this very standard model, it still remains unclear what truthful mechanisms look like. We focus on the case of two players with additive valuation functions, and our purpose is twofold. First, our main result provides a complete characterization of truthful mechanisms that allocate all the items to the players. Our characterization reveals an interesting structure underlying all truthful mechanisms, showing that they can be decomposed into two components: a selection part, where players pick their best subset among prespecified choices determined by the mechanism, and an exchange part, where players are offered the chance to exchange certain subsets if it is favorable to do so. In the remainder of the paper, we apply our main result and derive several consequences for the design of mechanisms with fairness guarantees. We consider various notions of fairness (indicatively, maximin share guarantees and envy-freeness up to one item) and provide tight bounds for their approximability. Our work settles some of the open problems in this agenda, and we conclude by discussing possible extensions to more players.
We study a class of procurement auctions with a budget constraint, where an auctioneer is interested in buying resources or services from a set of agents. Ideally, the auctioneer would like to select a subset of the resources so as to maximize his valuation function, without exceeding a given budget. As the resources are owned by strategic agents, however, our overall goal is to design mechanisms that are truthful, budget-feasible, and obtain a good approximation to the optimal value. Budget-feasibility creates additional challenges, making several approaches inapplicable in this setting. Previous results on budget-feasible mechanisms have considered mostly monotone valuation functions. In this work, we mainly focus on symmetric submodular valuations, a prominent class of non-monotone submodular functions that includes cut functions. We begin with a purely algorithmic result, obtaining a $\frac{2e}{e-1}$-approximation for maximizing symmetric submodular functions under a budget constraint. We view this as a standalone result of independent interest, as it is the best known factor achieved by a deterministic algorithm. We then proceed to propose truthful, budget-feasible mechanisms (both deterministic and randomized), paying particular attention to the Budgeted Max Cut problem. Our results significantly improve the known approximation ratios for these objectives, while establishing polynomial running time for cases where only exponential mechanisms were known. At the heart of our approach lies an appropriate combination of local search algorithms with results for monotone submodular valuations, applied to the derived local optima.
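For reference (standard definitions): a set function $f$ on ground set $V$ is submodular if
\[
f(S \cup \{x\}) - f(S) \;\ge\; f(T \cup \{x\}) - f(T) \quad \text{for all } S \subseteq T \subseteq V,\; x \notin T,
\]
and symmetric if $f(S) = f(V \setminus S)$ for all $S \subseteq V$. The cut function of a graph, with $f(S)$ the number (or total weight) of edges between $S$ and $V \setminus S$, satisfies both, which is why Budgeted Max Cut fits this framework.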
We study a class of reverse (procurement) auction problems in the presence of budget constraints. The general algorithmic problem is to purchase a set of resources, which come at a cost, so as not to exceed a given budget while maximizing a given valuation function. This framework captures the budgeted version of several well-known optimization problems, and when the resources are owned by strategic agents the goal is to design truthful and budget feasible mechanisms, i.e., to elicit the true cost of the resources and ensure that the payments of the mechanism do not exceed the budget. Budget feasibility introduces more challenges in mechanism design, and we study instantiations of this problem for certain classes of submodular and XOS valuation functions. We first obtain mechanisms with an improved approximation ratio for weighted coverage valuations, a special class of submodular functions that has already attracted attention in previous works. We then provide a general scheme for designing randomized and deterministic polynomial time mechanisms for a class of XOS problems. This class contains problems whose feasible set forms an independence system (a more general structure than matroids), and some representative problems include, among others, finding maximum weighted matchings, maximum weighted matroid members, and maximum weighted 3D-matchings. For most of these problems, only randomized mechanisms with very high approximation ratios were known prior to our results.
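For reference, the standard definition of the XOS (fractionally subadditive) class mentioned above: $f$ is XOS if it is a pointwise maximum of additive functions, i.e.,
\[
f(S) \;=\; \max_{1 \le j \le r} \sum_{i \in S} w_{ij}, \qquad w_{ij} \ge 0 .
\]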
We study a revenue maximization problem in the context of social networks. Namely, we consider a model introduced by Alon, Mansour, and Tennenholtz (EC 2013) that captures inequity aversion, i.e., prices offered to neighboring vertices should not be significantly different. We first provide approximation algorithms for a natural class of instances, referred to as the class of single-value revenue functions. Our results improve on the current state of the art, especially when the number of distinct prices is small. This applies, for example, to settings where the seller will only consider a fixed number of discount types or special offers. We then resolve one of the open questions posed in Alon et al., by establishing APX-hardness for the problem. Surprisingly, we further show that the problem is NP-complete even when the price differences are allowed to be large, or even when the number of allowed distinct prices is as small as three. Finally, we provide some extensions of the model, regarding either the allowed set of prices or the demand type of the clients.
We study a fair division problem with indivisible items, namely the computation of maximin share allocations. Given a set of $n$ players, the maximin share of a single player is the best she can guarantee to herself, if she would partition the items in any way she prefers, into $n$ bundles, and then receive her least desirable bundle. The objective then is to find an allocation, so that each player is guaranteed her maximin share. Previous works have studied this problem mostly algorithmically, providing constant factor approximation algorithms. In this work we embark on a mechanism design approach and investigate the existence of truthful mechanisms. We propose three models regarding the information that the mechanism attempts to elicit from the players, based on the cardinal and ordinal representation of preferences. We establish positive and negative (impossibility) results for each model and highlight the limitations imposed by truthfulness on the approximability of the problem. Finally, we pay particular attention to the case of two players, which already leads to challenging questions.
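In symbols, the maximin share defined above is
\[
\mathrm{MMS}_i \;=\; \max_{(B_1,\dots,B_n) \in \Pi_n(M)} \; \min_{1 \le k \le n} v_i(B_k),
\]
where $\Pi_n(M)$ denotes the set of partitions of the item set $M$ into $n$ bundles.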
We present a systematic study of Plurality elections with strategic voters who, in addition to having preferences over election winners, have secondary preferences, which govern their behavior when their vote cannot affect the election outcome. Specifically, we study two models that have been recently considered in the literature: lazy voters, who prefer to abstain when they are not pivotal, and truth-biased voters, who prefer to vote truthfully when they are not pivotal. We extend prior work by investigating the behavior of both lazy and truth-biased voters under different tie-breaking rules (lexicographic rule, random voter rule, random candidate rule). Two of these six combinations of secondary preferences and a tie-breaking rule have been studied in prior work. In order to understand the impact of different secondary preferences and tie-breaking rules on the election outcomes, we study the remaining four combinations. We characterize pure Nash equilibria (PNE) of the resulting strategic games and study the complexity of related computational problems. Our results extend to settings where some of the voters may be non-strategic.
Budget feasible mechanism design studies procurement combinatorial auctions in which the sellers have private costs to produce items, and the buyer (auctioneer) aims to maximize a social valuation function on subsets of items, under a budget constraint on the total payment. One of the most important questions in the field is: which valuation domains admit truthful budget feasible mechanisms with 'small' approximations (compared to the social optimum)? Singer [35] showed that additive and submodular functions admit a constant-factor approximation mechanism. Recently, Dobzinski, Papadimitriou, and Singer [20] gave an O(log^2 n)-approximation mechanism for subadditive functions; further, they remarked that a fundamental question is whether, regardless of computational constraints, a constant-factor budget feasible mechanism exists for subadditive functions. In this paper, we address this question from two viewpoints: prior-free worst-case analysis and Bayesian analysis, which are the two standard approaches from computer science and economics, respectively. For the prior-free framework, we use a linear program (LP) that describes the fractional cover of the valuation function; the LP is also connected to the concept of the approximate core in cooperative game theory. We provide a mechanism for subadditive functions whose approximation is O(I), where I is the worst-case integrality gap of this LP. This implies an O(log n)-approximation for subadditive valuations, and an O(1)-approximation for XOS valuations, as well as for valuations having a constant integrality gap. XOS valuations are an important class of functions and lie between the submodular and the subadditive classes of valuations. We further give another polynomial-time, sub-logarithmic O(log n / log log n)-approximation mechanism for subadditive functions. Both of our mechanisms improve on the best known approximation ratio of O(log^2 n). For the Bayesian framework, we provide a constant-factor approximation mechanism for all subadditive functions, using the above prior-free mechanism for XOS valuations as a subroutine. Our mechanism allows correlations in the distribution of private information and is universally truthful.
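One standard way to write such a fractional-cover LP for a valuation $v$ over an item set $N$ (the paper's exact formulation may differ) is
\[
\min \sum_{S \subseteq N} \alpha_S \, v(S)
\quad \text{s.t.} \quad
\sum_{S \ni i} \alpha_S \ge 1 \;\; \forall i \in N, \qquad \alpha_S \ge 0 ,
\]
with the integrality gap $I$ measuring how much cheaper fractional covers can be than integral ones in the worst case.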
We study a novel class of mechanism design problems in which the outcomes are constrained by the payments. This basic class of mechanism design problems captures many common economic situations, and yet it has not been studied, to our knowledge, in the past. We focus on the case of procurement auctions in which sellers have private costs, and the auctioneer aims to maximize a utility function on subsets of items, under the constraint that the sum of the payments provided by the mechanism does not exceed a given budget. Standard mechanism design ideas such as the VCG mechanism and its variants are not applicable here. We show that, for general functions, the budget constraint can render mechanisms arbitrarily bad in terms of the utility of the buyer. However, our main result shows that for the important class of submodular functions, a bounded approximation ratio is achievable. Better approximation results are obtained for subclasses of submodular functions. We explore the space of budget feasible mechanisms in other domains and give a characterization under more restricted conditions.
It is proved that Groves’ scheme is unique on restricted domains which are smoothly connected, in particular convex domains. This generalizes earlier uniqueness results by Green and Laffont and Walker. An example shows that uniqueness may be lost if the domain is not smoothly connected.
We study the mechanism design problem of allocating a set of indivisible items without monetary transfers. Despite the vast literature on this very standard model, it still remains unclear what truthful mechanisms look like. We focus on the case of two players with additive valuation functions and our purpose is twofold. First, our main result provides a complete characterization of truthful mechanisms that allocate all the items to the players. Our characterization reveals an interesting structure underlying all truthful mechanisms, showing that they can be decomposed into two components: a selection part where players pick their best subset among prespecified choices determined by the mechanism, and an exchange part where players are offered the chance to exchange certain subsets if it is favorable to do so. In the remainder of the paper, we apply our main result and derive several consequences on the design of mechanisms with fairness guarantees. We consider various notions of fairness (indicatively, maximin share guarantees and envy-freeness up to one item) and provide tight bounds for their approximability. Our work settles some of the open problems in this agenda, and we conclude by discussing possible extensions to more players.
A central issue in applying auction theory in practice is the problem of dealing with budget-constrained agents. A desirable goal in practice is to design incentive compatible, individually rational, and Pareto optimal auctions while respecting the budget constraints. Achieving this goal is particularly challenging in the presence of nontrivial combinatorial constraints over the set of feasible allocations. Toward this goal, and motivated by AdWords auctions, we present an auction for polymatroidal environments satisfying the above properties. Our auction employs a novel clinching technique with a clean geometric description and only needs oracle access to the submodular function defining the polymatroid. As a result, this auction not only simplifies and generalizes all previous results, but also applies to several new applications, including AdWords auctions, bandwidth markets, and video on demand. In particular, our characterization of the AdWords auction as polymatroidal constraints might be of independent interest. This allows us to design the first mechanism for ad auctions that simultaneously takes into account budgets, multiple keywords, and multiple slots.
We focus on a generalization of the classic Minisum approval voting rule, introduced by Barrot and Lang (2016), and referred to as Conditional Minisum (CMS), for multi-issue elections with preferential dependencies. Under this rule, voters are allowed to declare dependencies between different issues, but the price we have to pay for this higher level of expressiveness is that we end up with a computationally hard rule. Motivated by this, we first focus on finding special cases that admit efficient algorithms for CMS. Our main result in this direction is that we identify the condition of bounded treewidth (of an appropriate graph, emerging from the provided ballots) as the necessary and sufficient condition for exact polynomial-time algorithms, under common complexity assumptions. We then move to the design of approximation algorithms. For the (still hard) case of binary issues, we identify natural restrictions on the voters' ballots, under which we provide the first multiplicative approximation algorithms for the problem. The restrictions involve upper bounds on the number of dependencies an issue can have on the others and on the number of alternatives per issue that a voter can approve. Finally, we also investigate the complexity of problems related to the strategic control of conditional approval elections by adding or deleting either voters or alternatives, and we show that in most variants of these problems, CMS is computationally resistant against control. Overall, we conclude that CMS can be viewed as a solution that achieves a satisfactory tradeoff between expressiveness and computational efficiency, when we have a limited number of dependencies among issues, while at the same time exhibiting sufficient resistance to control.
Budget feasible mechanisms, recently initiated by Singer (FOCS 2010), extend algorithmic mechanism design problems to a realistic setting with a budget constraint. We consider the problem of designing truthful budget feasible mechanisms for monotone submodular functions: we give a randomized mechanism with an approximation ratio of 7.91 (improving on the previous best-known result of 233.83), and a deterministic mechanism with an approximation ratio of 8.34. We also study the knapsack problem, which is a special case of submodular functions, and give a (2 + √2)-approximation deterministic mechanism (improving on the previous best-known result of 5), and a 3-approximation randomized mechanism. We provide similar results for an extended knapsack problem with heterogeneous items, where items are divided into groups and one can pick at most one item from each group. Finally, we show a lower bound of 1 + √2 on the approximation ratio of deterministic mechanisms and 2 for randomized mechanisms for knapsack, as well as for general monotone submodular functions. Our lower bounds are unconditional, and do not rely on any computational or complexity assumptions.
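For intuition about how budget feasibility is typically achieved in this line of work, here is a sketch (ours) of the classic proportional-share allocation rule for knapsack, assuming public values and privately reported costs; it is not the paper's exact (2 + √2)-mechanism, and the truth-inducing threshold payments are omitted:

    def proportional_share_winners(values, costs, budget):
        """Sketch of the proportional-share allocation rule for knapsack:
        scan agents in increasing cost-per-value order, adding agent i while
        her cost stays below her proportional share of the budget."""
        order = sorted(range(len(values)), key=lambda i: costs[i] / values[i])
        winners, total_value = [], 0.0
        for i in order:
            if costs[i] <= budget * values[i] / (total_value + values[i]):
                winners.append(i)
                total_value += values[i]
            else:
                break
        return winners

Paying each winner i her proportional share B·v_i / v(winners) keeps the total payment at most B, which is the budget feasibility property the mechanisms above must maintain.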
We consider the well-studied cake cutting problem in which the goal is to identify an envy-free allocation based on a minimal number of queries from the agents. The problem has attracted considerable attention within various branches of computer science, mathematics, and economics. Although the elegant Selfridge-Conway envy-free protocol for three agents has been known since 1960, it has been a major open problem to obtain a bounded envy-free protocol for more than three agents. The problem has been termed the central open problem in cake cutting. We solve this problem by proposing a discrete and bounded envy-free protocol for four agents.
We consider a general class of Bayesian games where each player's utility depends on his type (possibly multidimensional) and on the strategy profile, and where players' types are distributed independently. We show that if the full information version of the game, for any fixed instance of the type profile, is a smooth game, then the Price of Anarchy bound implied by the smoothness property carries over to the Bayes-Nash Price of Anarchy. We show how some proofs from the literature (item bidding auctions, greedy auctions) can be cast as smoothness proofs or be simplified using smoothness. For first price item bidding with fractionally subadditive bidders we substantially improve the existing bound of 4 from \cite{Hassidim2011a} to $\frac{e}{e-1}\approx 1.58$. This also shows a very interesting separation between first and second price item bidding, since second price item bidding has PoA at least 2 even under complete information. For a larger class of Bayesian games, where the strategy space of a player also changes with his type, we are able to show that a slightly stronger definition of smoothness also implies a Bayes-Nash PoA bound. We show that weighted congestion games satisfy this stronger definition of smoothness. This allows us to show that the inefficiency bounds for weighted congestion games known in the literature carry over to incomplete-information versions where the weights of the players are private information. We also show how an incomplete-information version of a natural class of monotone valid utility games, called effort market games, is universally $(1,1)$-smooth. Hence, incomplete-information versions of effort market games where the abilities of the players and their budgets are private information have Bayes-Nash PoA at most 2.
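For reference, a standard way to state the smoothness condition of Roughgarden that this abstract builds on (our notation; details vary by setting): a utility-maximization game is $(\lambda, \mu)$-smooth if there exists a strategy profile $a^*$ such that for every profile $a$,

$$\sum_i u_i(a^*_i, a_{-i}) \ \ge\ \lambda\, W(a^*) \ -\ \mu\, W(a),$$

where $W$ denotes social welfare. When $W(a) \ge \sum_i u_i(a)$, every Nash equilibrium $a$ then satisfies $W(a) \ge \frac{\lambda}{1+\mu} W(a^*)$, and the same bound extends to learning outcomes and, by the result above, to Bayes-Nash equilibria.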
On the Approximability of Budget Feasible Mechanisms. Ning Chen, Nick Gravin, and Pinyan Lu. In Proceedings of the 2011 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 685-699. DOI: https://doi.org/10.1137/1.9781611973082.54.
Internet search companies sell advertisement slots based on users' search queries via an auction. While there has been previous work on the auction process and its game-theoretic aspects, most of it focuses on the Internet company. In this work, we focus on the advertisers, who must solve a complex optimization problem to decide how to place bids on keywords to maximize their return (the number of user clicks on their ads) for a given budget. We model the entire process and study this budget optimization problem. While most variants are NP-hard, we show, perhaps surprisingly, that simply randomizing between two uniform strategies that bid equally on all the keywords works well. More precisely, this strategy gets at least a 1-1/e fraction of the maximum clicks possible. As our preliminary experiments show, such uniform strategies are likely to be practical. We also present inapproximability results, and optimal algorithms for variants of the budget optimization problem.
We develop a game-theoretic framework for the study of competition between firms who have budgets to "seed" the initial adoption of their products by consumers located in a social network. The payoffs to the firms are the eventual number of adoptions of their product through a competitive stochastic diffusion process in the network. This framework yields a rich class of competitive strategies, which depend in subtle ways on the stochastic dynamics of adoption, the relative budgets of the players, and the underlying structure of the social network.
Coalition formation is a fundamental type of interaction that involves the creation of coherent groupings of distinct, autonomous agents in order to efficiently achieve their individual or collective goals. Forming effective coalitions is a major research challenge in the field of multi-agent systems. Central to this endeavour is the problem of determining which of the many possible coalitions to form in order to achieve some goal. This usually requires calculating a value for every possible coalition, known as the coalition value, which indicates how beneficial that coalition would be if it was formed. Once these values are calculated, the agents usually need to find a combination of coalitions in which every agent belongs to exactly one coalition, and by which the overall outcome of the system is maximized. However, this coalition structure generation problem is extremely challenging due to the number of possible solutions that need to be examined, which grows exponentially with the number of agents involved. To date, therefore, many algorithms have been proposed to solve this problem using different techniques, ranging from dynamic programming to integer programming to stochastic search, all of which suffer from major limitations relating to execution time, solution quality, and memory requirements. With this in mind, we develop an anytime algorithm to solve the coalition structure generation problem. Specifically, the algorithm uses a novel representation of the search space, which partitions the space of possible solutions into sub-spaces such that it is possible to compute upper and lower bounds on the values of the best coalition structures in them. These bounds are then used to identify the sub-spaces that have no potential of containing the optimal solution so that they can be pruned. The algorithm then searches through the remaining sub-spaces very efficiently using a branch-and-bound technique to avoid examining all the solutions within the searched subspace(s). In this setting, we prove that our algorithm enumerates all coalition structures efficiently by avoiding redundant and invalid solutions automatically. Moreover, in order to effectively test our algorithm we develop a new type of input distribution which allows us to generate more reliable benchmarks compared to the input distributions previously used in the field. Given this new distribution, we show that for 27 agents our algorithm is able to find solutions that are optimal in 0.175% of the time required by the fastest available algorithm in the literature. The algorithm is anytime, and if interrupted before it would have normally terminated, it can still provide a solution that is guaranteed to be within a bound from the optimal one. Moreover, the guarantees we provide on the quality of the solution are significantly better than those provided by the previous state-of-the-art algorithms designed for this purpose. For example, for the worst case distribution given 25 agents, our algorithm is able to find a 90% efficient solution in around 10% of the time it takes to find the optimal solution.
We initiate the study of efficient mechanism design with guaranteed good properties even when players participate in multiple mechanisms simultaneously or sequentially. We define the class of smooth mechanisms, related to smooth games defined by Roughgarden, that can be thought of as mechanisms that generate approximately market clearing prices. We show that smooth mechanisms result in high-quality outcomes both in equilibrium and in learning outcomes in the full information setting, as well as in Bayesian equilibrium with uncertainty about participants. Our main result is to show that smooth mechanisms compose well: smoothness locally at each mechanism implies global efficiency.
The Generalized Second Price (GSP) auction is the primary auction used for monetizing the use of the Internet. It is well-known that truthtelling is not a dominant strategy in this auction and that inefficient equilibria can arise. In this paper we study the space of equilibria in GSP, and quantify the efficiency loss that can arise in equilibria under a wide range of sources of uncertainty, as well as in the full information setting. The traditional Bayesian game models uncertainty in the valuations (types) of the participants. The Generalized Second Price (GSP) auction gives rise to a further form of uncertainty: the selection of quality factors, resulting in uncertainty about the behavior of the underlying ad allocation algorithm. The bounds we obtain apply to both forms of uncertainty, and are robust in the sense that they apply under various perturbations of the solution concept, extending to models with information asymmetries and bounded rationality in the form of learning strategies.
We present a constant bound (2.927) on the factor of the efficiency loss (\emph{price of anarchy}) of the corresponding game for the Bayesian model of partial information about other participants and about ad quality factors. For the full information setting, we prove a surprisingly low upper bound of 1.282 on the price of anarchy over pure Nash equilibria, nearly matching a lower bound of 1.259 for the case of three advertisers. Further, we do not require that the system reaches equilibrium, and give similarly low bounds also on the quality degradation for any no-regret learning outcome. Our conclusion is that the number of advertisers in the auction has almost no impact on the price of anarchy, and that the efficiency of GSP is very robust with respect to the belief and rationality assumptions imposed on the participants.
Recently, we introduced in arXiv:1105.2434 a model for product adoption in social networks with multiple products, where the agents, influenced by their neighbours, can adopt one out of several alternatives. We identify and analyze here four types of paradoxes that can arise in these networks. To this end, we use social network games that we recently introduced in arxiv:1202.2209. These paradoxes shed light on possible inefficiencies arising when one modifies the sets of products available to the agents forming a social network. One of the paradoxes corresponds to the well-known Braess paradox in congestion games and shows that by adding more choices to a node, the network may end up in a situation that is worse for everybody. We exhibit a dual version of this, where removing available choices from someone can eventually make everybody better off. The other paradoxes that we identify show that adding or removing a product from the choice set of some node may lead to permanent instability. Finally, we also identify conditions under which some of these paradoxes cannot arise.
We study the computational complexity of candidate control in elections with few voters, that is, we consider the parameterized complexity of candidate control in elections with respect to the number of voters as a parameter. We consider both the standard scenario of adding and deleting candidates, where one asks whether a given candidate can become a winner (or, in the destructive case, can be precluded from winning) by adding or deleting few candidates, as well as a combinatorial scenario where adding/deleting a candidate automatically means adding or deleting a whole group of candidates. Considering several fundamental voting rules, our results show that the parameterized complexity of candidate control, with the number of voters as the parameter, is much more varied than in the setting with many voters.
In this paper we consider a mechanism design problem in the context of large-scale crowdsourcing markets such as Amazon's Mechanical Turk, ClickWorker, and CrowdFlower. In these markets, there is a requester who wants to hire workers to accomplish some tasks. Each worker is assumed to give some utility to the requester on getting hired. Moreover, each worker has a minimum cost that he wants to get paid for getting hired. This minimum cost is assumed to be private information of the workers. The question then is: if the requester has a limited budget, how should one design a direct revelation mechanism that picks the right set of workers to hire in order to maximize the requester's utility? We note that although previous work (Singer (2010), Chen et al. (2011)) has studied this problem, a crucial difference in which we deviate from earlier work is the notion of large-scale markets that we introduce in our model. Without the large market assumption, it is known that no mechanism can achieve a competitive ratio better than 0.414 and 0.5 for deterministic and randomized mechanisms respectively (while the best known deterministic and randomized mechanisms achieve an approximation ratio of 0.292 and 0.33 respectively). In this paper, we design a budget-feasible mechanism for large markets that achieves a competitive ratio of 1 - 1/e ≃ 0.63. Our mechanism can be seen as a generalization of an alternate way to look at the proportional share mechanism, which is used in all the previous works so far on this problem. Interestingly, we can also show that our mechanism is optimal by showing that no truthful mechanism can achieve a factor better than 1 - 1/e, thus fully resolving this setting. Finally, we consider the more general case of submodular utility functions and give new and improved mechanisms for the case when the market is large.
We present a new, distributed method to compute approximate Nash equilibria in bimatrix games. In contrast to previous approaches that analyze the two payoff matrices at the same time (for example, by solving a single LP that combines the two players' payoffs), our algorithm first solves two independent LPs, each of which is derived from one of the two payoff matrices, and then computes an approximate Nash equilibrium using only limited communication between the players. Our method gives improved bounds on the complexity of computing approximate Nash equilibria in a number of different settings. Firstly, it gives a polynomial-time algorithm for computing approximate well supported Nash equilibria (WSNE) that always finds a 0.6528-WSNE, beating the previous best guarantee of 0.6608. Secondly, since our algorithm solves the two LPs separately, it can be applied to give an improved bound in the limited communication setting, giving a randomized expected-polynomial-time algorithm that uses poly-logarithmic communication and finds a 0.6528-WSNE, which beats the previous best known guarantee of 0.732. It can also be applied to the case of approximate Nash equilibria, where we obtain a randomized expected-polynomial-time algorithm that uses poly-logarithmic communication and always finds a 0.382-approximate Nash equilibrium, which improves the previous best guarantee of 0.438. Finally, the method can also be applied in the query complexity setting to give an algorithm that makes $$O(n \log n)$$ payoff queries and always finds a 0.6528-WSNE, which improves the previous best known guarantee of 2/3.
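As a self-contained illustration of the building block described above (our sketch, assuming numpy and scipy are available), here is the LP for the zero-sum game derived from a single payoff matrix; the algorithm solves one such LP per payoff matrix and then combines the two solutions with limited communication:

    import numpy as np
    from scipy.optimize import linprog

    def zero_sum_maximin(A):
        """Max-min strategy for the row player of the zero-sum game with
        payoff matrix A: maximize v subject to x^T A >= v in every column,
        with x a probability distribution over rows."""
        m, n = A.shape
        c = np.concatenate([np.zeros(m), [-1.0]])      # linprog minimizes, so minimize -v
        A_ub = np.hstack([-A.T, np.ones((n, 1))])      # v - (x^T A)_j <= 0 for each column j
        b_ub = np.zeros(n)
        A_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)  # sum_i x_i = 1
        b_eq = np.array([1.0])
        bounds = [(0, None)] * m + [(None, None)]      # x >= 0, v free
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
        return res.x[:m], -res.fun                     # strategy x and game value v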
We consider the problem of dividing indivisible goods fairly among n agents who have additive and submodular valuations for the goods. Our fairness guarantees are in terms of the maximin share, that is defined to be the maximum value that an agent can ensure for herself, if she were to partition the goods into n bundles, and then receive a minimum valued bundle. Since maximin fair allocations (i.e., allocations in which each agent gets at least her maximin share) do not always exist, prior work has focussed on approximation results that aim to find allocations in which the value of the bundle allocated to each agent is (multiplicatively) as close to her maximin share as possible. In particular, Procaccia and Wang (2014) along with Amanatidis et al. (2015) have shown that under additive valuations a 2/3-approximate maximin fair allocation always exists and can be found in polynomial time. We complement these results by developing a simple and efficient algorithm that achieves the same approximation guarantee.
Online learning represents an important family of machine learning algorithms, in which a learner attempts to resolve an online prediction (or any type of decision-making) task by learning a model/hypothesis from a sequence of data instances one at a time. The goal of online learning is to ensure that the online learner makes a sequence of accurate predictions (or correct decisions) given the knowledge of correct answers to previous prediction or learning tasks and possibly additional information. This is in contrast to many traditional batch or offline machine learning algorithms that are often designed to train a model in batch from a given collection of training data instances. This article aims to provide a comprehensive survey of the online machine learning literature through a systematic review of basic ideas and key principles and a proper categorization of different algorithms and techniques. Generally speaking, according to the learning type and the forms of feedback information, existing online learning works can be classified into three major categories: (i) supervised online learning, where full feedback information is always available; (ii) online learning with limited feedback; and (iii) unsupervised online learning, where there is no feedback available. Due to space limitations, the survey mainly focuses on the first category, but also briefly covers some basics of the other two categories. Finally, we discuss some open issues and attempt to shed light on potential future research directions in this field.
We introduce a new threshold model of social networks, in which the nodes influenced by their neighbours can adopt one out of several alternatives. We characterize the graphs for which adoption of a product by the whole network is possible (respectively necessary) and the ones for which a unique outcome is guaranteed. These characterizations directly yield polynomial time algorithms that allow us to determine whether a given social network satisfies one of the above properties.
We also study algorithmic questions for networks without unique outcomes. We show that the problem of computing the minimum possible spread of a product is NP-hard to approximate with an approximation ratio better than $\Omega(n)$, in contrast to the maximum spread, which is efficiently computable. We then move on to questions regarding the behavior of a node with respect to adopting some (resp. a given) product. We show that the problem of determining whether a given node has to adopt some (resp. a given) product in all final networks is co-NP-complete.
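A toy simulation of this kind of multi-product threshold dynamics (our sketch, not the paper's exact model) could look as follows: a non-adopter adopts a product once the fraction of its neighbours who adopted that product reaches the node's threshold.

    def spread(neighbors, thresholds, adopted):
        """Toy threshold dynamics with multiple products. neighbors: node ->
        list of nodes; thresholds: node -> fraction in (0, 1]; adopted:
        node -> product name or None. Ties are broken by product name."""
        changed = True
        while changed:
            changed = False
            for u in neighbors:
                if adopted[u] is not None or not neighbors[u]:
                    continue
                counts = {}
                for v in neighbors[u]:
                    if adopted[v] is not None:
                        counts[adopted[v]] = counts.get(adopted[v], 0) + 1
                for p in sorted(counts):
                    if counts[p] / len(neighbors[u]) >= thresholds[u]:
                        adopted[u] = p
                        changed = True
                        break
        return adopted

    # path a - b - c, node a seeded with product "X"
    nbrs = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
    print(spread(nbrs, {"a": 0.5, "b": 0.5, "c": 0.5},
                 {"a": "X", "b": None, "c": None}))

The update order in this sketch can matter when several products clear a node's threshold simultaneously, which mirrors the abstract's point that outcomes need not be unique.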
We revisit the classic problem of fair division from a mechanism design perspective and provide an elegant truthful mechanism that yields surprisingly good approximation guarantees for the widely used solution of Proportional Fairness. This solution, which is closely related to Nash bargaining and the competitive equilibrium, is known to be not implementable in a truthful fashion, which has been its main drawback. To alleviate this issue, we propose a new mechanism, which we call the Partial Allocation mechanism, that discards a carefully chosen fraction of the allocated resources in order to incentivize the agents to be truthful in reporting their valuations. This mechanism introduces a way to implement interesting truthful outcomes in settings where monetary payments are not an option.
In this work, we introduce graphical models for multi-player game theory, and give powerful algorithms for computing their Nash equilibria in certain cases. An n-player game is given by an undirected graph on n nodes and a set of n local matrices. The interpretation is that the payoff to player i is determined entirely by the actions of player i and his neighbors in the graph, and thus the payoff matrix to player i is indexed only by these players. We thus view the global n-player game as being composed of interacting local games, each involving many fewer players. Each player's action may have global impact, but it occurs through the propagation of local influences. Our main technical result is an efficient algorithm for computing Nash equilibria when the underlying graph is a tree (or can be turned into a tree with few node mergings). The algorithm runs in time polynomial in the size of the representation (the graph and the associated local game matrices), and comes in two related but distinct flavors. The first version involves an approximation step, and computes a representation of all approximate Nash equilibria (of which there may be an exponential number in general). The second version allows the exact computation of Nash equilibria at the expense of weakened complexity bounds. The algorithm requires only local message-passing between nodes (and thus can be implemented by the players themselves in a distributed manner). Despite an analogy to inference in Bayes nets that we develop, the analysis of our algorithm is more involved than that for the polytree algorithm, owing partially to the fact that we must either compute, or select from, an exponential number of potential solutions. We discuss a number of extensions, such as the computation of equilibria with desirable global properties (e.g. maximizing global return), and directions for further research.
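The representational saving is easy to see in code. Here is a hypothetical sketch (ours) of a 3-player graphical game on a path, where each player's payoff table is indexed only by her own and her neighbors' actions, so each table has 2^(deg+1) entries instead of 2^n:

    import itertools

    # hypothetical game on the path 0 - 1 - 2, two actions per player:
    # players 0 and 2 want to match their neighbor, player 1 wants to mismatch
    neighbors = {0: [1], 1: [0, 2], 2: [1]}
    payoff = {
        0: {(a0, a1): float(a0 == a1) for a0 in (0, 1) for a1 in (0, 1)},
        1: {(a1, a0, a2): float(a1 != a0) + float(a1 != a2)
            for a1, a0, a2 in itertools.product((0, 1), repeat=3)},
        2: {(a2, a1): float(a2 == a1) for a2 in (0, 1) for a1 in (0, 1)},
    }

    def utility(i, profile):
        """Payoff to player i under a full action profile; only the actions
        of i and its neighbors are ever read."""
        key = (profile[i],) + tuple(profile[j] for j in neighbors[i])
        return payoff[i][key]

    print(utility(1, {0: 0, 1: 1, 2: 0}))  # 2.0: player 1 mismatches both neighbors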
In 1992, Bartholdi, Tovey, and Trick opened the study of control attacks on elections---attempts to improve the election outcome by such actions as adding/deleting candidates or voters. That work has led to many results on how algorithms can be used to find attacks on elections and how complexity-theoretic hardness results can be used as shields against attacks. However, all the work in this line has assumed that the attacker employs just a single type of attack. In this paper, we model and study the case in which the attacker launches a multipronged (i.e., multimode) attack. We do so to more realistically capture the richness of real-life settings. For example, an attacker might simultaneously try to suppress some voters, attract new voters into the election, and introduce a spoiler candidate. Our model provides a unified framework for such varied attacks. By constructing polynomial-time multiprong attack algorithms we prove that for various election systems even such concerted, flexible attacks can be perfectly planned in deterministic polynomial time.
We introduce a new threshold model of social networks, in which the nodes influenced by their neighbours can adopt one out of several alternatives. We characterize social networks for which adoption of a product by the whole network is possible (respectively necessary) and the ones for which a unique outcome is guaranteed. These characterizations directly yield polynomial time algorithms that allow us to determine whether a given social network satisfies one of the above properties. We also study algorithmic questions for networks without unique outcomes. We show that the problem of determining whether a final network exists in which all nodes adopted some product is NP-complete. In turn, we also resolve the complexity of the problems of determining whether a given node adopts some (respectively, a given) product in some (respectively, all) network(s). Further, we show that the problem of computing the minimum possible spread of a product is NP-hard to approximate with an approximation ratio better than Ω(n), in contrast to the maximum spread, which is efficiently computable. Finally, we clarify that some of the above problems can be solved in polynomial time when there are only two products.
Suppose $\mu_1,\ldots,\mu_n$ are probability measures on the same measurable space $(\Omega, \mathscr{F})$. Then if all atoms of each $\mu_i$ have mass $\alpha$ or less, there is a measurable partition $A_1,\ldots, A_n$ of $\Omega$ so that $\mu_i(A_i) \geq V_n(\alpha)$ for all $i = 1,\ldots, n$, where $V_n(\cdot)$ is an explicitly given piecewise linear nonincreasing continuous function on [0, 1]. Moreover, the bound $V_n(\alpha)$ is attained for all $n$ and all $\alpha$. Applications are given to $L_1$ spaces, to statistical decision theory, and to the classical nonatomic case.
Information about user preferences plays a key role in automated decision making. In many domains it is desirable to assess such preferences in a qualitative rather than quantitative way. In this paper, we propose a qualitative graphical representation of preferences that reflects conditional dependence and independence of preference statements under a ceteris paribus (all else being equal) interpretation. Such a representation is often compact and arguably quite natural in many circumstances. We provide a formal semantics for this model, and describe how the structure of the network can be exploited in several inference tasks, such as determining whether one outcome dominates (is preferred to) another, ordering a set of outcomes according to the preference relation, and constructing the best outcome subject to available evidence.
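As a toy sketch (ours, with hypothetical variables) of the last inference task: in an acyclic network of this kind, the best outcome can be built by a forward sweep that assigns each variable its most preferred value given its parents' values.

    # a tiny two-variable network: dinner is preferred fish over meat;
    # wine preference is conditional on dinner (ceteris paribus statements)
    cpt = {
        "dinner": {(): ["fish", "meat"]},             # unconditional ranking
        "wine":   {("fish",): ["white", "red"],       # conditional on dinner
                   ("meat",): ["red", "white"]},
    }
    parents = {"dinner": [], "wine": ["dinner"]}

    def preferred(var, outcome):
        """Most preferred value of var given its parents' values in outcome."""
        key = tuple(outcome[p] for p in parents[var])
        return cpt[var][key][0]

    # sweep variables in a parents-first order to build the best outcome
    outcome = {}
    for var in ["dinner", "wine"]:
        outcome[var] = preferred(var, outcome)
    print(outcome)  # {'dinner': 'fish', 'wine': 'white'}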
We prove that Bimatrix, the problem of finding a Nash equilibrium in a two-player game, is complete for the complexity class PPAD (Polynomial Parity Argument, Directed version) introduced by Papadimitriou in 1991. Our result, building upon the work of Daskalakis et al. [2006a] on the complexity of four-player Nash equilibria, settles a long standing open problem in algorithmic game theory. It also serves as a starting point for a series of results concerning the complexity of two-player Nash equilibria. In particular, we prove the following theorems: —Bimatrix does not have a fully polynomial-time approximation scheme unless every problem in PPAD is solvable in polynomial time. —The smoothed complexity of the classic Lemke-Howson algorithm and, in fact, of any algorithm for Bimatrix is not polynomial unless every problem in PPAD is solvable in randomized polynomial time. Our results also have a complexity implication in mathematical economics: —Arrow-Debreu market equilibria are PPAD -hard to compute.
We show that in any n-player, m-action normal-form game, we can obtain an approximate equilibrium by sampling any mixed-action equilibrium a small number of times. We study three types of equilibria: Nash, correlated, and coarse-correlated. For each one we obtain upper and lower bounds on the number of samples required for the empirical distribution over the sampled action profiles to form an approximate equilibrium with probability close to 1. These bounds imply that using a small number of samples we can test whether or not players are playing according to an approximate equilibrium, even in games where n and m are large. In addition, our results substantially improve previously known upper bounds on the support size of approximate equilibria in games with many players. In particular, for the three types of equilibria we show the existence of approximate equilibrium with support-size polylogarithmic in n and m, whereas the previously best-known upper bounds were polynomial in n (Hémon et al. [Hémon S, de Rougemont M, Santha M (2008) Approximate nash equilibria for multi-player games. Algorithmic Game Theory (Springer), 267–278], Germano and Lugosi [Germano F, Lugosi G (2007) Existence of sparsely supported correlated equilibria. Econom. Theory 32(3):575–578], Hart et al. [Hart S, Mas-Colell A, Babichenko Y (2013) Simple Adaptive Strategies: From Regret-Matching to Uncoupled Dynamics, Vol. 4 (World Scientific Publishing Company)]).
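The underlying operation is simple to sketch (our illustration, not the paper's analysis): draw k i.i.d. profiles from a mixed strategy profile and use the empirical distribution over sampled profiles as a candidate approximate equilibrium.

    import random
    from collections import Counter

    def empirical_profile(mixed, k, rng=None):
        """Draw k i.i.d. action profiles from a product (mixed) strategy
        profile and return the empirical distribution over profiles.
        mixed: one dict per player mapping action -> probability."""
        rng = rng or random.Random(0)
        def draw(dist):
            r, acc = rng.random(), 0.0
            for action, p in dist.items():
                acc += p
                if r < acc:
                    return action
            return action  # guard against floating-point rounding
        samples = [tuple(draw(d) for d in mixed) for _ in range(k)]
        return {prof: c / k for prof, c in Counter(samples).items()}

    # two players, two actions each, both mixing uniformly
    print(empirical_profile([{"a": 0.5, "b": 0.5}, {"x": 0.5, "y": 0.5}], 1000))

The support of the returned distribution has size at most k, which is how a small sample count translates into the small-support approximate equilibria mentioned above.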
We consider the problem of maximizing a nonnegative submodular set function $f:2^N \rightarrow {\mathbb R}_+$ over a ground set $N$ subject to a variety of packing-type constraints including (multiple) matroid constraints, knapsack constraints, and their intersections. In this paper we develop a general framework that allows us to derive a number of new results, in particular, when $f$ may be a nonmonotone function. Our algorithms are based on (approximately) maximizing the multilinear extension $F$ of $f$ over a polytope $P$ that represents the constraints, and then effectively rounding the fractional solution. Although this approach has been used quite successfully, it has been limited in some important ways. We overcome these limitations as follows. First, we give constant factor approximation algorithms to maximize $F$ over a downward-closed polytope $P$ described by an efficient separation oracle. Previously this was known only for monotone functions. For nonmonotone functions, a constant factor was known only when the polytope was either the intersection of a fixed number of knapsack constraints or a matroid polytope. Second, we show that contention resolution schemes are an effective way to round a fractional solution, even when $f$ is nonmonotone. In particular, contention resolution schemes for different polytopes can be combined to handle the intersection of different constraints. Via linear programming duality we show that a contention resolution scheme for a constraint is related to the correlation gap of weighted rank functions of the constraint. This leads to an optimal contention resolution scheme for the matroid polytope. Our results provide a broadly applicable framework for maximizing linear and submodular functions subject to independence constraints. We give several illustrative examples. Contention resolution schemes may find other applications.
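The central object here is easy to state in code: the multilinear extension is $F(x) = \mathbb{E}[f(R_x)]$, where $R_x$ contains each element $e$ independently with probability $x_e$, and it is typically evaluated by sampling. A minimal Monte Carlo sketch (ours):

    import random

    def multilinear_estimate(f, x, samples=10000, rng=None):
        """Monte Carlo estimate of the multilinear extension F(x) = E[f(R_x)],
        where R_x contains each element e independently with probability x[e]."""
        rng = rng or random.Random(0)
        total = 0.0
        for _ in range(samples):
            R = {e for e, p in x.items() if rng.random() < p}
            total += f(R)
        return total / samples

    # toy coverage function f(S) = size of the union of the chosen sets
    sets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c"}}
    f = lambda S: len(set().union(*(sets[e] for e in S))) if S else 0
    print(multilinear_estimate(f, {1: 0.5, 2: 0.5, 3: 0.5}))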
Suppose we are given a submodular function f over a set of elements, and we want to maximize its value subject to certain constraints. Good approximation algorithms are known for such problems under both monotone and non-monotone submodular functions. We consider these problems in a stochastic setting, where elements are not all active and we only get value from active elements. Each element e is active independently with some known probability pe, but we don't know the element's status a priori: we find it out only when we probe the element e. Moreover, the sequence of elements we probe must satisfy a given prefix-closed constraint, e.g., matroid, orienteering, deadline, precedence, or any downward-closed constraint. In this paper we study the gap between adaptive and non-adaptive strategies for f being a submodular or a fractionally subadditive (XOS) function. If this gap is small, we can focus on finding good non-adaptive strategies instead, which are easier to find as well as to represent. We show that the adaptivity gap is a constant for monotone and non-monotone submodular functions, and logarithmic for XOS functions of small width. These bounds are nearly tight. Our techniques show new ways of arguing about the optimal adaptive decision tree for stochastic optimization problems.
With the Delsarte-MacWilliams inequalities as a starting point, an upper bound is obtained on the rate of a binary code as a function of its minimum distance. This upper bound is asymptotically less than Levenshtein's bound, and so also Elias's.
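For concreteness, the resulting bound (the first McEliece-Rodemich-Rumsey-Welch linear programming bound) is usually stated as follows: for relative minimum distance $\delta \in (0, 1/2)$, the asymptotic rate satisfies

$$R(\delta) \ \le\ h\!\left(\frac{1}{2} - \sqrt{\delta(1-\delta)}\right),$$

where $h(p) = -p \log_2 p - (1-p) \log_2(1-p)$ is the binary entropy function.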