COMSOC Video Seminar
International Seminar Series on Social Choice
Next seminar: Wednesday, 2024-10-30, 09:00 New York, 15:00 Paris. 90min including break. Everyone welcome.
Speakers: to be announced
To join, please sign up for the mailing list, to which we will send a link and password for the seminar a day before. We will also post a link here shortly before the seminar, together with a password hint.
⇩ video recordings and seminar archive at the bottom of the page. Riddle archive.
2024-10-30: to be announced
2024-11-27: to be announced
At the moment, we have one seminar session per month, usually on the fourth Wednesday of the month at 9:00 New York / 15:00 Paris time.
Coming soon: Wes Holliday (Berkeley).
Be sure to sign up for email reminders.
Fig. 1: Some of the 142 attendees of our second seminar.
Interesting events
COMSOC 2025 will take place in Vienna on 17-19 September 2025.
The Online Social Choice and Welfare Seminar Series has talks twice a month.
The EDDY network holds quarterly meetings to promote collaborations between practitioners and academics working on digital democracy.
Join us for a regular series of seminars held via video meetings, open to anyone, and targeted to the interests of attendees of the COMSOC workshop series. Topics include, for example:
Voting and collective decision making
Preference elicitation and representation; restricted preference domains
Opinion diffusion and aggregation on social networks
Judgement aggregation
Fair division and allocation
Matching and coalition formation
Researchers from all disciplines are welcome to present and attend.
Steering Committee
Chair: Dominik Peters (Paris)
Florian Brandl (Bonn)
Clemens Puppe (KIT)
Nisarg Shah (Toronto)
Anaëlle Wilczynski (Paris)
Alumni:
Edith Elkind (Oxford)
Ulle Endriss (Amsterdam)
Vincent Merlin (Caen)
William S. Zwicker (Union College)
How-Tos
20min talks, 10min Q&A
10min break between talks, with an opportunity to meet others in random breakout rooms
Participants remain in a waiting room until shortly before the seminar, to allow speakers to test screen sharing.
Turn on your video if possible, but mute your microphone.
To ask a question, type it in the chat and prefix it with "QUESTION".
Please note that we will record the seminars. Stay muted if you do not want to be recorded.
Archive of Past Seminar Sessions (Videos are available on YouTube, subscribe!)
Sonja Kraiczy (Oxford)
Stability in Random Hedonic Games
Partitioning a large group of employees into teams can prove difficult because unsatisfied employees may want to transfer to other teams. In this case, the team (coalition) formation is unstable and incentivizes deviation from the proposed structure. Such a coalition formation scenario can be modeled in the framework of hedonic games and a significant amount of research has been devoted to the study of stability in such games. Unfortunately, stable coalition structures are not guaranteed to exist in general and their practicality is further hindered by computational hardness barriers. We offer a new perspective on this matter by studying a random model of hedonic games. For three prominent stability concepts based on single-agent deviations, we provide a high probability analysis of stability in the large agent limit.
Our first main result is an efficient algorithm that outputs an individually and contractually Nash-stable partition with high probability. Our second main result is that the probability that a random game admits a Nash-stable partition tends to zero. Our approach resolves the two major downsides associated with individual stability and contractual Nash stability and reveals that agents acting single-handedly are usually to blame for instabilities.
(Joint work with Martin Bullinger)
---
Tomasz Wąs (Oxford)
Method of Equal Shares with Bounded Overspending
In participatory budgeting (PB), voters decide through voting which subset of projects to fund within a given budget. Proportionality in the context of PB is crucial to ensure equal treatment of all groups of voters. However, pure proportional rules can sometimes lead to suboptimal outcomes. We introduce the Method of Equal Shares with Bounded Overspending (BOS Equal Shares), a robust variant of Equal Shares that balances proportionality and efficiency. BOS Equal Shares addresses inefficiencies inherent in strict proportionality guarantees yet still provides good proportionality similar to the original Method of Equal Shares. In the course of the analysis, we also discuss a fractional variant of the method which allows for partial funding of projects.
(Joint work with Georgios Papasotiropoulos, Seyedeh Zeinab Pishbin, Oskar Skibski, and Piotr Skowron)
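For readers unfamiliar with the baseline rule, here is a minimal Python sketch of the original Method of Equal Shares for approval ballots with unit utilities. This is only the baseline that the talk's BOS variant modifies; the sketch makes no attempt to capture the bounded-overspending mechanism itself, and the function and variable names are illustrative.

```python
def equal_shares(voters, costs, budget):
    """voters: list of sets of approved projects; costs: dict project -> cost."""
    n = len(voters)
    money = [budget / n] * n            # every voter starts with an equal share
    remaining = set(costs)
    funded = []
    while True:
        best, best_rho = None, None
        for p in remaining:
            supporters = sorted((i for i in range(n) if p in voters[i]),
                                key=lambda i: money[i])
            if sum(money[i] for i in supporters) < costs[p]:
                continue                # supporters cannot afford p at all
            paid, rho = 0.0, None
            for j, i in enumerate(supporters):
                # poorer supporters pay their whole budget; the rest pay rho each
                r = (costs[p] - paid) / (len(supporters) - j)
                if r <= money[i]:
                    rho = r
                    break
                paid += money[i]
            if best_rho is None or rho < best_rho:
                best, best_rho = p, rho # fund the project with smallest rho
        if best is None:
            return funded               # nothing affordable remains
        for i in range(n):              # charge each supporter min(money, rho)
            if best in voters[i]:
                money[i] -= min(money[i], best_rho)
        funded.append(best)
        remaining.remove(best)
```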
Jean Lainé (Lirsa, CNAM)
Strategy-proofness with detectable lies
We introduce a new concept of manipulation of a social choice function, based on the possibility for some individuals to detect and report preference misrepresentation. More precisely, we assume that each individual having some preference p (1) can detect whether any other individual lies when claiming that p is also her preference, and (2) will report a lie that is not beneficial to herself. A social choice function is called strategy-proof with detectable lies (SPDL) if every potential manipulation is reported. We characterize the class of onto and SPDL social choice functions. We also consider weaker versions of SPDL, for which a preference misrepresentation using p can be detected by individuals whose preference agrees with p only on subsets of alternatives.
(joint work with Gilbert Laffond and M. Remzi Sanver)
--
Théo Delemazure (CNRS, LAMSADE, Université Paris Dauphine - PSL)
Generalizing Instant Runoff Voting to Allow Indifferences
Instant Runoff Voting (IRV) is used in elections for many political offices around the world. It allows voters to specify their preferences among candidates as a ranking. We identify a generalization of the rule, called Approval-IRV, that allows voters more freedom by allowing them to give equal preference to several candidates. Just like standard IRV, Approval-IRV proceeds in rounds by successively eliminating candidates. It interprets each vote as an approval vote for its most-preferred candidates among those that have not been eliminated. At each step, it eliminates the candidate who is approved by the fewest voters. Among the large class of scoring elimination rules, we prove that Approval-IRV is the unique way of extending IRV to weak orders that preserves its characteristic axiomatic properties, in particular independence of clones and respecting a majority’s top choices. We also show that Approval-IRV is the unique extension of IRV among rules in this class that satisfies a natural monotonicity property defined for weak orders. The multi-winner version of IRV is known as Single Transferable Vote (STV). We prove that Approval-STV continues to satisfy the strong proportional representation properties of STV, underlining that the approval way is the right way of extending the IRV/STV idea to weak orders.
(joint work with Dominik Peters)
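The elimination loop described above is simple enough to sketch in code. Below is a hedged Python illustration, assuming each ballot is a weak order given as a list of indifference classes (sets of candidates), most preferred first; tie-breaking among least-approved candidates is left arbitrary here, whereas the paper treats ties with care.

```python
def approval_irv(ballots, candidates):
    """ballots: lists of sets, e.g. [{'a', 'b'}, {'c'}] means a ~ b > c."""
    remaining = set(candidates)
    while len(remaining) > 1:
        approvals = {c: 0 for c in remaining}
        for ballot in ballots:
            for tier in ballot:
                top = tier & remaining        # most-preferred non-eliminated tier
                if top:
                    for c in top:             # the ballot approves the whole tier
                        approvals[c] += 1
                    break
        remaining.remove(min(remaining, key=approvals.get))
    return remaining.pop()

# e.g. approval_irv([[{'a', 'b'}, {'c'}], [{'c'}, {'a', 'b'}]], {'a', 'b', 'c'})
```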
Schedule:
Vishwa Prakash: Fair and Efficient Completion of Indivisible Goods
Hadi Hosseini: MatchU.ai: A Pedagogical Platform for Matching
Tomasz Wąs: The Squared Kemeny Rule for Averaging Rankings
Matthieu Hervouin: Participatory Budgeting with Trichotomous Ballots
Sarfaraz Equbal: Fair Interval Scheduling of Indivisible Chores
Vincent Conitzer: Social Choice for AI Alignment
– discussion (10 minutes) –
Paritosh Verma: On the Fairness of Normalized p-Means for Allocating Goods and Chores
Aric Fowler: Maximising Entertainment in the Score Reveal Problem
Federico Fioravanti: Classification Aggregation
Tian Lin Liu: Effect of IRV: Evidence from Consequential Elections
Soumyajit Pyne: Tight Bounds on Approximate Equilibria in Spatial Competition on the Line
Vasilis Christoforidis: On The Pursuit of EFX for Chores: Non-Existence and Approximations
– discussion (10 minutes) –
Andrew Mackenzie (Maastricht)
Patience ensures fairness
We revisit the problem of fairly allocating a sequence of time slots when agents may have different levels of patience (Mackenzie and Komornik, 2023). For each number of agents, we provide a lower threshold and an upper threshold on the level of patience such that (i) if each agent is at least as patient as the lower threshold, then there is a proportional allocation, and (ii) if each agent is at least as patient as the upper threshold and moreover has weak preference for earlier time slots, then there is an envy-free allocation. In both cases, the proof is constructive.
(joint work with Florian Brandl)
Zsuzsanna Jankó (Budapest)
On Connected Strongly-Proportional Cake-Cutting
We investigate the problem of fairly dividing a divisible heterogeneous resource, also known as a cake, among a set of agents. We characterize the existence of an allocation in which every agent receives a contiguous piece worth strictly more than their proportional share, also known as a strongly-proportional allocation. The characterization is supplemented with an algorithm that determines the existence of a connected strongly-proportional allocation using at most n · 2^(n−1) queries. We provide a simpler characterization for agents with strictly positive valuations, and show that the number of queries required to determine the existence of a connected strongly-proportional allocation is in Θ(n^2). Our proofs are constructive and yield a connected strongly-proportional allocation, when it exists, using a similar number of queries.
(with Attila Joó, Erel Segal-Halevi, Sheung Man Yuen)
--
Evi Micha (Harvard)
Fair and Representative Citizens' Assemblies
Sortition is based on the idea of choosing randomly selected representatives for decision-making. The main properties that make sortition particularly appealing are fairness — all citizens can be selected with the same probability — and proportional representation — a randomly selected panel likely reflects the composition of the whole population. When a population lies on a representation metric, we formally define proportional representation using a notion called the core. A panel is in the core if no group of individuals is underrepresented proportional to its size. While uniform selection is fair, it does not always return panels that are in the core. Thus, we ask: Can we design a selection algorithm that satisfies fairness and the core simultaneously? We answer this question affirmatively and present an efficient selection algorithm, called Fair Greedy Capture, that is fair and provides a constant-factor approximation to the optimal core. We also ask: Do these panels reflect the entire population’s opinion? We present a positive answer by adopting the concept of metric distortion. We show that while uniform selection does not provide a reasonable distortion, Fair Greedy Capture achieves a constant distortion.
Flip Klijn (Institute for Economic Analysis (CSIC) and Barcelona School of Economics)
Core Stability and Strategy-Proofness in Hedonic Coalition Formation Problems with Friend-Oriented Preferences
We study hedonic coalition formation problems with friend-oriented preferences; that is, each agent has preferences over his coalitions based on a partition of the set of agents, except himself, into "friends" and "enemies" such that (E) adding an enemy makes him strictly worse off and (F) adding a friend together with a set of enemies makes him strictly better off. Friend-oriented preferences induce a so-called friendship graph where vertices are agents and directed edges point to friends. We show that the partition associated with the strongly connected components (SCC) of the friendship graph is in the strict core. We then prove that the SCC mechanism, which assigns the SCC partition to each hedonic coalition formation problem with friend-oriented preferences, satisfies a strong group incentive compatibility property: group strategy-proofness. Our main result is that on any "rich" subdomain of friend-oriented preferences, the SCC mechanism is the only mechanism that satisfies core stability and strategy-proofness.
--
Gerdus Benadè (Boston)
Towards Rawlsian justice in food rescue
We study a problem faced by a national food rescue platform which matches each donation to the first recipient who claims it. Recipients have very different response rates, leading to a few highly responsive recipients claiming the bulk of the donations. We ask if priority lists, which control when the donation is announced to each recipient, are a remedy. We find that an n-stage priority list, which controls the announcement time for every agent individually, can achieve any desired fractional allocation when goods are non-perishable. Two-stage priority lists, which announce a donation in only two waves, are simpler to implement and administer but offer less fine-grained control. We give polynomial-time algorithms to find the 2-stage and n-stage priority lists that maximize a class of Rawlsian objective functions. Computational experiments find that even 2-stage priority lists lead to significantly more balanced allocations than the existing first-come first-served rule.
Ulrike Schmidt-Kraepelin (SLMath)
Project-Fair and Truthful Mechanisms for Budget Aggregation
We study the budget aggregation problem in which a set of strategic voters must split a finite divisible resource (such as money or time) among a set of competing projects. Our goal is twofold: We seek truthful mechanisms that provide fairness guarantees to the projects. For the first objective, we focus on the class of moving phantom mechanisms [Freeman et al., 2021], which are -- to this day -- essentially the only known truthful mechanisms in this setting. For project fairness, we consider the mean division as a fair baseline, and bound the maximum difference between the funding received by any project and this baseline. We propose a novel and simple moving phantom mechanism that provides optimal project fairness guarantees. As a corollary of our results, we show that our new mechanism minimizes the l1 distance to the mean for three projects and gives the first non-trivial bounds on this quantity for more than three projects.
This is joint work with Rupert Freeman.
--
Simon Rey (Amsterdam)
The ComSoC Tools
In this talk, I will discuss various tools that have been developed for and by the ComSoC community. I will start by focusing on PrefLib, presenting its recent developments. The general ComSoC ecosystem will then be presented, with a special emphasis on recently developed tools. Finally, I will propose a general discussion on how to organize and promote such tools.
Alexandros Psomas (Purdue)
Fair Division With Imperfectly Rational Agents
We consider the fundamental problem of allocating a set of indivisible goods among strategic agents with additive valuation functions. It is well known that, in the absence of monetary transfers, Pareto efficient and truthful rules are dictatorial, while there is no deterministic truthful mechanism that allocates all items and achieves envy-freeness up to one item (EF1), even for the case of two agents. The works I discuss in this talk stem from the observation that, in the context of fair division, truthfulness is used as a synonym for Dominant Strategy Incentive Compatibility (DSIC), requiring that an agent prefers reporting the truth, no matter what other agents report. In this talk, I will discuss two (incomparable) relaxations of DSIC: non-obvious manipulability (NOM) and Bayesian incentive compatibility (BIC), and show how these relaxations allow us to bypass the aforementioned negative results.
This talk is based on two papers: one joint with Paritosh Verma, and one joint with Vasilis Gkatzelis, Xizhi Tan, and Paritosh Verma.
--
Paul Gölz (SLMath)
Generative Social Choice
Traditionally, social choice theory has only been applicable to choices among a few predetermined alternatives but not to more complex decisions such as collectively selecting a textual statement. We introduce generative social choice, a framework that combines the mathematical rigor of social choice theory with large language models’ capability to generate text and extrapolate preferences. This framework divides the design of AI-augmented democratic processes into two components: first, proving that the process satisfies rigorous representation guarantees when given access to oracle queries; second, empirically validating that these queries can be approximately implemented using a large language model. We illustrate this framework by applying it to the problem of generating a slate of statements that is representative of opinions expressed as free-form text, for instance in an online deliberative process.
Session chair: Nisarg Shah
---
Vasilis Gkatzelis (Drexel)
Learning-Augmented Mechanism Design
This talk will introduce the model of “learning-augmented mechanism design” (or “mechanism design with predictions”), which is an alternative model for the design and analysis of mechanisms in strategic settings. Aiming to complement the traditional approach in computer science, which analyzes the performance of algorithms based on worst-case instances, recent work on “algorithms with predictions” has developed algorithms that are enhanced with machine-learned predictions regarding the optimal solution. The algorithms can use this information to guide their decisions and the goal is to achieve much stronger performance guarantees when these predictions are accurate (consistency), while also maintaining good worst-case guarantees, even if these predictions are very inaccurate (robustness). This model has recently been used to analyze the fair allocation of goods that arrive in an online fashion, as well as to design voting rules that optimize for distortion. This talk will focus on the adaptation of this framework into mechanism design and specifically on the problem of strategic facility location.
---
Fatih Kızılkaya (USC)
Plurality Veto and Beyond: On Natural and Practical Voting Rules with Optimal Metric Distortion
The metric distortion framework posits that n voters and m candidates are jointly embedded in a metric space such that voters rank candidates that are closer to them higher. A voting rule’s purpose is to pick a candidate with minimum total distance to the voters, given only the rankings, but not the actual distances. As a result, in the worst case, each deterministic rule picks a candidate whose total distance is at least three times larger than that of an optimal one, i.e., has distortion at least 3. The recent breakthrough result of Gkatzelis et al. (FOCS’20) showed that the optimal distortion of 3 can be achieved by a mechanism called Plurality Matching. This rule is a complicated exhaustive search picking an arbitrary candidate for whom a certain candidate-specific bipartite graph contains a perfect matching. In our previous work (IJCAI’22), a much simpler rule called Plurality Veto was shown to achieve distortion 3 as well. This rule only constructs such a matching implicitly, but it, too, makes some arbitrary decisions affecting the outcome. Due to their inherent arbitrariness, neither of these rules is even remotely usable in practice without an independent arbitrator. In our most recent work (EC’23), we identify the underlying source of arbitrariness in these rules, which leads to a highly practical voting rule with optimal distortion 3 called Simultaneous Plurality Veto. This rule is also intuitive and easy to explain: Each candidate starts off with public support equal to his plurality score. From time 0 to 1, every voter continuously brings down, at rate 1, the support of her bottom choice among not-yet-eliminated candidates. A candidate is eliminated if he is opposed by a voter after his support reaches 0. On top of being anonymous and neutral, the absence of arbitrary non-deterministic choices in this rule allows for the study of other axiomatic properties that are desirable in practice. We show that it also satisfies resolvability, monotonicity, majority, majority loser, mutual majority and reversal symmetry, in addition to still guaranteeing metric distortion 3.
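The continuous-time process described in the abstract can be simulated with a discrete-event sketch: between eliminations, each remaining candidate loses support at a rate equal to the number of voters whose bottom remaining choice he is, so it suffices to jump from one zero-crossing to the next. The Python sketch below is illustrative only and breaks ties arbitrarily, where the actual rule handles them symmetrically.

```python
def simultaneous_plurality_veto(rankings):
    """rankings: list of strict rankings (lists of candidates, best first)."""
    candidates = set(rankings[0])
    support = {c: 0.0 for c in candidates}
    for r in rankings:
        support[r[0]] += 1.0                  # initial support = plurality score
    while len(candidates) > 1:
        rate = {c: 0.0 for c in candidates}   # current veto rate on each candidate
        for r in rankings:
            bottom = next(c for c in reversed(r) if c in candidates)
            rate[bottom] += 1.0
        # advance time until the first opposed candidate's support hits zero
        dt = min(support[c] / rate[c] for c in candidates if rate[c] > 0)
        for c in candidates:
            support[c] -= dt * rate[c]
        for c in [c for c in candidates if rate[c] > 0 and support[c] <= 1e-9]:
            candidates.remove(c)              # opposed at zero support: eliminated
    return candidates.pop()
```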
Session chairs: Vincent Merlin and Anaëlle Wilczynski
--
Dolors Berga (Girona)
Condorcet consistency and pairwise justifiability under variable agendas
Consider the following principle regarding the performance of collective choice rules. "If a rule selects alternative x in situation 1, and alternative y in situation 2, there must be an alternative z, and some member of society whose appreciation of z relative to x has increased when going from situation 1 to situation 2." This principle requires a minimal justification for the fall of x in the consideration of society: someone must have decreased its appreciation relative to some other possible alternative. On appropriately restricted domains, pairwise justifiability, along with anonymity and neutrality, characterizes Condorcet consistent rules, thus providing a foundation for the choice of the alternatives that win by majority over all others in pairwise comparisons, when they exist. We study the consequences of imposing this requirement of pairwise justifiability on a large class of collective choice rules that includes social choice and social welfare functions as particular cases. When preference profiles are unrestricted, it implies dictatorship, and both Arrow's and the Gibbard-Satterthwaite's theorems become corollaries of our general result.
Joint work with S. Barberà, B. Moreno and A. Nicolò.
--
Oliviero Nardi (TU Wien)
Repeated Fair Allocation of Indivisible Items
The problem of fairly allocating a set of indivisible items is a well-known challenge in the field of (computational) social choice. In this scenario, there is a fundamental incompatibility between notions of fairness (such as envy-freeness and proportionality) and economic efficiency (such as Pareto-optimality). However, in the real world, items are not always allocated once and for all, but often repeatedly. For example, the items may be recurring chores to distribute in a household. Motivated by this, we initiate the study of the repeated fair division of indivisible goods and chores, and propose a formal model for this scenario. We show that, if the number of repetitions is a multiple of the number of agents, there always exists a sequence of allocations that is proportional and Pareto-optimal. On the other hand, irrespective of the number of repetitions, an envy-free and Pareto-optimal sequence of allocations may not exist. For the case of two agents, we show that if the number of repetitions is even, it is always possible to find a sequence of allocations that is overall envy-free and Pareto-optimal. We then prove even stronger fairness guarantees, showing that every allocation in such a sequence satisfies some relaxation of envy-freeness. Finally, in case the number of repetitions can be chosen freely, we show that envy-free and Pareto-optimal allocations are achievable for any number of agents.
Joint work with Ayumi Igarashi, Martin Lackner, and Arianna Novaro.
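As a toy illustration of why repetition helps (this is not the paper's construction, which additionally guarantees Pareto-optimality): if the number of rounds is a multiple of n, rotating any fixed partition of the items among the n agents gives every agent every bundle equally often, so with additive valuations each agent's total utility is exactly her proportional share. The small, self-contained Python check below uses made-up valuations.

```python
n = 3
bundles = [{'a'}, {'b', 'c'}, {'d'}]           # some fixed partition into n bundles
utils = [{'a': 5, 'b': 1, 'c': 1, 'd': 2},     # additive valuations of 3 agents
         {'a': 2, 'b': 2, 'c': 2, 'd': 3},
         {'a': 0, 'b': 4, 'c': 1, 'd': 4}]

rounds = 2 * n                                 # a multiple of n
for i in range(n):
    total = sum(utils[i][g]
                for t in range(rounds)
                for g in bundles[(i + t) % n]) # agent i gets bundle (i+t) mod n
    fair_share = rounds * sum(utils[i].values()) / n
    assert total == fair_share                 # exact proportionality overall
```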
We are returning from our summer break with a rump session featuring several short 5-minute talks.
Speakers: Erel Segal-Halevi, Reshef Meir, Georgios Papasotiropoulos, Jan Maly, Chris Dong, Nicholas Teh, Soroush Ebadian, Ben Armstrong, Chinmay Sonar, Shivika Narang.
Michal Feldman (Tel Aviv)
Algorithmic Contract Design
Contract design captures situations where a principal delegates the execution of a costly task to an agent. To complete the task, the agent chooses an action from a set of costly actions. The principal can only observe the outcome, which is stochastically determined by the chosen action. The principal incentivizes the desired action through a contract that specifies payments based on the observed outcome. In this talk, I will survey two papers on *combinatorial* contracts, which highlight different sources of complexity that arise in contract design. The first (FOCS’21) is where the agent can choose any subset of a given set of actions; the second (STOC’23) is where the principal motivates a team of agents. We provide (approximation) algorithms and hardness results for the optimal contract problem in these scenarios.
Based on joint work with Tomer Ezra, Paul Duetting and Thomas Kesselheim.
--
Moshe Babaioff (Hebrew University)
Fair Shares: Feasibility, Domination and Incentives
We consider the problem of fairly allocating a set of indivisible goods to equally entitled agents, with no monetary transfers. A share function maps a pair of agent valuation and number of agents to a non-negative value, with the interpretation that if an allocation fails to give the agent a bundle of value at least equal to her share, this serves as evidence that the allocation is not fair towards the agent. We embark on a systematic study of feasible shares – a share is feasible if we can always give every agent her share. We introduce the notion of a self-maximizing share to capture the incentive for truthful valuation reporting. We prove the nonexistence of an “ultimate feasible share”: one that dominates any other feasible share that is self-maximizing. We then present several feasible shares that are self-maximizing and polynomial-time computable, and approximately dominate all other shares.
Joint work with Uri Feige. https://arxiv.org/abs/2205.07519
Niclas Boehmer (TU Berlin)
Robustness in Elections and Participatory Budgeting: Experiments and Counting Complexity
Robustness measures for election winners - such as the margin of victory, the score difference, or the minimum cost of a successful destructive bribery - are typically worst-case focused. In this talk, we explore a new average-case view on winner robustness by examining the winning probabilities of candidates when introducing random noise in the votes. Interestingly, computing these probabilities requires solving so far unexplored counting variants of established robustness-related decision problems. In an experimental analysis, we provide evidence that our average-case approach offers a new perspective that substantially differs from existing, worst-case-focused methods. We compare the robustness of different voting rules and evaluate the robustness of winners from the Formula 1 World Championship and some variants of real-world political elections. We find many instances of elections that have surprisingly non-robust winners and identify numerous delicate robustness patterns that cannot be identified using classic approaches. Finally, we describe how a similar approach can also be used to assess the robustness of funding decisions in participatory budgeting.
Based on joint work with Robert Bredereck, Piotr Faliszewski, Łukasz Janeczko, Andrzej Kaczmarczyk, and Rolf Niedermeier
--
Jugal Garg (Univ. of Illinois at Urbana-Champaign)
Finding fair and efficient allocations of indivisible goods
Fair division is an age-old problem of allocating a set of goods among agents with preferences in a fair and efficient manner. It arises naturally in a wide range of real-life settings, from interpersonal to international conflicts. I will talk about some recent exciting developments in this area, based on https://arxiv.org/abs/2211.03883.
Hadi Hosseini (Penn State)
Distributing Mixed Items Fairly: A Lexicographic Excursion
Fair division of indivisible items encompasses a wide range of real-world applications. The discrete nature of this problem has given rise to several intriguing approximate notions such as envy-freeness up to any item (EFX), envy-freeness up to one item (EF1), and maximin share guarantee (MMS). In this talk, I will focus on the fair allocation of a mixture of goods and chores under lexicographic preferences. I will argue that this class of preferences is natural, common in computational social choice and AI, and more importantly, elevates our understanding of computational and structural challenges in achieving fairness. In particular, I will i) demonstrate the non-existence and intractability of achieving EFX, ii) identify natural subclasses of preferences for which positive results can be obtained (along with Pareto optimality), and iii) discuss some of the remaining challenges in achieving other fairness notions and pushing beyond this domain. The negative results immediately carry over to additive valuations while the positive results highlight the challenges of achieving fair and efficient allocations of mixed items.
---
Martin Bullinger (TU Munich)
Superiority of Instantaneous Decisions in Thin Dynamic Matching Markets
We study a dynamic matching procedure where homogeneous agents arrive at random according to a Poisson process and form edges at random yielding a sparse market. Agents leave according to a certain departure distribution and may leave early by forming a pair with a compatible agent. The primary objective is to maximize the number of matched agents. Our main result is to show that a mild condition on the departure distribution suffices to get almost optimal performance of instantaneous matching, despite operating in a thin market. We are thus the first to provide a natural condition under which instantaneous decisions are superior in a market that is both sparse and thin. This result is surprising because similar results in the previous literature are based on market thickness. In addition, instantaneous matching performs well with respect to further objectives such as minimizing waiting times and avoiding the risk of market congestion.
Joint work with Johannes Bäumler, Stefan Kober, and Donghao Zhu.
Daniel Halpern (Harvard)
Representation with Incomplete Votes
Platforms for online civic participation rely heavily on methods for condensing thousands of comments into a relevant handful based on whether participants agree or disagree with them. We argue that these methods should guarantee fair representation of the participants, as their outcomes may affect the health of the conversation and inform impactful downstream decisions. To that end, we draw on the literature on approval-based committee elections. Our setting is novel in that the approval votes are incomplete since participants will typically not vote on all comments. We prove that this complication renders non-adaptive algorithms impractical in terms of the amount of information they must gather. Therefore, we develop an adaptive algorithm that uses information more efficiently by presenting incoming participants with statements that appear promising based on votes by previous participants. We prove that this method satisfies commonly used notions of fair representation, even when participants only vote on a small fraction of comments. Finally, an empirical evaluation on real data shows that the proposed algorithm provides representative outcomes in practice.
Joint work with Gregory Kehne, Ariel D. Procaccia, Jamie Tucker-Foltz and Manuel Wuthrich.
--
Marc Fleurbaey (Paris School of Economics / CNRS)
Universal social welfare orderings and risk
How to evaluate and compare social prospects when there may be risk about i) the actual allocation people will receive, ii) the existence of these future people, and iii) their preferences? This paper investigates this question, which may arise when considering policies that endogenously affect future people, for instance climate policy. We show that there is no social ordering that meets minimal requirements of fairness, social rationality, and respect for people's ex ante preferences. We explore three ways to avoid this impossibility. First, if we drop the ex ante Pareto requirement, we can obtain fair ex post criteria that take an (arbitrary) expected utility of an equally-distributed equivalent level of well-being. Second, if the social ordering is not an expected utility, we can obtain fair ex ante criteria that assess uncertain individual prospects with a certainty-equivalent measure of well-being. Third, if we accept that interpersonal comparisons rely on VNM utility functions even in absence of risk, we can construct expected utility social orderings that satisfy some version of ex ante Pareto.
Joint work with Stéphane Zuber.
Session chair: Ulle Endriss (Amsterdam)
--
Piotr Skowron (Warsaw)
Proportional Participatory Budgeting: an Experimental View
We compare election methods for participatory budgeting based on data coming from real elections, focusing on comparing the Method of Equal Shares with the methods that are currently used by cities.
--
Antonin Macé (Paris)
A Model of Repeated Collective Decisions (Slides)
The theory of repeated games offers a compelling rationale for cooperation in a variety of environments. Yet, its consequences for collective decision-making have been largely unexplored. In this paper, we propose a general model of repeated voting in committees and study equilibrium behavior under alternative majority rules. Our main characterization reveals a complex, non-monotonic relationship between the majority threshold, the preference distribution, and the optimal equilibrium outcome. In contrast with the stage-game equilibrium, the optimal equilibrium of the repeated game involves a form of implicit logrolling, with individuals sometimes voting against their preference to achieve the efficient decision. In turn, this affects the optimal voting rule, which may significantly differ from the optimal rule under sincere voting. The model provides a rationale for the use of unanimity rule while accounting for the prevalence of consensus in committees which use a lower majority threshold.
Session Chair: Vincent Merlin (Caen)
--
François Durand (Nokia Bell Labs)
Coalitional Manipulation of Voting Rules: Simulations on Empirical Data
Using computer simulations based on empirical data, we show that seven voting rules that we call the IRV family (Instant-Runoff Voting, exhaustive ballot, Condorcet-IRV, Benham, Smith-IRV, Tideman and Woodall) are less sensitive to coalitional manipulation than a large selection of prominent voting rules. While the relative performances of these seven rules still deserve further investigation, we show that the differences are at most marginal.
Preprint: https://www.researchsquare.com/article/rs-2032633/v1
--
Zoi Terzopoulou (Saint-Etienne)
Let’s Agree to Agree: Targeting Consensus for Incomplete Preferences through Majority Dynamics
Consider settings in which agents with incomplete preferences need to make a collective decision, like a group of organisers needing to agree on an online platform for their upcoming conference, without having full preferences about all options. We focus on a process of majority dynamics where issues are addressed one at a time and undecided agents follow the opinion of the majority. We assess the effects of this process on various consensus notions—such as the Condorcet winner—and show that in the worst case, myopic adherence to the majority damages existing consensus; yet, simulation experiments indicate that the damage is often mild. We also examine scenarios where the chair of the decision process can control the existence (or the identity) of consensus, by determining the order in which the issues are discussed.
This is joint work with Sirin Botan and Simon Rey (published in the proceedings of IJCAI-2022).
Session chair: Ulle Endriss (Amsterdam)
--
Davide Grossi (Groningen and Amsterdam)
From Juries to Markets: Collective Truth-tracking by Voting and by Wagering
In this talk I will introduce ongoing work on establishing formal relationships between weighted majority elections and information markets in a binary truth-tracking setting. I will present correspondence results showing how weighted majority elections and specific classes of information markets implement the same collective truth-tracking behaviour. The aim is to provide common foundations to both elections and information markets in the framework of epistemic social choice.
Joint work with Nicholas Kees Dupuis and Stéphane Airiau.
--
Manon Revel (MIT)
Democratic Innovations and Social Choice: The Case of Liquid Democracy
Proposals to change the size of the Congress, update ballot types, or channel decisions through citizens' assemblies or participatory budgeting are being actively researched and tested. Liquid democracy is making its way to the democratic innovation table as a model that may stand in for non-active stakeholders and enhance collective intelligence using the wealth of interpersonal information embedded in social networks. Yet, much uncertainty remains about its propensity to concentrate voting power in the hands of a few, its likelihood to ease vote buying, or to create problematic (e.g., cyclical) delegations. In this talk, I will review what we currently understand theoretically and empirically about liquid democracy and interpret how that informs further real-world testing and deployment of liquid democratic procedures.
Session Chair: Bill Zwicker (Union College)
--
Ayumi Igarashi (NII Tokyo)
Fair Division of House Chores
Division of house chores can be a great source of conflicts. For example, some studies have reported that inequity in the division of household labor can cause significant stress to couples in the home. This raises the fundamental question of how to divide chores. Fair division is a growing field in mathematics and computer science that provides an elegant solution to this problem. It has been successfully implemented in various real-life applications, including the Adjusted Winner Website, Fair Division Calculator, Course Match, and Spliddit. In this talk, I would like to present a new application of fair division to the division of household chores. Our goal is to provide a systematic and transparent tool to divide household chores and help create harmony in the home.
--
Alexandros Voudouris (Essex)
The Metric Distortion of Multiwinner Voting
We extend the framework of metric distortion to multiwinner voting. There are n agents and m alternatives located in an underlying metric space. The exact distances between agents and alternatives are unknown. Instead, each agent provides a ranking of the alternatives, ordered from the closest to the farthest. Typically, the goal is to select a single alternative that approximately minimizes the total distance from the agents, and the worst-case approximation ratio is termed distortion. In the case of multiwinner voting, the goal is to select a committee of k alternatives that (approximately) minimizes the total cost to all agents. We consider the scenario where the cost of an agent for a committee is her distance from her q-th favorite alternative in the committee. We reveal a surprising trichotomy on the distortion of multiwinner voting rules in terms of k and q: The distortion is unbounded when q ≤ k/3, asymptotically linear in the number of agents when k/3 < q ≤ k/2, and constant when q > k/2.
Joint work with Ioannis Caragiannis and Nisarg Shah.
Session Chair: Vincent Merlin (Caen)
--
Martin Lackner (TU Wien)
Perpetual Voting
Perpetual voting is a new formalism to support long-term collective decision making. The corresponding perpetual voting rules take the history of previous decisions into account and---due to this additional information---can offer fairness guarantees that are unachievable in single-shot decisions. In particular, such rules enable minorities to have a fair (proportional) influence on the decision process and thus foster long-term participation. I will discuss strengths and caveats of perpetual voting and present results of our axiomatic analysis.
Joint work with Jan Maly.
--
Anaëlle Wilczynski (CentraleSupélec)
Combining Fairness and Optimality when Selecting and Allocating Projects
We consider the problem of the joint selection and allocation of projects to a population of agents, e.g., students are assigned papers and shall present them to their peers. The selection can be constrained either by quotas over subcategories of projects, or by the preferences of the agents themselves. In this context, we explore fairness and optimality issues. In particular, we refine the analysis of the optimality concepts of rank-maximality and popularity, and show that they are compatible with reasonable fairness requirements related to rank-based envy-freeness. Furthermore, we adapt the concept of popularity in order to select globally good projects according to the preferences of the agents.
Joint work with Khaled Belahcène and Vincent Mousseau.
2022-06-09: Special Session in Memory of Rolf Niedermeier and Gerhard J. Woeginger --- talks by M. Bentert and R. Bredereck [Video]
Recently, we lost two central and esteemed researchers who were part of the COMSOC community: Rolf Niedermeier (*1966 † 2022) and Gerhard J. Woeginger (*1964 † 2022). We are dedicating this special session to their memory. Matthias Bentert and Robert Bredereck will give two short memorial speeches about Rolf and Gerhard, and present results from recent collaborations.
Session Chair: Piotr Faliszewski (AGH Krakow)
--
Matthias Bentert (TU Berlin)
An Adaptive Version of Brandes' Algorithm for Betweenness Centrality
Betweenness centrality measures the relative importance of a vertex in a graph in terms of how many shortest paths pass through it. It is one of the most important concepts in network analysis and the well-known algorithm of Brandes [Journal of Mathematical Sociology 2001] computes it in O(nm) time. While the running time is asymptotically optimal under the APSP conjecture, significant empirical speedups were achieved in later work by preprocessing degree-one vertices and cut vertices. The speedup can partially be explained by the fact that many real-world social networks follow a power-law degree distribution. Going further in this direction, we present an algorithmic treatment of degree-two vertices and thereby prove an adaptive running time bound of O(nk), where k < m is the feedback edge number of the input graph. To the best of our knowledge, this is the first proven (parameterized) worst-case improvement over Brandes' upper bound.
This is joint work with Alexander Dittmann, Leon Kellerhals, André Nichterlein, and Rolf Niedermeier.
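For context, here is a compact Python sketch of the classical Brandes algorithm for unweighted graphs, the O(nm) baseline that the talk's adaptive O(nk) bound improves upon; it implements only the plain version, with none of the degree-one or degree-two preprocessing the talk is about.

```python
from collections import deque

def betweenness(adj):
    """adj: dict vertex -> list of neighbours (undirected, unweighted)."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        # BFS from s, counting shortest paths (sigma) and predecessors
        sigma, dist, preds = {s: 1.0}, {s: 0}, {v: [] for v in adj}
        order, queue = [], deque([s])
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] = sigma.get(w, 0.0) + sigma[v]
                    preds[w].append(v)
        # back-propagate pair dependencies in reverse BFS order
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc   # for undirected graphs, halve the values if desired
```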
--
Robert Bredereck (TU Clausthal)
How to Put Through Your Agenda in Collective Binary Decisions
We consider the following decision-making scenario: a society of voters has to find an agreement on a set of proposals, and every single proposal is to be accepted or rejected. Each voter supports a certain subset of the proposals—the favorite ballot of this voter—and opposes the remaining ones. He accepts a ballot if he supports more than half of the proposals in this ballot. The task is to decide whether there exists a ballot approving a specified number of selected proposals (agenda) such that all voters (or a strict majority of them) accept this ballot.
We show that, on the negative side, both problems are NP-complete, and on the positive side, they are fixed-parameter tractable with respect to the total number of proposals or with respect to the total number of voters. We look into further natural parameters and study their influence on the computational complexity of both problems, thereby providing both tractability and intractability results. Furthermore, we provide tight combinatorial bounds on the worst-case size of an accepted ballot in terms of the number of voters.
The talk is based on a joint paper with Noga Alon, Jiehua Chen, Stefan Kratsch, Rolf Niedermeier, Gerhard J. Woeginger.
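As a toy illustration of the decision problem, the brute-force Python sketch below checks whether some ballot containing a given agenda is accepted by a strict majority of voters; its exponential running time is consistent with the NP-completeness result. Names are illustrative, and the exact acceptance variant is a simplifying assumption.

```python
from itertools import combinations

def agenda_pushable(proposals, favorites, agenda):
    """favorites: list of sets (one per voter); agenda: set of proposals."""
    def accepts(fav, ballot):
        return len(fav & ballot) > len(ballot) / 2   # supports a strict majority
    rest = [p for p in proposals if p not in agenda]
    for k in range(len(rest) + 1):
        for extra in combinations(rest, k):
            ballot = agenda | set(extra)
            if sum(accepts(f, ballot) for f in favorites) > len(favorites) / 2:
                return ballot          # a strict majority of voters accepts
    return None                        # no ballot through this agenda works
```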
For this special session, we invited each speaker to talk about "an older piece of work that never garnered the attention it deserved". Each of our four distinguished speakers will have 10 minutes to make their case (plus 5 minutes for questions).
Session Chair: Dominik Peters (LAMSADE)
Please note that this second instalment of the 'Oldies but Goodies' format will take place one week later than it usually would, so as to make space for AAMAS-2022, which will include a number of social choice talks.
Speakers: Annick Laruelle (Bilbao), Lirong Xia (RPI), Reshef Meir (Technion), Klaus Nehring (UC Davis)
The following papers will be discussed:
Elena Iñarra, Annick Laruelle, and Peio Zuazo-Garin. Games with Perception. Journal of Mathematical Psychology, 2015.
Lirong Xia. Designing Social Choice Mechanisms Using Machine Learning. Proceedings of the 12th International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2013.
Dvir Falik, Reshef Meir and Moshe Tennenholtz. On Coalitions and Stable Winners in Plurality. Proceedings of the 8th Workshop on Internet and Network Economics (WINE), 2012.
Klaus Nehring and Clemens Puppe. The Structure of Strategy-proof Social Choice. Part I: General Characterization and Possibility Results on Median Spaces. Journal of Economic Theory, 2007.
For this special session we invited each speaker to talk about "an older piece of work that never garnered the attention it deserved". Each of our four distinguished speakers will have 10 minutes to make their case (plus 5 minutes for questions).
Session Chair: Bill Zwicker (Union College)
Speakers: Jean-François Laslier (Paris), Mark C. Wilson (Amherst), Jérôme Lang (Paris), Hervé Moulin (Glasgow)
The following papers will be discussed:
Gilbert Laffond, Jean-François Laslier, and Michel Le Breton. K-Player Additive Extension of Two-Player Games with an Application to the Borda Electoral Competition Game. Theory and Decision, 2000.
Geoffrey Pritchard, Reyhaneh Reyhani, and Mark C. Wilson. Power Measures Derived from the Sequential Query Process. Mathematical Social Sciences, 2013.
Yann Chevaleyre, Jérôme Lang, Nicolas Maudet, and Guillaume Ravilly-Abadie. Compiling the Votes of a Subelectorate. Proceedings of the 21st International Joint Conference on Artificial Intelligence (IJCAI), 2009.
Hervé Moulin. An Efficient and Almost Budget Balanced Cost Sharing Method. Games and Economic Behavior, 2010.
Session Chair: Ulle Endriss (Amsterdam)
---
Warut Suksompong (Singapore)
Picking Sequences and Monotonicity in Weighted Fair Division
We study the problem of fairly allocating indivisible items to agents with different entitlements, which captures, for example, the distribution of ministries among political parties in a coalition government. Our focus is on picking sequences derived from common apportionment methods, including five traditional divisor methods and the quota method. We paint a complete picture of these methods in relation to known envy-freeness and proportionality relaxations for indivisible items as well as monotonicity properties with respect to the resource, population, and weights. In addition, we provide characterizations of picking sequences satisfying each of the fairness notions, and show that the well-studied maximum Nash welfare solution fails resource- and population-monotonicity even in the unweighted setting. Our results serve as an argument in favor of using picking sequences in weighted fair division problems.
Joint work with Mithun Chakraborty and Ulrike Schmidt-Kraepelin.
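To illustrate how an apportionment method induces a picking sequence, here is a hedged Python sketch of the divisor-method construction: at each turn, the agent maximizing weight / d(picks so far) picks next, with d(t) = t + 1 (Jefferson/D'Hondt) as the default. The talk covers five divisor methods plus the quota method; tie-breaking is arbitrary in this sketch.

```python
def picking_sequence(weights, num_items, d=lambda t: t + 1):
    """Return the order of agents induced by divisor function d."""
    picks = [0] * len(weights)
    sequence = []
    for _ in range(num_items):
        # next pick goes to the agent with the highest current quotient
        i = max(range(len(weights)), key=lambda i: weights[i] / d(picks[i]))
        sequence.append(i)
        picks[i] += 1
    return sequence

# e.g. picking_sequence([3, 2, 1], 6) -> [0, 1, 0, 0, 1, 2]
# under D'Hondt, with ties broken by agent index
```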
---
Stefan Napel (Bayreuth)
Influence and Manipulation in Weighted Committee Voting (Slides)
Committee decisions on three or more alternatives depend on the adopted voting method and the underlying distribution of votes (e.g., voting weights, party seats, individual shareholdings). Based on structural analysis of how voting weights affect the aggregation of individual preferences into a collective decision and generalizations of the Penrose-Banzhaf and Shapley-Shubik indices from binary votes to general social choice, we study the distribution of voting power and who benefits from adoption of a particular voting rule (e.g., a plurality vote vs. plurality voting with a runoff vs. pairwise comparisons). We also investigate the interaction of voting weights, voting method and the manipulability of committee decisions.
Joint work with Sascha Kurz and Alexander Mayer.
Session Chair: Dominik Peters (LAMSADE)
---
Kamesh Munagala (Duke)
Locally fair partitioning (arXiv)
We model the societal task of redistricting political districts as a fair partitioning problem: Given a set of n points in the plane, each belonging to one of two parties, and a parameter k, our goal is to compute a partition of the plane into regions so that each region contains roughly n/k points. The partition should satisfy a notion of "local" fairness, which is related to the notion of core, a well-studied concept in cooperative game theory. A region is associated with the majority party for that region, and a point in that region is "unhappy" if it belongs to the minority party. A group of roughly n/k contiguous points is called a deviating group with respect to the partition if a majority of points in this group are unhappy. The partition is locally fair if there is no deviating group. The talk focuses on the case when points lie in one dimension, and the problem is non-trivial even in this case. We consider both adversarial and "beyond worst-case" settings for this problem. For the former, we characterize the input parameters for which a locally fair partition always exists, and for which it does not. We then consider beyond worst case input models where there are "runs" of red and blue points. For such clustered inputs, we show that an approximate locally fair partition exists if we allow some regions to have smaller sizes. We also present a polynomial-time algorithm for computing a locally fair partition if one exists. Time permitting, we will present preliminary empirical results on real-world maps.
Joint work with Pankaj K. Agarwal, Shao-Heng Ko, and Erin Taylor. Appears in AAAI 2022.
---
Eric Neyman (Columbia)
Are You Smarter Than a Random Expert? The Robust Aggregation of Substitutable Signals (arXiv)
The problem of aggregating expert forecasts is ubiquitous in fields as wide-ranging as machine learning, economics, climate science, and national security. Despite this, our theoretical understanding of this question is fairly shallow. This paper initiates the study of forecast aggregation in a context where experts' knowledge is chosen adversarially from a broad class of information structures. While in full generality it is impossible to achieve a nontrivial performance guarantee, we show that doing so is possible under a condition on the experts' information structure that we call projective substitutes. The projective substitutes condition is a notion of informational substitutes: that there are diminishing marginal returns to learning the experts' signals. We show that under the projective substitutes condition, taking the average of the experts' forecasts improves substantially upon the strategy of trusting a random expert. We then consider a more permissive setting, in which the aggregator has access to the prior. We show that by averaging the experts' forecasts and then extremizing the average by moving it away from the prior by a constant factor, the aggregator's performance guarantee is substantially better than is possible without knowledge of the prior. Our results give a theoretical grounding to past empirical research on extremization and help give guidance on the appropriate amount to extremize.
Joint work with Tim Roughgarden.
Session Chair: Vincent Merlin (Caen)
---
Florian Brandl (Bonn)
Efficient, Fair, and Incentive-Compatible Healthcare Rationing
Rationing of healthcare resources has emerged as an important issue, which has been discussed by medical experts, policy-makers, and the general public. We consider a rationing problem where medical units are to be allocated to patients. Each unit is reserved for one of several categories and each category has a priority ranking of the patients. We present an allocation rule that respects the priorities, complies with the eligibility requirements, allocates the largest feasible number of units, and does not incentivize agents to hide that they qualify through a category. The rule characterizes all possible allocations that satisfy the first three properties and is polynomial-time computable.
Joint work with Haris Aziz.
---
Maria Polukarov (London)
Multiwinner Candidacy Games
In strategic candidacy games, both voters and candidates have preferences over the set of possible election outcomes, and candidates may strategically withdraw from the election in order to manipulate the result in their favour. In this work, we extend the candidacy game model to the setting of multiwinner elections, where the goal is to select a fixed-size subset of candidates (a committee), rather than a single winner. We examine the existence and properties of Nash equilibria in the resulting class of games, under various voting rules and voter preference structures.
This is joint work with Svetlana Obraztsova, Edith Elkind, and Marek Grzesiuk.
Session Chair: Ulle Endriss (Amsterdam)
--
Iannis Caragiannis (Aarhus)
The Complexity of Learning Approval-Based Multiwinner Voting Rules
We study the PAC learnability of multiwinner voting, focusing on the class of approval-based committee scoring (ABCS) rules. These are voting rules applied on profiles with approval ballots, where each voter approves some of the candidates. ABCS rules adapt positional scoring rules in single-winner voting by assuming that each committee of k candidates collects from each voter a score that depends on the size of the voter's ballot and on the size of its intersection with the committee. Then, committees of maximum score are the winning ones. Our goal is to learn a target rule (i.e., to learn the corresponding scoring function) using information about the winning committees of a small number of sampled profiles. Despite the existence of exponentially many outcomes compared to single-winner elections, we show that the sample complexity is still low: a polynomial number of samples carries enough information for learning the target committee with high confidence and accuracy. Unfortunately, even simple tasks that need to be solved for learning from these samples are intractable. We prove that deciding whether there exists some ABCS rule that makes a given committee winning in a given profile is a computationally hard problem. Our results extend to the class of sequential Thiele rules, which have received attention due to their simplicity.
Joint work with Karl Fehrs.
--
Erel Segal-Halevi (Ariel)
Fair Division of Electricity
The supply of electricity in developing countries is often insufficient to satisfy the total demand of all households. This raises the question of how to fairly divide the electricity among the households, taking into account their different demands and utilities at different times. I will show the connections of this problem to classic combinatorial and social choice problems such as bin packing, fractional approval voting, cake-cutting and consensus division. As this is a work in progress, the talk will contain more questions than answers.
Joint work with Dinesh Kumar Baghel.
Session Chair: Edith Elkind (Oxford)
--
Patrick Lederer (Munich)
Characterizing the Top Cycle via Strategyproofness
Gibbard and Satterthwaite have shown that the only single-valued social choice functions (SCFs) that satisfy non-imposition (i.e., the function’s range coincides with its codomain) and strategyproofness (i.e., voters are never better off by misrepresenting their preferences) are dictatorships. In this paper, we consider set-valued social choice correspondences (SCCs) that are strategyproof according to Fishburn's preference extension and, in particular, the top cycle, an attractive SCC that returns the maximal elements of the transitive closure of the weak majority relation. Our main theorem implies that, under mild conditions, the top cycle is the only non-imposing strategyproof SCC whose outcome only depends on the quantified pairwise comparisons between alternatives. This result effectively turns the Gibbard-Satterthwaite impossibility into a complete characterization of the top cycle by moving from SCFs to SCCs. It is obtained as a corollary of a more general characterization of strategyproof SCCs.
Joint work with Felix Brandt.
https://arxiv.org/abs/2108.04622
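To make the definition concrete, the Python sketch below computes the top cycle of a profile of strict rankings as the maximal elements of the transitive closure of the weak majority relation, via brute-force reachability. It is purely illustrative and not the paper's machinery.

```python
def top_cycle(profile, alternatives):
    """profile: list of strict rankings (lists, best first)."""
    def weak_majority(x, y):                  # x beats or ties y
        wins = sum(r.index(x) < r.index(y) for r in profile)
        return wins >= len(profile) - wins
    reach = {x: {y for y in alternatives if y != x and weak_majority(x, y)}
             for x in alternatives}
    changed = True                            # crude transitive closure
    while changed:
        changed = False
        for x in alternatives:
            new = set().union(*(reach[y] for y in reach[x])) - reach[x] - {x}
            if new:
                reach[x] |= new
                changed = True
    return {x for x in alternatives           # maximal elements of the closure
            if all(x not in reach[y] or y in reach[x]
                   for y in alternatives if y != x)}
```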
--
Maria Kyropoulou (Essex)
On Interim Envy-Free Allocation Lotteries
With very few exceptions, recent research in fair division has mostly focused on deterministic allocations. Deviating from this trend, we study the fairness notion of interim envy-freeness (iEF) for lotteries over allocations, which serves as a sweet spot between the too stringent notion of ex-post envy-freeness and the very weak notion of ex-ante envy-freeness. iEF is a natural generalization of envy-freeness to random allocations in the sense that a deterministic envy-free allocation is iEF (when viewed as a degenerate lottery). It is also certainly meaningful as it allows for a richer solution space, which includes solutions that are provably better than envy-freeness according to several criteria. Our analysis relates iEF to other fairness notions as well, and reveals tradeoffs between iEF and efficiency. Even though several of our results apply to general fair division problems, we are particularly interested in instances with equal numbers of agents and items where allocations are perfect matchings of the items to the agents. Envy-freeness can be trivially decided and, when it can be achieved, implies full efficiency in this setting. Although computing iEF allocations in matching allocation instances is considerably more challenging, we show how to compute them in polynomial time, while also maximizing several efficiency objectives. Our algorithms use the ellipsoid method for linear programming and efficient solutions to a novel variant of the bipartite matching problem as a separation oracle. We also study the extension of the interim envy-freeness notion when payments to or from the agents are allowed. We present a series of results on two optimization problems, including a generalization of the classical rent division problem to random allocations using interim envy-freeness as the solution concept.
Joint work with Ioannis Caragiannis and Panagiotis Kanellopoulos.
Session Chair: Bill Zwicker (Union College)
---
Jiehua Chen (Vienna)
Fractional Matchings under Preferences: Stability and Optimality
We study generalizations of stable matching in which agents may be matched fractionally; this models time-sharing assignments. We focus on the so-called ordinal stability and cardinal stability, and investigate the computational complexity of finding an ordinally stable or cardinally stable fractional matching which either maximizes the social welfare (i.e., the overall utilities of the agents) or the number of fully matched agents (i.e., agents whose matching values sum up to one). We complete the complexity classification of both optimization problems for both ordinal stability and cardinal stability, distinguishing between the marriage (bipartite) and roommates (non-bipartite) cases and the presence or absence of ties in the preferences. In particular, we prove a surprising result that finding a cardinally stable fractional matching with maximum social welfare is NP-hard even for the marriage case without ties. This answers an open question and exemplifies a rare variant of stable marriage that remains hard for preferences without ties. We also complete the picture of the relations of the stability notions and derive structural properties.
---
Conal Duddy (Cork)
Fairness in Random Assignment
In the random assignment problem, there are n agents and n indivisible objects. Each agent has a preference ordering over the objects. We seek a method of assigning one object to each agent, using some randomisation to achieve fairness. The central solutions in the literature are Random Priority and the Serial Rule. In this talk, I will propose a new fairness principle and solution for this problem.
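As a concrete reference point, here is a minimal sketch of one draw of the Random Priority rule mentioned above; the lottery itself is the distribution over such draws (example data and names are ours):

```python
import random

def random_priority(preferences):
    """One draw of Random Priority: shuffle the agents into a uniformly
    random priority order, then let each agent in turn take her favorite
    remaining object. preferences[i] ranks the objects, best first."""
    order = list(range(len(preferences)))
    random.shuffle(order)
    remaining = set(range(len(preferences)))
    assignment = {}
    for i in order:
        pick = next(o for o in preferences[i] if o in remaining)
        assignment[i] = pick
        remaining.remove(pick)
    return assignment

# Two agents with identical preferences: each gets object 0 half the time.
print(random_priority([[0, 1], [0, 1]]))
```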
Session Chair: Ulle Endriss (Amsterdam)
---
Svetlana Obraztsova (Singapore)
Securing Election Outcomes: The Good, the Bad, and the Reality
The complexity of voting manipulation is one of the most important topics in computational social choice. In this talk, two new approaches to this problem will be shown. In the beginning, a two-stage voting manipulation scenario will be considered. First, a malicious party (an attacker) attempts to manipulate the election outcome in favor of a preferred candidate by changing the vote counts in some of the voting districts. Afterwards, another party (a defender), which cares about the voters' wishes, demands a recount in a subset of the manipulated districts, restoring their vote counts to their original values. The resulting Stackelberg game for the case where votes are aggregated using two variants of the Plurality rule will be described, and an almost complete picture of the complexity landscape will be shown, both from the attacker's and from the defender's perspective. In the second part of the talk, the computational problem of constructive election control via issue selection will be covered. In this problem, a party decides which political issues to focus on to ensure victory for the favored candidate.
---
Ali Ozkes (Paris)
Voting Behavior in One-Shot and Iterative Multiple Referenda
We consider a set of voters making a collective decision via simultaneous vote on two binary issues. Voters' preferences are captured by payoffs assigned to combinations of outcomes for each issue and they can be nonseparable: a voter's preference over an issue might depend on the other issue. When the collective decision in this context is reached by voting on both issues at the same time, multiple election paradoxes may arise, as studied extensively in the theoretical literature. In this paper we pursue an experimental approach and investigate the impact of iterative voting, in which groups deliberate by repeating the voting process until a final outcome is reached. Our results from experiments run in the lab show that voters tend to behave optimistically rather than pessimistically when casting a vote on a non-separable issue, and that iterative voting may in fact improve the social outcome. We provide the first comprehensive empirical analysis of individual and collective behavior in the multiple referendum setting.
Joint work with Umberto Grandi, Jérôme Lang, and Stéphane Airiau.
Session Chair: Ulle Endriss (Amsterdam)
---
Vincent Merlin (Caen)
The Probability of Disputable Outcomes under Direct and Indirect Elections (Slides)
Defenders of the Electoral College routinely invoke a traditional argument to reject proposals for a national popular vote. Granted, they say, Florida in 2000 was a national nightmare, but the agony would be far greater if such a dispute extended over the entire nation. Proponents of a direct vote reply by conjecturing that the Electoral College increases the frequency of disputable elections. Indeed, the 2020 US presidential election was another instance of a disputable election on the legal front. We investigate whether we should expect disputable outcomes to be more frequent under the present US indirect system as compared with a direct vote and, if so, by how much. We use two methods: a historical analysis of actual outcomes in presidential elections, and an a priori formal model borrowed from Social Choice Theory (IAC). Depending on the thresholds one posits for disputability, the historical analysis shows that disputable elections have been about two to six times more frequent under the Electoral College. Assuming that all states have the same population, the IAC model produces an impressively compatible intermediate ratio of 4:1. We also explore the impact of differences in the population of the states on the likelihood of legal disputes.
Joint work with Jack Nagel and Théo Duchemin.
---
Nicholas Mattei (Tulane)
Modeling Voters in Multi-Winner Approval Voting (Slides)
In many real world situations, collective decisions are made using voting and, in scenarios such as committee or board elections, employing voting rules that return multiple winners. In multi-winner approval voting (AV), an agent submits a ballot consisting of approvals for as many candidates as they wish, and winners are chosen by tallying up the votes and choosing the top-k candidates receiving the most approvals. In many scenarios, an agent may manipulate the ballot they submit in order to achieve a better outcome by voting in a way that does not reflect their true preferences. In complex and uncertain situations, agents may use heuristics instead of incurring the additional effort required to compute the manipulation which most favors them. In this paper, we examine voting behavior in single-winner and multi-winner approval voting scenarios with varying degrees of uncertainty using behavioral data obtained from Mechanical Turk. We find that people generally manipulate their vote to obtain a better outcome, but often do not identify the optimal manipulation. There are a number of predictive models of agent behavior in the social choice and psychology literature that are based on cognitively plausible heuristic strategies. We show that the existing approaches do not adequately model our real-world data. We propose a novel model that takes into account the size of the winning set and human cognitive constraints; and demonstrate that this model is more effective at capturing real-world behaviors in multi-winner approval voting scenarios.
Joint work with Jaelle Scheuerman, Jason Harman, and K. Brent Venable.
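For reference, the underlying tallying step of multi-winner approval voting is straightforward; a sketch (the tie-breaking convention is our illustrative choice):

```python
from collections import Counter

def approval_winners(ballots, k):
    """Multi-winner approval voting: count approvals and return the top-k
    candidates. Each ballot is a set of approved candidates; ties are
    broken here alphabetically, one of many possible conventions."""
    tally = Counter(c for ballot in ballots for c in ballot)
    return sorted(tally, key=lambda c: (-tally[c], c))[:k]

ballots = [{"a", "b"}, {"b"}, {"a", "c"}, {"b", "c"}]
print(approval_winners(ballots, k=2))  # ['b', 'a']
```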
We are going to have eight speakers, each speaking for 5 minutes.
Speakers: Farhad Mohsin (RPI), Sonja Kraiczy (Oxford), Michał T. Godziszewski (Warsaw), Eyal Leizerovich (BGU), Neel Karia (IIT Kharagpur), Fatima Umar (Rochester), Péter Biró (Budapest), Sricharan AR (Chennai)
Session Chairs: Edith Elkind (Oxford) and Dominik Peters (Toronto)
---
Rida Laraki (Paris/Liverpool)
Stable Matching Games
Gale and Shapley introduced a matching problem between two sets of agents where each agent on one side has an exogenous preference over the agents of the other side. They defined a matching as stable if no unmatched pair can both improve their utility by forming a new pair. They proved, algorithmically, the existence of a stable matching. Shapley and Shubik, as well as many others, extended the model by allowing monetary transfers. We offer a further extension by assuming that matched couples obtain their payoff endogenously as the outcome of a strategic game they have to play, either in the usual non-cooperative sense (without commitment) or in a semi-cooperative way (with commitment, as the outcome of a bilateral binding contract in which each player is responsible for his/her part of the contract). Depending on whether the players can commit or not, we define in each case a solution concept which combines Gale-Shapley pairwise external stability with a (generalized) Nash equilibrium internal stability. In each case, we give the necessary and sufficient conditions for the set of stable allocations to be non-empty, we study its geometry (full/semi-lattice), and provide an algorithm which converges to its maximal element. Finally, we prove that our second model (with commitment) encompasses and refines most of the literature (matching with monetary transfers as well as matching with contracts).
Joint work with Felipe Garrido-Lucero.
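For background, a compact sketch of the Gale-Shapley deferred acceptance procedure whose setting the talk extends (the example preferences are ours):

```python
def deferred_acceptance(proposer_prefs, receiver_prefs):
    """Gale-Shapley deferred acceptance, returning the proposer-optimal
    stable matching. Preferences are rankings, best first."""
    rank = [{p: r for r, p in enumerate(prefs)} for prefs in receiver_prefs]
    free = list(range(len(proposer_prefs)))   # unmatched proposers
    next_choice = [0] * len(proposer_prefs)   # next index to propose to
    engaged = {}                              # receiver -> proposer
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in engaged:
            engaged[r] = p
        elif rank[r][p] < rank[r][engaged[r]]:  # r prefers the newcomer
            free.append(engaged[r])
            engaged[r] = p
        else:
            free.append(p)
    return {p: r for r, p in engaged.items()}

print(deferred_acceptance([[0, 1], [0, 1]], [[1, 0], [0, 1]]))  # {1: 0, 0: 1}
```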
---
Bill Zwicker (Union College)
John Nash meets Jorge Hirsch: Scale Invariant Citation Indices (Slides)
A number of citation indices have been proposed for measuring and ranking the research publication records of scholars. Some of the best-known indices, such as those proposed by Hirsch and Woeginger, are designed to reward most highly those records that strike some balance between productivity (number of papers published) and impact (frequency with which those papers are cited). A large number of rarely cited publications will not score well, nor will a very small number of heavily cited papers. We propose several new citation indices, each resting on the notion of scale invariance, as introduced by John Nash in his solution of the two-person bargaining problem. Our main focus is on one of these: a scale invariant version of the Hirsch index. We argue that it has advantages over the original: it produces fairer rankings within subdisciplines, is more decisive (discriminating more finely, yielding fewer ties), and is more dynamic (growing over time via more frequent, smaller increments). Simulations with Poisson noise suggest that scale invariance reduces the number of "accidental" reversals, wherein random irregularities cause researcher A to receive a lower index value than B, although A's productivity and impact are both slightly higher than B's. Moreover, we provide an axiomatic characterization of the scale invariant Hirsch index, via axioms that bear a close relationship, in discrete analogue, to those used by Nash. This argues for the mathematical naturality of the new index.
Joint work with Josep Freixas and Roger Hoerl.
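For background, a sketch of the classical Hirsch index that the scale-invariant variant modifies (the variant itself is defined in the paper and not reproduced here):

```python
def h_index(citations):
    """Classical Hirsch index: the largest h such that at least h papers
    have at least h citations each."""
    cites = sorted(citations, reverse=True)
    # Since cites is non-increasing, this count equals the largest such h.
    return sum(1 for i, c in enumerate(cites) if c >= i + 1)

# Five papers, three of which are cited at least 3 times, so h = 3.
print(h_index([10, 5, 3, 2, 0]))  # 3
```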
Session Chair: Ulle Endriss (Amsterdam)
---
Nicolas Maudet (Paris)
Swap Dynamics in Single-Peaked Housing Markets (Slides)
In this talk we will discuss recent results regarding swap dynamics in single-peaked housing markets. In this restricted domain, Top Trading Cycle (TTC) is no longer the only procedure satisfying Pareto-optimality, individual rationality and strategy-proofness. In particular, the Crawler procedure recently introduced by Bade enjoys the same properties. These two centralized procedures might however involve long trading cycles. In swap dynamics, agents perform instead pairwise mutually improving deals until a stable allocation is reached. We will discuss the properties of such dynamics, and show in particular how the outcomes reached compare to those returned by these centralized procedures.
Joint work with Aurélie Beynier, Simon Rey, and Parham Shams.
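For reference, a compact sketch of the TTC procedure mentioned above, under the usual convention that agent i initially owns house i (the example is ours; Bade's Crawler is not reproduced here):

```python
def top_trading_cycles(preferences):
    """Top Trading Cycles: each round, every remaining agent points to the
    owner of her favorite remaining house; agents on a cycle trade along it
    and leave. preferences[i] ranks the houses, best first."""
    remaining = set(range(len(preferences)))
    assignment = {}
    while remaining:
        points_to = {i: next(h for h in preferences[i] if h in remaining)
                     for i in remaining}
        # Follow pointers from some agent until a cycle closes.
        seen, i = [], min(remaining)
        while i not in seen:
            seen.append(i)
            i = points_to[i]
        cycle = seen[seen.index(i):]
        for a in cycle:  # trade along the cycle; agents and houses leave
            assignment[a] = points_to[a]
        remaining -= set(cycle)
    return assignment

print(top_trading_cycles([[1, 0, 2], [0, 1, 2], [1, 0, 2]]))  # {0: 1, 1: 0, 2: 2}
```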
---
Eric Pacuit (Maryland)
Expansion Consistency in Voting (Slides)
Sen introduced the gamma condition, also known as expansion consistency, as part of a characterization of choice functions representable by a binary relation. Following Bordes, we consider expansion as an axiom on voting methods—as well as a natural weakening of expansion hinted at by Bordes, which we call binary expansion. Surveying standard voting methods, there is an apparent connection between the degree of resoluteness of a voting method and whether it satisfies (binary) expansion. We explain this observation with a new impossibility theorem: There is no voting method satisfying anonymity, neutrality, binary expansion, and an axiom we call quasi-resoluteness, which requires a single winner in any profile without tied margins. Finally, we explore the frequency with which some voting methods violate binary expansion using computer simulations.
This is joint work with Wes Holliday.
Session Chair: Vincent Merlin (Caen)
---
Julia Stoyanovich (New York)
Algorithmic Techniques for Necessary and Possible Winners
We investigate the practical aspects of computing the necessary and possible winners in elections over incomplete voter preferences. In the case of the necessary winners, we show how to implement and accelerate the polynomial-time algorithm of Xia and Conitzer. In the case of the possible winners, where the problem is NP-hard, we give a natural reduction to Integer Linear Programming (ILP) for all positional scoring rules and implement it in a leading commercial optimization solver. Further, we devise optimization techniques to minimize the number of ILP executions and, oftentimes, avoid them altogether. We conduct a thorough experimental study that includes the construction of a rich benchmark of election data based on real and synthetic data. Our findings suggest that, the worst-case intractability of the possible winners notwithstanding, the algorithmic techniques presented here scale well and can be used to compute the possible winners in realistic scenarios.
Joint work with Vishal Chakraborty, Théo Delemazure, Benny Kimelfeld, Phokion Kolaitis, and Kunal Relia.
---
Simina Brânzei (Purdue)
Searching, Sorting, and Cake Cutting in Rounds
We study sorting and searching in rounds, motivated by a cake cutting problem. The search problem we consider is the following: we are given an array x=(x1,…,xn) and an element z promised to be in the array. We have access to an oracle that answers comparison queries and the goal is to find the location of z with success probability at least p∈[0,1] in at most k rounds of interaction with the oracle. The problem is called ordered or unordered search, depending on whether the array x is sorted or unsorted, respectively. We analyze the expected query complexity of randomized and deterministic algorithms, finding a surprising query complexity gap between the performance of randomized algorithms on a worst-case input and deterministic algorithms on a random input. We also discuss the connections of these search problems to sorting in the rank query model and proportional cake cutting with contiguous pieces.
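To give a flavor of the round model, here is a sketch of one standard strategy for ordered search in k rounds, where each round issues one batch of parallel probes (an illustration of the model, not necessarily the algorithms analyzed in the talk):

```python
import math

def ordered_search_in_rounds(xs, z, k):
    """Locate z in the sorted list xs using at most k rounds of batched
    comparison queries: each round splits the current range into about
    m**(1/rounds_left) blocks and probes all block boundaries at once.
    The last round probes every remaining position, so the answer is exact."""
    lo, hi = 0, len(xs)  # invariant: z lies in xs[lo:hi]
    for rounds_left in range(k, 0, -1):
        if hi - lo <= 1:
            break
        blocks = max(2, math.ceil((hi - lo) ** (1 / rounds_left)))
        step = math.ceil((hi - lo) / blocks)
        probes = range(lo, hi, step)          # one batch of queries
        lo = max(p for p in probes if xs[p] <= z)
        hi = min(lo + step, hi)
    return lo

xs = list(range(0, 200, 2))
print(ordered_search_in_rounds(xs, 58, k=2))  # 29, since xs[29] == 58
```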
Please note that for this session we will start one hour later than usual: 10am New York / 4pm Paris.
Session Chair: Edith Elkind (Oxford)
---
Phokion Kolaitis (UC Santa Cruz & IBM)
Semantics and Complexity of Queries on Election Databases (Slides)
Election databases are the main elements of a new framework that aims to create bridges between the computational social choice and the data management communities. An election database contains incomplete information about the preferences of voters (in the form of partial orders), alongside standard database relations that provide contextual information. The availability of the relational context allows for the formulation of sophisticated queries about voting rules, candidates, winners, issues, and positions on issues. We introduce the necessary answer semantics and the possible answer semantics for queries on election databases and explore the computational complexity of query evaluation under these semantics.
Joint work with Benny Kimelfeld, Julia Stoyanovich, and Muhammad Tibi.
---
Judy Goldsmith (Kentucky)
Opinion Diffusion with Heterogeneous Agents in a Dynamic Network (Slides)
I will present recent work with Patrick Shepherd and Mia Weaver on modeling opinion diffusion in networks with homophilic, heterophilic, and adversarial agents. We assume that the network can apply triadic closure, but agents can unfriend each other in order to increase their utility. This work serves to highlight Shepherd's social network software package, which can also support simulations of any social-network-based social choice process. I will briefly mention our ongoing work on modeling individuals' willingness to confront biased remarks, and on learning agent strategies for epidemic responses.
The latter is joint work with Isaac Batts, Abigail Folberg, Emory Hufbauer, Jennifer Hunt, Hana Khamfroush, Patrick Shepherd, Mia Weaver, and Angela Zhang.
This will be a session with 5-minute talks by 8 different speakers. A great way of finding out what people are working on.
Speakers: Georgios Papasotiropoulos (Athens), Ayumi Igarashi (Tokyo), Boris Tsvelikhovskiy (Pittsburgh), Nimrod Talmon (Ben-Gurion), Paul Harrenstein (Oxford), Kunal Relia (New York), Chinmay Sonar (UC Santa Barbara), Foroogh Salekpay (Rovira i Virgili)
Session Chair: Ulle Endriss (Amsterdam)
Session Chair: Bill Zwicker (Union College)
---
Felix Brandt (Munich)
A Natural Adaptive Process for Collective Decision-Making (Slides)
Consider an urn filled with balls, each labelled with one of several possible collective decisions. Now, draw two balls from the urn, let a random voter pick the one she prefers as the collective decision, relabel the losing ball with the collective decision, put both balls back into the urn, and repeat. In order to prevent the permanent disappearance of some types of balls, a randomly drawn ball is labelled with a random collective decision once in a while. We prove that the empirical distribution of collective decisions converges towards the outcome of a celebrated probabilistic voting rule proposed by Peter C. Fishburn (Rev. Econ. Stud., 51(4), 1984). The proposed procedure has analogues in nature recently studied in biology, physics, and chemistry. It is more flexible than traditional voting rules because it does not require a central authority, elicits very little information, and allows agents to arrive, leave, and change their preferences over time.
Joint work with Florian Brandl.
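The process is simple enough to simulate directly. A sketch with illustrative parameters (urn size and mutation rate are our choices); with the cyclic profile below, the empirical distribution should approach the maximal lottery, here uniform over the three alternatives:

```python
import random
from collections import Counter

def urn_process(voters, alternatives, steps, urn_size=20, mutation=0.01):
    """Simulate the adaptive urn process described above. voters is a
    list of rankings (best first); returns the empirical distribution
    of collective decisions."""
    urn = [random.choice(alternatives) for _ in range(urn_size)]
    decisions = Counter()
    for _ in range(steps):
        i, j = random.sample(range(urn_size), 2)
        ranking = random.choice(voters)  # a random voter
        # The voter picks her more preferred of the two drawn balls.
        if ranking.index(urn[i]) > ranking.index(urn[j]):
            i, j = j, i
        decisions[urn[i]] += 1
        urn[j] = urn[i]  # relabel the losing ball with the decision
        if random.random() < mutation:  # occasional random relabelling
            urn[random.randrange(urn_size)] = random.choice(alternatives)
    return decisions

voters = [["a", "b", "c"], ["b", "c", "a"], ["c", "a", "b"]]
print(urn_process(voters, ["a", "b", "c"], steps=100_000))
```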
---
Klaus Nehring (UC Davis)
The Median Rule in Judgment Aggregation: Extension to Weighted Judgment Contexts (Slides)
A judgement aggregation rule takes the views of a collection of voters over a set of interconnected issues and yields a logically consistent collective view. The median rule is a judgement aggregation rule that selects the logically consistent view which minimizes the average distance to the views of the voters (where the "distance" between two views is the number of issues on which they disagree). In the special case of preference aggregation, this is called the Kemeny rule. We show that, under appropriate regularity conditions, the median rule is the unique judgement aggregation rule which satisfies three axioms: Ensemble Supermajority Efficiency, Reinforcement, and Continuity. Our analysis covers aggregation problems in which the consistency restrictions on input and output judgements may differ. We also allow for issues to be weighted, and provide numerous examples in which issue weights arise naturally. The generalization to the weighted case requires non-trivial combinatorial conditions on the structure of the input and output spaces.
Joint work with Marcus Pivato.
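On small agendas the median rule can be evaluated by brute force; a minimal sketch for the unweighted case (the agenda and profile below are our illustration):

```python
from itertools import product

def median_rule(views, feasible):
    """Median rule: among the logically consistent views in `feasible`
    (tuples of 0/1 judgments, one entry per issue), return those that
    minimize the total Hamming distance to the voters' views."""
    def dist(u, v):
        return sum(a != b for a, b in zip(u, v))
    cost = {f: sum(dist(f, v) for v in views) for f in feasible}
    best = min(cost.values())
    return [f for f in feasible if cost[f] == best]

# Three issues constrained by "issue 2 holds iff issues 0 and 1 both do".
feasible = [v for v in product([0, 1], repeat=3) if v[2] == (v[0] and v[1])]
views = [(1, 1, 1), (1, 0, 0), (0, 1, 0)]  # a doctrinal-paradox profile
print(median_rule(views, feasible))
```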
For this session we asked Jérôme Lang (Paris) to invite speakers and chair the session.
---
Rohit Vaish (Mumbai)
Stable Fractional Matchings (Slides)
In this talk, I will discuss a generalization of the classical stable matching problem that allows for cardinal preferences (as opposed to ordinal) and fractional matchings (as opposed to integral). In this cardinal setting, stable fractional matchings can have much higher social welfare than their integral counterparts. This motivates the natural question of understanding the computational complexity of finding an optimal (i.e., welfare-maximizing) stable fractional matching. I will present simple approximation algorithms for this problem with weak welfare guarantees and show that, somewhat surprisingly, achieving better approximations is computationally hard. To the best of our knowledge, these are the first computational complexity results for stable fractional matchings. I will also discuss some structural observations as well as directions for future research.
Joint work with Ioannis Caragiannis, Aris Filos-Ratsikas, and Panagiotis Kanellopoulos.
https://arxiv.org/abs/1902.06698
---
John Dickerson (Maryland)
Scalable Equilibrium Computation in Multi-agent Influence Games on Networks
In games of influence, agents have a limited advertising budget to influence the initial predisposition of nodes in some network towards their products, but the eventual decisions of the nodes are determined by the stationary state of DeGroot opinion dynamics on the network, which takes over after the seeding. In multi-agent systems, how should agents spend their budgets to seed the network to maximize their utility in anticipation of other advertising agents and the network dynamics? In this talk, we will discuss our recent polynomial-time, scalable algorithm for equilibrium computation in multi-agent influence games on networks, extending work of Bindel, Kleinberg, and Oren (2015) from the single-agent to the multi-agent setting. Specifically, we showed that Nash equilibria of this game are pure and (under weak assumptions) unique, and can be computed in polynomial time. We will also discuss a preliminary test of our model in the two-agent case on random graphs, where equilibria can be computed using mirror descent.
Joint work with Fotini Christia, Michael Curry, Constantinos Daskalakis, Erik Demaine, MohammadTaghi Hajiaghayi, Adam Hesterberg, Marina Knittel, and Aidan Milliff. Preliminary version appeared at AAAI-2021.
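For background, the DeGroot dynamics underlying the model amounts to iterated weighted averaging; a minimal sketch (the game in the talk additionally lets agents seed the initial opinions; the weights below are illustrative):

```python
import numpy as np

def degroot_fixed_point(W, x0, tol=1e-10, max_iter=100_000):
    """Iterate the DeGroot update x <- W @ x to (approximate) convergence.
    W is row-stochastic: W[i, j] is the weight agent i puts on agent j."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_next = W @ x
        if np.max(np.abs(x_next - x)) < tol:
            return x_next
        x = x_next
    return x

W = np.array([[0.6, 0.4, 0.0],
              [0.3, 0.4, 0.3],
              [0.0, 0.5, 0.5]])
print(degroot_fixed_point(W, [1.0, 0.0, 0.0]))  # all entries reach consensus
```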
Session Chair: Vincent Merlin (Caen)
---
Ashley Piggins (Galway)
Pure Strategy Nash Equilibrium in the Spatial Model with Valence: Existence and Characterization (Slides)
Pure strategy Nash equilibria (PSNE) almost never exist in spatial majority voting games when the number of positional dimensions is at least two. This is a persistent foundational issue with the spatial model. However, positive results are possible when we add a valence parameter to the model. For example, it is known that an equilibrium exists in a majority voting game if and only if the radius of the yolk, a classic concept in spatial theory, is less than the square root of the valence parameter. We prove a general theorem that enables us to characterize the set of all PSNE for every spatial voting game with valence. We complement this by establishing an equilibrium existence theorem based on a new concept, the b-yolk.
Joint with Mathieu Martin, Zéphirin Nganmeni, and Élise F. Tchouante.
---
Aris Filos-Ratsikas (Liverpool)
Distortion-Information Tradeoffs in Social Choice and Matching via Queries
Aggregating the preferences of individuals into a collective decision is the core subject of study of social choice theory. In 2006, Procaccia and Rosenschein considered a utilitarian social choice setting, where the agents have explicit numerical values for the alternatives, yet they only report their linear orderings over them. To compare different aggregation mechanisms, Procaccia and Rosenschein introduced the notion of distortion, which quantifies the inefficiency of using only ordinal information when trying to maximize the social welfare, i.e., the sum of the underlying values of the agents for the chosen outcome. Since then, this research area has flourished and bounds on the distortion have been obtained for a wide variety of fundamental scenarios. However, the vast majority of the existing literature is focused on the case where nothing is known beyond the ordinal preferences of the agents over the alternatives. We take a more expressive approach, and consider mechanisms that are allowed to further ask a few cardinal queries in order to gain partial access to the underlying values that the agents have for the alternatives. With this extra power, we design new deterministic mechanisms that achieve significantly improved distortion bounds and, in many cases, outperform the best-known randomized ordinal mechanisms. We consider both the setting of general social choice and one-sided matching, and provide bounds on the possible tradeoffs between the distortion and the amount of information elicited.
Joint work with Georgios Amanatidis, Georgios Birmpas, and Alexandros Voudouris.
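As a toy illustration of the distortion notion (our example, not from the talk): an ordinal mechanism cannot tell weak preferences from strong ones, so following the majority can lose welfare:

```python
def distortion(values, chosen):
    """Distortion on one profile: the optimal utilitarian welfare divided
    by the welfare of the alternative the mechanism chose. values[i][a]
    is agent i's hidden value for alternative a."""
    welfare = [sum(v[a] for v in values) for a in range(len(values[0]))]
    return max(welfare) / welfare[chosen]

# Two agents barely prefer alternative 0; one strongly prefers alternative 1.
# An ordinal rule following the majority picks 0, although 1 has almost
# twice the total value.
values = [[0.51, 0.49], [0.51, 0.49], [0.0, 1.0]]
print(distortion(values, chosen=0))  # about 1.94
```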
Session Chair: Bill Zwicker (Union College)
---
Franz Dietrich (Paris)
Dynamically Rational Judgment Aggregation (Slides)
A key goal in judgment aggregation theory has always been to make collective judgments rational. So far, rationality has been understood in purely static terms: as coherence of judgments at a given time, where 'coherence' could for instance mean consistency, or completeness, or deductive closure, or combinations thereof. By contrast, this paper studies whether and how collective judgments can be dynamically rational, so that they respond well to new information, i.e., change rationally when information is learnt by everyone. Formally, we call a judgment aggregation rule dynamically rational with respect to a given revision operator if, whenever all individuals revise their judgments in light of some information (a proposition), then the new aggregate judgments are the old ones revised in light of this information. In short, aggregation and revision commute. We establish a general impossibility theorem: as long as the propositions on the agenda are sufficiently interconnected, no judgment aggregation rule with standard properties is dynamically rational with respect to any revision operator satisfying mild conditions (familiar from belief revision theory). The theorem is a counterpart for dynamic rationality of known impossibility theorems for static rationality. Turning to possibilities, the paper then makes a first attempt to restore dynamic rationality, by showing that certain rules ('hierarchy rules') generate dynamic rationality with respect to special ('hierarchical') revision operators. Like the familiar 'priority rules', such rules are guided by priorities between the propositions.
Joint work with Christian List.
http://franzdietrich.net/Papers/DietrichList-DynamicallyRationalJA.pdf
---
Sean Horan (Montréal)
Two-Stage Majoritarian Choice (Slides)
We propose a class of decisive collective choice rules that rely on an exogenous linear ordering to partition the majority relation into two acyclic relations. The first relation is used to make a shortlist of the feasible alternatives before the second is used to make a final choice. Rules in this class are characterized by four properties: two classical rationality requirements (Sen's expansion consistency and Manzini and Mariotti's weak WARP); and adaptations of two natural collective choice conditions (Arrow's independence of irrelevant alternatives and Saari and Barney's no preference reversal bias). These rules also satisfy a number of other desirable properties, including a version of May's positive responsiveness.
Joint work with Yves Sprumont.
Session Chair: Edith Elkind (Oxford)
---
Umberto Grandi (Toulouse)
Multiagent Ranked Delegations in Voting (Slides)
We propose a generalisation of liquid democracy in which a voter can either vote directly on the issues at stake, delegate her vote to another voter, or express complex delegations to a set of trusted voters. By requiring a ranking of desirable delegations and a backup vote from each voter, we are able to put forward and compare algorithms to solve delegation cycles and obtain a final collective decision.
Joint work with Rachael Colley and Arianna Novaro.
---
Nimrod Talmon (Ben-Gurion)
Participatory Budgeting with Project Interactions (Slides)
Participatory budgeting systems allow city residents to jointly decide on projects they wish to fund using public money, by letting residents vote on such projects. While participatory budgeting is gaining popularity, existing aggregation methods do not take into account the natural possibility of project interactions, such as substitution and complementarity effects. Here we take a step towards fixing this issue by augmenting the standard model of participatory budgeting with a partition over the projects, modeling the type and extent of project interactions within each part using certain functions. We study the computational complexity of finding bundles that maximize voter utility, as defined with respect to such functions. Motivated by the desire to incorporate project interactions in real-world participatory budgeting systems, we identify certain cases that admit efficient aggregation in the presence of such project interactions.
Joint work with Pallavi Jain and Krzysztof Sornat.
Session Chair: Ulle Endriss (Amsterdam)
---
Kate Larson (Waterloo)
Testing Axioms against Human Reward Divisions in Cooperative Games
Axiomatic approaches are an appealing method for designing fair allocation algorithms, as they provide a formal structure for reasoning about and rationalizing individual decisions. However, to make these algorithms useful in practice, their axioms must appropriately capture social norms. We explore this tension between fairness axioms and socially acceptable decisions in the context of cooperative game theory for the fair division of rewards. We use two crowdsourced experiments to study people’s impartial reward divisions in cooperative games, focusing on games that systematically vary the values of the single-player coalitions. Our results show that people select rewards that are remarkably consistent, but place much more emphasis on the single-player coalitions than the Shapley value does. Further, their reward divisions violate both the null player and additivity axioms but support weaker axioms. We argue for a more general methodology of testing axioms against experimental data, retaining some of the conceptual simplicity of the axiomatic approach while still using people’s opinions to drive the design of algorithms.
Joint work with Greg d'Eon.
---
Ernesto Savaglio (Pescara)
On Rational Choice from Lists of Sets (Slides)
We analyse the rationality of a Decision Maker (DM) who chooses from lists of sets of alternatives. A new class of choice functions, representing DM's choice behaviour, and a novel rationality axiom named No-Regret (NR) are introduced. We show that the NR property, which suggests ignoring alternatives disregarded in a previous choice procedure, is related to the famous rationality axioms Outcast, Heritage and Path-independence. Such relationships depend on implementations of choice functions on lists of sets by means of the standard choice function on alternatives. We point out that No-Regret is a rationality criterion that encompasses these compelling properties of the classical choice model and extends them to the more general framework of choice from lists of sets of alternatives.
Joint work with Gleb Koshevoy.
For this session we asked Piotr Faliszewski (Kraków) to invite speakers and chair the session.
---
Oskar Skibski (Warsaw)
Axiomatic Characterization of PageRank (Slides)
We study the problem of identifying the most important nodes in a network. We propose to use an axiomatic approach to build a theoretical foundation for choosing a centrality measure in a specific setting. We propose six simple properties and prove that PageRank is the only centrality measure that satisfies all of them. Our axiomatization is the first axiomatic characterization of PageRank in its general form in the literature.
Joint work with Tomasz Wąs.
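For readers who want the measure in concrete form, a standard power-iteration computation of PageRank (a textbook formulation; the axioms are in the paper):

```python
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-10):
    """PageRank by power iteration; adj[i][j] = 1 if there is an edge
    i -> j. Rows of dangling nodes are replaced by the uniform distribution."""
    A = np.asarray(adj, dtype=float)
    outdeg = A.sum(axis=1, keepdims=True)
    P = np.divide(A, outdeg, out=np.full_like(A, 1 / len(A)),
                  where=outdeg > 0)
    r = np.full(len(A), 1 / len(A))
    while True:
        r_next = damping * r @ P + (1 - damping) / len(A)
        if np.max(np.abs(r_next - r)) < tol:
            return r_next
        r = r_next

print(pagerank([[0, 1, 1], [0, 0, 1], [1, 0, 0]]))
```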
---
Elliot Anshelevich (RPI)
Bounding Distortion under Metric Preferences using Awareness of Voter Passion (Slides)
Distortion measures the inefficiency of using only ordinal information when attempting to select an optimal outcome, instead of the unknown numerical information. For example, in social choice applications, agents may have numerical costs for each of the possible alternatives, but the mechanism must decide on a good alternative based only on the ordinal preferences of the agents which are induced by the underlying costs. In this talk, I will address bounding distortion under metric preferences, i.e., under the assumption that the agent costs form a metric space and thus obey the triangle inequality. I will specifically focus on how a small amount of additional information about agent preferences can greatly reduce distortion, and the resulting information-distortion tradeoffs.
Joint work with Ben Abramowitz and Wennan Zhu.
Session Chair: Edith Elkind (Oxford)
---
Markus Brill (Berlin)
The Maximin Support Method: An Extension of the D'Hondt Method to Approval-Based Multiwinner Elections (Slides)
We propose the maximin support method, a novel extension of the D'Hondt apportionment method to approval-based multiwinner elections. The maximin support method is a sequential procedure that aims to maximize the support of the least supported elected candidate. It can be computed efficiently and satisfies (adjusted versions of) the main properties of the original D'Hondt method: house monotonicity, population monotonicity, and proportional representation. We also establish a close relationship between the maximin support method and alternative D'Hondt extensions due to Phragmén.
Joint work with Luis Sánchez-Fernández, Norberto Fernández García, and Jesús A. Fisteus.
---
Robert Bredereck (Berlin)
Maximizing the Spread of an Opinion in Few Steps: Opinion Diffusion in Non-Binary Networks (Slides)
We consider the setting of asynchronous opinion diffusion with majority threshold: given a social network with each agent assigned to one opinion, an agent will update its opinion if the majority of its neighbors agree on a different opinion. The stabilized final outcome highly depends on the sequence in which agents update their opinion. We are interested in optimistic sequences: sequences that maximize the spread of a chosen opinion. We complement known results for two opinions, where optimistic sequences can be computed in time and length linear in the number of agents. We analyze upper and lower bounds on the length of optimistic sequences, showing quadratic bounds in the general case and linear bounds in the acyclic case. Moreover, we show that in networks with more than two opinions determining a spread-maximizing sequence becomes intractable; surprisingly, already with three opinions the intractability results hold in highly restricted cases, e.g., when each agent has at most three neighbors, when looking for a short sequence, or when we aim for approximate solutions.
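A sketch of the update rule and of why the sequence matters, under one natural reading of the majority threshold (graph and opinions are our illustration):

```python
from collections import Counter

def diffuse(opinions, neighbors, sequence):
    """Asynchronous majority diffusion: process agents in the given
    sequence; an agent adopts a different opinion whenever a strict
    majority of its neighbors hold it."""
    opinions = dict(opinions)
    for agent in sequence:
        counts = Counter(opinions[v] for v in neighbors[agent])
        op, cnt = counts.most_common(1)[0]
        if op != opinions[agent] and cnt > len(neighbors[agent]) / 2:
            opinions[agent] = op
    return opinions

# A star graph: updating the hub first lets opinion 1 spread everywhere.
neighbors = {"h": ["a", "b", "c"], "a": ["h"], "b": ["h"], "c": ["h"]}
ops = {"a": 1, "b": 1, "c": 0, "h": 0}
print(diffuse(ops, neighbors, ["h", "c"]))  # every agent ends with opinion 1
```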
Please note that this session will start one hour later than usual, namely at 10:00 New York / 16:00 Paris time.
Session Chair: Vincent Merlin (Caen)
---
Hervé Moulin (Glasgow)
Universal Guarantees in Bargaining (Slides)
In the n-person bargaining model, the familiar Mid-point Domination property guarantees each agent the worst utility she could get from a 1/n chance of dictating the outcome. Another natural Guarantee gives to each participant the right to veto her fair share of outcomes. Yet another sets the disagreement utility at the arithmetic average of all outcomes. Maintaining equal treatment of agents and outcomes, and vNM utilities, the set of such Guarantees is of infinite dimension even with two agents and three outcomes. The natural subset of rank-additive Guarantees is of finite dimension, and in two-agent problems it boils down to the convex combinations of the veto Guarantees above. But more complicated options are feasible among three or more agents.
Joint work with Anna Bogomolnaia.
---
Matías Núñez (Paris)
Voter Coordination in Large Elections: A Case for Approval Voting (Slides)
We study the potential for voter coordination in large elections. Drawing on equilibrium analysis and numerical simulations, we compare the three most basic rules for 3-candidate elections, under which voters may vote for exactly one candidate (Plurality Voting), for exactly two candidates (Anti-Plurality) or for one or two candidates (Approval Voting). As has been well known since Duverger (1951), Plurality Voting allows for voter coordination, but the election is indeterminate: at least two candidates are plausible winners. By contrast, coordination always fails under Anti-Plurality Voting. We further show that Approval Voting always permits coordination when a Condorcet winner exists, and also ensures that, in most cases, only this normatively appealing candidate can be elected. At the heart of our numerical results lies a novel algorithm computing voter best replies in a large election.
Joint work with François Durand and Antonin Macé.
Session Chair: Vincent Merlin (Caen)
---
Anna Bogomolnaia (Glasgow)
Guarantees in Fair Division (Slides)
We consider a fair division problem, with n agents and a set Ω to divide. An individual "guarantee" is a utility level which an agent is regarded as entitled to. It only depends on her preferences and exogenous rights, and on physical constraints of the division problem: the set Ω and the number n of participants (but it should not depend on preferences or other characteristics of other agents). For example, if Ω is a finite set of divisible goods and preferences are non-negative convex, a compelling symmetric and feasible guarantee is the individual's utility of (1/n)Ω. If Ω is arbitrary but preferences are additive non-negative, it would be (1/n)-th of the individual's utility of the whole Ω. We consider the general case, when utilities are continuous, but neither convex nor monotonic, or even positive. In a "cake-cutting" problem, we can always guarantee each agent her minmax utility: that of her best share in the worst possible partition. It is lower than her maxmin utility (that of her worst share in the best possible partition), which cannot always be guaranteed to all agents (even for n=2). Moreover, a little-known generalization of the Divide and Choose algorithm to any number n of agents implements this guarantee with a simple sequence of cuts and queries. These findings rely on the equi-partition lemma, which shows that, for any n and any continuous utility u on a bounded measurable Ω in Euclidean space, one can always find a partition of Ω into n elements with identical utility. This is a highly non-trivial mathematical result, proven in a companion paper by R. Karasev and S. Avvakumov.
Joint work with Hervé Moulin.
---
Jean-François Laslier (Paris)
Evolutionary Foundations of the Universalisation Ethics, with Political Applications (Slides)
In this talk I will describe the "Homo moralis" model of partial Kantian morality, its evolutionary roots (genetics and culture) and its philosophical interpretation (the universalisation principle). I will present behavioural and political implications for three classical problems in the theory of voting: the divided majority problem, the costly participation dilemma, and the strategic revelation of information.
Joint work with Ingela Alger.
Session Chair: Bill Zwicker (Union College)
---
Jörg Rothe (Düsseldorf)
What's New in Altruistic Hedonic Games? (Slides)
Hedonic games are coalition formation games where players have preferences over the coalitions they may join. Considering not only the players’ own (friend-oriented) preferences but also their friends’ preferences in such games, Nguyen et al. (AAMAS'16) introduced and studied three degrees of altruistic hedonic games based on selfish-first, equal-treatment, and altruistic-treatment preferences, which depend on the order in which players look at their own or their friends’ preferences. In this talk, a related approach due to Wiechers and Rothe (STAIRS'20) will be presented: By replacing the average by the minimum in their definitions, we introduce the corresponding minimum-based variants of the same three degrees of altruism. As innocent as this definitional change appears, it is as fundamental as switching from utilitarian to egalitarian social welfare. We study this novel concept of altruism in hedonic games in terms of the common notions of stability in such games, and the related decision problems in terms of their computational complexity. We also briefly mention an extension, due to Kerkmann and Rothe (IJCAI'20), of the model of altruism in hedonic games to coalition formation games in general.
---
Lirong Xia (RPI)
The Smoothed Possibility of Social Choice (Slides)
We develop a framework to leverage the smoothed complexity analysis by Spielman and Teng (2004) to circumvent paradoxes and impossibility theorems in social choice, motivated by the low-stakes, large-scale, and frequently-used modern applications of social choice powered by AI and ML. For Condorcet's paradox, we prove that the smoothed likelihood of the paradox either vanishes at an exponential rate as the number of agents increases, or does not vanish at all. For the folklore impossibility theorem on the non-existence of voting rules that simultaneously satisfy anonymity and neutrality, we characterize the rate for the impossibility to vanish, to be either polynomially fast or exponentially fast. We also propose a novel easy-to-compute tie-breaking mechanism that optimally preserves anonymity and neutrality for an even number of alternatives in natural settings. Our results illustrate the smoothed possibility of social choice: even though the paradox and the impossibility theorem hold in the worst case, they may not be a big concern in practice. The main technical tool we developed is a dichotomous characterization of the smoothed likelihood for a Poisson multinomial variable to satisfy a system of linear inequalities, which might be of independent interest and can be used to study various other events in social choice, such as the likelihood of tied elections, the likelihood of (weak) Condorcet winner(s) and Condorcet cycles, etc.
Session Chair: Bill Zwicker (Union College)
---
Dominik Peters (Harvard)
Proportional Participatory Budgeting with Cardinal Utilities (Slides)
We study voting rules for participatory budgeting, where a group of voters collectively decides which projects should be funded using a common budget. We allow the projects to have arbitrary costs, and the voters to have arbitrary additive valuations over the projects. We formulate two axioms that guarantee proportional representation to groups of voters with common interests. To the best of our knowledge, no previously known rule for participatory budgeting satisfies either of the two axioms; in addition, we show that the most prominent proportional rules for committee elections (such as Proportional Approval Voting) cannot be adapted to arbitrary costs nor to additive valuations so that they would satisfy our axioms of proportionality. We construct a simple and attractive voting rule that satisfies one of our axioms (for arbitrary costs and arbitrary additive valuations), and that can be evaluated in polynomial time. We prove that our other, stronger axiom is also satisfiable, though by a computationally more expensive and less natural voting rule.
Joint work with Grzegorz Pierczyński and Piotr Skowron.
https://arxiv.org/abs/2008.13276
---
Juan D. Moreno-Ternero (Seville)
Sharing the Revenues from Broadcasting Sports Leagues (Slides)
We study the problem of sharing the revenues raised from the collective sale of broadcasting rights for sports leagues. We take the axiomatic approach and derive focal rules, arising from polar estimations of teams' loyal viewers, as well as compromises among them. We bring some of the testable implications from our axiomatic analysis to the real case of European football leagues.
Joint work with Gustavo Bergantiños.
Session Chair: Edith Elkind (Oxford)
---
Toby Walsh (Sydney)
Strategy-Proof Mechanisms for Facility Location (Slides)
Facility location is a fundamental problem in social choice with a number of practical applications. In this talk, I study the impact of extending mechanisms for locating facilities on the line to mechanisms for locating facilities in Euclidean or Manhattan space. I consider three fundamental axiomatic properties: anonymity, which is a simple fairness property; Pareto optimality, which is one of the most fundamental efficiency properties; and strategy-proofness, which ensures agents have no incentive to misreport their location. I also consider how well strategy-proof mechanisms can approximate the optimal welfare.
---
Lihi Dery (Ariel)
Beyond Majority: Label Ranking Ensembles based on Voting Rules (Slides)
Label ranking is a machine learning problem that is concerned with predicting the order of a set of labels for an object. Examples include: (a) predicting a ranked list of preferences for a given person, (b) providing an ordered list of the most suitable advertisements for a given Facebook profile, and even (c) predicting the most drug-resistant mutations in a genome. Ensemble models are a common machine learning technique that collects the results of multiple simple models and combines them into a single aggregated output, thus enhancing performance. We propose to apply voting rules as the aggregation technique for label ranking ensembles. Furthermore, we propose a novel aggregation method for label ranking ensembles that learns the best voting rule to be used in each setting. An evaluation of the proposed method shows that it obtains prediction performance that is significantly higher than that of the aggregation techniques currently used by state-of-the-art label ranking ensembles.
Joint work with Havi Werbin and Erez Shmueli.
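As one concrete instance of the idea, the ranked predictions of ensemble members can be combined with the Borda rule; the paper's contribution is learning which voting rule to apply per setting, which this sketch does not attempt:

```python
def borda_aggregate(rankings):
    """Aggregate the members' ranked predictions with the Borda count.
    Each ranking lists the labels, best first; ties in the final scores
    are broken arbitrarily by the sort."""
    scores = {}
    for ranking in rankings:
        for pos, label in enumerate(ranking):
            scores[label] = scores.get(label, 0) + len(ranking) - 1 - pos
    return sorted(scores, key=scores.get, reverse=True)

preds = [["a", "b", "c"], ["a", "c", "b"], ["b", "a", "c"]]
print(borda_aggregate(preds))  # ['a', 'b', 'c']
```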
This will be a session with 5-minute talks by 8 different speakers. A great way of finding out what people are working on.
Speakers: Alan Tsang (Carleton), Anaëlle Wilczynski (Paris-Saclay), Evi Micha (Toronto), Ali Ozkes (Vienna), Farhad Mohsin (RPI), Ulrike Schmidt-Kraepelin (Berlin), Yun Lu (Edinburgh), Jacques Bara (Warwick)
Session Chair: Ulle Endriss (Amsterdam)
For this, the California Session, we will start three hours later than usual, at 9:00 San Francisco, 12:00 New York, 18:00 Paris time.
Session Chair: Bill Zwicker (Union College)
---
Francis Su (Harvey Mudd)
Multilabeled Versions of Sperner's and Fan's Lemmas and Applications (Slides)
We propose a general technique related to the polytopal Sperner lemma for proving old and new multilabeled versions of Sperner's lemma. A notable application of this technique yields a cake-cutting theorem where the number of players and the number of pieces can be independently chosen. We also prove multilabeled versions of Fan's lemma, a combinatorial analogue of the Borsuk-Ulam theorem, and exhibit applications to fair division and graph coloring.
Joint work with Frédéric Meunier.
https://doi.org/10.1137/18M1192548
---
Matthew O. Jackson (Stanford)
Learning through the Grapevine: The Impact of Noise and the Breadth and Depth of Social Networks (Slides)
We examine how well people learn when information is noisily relayed from person to person, and we study how communication platforms can improve learning without censoring or fact-checking messages. We analyze learning as a function of social network depth (how many times information is relayed) and breadth (the number of relay chains accessed). Noise builds up as depth increases, so learning requires greater breadth. In the presence of mutations (deliberate or random) and transmission failures of messages, we characterize sharp thresholds for breadths above which receivers learn fully and below which they learn nothing. When there is uncertainty about mutation rates, optimizing learning requires either capping depth or, if that is not possible, limiting breadth by capping the number of people to whom someone can forward a message. Limiting breadth cuts the number of messages received but also decreases the fraction originating further from the receiver, and so can increase the signal-to-noise ratio. Finally, we extend our model to study learning from message survival: e.g., people are more likely to pass messages with one conclusion than another. We find that as depth grows, all learning comes from either the total number of messages received or from the content of received messages, but the learner does not need to pay attention to both.
Joint work with Suraj Malladi and David McAdams.
Session Chair: Ulle Endriss (Amsterdam)
---
Hans Peters (Maastricht)
Unanimous and Strategy-proof Probabilistic Rules for Single-peaked Preference Profiles on Graphs (Slides)
Finitely many agents have preferences on a finite set of alternatives, single-peaked with respect to a connected graph with these alternatives as vertices. A probabilistic rule assigns to each preference profile a probability distribution over the alternatives. First, all unanimous and strategy-proof probabilistic rules are characterized when the graph is a tree. These rules are uniquely determined by their outcomes at those preference profiles where all peaks are on leaves of the tree, and thus extend the known case of a line graph. Second, it is shown that every unanimous and strategy-proof probabilistic rule is random-dictatorial if and only if the graph has no leaves. Finally, the two results are combined to obtain a general characterization for every connected graph by using its block tree representation.
Joint work with Souvik Roy and Soumyarup Sadhukhan.
---
Marcus Pivato (Cergy)
Bayesian Social Aggregation with Accumulating Evidence (Slides)
How should we aggregate the ex ante preferences of Bayesian agents with heterogeneous beliefs? Suppose the state of the world is described by a random process that unfolds over time. Different agents have different beliefs about the probabilistic laws governing this process. As new information is revealed over time by the process, agents update their beliefs and preferences via Bayes' rule. Consider a Pareto principle that applies only to preferences which remain stable in the long run under these updates. I show that this "eventual Pareto" principle implies that the social planner must be a utilitarian. But it does not impose any relationship between the beliefs of the individuals and those of the planner.
Please note that this session will be held on a Friday, given that there will be three sessions on computational social choice at ECAI on the Thursday of this week.
Session Chair: Ulle Endriss (Amsterdam)
---
Dorothea Baumeister (Düsseldorf)
Manipulation of Opinion Polls to Influence Iterative Elections (Slides)
In classical elections, voters only submit their ballot once, whereas in iterative voting, the ballots may be changed iteratively. In this talk I consider the case where a social network represents an underlying structure between the voters, meaning that each voter can see her neighbors' ballots. In addition, there is a polling agency, which publicly announces the result for the initial vote. I will present results on the manipulative power of the polling agency for the destructive case.
Joint work with Ann-Kathrin Selker and Anaëlle Wilczynski.
---
Siddharth Barman (Bangalore)
Fair Cake Division Under Monotone Likelihood Ratios (Slides)
The cake-cutting problem provides a model for addressing fair allocation of a divisible resource (metaphorically, the cake) among agents with distinct preferences. Classic results of Stromquist (1980) and Su (1999) show that envy-free (fair) cake divisions are guaranteed to exist under mild conditions. These strong existential results essentially follow from fixed-point theorems and stand without an algorithmic counterpart; Stromquist (2008) has shown that an envy-free cake division with contiguous pieces cannot be computed in bounded time.
In this talk I will present a result which complements these existential (and non-constructive) guarantees by way of developing efficient cake-cutting algorithms for a broad class of valuations. In particular, our algorithmic result holds when the agents' valuations are induced by linear translations of any log-concave function, such as Gaussian, exponential, linear, or binomial.
Joint work with Nidhi Rathi.
This session was held four hours earlier than usual: 11:00 Paris, 17:00 Singapore, 19:00 Sydney, 21:00 Auckland.
Session Chair: Edith Elkind (Oxford)
---
Arkadii Slinko (Auckland)
Manipulation in Secret Sharing (Slides)
Secret sharing is a cryptographic implementation of a power-sharing agreement in society. At the centre of it is 'the secret', a string of 0s and 1s, knowledge of which allows a certain activity to be launched: accessing a bitcoin wallet; launching a nuclear warhead; signing a message on behalf of a large organisation, e.g., Adobe. The power structure is given by the set of authorised coalitions: the coalitions who are authorised to launch the activity. We consider the digital forensics aspects of secret sharing. Sometimes an investigation is needed to establish: Who made this suspicious transaction with bitcoins? Who signed this offensive letter? Who opened a certain vault?
We investigate the problem of framing, which occurs when a coalition is able to calculate the share of a participant who does not belong to it. In the extreme case one authorised coalition can calculate shares of another authorised coalition and use the secret in some way blaming another authorised coalition for their action. We show that in any efficient and secure secret-sharing scheme framing is unavoidable.
Joint work with Yvo Desmedt and Songbao Mo.
---
Haris Aziz (Sydney)
The Vigilant Eating Rule: A General Approach for Probabilistic Economic Design with Constraints (Slides)
We consider the problem of probabilistic allocation of objects under ordinal preferences. Our main contribution is an allocation mechanism, called the vigilant eating rule (VER), that applies to nearly arbitrary distributional constraints. It is constrained ordinally-efficient, can be computed efficiently for a large class of constraints, and treats agents equally if they have the same preferences and are subject to the same constraints. When the set of feasible allocations is convex, we also present a characterization of our rule based on ordinal egalitarianism. Our general results concerning VER do not just apply to allocation problems but to any collective choice problem in which agents have ordinal preferences over discrete outcomes. As a case study, we assume objects have priorities for agents and apply VER to sets of probabilistic allocations that are constrained by stability. VER coincides with the (extended) probabilistic serial rule when priorities are flat and the agent-proposing deterministic deferred acceptance algorithm when preferences and priorities are strict. While VER always returns a stable and constrained efficient allocation, it fails to be strategyproof, unconstrained efficient, and envy-free. We show, however, that each of these three properties is incompatible with stability and constrained efficiency.
Joint work with Florian Brandl.
This July many of us were hoping to travel to Israel for the 8th International Workshop on Computational Social Choice. This is a special session, with two speakers from Israel, organised to mark this occasion. Please note that this session will take place on a Thursday, as will most sessions in the foreseeable future.
Session Chair: Bill Zwicker (Union College)
---
Reshef Meir (Technion)
A Market-Inspired Bidding Scheme for Peer Review Paper Assignment (Slides)
We propose a market-inspired bidding scheme for the assignment of paper reviews in large academic conferences. The primary contribution is an analysis of the incentives of reviewers during the bidding phases, when reviewers have private costs and some information about the demand for each paper, and want to obtain the best possible k papers for a predetermined k. We show that by assigning budgets to reviewers and a 'price' for every paper that is (roughly) inversely proportional to its demand, the best response of a reviewer is to bid sincerely, i.e., on her most favorite papers, and match the budget even when it is not enforced. Finally, we show, via extensive simulations on bidding data from real conferences, that our suggested bidding scheme would substantially improve the assignment under several common assignment algorithms.
Joint work with Natan Kaminsky, Jérôme Lang, Julien Lesca, and Nicholas Mattei.
---
Gil Kalai (Jerusalem)
New Quantitative and Qualitative Aspects of Social Choice Theory
Arrow's theorem is a far-reaching extension of Condorcet's paradox for the majority rule to general voting rules. We will ask to what extent other properties of the majority rule have such wide extensions, and what are some properties of the majority rule that distinguish it from other voting rules.
Session Chair: Dominik Peters (CMU)
---
Bettina Klaus (Lausanne)
The Core for Housing Markets with Limited Externalities (Slides)
We propose a variant of the housing market model à la Shapley and Scarf (1974) that incorporates a limited form of externality in consumption: agents care both about their own consumption (selfish preferences) and about the agent who receives their endowment (altruistic preferences). First, we consider different domains of preference relations that take selfish and altruistic aspects of preferences into account (separable preferences, additively separable preferences, and selfish and/or altruistic lexicographic preferences) and collect several results regarding the core for such housing markets. Second, when preferences are either selfish lexicographic or altruistic lexicographic, we characterize the corresponding top trading cycles rule by individual rationality, Pareto optimality, and strategy-proofness. Since the core can be multi-valued on the lexicographic preference domains, our characterization shows that the properties that characterized the strict core rule for Shapley-Scarf housing markets (Ma, 1994) here characterize the top trading cycles rule, not the strict core rule (or correspondence).
Joint work with Claudia Meo.
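For readers unfamiliar with the rule being characterized, here is a sketch of the classic top trading cycles algorithm for the standard Shapley-Scarf model with selfish preferences only (the talk studies its lexicographic extensions, which this snippet does not cover).

```python
def top_trading_cycles(prefs):
    """prefs[a]: agents ordered by how much a likes their house (a's own
    house appears via a itself); each agent initially owns one house."""
    remaining = set(prefs)
    assignment = {}
    while remaining:
        # Each remaining agent points at the owner of her favourite house.
        points_to = {a: next(b for b in prefs[a] if b in remaining)
                     for a in remaining}
        # Follow pointers from any agent until someone repeats: a cycle.
        a, seen = next(iter(remaining)), []
        while a not in seen:
            seen.append(a)
            a = points_to[a]
        cycle = seen[seen.index(a):]
        # Everyone on the cycle receives the house they point at and leaves.
        for b in cycle:
            assignment[b] = points_to[b]
            remaining.discard(b)
    return assignment

# Agents 1 and 2 swap houses; agent 3 is happiest keeping her own.
print(top_trading_cycles({1: [2, 1, 3], 2: [1, 3, 2], 3: [3, 1, 2]}))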
---
Omer Lev (Ben-Gurion)
We Need a Little More Space: A Few More Spatial Problems (Slides)
Spatial models in voting have received a lot of attention in recent years, both in the physical sense (e.g., gerrymandering) and in the conceptual sense (e.g., distortion). We will present two separate threads of spatial problems that have received less attention recently. One deals with the well-known Hotelling-Downs model, adapted to the winner-take-all setting more common in the COMSOC literature. We characterize Nash equilibria for multiple voting rules, and try to quantify the "amount" of equilibria by measuring the probability of randomly reaching a Nash equilibrium. The second deals with the problem of placing voting locations, where we show that many settings of the problem are hard even to approximate (but a few are not!). Both problems present a rich set of questions that require further exploration.
Joint work with Zack Fitzsimmons, Alexander Karpov, and Svetlana Obraztsova.
Session Chair: Edith Elkind (Oxford)
---
Jérôme Lang (Paris)
Collective Decision Making under Incomplete Knowledge: Possible and Necessary Solutions (Survey Talk) (Slides)
Solution concepts in collective decision making are usually defined assuming complete knowledge of individuals' preferences and of the mechanism used for aggregating them. This is often impractical or unrealistic. Under incomplete knowledge, a solution advocated by many consists in quantifying over all completions of the incomplete preference profile (or all instantiations of the incompletely specified mechanism). Voting rules can be 'modalized' this way (leading to the notions of possible and necessary winners), as can efficiency and fairness notions in fair division, stability concepts in coalition formation, and more. The talk will give a very brief survey of works along this line, focusing on examples.
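The quantification idea is easy to make concrete by brute force. Below is a toy sketch for one specific rule (Borda, chosen purely for illustration): each incomplete vote is a set of required pairwise comparisons, and we enumerate all linear-order completions to collect the possible winners and intersect for the necessary ones.

```python
from itertools import permutations

def consistent(order, constraints):
    pos = {c: i for i, c in enumerate(order)}
    return all(pos[a] < pos[b] for a, b in constraints)

def borda_winners(profile, candidates):
    m = len(candidates)
    score = {c: 0 for c in candidates}
    for vote in profile:
        for i, c in enumerate(vote):
            score[c] += m - 1 - i
    best = max(score.values())
    return {c for c in candidates if score[c] == best}

def possible_and_necessary(incomplete, candidates):
    possible, necessary = set(), set(candidates)
    orders = list(permutations(candidates))
    def complete(i, profile):
        nonlocal possible, necessary
        if i == len(incomplete):
            w = borda_winners(profile, candidates)
            possible |= w
            necessary &= w
            return
        for order in orders:
            if consistent(order, incomplete[i]):
                complete(i + 1, profile + [list(order)])
    complete(0, [])
    return possible, necessary

# Vote 1 only reveals a > b; vote 2 only reveals b > c.
print(possible_and_necessary([{("a", "b")}, {("b", "c")}], ["a", "b", "c"]))
```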
---
Ulle Endriss (Amsterdam)
Axiomatic Justification of Election Outcomes (Slides)
In social choice theory, we routinely attempt to justify the use of a given voting rule on axiomatic grounds. In this talk, I instead want to ask the question of what it might mean to directly justify an election outcome on axiomatic grounds. A possible answer, I will argue, is that a justification has two parts: a set of normative principles (meaning, axioms) and a step-by-step explanation of how those principles force us to choose the outcome in question for a given profile of preferences. I will also discuss recent work on the use of tools from the field of automated reasoning to compute such justifications.
Joint work with Arthur Boixel.
This June many of us were hoping to travel to Mexico for the 15th Meeting of the Society for Social Choice and Welfare. To mark this occasion we are organising two special sessions with speakers from Mexico as well as some of the original keynote speakers. This is the second of these sessions. Please note that we will start one hour later than usual: 09:00 Mexico, 10:00 New York, 16:00 Paris.
Session Chair: Andrei Gomberg (ITAM)
---
Leeat Yariv (Princeton)
Who Cares More: Allocation with Diverse Preference Intensities
Goods and services—public housing, medical appointments, schools—are often allocated to individuals who rank them similarly but differ in their preference intensities. We characterize optimal allocation rules in such settings, considering both the case in which individual preferences are known and the case in which they need to be elicited. Several insights emerge. First-best allocations may involve allocating to some agents lotteries between very high-ranked and very low-ranked goods. When preference intensities are private information, second-best allocations always involve such lotteries and, crucially, may coincide with first-best allocations. Second-best allocations may also entail disposal of services. We also discuss a market-based alternative approach and show how it differs.
Joint work with Pietro Ortoleva and Evgenii Safonov.
---
Tetsuya Hoshino (ITAM)
Time-Consistent Rational Inattention is Entropic (Slides)
We study a dynamic rational inattention problem in which, each period, an agent acquires costly information about an evolving state and chooses an action. A dynamic rational inattention problem is time-consistent if its value function satisfies the dynamic programming principle. The main result is that if every dynamic rational inattention problem is time-consistent, then the cost function is entropic—that is, linear in the reduction in entropy of beliefs. This result is the converse of Steiner, Stewart, and Matějka (2017), who showed that every dynamic rational inattention problem with entropy costs is time-consistent.
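A worked instance of an entropic cost may help fix ideas (the numbers below are invented for illustration): with a uniform prior over two states and a signal that is correct with probability 0.9, the information cost is proportional to the expected reduction in belief entropy.

```python
from math import log2

def H(p):
    """Shannon entropy in bits."""
    return -sum(q * log2(q) for q in p if q > 0)

prior = [0.5, 0.5]
# Posterior beliefs after each signal realization, with their probabilities.
posteriors = [([0.9, 0.1], 0.5), ([0.1, 0.9], 0.5)]
reduction = H(prior) - sum(pr * H(b) for b, pr in posteriors)
print(reduction)  # about 0.531 bits; the model scales this by a constant
```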
This June many of us were hoping to travel to Mexico for the 15th Meeting of the Society for Social Choice and Welfare. To mark this occasion we organised two special sessions with speakers from Mexico as well as some of the original keynote speakers. This is the first of these sessions. We started one hour later than usual: 09:00 Mexico, 10:00 New York, 16:00 Paris.
Session Chair: Vincent Merlin (Caen)
---
Roman Pancs (ITAM)
Electoral Maldistricting
We introduce a framework to examine (geographic) electoral maldistricting, both theoretically and empirically. Maldistricting is defined as electoral districting in pursuit of a policy or incumbency objective at the expense of social welfare. Thanks to a revelation-principle-like property of our environment, the analysis is performed on the set of legislatures implementable via some districting. We characterize implementable legislatures in several ways, including in terms of their informativeness about voter ideology. Maldistricting is quantified by an observed legislature's distance from the welfare-maximizing legislatures relative to the maldistricted ones. If voter turnout can vary across geographic locations, electoral districting becomes more delicate.
Joint work with Andrei Gomberg and Tridib Sharma.
---
Jordi Massó (Barcelona)
All Sequential Allotment Rules are Obviously Strategy-proof (Slides)
For the division problem with single-peaked preferences (Sprumont, 1991) we show that all sequential allotment rules, identified by Barberà, Jackson and Neme (1997) as the class of strategy-proof, efficient and replacement-monotonic rules, are also obviously strategy-proof. Although obvious strategy-proofness is in general more restrictive than strategy-proofness, this is not the case in this setting.
Joint work with Pablo Arribillaga and Alejandro Neme.
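As background, the uniform rule is one well-known member of the sequential allotment class for this division problem. A minimal sketch: under excess demand each agent receives min(peak, lam), with lam chosen by bisection so that the shares exhaust the amount T (symmetrically, max(peak, lam) under excess supply).

```python
def uniform_rule(peaks, T, tol=1e-9):
    excess_demand = sum(peaks) >= T
    share = (lambda p, lam: min(p, lam)) if excess_demand else \
            (lambda p, lam: max(p, lam))
    lo, hi = 0.0, max(max(peaks), T)
    while hi - lo > tol:  # bisection: the total share is monotone in lam
        lam = (lo + hi) / 2
        if sum(share(p, lam) for p in peaks) < T:
            lo = lam
        else:
            hi = lam
    lam = (lo + hi) / 2
    return [share(p, lam) for p in peaks]

print(uniform_rule([0.8, 0.5, 0.1], T=1.0))  # -> about [0.45, 0.45, 0.1]
```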
Session Chair: Vincent Merlin (Caen)
---
Nisarg Shah (Toronto)
Resolving the Optimal Metric Distortion Conjecture (Slides)
In the metric distortion framework of voting, voters and candidates lie in a metric space, and each voter ranks the candidates by their distance from her. The voting rule takes only the preference rankings as input, and picks a candidate that approximately minimizes the total distance from the voters. The distortion of a rule is its worst-case approximation ratio. A major conjecture in this framework is that the optimal deterministic voting rule has distortion 3. We resolve this conjecture positively by constructing a new voting rule that has distortion 3. We do so by proving a novel lemma about matching rankings of candidates to candidates, which may be of independent interest. We also provide parametric distortion bounds and improved bounds for randomized rules.
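To illustrate the quantity that distortion bounds, here is a toy computation on one concrete one-dimensional instance (positions invented for illustration): the social cost of a rule's winner divided by the optimal social cost. Distortion is the worst case of this ratio over all metrics consistent with the reported rankings; plurality is used below only as a simple example rule.

```python
voters = [0.0, 0.1, 0.2, 0.9, 1.0]   # voter positions on a line
candidates = {"x": 0.0, "y": 0.55}   # candidate positions

def social_cost(c):
    return sum(abs(v - candidates[c]) for v in voters)

# Each voter ranks candidates by distance; plurality sees only the tops.
tops = [min(candidates, key=lambda c: abs(v - candidates[c])) for v in voters]
winner = max(candidates, key=tops.count)

optimal = min(candidates, key=social_cost)
print(winner, social_cost(winner) / social_cost(optimal))  # x, ratio ~1.02
```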
---
Péter Biró (Budapest)
Fairness in Kidney Exchange Programmes (Slides)
Most of the large European countries operate national kidney exchange programmes (KEPs) with various policies and performances. The primary objective is always to maximise the number of transplants, but increased attention is being paid to the quality of transplants as well, especially when compatible or ABO-incompatible pairs are incentivised to join these programmes instead of conducting a direct transplant. We argue that the concept of stability can play an important role in future applications due to individual fairness and incentive considerations. Furthermore, fairness concepts can also become crucial at the country level when multiple countries join their pools. In this talk we will review recent developments on the aspects of individual and collective fairness for KEPs from both theoretical and practical points of view, starting with a story from the 2008 COMSOC conference.
Session Chair: Bill Zwicker (Union College)
---
Gabrielle Demange (Paris)
Resolution Rules in a System of Financially Linked Firms (Slides)
The reimbursement abilities of firms holding liabilities on each other are intertwined, potentially generating coordination failures and defaults through uncontrolled contagion. In stress episodes, these linkages thus call for an orderly resolution, as implemented by a regulatory authority assigning the amount each firm within the system reimburses to each of the others. The paper studies such resolution by considering 'rules', assuming their primary goal is to avoid default on external debts, say, banks' defaults on deposits. The main objective is to investigate what proportionality means for a rule, taking into account various legal and informational constraints.
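The simplest baseline notion of proportional reimbursement, stripped of all network structure, is ordinary pro-rata payment; the toy sketch below shows only this single-firm baseline, not the paper's rules for interlinked balance sheets.

```python
def pro_rata(assets, liabilities):
    """A firm with insufficient assets pays every creditor the same
    recovery rate, proportional to each nominal claim."""
    total = sum(liabilities.values())
    rate = min(1.0, assets / total) if total else 0.0
    return {j: rate * claim for j, claim in liabilities.items()}

print(pro_rata(assets=60.0, liabilities={"bank1": 50.0, "bank2": 50.0}))
# -> each creditor recovers 60% of its claim
```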
---
Yair Zick (Singapore)
A Learning Framework for Distribution-Based Game-Theoretic Solution Concepts [updated]
The past few years have seen several works on learning economic solutions from data; these include optimal auction design, function optimization, stable payoffs in cooperative games, and more. In this work, we provide a unified learning-theoretic methodology for modeling such problems, and establish tools for determining whether a given economic solution concept can be learned from data. Our learning-theoretic framework generalizes a notion of function space dimension, the graph dimension, adapting it to the solution concept learning domain. We identify sufficient conditions for the PAC learnability of solution concepts, and show that results in existing works can be immediately derived using our methodology. Finally, we apply our methods in other economic domains, yielding a novel notion of PAC competitive equilibrium and PAC Condorcet winners.
Joint work with Tushant Jha.
Session Chair: Ulle Endriss (Amsterdam)
Speakers: Nick Arnosti (Columbia), Haris Aziz (Sydney), Paul Gölz (CMU), Ronald de Haan (Amsterdam), Hadi Hosseini (Rochester), Neeldhara Misra (Gandhinagar), Joe Singleton (Cardiff), Piotr Skowron (Warsaw), Zoi Terzopoulou (Amsterdam), Stanislav Zhydkov (Warwick)
Session Chair: Haris Aziz (Sydney)
This session was held at a different time, eight hours earlier than usual: 01:00 New York, 07:00 Paris, 10:30 India, 13:00 China/Singapore, 14:00 Japan, 15:00 Sydney, 17:00 Auckland on Friday; and 22:00 California on Thursday.
---
Makoto Yokoo (Kyushu University)
Game-Theoretic Analysis for Two-Sided Matching with Resource Allocation (Slides)
In this work, we consider a student-project-resource matching-allocation problem, where students have preferences over projects and projects have preferences over students. Students are matched many-to-one to projects, and indivisible resources are allocated many-to-one to projects, whose capacities are endogenously determined by the resources allocated to them. Traditionally, this problem is decomposed into two separate problems: (1) resources are allocated to projects based on expectations (resource allocation problem), and (2) students are matched to projects based on the capacities determined in the previous problem (matching problem). Although both problems are well understood, if the expectations used in the first are incorrect, we obtain a suboptimal outcome. Thus, the problem must be solved as a whole, without dividing it into two parts. We show that no strategyproof mechanism satisfies fairness (i.e., no student has justified envy) and weak efficiency requirements on students' welfare. Given this impossibility result, we develop a new strategyproof mechanism that strikes a good balance between fairness and efficiency, and assess it experimentally.
---
Xiaohui Bei (Nanyang Technological University)
Fair Division of Mixed Divisible and Indivisible Goods
In this talk, we will discuss the challenges of defining fairness and designing fair division algorithms when the resources to be allocated contain both divisible and indivisible goods. Classic fairness notions such as envy-freeness (EF) and envy-freeness up to one good (EF1) cannot be directly applied to the mixed goods setting. We propose a new fairness notion, envy-freeness for mixed goods (EFM), which is a direct generalization of both EF and EF1 to the mixed goods setting. We show that an EFM allocation always exists for any number of agents and discuss how to find an ε-EFM allocation in polynomial time in the Robertson-Webb model.
Joint work with Zihao Li, Jinyan Liu, Shengxin Liu, and Xinhang Lu.
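A small checker makes the EFM condition concrete under additive valuations (a sketch: for simplicity each divisible good is held in full by one agent rather than split fractionally). Toward a bundle containing any divisible good an agent must be fully envy-free; toward a purely indivisible bundle, EF1 suffices.

```python
def is_efm(value, alloc, divisible):
    """value[i][g]: agent i's value for item g; alloc[j]: agent j's bundle."""
    for i in alloc:
        mine = sum(value[i][g] for g in alloc[i])
        for j in alloc:
            if i == j:
                continue
            theirs = sum(value[i][g] for g in alloc[j])
            if any(g in divisible for g in alloc[j]):
                ok = mine >= theirs                    # full envy-freeness
            else:                                      # EF1: drop one good
                ok = (not alloc[j]) or any(
                    mine >= theirs - value[i][g] for g in alloc[j])
            if not ok:
                return False
    return True

value = {1: {"cake": 3, "g1": 2, "g2": 2}, 2: {"cake": 2, "g1": 3, "g2": 1}}
print(is_efm(value, alloc={1: ["cake"], 2: ["g1", "g2"]}, divisible={"cake"}))
```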
Session Chair: Edith Elkind (Oxford)
---
Ágnes Cseh (Potsdam)
Popular Roommate Assignments (Slides)
Popularity is an optimality notion that connects the Condorcet criterion known from voting with stability known from matchings under preferences. The popular roommates problem is defined on a non-bipartite graph, where each vertex submits a strictly ordered preference list of its neighbors. A matching M is popular if there is no matching N such that vertices that prefer N to M outnumber those that prefer M to N. The complexity of finding a popular matching had been open for a long time before its solution last year. In this talk, we review recent progress on complexity, approximation, and algorithms for this problem.
https://arxiv.org/abs/1807.01112
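The head-to-head comparison in the definition is easy to code; the sketch below only compares two given matchings (verifying that M is popular proper would mean checking M against every matching N, which this snippet does not attempt).

```python
def compare(prefs, M, N):
    """prefs[v]: v's neighbours, best first; M, N: dicts v -> partner
    (None, or a missing key, means v is unmatched, which is worst for v).
    Returns (#vertices preferring M, #vertices preferring N)."""
    def rank(v, partner):
        return prefs[v].index(partner) if partner is not None else len(prefs[v])
    for_M = for_N = 0
    for v in prefs:
        rM, rN = rank(v, M.get(v)), rank(v, N.get(v))
        if rM < rN:
            for_M += 1
        elif rN < rM:
            for_N += 1
    return for_M, for_N

prefs = {1: [2, 3], 2: [1, 3], 3: [1, 2]}
M, N = {1: 2, 2: 1, 3: None}, {1: 3, 3: 1, 2: None}
print(compare(prefs, M, N))  # (2, 1): M wins this head-to-head vote
```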
---
Remzi Sanver (Paris)
A Solution to the Two-person Implementation Problem
We propose strike mechanisms as a solution to the classical problem, due to Hurwicz and Schmeidler (1978) and Maskin (1999), that in two-person societies no Pareto efficient rule is Nash-implementable. A strike mechanism specifies the number of alternatives that each player vetoes. The players simultaneously cast their vetoes, and the mechanism randomly selects one alternative among the unvetoed ones. For strict preferences over alternatives and under a very weak condition for extending preferences over lotteries, these mechanisms are deterministic-in-equilibrium. They Nash-implement a class of Pareto efficient social choice rules called Pareto-and-veto rules. Moreover, under mild richness conditions on the domain of preferences over lotteries, any Pareto efficient Nash-implementable rule is a Pareto-and-veto rule and hence is implementable through a strike mechanism.
Joint work with Jean-François Laslier and Matías Núñez.
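A minimal sketch of the mechanics (assuming, for illustration, that the veto numbers sum to m - 1, so exactly one alternative survives when the vetoes are disjoint; overlapping vetoes trigger the uniform draw):

```python
import random

def strike(alternatives, vetoes_1, vetoes_2):
    """Each player casts her fixed number of vetoes; the outcome is drawn
    uniformly at random among the unvetoed alternatives."""
    unvetoed = [a for a in alternatives if a not in vetoes_1 | vetoes_2]
    return random.choice(unvetoed)

alternatives = ["a", "b", "c", "d"]              # m = 4, so v1 + v2 = 3
print(strike(alternatives, {"a"}, {"b", "c"}))   # disjoint vetoes: always "d"
print(strike(alternatives, {"a"}, {"a", "c"}))   # overlap: "b" or "d"
```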
Session Chair: Ulle Endriss (Amsterdam). 142 participants.
---
Piotr Faliszewski (Kraków)
Drawing a Map of Elections in the Space of Statistical Cultures
We consider the problem of forming a testbed of elections to be used for various election-related experiments (such as testing algorithms or estimating the frequency of a given phenomenon). We seek elections that come from well-known statistical distributions and are as diverse as possible. To this end, we define a (pseudo)metric over elections, generate a large set of election instances, and measure distances between them, to assess how diverse they are. We argue that our metric is a reasonable choice, and we show how the resulting set of elections can be used to evaluate election-related algorithms.
Joint work with Stanislaw Szufa, Piotr Skowron, Nimrod Talmon, and Arkadii Slinko.
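One ingredient of such a metric can be illustrated with position matrices, entry [c][p] being the fraction of votes placing candidate c in position p. The toy sketch below fixes the identity matching of candidates for brevity; a positionwise metric of the kind the paper uses would additionally minimise over candidate matchings.

```python
import numpy as np

def position_matrix(votes, m):
    M = np.zeros((m, m))
    for vote in votes:                 # each vote is a ranking of 0..m-1
        for pos, cand in enumerate(vote):
            M[cand][pos] += 1 / len(votes)
    return M

e1 = [[0, 1, 2], [0, 2, 1], [1, 0, 2]]   # two invented 3-candidate elections
e2 = [[2, 1, 0], [2, 1, 0], [2, 0, 1]]
print(np.abs(position_matrix(e1, 3) - position_matrix(e2, 3)).sum())
```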
---
Clemens Puppe (Karlsruhe)
Strategy-Proofness implies Minimal Participation if Voting is Costly (Slides)
We study a model in which agents with single-peaked preferences can participate in a costly voting procedure to determine the value of a one-dimensional variable. We show that, for all positive participation costs and all profiles of individual preferences, there exists a generically unique equilibrium with a single participant whenever the voting mechanism is strategy-proof, anonymous, and responsive in the sense that the outcome reacts to a unanimous move of the votes of all agents in the same direction. Since this single participant is never the median voter, we obtain as a corollary that no deterministic mechanism can implement the median if all participants vote according to their unique dominant strategy; even worse, the peak of the single participant in equilibrium is maximally far from the median voter. By contrast, there are simple probabilistic strategy-proof mechanisms that yield the median as outcome. Inspired by this observation, we investigate whether the median can be ‘approximately implemented’ by a natural deterministic counterpart, and reach a negative conclusion.
Joint work with Michael Müller.
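As background for the contrast drawn above: with free participation, the median of reported peaks is strategy-proof for single-peaked preferences (Moulin, 1980), since a unilateral misreport cannot pull the outcome toward the misreporter's peak. The talk shows how this breaks down once participation itself is costly.

```python
import statistics

def median_rule(peaks):
    """Pick the median of the reported peaks."""
    return statistics.median(peaks)

peaks = [0.2, 0.5, 0.9]
print(median_rule(peaks))              # 0.5
print(median_rule([0.0, 0.5, 0.9]))    # agent at 0.2 exaggerates: still 0.5
```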
Session Chair: Dominik Peters (CMU). 260 participants.
---
Vincent Conitzer (Duke)
Computational Social Choice for Moral Artificial Intelligence (Slides)
Many algorithms in AI require an objective function to be specified. When such an algorithm is to be deployed in the real world, it is often not obvious what the objective function should be. For example, there are often nontrivial ethical tradeoffs. Generally, stakeholders will disagree on how these should be made. Moreover, even for themselves, they will generally not be able to write down a complete objective function; perhaps at most, they can be asked to make a few judgments on some concrete examples. How do we aggregate these individual judgments into an objective function for the algorithm to use?
Joint work with Lok Chan, Yuan Deng, John Dickerson, Kenzie Doyle, Rachel Freedman, Max Kramer, Duncan McElfresh, Jana Schaich Borg, Walter Sinnott-Armstrong, and Hanrui Zhang.
---
Edith Elkind (Oxford)
Keeping Your Friends Close: Land Allocation with Friends (Slides PDF/PPTX)
We examine the problem of assigning plots of land to prospective buyers who prefer living next to their friends: they care not only about the plot they receive, but also about their neighbors. This externality results in a highly non-trivial problem structure, as both friendship and land value play a role in determining agent behavior. We examine mechanisms that guarantee truthful reporting of both land values and friendships. We propose variants of random serial dictatorship (RSD) that can offer both truthfulness and welfare guarantees. Interestingly, our social welfare guarantees are parameterized by the value of friendship: if these values are low, enforcing truthful behavior results in poor welfare guarantees and imposes significant constraints on agents' choices; if they are high, we achieve a good approximation to the optimal social welfare.
Joint work with Neel Patel, Alan Tsang, and Yair Zick.
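Plain random serial dictatorship, the starting point for the talk's variants, is easy to state as code (a background sketch; the paper's variants additionally elicit friendships, which this snippet ignores).

```python
import random

def rsd(prefs, plots):
    """prefs[a]: agent a's plots, most-preferred first. Agents pick their
    favourite remaining plot in a uniformly random order."""
    remaining, assignment = set(plots), {}
    for a in random.sample(list(prefs), len(prefs)):
        assignment[a] = next(p for p in prefs[a] if p in remaining)
        remaining.discard(assignment[a])
    return assignment

print(rsd({1: ["p1", "p2"], 2: ["p1", "p2"]}, ["p1", "p2"]))
```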
Interested in giving a talk? Fill out the quick form and we will be in touch. Thanks!