Research Overview

Wondering what my research is about, but not sure where to start? Too busy to sift through the details of my papers? This page outlines some of the major themes in my research over the past 20+ years and provides links to the best entry points for learning more (books, surveys, videos, and papers). Click on a topic to explore further. (Last updated Summer 2023.)

Algorithmic Game Theory (General)

Computer science and economics have had a productive relationship over the past 25 years, resulting in a new field called algorithmic game theory or alternatively economics and computation. Many problems central to modern computer science, ranging from resource allocation in large networks to the design of blockchain protocols, fundamentally involve interactions between multiple self-interested parties. Economics and game theory offer a host of useful models and definitions to reason about such problems. The flow of ideas also travels in the other direction, with approximation and complexity notions playing an increasingly important role in the latest developments in economic theory. My textbook and accompanying lecture videos offer an accessible introduction to the subject, with case studies on online advertising, wireless spectrum auctions, kidney exchange, and network management.

There is also a "reader's digest" version, two different one-hour general-audience talks, and a short course suitable for first-year undergraduates. The following collection is older and targeted more to researchers than to a general audience, but it is still useful for several topics.
  • (co-edited with Noam Nisan, Eva Tardos, and Vijay Vazirani) Algorithmic Game Theory, Cambridge University Press, 2007. Read the entire book online by clicking here (look under the "Resources" tab).

AGT: The Price of Anarchy

From game theory 101, we know that the outcome of self-interested behavior can be bad for everybody. Two famous examples are the mutual betrayal in the Prisoner's dilemma and the overconsumption of a shared resource in the tragedy of the commons. One branch of algorithmic game theory studies the price of anarchy (POA), which quantifies the harm caused by self-interested behavior. This measure was first proposed by Koutsoupias and Papadimitriou, and it compares two things. The first is an outcome of selfish behavior (formally, a Nash equilibrium). The second is the best outcome, such as the one that maximizes the sum of everybody's payoffs. The POA measures how much worse the first is compared to the second, and the closer the POA of a game is to 1, the better its equilibria approximate what could be achieved with centralized coordination of all of the players' actions. The "Twenty Lectures" book above has several chapters on the price of anarchy, and the following older surveys are still relevant:
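As a quick reference, here is the standard definition of the POA for cost-minimization games (a generic formulation, not tied to any particular paper or survey above); for payoff-maximization objectives the analogous ratio is used.

```latex
% Price of anarchy (POA) of a cost-minimization game: the worst equilibrium
% cost divided by the cost of an optimal outcome. A POA close to 1 means that
% equilibria are nearly as good as centrally coordinated outcomes.
\[
  \mathrm{POA} \;=\; \frac{\max_{s \in \text{equilibria}} \mathrm{cost}(s)}
                          {\min_{s^*} \mathrm{cost}(s^*)} \;\ge\; 1 .
\]
```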

There is also a one-hour general-audience talk on this topic. For a two-hour tutorial on the mathematics behind price-of-anarchy bounds, see:
  • Near-Optimal Equilibria (videos), Part 1 and Part 2, Economics and Computation Boot Camp, Simons Institute (2015). Slides

The POA of Selfish Routing

Not long after the price of anarchy was first defined, Éva Tardos and I studied it in a classical model of "selfish routing," where each user of a network chooses its path to minimize its own delay. (Think drivers on a highway, or packets in a data network, or even current in an electrical network. If you're familiar with Braess's paradox, then you already know this model.) How much worse, in terms of average commute times, is selfish routing than a centrally optimal solution? The following paper has two main results: (i) when the delay of each network edge is a linear function of the amount of traffic on it, then the answer is 33%; and (ii) even with arbitrary delay functions, throwing hardware at the problem (i.e., selfish routing after a modest increase in network capacity) outperforms centrally optimal routing.
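Here is a minimal sketch of the calculation behind the 33% figure, using Pigou's two-link example (one unit of traffic, a constant-delay link, and a link whose delay equals its traffic). It is an illustration of result (i), not code from the paper.

```python
# Pigou's example: one unit of selfish traffic chooses between a link with
# constant delay 1 and a link whose delay equals the fraction x of traffic on it.

def total_delay(x):
    """Average delay when a fraction x of the traffic uses the variable-delay link."""
    return x * x + (1 - x) * 1.0

# At equilibrium, all traffic uses the variable-delay link (its delay never
# exceeds the constant link's delay of 1), so the equilibrium cost is 1.
equilibrium_cost = total_delay(1.0)

# The optimal split is roughly half-and-half, with cost 3/4 (grid search here).
optimal_cost = min(total_delay(i / 10_000) for i in range(10_001))

print(equilibrium_cost / optimal_cost)  # ~4/3: selfish routing is 33% worse
```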

The following paper generalizes the 33% result to nonlinear delay functions (where the POA grows with the "degree of nonlinearity") by showing that simple (two-node, two-link) networks always realize the worst-possible POA. I also wrote a book on this topic, as well as a "reader's digest" version and a survey of routing games more generally.

More on Braess's Paradox and Routing Games

I was pretty obsessed with routing and other network games during the first half of the oughts (e.g., the book above). Some other highlights of my work on this topic include the first generalization of Braess's paradox to larger networks (first with one origin and destination, and then with many):

Other highlights include a proof that Braess's paradox is ubiquitous in random networks, as well as several strategies for improving the POA in various network games, including cost-sharing, tolls, and partially centralized routing.

Smooth Games and Robust POA Bounds

A price-of-anarchy bound applies when players of a game have successfully reached an equilibrium. But what if they haven't, for example because of the apparent intractability of computing a Nash equilibrium? In the second half of the oughts, there was a lot of nice work proving price-of-anarchy bounds that apply even when players have not reached a Nash equilibrium (including Mirrokni-Vetta on best-response dynamics, Christodoulou-Koutsoupias on correlated equilibria, and Blum-Hajiaghayi-Ligett-Roth on no-regret learners). The following paper gives a general theory of such "robust" price-of-anarchy bounds. It defines "smooth games," notes that many of the games studied in computer science and related fields are smooth, and proves that price-of-anarchy bounds for smooth games are automatically robust in several senses.
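As a pointer to what the definition looks like, here is one standard formulation for cost-minimization games in which the objective is the sum of players' costs (paraphrased, so treat the details as a sketch):

```latex
% A game is (\lambda, \mu)-smooth if, for every pair of outcomes s and s*,
\[
  \sum_i C_i(s_i^*, s_{-i}) \;\le\; \lambda \cdot \mathrm{cost}(s^*) + \mu \cdot \mathrm{cost}(s).
\]
% If a game is (\lambda, \mu)-smooth with \mu < 1, then its POA is at most
% \lambda / (1 - \mu), and this bound extends automatically from pure Nash
% equilibria to mixed Nash, correlated, and coarse correlated equilibria
% (and hence to the outcomes of no-regret learning dynamics).
```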

See also these eight-minute and hour-long videos that explain this theory.

Several extensions of the basic theory of smooth games have appeared over the past few years, such as the Syrgkanis-Tardos theory of smooth mechanisms. I've worked on extensions to incomplete-information games and large games (with many "small" players), and also "local" notions of smoothness.

The POA of Simple Auctions

Early work in computer science on auction design focused on auctions where every bidder has a foolproof strategy (like in a second-price auction). As impossibility results for such auctions mounted, attention naturally shifted to proving price-of-anarchy bounds for the equilibria of simple but strategically non-trivial auctions (beginning with Christodoulou-Kovacs-Schapira), such as simultaneous single-item auctions. The theory of smooth games and its extensions (see previous section) turns out to be well suited for this goal. Here is a survey of the state-of-the-art as of 2017:

The boot camp lectures mentioned above (Part 1 and Part 2) are also mostly about this topic, as is Part IV of my CS364B course from 2014.

A representative positive result (later improved by Feldman-Fu-Gravin-Lucier) is:

The following paper uses communication complexity to prove limits on the best-possible POA of any simple auction. For example, it makes precise the folklore rule of thumb that "simple multi-item auctions do not perform well when there are strong synergies between items."

AGT: Auction and Mechanism Design

A mechanism is a procedure for making a decision when agents have private preferences, and an auction is a mechanism specifically for the exchange of goods and money. Economists have been thinking hard about auctions and mechanisms for over a half-century. What could computer science possibly bring to the table? Several things: measures of approximation; alternatives to Bayesian analysis; techniques for learning from data; and a focus on computational tractability.

Two surveys on different facets of this area are:

Robust Revenue-Maximization

What selling procedure should a seller use to maximize her revenue? This is a hard question because the best auction to run --- like choosing the right opening bid in an eBay auction or the right reserve price in a sponsored search auction --- depends on what bidders are willing to pay. The traditional approach in economics, exemplified by Myerson's optimal auction theory, is to maximize the seller's expected revenue with respect to a known distribution over what bidders are willing to pay.
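For intuition, here is a hedged sketch of the classical recipe in the simplest case (i.i.d. bidders with a "regular" value distribution): run a second-price auction with the reserve price at which Myerson's virtual value is zero. The uniform distribution below is an illustrative assumption.

```python
# With a known value distribution F (CDF) and density f, the revenue-optimal
# single-item auction for i.i.d. regular bidders is a second-price auction with
# reserve r* solving phi(r*) = r* - (1 - F(r*)) / f(r*) = 0.
from scipy.optimize import brentq

F = lambda v: v          # CDF of uniform[0, 1] (illustrative assumption)
f = lambda v: 1.0        # density of uniform[0, 1]

def virtual_value(v):
    return v - (1 - F(v)) / f(v)

# Solve phi(r) = 0 on (0, 1); for uniform[0, 1] this gives r* = 0.5.
reserve = brentq(virtual_value, 1e-9, 1 - 1e-9)
print(reserve)  # ~0.5: run a second-price auction with reserve 0.5
```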

In many cases, the theoretically optimal auction is too complex for direct use (especially when bidders are not symmetric). The following paper identifies many scenarios where simple and directly usable auctions do almost as well as theoretically optimal auctions.

But what if the distribution over what bidders are willing to pay is unknown or changing rapidly? Could there be prior-independent auctions, which are simultaneously near-optimal for a wide range of distributions? The following paper gives an affirmative answer. The following two papers give prior-independent auctions for "multi-parameter" settings, like auctions with multiple distinct items, and for bidders with interdependent preferences. (See also Talgam-Cohen's BWCA book chapter.)

In the theory of prior-free auctions, pioneered by Goldberg-Hartline-Karlin-Saks-Wright, the goal is to dispense with distributional assumptions entirely and prove revenue guarantees that hold no matter what bidders want (as opposed to on average over what bidders want). The following paper shows how to automatically generate meaningful revenue benchmarks for such worst-case revenue guarantees.
  • (with Jason Hartline) Optimal Platform Design, ACM Symposium on Theory of Computing, 2008. (Revised version, from 2016.)
There is also an example of this framework in action, as well as an overview talk on robust revenue-maximization.

Data-Driven Auction Design

As discussed in the previous section, the traditional economic approach to revenue-maximizing auction design posits a known prior distribution over what bidders are willing to pay, and then solves for the auction that maximizes the seller's expected revenue with respect to this distribution. But where does this distribution come from? One obvious answer is from data, in the form of previous bids by comparable bidders in auctions for comparable items. (Ostrovsky-Schwarz applied this idea to great effect at Yahoo! in 2008.) How much data is necessary and sufficient to determine what auction you should run? The following paper tackles this question for single-item auctions with asymmetric bidders. (See the STOC '19 paper of Guo-Huang-Zhang for optimal bounds.)

Two follow-up papers prove optimal bounds for learning the best selling price for a single buyer, and show how to learn an optimal auction when the willingness to pay of each bidder is drawn from an "irregular" distribution (like a bimodal distribution).

There is a clear analogy between learning a good auction from data and one of the most central problems in machine learning, that of learning a good classifier from labeled data. This connection is made explicit in the following works, which also show that the optimal auction from any "simple" class (such as reserve-price-based auctions) can be learned from a reasonable amount of data. The first paper considers "single-parameter" auction settings, and the second paper the more complicated case of "multi-parameter" settings. The papers above focus primarily on the amount of information needed to learn a near-optimal auction, rather than on efficient algorithms. The following paper gives efficient offline and online approximation algorithms for computing the best bidder-specific reserve prices from data:
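As a toy illustration of the data-driven approach (not the algorithm of any particular paper above), one can simply pick the reserve price that earns the most revenue on an observed sample of bid profiles and use it in a second-price auction going forward.

```python
# Empirical revenue maximization over an anonymous reserve price (illustrative).
import random

def second_price_revenue(bids, reserve):
    """Revenue of a second-price auction with a reserve on one bid profile."""
    qualified = sorted((b for b in bids if b >= reserve), reverse=True)
    if not qualified:
        return 0.0
    # The winner pays the larger of the reserve and the second-highest qualified bid.
    return max(reserve, qualified[1]) if len(qualified) >= 2 else reserve

def best_empirical_reserve(sample_profiles, candidate_reserves):
    return max(
        candidate_reserves,
        key=lambda r: sum(second_price_revenue(p, r) for p in sample_profiles),
    )

# Synthetic samples standing in for historical bid data.
random.seed(0)
samples = [[random.random() for _ in range(3)] for _ in range(1000)]
grid = [i / 100 for i in range(100)]
print(best_empirical_reserve(samples, grid))
```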

Algorithmic Mechanism Design

In settings with monetary payments (like auctions), a well-known solution from economics (the VCG mechanism) can be used, in principle, to determine the best thing to do even when you don't know participants' preferences a priori. (The Vickrey auction is a special case.) The only problem is that, in many scenarios including multi-item auctions, implementing this mechanism requires solving computationally intractable problems. The field of algorithmic mechanism design studies when the promise of the VCG mechanism can be preserved (at least approximately) while using a reasonable amount of computation (see also Parts II and III of my CS364B course). The best-case scenario is when computationally efficient approximation algorithms (which take preferences as given) can be translated in a generic way to equally good incentive-compatible mechanisms (which need to elicit preferences from the participants), as in the following paper.
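For concreteness, here is a hedged sketch of the VCG mechanism for a tiny multi-item auction, with the welfare-maximization step done by brute force; that brute-force step is exactly what becomes intractable at scale. The valuations below are made up for illustration.

```python
# VCG: pick the welfare-maximizing allocation, then charge each winner the
# externality it imposes on the other bidders.
from itertools import product

def vcg(items, valuations):
    bidders = list(valuations)

    def best_allocation(active):
        """Brute-force welfare-maximizing assignment of items to 'active' bidders."""
        best, best_welfare = None, float("-inf")
        for assignment in product(active, repeat=len(items)):
            bundles = {b: frozenset(i for i, owner in zip(items, assignment) if owner == b)
                       for b in active}
            welfare = sum(valuations[b](bundles[b]) for b in active)
            if welfare > best_welfare:
                best, best_welfare = bundles, welfare
        return best, best_welfare

    allocation, welfare = best_allocation(bidders)
    payments = {}
    for i in bidders:
        others = [b for b in bidders if b != i]
        _, welfare_without_i = best_allocation(others)
        welfare_of_others_with_i = welfare - valuations[i](allocation[i])
        payments[i] = welfare_without_i - welfare_of_others_with_i  # i's externality
    return allocation, payments

# Two items, two bidders; with a single item this reduces to the Vickrey auction.
items = ["a", "b"]
valuations = {
    "bidder1": lambda S: 3.0 if S == frozenset(items) else 0.0,            # wants the pair
    "bidder2": lambda S: 2.0 * len(S & {"a"}) + 1.0 * len(S & {"b"}),      # additive
}
print(vcg(items, valuations))
```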

Most multi-item auctions require customized solutions, as in the following two papers. The first considers items that are "substitutes" and designs a best-possible auction for this case; the second designs auctions for items that are "complements." Sometimes an approximation algorithm leads to a mechanism with incentive properties superior to those of the VCG mechanism. This is the case in deferred-acceptance auctions, where ascending/descending clock auctions are closely connected to greedy heuristics. The following two papers explore the power and limitations of such heuristics.

Cost-Sharing Mechanisms

Another well-known drawback of the VCG mechanism (see preceding section) is that, in settings where there is an outcome-dependent cost, its revenue need not cover the cost incurred. There are "budget-balanced" alternatives, where the cost incurred is shared among the winners in the chosen outcome, but such mechanisms compromise on the quality of the outcome selected. The impossibility of maximizing social welfare while balancing the budget motivates using approximation measures to quantify the feasible trade-offs between the two objectives. This is the subject of the following paper.
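Here is a minimal sketch of a budget-balanced mechanism of the kind discussed here, in the style of the Moulin mechanisms mentioned below, instantiated with the simplest cost-sharing method (an equal split of a fixed cost among whoever is served). The bids and the fixed-cost setting are illustrative assumptions, not an example from the papers.

```python
# Players report bids (willingness to pay for service); the mechanism repeatedly
# offers each remaining player an equal share of the fixed cost and drops anyone
# unwilling to pay it. Whoever remains is served and charged their share, so the
# cost of the served set is exactly covered (budget balance).

def equal_split_cost_sharing(bids, total_cost):
    served = set(bids)
    while served:
        share = total_cost / len(served)
        unwilling = {i for i in served if bids[i] < share}
        if not unwilling:
            return served, {i: share for i in served}
        served -= unwilling
    return set(), {}

bids = {"alice": 4.0, "bob": 3.5, "carol": 1.0}
print(equal_split_cost_sharing(bids, total_cost=6.0))
# carol drops out at a share of 2.0; alice and bob are served and each pays 3.0.
```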

For several follow-up results, see Mukund's PhD thesis.

The following paper introduces acyclic mechanisms, which are more flexible than the Moulin mechanisms studied in most of the cost-sharing literature, and shows that many primal-dual algorithms naturally lead to good acyclic mechanisms. (Acyclic mechanisms can also be viewed as a precursor to the Milgrom-Segal deferred-acceptance auctions mentioned above.)

More recently, cost-sharing mechanisms have popped up in my work in some unexpected places, including on auctions with consortia:
  • (with Maryam Bahrani and Pranav Garimidi) When Bidders Are DAOs, ACM Conference on Advances in Financial Technologies, 2023.
and reward-sharing schemes for blockchain mining and staking pools:

AGT: Algorithms for and Complexity of Equilibria

Can an equilibrium of a game be computed efficiently? Arguably, this is a prerequisite for interpreting an equilibrium as a prediction of players' behavior. (If no computer can find an equilibrium in a reasonable amount of time, then presumably a bunch of uncoordinated players can't either.) The following survey gives a general overview of the area, assuming minimal prior knowledge of computational complexity:

There is also a more advanced survey, aimed at theoretical computer scientists and covering the state-of-the-art circa 2018.

Strong evidence is mounting that there is no general algorithm for computing a Nash equilibrium of a game in polynomial time (see, e.g., the 2010 survey above). Meanwhile, the well-known correlated equilibrium concept can be viewed as a tractable relaxation of the Nash equilibrium. (It can also be interpreted in terms of a mediator, or as the natural outcome of Bayesian rationality.) The following paper gives fairly general results for computing a correlated equilibrium of a compactly described game (in time polynomial in the game description), and also (under stronger assumptions) for optimizing over the correlated equilibria; a toy example of such a computation appears at the end of this section.

On the negative side, returning to Nash equilibria, the following paper gives evidence that a large amount of (deterministic) communication is required to compute even an approximate equilibrium. In an exciting development (covered in the 2020 survey above), Babichenko and Rubinstein use many of the same ingredients to prove (unconditionally) that a large amount of (even randomized) communication is indeed required. (And Göös and Rubinstein later gave an essentially optimal lower bound.)
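To make the correlated-equilibrium computations above concrete, here is the promised toy example: a welfare-maximizing correlated equilibrium of a small two-player game computed by brute-force linear programming. The game ("Chicken") is an illustrative choice, and the results above handle compactly described games far more cleverly than this.

```python
import numpy as np
from scipy.optimize import linprog

# Row player's payoffs A and column player's payoffs B ("Chicken"; symmetric).
A = np.array([[0.0, 7.0],
              [2.0, 6.0]])
B = A.T

m, n = A.shape
num_vars = m * n  # one variable per action profile (i, j): its probability x[i, j]

ineqs = []  # rows of A_ub, for constraints written as row @ x <= 0
# Row player: following a recommended row i must beat any deviation to row k.
for i in range(m):
    for k in range(m):
        if i != k:
            row = np.zeros((m, n))
            row[i, :] = A[k, :] - A[i, :]   # expected gain from deviating i -> k
            ineqs.append(row.ravel())
# Column player: the symmetric constraints over columns.
for j in range(n):
    for k in range(n):
        if j != k:
            row = np.zeros((m, n))
            row[:, j] = B[:, k] - B[:, j]
            ineqs.append(row.ravel())

res = linprog(
    c=-(A + B).ravel(),                        # maximize total expected payoff
    A_ub=np.array(ineqs), b_ub=np.zeros(len(ineqs)),
    A_eq=np.ones((1, num_vars)), b_eq=[1.0],   # probabilities sum to 1
    bounds=[(0, None)] * num_vars,
)
print(res.x.reshape(m, n))  # a welfare-maximizing correlated equilibrium
```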

AGT: Further Applications

Algorithmic game theory keeps growing in scope, and much of my recent AGT work has been focused on porting the AGT toolbox and mindset to more and more application domains. (For example, my work on blockchain protocols and web3, surveyed in the next section, grew out of this agenda and took on a life of its own.)

Teaching materials. For an overview of applications of AGT in practice, see:

  • the Fall 2016 and Fall 2018 offerings of my "Incentives in Computer Science" course (designed for 3rd-year non-theory computer science majors); or
  • this YouTube playlist, which is aimed at a general audience (e.g., first-year undergraduates).
Contract theory. As for my research, one topic of current interest is contract theory, an area of microeconomics that bears a strong resemblance to mechanism design but has historically been understudied from a computer science perspective. (The canonical example here is the principal-agent problem.) The following paper studies linear contracts (inspired by Carroll) and min-max and approximation guarantees for them, and a second paper proves (both positive and negative) computational complexity results for optimal (not necessarily linear) contract design with large outcome spaces.

Incentive-compatible prediction. A second research direction is the design of incentive-compatible methods for eliciting and aggregating predictions. For example, the following paper introduces a model for studying incentive-compatible learning from the advice of self-interested experts. Speaking of scoring rules (which are central to that paper), here's a paper that associates to each method of eliciting honest predictions (i.e., to each scoring rule) a unique (and axiomatically justified) method of aggregating multiple such predictions (the corresponding "quasi-arithmetic pooling method"). The following paper studies the aggregation of predictions from the viewpoint of worst-case guarantees relative to a set of possible information structures (loosely paralleling the prior-free/prior-independent auction work surveyed in the "Robust Revenue-Maximization" section above).

Fair division. Third, here are two papers on the fair division of indivisible goods with combinatorial player valuations (in the spirit of combinatorial auctions, as opposed to the traditional focus on additive valuations). The first identifies sufficient conditions for the existence of "EFX" allocations (two players, or identical player valuations) and proves that the worst-case query complexity of finding them is exponential in the number of items; the second classifies the multi-party communication complexity of computing a fair allocation (for all of the most common fairness notions and valuation classes).
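Since "EFX" may be unfamiliar, here is a minimal checker for the condition (envy-freeness up to any good): player i must not envy player j after the removal of any single item from j's bundle. Valuations are arbitrary set functions (combinatorial, not necessarily additive); the concrete numbers are made up for illustration.

```python
def is_efx(allocation, valuations):
    """allocation: dict player -> frozenset of items; valuations: dict player -> set function."""
    for i, v_i in valuations.items():
        for j, bundle_j in allocation.items():
            if i == j:
                continue
            for item in bundle_j:
                if v_i(allocation[i]) < v_i(bundle_j - {item}):
                    return False   # i envies j even after dropping `item`
    return True

# Two players, three items, identical (and non-additive) valuations.
value = lambda S: len(S) ** 2          # bigger bundles are worth super-additively more
allocation = {"p1": frozenset({"a", "b"}), "p2": frozenset({"c"})}
print(is_efx(allocation, {"p1": value, "p2": value}))   # True
```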

Blockchain Protocols and Web3

In the 2020s, most of my research has focused on developing the mathematical foundations of blockchain protocols and the applications built on top of them (web3).

Overview and expository work. Turing-complete blockchain protocols approximate the idealized abstraction of a "computer in the sky" that is open access (anyone can install software or interact with already-installed software), runs in plain view, and, in effect, has no owner or operator. This technology can, among other things, enable stronger notions of ownership of digital possessions than we have ever had before. Building the computer in the sky is hard (and scientifically fascinating), and requires the synthesis of multiple disciplines, both within computer science (distributed computing, cryptography, algorithmic game theory) and beyond (mechanism design, macroeconomics, finance, political science). Interested in learning more? One starting point is the following work-in-progress video lecture series (30+ hours and counting...), based in part on my Columbia course:

If you only have an hour, here are two survey research talks, targeted at a general theory CS audience and an econ/CS audience, respectively.

Permissionless consensus. When I say that the computer in the sky "has no owner or operator," what I really mean is that this (virtual) computer is the product of a simulation carried out by many (physical) computers, and that anyone can contribute their own computer to this simulation whenever they want. Keeping these computers in sync about the state of the virtual machine requires a consensus protocol. There's 40+ years of deep work on consensus protocols, but the "permissionless" nature of blockchain protocols poses three novel challenges: (i) the consensus protocol has no idea who's running it; (ii) whoever's running it can come and go as they please; and (iii) "sybils," meaning that one entity can masquerade as many. The following paper presents a general framework for reasoning about the unruly rogues' gallery of permissionless consensus protocols, with an emphasis on fundamental impossibility results. Different blockchain protocols can look quite different (e.g., the use of proof-of-work vs. proof-of-stake for sybil-resistance, and of PBFT-style voting vs. a longest-chain rule for block finalization) and can offer quite different security guarantees (e.g., the fraction of Byzantine hashrate/stake tolerated, and the amount of synchrony required). The following paper proves senses in which the desired security guarantees dictate what the protocol must look like (e.g., security in the partially synchronous setting dictates proof-of-stake or something like it, and also PBFT-style consensus or something like it).

Transaction fee mechanism design. I said that the computer in the sky is open access, meaning anyone can use it. But what if demand for that computer exceeds its processing capabilities? The component of a blockchain protocol responsible for choosing the transactions to process and what to charge for them is called its transaction fee mechanism (TFM). TFM design is a rare opportunity to incorporate mechanism design at the infrastructure layer of a new technology (as opposed to the application layer). (Imagine if mechanism designers had a seat at the table back when the BGP routing protocol or TCP/IP congestion control were being designed!) Mechanism design in a blockchain setting is tricky, however, and I talk through some of the reasons why. I started working on TFMs when the Ethereum community asked me to perform a formal economic analysis of EIP-1559, a proposed major change to Ethereum's TFM (later deployed in August 2021). Here's the report that I prepared, written for the broader Ethereum and web3 communities, and here's an academic paper derived from that work, aimed at computer science and economics researchers. The analysis of EIP-1559 above works in a pre-MEV world, in which block producers ignore the semantics of transactions and optimize only their transaction fees. The following paper incorporates private block producer valuations and proves that, in a post-MEV world, no non-trivial TFM can be incentive-compatible for both users and block producers.
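For a feel for EIP-1559's mechanism, here is a simplified sketch of its base-fee update rule (the protocol-computed, burned reserve price): the base fee rises when the previous block is more than half full and falls when it is less than half full, by at most 12.5% per block. This is schematic pseudocode for intuition, not client code, and it elides several details of the actual specification.

```python
BASE_FEE_MAX_CHANGE_DENOMINATOR = 8   # at most a 12.5% move per block

def next_base_fee(parent_base_fee, parent_gas_used, parent_gas_target):
    if parent_gas_used == parent_gas_target:
        return parent_base_fee
    # Over-full block => raise the base fee; under-full block => lower it,
    # in proportion to how far gas usage deviated from the target.
    delta = (parent_base_fee
             * (parent_gas_used - parent_gas_target)
             // parent_gas_target
             // BASE_FEE_MAX_CHANGE_DENOMINATOR)
    return max(parent_base_fee + delta, 0)

# A run of consistently full blocks pushes the base fee up multiplicatively.
fee, target = 10_000_000_000, 15_000_000   # 10 gwei base fee, 15M gas target
for _ in range(3):
    fee = next_base_fee(fee, parent_gas_used=30_000_000, parent_gas_target=target)
    print(fee)
```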
Foundations of automated market makers. One of the more well-developed parts of the web3 application layer is decentralized finance (or DeFi). Perhaps the most basic operation in DeFi is the swapping of one digital asset for another. Due to the high cost of on-chain computation and the desire for persistent liquidity for long-tail digital assets, much of this swapping is done via decentralized exchanges known as automated market makers (or AMMs). The following four papers develop foundations for the design and analysis of AMMs. The first two introduce and quantify LVR (for "loss-versus-rebalancing," pronounced "lever"); the main result is a Black-Scholes-style formula for calculating the adverse selection costs (e.g., due to taking the wrong side of arbitrage trades) borne by the liquidity providers of an AMM. (Previously "impermanent loss," equivalent to "loss-versus-holding," was used for this purpose, but this concept entangles the LP adverse selection costs with movements in the market prices of the assets being traded.) As noted above, the scarcity of on-chain computation necessitates compromises in the design of a decentralized exchange (DEX). The following paper identifies a fundamental trade-off between the flexibility of a DEX (measured by how well LPs can approximate a desired position) and its simplicity (measured by the storage requirements of an LP position). The next paper takes an optimal mechanism design approach to the problem of providing liquidity to an AMM; the optimal demand curve generally has a "bid-ask spread" that we show is caused by a combination of adverse selection risk and monopoly pricing.

Further applications of game theory and mechanism design in blockchains/web3. Interesting game theory and mechanism design questions are everywhere you look in web3. For example, the participation in auctions by decentralized autonomous organizations (DAOs) like ConstitutionDAO changes the economics of single-item auctions more than you'd think:
  • (with Maryam Bahrani and Pranav Garimidi) When Bidders Are DAOs, ACM Conference on Advances in Financial Technologies, 2023.
The next three papers study how to share rewards between participants in a blockchain protocol (either directly, or indirectly through a mining/staking pool). For example, the following paper shows that, subject to budget-balance, rewarding participants in proportion to their contributions is the knife-edge solution that is robust to both sybil attacks and collusion (a toy illustration appears at the end of this section).

How can a protocol or pool measure "the contribution" of a participant? In proof-of-work mining pools, participation is measured through the reporting of partial solutions to a hard cryptopuzzle (e.g., perhaps an input on which a cryptographic hash function produces an output with at least 65 leading zeroes, rather than the 75 that might be required to actually produce a real block). Interestingly, the ensuing sampling error disrupts the incentives of proportional reward-sharing, and additional ideas are needed to recover them.

Individuals join mining pools to smooth out their rewards (due to risk aversion). The following paper adopts variance-minimization as a first-order objective (subject to proportionality in expectation) and identifies variance-optimal reward-sharing schemes.
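Here is the toy illustration promised above of the proportional reward-sharing rule and its sybil-proofness: splitting one's contribution across fake identities leaves one's total payout unchanged. (The numbers are made up; this is not the model or the proof from the papers.)

```python
def proportional_rewards(contributions, block_reward):
    """Pay each participant in proportion to their reported contribution."""
    total = sum(contributions.values())
    return {who: block_reward * c / total for who, c in contributions.items()}

reward = 100.0
honest = proportional_rewards({"alice": 30, "bob": 70}, reward)
sybil  = proportional_rewards({"alice_1": 10, "alice_2": 20, "bob": 70}, reward)

print(honest["alice"])                       # 30.0
print(sybil["alice_1"] + sybil["alice_2"])   # 30.0: same total for "alice"
```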

Beyond the Worst-Case Analysis of Algorithms

Overview and expository work. The dominant paradigm in the theoretical analysis of algorithms is worst-case analysis, where the overall performance of an algorithm is summarized by its worst performance on any input. Good worst-case guarantees are great when you can get them, but for many important problems (linear programming, clustering, many online algorithms, etc.), worst-case analysis gives inaccurate advice about which algorithm to use. Going beyond worst-case analysis requires more careful modeling of the "relevant inputs" of a problem. For surveys of the many different facets of this area, written by the field's leading experts, see the edited collection:

For a two-hour tutorial on this area, see
  • Beyond Worst-Case Analysis, Part 1 and Part 2, Algorithms and Uncertainty Boot Camp, Simons Institute (2016). Slides
The above tutorial is a "reader's digest" version of a course that I started teaching in 2009; the 2014 version has a full set of lecture notes and lecture videos which explore this field much more deeply. (See the 2017 version for still more lecture notes.) I also organized the first workshop dedicated to the topic in 2011; there have been follow-up workshops in 2014 at Dagstuhl and in 2016 at the Simons Institute.

My interest in this area originally stems from my work in auction design and analysis (see the section "Robust Revenue-Maximization" above), where worst-case analysis provides no useful information and alternative frameworks are needed. My work there proposes interpolations between worst-case and average-case analysis, which inherit robustness from the former and meaningful performance guarantees from the latter (see this survey talk). This led me to explore similar approaches to "mainstream" algorithmic problems.

The following papers are representative of my interests.

Data-driven algorithm design. The first introduces a model for reasoning about how to choose the right algorithm for an application (where an "application" is encoded by an unknown probability distribution over problem instances) based on a set of benchmark instances. The goal is to obtain good expected performance (this is the average-case aspect) no matter what the unknown distribution is (this is the worst-case aspect). (See Balcan's BWCA book chapter and Vitercik's PhD thesis for the latest developments in this now-bustling area.)
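As a small, self-contained illustration of the model (not the algorithms or guarantees of the papers themselves): draw benchmark instances from the unknown distribution and select the member of a parameterized algorithm family with the best average empirical performance. The family below, greedy knapsack heuristics that score items by value divided by size raised to a power rho, is an illustrative choice.

```python
import random

def greedy_knapsack_value(values, sizes, capacity, rho):
    """Greedily pack items in decreasing order of value / size**rho."""
    order = sorted(range(len(values)), key=lambda i: values[i] / sizes[i] ** rho, reverse=True)
    total_value, used = 0.0, 0.0
    for i in order:
        if used + sizes[i] <= capacity:
            used += sizes[i]
            total_value += values[i]
    return total_value

def random_instance(rng, n=50, capacity=10.0):
    values = [rng.random() for _ in range(n)]
    sizes = [0.1 + rng.random() for _ in range(n)]
    return values, sizes, capacity

rng = random.Random(0)
benchmark = [random_instance(rng) for _ in range(200)]       # sampled "application"
candidate_rhos = [i / 10 for i in range(0, 31)]              # the algorithm family

best_rho = max(
    candidate_rhos,
    key=lambda rho: sum(greedy_knapsack_value(v, s, c, rho) for v, s, c in benchmark),
)
print(best_rho)  # the empirically best algorithm from the family
```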

Structured prediction. The second considers the problem of recovering an unknown (that is, worst-case) binary labeling of a graph (for example, foreground-background image segmentation) from noisy measurements. The model resembles the stochastic block model, but is additionally parameterized by a (non-complete) graph of pairwise measurements.

Distribution-free models of social networks. The third and fourth propose new graph classes meant to capture all plausible social networks (motivated by the triadic closure property of such networks). These graph classes can be viewed as a "worst case" over all plausible probabilistic models of social and information networks. Thus structural and algorithmic results for such graphs automatically apply to the networks produced by your favorite generative model. (See also my BWCA book chapter with C. Seshadhri.)

Smoothed analysis of online learning. The fifth concerns smoothed analysis, a model in which the inputs to an algorithm are "slightly random." Traditional killer applications concern the running time analysis of algorithms that suffer from brittle worst-case inputs (like the simplex method and local search algorithms). The following paper applies the framework instead to online learning and discrepancy minimization. For example (and answering an open question of Rakhlin-Sridharan-Tewari), online learnability against smoothed adversaries is characterized by the finiteness of the VC dimension of the hypothesis class (rather than its Littlestone dimension, as with worst-case adversaries).

ML Theory: Optimization, Sample Complexity, and Regret Bounds

Much of my work in the 2010s builds on or contributes to the theory of machine learning.

Sample complexity of learning auctions and algorithms. Models and tools from the theory of offline (PAC) learning---in which a learning algorithm is given a batch of labeled data and is responsible for computing a classifier that accurately labels as-yet-unseen data from the same distribution---are central to my work on data-driven auction design (see the "Auction and Mechanism Design" section above for details):

and data-driven algorithm design (see the section above on Beyond Worst-Case Analysis for details).

Online learning. In online learning, meanwhile, adversarially chosen inputs are presented to an algorithm one by one, and the goal is to design an algorithm that guarantees small regret (i.e., with cumulative loss/reward comparable to that of the best fixed action in hindsight); a minimal example of such an algorithm is sketched at the end of this section. I originally got interested in this topic when (inspired by Blum-Hajiaghayi-Ligett-Roth) I developed extension theorems for price-of-anarchy bounds (from pure Nash equilibria to weaker equilibrium concepts, including outcomes produced by no-regret learners). Since then, I've looked at the computationally efficient online learning of auction reserve prices, generalizations to submodular maximization, learning from the advice of self-interested experts, and learning expert weights in forecast aggregation. The following papers show that classical results on online learning with worst-case adversaries fundamentally change when the adversarially chosen inputs are perturbed slightly by nature (with online learnability then characterized by the VC dimension of the hypothesis class rather than its Littlestone dimension).

Algorithms for inference and submodular optimization. When accurate prediction is the ends, optimization is generally the means. My work on optimization algorithms tailored for machine learning problems has primarily focused on problems related to MAP inference. For example, MAP inference for determinantal point processes requires algorithms for (approximately) optimizing a non-monotone continuous submodular function, and MAP inference for structured prediction problems leads to a theory of approximate recovery resembling that for the stochastic block model (but parameterized by a non-complete graph of pairwise measurements). The following paper gives a general reduction showing that the computational complexity of MAP inference controls that of several "pure data" problems, such as checking the consistency of marginal distributions with respect to a class of joint distributions.
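Here is the minimal example of a no-regret algorithm promised above: the classic Hedge (multiplicative weights) algorithm, whose expected cumulative loss exceeds that of the best fixed action in hindsight by only O(sqrt(T log K)) over T rounds with K actions. This is a generic sketch, not code from any paper above, and the loss sequence is synthetic.

```python
import math, random

def hedge(loss_sequence, num_actions, eta=0.1):
    """Run Hedge on a sequence of loss vectors (entries in [0, 1]); return expected loss."""
    weights = [1.0] * num_actions
    total_loss = 0.0
    for losses in loss_sequence:
        probs = [w / sum(weights) for w in weights]
        total_loss += sum(p * l for p, l in zip(probs, losses))       # expected loss this round
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
    return total_loss

random.seed(0)
T, K = 1000, 5
losses = [[random.random() for _ in range(K)] for _ in range(T)]
alg = hedge(losses, K)
best_fixed = min(sum(losses[t][a] for t in range(T)) for a in range(K))
print(alg - best_fixed)   # the regret; small relative to T
```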

Extroverted Complexity Theory

Papadimitriou describes the "extroverted side of complexity theory" as "an applied mathematical discipline whose function is to gain insights into the problems of the natural, social, and applied and engineering sciences."

Expository work. The following monograph was inspired by all the great results in communication complexity that explain why so many natural algorithmic goals are impossible.

Another monograph surveys the state-of-the-art (circa 2018) in applying complexity theory to investigate and critique equilibrium concepts in game theory.

Complexity-theoretic barriers in economics. "Complexity barriers" can be used to prove impossibility results in economics and elsewhere. The following survey and talk give overviews of my recent work in the area. My interest in this area started with the following paper, which uses communication complexity to make precise an old rule of thumb in multi-item auction design: simple auctions (like selling items separately) do not work well when there are strong synergies across items. One of the most basic notions in economics is that of a competitive equilibrium, which comprises prices for items that equalize supply and demand. In a market with divisible goods (milk, wheat, etc.), a competitive equilibrium exists under reasonable assumptions. Many markets with indivisible goods (houses, wireless spectrum licenses, airport landing slots, etc.), however, can fail to possess a competitive equilibrium. The following paper uses computational complexity to explain the rampant non-existence of such equilibria. Border's theorem is a basic result in auction theory that characterizes everything that can happen at equilibrium in any single-item auction. This characterization has not been generalized much beyond single-item auctions, despite significant effort. The following paper gives a complexity-theoretic explanation for this.

Connecting massively parallel computation to circuit complexity. Switching application domains, the following paper considers modern parallel computation, as exemplified by MapReduce and Hadoop. It has been a challenge to understand which problems require a large number of synchronized rounds of parallel computation to solve. The following paper proves some general but modest lower bounds, and explains why stronger lower bounds have not been forthcoming: proving better lower bounds would imply a major breakthrough in circuit complexity.

PSPACE hides in simple ML algorithms. Finally, the following two papers (inspired by Disser-Skutella) use complexity theory to explain why two of the most central algorithms in machine learning---the k-means method and online gradient descent, respectively---do not have good (worst-case) guarantees. The reason? Both inadvertently solve PSPACE-complete problems along the way.
