Wondering what my research is about, but not sure where to start? Too busy to sift through the details of my papers? This page outlines some of the major themes in my research over the past 20+ years and provides links to the best entry points for learning more (books, surveys, videos, and papers). Click on a topic to explore further. (Last updated Summer 2023.)
Computer science and economics have had a productive relationship over the past 25 years, resulting in a new field called algorithmic game theory or alternatively economics and computation. Many problems central to modern computer science, ranging from resource allocation in large networks to the design of blockchain protocols, fundamentally involve interactions between multiple self-interested parties. Economics and game theory offer a host of useful models and definitions to reason about such problems. The flow of ideas also travels in the other direction, with approximation and complexity notions playing an increasingly important role in the latest developments in economic theory. My textbook and accompanying lecture videos offer an accessible introduction to the subject, with case studies on online advertising, wireless spectrum auctions, kidney exchange, and network management.
From game theory 101, we know that the outcome of self-interested behavior can be bad for everybody. Two famous examples are the mutual betrayal in the Prisoner's Dilemma and the overconsumption of a shared resource in the tragedy of the commons. One branch of algorithmic game theory studies the price of anarchy (POA), which quantifies the harm caused by self-interested behavior. This measure, first proposed by Koutsoupias and Papadimitriou, compares two things: an outcome of selfish behavior (formally, a Nash equilibrium) and the best outcome, such as the one that maximizes the sum of everybody's payoffs. The POA measures how much worse the first is than the second; the closer the POA of a game is to 1, the better its equilibria approximate what could be achieved with centralized coordination of all of the players' actions. The "Twenty Lectures" book above has several chapters on the price of anarchy, and the following older surveys are still relevant:
Not long after the price of anarchy was first defined, Éva Tardos and I studied it in a classical model of "selfish routing," where each user of a network chooses its path to minimize its own delay. (Think drivers on a highway, or packets in a data network, or even current in an electrical network. If you're familiar with Braess's paradox, then you already know this model.) How much worse, in terms of average commute times, is selfish routing than a centrally optimal solution? The following paper has two main results: (i) when the delay of each network edge is a linear function of the amount of traffic on it, then the answer is 33%; and (ii) even with arbitrary delay functions, throwing hardware at the problem (i.e., selfish routing after a modest increase in network capacity) outperforms centrally optimal routing.
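To make the 33% figure concrete, here is Pigou's example, the standard textbook instance of selfish routing (included here as an illustration; it is a classical example rather than a contribution of the paper below). One unit of traffic travels from s to t over two parallel links, one with constant delay 1 and one whose delay equals the amount of traffic x on it:

```latex
% Pigou's example: one unit of traffic, two parallel s-t links with per-unit
% delays c_1(x) = 1 (upper) and c_2(x) = x (lower).
\[
\text{equilibrium: everyone takes the lower link,} \qquad
C_{\mathrm{eq}} = 1 \cdot c_2(1) = 1;
\]
\[
\text{optimum: route a fraction $x$ on the lower link,} \qquad
C(x) = x^2 + (1 - x), \quad \min_x C(x) = C(\tfrac{1}{2}) = \tfrac{3}{4};
\]
\[
\text{price of anarchy} \;=\; \frac{C_{\mathrm{eq}}}{C_{\mathrm{opt}}}
\;=\; \frac{1}{3/4} \;=\; \frac{4}{3} \approx 1.33.
\]
```

Selfish routing is thus 33% worse than optimal in this instance, matching result (i) above: with linear delay functions, selfish routing is never more than 33% worse than optimal.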
I was pretty obsessed with routing and other network games during the first half of the oughts (e.g., the book above). Some other highlights of my work on this topic include the first generalization of Braess's paradox to larger networks (first with one origin and destination, and then with many):
A price-of-anarchy bound applies when players of a game have successfully reached an equilibrium. But what if they haven't, for example because of the apparent intractability of computing a Nash equilibrium? In the second half of the oughts, there was a lot of nice work proving price-of-anarchy bounds that apply even when players have not reached a Nash equilibrium (including Mirrokni-Vetta on best-response dynamics, Christodoulou-Koutsoupias on correlated equilibria, and Blum-Hajiaghayi-Ligett-Roth on no-regret learners). The following paper gives a general theory of such "robust" price-of-anarchy bounds. It defines "smooth games," notes that many of the games studied in computer science and related fields are smooth, and proves that price-of-anarchy bounds for smooth games are automatically robust in several senses.
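For reference, here is one standard way to state the key definition, specialized to cost-minimization games with total cost C(s) equal to the sum of the players' costs (a sketch of the formalism; see the paper for the general treatment):

```latex
% A cost-minimization game is (lambda, mu)-smooth if, for every pair of
% outcomes s and s*,
\[
\sum_{i} C_i(s_i^{*}, s_{-i}) \;\le\; \lambda \cdot C(s^{*}) \;+\; \mu \cdot C(s).
\]
```

Any (λ, μ)-smooth game has price of anarchy at most λ/(1-μ), and the point of the paper is that this same bound automatically extends beyond Nash equilibria, for example to correlated equilibria and to the outcomes of no-regret learning.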
Several extensions of the basic theory of smooth games have appeared over the past few years, such as the Syrgkanis-Tardos theory of smooth mechanisms. I've worked on extensions to incomplete-information games and large games (with many "small" players), and also "local" notions of smoothness.
Early work in computer science on auction design focused on auctions where every bidder has a foolproof strategy (like in a second-price auction). As impossibility results for such auctions mounted, attention naturally shifted to proving price-of-anarchy bounds for the equilibria of simple but strategically non-trivial auctions (beginning with Christodoulou-Kovacs-Schapira), such as simultaneous single-item auctions. The theory of smooth games and its extensions (see previous section) turns out to be well suited for this goal. Here is a survey of the state of the art as of 2017:
A representative positive result (later improved by Feldman-Fu-Gravin-Lucier) is:
A mechanism is a procedure for making a decision when agents have private preferences, and an auction is a mechanism specifically for the exchange of goods and money. Economists have been thinking hard about auctions and mechanisms for over a half-century. What could computer science possibly bring to the table? Several things: measures of approximation; alternatives to Bayesian analysis; techniques for learning from data; and a focus on computational tractability.
Two surveys on different facets of this area are:
What selling procedure should a seller use to maximize her revenue? This is a hard question because the best auction to run --- like choosing the right opening bid in an eBay auction or the right reserve price in a sponsored search auction --- depends on what bidders are willing to pay. The traditional approach in economics, exemplified by Myerson's optimal auction theory, is to maximize the seller's expected revenue with respect to a known distribution over what bidders are willing to pay.
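A minimal example of this dependence on the distribution (a standard calculation, not tied to any one paper): with a single bidder whose value v is uniform on [0, 1], the optimal auction is a take-it-or-leave-it price r, and

```latex
\[
\mathbb{E}[\text{revenue}] \;=\; r \cdot \Pr[v \ge r] \;=\; r(1 - r),
\]
```

which is maximized at the monopoly price r = 1/2, for expected revenue 1/4. A different value distribution would dictate a different price, which is exactly the sense in which the optimal auction depends on the assumed prior.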
In many cases, the theoretically optimal auction is too complex for direct use (especially when bidders are not symmetric). The following paper identifies many scenarios where simple and directly usable auctions do almost as well as theoretically optimal auctions.
As discussed in the previous section, the traditional economic approach to revenue-maximizing auction design posits a known prior distribution over what bidders are willing to pay, and then solves for the auction that maximizes the seller's expected revenue with respect to this distribution. But where does this distribution come from? One obvious answer is from data, in the form of previous bids by comparable bidders in auctions for comparable items. (Ostrovsky-Schwarz applied this idea to great effect at Yahoo! in 2008.) How much data is necessary and sufficient to determine what auction you should run? The following paper tackles this question for single-item auctions with asymmetric bidders. (See the STOC '19 paper of Guo-Huang-Zhang for optimal bounds.)
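As a toy illustration of learning a reserve price from past bids (an illustrative empirical-revenue-maximization sketch, not the algorithm analyzed in the paper below):

```python
def revenue_with_reserve(values, r):
    """Revenue of a second-price (Vickrey) auction with reserve r,
    on a single profile of bidder values."""
    top, second = (sorted(values, reverse=True)[:2] if len(values) > 1
                   else (values[0], 0.0))
    if top < r:
        return 0.0                # no bidder clears the reserve; item goes unsold
    return max(r, second)         # winner pays max(reserve, second-highest value)


def empirical_best_reserve(sample_profiles):
    """Choose the reserve with the highest average revenue on sampled value
    profiles (e.g., bids observed in past auctions). Candidate reserves are
    the sampled values themselves, plus zero."""
    candidates = {v for profile in sample_profiles for v in profile} | {0.0}
    avg_revenue = lambda r: (sum(revenue_with_reserve(p, r) for p in sample_profiles)
                             / len(sample_profiles))
    return max(candidates, key=avg_revenue)


# Hypothetical data: each tuple is the two bidders' values in one past auction.
samples = [(0.9, 0.3), (0.6, 0.5), (0.8, 0.1), (0.4, 0.35)]
print(empirical_best_reserve(samples))
```

The sample-complexity question is how many such past observations are needed before a rule like this is guaranteed to be near-optimal for the true, unknown distribution.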
In settings with monetary payments (like auctions), a well-known solution from economics (the VCG mechanism) can be used, in principle, to determine the best thing to do even when you don't know participants' preferences a priori. (The Vickrey auction is a special case.) The only problem is that, in many scenarios including multi-item auctions, implementing this mechanism requires solving computationally intractable problems. The field of algorithmic mechanism design studies when the promise of the VCG mechanism can be preserved (at least approximately) while using a reasonable amount of computation (see also Parts II and III of my CS364B course). The best-case scenario is when computationally efficient approximation algorithms (which take preferences as given) can be translated in a generic way to equally good incentive-compatible mechanisms (which need to elicit preferences from the participants), as in the following paper.
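For reference, here is the VCG mechanism in its standard form (a sketch; v_j(ω) denotes participant j's reported value for outcome ω):

```latex
% VCG in one line: pick the welfare-maximizing outcome, and charge each
% participant the externality it imposes on everyone else.
\[
\omega^{*} \in \operatorname*{argmax}_{\omega} \sum_{j} v_j(\omega),
\qquad
p_i \;=\; \max_{\omega} \sum_{j \neq i} v_j(\omega) \;-\; \sum_{j \neq i} v_j(\omega^{*}).
\]
```

With a single item this reduces to the Vickrey auction (the winner pays the second-highest bid), and the computational obstacle is visible in the formula: the chosen outcome and every payment require solving the underlying welfare-maximization problem.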
Another well-known drawback of the VCG mechanism (see preceding section) is that, in settings where there is an outcome-dependent cost, its revenue need not cover the cost incurred. There are "budget-balanced" alternatives, where the cost incurred is shared among the winners in the chosen outcome, but such mechanisms compromise on the quality of the outcome selected. The impossibility of maximizing social welfare while balancing the budget motivates using approximation measures to quantify the feasible trade-offs between the two objectives. This is the subject of the following paper.
The following paper introduces acyclic mechanisms, which are more flexible than the Moulin mechanisms studied in most of the cost-sharing literature, and shows that many primal-dual algorithms naturally lead to good acyclic mechanisms. (Acyclic mechanisms can also be viewed as a precursor to the Milgrom-Segal deferred-acceptance auctions mentioned above.)
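For orientation, here is a sketch of the simpler Moulin template that acyclic mechanisms generalize (an illustrative sketch, assuming a cross-monotonic cost-sharing method chi; the equal-split example is hypothetical and not from the paper):

```python
def moulin_mechanism(bids, chi):
    """Moulin's cost-sharing template (sketch). chi(i, S) is player i's cost
    share when the set S is served, assumed cross-monotonic: a player's share
    can only grow as other players drop out.

    Repeatedly drop every player whose current share exceeds their bid, then
    serve the survivors and charge them their shares."""
    remaining = set(bids)
    while True:
        rejecting = {i for i in remaining if chi(i, frozenset(remaining)) > bids[i]}
        if not rejecting:
            break
        remaining -= rejecting
    return remaining, {i: chi(i, frozenset(remaining)) for i in remaining}


# Hypothetical example: split a fixed cost of 9 equally among whoever is served.
bids = {"a": 7.0, "b": 5.0, "c": 2.0}
equal_split = lambda i, S: 9.0 / len(S)
print(moulin_mechanism(bids, equal_split))   # serves {'a', 'b'}; each pays 4.5
```

Acyclic mechanisms are more flexible about the underlying cost-sharing method, which is what lets primal-dual algorithms slot in naturally.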
Can an equilibrium of a game be computed efficiently? Arguably, this is a prerequisite for interpreting an equilibrium as a prediction of players' behavior. (If no computer can find an equilibrium in a reasonable amount of time, then presumably a bunch of uncoordinated players can't either.) The following survey gives a general overview of the area, assuming minimal prior knowledge of computational complexity:
Algorithmic game theory keeps growing in scope, and much of my recent AGT work has been focused on porting the AGT toolbox and mindset to more and more application domains. (For example, my work on blockchain protocols and web3, surveyed in the next section, grew out of this agenda and took on a life of its own.)
Teaching materials. For an overview of applications of AGT in practice, see:
In the 2020s, most of my research has focused on developing the mathematical foundations of blockchain protocols and the applications built on top of them (web3).
Overview and expository work. Turing-complete blockchain protocols approximate the idealized abstraction of a "computer in the sky" that is open access (anyone can install software or interact with already-installed software), runs in plain view, and, in effect, has no owner or operator. This technology can, among other things, enable stronger notions of ownership of digital possessions than we have ever had before. Building the computer in the sky is hard (and scientifically fascinating), and requires the synthesis of multiple disciplines, both within computer science (distributed computing, cryptography, algorithmic game theory) and beyond (mechanism design, macroeconomics, finance, political science). Interested in learning more? One starting point is the following work-in-progress video lecture series (30+ hours and counting...), based in part on my Columbia course:
If you only have an hour, here are two survey research talks, targeted at a general theory CS audience and an econ/CS audience, respectively:

Overview and expository work. The dominant paradigm in the theoretical analysis of algorithms is worst-case analysis, where the overall performance of an algorithm is summarized by its worst performance on any input. Good worst-case guarantees are great when you can get them, but for many important problems (linear programming, clustering, many online algorithms, etc.), worst-case analysis gives inaccurate advice about which algorithm to use. Going beyond worst-case analysis requires more careful modeling of the "relevant inputs" of a problem. For surveys of the many different facets of this area, written by the field's leading experts, see the edited collection:
My interest in this area originally stems from my work in auction design and analysis (see the section "Robust Revenue-Maximization" above), where worst-case analysis provides no useful information and alternative frameworks are needed. My work there proposes interpolations between worst-case and average-case analysis, which inherit robustness from the former and meaningful performance guarantees from the latter (see this survey talk). This led me to explore similar approaches to "mainstream" algorithmic problems.
The following papers are representative of my interests.
Data-driven algorithm design. The first introduces a model for reasoning about how to choose the right algorithm for an application (where an "application" is encoded by an unknown probability distribution over problem instances) based on a set of benchmark instances. The goal is to obtain good expected performance (this is the average-case aspect) no matter what the unknown distribution is (this is the worst-case aspect). (See Balcan's BWCA book chapter and Vitercik's PhD thesis for the latest developments in this now-bustling area.)
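A bare-bones version of the resulting "pick the best algorithm from benchmarks" procedure might look like this (all names are illustrative, not from the paper):

```python
def best_parameter(candidate_params, benchmark_instances, run_algorithm):
    """Return the parameter whose algorithm performs best on average over the
    benchmark instances, viewed as an i.i.d. sample from the unknown,
    application-specific distribution over inputs.

    run_algorithm(param, instance) is assumed to return a cost to minimize."""
    def avg_cost(param):
        return (sum(run_algorithm(param, inst) for inst in benchmark_instances)
                / len(benchmark_instances))
    return min(candidate_params, key=avg_cost)


# e.g., tuning a step size on benchmark inputs (hypothetical usage):
# best_step = best_parameter([0.01, 0.1, 1.0], my_instances, run_gradient_descent)
```

A central question in the model is then one of sample complexity: how many benchmark instances are needed before the empirically best parameter is near-optimal for the unknown distribution, as a function of the richness of the algorithm family?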
Much of my work in the 2010s builds on or contributes to the theory of machine learning.
Sample complexity of learning auctions and algorithms. Models and tools from the theory of offline (PAC) learning---in which a learning algorithm is given a batch of labeled data and is responsible for computing a classifier that accurately labels as-yet-unseen data from the same distribution---are central to my work on data-driven auction design (see the "Auction and Mechanism Design" section above for details):
Papadimitriou describes the "extroverted side of complexity theory" as "an applied mathematical discipline whose function is to gain insights into the problems of the natural, social, and applied and engineering sciences."
Expository work. The following monograph was inspired by all the great results in communication complexity that explain why so many natural algorithmic goals are impossible.