Braess's paradox and the price of anarchy

Braess’s Paradox

The scenario is simple: say one has two cities, \(A\) and \(B\), and traffic flows from \(A\) to \(B\). To get there, drivers in \(A\) can go through one of two intermediary cities, \(P\) and \(Q\), each connected by a road from \(A\) and a road to \(B\). At rush hour, we have \(x\) cars wanting to go from \(A\) to \(B\); in this example, suppose \(x = 10000\). Some of these roads get congested during rush hour, so the more cars use them, the longer they take to traverse. Others are modern, and even at rush hour they withstand the traffic well: the time to traverse them doesn’t change.

To model congestion, we say that a congested road takes time proportional to the number of cars using it. For instance, the road from \(A\) to \(P\) might take \(\frac{p}{100}\) minutes to traverse, where \(p\) is the number of cars using it, so the more cars on the road, the slower the trip. On the other hand, the road from \(P\) to \(B\) might be more modern, and thus its time of traversal is always \(120\) minutes, regardless of how many cars are using it.

In essence, modeling this scenario with a graph, one has the following:
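As a rough illustration (a sketch of my own, not part of the original description), the network can be written down as a map from roads to cost functions, each taking the number of cars on that road and returning the traversal time in minutes:

```python
# A rough sketch (my own, not from the article): the network as a map from
# roads to cost functions. Each cost function takes the number of cars on
# that road and returns the traversal time in minutes.

edges = {
    ("A", "P"): lambda cars: cars / 100,  # congested: grows with traffic
    ("P", "B"): lambda cars: 120,         # modern road: always 120 minutes
    ("A", "Q"): lambda cars: 120,         # modern road: always 120 minutes
    ("Q", "B"): lambda cars: cars / 100,  # congested: grows with traffic
}

def route_time(route, loads):
    """Total travel time along a route, given the number of cars per road."""
    return sum(edges[road](loads.get(road, 0))
               for road in zip(route, route[1:]))

# For example, if 6000 cars went through P and 4000 through Q:
loads = {("A", "P"): 6000, ("P", "B"): 6000, ("A", "Q"): 4000, ("Q", "B"): 4000}
print(route_time(("A", "P", "B"), loads))  # 180.0
print(route_time(("A", "Q", "B"), loads))  # 160.0
```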

The natural solution

We can try to find the Nash equilibrium here, calling \(p\) the number of cars that go through \(P\) and \(q\) the number of cars that go through \(Q\). The time it’ll take for someone to go from \(A\) to \(B\) via \(P\) is \(\frac{p}{100} + 120\) minutes, while if he uses \(Q\) it will be \(\frac{q}{100} + 120\) minutes.

We know \(p+q = x = 10000\). If \(p < q\), the route through \(P\) is faster, so any reasonable driver currently going through \(Q\) would rather switch to \(A \to P \to B\); hence \(p < q\) cannot be a Nash equilibrium. By symmetry, \(p > q\) can’t be one either. The Nash equilibrium here is \(p = q = \frac{x}{2} = 5000\), and thus the time it’ll take for any driver to go from \(A\) to \(B\) during rush hour is \(\frac{x}{200} + 120 = 170\) minutes, independent of which road he takes. As with any Nash equilibrium, once we have \(p = q = \frac{x}{2}\), there is no incentive for any driver to change his planned course, because he can only be worse off (i.e. take more time) if he does.
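As a quick numeric check of this claim (a sketch of my own, using the cost functions described above): with the even split both routes take \(170\) minutes, and a single driver who switches sides only makes his own trip longer.

```python
# A quick numeric check (my own sketch) of the equilibrium: with the even
# split p = q = 5000, both routes take 170 minutes, and a single driver who
# switches from one route to the other only makes his own trip longer.

def time_via_P(p):
    return p / 100 + 120      # congested A -> P, then the constant P -> B

def time_via_Q(q):
    return q / 100 + 120      # constant A -> Q, then the congested Q -> B

p = q = 5000
print(time_via_P(p), time_via_Q(q))   # 170.0 170.0
# One driver leaves the P-route for the Q-route, so Q now carries 5001 cars:
print(time_via_Q(q + 1))              # 170.01 -- strictly worse for him
```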

This would be the optimal solution (meaning it minimizes the expected time for any particular car going from \(A\) to \(B\)) if one could choose which route each driver took. Indeed, this is also the solution the system will arrive at naturally: every car decides to go through the less congested path, so the congestion of the two paths balances out naturally.

“Improving” the system

One could try to improve this system by adding another path between \(P\) and \(Q\) - a very modern path, which does not get congested, and takes a short time of, say, \(10\) minutes to travel through. We would like to see that with added capacity in this network, drivers will take less time to go from \(A\) to \(B\) - certainly one would not expect that adding capacity would make things worse. The graph model now looks like this:

Let us now consider what any rational driver would do. If the driver picks \(A \to P \to Q \to B\), his time to travel will be \(\frac{p}{100} + 10 + \frac{q}{100}\). If he picks the route \(A \to P \to B\), his time to travel will remain as before, \(\frac{p}{100} + 120\), and if he picks \(A \to Q \to B\), his time to travel will be \(120 + \frac{q}{100}\).

Let’s look at the worst that can happen if one chooses to use this new road between \(P\) and \(Q\). No rational driver would pick the road \(A \to Q\): it takes \(120\) minutes, whereas going from \(A\) to \(P\) and then to \(Q\) takes at most \(\frac{10000}{100} + 10 = 110\) minutes, even if every single car is on the road \(A \to P\). Since no rational driver will pick \(A \to Q\), every driver is going to pick \(A \to P\), and thus \(p = x\).

When any driver reaches \(P\), he will never use the road \(P \to B\): it takes \(120\) minutes, whereas the route \(P \to Q \to B\) takes at most \(10 + \frac{10000}{100} = 110\) minutes. So every rational driver takes \(P \to Q\), and in this Nash equilibrium every driver follows the route \(A \to P \to Q \to B\). In particular, every driver uses the road \(Q \to B\), so \(q = x\).

In all, the route \(A \to P \to Q \to B\), which is the one every single rational driver will take, will require \(\frac{10000}{100} + 10 + \frac{10000}{100} = 210\) minutes to traverse, which is much worse than the \(170\) minutes it would have taken before we added this path \(P \to Q\)!
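A small sketch of my own confirms this numerically: with all \(10000\) cars on \(A \to P \to Q \to B\), the trip takes \(210\) minutes, and a lone driver who deviates to either of the old routes would take \(220\) minutes, so nobody has an incentive to change.

```python
# A sketch of my own checking the new equilibrium numerically: with all
# 10000 cars on the route A -> P -> Q -> B, both congested roads carry the
# full traffic, and no single driver gains by switching to another route.

X = 10000   # p = cars on A -> P, q = cars on Q -> B; here p = q = X

def t_APB(p):
    return p / 100 + 120

def t_AQB(q):
    return 120 + q / 100

def t_APQB(p, q):
    return p / 100 + 10 + q / 100

print(t_APQB(X, X))   # 210.0 minutes for everyone at the new equilibrium

# A lone deviator still shares the congested road(s) on his new route with
# everybody else, so both alternatives are slower:
print(t_APB(X))       # 220.0 -- switching to A -> P -> B is worse
print(t_AQB(X))       # 220.0 -- switching to A -> Q -> B is worse
```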

Analysis

What we see here is an example where adding capacity to a network decreases its performance. When we gave the drivers more freedom to choose their routes, we did them no favors - they will now take more time to arrive at their destination. If we could direct their behavior centrally, we could make their trip from \(A\) to \(B\) take \(170\) minutes (by simply ignoring this new path \(P \to Q\)), but if we give them the choice of what to do, they will make their lives harder. And this is not because they are unintelligent or missing something: this Nash equilibrium is simply suboptimal, even though we assumed rational, perfectly logical drivers.

This gap between the outcome a central planner could direct and the equilibrium the drivers arrive at on their own is called the price of anarchy.

Price of anarchy

The intuition is the one given in the preceding paragraphs. In essence, the price of anarchy is the price of letting all players in a game act as they wish, as opposed to following instructions aimed specifically at the global optimum. In this case, the optimum meant the lowest travel time for any driver or, to be more formal, the lowest expected time it takes to travel from \(A\) to \(B\) for any given car. In the general case, we may have other metrics to determine what an “optimal” scenario is.

Formalisms

We’ll state a formal definition of this, which will require some basic game theory machinery.

Definition. A (non-cooperative) game is defined as a \(3\)-tuple \((n, S, f)\), where \(n\) is the number of players and \(S = S_1 \times \dots \times S_n\) is the set of strategy profiles, with player \(i\) allowed to use the strategies in \(S_i\). The function \(f: \mathbb{N}_{\le n} \times S_1 \times \dots \times S_n \to \mathbb{R}\) is the utility function, where \(f(i, s_1, \dots, s_n)\) is the utility of player \(i\) when each player \(j\) uses the strategy \(s_j\), \(\forall\ 1 \le j \le n\).
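To make the formalism concrete, here is a tiny sketch of my own: a two-driver version of the routing game, with made-up congestion numbers, written as an \((n, S, f)\) triple.

```python
# A tiny concrete instance of the (n, S, f) formalism (my own sketch): a
# two-driver version of the routing game with made-up congestion numbers.
# Each driver picks a route, and utility is minus his travel time.

n = 2
S = (("P", "Q"), ("P", "Q"))   # S_1 x S_2: each driver chooses a route

def travel_time(route, strategies):
    load = strategies.count(route)   # how many drivers share this route
    return 50 * load + 120           # toy numbers: 50 min per car + 120 min

def f(i, *strategies):
    """Utility of player i (1-indexed, as in the definition)."""
    return -travel_time(strategies[i - 1], strategies)

print(f(1, "P", "P"))   # -220: both drivers crowd onto P
print(f(1, "P", "Q"))   # -170: the drivers split across the two routes
```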

We omit here the formal definition of a (mixed) strategy but, intuitively, it is an assignment of probabilities to the choices a player may make in a given situation. A pure strategy is an actual, non-probabilistic decision, i.e. it fully describes the behavior of a player at any given time.

Next, we’ll define a Nash equilibrium. Intuitively, this is a state of a game in which no single player gains by changing his strategy, since his utility cannot increase if he does.

Definition. Given a game \(G = (n, S, f)\), a choice of strategies for each player \((s_1, \dots, s_n)\) is a Nash equilibrium if, for every \(1 \le j \le n\) and every \(s'_j \in S_j\),

\[ f(j, s_1, \dots, s_n) \ge f(j, s_1, \dots, s_{j-1}, s'_j, s_{j+1}, \dots, s_n) \]

We write \(Eq(G)\) for the set of all such Nash equilibria \(s = (s_1, \dots, s_n) \in S\).

What this inequality says is that, for every player, changing his strategy while the other players’ strategies remain the same is disadvantageous. Thus, every player stays in this “equilibrium” state, and has no incentive to move.
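For finite games, this definition translates directly into a brute-force check over pure strategy profiles. The sketch below (mine, reusing the toy two-driver routing game from the previous snippet so it stays self-contained) enumerates all profiles and keeps those in which no unilateral deviation raises a player’s utility.

```python
# A brute-force check of the definition (my own sketch, self-contained):
# a profile is a Nash equilibrium if no player can raise his utility by
# changing only his own strategy. The game is the toy two-driver routing
# game from the previous sketch.

from itertools import product

S = (("P", "Q"), ("P", "Q"))

def f(i, *s):
    load = s.count(s[i - 1])          # drivers sharing player i's route
    return -(50 * load + 120)         # utility = minus travel time

def is_nash(profile):
    for j, S_j in enumerate(S, start=1):
        for alt in S_j:
            deviated = list(profile)
            deviated[j - 1] = alt
            if f(j, *deviated) > f(j, *profile):
                return False          # player j has a profitable deviation
    return True

Eq = [s for s in product(*S) if is_nash(s)]
print(Eq)   # [('P', 'Q'), ('Q', 'P')]: the drivers split across the routes
```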

We are trying to maximize or minimize some sort of “value” of a game - in the previous example, to minimize the expected time it takes any driver to get from \(A\) to \(B\) - so, for a given choice of strategies, we should associate a number that indicates how much we like the state of the game. Let’s formalize this:

Definition. A health function is a function \(H: S_1 \times \dots \times S_n \to \mathbb{R}\). It is also usually called a social welfare function.

Now we are ready to define the price of anarchy.

Definition. Given a game \(G = (n, S, f)\), and a health function \(H : S \to \mathbb{R}\), we define the price of anarchy \(PoA\) as \[ PoA(G) = \frac{\max_{s \in S} H(s)}{\min_{s \in Eq(G)} H(s)} \]

That is, the best possible outcome if one is allowed to choose every player’s strategy, divided by the worst possible equilibrium the game may naturally arrive at.
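As a worked example (again a sketch of my own), take a two-driver version of the Braess network with the shortcut and the same toy costs as before. Since the natural health function here - total travel time - is a cost to be minimized rather than a welfare to be maximized, the ratio below is taken the mirrored way round: the cost of the worst equilibrium over the best achievable cost.

```python
# A worked example of my own: a two-driver version of the Braess network
# with the shortcut, using toy costs of 50 * load for the congested roads,
# 120 for the modern ones and 10 for the shortcut. Total travel time is a
# cost to be minimized, so the ratio is taken as worst equilibrium cost over
# best achievable cost (the mirrored form of the definition above).

from itertools import product

ROUTES = {
    "APB":  [("A", "P"), ("P", "B")],
    "AQB":  [("A", "Q"), ("Q", "B")],
    "APQB": [("A", "P"), ("P", "Q"), ("Q", "B")],
}
COST = {
    ("A", "P"): lambda load: 50 * load,
    ("P", "B"): lambda load: 120,
    ("A", "Q"): lambda load: 120,
    ("Q", "B"): lambda load: 50 * load,
    ("P", "Q"): lambda load: 10,
}

def edge_loads(profile):
    loads = {}
    for route in profile:
        for road in ROUTES[route]:
            loads[road] = loads.get(road, 0) + 1
    return loads

def travel_time(i, profile):
    loads = edge_loads(profile)
    return sum(COST[road](loads[road]) for road in ROUTES[profile[i]])

def H(profile):   # health function: total travel time (lower is better)
    return sum(travel_time(i, profile) for i in range(len(profile)))

def is_nash(profile):
    for i in range(len(profile)):
        for alt in ROUTES:
            deviated = profile[:i] + (alt,) + profile[i + 1:]
            if travel_time(i, deviated) < travel_time(i, profile):
                return False
    return True

profiles = list(product(ROUTES, repeat=2))
equilibria = [s for s in profiles if is_nash(s)]
print(equilibria)                                  # [('APQB', 'APQB')]
print(max(H(s) for s in equilibria))               # 420: worst equilibrium
print(min(H(s) for s in profiles))                 # 340: coordinated optimum
print(max(H(s) for s in equilibria) / min(H(s) for s in profiles))  # ~1.24
```

The unique equilibrium has both drivers on the shortcut route, just as in the full-scale example, and the ratio \(420/340 \approx 1.24\) mirrors the \(210\) versus \(170\) minutes we saw above.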

Now we will see an application of the concept of price of anarchy to routing problems, and how we can obtain a bound on “how bad” Braess’s paradox can get.

Selfish routing, Braess’s paradox and the price of anarchy

One question we could ask ourselves is: how badly can Braess’s paradox hurt us? If we leave the players to pick their own strategies, how bad can the health of the resulting game be, compared to the best solution we could enforce ourselves?

Specifically, let us restrict ourselves to routing games like the one above, on arbitrary graphs. Let us also assume that the edge costs are linear, in the sense that every road \(e\) has cost \(w(e) = c_e + f_e d_e\) for some constants \(c_e, d_e\), where \(f_e\) is the flow of traffic going through that road. This is the case for the example at the beginning of this article.

The surprising result is that the price of anarchy is never more than \(4/3\), and this bound is tight. This means that no matter what the players do, when they reach an equilibrium, this equilibrium will be at most \(33\%\) worse than the best possible solution the system could achieve. This result is due to Roughgarden and Tardos (1999).
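We can sanity-check this on the example above (a sketch of my own, using a coarse grid search rather than anything from the papers): a coordinated planner splitting the \(10000\) cars over the three routes can actually do slightly better than \(170\) minutes on average by sending a little traffic over the shortcut, and the \(210\) minutes at equilibrium is still comfortably within the \(4/3\) bound.

```python
# A numeric sanity check of my own on the article's example. Split the 10000
# cars as a on A -> P -> B, b on A -> Q -> B and the rest on the shortcut
# route A -> P -> Q -> B, minimize the average travel time by a coarse grid
# search, and compare the 210-minute equilibrium against the 4/3 bound.

N = 10000

def average_time(a, b):
    c = N - a - b                 # cars taking the shortcut route
    ap, qb = a + c, b + c         # loads on the two congested roads
    total = (a * (ap / 100 + 120)
             + b * (120 + qb / 100)
             + c * (ap / 100 + 10 + qb / 100))
    return total / N

best = min(average_time(a, b)
           for a in range(0, N + 1, 100)
           for b in range(0, N + 1 - a, 100))
print(best)                   # 169.5: the planner even uses the shortcut a bit
print(210 / best)             # ~1.239
print(210 / best <= 4 / 3)    # True: within the 4/3 bound
```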

A short proof of this fact is due to José R. Correa, Andreas S. Schulz, and Nicolás E. Stier-Moses, in their 2008 paper “A geometric approach to the price of anarchy in nonatomic congestion games”.

They actually prove more general theorems: even if road costs are allowed to be non-linear, one can still obtain bounds on the price of anarchy.

Conclusion

I hope this has been interesting; it certainly was for me. I’m entirely new to the field of algorithmic game theory, but I hope to write a few more articles on the subject in the near future. Especially interesting scenarios, discussed by Stier-Moses in his talk, are the variations of Hotelling’s location game on arbitrary graphs. Some further reading on the area can be found in the free book Algorithmic Game Theory by Nisan et al.

It is also interesting to note that this inefficiency is not due to a peculiar shape of the network. The worst case for the price of anarchy already shows up in a much simpler network, called a Pigou network, which is nothing but two parallel arcs between two nodes, and the phenomenon applies to any family of cost functions, not just linear ones. Some examples (using infinitesimal players, as opposed to atomic ones) can be found in these class notes from the Greek National Technical University.
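For concreteness, here is a sketch of my own of Pigou’s example with unit total flow: one link has constant cost \(1\), the other costs exactly the fraction of the traffic using it. At equilibrium everyone uses the congestible link, a coordinated split does better, and the ratio is exactly the \(4/3\) from before.

```python
# A sketch of my own of Pigou's example with unit total flow: one link has
# constant cost 1, the other costs exactly the fraction x of traffic on it.
# At equilibrium everyone uses the second link (cost 1); splitting the
# traffic lowers the average cost, and the ratio is exactly 4/3.

def average_cost(x):              # x = fraction of flow on the variable link
    return x * x + (1 - x) * 1

equilibrium = average_cost(1.0)   # all flow on the variable-cost link
optimum = min(average_cost(k / 1000) for k in range(1001))
print(equilibrium, optimum, equilibrium / optimum)   # 1.0 0.75 ~1.333
```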

I’d love to hear comments on the matter :)