The correlated equilibrium is a much-underappreciated application of game theory to situations involving uncertainty. As such, it is of huge import to me, though apparently not to those who want to know about “strategic advantages” and similar nonsense in situations without much subtlety.
The heart of the correlated equilibrium is the presence of exogenously generated “signals” that provide “instructions” for the players to follow. So the signal might be “if full moon, buy” or “if green light, go.” (This is, in a sense, why and how “astrology” works: not because the movement of the stars necessarily “causes” people to behave a certain way, but because it can coordinate people. If you don’t believe me, try and see if people show up to work at most workplaces on Sundays.) The efficacy of the signals varies depending on the specifics of the game, however. In Prisoners’ Dilemma-type games, defection is unconditionally the better choice, so the signals are irrelevant. In pure coordination games, the more precise the signal, the better: if you want to meet others at some locale, knowing the city is more useful than knowing the state, knowing the block better than the city, and knowing the address better still than the block. In Hawk-Dove games (aka “chicken” or “battle of the sexes” games), however, things are a bit more complicated.
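To put a rough number on “the more precise the signal, the better” in the pure coordination case, here is a toy sketch. The candidate-spot counts are made up for illustration, and the players are assumed to pick uniformly among whatever spots the signal leaves plausible.

```python
# Toy model of signal precision in a pure coordination ("meet-up") game.
# The counts of plausible meeting spots are made up for illustration.
def meeting_prob(spots_left_after_signal):
    # Both players pick uniformly among the remaining candidate spots,
    # so they coordinate with probability 1 / (number of spots).
    return 1 / spots_left_after_signal

print(meeting_prob(50))  # "know the state":  50 plausible spots -> 0.02
print(meeting_prob(10))  # "know the city":   10 plausible spots -> 0.1
print(meeting_prob(1))   # "know the address": one spot -> 1.0, certain meeting
```

Each refinement of the signal shrinks the candidate set, and the coordination probability rises accordingly.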
In Hawk-Dove games, you’d rather “lose” than wind up in the “bad” outcome. (In contrast, in the Prisoners’ Dilemma, you’d rather wind up in the “bad” outcome than lose.) The benefit of the correlated equilibrium is that, by providing signals with just enough uncertainty, you can help the players avoid the “bad” outcome and improve their net welfare. If one player gets the signal to “defect,” the other player will always get the signal to “cooperate.” In other words, a player is certain to win when he follows a “defect” signal. When a player is given the signal to “cooperate,” however, the other player may have gotten the signal either to cooperate or to defect, so the player might wind up either losing or in the “meh” outcome. Because the player does not know what signal the other side has gotten, he is discouraged from cheating: if he does, there is a significant probability that the “bad” outcome, which he wants to avoid above all, might occur–if the other player has been given the signal to “defect” and is following it. As such, the signals (or rather, the process that generates them) effectively become the “institutions” shaping the game. The signals can be rigged: the mixture of “cooperate” and “defect” for each player can be anything, as long as the players find it to their advantage to follow the signals rather than ignore them altogether.
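The logic above can be checked numerically. A minimal sketch, using the standard textbook chicken payoffs (these numbers are illustrative, not from the post: 6 each for the “meh” outcome, 7 for a win, 2 for a loss, 0 each for the “bad” outcome) and a device that draws the three joint instructions other than mutual defection with equal probability:

```python
# A minimal check of the correlated equilibrium in a chicken game, with
# standard textbook payoffs (illustrative, not from the post).
C, D = 0, 1  # Cooperate ("dove"), Defect ("hawk")

# payoff[(row_action, col_action)] = (row payoff, col payoff)
payoff = {
    (C, C): (6, 6),  # the "meh" outcome
    (C, D): (2, 7),  # row loses, column wins
    (D, C): (7, 2),  # row wins, column loses
    (D, D): (0, 0),  # the "bad" outcome both want to avoid
}

# The signal device draws one of three joint instructions, equally likely.
# (D, D) is never drawn: the device rules out the "bad" outcome entirely.
signal_dist = {(C, C): 1/3, (C, D): 1/3, (D, C): 1/3}

def expected_payoff_if(own_signal, played_action):
    """Row player's expected payoff, given its own signal, when it plays
    `played_action` and the column player obeys its signal."""
    total = sum(p for (s_r, _), p in signal_dist.items() if s_r == own_signal)
    return sum(
        (p / total) * payoff[(played_action, s_c)][0]
        for (s_r, s_c), p in signal_dist.items()
        if s_r == own_signal
    )

# Incentive compatibility: obeying the signal beats deviating, both ways.
assert expected_payoff_if(D, D) >= expected_payoff_if(D, C)  # 7.0 vs 6.0
assert expected_payoff_if(C, C) >= expected_payoff_if(C, D)  # 4.0 vs 3.5

avg = sum(p * payoff[s][0] for s, p in signal_dist.items())
print(round(avg, 3))  # 5.0, better than the mixed Nash equilibrium's 4.667
```

The “defect” signal is fully informative (the other side is certainly cooperating), while the “cooperate” signal leaves a 50/50 guess, and that residual uncertainty is exactly what keeps obedience a best response.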
The key feature that keeps the correlated equilibrium honest is the presence of uncertainty. If you have been given the signal to cooperate, you do not want to defect because you do not know whether the other player has been told to cooperate or to defect. What happens if that uncertainty is eliminated? There is nothing you can do if the other player has been told to defect: if the game is working as designed, he is always supposed to win when he defects, and there is nothing you can do to change the outcome. BUT you can always defect when he has been told to cooperate. He expects either to lose or to wind up in the “meh” outcome; you can ensure that he always loses, at least until he figures it out. In other words, if you gain such an “informational advantage,” you can force the other player into just two outcomes–“defect” will always get him a “win,” but “cooperate” will ensure that he loses.
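What this “informational advantage” looks like numerically, continuing with the same illustrative chicken payoffs (6 “meh,” 7 win, 2 loss, 0 “bad”; not from the post):

```python
# One player peeks at the other's signal and best-responds to it.
C, D = 0, 1  # Cooperate, Defect
payoff = {(C, C): (6, 6), (C, D): (2, 7), (D, C): (7, 2), (D, D): (0, 0)}
signal_dist = {(C, C): 1/3, (C, D): 1/3, (D, C): 1/3}

# The column player ("victim") obeys its signal; the row player ("exploiter")
# sees that signal and best-responds to the victim's action.
exploiter_payoff = victim_payoff = 0.0
for (_, s_col), p in signal_dist.items():
    best = max((C, D), key=lambda a: payoff[(a, s_col)][0])
    exploiter_payoff += p * payoff[(best, s_col)][0]
    victim_payoff += p * payoff[(best, s_col)][1]

print(round(exploiter_payoff, 3))  # 5.333 -- up from 5.0 under honest play
print(round(victim_payoff, 3))     # 3.667 -- down from 5.0: the victim wins
                                   # on "defect" but now always loses on "cooperate"
```

The exploiter cooperates only when the victim has been told to defect (conceding the inevitable loss) and defects every time the victim has been told to cooperate, which is precisely the two-outcome squeeze described above.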
This subverts the rationale for the correlated equilibrium: just playing a regular old mixed strategy, without paying attention to the signals, would yield the other player a better payoff than following the signal (and predictably losing). Conceivably, the other player might adopt a more complex strategy in which he mixes only when given the signal to cooperate, in order to shake off your exploitation of your ill-gotten information. While the math becomes a bit messy, it is fairly easy to show that, in the course of eliminating the gains from the ill-gotten information, the average payoff for both sides becomes smaller than under the “ignorant” correlated equilibrium. The bottom line is that, regardless of what happens, the neat “institution” that served everyone reasonably well falls apart once people know “too much,” at everyone’s expense.
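The welfare comparison can be sketched in one place. Keeping the same illustrative chicken payoffs, here are average payoffs in three regimes: the honest correlated equilibrium, one-sided peeking, and a fallback in which both sides abandon the device for the ordinary mixed Nash equilibrium (a simplification of the messier “mix only on cooperate” strategies mentioned above, but it shows the same direction of travel):

```python
# Average payoffs in three regimes, same illustrative chicken payoffs.
C, D = 0, 1  # Cooperate, Defect
payoff = {(C, C): (6, 6), (C, D): (2, 7), (D, C): (7, 2), (D, D): (0, 0)}
signal_dist = {(C, C): 1/3, (C, D): 1/3, (D, C): 1/3}

# 1. Honest correlated equilibrium: both players obey the device.
ce = [sum(p * payoff[s][i] for s, p in signal_dist.items()) for i in (0, 1)]

# 2. The row player peeks at the column player's signal and best-responds,
#    while the column player naively keeps obeying.
peek = [0.0, 0.0]
for (_, s_col), p in signal_dist.items():
    a = max((C, D), key=lambda r: payoff[(r, s_col)][0])
    peek[0] += p * payoff[(a, s_col)][0]
    peek[1] += p * payoff[(a, s_col)][1]

# 3. The victim gives up on the device; both revert to the mixed Nash
#    equilibrium (defect with probability 1/3 for these payoffs).
q = 1 / 3
mixed = [0.0, 0.0]
for a_r, p_r in ((C, 1 - q), (D, q)):
    for a_c, p_c in ((C, 1 - q), (D, q)):
        mixed[0] += p_r * p_c * payoff[(a_r, a_c)][0]
        mixed[1] += p_r * p_c * payoff[(a_r, a_c)][1]

print([round(x, 3) for x in ce])     # [5.0, 5.0]
print([round(x, 3) for x in peek])   # [5.333, 3.667]
print([round(x, 3) for x in mixed])  # [4.667, 4.667]
```

Once the victim stops obeying, the exploiter’s advantage evaporates and both players land below the 5.0 of the honest device, which is the sense in which knowing “too much” costs everyone.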
I think there is a huge set of implications from this line of thinking. The promise of “data science” and “predictive analytics,” literally, is to make everything predictable: give us enough data and we will help you figure out exactly what to expect, conditional on the situation. We all live in a universe where we respond to a whole bunch of signals, of course. We know that if we see, say, someone who is a Democrat, he’d act in a certain fashion in a particular set of circumstances. But there is enough uncertainty that his behavior cannot be predicted with precision: so, in the lingo of the correlated equilibrium, there is some probability that he might “defect” or “cooperate,” and since we want to avoid the “bad” outcome where we both “defect,” we should just “cooperate.” By knowing precisely what “signal” he has gotten out of the situation at hand, however, we can predict whether he will “defect” or “cooperate,” choose accordingly, and benefit in the short run–that is, until he gets wise and changes his behavior so as to make our informational advantage irrelevant, to the disadvantage of us both. In other words, the more information there is, the harder it is to sustain “let’s agree to disagree because we don’t know any better”–because “we don’t know any better” no longer applies.
I was thinking about this when I was reading this blog post. More information makes the world increasingly predictable and eliminates the uncertainty in which “cooperation” can take place. So-called “alpha” in investing, which depends on informational asymmetries, falls apart not just because of simple textbook arbitrage, but because your competitors can better anticipate your moves and make countermoves that nullify your potential gains, rather than hold back, deterred by their uncertainty as to how you might respond (so it may not be TOO different from textbook arbitrage in the abstract). The room for tacit cooperation for mutual profit shrinks. Market purists might think this a welcome development: they see the universe as a zero-sum game where the profits that do not accrue to the firms/investors somehow automatically flow to the consumers. But here, it is not obvious how consumers benefit, especially if one considers the likely next step, in which investors find ways to ignore the information, or at least nullify the advantages that their competitors accrue from it. Applied to politics, the problems seem much starker: the uncertainty that provided cover under which politicians from across the political divide could cooperate has been taken away, forcing them to take up “obvious” and “confrontational” stances, ironically with increasingly more of their “real” activities shrouded in secrecy. Not an obviously “positive” development, I would think.