Sandeep Baliga at the Cheap Talk blog has an outstanding summary of the contributions of Bengt Holmström and Oliver Hart, the latest winners of the Nobel Prize in Economics.
The Holmström-Hart Nobel is a bit personal to me, albeit through an indirect route, via one of my former teachers, Paul Milgrom. Paul liked to talk about how he came to graduate school not for a PhD but for an MBA, because he wanted to be an actuary, and how he found ways to apply actuarial thinking to economic theory. Given that the contributions by Holmström and Milgrom I found most enlightening brought statistics and epistemology together into a theory of incentives, this is an apt starting point for my reflection on their work.
The example by Baliga is an excellent illustration of the basic problem: a worker at a burger joint does two things, one easily observable (the number of burgers sold), the other not so observable (the work in the kitchen). By tying the incentives only to the number of burgers sold, the principal winds up discouraging kitchen work, and in so doing, subverting his own interests. The solution is a low-powered incentive scheme that depends comparatively little on burger sales.
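The crowding-out logic can be sketched as a toy numerical model, in the spirit of Holmström and Milgrom's multitask analysis. The functional forms and parameters below are my own inventions, purely for illustration:

```python
import numpy as np

# Toy multitask model (illustrative, not from Baliga's post): the worker
# picks efforts e1 (selling burgers, observed) and e2 (kitchen upkeep,
# unobserved) and is paid wage = salary + b * e1. Effort costs are
# substitutable, c(e1, e2) = 0.5 * (e1 + e2)**2, so rewarding e1
# pulls effort away from e2.

def worker_best_response(b, grid=np.linspace(0, 2, 201)):
    """Worker maximizes b*e1 - cost over a grid of effort pairs."""
    best, best_u = (0.0, 0.0), -np.inf
    for e1 in grid:
        for e2 in grid:
            u = b * e1 - 0.5 * (e1 + e2) ** 2
            if u > best_u:
                best_u, best = u, (e1, e2)
    return best

for b in [0.0, 0.5, 1.0]:
    e1, e2 = worker_best_response(b)
    print(f"commission b={b:.1f}: burger effort={e1:.2f}, kitchen effort={e2:.2f}")
```

With substitutable costs, any positive commission on burgers drives kitchen effort to zero: exactly the self-subversion the example describes, and the reason the sketch's fix is a lower-powered contract.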
But this opens up a whole slew of other questions. Two pop into my mind immediately, because they concern my eventual work in political science, especially with regard to the relationship between voters and elected officials. First, does the principal really know where the unseen parts of the business are? Second, how does the principal know whether the kitchen is being genuinely looked after?
In the legislative arena, the functional equivalent of burger sales comes from the public record of legislative accomplishments and actions: the number of bills, the voting record, and so on. Yet these constitute a comparatively small (and often easily "faked") part of legislative work. Fenno and Mayhew, back in the 1960s and 1970s, wrote about how much legislative insiders value the "gnomes" (to borrow Mayhew's terminology) who slave away at the unseen aspects of legislative and policymaking work without public accolades, rewarding them in currencies that are particularly valuable within the legislature. Yet this understanding is not shared by members of the voting public, nor, apparently, by political scientists lately. Very few regular voters appreciate how complicated the inner workings of the legislative process are, or the kind of hidden negotiations and compromises needed to put workable bills and coalitions together, especially bipartisan coalitions. Still, there is an implicit understanding that, without legislative outcomes, something isn't being done right, that their agents are shirking somewhat and somehow in ways that prevent production. Perhaps they are right in their suspicion.
More problematic might be political science's obsession with putting data in place of theory (notwithstanding the immortal Charlie Chan quote, "Theory, like fog on eyeglass, obscures facts," because "data" is not the same as "facts"). The visible part of the legislative record, often padded by "faked" votes designed only to put positions on the record (for example, the increasingly innumerable but meaningless "procedural" votes in the Senate, designed mainly to show publicly who is on which side), is used to generate statistics that purport to measure things like "ideology," which in turn are assumed to be homologous to a Euclidean space and are fitted into models. Since the measures are derived from the observed record, they describe what goes on fairly accurately, but with significant exceptions that change over time, exceptions usually dismissed as mere "error" and "nuisance."
Fenno and Mayhew saw things differently. Granted, they didn't have the kind of legislative data, or the tools for analyzing it, that their modern counterparts do. (This is literally true: the changes in congressional rules around 1975 immediately tripled the number of recorded votes in the House, for example, coinciding neatly with the changes in House organization that followed the ouster of Speaker McCormack, engineered by the liberal Democrats.) They saw the paucity of data that precluded data-intensive analysis on their part as a normal part of the political process, in which the seen and the unseen coexist and the unseen aspects of politics are deemed important even by those who do not know the specifics, e.g. the voters. That brings the question back to what prompted Holmström to wonder why so few contracts are written on the "sufficient statistic" criterion, and as such it echoes Max Weber's argument from a century earlier (to be fair, there's a paper by Oliver Williamson on this very point, if I could find it).

Weber's argument was twofold. First, the compensation for the "professional" ("bureaucrat" in his terminology) should be low-powered, set without much regard for the visible indicators of performance, because how exactly the professional "performs" is too noisy and complicated to measure with precision. In turn, the professional should develop a code of ethics and honor, "professional conduct" literally, whereby the work is carried out dutifully and faithfully without regard for the incentives in the contract. If you will, the mail will be delivered with utmost effort, as a point of honor, through rain, snow, or sleet, because that's what mailmen do, so to speak. Most important, both must be part of the common knowledge: the professionals "know" that they will be paid no matter what, while the principals "know" that the professionals are doing their utmost, even though the results are not necessarily obvious.
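Weber's "low-powered pay for noisy work" intuition falls out of the standard agency model almost mechanically. In the textbook CARA-normal setup associated with Holmström and Milgrom (measured output x = e + ε with ε ~ N(0, σ²), risk aversion r, quadratic effort cost e²/2), the optimal linear piece rate is β* = 1/(1 + rσ²); the parameter values below are chosen only for illustration:

```python
# Optimal linear incentive in the CARA-normal agency model.
# Measured output is x = e + noise, noise ~ N(0, sigma2); the agent has
# CARA risk aversion r and effort cost e**2 / 2. The textbook closed
# form for the optimal piece rate is beta = 1 / (1 + r * sigma2):
# the noisier the performance measure, the lower-powered the pay.

def optimal_piece_rate(r: float, sigma2: float) -> float:
    return 1.0 / (1.0 + r * sigma2)

for sigma2 in [0.0, 1.0, 10.0, 100.0]:
    beta = optimal_piece_rate(r=2.0, sigma2=sigma2)
    print(f"measurement noise sigma^2 = {sigma2:6.1f}  ->  piece rate beta = {beta:.3f}")
```

As σ² grows, β* falls toward zero and the contract converges to pure salary: Weber's bureaucrat, paid without regard to visible indicators because those indicators are too noisy to be worth contracting on.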
In other words, I don’t know what exactly they are doing, but whatever it is, I know it’s important, dang it.
This is a difficult equilibrium to sustain, with a LOT depending on the players' beliefs, and potentially open to a lot of abuse and suspicion. Mike Chwe might say that these beliefs, in turn, require a lot of cultural trappings to sustain: various rituals carried out to show that the "professionals" indeed are being "professional." The "home style" of legislators, whereby they return home and engage in various ritualistic interactions with their voters to show their tribal solidarity, might be seen in the same light. One might say that a lot of seemingly irrational socio-cultural activities, such as belief in creationism, are exactly that as well. Of course, this is the kind of equilibrium that IS being subverted by the tilt toward visible data, as we can see below in the correlation between Democratic shares of House votes and the (sign-adjusted) DW-NOMINATE scores of the incumbents:
What the graph shows is that, if you know the voting record of a House member in the preceding session of Congress, you can predict his vote share with increasing accuracy as the 20th century progressed. It does mean that the voters were becoming more "policy-minded," in the sense of basing their evaluations of politicians more on the visible record, but does it mean that the voters were becoming more "rational"? To claim that would presuppose that the performance of the burger joint depends only on burger sales and that the kitchen is irrelevant to its success. Holmström (and Max Weber before him) would say in no uncertain terms that that's stupid. But what does this mean for the trends in politics today? I've been making a series of arguments (and was halfway through a book manuscript) on this very point, but shockingly few people seemed to care, even if, I strongly suspect, the mess of the 2016 election is a sharp reminder of this problem.
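The computation behind a graph like this is just a within-period correlation between incumbents' prior-session (sign-adjusted) roll-call scores and their vote shares. Here is a sketch with purely synthetic data, where the rising trend is built in by construction via shrinking noise; the real series would come from Voteview's DW-NOMINATE scores merged with election returns:

```python
import numpy as np

def decade_corr(noise_sd: float, n: int = 400, seed: int = 0) -> float:
    """Correlation between a synthetic (sign-adjusted) roll-call score
    and vote share, where vote share = 0.5 * score + noise."""
    rng = np.random.default_rng(seed)
    score = rng.normal(size=n)  # stand-in for a DW-NOMINATE-style score
    vote = 0.5 * score + rng.normal(scale=noise_sd, size=n)
    return float(np.corrcoef(score, vote)[0, 1])

# Shrinking noise over the century stands in for voters leaning ever
# harder on the visible record; the rising correlation is by construction.
for decade, sd in [(1920, 2.0), (1950, 1.0), (1980, 0.3)]:
    print(f"{decade}s: corr(score, vote share) = {decade_corr(sd):.2f}")
```

The point of the toy is only to make the mechanism concrete: as the noise around the visible record shrinks in voters' evaluations, the record alone predicts vote share better and better, whether or not the kitchen is being looked after.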
This illustrates the potential danger that today's data-intensive environment poses: because we have so much data, we become contemptuous of the unquantifiable and unaware of the potential limitations of the data we are using. If the data were always right, so to speak, i.e. had zero error, there would be no statistics to be done with it; we would simply know THE answer. We do statistics to be less wrong, not necessarily to be "right" (I'm paraphrasing my old stats professor). If we insist on mistaking statistics (or indeed "science") for the "right answer," woe be upon us.
PS. One great irony is that, while Paul was intellectually one of the major influences on my way of thinking, I had precious little interaction with him when I was actually at Stanford. By the time he was teaching his "module" (Stanford econ reorganized its graduate courses while I was there, so we had four "modules" instead of three quarters; go figure), I was fairly deep in my occasional depressive spirals and was unable to do practically anything, let alone prepare for prelims. In a sense, studying for econ prelims is easy: you literally have to study the textbooks and know the formulas, so to speak, just the answers you are supposed to know, even though, admittedly, the questions will be hard. But depressed people have the hardest time doing routine chores when locked up, figuratively speaking, without anyone talking to them. It is easy, in a sense, for people who have no stakes in the matter to think that depressed people ought to be locked up by themselves until they get better. In practice, what almost always happens is that, after being locked up for a few months, they wind up far more damaged than when they began. But talking to depressed people requires way too much commitment from people without stakes of their own, too much to ask of "strangers."