What Stats Do and Don’t Say…

Edward Leamer wrote a (then) famous essay on the pitfalls of econometrics in 1983, “Let’s Take the Con Out of Econometrics,” that is worth rereading today.  There are two things in it worth expounding on.

First, in the absence of true experimental data, the data we have are usually ambiguous in the insights they provide.  One might add that, even when we think we are running a true experiment, we cannot always know whether the data is being contaminated by factors that we do not even know about.  The anecdote that Leamer starts his essay with is entertaining and insightful in this regard:

The applied econometrician is like a farmer who notices that the yield is somewhat higher under trees where birds roost, and he uses this as evidence that bird droppings increase yields.  However, when he presents this finding at the annual meeting of the American Ecological Association, another farmer in the audience objects that he used the same data but came up with the different conclusion that moderate amounts of shade increase yields.  A bright chap in the back of the room then observes that these two hypotheses are indistinguishable, given the available data.  He mentions the phrase “identification problem,” which, though no one knows quite what he means, is said with such authority that it is totally convincing.  The meeting reconvenes in the halls and in the bars, with heated discussion of whether this is the kind of work that merits promotion from Associate to Full Farmer, the Luminists strongly opposed to promotion and the Aviophiles equally strongly in favor.
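(To see the farmer’s bind in miniature: if birds roost only under trees, then “shade” and “droppings” are the same variable in every field we observe, and no regression on those fields can split the yield gain between them.  The sketch below is a hypothetical illustration in Python with made-up numbers, not anything from Leamer; it shows the design matrix losing a rank and least squares returning just one of the infinitely many equally good splits.)

    # Hypothetical illustration of the identification problem (all numbers invented).
    # Assumption: birds roost only under trees, so shade and droppings coincide exactly.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000
    under_tree = rng.integers(0, 2, n)   # 1 if the plot sits under a tree
    shade = under_tree                   # shade present exactly where there is a tree
    droppings = under_tree               # droppings present exactly where birds roost

    # True process (unknown to the farmer): only droppings matter.
    yields = 50 + 5 * droppings + 0 * shade + rng.normal(0, 2, n)

    X = np.column_stack([np.ones(n), shade, droppings])
    print(np.linalg.matrix_rank(X))      # 2, not 3: shade and droppings are indistinguishable
    beta, *_ = np.linalg.lstsq(X, yields, rcond=None)
    print(beta)                          # one arbitrary (minimum-norm) split of the +5; the data cannot pick

Only plots where shade and droppings come apart (shade cloth without birds, or droppings without shade) would identify the separate effects, which is the kind of data the bright chap in the back of the room is, in effect, asking for.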

This is, I think, a familiar experience for many.  There is no single way to interpret the data toward a uniform conclusion, as captured by the quip attributed to another economist, Ronald Coase:  “if you torture the data enough, it’ll confess to anything.”  Econometricians and statisticians will object to the seeming implication of the quote:  if we know how the data was tortured, and connect that to the data itself, we will know how the conclusions were arrived at.  But very few people (myself among them, fortunately or unfortunately) are more interested in the different ways of torturing the data than in the conclusions reached therefrom.  For most people, it is good enough that we have the bestest, mostest data and the bestest, most beautiful models put together by the bestest technology and the smartest people, and gosh darn it, the data says we are right and you are wrong.  In some cases, like the election, we do have to come to a reckoning that some people were more right than wrong, but this is not always the case, as there is no set “end date” for many things that we discuss with tons of data.  (And even in the case of elections, do we really appreciate whether one set of data was more right than another, beyond the superficial appearance?  James Kwak wondered aloud whether the models really were right in 2012, when they predicted an Obama win–because they underestimated the magnitude of the Obama victory.  In 2016, the same question still applies:  it is one thing that some models predicted a Trump win while most others did not.  But the real question is how they tortured the data differently so that they got a different result from all the other torturers.  To the degree that they got some aspects of the data more right, what did the “right” torturers get wrong?)
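Coase’s quip is also easy to demonstrate in miniature.  A hedged sketch, with invented data and nothing to do with any actual election model:  generate an outcome that is pure noise, try enough candidate explanations, and a few of them will “confess” at the conventional 5 percent threshold.

    # Hedged sketch of data torture: pure noise, many specifications, a few "confessions."
    import numpy as np

    rng = np.random.default_rng(1)
    n, k = 100, 40
    y = rng.normal(size=n)                     # outcome unrelated to anything
    X = rng.normal(size=(n, k))                # 40 candidate explanatory variables, also noise

    hits = 0
    for j in range(k):
        r = np.corrcoef(X[:, j], y)[0, 1]
        t = r * np.sqrt((n - 2) / (1 - r**2)) # t-statistic for the correlation
        if abs(t) > 1.98:                      # roughly the two-sided 5% cutoff
            hits += 1
    print(hits, "of", k, "noise variables look significant")   # about 2, by chance alone

Reporting the two survivors without mentioning the thirty-eight discarded specifications is exactly the kind of torture the quote has in mind; knowing how the search was run is what lets us discount the confession.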

This sets up the other significant observation Leamer makes that I think is critical, regarding the role of “prior information.”

Economists have inherited from the physical sciences the myth that scientific inference is objective and free of personal prejudice.  This is utter nonsense.  All knowledge is human belief, more accurately human opinion.  What often happens in the physical sciences is that there is a high degree of conformity of opinion.  When this occurs, the opinion held by most is asserted to be an objective fact, and those who doubt it are labeled “nuts.”  But history is replete with examples of opinions losing majority status, with once objective “truths” shrinking into the dark corners of social intercourse.  To give a trivial example, coming now from California, I am unsure whether fat ties or thin ties are aesthetically more pleasing.

Leamer is being a bit unfair to the physical sciences, in that the common opinion that unifies them is that data is always right and trumps the theory–if the data is indisputable.  This does not, of course, mean that the physical sciences are necessarily on the right side:  at the time of Galileo, Galileo was more right, in the grand scheme of things, than the contemporary scientific view.  But Galileo lacked good enough (indisputable enough) evidence to overturn the consensus, and his being a jerk who stepped on the wrong toes in the process led to his being put on trial (although he suffered very few ill consequences from it, contrary to the legends that grew out of the incident).  In addition, the physical sciences enjoy the benefit of more easily interpretable data in general and a much greater ability to conduct true experiments.  Amartya Sen quipped that virtuous economists are reincarnated as physicists, to be rewarded with the simplicity of the subject matter, while the unvirtuous ones are reincarnated as sociologists (I’d substitute political scientists instead, having been, at various times, a physicist, mathematician, economist, and political scientist, in a manner of speaking, and based on that progression, apparently a very unvirtuous person in my previous life), to be punished with a complex, hard-to-interpret subject matter and, with it, ambiguous and misleading data (even if there is a lot of it), along with far greater difficulty in conducting meaningful experiments.

This means that, in the social sciences, the opinions of the researchers necessarily take a central place in the analysis.  The opinions need not amount to an “ideology” of the usual kind; more often, they are faith in (or against) the theories that the researcher is dealing with in the course of the inquiry.  The common form of an academic paper is “I test theory X.  The data confirms theory X.  Theory X is good.”  But is the point of the paper that “theory X is good”?  A real data-torture specialist, so to speak, would want to know how the data was tortured in the course of obtaining the result, not necessarily whether theory X is good or bad.  For most people, however, including many in academia, the point is whether theory X is good or bad, not so much how the data was analyzed.  Thus the fight is between the Luminists and the Aviophiles, in the tale Leamer relates, never mind how exactly the farmer ran his numbers.
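Leamer’s own prescription in the essay is to put that dependence on opinion on the table and show how the conclusions move as the prior moves.  A toy illustration of why that matters (a made-up conjugate normal model, not Leamer’s example):  the same five data points support a sizable effect under a permissive prior and next to nothing under a skeptical one.

    # Hedged illustration of prior sensitivity (conjugate normal-normal model, invented numbers).
    import numpy as np

    data = np.array([1.2, -0.4, 0.9, 2.1, 0.3])   # a small, noisy sample of some "effect"
    sigma2 = 1.0                                  # sampling variance, assumed known for simplicity

    def posterior_mean(prior_mean, prior_var):
        post_var = 1.0 / (1.0 / prior_var + len(data) / sigma2)
        return post_var * (prior_mean / prior_var + data.sum() / sigma2)

    print(round(posterior_mean(0.0, 0.05), 2))   # skeptical prior: posterior mean ~0.16
    print(round(posterior_mean(0.0, 10.0), 2))   # diffuse prior: posterior mean ~0.80, near the sample mean

Neither analyst is torturing anything here; they simply walked in with different opinions, and honest reporting means saying how much of the conclusion each prior is carrying.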

The intersection between academia and policy was not always like this:  Harry Truman’s complaint about economists was that they never gave a straight conclusion.  They always said different things were possible, depending on the assumptions–thus Truman’s wish for more one-handed economists.  It seems that economics specifically, and the social sciences in general, have taken Truman’s suggestion since then, and possibly then some.  It has always bothered me when students claimed that they felt they were being “indoctrinated” in economics or some poli sci courses (in the latter, I think there is sometimes more serious “indoctrination” going on, but that’s a different problem), but I’ve also come to realize that there is a legitimate basis for that suspicion.  Luminist economists will put forward Luminist theories of crop yield, and the interpretation of the data consistent with Luminist theories, as evidence that their theory is right, and vice versa for the Aviophiles.  In this sense, they are “indoctrinating,” but on behalf of their favorite theories, not necessarily particular “ideologies.”  Multiarmed economists, meanwhile, lay down different theories along with their moving pieces, spell out in detail how different methods of torturing the data, resting on different assumptions, might yield interpretations consistent with multiple theories, and eventually raise their many arms and say that there is no way to distinguish them all in the absence of better data (although the good ones will point to the kind of hypothetical data that would tell the theories apart).  They, in turn, will be asked what their point was, as Truman asked of his economic advisers.

I don’t know if I can blame Truman, not just the president himself but as the representative of the audiences for academic theories in general, for his frustration with the academics.  The audiences don’t want lessons in the technical nuances of social science theories and all the different ways the data can be spliced.  They want to know what the best course of action is, given the expertise the academics have to offer.  So do I raise taxes, or what?  Will Trump win the election or lose it?  This is where the wonks and the (Weberian) bureaucrats can play a role, albeit in different capacities.  The key characteristic of both is (or at least should be) that they understand both the nuances of academic theories and the needs of the decision-makers, so that they can distill the former into actionable advice for the latter.  The difference is that wonks are advocates for a particular policy agenda, while bureaucrats are specialists simply in (honest) translation.  Of course, my view on wonks versus bureaucrats has been laid out before, so it probably isn’t worth repeating in detail, other than to say that we need bureaucrats more than wonks.
