As is well known, “the market” is usually smarter (that is, better able to predict real events) than any single expert. The reason is simple: everyone who has some stake in, and knowledge about, future changes (and is able to participate) will take part in the market process, and their actions will move prices. Prices thus reflect the aggregate knowledge of anyone and everyone with a stake and some knowledge, which, by definition, is superior to the knowledge of any single expert.
The problem with this, as Grossman and Stiglitz pointed out, is that it is prone to a prisoner’s-dilemma-type problem: observing prices is easier than obtaining real expertise, yet prices are more informative than almost any single piece of real expertise by itself. So the incentive to invest in real expertise is diminished, cutting down the aggregate amount of information flowing into the market. The very efficiency of the price system thus subverts itself: the market becomes dumber as it becomes more efficient at aggregating information.
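The free-riding logic can be illustrated with a toy Monte-Carlo sketch (this is my own illustrative setup, not the actual Grossman–Stiglitz CARA-normal model; all parameters, including the information cost and noise levels, are made up). Each round, informed traders pay for a noisy private signal about an asset's value; the price aggregates those signals plus some "noise trader" randomness; an uninformed trader simply forecasts from the price. As the fraction of informed traders grows, the price gets better and the informed trader's forecasting edge shrinks:

```python
import random
import statistics

random.seed(0)

def informed_advantage(frac_informed, n_traders=100, n_rounds=8000,
                       signal_noise=1.0, price_noise=1.2):
    """Estimate the informed trader's forecasting edge (reduction in
    squared error) over an uninformed trader who free-rides on the price.
    Toy sketch only; not the actual Grossman-Stiglitz model."""
    n_inf = max(1, round(frac_informed * n_traders))
    total = 0.0
    for _ in range(n_rounds):
        v = random.gauss(0, 1)                       # true asset value
        signals = [v + random.gauss(0, signal_noise) for _ in range(n_inf)]
        # price aggregates informed signals, plus noise-trader randomness
        price = statistics.fmean(signals) + random.gauss(0, price_noise)
        informed_err = (signals[0] - v) ** 2         # forecast from own signal
        uninformed_err = (price - v) ** 2            # forecast from price alone
        total += uninformed_err - informed_err
    return total / n_rounds

COST = 0.5  # hypothetical cost of acquiring the private signal
for f in (0.05, 0.25, 1.0):
    edge = informed_advantage(f)
    print(f"informed fraction {f:.2f}: edge {edge:+.3f}, net of cost {edge - COST:+.3f}")
```

In this sketch, when few traders are informed, paying for a signal beats watching the price even after the cost; once most traders are informed, the price forecasts better than any one signal can justify, and the net payoff to expertise turns negative. The equilibrium sits where the edge just covers the cost, which is exactly why the market cannot become fully efficient at aggregating information.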
I’ve ranted about Google Translate a lot: it is simultaneously a marvel of technology and an insult to linguistic expertise, the way the price system is an insult to real knowledge. The vast data of linguistic usage provides a great deal of information about languages without any knowledge of the languages themselves, enough to permit very effective translation. While Google translations are not “good” in the aesthetic sense, they are very efficient and “profitable” in a quasi-economic sense. This is, of course, a typical mindset in computerized data analysis, i.e. “data science.” The lack of expertise in the actual substance is not seen as a hindrance to digging into the data, finding patterns, and pronouncing the regularities thus uncovered “insights.” To be fair, this is not necessarily a fatal flaw: nobody can really narrow down “liberalism” and “conservatism” enough to actually define them in a systematic and “measurable” fashion. But votes, speech patterns, and any number of other data can be churned through to generate numbers that roughly approximate what we think “ideology” adds up to.
One problem emerges when we take these quantified “ideology” measures (and other such things, if we venture outside the political realm) too seriously: we increasingly make decisions based on them, creating even more patterns for the “Google Translators” of the world to pick up and build still more measures of “ideology” around. (This is literally how the mechanism for identifying “ideology” from speeches was built: by observing correlations between speech patterns and the speakers’ DW-Nominate scores. Even then, I had trouble buying the logic beyond what essentially amounted to a tautology, although here the tautology does carry a useful meaning: not that we are actually measuring “ideology” in speech, but that people speak on the record the way they vote.) This sort of self-sustaining “untruth” has a name in economics, “sunspots,” although the likes of Thomas Schelling had written about the underlying logic decades before. This is, of course, the sort of pseudoscience that superstitions and astrology are built on (in the case of astrology, almost literally), and the kind of “science” that Richard Feynman contemptuously called “Cargo Cult Science.”
Data, even lots of it, does not make science. A logical approach to it, accompanied by a good bit of skepticism, does. If we think the data is telling us everything, we will follow sunspots off a cliff. This bothers me to no end.