I have no idea if I should agree with the kind of things that Nassim Nicholas Taleb writes. Many arguments he raises arise from the same mode of thinking as mine: the emphasis on the importance of variances rather than means, the power of interested minorities over careless majorities (to be fair, this comes from Olson, and even earlier, Schattschneider), and even the distaste for pseudointellectualism and scientism-by-formula. Perhaps this is not surprising: Taleb, throughout his career, has literally lived on distinguishing between the average and the deviations from it, profiting off the latter and building his reputation around that fact, whereas I, throughout my career, have been trying to divert attention away from the obsession with means toward a more careful consideration of the variances, and have largely fallen on deaf ears. Perhaps this is where his worldview and mine diverge.
I think he errs by emphasizing too heavily how little the fancy innovations offered by behavioral economics contribute to understanding the universe, committing, in a sense, exactly the opposite of what he accuses their advocates of. If I were to translate Taleb's argument into terms more familiar to me, I'd say that, for all the hoopla, the effects implied by these fancy innovations are very small. This becomes especially true when the setting is the real world, with its huge variability, rather than a controlled lab. The naturally occurring variability will by itself be enough to swamp the small effects produced by these neat but ultimately insignificant factors, and, far more importantly, that variability will interact with these "fancy effects" in highly variable ways, magnifying or dampening them depending on the circumstances. When Nudge came out and so many serious people seemed to get hung up on how they could nudge people into doing all sorts of "virtuous" things, I wondered what the limits of this "nudge" are: who can nudge whom to do what, and when? (I remember blogging about it, but I can't remember if it was here or elsewhere.)
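The point about small effects, big variance, and circumstance-dependent interactions can be made concrete with a toy simulation. This is my own construction, not anything from Taleb or the nudge literature, and every number in it is made up for illustration: a real but small "nudge" effect, a context variable it interacts with, and noise on the scale of real-world outcomes.

```python
import numpy as np

# Toy illustration (all numbers are invented): a small but real nudge effect,
# modulated by uncontrolled context, buried in real-world noise.
rng = np.random.default_rng(42)
n = 10_000
context = rng.normal(0, 1, n)      # uncontrolled real-world heterogeneity
nudge = rng.integers(0, 2, n)      # half the sample gets the "nudge"

effect = 0.1        # small main effect, of the size many lab findings report
interaction = 0.3   # context magnifies or dampens, and can even flip, the effect
noise_sd = 2.0      # naturally occurring variability dwarfs the effect itself

outcome = (effect * nudge
           + interaction * nudge * context
           + rng.normal(0, noise_sd, n))

# In a huge sample the average treatment effect is recoverable...
ate = outcome[nudge == 1].mean() - outcome[nudge == 0].mean()

# ...but the main effect accounts for a vanishing share of outcome variance:
# for any individual, context and noise dominate what actually happens.
share = (effect ** 2) * nudge.var() / outcome.var()
print(f"estimated ATE: {ate:.2f}, variance share of main effect: {share:.4f}")
```

With these (arbitrary) parameters, the main effect explains well under one percent of the variation in outcomes, which is the sense in which the naturally occurring variability "swamps" the neat effect even when the effect is genuinely there.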
My reaction to this, I think, starts diverging from Taleb's. As Hyman Rickover said, the devil may be in the details, but so is salvation. Don't tell me what the big story is: give me the details about the moving parts so that we can think about how they, and their interactions, would add up. I may not have much faith that knowing these moving parts would by itself add up to an ability to predict what will happen; that requires an understanding of the larger environment beyond the individual moving parts. But understanding the larger environment and where the moving parts fit in is just as much a part of Rickover's "details" as the moving parts themselves: is the process nonlinear? Is the environment noisy? And so on. The environmental details help narrow down which details of the moving parts one should concentrate on. Strogatz famously modeled the patterns of firefly behavior without understanding the details of firefly biology, but he did base his models on particulars of firefly ecology, and specifically was able to model variations in patterns across species by being aware of how different species fit into their environments, including, if I recall correctly, mating patterns, predation, and so on. Some details matter more than others, and which details matter is a function of knowing the bigger picture better.
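The firefly work referenced above is in the tradition of coupled-oscillator synchronization models, and a minimal sketch shows the spirit of the approach: no firefly biology at all, just phases nudging each other. This is a generic Kuramoto-style model, not Strogatz's actual specification, and the parameters (population size, coupling strength, frequency spread) are my illustrative assumptions.

```python
import numpy as np

# Minimal Kuramoto-style synchronization sketch (illustrative parameters):
# each "firefly" has its own natural flash frequency, and adjusts its phase
# toward the rest of the population.
rng = np.random.default_rng(0)
N = 50
K = 2.0                               # coupling strength (assumed)
omega = rng.normal(1.0, 0.1, N)       # natural frequencies vary across individuals
theta = rng.uniform(0, 2 * np.pi, N)  # random initial flash phases

dt = 0.01
for _ in range(5000):
    # mean-field coupling: oscillator j feels sum_i sin(theta_i - theta_j)
    coupling = (K / N) * np.sin(theta[:, None] - theta[None, :]).sum(axis=0)
    theta = theta + dt * (omega + coupling)

# order parameter r in [0, 1]: r near 1 means the population flashes in sync
r = np.abs(np.exp(1j * theta).mean())
print(f"synchronization order parameter: {r:.2f}")
```

The point of the sketch is the essay's point: the model ignores almost every detail of the moving parts (no biochemistry of the flash), but the details it does keep, the distribution of natural frequencies and the coupling structure, are exactly the ones the bigger picture says matter.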
I think the real problem with "scientism" is knowledge of the details detached from the "big picture," not the details themselves. Taleb seems aware of this sometimes, and not at other times (e.g. his seemingly unconditional hostility to GMOs); perhaps he himself falls into the same traps as those whom he attacks. You don't always use the hammer for the same thing, all the time. Sometimes you can use a hammer as an antenna, even if a lousy one, if the circumstances require it. This is not a revolutionary statement, but a simple application of conditional probability logic. The caveat, of course, is that this turns all the "rules" into conditional statements: a hammer is X, conditional on A, B, and C. If not C, then even given A and B, a hammer is Z, and so forth. If the probability of not-C is thought to be small, why bother with the hammer being Z? So the logic of punctuated equilibrium strikes: as long as the environment is sufficiently stable, simple knowledge highly "adapted" to the prevailing probability distribution prevails. If snakes don't smoke, why bother planning for the contingencies when they do? (If you don't know what snakes smoking refers to, read this.) Most of the time, it will work well. Sometimes, it will fail spectacularly. The added problem comes from the sort of pathology recognized by Stiglitz and Grossman (and probably Hayek before them): if certain knowledge is deemed valueless (i.e. knowing where prices come from, rather than the prices themselves), it will be underinvested in, to the point of undermining the rest of the informational environment. Worse still is when the "knowledge" that C "always" holds becomes part of the dogma, so that even raising the possibility that C might be absent (so that the hammer could be Z) becomes heresy to be persecuted. In such cases, a widespread collapse crosses over from merely likely to inevitable. (E.g. "the market is always right.")
None of this is really all that new: many books have been written on these points, not least by Taleb himself. Most informed people know the arguments, but, at the same time, Taleb is right: being aware of these arguments does not stop them from following the herd, not unlike informed traders who ride a bubble even when they know it's a bubble. There are just too many obstacles for anyone who would buck the trend. AJP Taylor, writing about German diplomats on the verge of World War I, made a curious observation: most German chancellors of the late 19th century were independently wealthy aristocrats who could simply threaten to resign if the government, whether the kaiser or the Reichstag, tried to force their hand, while the chancellors of the early 20th century were professional bureaucrats who could not afford to do so, often literally: since they were not wealthy, they needed the money; since they were not "naturally" respected, they needed the dignity of the office; and so on. This runs contrary to the ideal bureaucrat that Weber was describing around the same time; perhaps he was reacting to the very forces he was observing. The bottom line is that, even if the professional bureaucrats of the early 20th century were more informed and competent than their aristocratic predecessors, the environment had shifted so that their professional expertise was negated, i.e. in the manner of the IYIs that Taleb writes about.
This, in a manner of speaking, brings us back to the power of interested minorities: as someone put it on Twitter a few days ago (I don't know who it was and can't find it again), even if very few people read the Washington Post op-ed pages, those few are important people who pay attention to politics, and as such matter for the future prospects of a lot of "bureaucrats." If these people buy into conventional wisdom that the people outside the Beltway cannot understand, that conventional wisdom is what the bureaucrats will follow.
The world is complex. We need to be cognizant of that. But more important, we need to be cognizant of which parts of that complexity are relevant when, and for what reasons. This is not easily summarizable in bite-sized pieces. Appreciating it demands something beyond bookish "conventional wisdom": a nuanced understanding of the "big picture" and how it changes. But these too are "details," just different kinds of details, ones that people hadn't been paying enough attention to lately (or have they? there are plenty of people writing about these things, after all…).