I might have been completely missing the point, but this article in the LA Review of Books had me refining my thinking about mean-centric versus variance-centric reasoning.
In effect, the article bemoans the decline of critical thinking in all manner of venues in favor of “content,” or as the author terms it, #content. What counts as #content, in turn, is determined by whatever the audience wants. In other words, the problem quickly comes to resemble the beauty contest game, in which players base their decisions not on what they themselves want but on what they think other players want. The statistically meaningful consequence of such games is not that the players necessarily misjudge, at least on average, but that their thinking becomes too stereotyped, so to speak: the distribution of true preferences usually features substantially larger variance than the distribution of anticipated preferences. Put differently, when the media attempt to deliver the #content they expect people to want, rather than trusting their own judgment to come up with something, the two may not differ much on average across many instances, but their variances certainly will. For individual draws, in turn, this has significant implications.
A relatively simple demonstration is in order. What is the expected distance between observations drawn randomly from a distribution with a small variance and from another with a larger variance? For simplicity, take a standard normal distribution and a normal distribution with standard deviation 10. Their difference is itself normally distributed, with mean 0 and standard deviation √(1 + 100) = √101 ≈ 10.05 – but the mean of 0 arises only because the negative differences exactly cancel the positive ones. The squared distances are distributed as 101 times a chi-squared distribution with one degree of freedom (since the true chi-squared distribution with one degree of freedom is the square of a standard normal, and here we are squaring a normal with variance 101). The expected squared distance is therefore 101, which is practically definitional: variance = E(x squared) – (mean of x) squared. So as the difference in variance between the reflected conventional wisdom (what the players think other players want) and what the players really want increases, the actual average distance grows as well, on both sides!
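The demonstration above can be checked with a minimal simulation. The variable names (`anticipated` for the narrow distribution, `true_prefs` for the wide one) are illustrative labels of mine, not terms from the article:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Narrow distribution (what the players think others want): sd = 1.
anticipated = rng.normal(0, 1, n)
# Wide distribution (what people actually want): sd = 10.
true_prefs = rng.normal(0, 10, n)

diff = true_prefs - anticipated

# The difference of two independent normals is normal, with variance
# equal to the sum of the variances: 1 + 100 = 101.
print(diff.mean())         # close to 0, by cancellation
print(diff.std())          # close to sqrt(101), about 10.05
print((diff ** 2).mean())  # close to 101, the expected squared distance
```

The last line is the point of the exercise: the signed differences average out to zero, but the squared distances do not, and their expectation is pinned at the combined variance.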
It is also striking that this bears a curious resemblance to the coalition politics of “populism” today. The talk of “dangerous radicals of left and right” became an epithet, but it captures a certain truism: “radical” political leaders draw support from the left and the right, even if not exactly in the same proportions. So conventional politicians, operating on their anticipation of what the electorate wants, get the mean right but hideously underestimate the variance, and the gap between reality and their program keeps growing, even as they estimate the mean with ever greater precision, thanks to Big Data and associated technologies? This is an interesting thought….
PS. An important caveat is that being able to guess the mean with ever increasing precision will not itself widen the gap. The key is simply that, if the true underlying data is sufficiently variable, the gap cannot be made to go away even if you do know the mean precisely. So the more accurate rendering of the argument is that the marginal gain from guessing the mean more accurately is small when the natural variance is high. Mathematically, the variance of the differences will decrease if the variance of one of the distributions goes down, but only down to the floor set by the other distribution’s variance.
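This floor effect can be sketched in a few lines. Shrinking the guesser’s spread around the (correctly guessed) mean from sd = 5 to a perfect 0 barely moves the expected squared gap, because the audience’s own variance of 100 dominates; the parameter values here are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# High natural variance in true preferences: sd = 10, variance = 100.
true_prefs = rng.normal(0, 10, n)

# Tighten the guess around the true mean and watch the gap bottom out.
results = {}
for guess_sd in (5, 1, 0):
    guess = rng.normal(0, guess_sd, n)
    results[guess_sd] = ((true_prefs - guess) ** 2).mean()
    print(guess_sd, results[guess_sd])

# Expected squared gaps: about 125, 101, and 100 respectively -
# a perfect guess of the mean still leaves a gap of 100.
```

The marginal gain from the last improvement (sd = 1 down to 0) is about 1 out of roughly 100, which is the caveat in numbers.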
An interesting problem emerges, however, if this is viewed in the context of polarization. One might say that both parties, while reducing their variance, have drifted farther from the mean of the popular distribution. So, whereas in the old era the distribution of the differences might have been N(0, a+b), where both a and b were fairly large variances, we now face N(c, a+b’), where b’ < b but c is substantial. The expected squared gap (i.e., the expectation of the squared difference) is now a+b’+c^2, while in the past it was simply a+b. If this gap is to be construed as the extent of “mean representation,” it is not at all clear that the present parties are any more “representative” than in the past.
PPS. Yes, I am assuming independence and model completeness in the distributions, which assuredly is not the case–even in the high-variance era, things like “popular politics” and “home style” ensured that outliers were, in fact, reflecting unobserved variables–which, for all practical purposes, meant that outliers in one distribution were somehow correlated with outliers in the other. But this goes well beyond a simpleminded model.