Blog is Being Temporarily Suspended…

Due to the health of the author, this blog is being suspended until things improve.  Hope it won’t be too long.


Cheap Data vs. Good Data–The Case of Assessing Japanese Military Aviation before World War II

This is a fantastic dissertation, too good for a mere MA thesis.  While it is fascinating enough just as a source of historical information on an interesting topic, it is also useful as an instructive illustration of the problems involved in using, and abusing, data.

Intelligence assessment and statistical analysis are, fundamentally, the same problem. Both are extrapolations from the knowns (the data) to evaluate the unknown, with a certain set of assumptions that guide the process.  Neither can ever be entirely accurate, as the “knowns” do not match up neatly with the “unknowns,” but if we do enough homework and/or are sufficiently lucky, we can deduce enough about the relationship between them to make the pieces fit.  Or, in other words, we cannot rely on the data itself to just tell us what we want to know.  What we really want to know, the really valuable pieces of information, will be somehow unavailable–otherwise, we wouldn’t need to engage in the analysis in the first place.

The thesis points to an all too common problem in data analysis:  the good data pertained to something that we don’t really need to know, or worse, something potentially misleading, while the things that we really do want to know about did not generate enough high quality data.

In the case of military aviation in Japan, the good data came from the 1920s, when the Japanese, aware of the backwardness of their aviation technology, actively solicited Western support in developing their capabilities, both military and industrial.  Since they were merely trying to catch up, their work was largely imitative.  Knowing the limitations of their existing technological base, they were happier to copy what was already working in the West, even if it was a bit old, rather than try to innovate on their own.  Since they had, literally, nothing to hide, they were open about the state of their technology, practices, and industries to the Westerners, who, in a way, already knew a lot about what the Japanese were working with anyway, since most of it consisted of copies of Western wares.  In other words, the data were plentiful and of extremely high quality.  But they also conformed to the stereotype of the Japanese in the West as not especially technologically advanced or innovative.

By the 1930s, things were changing: not only were the Japanese developing new aviation technologies of their own, their relationship with the West had cooled decisively.  They became increasingly secretive about what they were doing and, as such, good data about the state of Japanese military aviation became both scarce and unreliable.  But, in light of the increased likelihood of an armed clash between Japan and the West, the state of Japanese military aviation in the 1930s (or 1940, even, given when the war eventually did break out) was the valuable information, not its state in the 1920s.  The problem, of course, is that, due to the low quality of the data from the 1930s, nothing conclusive could be drawn from them.  While there were certainly highly informative tidbits here and there, especially viewed in hindsight, there was also a lot of utterly nonsensical junk.  Distinguishing between the two was impossible, since, by definition, we don’t know what the truth looked like.  Indeed, in order to be taken seriously at all, intelligence reports on Japanese aviation had to be prefaced with an appeal to existing stereotypes, that the Japanese were not very technologically savvy–which was, of course, more than mere prejudice, as it was very much true, borne out by the actual data from the 1920s.  In other words, this misleading preface became, in John Steinbeck’s words, the pidgin and the queue–a ritual that had to be practiced to establish credibility, whether it was actually useful or not.

This is, of course, the problem that befell analyses of the data from the 2016 presidential election.  All the data suggested, as with the state of Japanese military aviation, that Trump had no chance.  But most of the good data, figuratively speaking, came from the wrong decade, or involved a matchup that did not exist.  In all fairness, Trump was as mysterious as the Japanese military aviation of the 1930s.  There were so many different signs pointing in different directions that evaluating what they added up to, without cheating via hindsight, would have been impossible.  While many recognized that the data was the wrong kind of data, the problem was that good data pertaining to the question on hand simply did not exist.  The best that the analysts could do was to draw up the “prediction,” with the proviso that it was based on “wrong” data that should not be trusted–which, to their credit, some did.  This approach requires introspection, a recognition of the fundamental problem of statistics/intelligence analysis–that we don’t know the right answer and we are piecing together known information of varying quality and a set of assumptions to generate these “predictions,” and sometimes, we don’t have the right pieces.  The emphasis on “prediction,” and on getting “right answers,” unfortunately, interferes with this perspective.  If you hedge your bets and invest in a well-diversified portfolio, you may not lose much, but you will gain little.  Betting all on a single risky asset ensures that, should you win, you will win big.  Betting all on a single less risky asset, likewise, would ensure that you will probably gain more than by hedging all around–and if everyone is in the same boat, surely, they can’t all be wrong?  (Yes, this is a variant of the beauty contest problem, a la Keynes, and its close cousin, the Stiglitz-Grossman problem, with the price system.)

I am not sure that, even if the benefit of hindsight could be removed, an accurate assessment of Japanese military aviation capabilities in 1941 would have been possible.  The bigger problem is that, because of the systematic problems in data availability, the more rigorously data intensive the analysis (at least in terms of the mechanics), the farther from the truth its conclusions would have been.  A more honest analysis that did not care about “predicting” much would have pointed out that the “good” data is mostly useless and the useful data is mostly bad, so that a reliable conclusion cannot be reached–i.e. we can’t “predict” anything.  But there were plenty of others who were willing to make far more confident predictions without due introspection (another memory from the 2016 election) and, before election day, or the beginning of the shooting war, it is the thoughtless and not the thoughtful who seem insightful–the thoughtless can at least give you actionable intelligence.  What good does introspection do?

Indeed, in the absence of good information, all that you can do is extrapolate from what you already “know,” and that is your existing prejudice, fortified by good data from the proverbial 1920s.  This is a problem that all data folks should be cognizant of.  Always ask:  what don’t we know, and what does that mean for the confidence we should attach to the “prediction” we are making?

Mirage of Data and Analytics–Baseball Again.

Fangraphs has a fascinating piece that echoes some of my ideas from a post a little while ago.

Dave Cameron starts by pointing to the problems of securing “intellectual property” in baseball:  most people who do analytics are, essentially, mercenaries, who are hired on short term contracts and move between different organizations frequently.  You cannot keep them from bringing ideas with them when they change jobs.  So ideas spread rapidly from organization to organization, and the opportunity to arbitrage previously underappreciated ideas is reduced.  But he also alludes, without being explicit, to the fact that the ideas and concepts themselves are pretty simple, or at least are given to being interpreted very simply.  In other words, ideas are viewed as commodities that have constant values, rather than something that fits better with a particular philosophy or organizational strategy.  To use Cameron’s example, a batter’s uppercut swing plane is a “better” approach. To use an example that I often find annoying, FIP is considered a “better” measure of a pitcher’s effectiveness than simple ERA.

Are these in fact “better” measures?  People often don’t seem to realize that FIP does not measure the same thing as ERA at the technical level:  FIP only incorporates the three “true” outcomes–HR, BB, and K’s.  It is probable that a pitcher who gives up many home runs and/or walks many batters is not very good.  But, conversely, there is something to be said for a pitcher who gives up many home runs and walks many batters yet doesn’t give up many runs.  Or, indeed, the same thing might be said for pitchers who give up many “unimportant” runs (i.e. give up runs only when it doesn’t count–and somehow manage to persistently keep leads, even small leads).  It could be that FIP might, on average, capture the “value” of a pitcher better than ERA, which, in turn, does a better job than simple wins and losses, but I don’t think the value of a player is a simple unidimensional quantity that always translates readily to a real number.  The conditional value of a pitcher varies depending on an organization’s strategy and philosophy, and these are more difficult to change–but they also offer the potential of finding more lasting value than the easier, commodifiable statistics.  The optimal strategy, in a high variance matching game, is to know your own characteristics (i.e. philosophy, approach, endowments in budget and talent pool, etc.) and optimize conditional on those characteristics–and to sign especially those players who don’t fit other organizations’ characteristics neatly.  Universally good traits are easily identified and their value competed away fast, now that technology is readily available.
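For concreteness, here is a minimal sketch of the standard FIP formula, which in its usual published form folds HBP in with walks and adds a league constant (roughly 3.1, varying by season) so that the result reads on the same scale as ERA.  The numbers below are made up for illustration.

```python
def fip(hr, bb, hbp, k, ip, league_constant=3.10):
    """Fielding Independent Pitching: only HR, BB/HBP, and strikeouts count.

    The league constant (~3.1, varying by season) rescales the result to the
    league's ERA scale, which is exactly what makes FIP read like a "better
    ERA" even though it measures something narrower.
    """
    return (13 * hr + 3 * (bb + hbp) - 2 * k) / ip + league_constant


def era(earned_runs, ip):
    """Earned Run Average: earned runs allowed per nine innings."""
    return 9 * earned_runs / ip


# A pitcher who strands runners and keeps the ball in the park in big spots
# can post an ERA well below his FIP, and vice versa: the two numbers are
# answering different questions.
print(fip(hr=20, bb=50, hbp=5, k=180, ip=200.0))  # about 3.4
print(era(earned_runs=60, ip=200.0))              # 2.7
```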

Much has been made of the Royals’ success in seemingly going against the grain with regard to “analytics.”  Now, several authors have claimed that the Royals were in fact making good use of moneyball concepts, focusing on traditional but still valuable ideas that have been neglected due to the sabermetric fetish.  I think both views are somewhat mistaken:  I suspect that the Royals began with a philosophy first and tried to incorporate statistics to fit the philosophy, rather than bounce around “analytics” chasing after the fool’s gold of commodified “good” stats whose value dissipates rapidly.  Copying the Royals’ approach, without having a similar basic philosophy and organizational strengths and weaknesses, probably will not pan out.  Building the philosophy and style–and assembling personnel who appreciate them–is a long term process that requires, ironically, a deeper appreciation of what analytics do and don’t offer–specifically, the subtle differences between the many seemingly similar stats and how they mesh with the particulars of the team in order to find better “matches.”

This is hardly a new idea in business management:  in the 1980s, as per this TAL story, GM execs were puzzled that Toyota was so willing to reveal the particulars of its management strategy to its competitor in the course of their joint venture.  It turned out that Toyota’s management strategy was effective given the organizational philosophy of the firm and proved very difficult to implement in GM without upending its fundamental characteristics.  It does seem that Toyota overestimated the import of “Asian culture” as a component of its corporate philosophy, as GM was reasonably successful, over the (very) long term, in implementing many of the lessons it learned from Toyota–but most of these successes came in overseas subsidiaries, far from the heart of the GM corporate culture that impeded their implementation.  Perhaps this provides a better explanation of the much ballyhooed feud between Mike Scioscia and Jerry DiPoto that eventually led to the latter’s departure.  I don’t think Scioscia and the Angels organization have necessarily been all that hostile to the idea of “analytics” per se–they seemed to have interesting, quirky, and often statistically tenuous ideas about bullpen use and batting with runners in scoring position dating back at least to their championship year in 2002. So a peculiar organizational culture already existed that could absorb analytical approaches of certain strains while being potentially hostile to others, and I wonder if what showed up was this, rather than “traditional” vs. “analytical” as commonly portrayed.

Here, I speak from personal experience:  I looked enough like a formal modeler to be mistaken for one by non-formal modelers, but I usually started from sufficiently different and unorthodox assumptions that I did not mesh with a lot of formal modelers, who either did not understand that their assumptions need not be universal or were hostile to different ideas in the first place.  I will concede that, on average, the usual assumptions are probably right most of the time–but when they are wrong, they are really wrong, and a great deal of value resides in identifying the unusual circumstances when usually crazy ideas might not be so crazy.  Of course, that is why people, not just baseball teams, should take statistics and probability theory more seriously when they delve into “analytics.” Never mind whether stat X is “better,” unconditionally.  Is stat X more valuable than stat Y given such and such conditions, and do those conditions apply to us more than to those other guys?

PS.  This is a repeat of the story about beer markets and microbreweries, in a sense.  Bud Light is a commodity beer that seeks to fit “everyone” universally.  Its fit to any one market is imperfect, but, given the technology on hand, it can be produced much more cheaply than most beers that fit a smallish market better.  Only beer snobs are willing to trade off a much higher price for a better fit in taste.  This is independent of the methodological problem of identifying the fits–the question is, once you have identified the better fit, how many people are willing to pay the price for the better taste?  But technological change forces a reconsideration of this business model:  the microbrewery revolution was preceded by technological change that made production of smaller batches of beer much cheaper.  Producing massive quantities of the same taste is still cheaper, but the gap is much narrower.  Less snobby beer drinkers will pay a smaller premium for a better taste fit.  So the problem is much more two-dimensional (at least) than before:  you find the better taste fit, and conditional on the taste fit (and the associated elasticities), you try to identify the profit maximizing price.  This requires a subtler, more sophisticated strategy and analytical approach and is liable to produce a much more complex market outcome.  As noted before, people who are more sensitive to price than taste will still gravitate towards Bud Light, even if there is a taste that they prefer more, as long as the price gap is large enough.
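To make the “conditional on the taste fit (and the associated elasticities)” step concrete, here is a toy sketch with entirely made-up numbers:  consumers differ in how much extra value they place on the niche beer’s taste and in their price sensitivity, and the niche brewer simply grid-searches the profit-maximizing price given its fit and its costs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical consumers: each has a taste-fit bonus for the niche beer (how
# much better it matches their palate than the commodity beer) and a price
# sensitivity.  All numbers are illustrative, not estimates of anything.
n = 10_000
taste_bonus = rng.gamma(shape=2.0, scale=1.0, size=n)   # dollars of extra value
price_sensitivity = rng.uniform(0.8, 1.5, size=n)       # pain per extra dollar

commodity_price = 1.00   # price of the Bud Light style option
niche_unit_cost = 1.50   # small-batch production is still more expensive

def niche_profit(price):
    # A consumer buys the niche beer if the taste bonus outweighs the felt
    # premium over the commodity option.
    buys = taste_bonus > price_sensitivity * (price - commodity_price)
    return buys.sum() * (price - niche_unit_cost)

prices = np.arange(1.60, 6.01, 0.05)
best = max(prices, key=niche_profit)
share = np.mean(taste_bonus > price_sensitivity * (best - commodity_price))
print(f"profit-maximizing niche price: ${best:.2f}, share served: {share:.0%}")
```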

With baseball (and indeed, all other forms of “analytics”), the problem is the same.  FIP or SIERA or any other advanced statistic is still in the realm of commodity stats, something that is supposed to offer a measure of “universal” value.  If you will, these are the means to produce a better Bud Light.  But in the end, Bud Light is still Bud Light.  It is not easy to find something that suits everyone that much better.  So you trade off:  you give up the segments of the market that have a certain taste for another segment that you can cater to more easily.  Or, in the baseball context, you grab the players who may not be so good, in the overall sense, but whose strengths and weaknesses, whether quantifiable or not, complement your organizational goals and characteristics better–with the caveat that, even if they are quantifiable, the measures will be more complex than simple commodity stats like ERA or FIP, in that their usefulness would be conditional.  Perhaps one could come up with some sort of “fitness” or “correspondence” stats.  (Incidentally, online dating services use this sort of stat, and the idea has a long history of its own:  the “stable marriage problem” is one of my favorites and is foundationally linked to the logic of equilibrium in game theory.  My research interest for years had been in “measuring” the stability/fragility of equilibria–which, in a sense, is a paradoxical notion:  if it’s not stable, how can it be an equilibrium?  But the catch is that most things are in equilibrium only conditionally.  This is the core of the PBE notion:  an outcome is stable conditional on beliefs that are justified by the outcome, i.e. a tautology.  If people, for whatever reason, don’t buy into the belief system, it may fall apart, depending on how many unbelievers there are.)
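Since the stable marriage problem comes up, here is a minimal sketch of the Gale-Shapley deferred-acceptance algorithm, which always yields a matching with no “blocking pair,” that is, no team and player who would both rather be matched with each other than with their assigned partners.  The team and player names and their preference lists are invented.

```python
def gale_shapley(proposer_prefs, reviewer_prefs):
    """Deferred acceptance: proposers propose in preference order; reviewers
    tentatively hold the best offer received so far.  Returns a stable
    matching as {proposer: reviewer}."""
    # rank[r][p]: how highly reviewer r ranks proposer p (lower is better)
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in reviewer_prefs.items()}
    next_choice = {p: 0 for p in proposer_prefs}   # next preference to try
    engaged_to = {}                                # reviewer -> proposer
    free = list(proposer_prefs)
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in engaged_to:
            engaged_to[r] = p
        elif rank[r][p] < rank[r][engaged_to[r]]:  # r prefers the newcomer
            free.append(engaged_to[r])
            engaged_to[r] = p
        else:
            free.append(p)                         # rejected; p tries again later
    return {p: r for r, p in engaged_to.items()}

# Toy example: teams "propose" to players; the result is stable in the sense
# that no team/player pair would both rather defect to each other.
teams = {"A": ["x", "y", "z"], "B": ["y", "x", "z"], "C": ["x", "z", "y"]}
players = {"x": ["B", "A", "C"], "y": ["A", "B", "C"], "z": ["A", "C", "B"]}
print(gale_shapley(teams, players))   # {'A': 'x', 'B': 'y', 'C': 'z'}
```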

Using and Abusing Statistics–Baseball Edition

I like statistics and I like baseball, but the way I approach baseball stats might be a bit different from most other people’s.

At one time, pitchers were evaluated on the basis of wins and losses.  Then came pitchers who were ludicrously good with lousy win-loss records, like Nolan Ryan, and people started realizing that wins and losses make for lousy stats and started looking at alternatives.  By and large, that was a good thing–but with a caveat that people have forgotten.

More recently, people started realizing that some pitchers have ludicrously good ERAs without being that good, while others with lousy ERAs are better than their numbers.  Along came more advanced stats like SIERA and FIP.  By and large, this probably is a good thing–but again, with a caveat that people forget.

The caveat in both cases is that the objective in baseball is winning.  Even if you allow an average of just one run every nine innings, if you keep losing, you still lost.  So winning is a perfectly valid way of measuring a baseball player’s performance.  It is, indeed, the only measure that is actually meaningful.  Everything else is secondary.

The problem is that there are 25 players on a major league roster, so the contribution to a win by a single ballplayer is conditional.  Steve Carlton, on terrible Philly teams, was more valuable, relatively speaking, than he would have been on a good team, even when he lost 20 games (as he did in 1973).  So how valuable would a single player be on another team, ceteris paribus?  This involves constructing counterfactuals, and it is something statistics–the real statistics–is supposed to be good at, as it came out of the experimental research tradition.  But this requires a bit more complex thinking than most users of data, baseball and otherwise, seem interested in doing, as it often cannot reduce the performance to a single set of numbers.

Personally, I think ERA is still the best single number for evaluating pitchers, for example, because of the ease of interpretation that it allows.  A pitcher with an ERA of 3 on a team that averages 4 runs a game is a winner, on average, while the same pitcher on a team that averages 2 runs a game will, on average, be a loser, assuming that everything except the average offense (e.g. fielding, bullpen quality, etc.) stays the same.  That’s a bad assumption, obviously, but it omits an even more egregious and troublesome assumption involved in measuring pitchers by their win-loss records:  that everything, including offense, is the same–except, that is, the pitcher.

Note that one can actually do a bit better than just using ERA or the win-loss record to evaluate a pitcher, by incorporating better statistical methods that don’t reduce everything to a single number.  Pitching performance and everything else are random variables:  the offense might score an average of 5 runs a game, but with a variance of 2, say.  The pitcher may give up 2 runs a game, but with a variance of 2.  Another lineup may score an average of 4 runs a game, but with no variance whatever.  Another pitcher might give up 3 runs a game, but with a variance of 0.  The second pitcher always wins in front of the second lineup.  The first pitcher might be better on average, but he might lose, even in front of the second lineup.  But if you have the first lineup, and if the pitching and hitting performances are independent (they might not be–personal catchers and all that), perhaps you might want the first pitcher rather than the second–or not, perhaps, depending on the distribution (which may not be normal).  Of course, this is a baseball application of the “tall Hungarian” problem.  A high variance distribution allows for gambling in a way that low variance distributions do not–whether you choose to gamble depends on the circumstances.  Sometimes, gambling is the only way–and occasionally, it pays off.
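A quick back-of-the-envelope simulation of this example, with runs modeled purely for illustration as independent normal draws clipped at zero (real run distributions are discrete and skewed, which is part of the point), shows how the means-and-variances story plays out:

```python
import numpy as np

rng = np.random.default_rng(42)

def win_prob(off_mean, off_var, pitch_mean, pitch_var, n=200_000):
    """Share of simulated games in which the offense outscores the pitcher's
    runs allowed.  Runs are independent normals clipped at zero, a crude
    stand-in for real (discrete, skewed) run distributions."""
    scored = np.clip(rng.normal(off_mean, np.sqrt(off_var), n), 0, None)
    allowed = np.clip(rng.normal(pitch_mean, np.sqrt(pitch_var), n), 0, None)
    return np.mean(scored > allowed)

# Pitcher 1: 2 runs/game on average, variance 2.  Pitcher 2: exactly 3, variance 0.
# Lineup A: 5 runs/game, variance 2.             Lineup B: exactly 4, variance 0.
print(win_prob(5, 2, 2, 2))   # pitcher 1 behind lineup A
print(win_prob(5, 2, 3, 0))   # pitcher 2 behind lineup A
print(win_prob(4, 0, 2, 2))   # pitcher 1 behind lineup B: usually, not always, wins
print(win_prob(4, 0, 3, 0))   # pitcher 2 behind lineup B: wins every single time
```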

Further incorporation of additional variables–fielding, relief corps quality, ground ball/fly ball ratios, and all that–will further reduce the variance, but will it completely eliminate the uncertainty?  Sometimes a Mark Lemke hits a grand slam and an Omar Vizquel boots a grounder, after all.  You don’t want to intentionally put Mark Lemke in a spot where he HAS to hit a home run–that would be silly.  But risks and gambles are what make baseball interesting, and betting on high variance/low mean is sometimes exactly what you must do to win–even if you will probably lose your gamble.

Now, being able to add more variables and reduce “errors” means that you will be able to make better, safer gambles, but that is hardly a sure thing.  An interesting observation that has been made about investments in risky assets is that, as the research and analyses have become more data-intensive, the arbitrage opportunities have become smaller:  not shocking, since, if an opportunity is obvious, people will grab on to it and pay a premium for it.  The consequence is that people are taking on more risk, because it is easier to bet on getting lucky than on being good–all the obvious answers have already been addressed.  I don’t know if this tradeoff is as well understood as it should be:  (relative) success is increasingly a sign of luck rather than skill.  But, at least when it comes to sports, we want to see the lucky as much as we do the skilled.  You don’t expect a nobody to deliver the walkoff hit that wins a playoff series, but that happens often enough.

The bottom line is twofold.  First, all useful statistics are conditional (or Bayesian, in a sense).  Unconditionally good stuff gets arbitraged away fast–especially since unconditionally good stuff is obvious, even if you don’t know high powered stats.  Good players, good tactics, and good approaches are good only if they are good for the situations that you need them for, which are almost certain to vary from team to team.  The real value is not that player X has a WAR of 2, but knowing how to best use a -2 WAR player (for another team, given how he was used there) to get positive wins out of him for your team.  This can be tackled statistically, but not by calculating a single number that putatively captures his entire value.  Second, the value of a player is spread out over the entire season.  A player’s performance at any one time is variable, a gamble, a lottery ticket.  You invest in probabilities, but sometimes, General Sedgwicks get shot at improbable distances.  Working with probabilities and statistics CAN improve your chances at the gamble, but this is two-dimensional–do you want to win big, at a big risk, or do you want to win small, at a small risk?  This comes with the additional proviso, of course, that your understanding of the universe is limited.  The lack of appreciation for risk and uncertainty is usually how one lies with statistics, or surprises the Belgians with unexpectedly tall Hungarians.

What Makes Humans Smart…and Dumb

I had never heard about the Great Emu War until recently.  What happened in that “conflict” seems fairly predictable, actually:  in response to demands for action by farmers in marginal lands in Western Australia, the government sent soldiers armed with machine guns to cull marauding herds of large flightless birds, only to discover that mowing down wildlife with machine guns does not work as well as mowing down humans, and to scrap the project amidst much embarrassment.  What made me wonder about this venture, though, is something a bit different:  why was it so much easier to cut down humans with machine guns than birds?

Emus are big birds–pretty much human sized.  They are fast runners, but I don’t think they are so fast that it is impractical to shoot them down with machine guns.  As far as I can tell, what made it so difficult for the Australian army machine gunners to shoot at emus effectively was that the birds ran whenever the soldiers approached them, and when the shooting began (usually at considerable distance, if only because the soldiers couldn’t approach them closely) they ran in all directions in panic, making it difficult for even a hail of bullets to hit many of them.  Of course, these are exactly the kinds of natural reactions that almost any critter would engage in if it were shot at–except, that is, one:  humans, especially those who are trained and disciplined.  What made it so easy for machine gunners to shoot down great masses of men during World War 1 was that humans are trained to behave unnaturally:  they kept their formation even in the face of bullets and they actually approached the machine gunners even as the bullets were flying towards them–still packed in formations.

This is, in a sense, what human sociality achieves.  Humans do strange and unnatural things that definitely run counter to the natural instinct of self-preservation.  This is how a human “society” can remain organized even in the face of adversity–which might do them good sometimes:  a group of people engaged in effective teamwork is far more effective than just the sum of individuals (the Greek phalanx was nearly invincible in close combat as long as it could maintain the formation in which each pikeman could support, and could count on support from, his neighbors).  But the same discipline that allows an entire society to operate as a team can be used as a bait to wipe out an entire society–Mongols and other steppe nomads were quite good at luring an entire army into a trap, which a highly disciplined army was more apt to fall for, and wiping it out as a group.  (In a sense, the Romans were trapped and annihilated so completely at Cannae and Carrhae, by Hannibal and Surenas respectively, precisely because of the disciplined nature of their legions.)  The same, I suppose, applies to World War I and machine guns:  it takes discipline and training–precisely what make the human animal social and usually powerful–for an army to keep formation under attack, and that makes it so easy to wipe out with industrial machinery.

Discipline turns humans into machines, in a sense–but machines can outmachine humans.  Perhaps, at least some of the time, humans need a bit of animal instinct, to break the pack and run away from the machine guns like real living creatures, not dumbly march towards them only to be cut down in droves like stupid machines that aren’t as durable as real machines?

Measurements and Hypotheses

Let’s consider a scenario where you can measure the velocity over time of a pair of objects that are, presumably, subject to the same force.  Can you estimate their relative masses?  The short answer is yes, because we know that F = ma and a = dv/dt.  The relative masses of the objects would simply be m1/m2 = (dv2/dt)/(dv1/dt).

Suppose we observe one of the same objects, say object 1, and another object, let’s dub it object 3, in another environment where the two are, again, subject to the same force as each other, though not the same force as in the previous case.  Can the relative masses of object 2 and object 3 be compared?  Yes, in principle.  We know, theoretically, that m1 = m2*(dv2/dt)/(dv1/dt) from the first setting and m1 = m3*(dv3/dt)/(dv1/dt) from the second–keeping in mind that object 1’s dv1/dt is measured separately in each setting, since the forces differ.  So we can rearrange the terms and obtain the relative mass m2/m3 as a ratio of the measured accelerations.  Seemingly simple, isn’t it?
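Written out explicitly, with a1 and a2 denoting the accelerations measured in the first setting and a1′ and a3 those measured in the second (the subscripted notation is mine, not the thesis’s):

\[
m_1 a_1 = m_2 a_2, \qquad m_1 a_1' = m_3 a_3
\quad\Longrightarrow\quad
\frac{m_2}{m_3} = \frac{a_3}{a_2}\cdot\frac{a_1}{a_1'},
\qquad a_i = \frac{dv_i}{dt}.
\]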

But have we actually “measured” the relative masses of objects 2 and 3, even indirectly? NO!  The relative mass that we have estimated is a conjecture, derived from a theoretical assumption, NOT a measurement.  We have measured the relative masses of objects 1 and 2, and again, of objects 1 and 3.  We suppose that, in both instances, the objects are subject to the same force and that the laws of motion we have assumed hold equally in both cases.  In this sense, we extrapolate the assumptions to the observations and derive what “seems” to follow logically from what we believe to be true–i.e. the laws of motion. But this is not based on an actual observation and lacks the certitude of one.  It is a mistake to think that, just because the steps we have taken to derive this seem flawlessly logical, it is necessarily as true as the direct observation.  In other words, this is only a hypothesis and should be treated with a bit more caution until we can obtain a direct measurement.  It is, in other words, true as LONG AS THE ASSUMPTIONS WE MAKE REMAIN VALID.  This latter qualifier is frequently lacking in the way we use data, unfortunately, in all manner of settings.

Indeed, there are many potential reasons that the relative masses might be off:  one possibility, for example, is that the objects are moving through a viscous medium in the first setting and a vacuum in the other, and object 1 is far more aerodynamic than object 2, so that drag distorts their measured accelerations unequally.  The velocities measured in the respective settings cannot be compared using basic laws of motion that fail to account for friction.  Ergo, the relative masses estimated on the assumption that F = ma is equally applicable in both settings are wrong.
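Here is a toy numerical version of that friction story; all masses, forces, and drag numbers below are invented for illustration.  The chained estimate recovers the true mass ratio when both settings really are drag-free, and silently misses it when the first setting has drag that the model ignores.

```python
# Toy illustration (all numbers invented): infer m2/m3 by chaining F = ma
# across the two settings, then see what happens when the first setting
# secretly has drag that affects the two objects unequally.
def measured_accel(mass, force, drag_force=0.0):
    """Acceleration an observer would actually record: (F - drag) / m."""
    return (force - drag_force) / mass

m1, m2, m3 = 2.0, 4.0, 1.0      # true masses, unknown to the analyst
F_first, F_second = 10.0, 6.0   # the two (different) applied forces

def chained_estimate(drag_on_1=0.0, drag_on_2=0.0):
    a1 = measured_accel(m1, F_first, drag_on_1)    # setting 1: objects 1 and 2
    a2 = measured_accel(m2, F_first, drag_on_2)
    a1p = measured_accel(m1, F_second)             # setting 2 (vacuum): objects 1 and 3
    a3 = measured_accel(m3, F_second)
    return (a3 / a2) * (a1 / a1p)                  # the chained estimate of m2/m3

print("true m2/m3               :", m2 / m3)                      # 4.0
print("frictionless estimate    :", chained_estimate())           # ~4.0, assumptions hold
print("unequal drag in setting 1:", chained_estimate(0.5, 4.0))   # ~6.3, the "measurement" was a conjecture
```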

In order to obtain the actual relative mass, there is no substitute for a direct comparison.  At minimum, some setting has to be created where the relative velocities of objects 2 and 3 can be compared against each other–an actual experiment.  The old estimate remains useful in this setting, though.  To paraphrase Fermi, if the relative masses confirm what we estimated before, we have made a measurement, an actual one this time.  If not, we have a discovery to make–we might discover friction or something, if we keep at it.  The old numbers, wrong as they might be, were not a waste of time, but only if we remember how we got them in the first place–i.e. the assumption that the same laws of motion are applicable in both settings from which we took the observations.  The caveat is that, until we have an actual measurement, we cannot presume that we have one just because we have some semi-related measurements that we can piece together through assumptions.

This raises an interesting question:  supposing that the environments in which the objects move are indeed different, is the assertion that the relative masses of objects 2 and 3, which are never actually compared against each other, follow from the chained accelerations above “fake”?  This information is, in light of the actual “facts,” which are not yet available, “false” in the sense that it doesn’t jibe with them.  However, it is “true” in the context of the information available to the observer and the seemingly logical–but, in full knowledge of the facts, misguided and incomplete–set of steps taken to derive the relative masses.  It just happens to be factually wrong, even if procedurally and conditionally true.  To condemn the information on the basis of being “wrong” and therefore “fake” would be misguided because, as it were, the estimates are wrong for the right reasons, so to speak. To reject the old numbers on the basis of being “wrong” would deprive us of the opportunity for discovery, as to why the different sets of information are, well, different.  The important thing, then, is not so much whether a given piece of information is “fake” or “false,” but how that information was arrived at–the how, not the what.

The how, however, is often lacking in today’s informational environment.  The how requires too much thinking and doesn’t even get us right answers:  we are too busy to bother with answers that aren’t “true.”  We merely trust or distrust the sources, and expect them to tell us the right answers so that we don’t have to think about the details.  In this context, whether news is “fake” or “false” winds up taking on an importance beyond what it is worth.  This, in a sense, is the real problem posed by the “fake news” crisis:  we have so much information to deal with that we have forgotten how to think, and without thinking, all that matters is whether the information on hand is right or wrong, and having “wrong” information becomes far more damaging.

You Can’t Say That!

There is a lot to chew over in this essay and in a much older essay that it links to.  I do wonder, though, if the authors of these posts are themselves a bit trapped in (somewhat self-congratulatory, I’m afraid) bubbles of their own.

One book that went far to shape my thinking about the epistemology of science, even before I read Popper and Feyerabend, was The Nemesis Affair, by David Raup.  Raup, who passed away in 2015, was a notable paleontologist who contributed to the idea that mass extinctions are cyclical and may have extraterrestrial origins–e.g. comets.  He began the book by going over theories like his that came before, almost invariably posed by great scientists with vast knowledge across multiple fields, including no small amount of expertise in paleontology and astronomy, but who were not professional astronomers or paleontologists–people like Harold Urey, the Nobel prize winning chemist.  Their arguments invariably made their way to top journals like Nature, journals that many scientists pay attention to, only to be met by total silence.  The reason, Raup suggested, was that the arguments that people like Urey were selling were just so far outside the conventions of the field they were addressing that nobody knew how to respond; but, unlike no-namers, who can be safely brushed aside with the snide attribution that they say such things because they don’t know better, famous and accomplished scientists cannot be so easily dismissed as cranks. So they get their hearing, the polite applause, and a publication in Nature, and then everyone goes on as if the whole thing never happened–because, for all practical purposes, it never did, as their argument cannot be placed in context.

Could great scientists be the only ones who saw puzzling clues like the ones that motivated Raup’s work on comets causing extinctions?  It is doubtful:  far more likely, the younger, less accomplished scientists who thought up such crazy ideas were told that, if they pressed further, they would simply ruin their reputations and not get tenure.  Once they get tenure but settle down into being routine scientists of the middling sort, it is far easier to simply take conventional wisdom for what it is and live their lives.  It takes both a great scientist and a madman, someone who enjoys such prestige and influence that they cannot be brushed aside so easily, to obnoxiously push forward new ideas.  Of course, history reminds us that Galileo was such a person, forgotten though it is amidst all the mythmaking about his persecution–he was a friend of the Pope and of half the cardinals who were presiding over his trial, was treated like an honored guest when he was being “tried,” and his sentence was to live outside the city limits at a luxurious mansion of a friend and supporter for a while.  Hardly “persecution.”  The rest of us have to conform to the conventions, if we value our lives, and quite frankly, “That’s something I should comment on. Nah, what’s the point? Too much downside” is the rule that all of us live by most of the time.

The trouble with this, of course, is that it creates an echo chamber of sorts–not necessarily one where everyone repeats the same thing and believes the same thing, but one where everyone knows what “the truth” is, repeats the same sanctimonious things, and keeps to themselves.  Societies like this are not uncommon:  the USSR was like this.  Everyone knew what the official Truth was–it’s literally what the label on Pravda (“the truth”) says.  So everyone repeated it, acted like they took it seriously, and nobody believed a word of it, whether it was true or not.  Without a means of evaluating the “truth” to their satisfaction, everyone was essentially entitled to their private truths–whatever they believed was “really” going on in the world.  But this is not just true of an authoritarian society:  every society has certain myths that are “true” just because, that one cannot question.  To question these “truths,” indeed, is to expose oneself as an outsider who cannot be trusted–say, a Korean who questions some of the national myths about the horrors of Japanese rule, or an American who does not believe in Russians hacking the 2016 elections.  It is not so much that these official truths are false:  in fact, I’d imagine that, on average, the vast majority of the content in Pravda was in fact very true factually throughout the entire Soviet era.  It is simply that overt questioning of the myths is not permitted.

The problem goes farther than that:  the truth is wrapped in layers of uncertainties, while the definitions that we use are vague.  Can we even handle the truth when we see it?  As the saying might go, if we saw God face to face, would we even recognize Him?  How people lie with statistics is at the margins, assumptions, and definitions:  the important thing, when dealing with data, is not whether something is or is not “true,” but whether the estimates are within an accepted set of margins of error given certain definitions about how things work.  What makes physicists, in particular, so much better at this than most other people is that they are very good at precisely crafting these definitions and assumptions and thinking through them logically.  But when the universe is itself murky, these clear definitions are self-deceiving:  as per my ever persistent rant about DW-Nominate:  yes, the numbers would indicate the “ideology” if the ideology were spatial and people acted both geometrically and asocially (i.e. based only on their own “preferences” without politics), but those would be some pretty damn stupid assumptions to make when you are dealing with politics.  If one is a physicist, the proper course of action as a scientist would be to conduct experiments in a setting where nuisances like air resistance or friction do not exist–or can be minimized–not to pretend that the universe is frictionless everywhere and that the models that assume away friction provide usable guidance.  If one is an engineer, the theoretical models would be taken with a big grain of salt and supplemented by a bunch of formulas and tables that account for friction and such things for practical purposes.  By not being able to question the sacred cows of assumptions that may not be challenged, we can do neither.  (I had the good fortune of just rereading this essay by Freeman Dyson about his experience crunching numbers for the RAF Bomber Command during World War II.  Basically, you can get people to trust you not just because your numbers are good–they wouldn’t know good numbers even if they saw them:  the numbers are NOT self evident, especially in a world where uncertainty is high–but because you are a famous scientist and, more importantly, a decorated navy officer from World War I.  If you are neither, they trust you only so far as your “information” confirms their existing beliefs, rightly or wrongly.  Dyson has a wonderful description for this:  if the former, you are giving “advice”; if the latter, you can only give “information.”)

In a sense, this is the fundamental problem:  even if what you are saying is true–and you yourself don’t always know this–there is no guarantee that your interlocutor will recognize it as true.  They have a certain set of ideas about what the “truth” should look like, and if what you say does not look like it, you’d better give them reasons why your truth is bigger than their truth.  Not easy if they outrank you and tell you to “shut up.”  Feynman, in his famous Cargo Cult essay, had this to say about it:

“We have learned a lot from experience about how to handle some of the ways we fool ourselves. One example: Millikan measured the charge on an electron by an experiment with falling oil drops, and got an answer which we now know not to be quite right. It’s a little bit off because he had the incorrect value for the viscosity of air. It’s interesting to look at the history of measurements of the charge of an electron, after Millikan. If you plot them as a function of time, you find that one is a little bit bigger than Millikan’s, and the next one’s a little bit bigger than that, and the next one’s a little bit bigger than that, until finally they settle down to a number which is higher.

Why didn’t they discover the new number was higher right away? It’s a thing that scientists are ashamed of—this history—because it’s apparent that people did things like this: When they got a number that was too high above Millikan’s, they thought something must be wrong—and they would look for and find a reason why something might be wrong. When they got a number close to Millikan’s value they didn’t look so hard. And so they eliminated the numbers that were too far off, and did other things like that …”

Jessica Livingston is right:  when there is silence, we do lose insights, as per the aftermath of the Millikan oil drop experiment.  But we also know that every new idea we have is potentially mad, and we have much to suffer if we are perceived to be mad.  Agreeing with the “right people” that their worldview is right, and that only minimal changes are necessary, if any, is something we do all the time.  Of course, this pollutes the information provided:  some of the information says “I am your friend and I support you, whether you are right or wrong.”  Only a little bit says, “I think you are wrong.”  In a highly uncertain environment where the right and the wrong are not obvious, even on matters of fact, it’s better, easier, and safer–not to mention more rewarding career-wise–to be on the side of conventional wisdom, or whatever the presently important people have to say about the universe.  Since the presently important people are usually not stupid, they are probably right anyway, and you probably did not make an important, earth-shaking discovery.  But if they are wrong, they can be very wrong, and if everyone is trying to be friends with the powerful rather than tell the truth, we may never learn how wrong.  Since we can only get at the truth secondhand, from the analyses of the people who crunch the numbers and NOT from our own analyses–remember, we can’t handle the truth, literally, at least not all of it, so we almost always have to learn about the universe secondhand–when everyone who crunches numbers is more interested in appeasing the powerful and important than in raising questions, we don’t even know how big a mistake we are making.  (In retrospect especially, I think everyone knew that there was something fundamentally wrong with the Clinton candidacy, and there was so much wrong with it that everyone saw something different.  But everyone knew that she had to win because the alternative did not make any sense, so they all minimized their sense of how likely a Clinton defeat was, until only the truly mad expected a Trump victory.  I think that’s a worse outcome than just hedging bets–it helped validate the true nutjobs as if they were the only sane people.)

If Livingston is only discovering it now, I think she has lived a charmed life that most of us don’t have the luxury of.

Trust, Christianity, and Economics.

There is one story, supposedly originating from Buddhism but, to me, forever associated with Christianity, as I had originally heard it in a homily when I was young:  Heaven and Hell are, in terms of physical configurations, exactly the same–people are dining using extra long chopsticks.  The difference is that, in Hell, everyone is trying to feed themselves to no avail because the chopsticks are too long, while, in Heaven, everyone is happy because everyone is feeding each other.

Indeed, to me, this is at the core of Christianity, or, indeed, of any organized religion that works well.  Religion, if done right, provides a focal point for a society, a basis on which communal trust is built.  Christianity is not, to me, fundamentally about God or Christ, but about being able to trust one’s fellow humans.  God, in this sense of Christianity, is the God of Kierkegaard.  Trust in others IS irrational:  the outcome where everyone feeds each other even though they cannot feed themselves is, under most configurations of the payoff structure, not an equilibrium–it is far easier to exploit others’ goodwill while contributing nothing oneself.  With no one contributing, what might have seemed heavenly to begin with descends rapidly to hell.  The necessary condition for the Heavenly equilibrium, then, is the willingness of those who would contribute to do so even at a loss to themselves.  Not everyone needs to make that kind of sacrifice:  once there are enough of them to change the norm, the magnitude of the “sacrifice” is reduced for all.  Even if you are making the endeavor to feed others at your expense, others will feed you and make good your losses.  But in order to keep the equilibrium going, to prevent it from sliding down the slippery slope, at least some people need to be of “true goodwill.”  This is the story of Bishop Myriel from Les Miserables, in a sense, but also the basic story of Christianity, perhaps:  the Christian message, as far as I see it, is not so much about faith in God as about faith in other humans and their goodwill, because God commands that faith–and hopefully, maybe, may even offer a reward not of this world for doing so, even if other humans might not.
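For what it’s worth, the “not an equilibrium” claim can be put in a toy two-diner game.  The payoff numbers below are invented and prisoner’s-dilemma-like:  under purely material payoffs, feeding the other diner is not a best response to being fed, but it becomes one once a diner attaches enough non-material (“faith”) value to feeding others.

```python
import itertools

# Two diners, two actions.  Material payoffs (invented): being fed is worth 3,
# the effort of feeding the other costs 1, and trying to feed yourself with
# the extra long chopsticks yields nothing.
def material_payoff(own, other):
    return (3 if other == "feed" else 0) - (1 if own == "feed" else 0)

def pure_nash_equilibria(payoff):
    """All pure-strategy Nash equilibria of the symmetric 2x2 game."""
    actions = ("feed", "self")
    equilibria = []
    for a, b in itertools.product(actions, repeat=2):
        a_best = all(payoff(a, b) >= payoff(alt, b) for alt in actions)
        b_best = all(payoff(b, a) >= payoff(alt, a) for alt in actions)
        if a_best and b_best:
            equilibria.append((a, b))
    return equilibria

print(pure_nash_equilibria(material_payoff))   # [('self', 'self')] -- the Hellish outcome

# Add an other-regarding "faith" term worth more than the cost of feeding:
def faithful_payoff(own, other, faith=2):
    return material_payoff(own, other) + (faith if own == "feed" else 0)

print(pure_nash_equilibria(faithful_payoff))   # [('feed', 'feed')] -- now self-sustaining
```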

Is religion strictly a necessary condition for the existence of such people of true goodwill?  Perhaps not.  But some sense of “community,” the sense that there is something greater than one’s selfish well-being, is, I think.  Religions that offer a reward not of this world can only help.  In a sense, the offer of a reward to those of goodwill–i.e. those who would willingly make the sacrifice for others–is a common feature of most religions that win over many sincere adherents.  Perhaps it is part of human nature as social beings that fosters such religions after all, countering the evolutionary evils of the “selfish gene” at the individual level.

But faith is hard, precisely because it is so irrational.  Bishop Myriel offers Jean Valjean the other candlestick after the latter has stolen the first one and been caught red-handed, seemingly proving his faith in humanity wrong.  Bishop Myriel persists in his irrational faith, and turns Jean Valjean around instead:  Jean opts to steal because he feels that the only way to get by is to try to feed himself, because he resides in hell.  But in so doing, he meets the Buddha in Hell.  (An actual Buddhist story:  the Dizang Buddha refuses paradise so that he can help redeem souls trapped in Hell.)  Unfortunately, Bishop Myriel is a rarity, even among those who think they believe.  I can certainly attest to trying hard to believe, not in God, per se, but in the basic goodness of people–myself included–and always having trouble doing so.

The basic message of economics, at least of classical economic theory, with its emphasis on “selfish rationality,” starts to deviate from that of Christianity (and of old “irrational” religions in general) from this point on.  Ultimately, the assumption is that people do what is good for themselves, for their own sake.  There is little or no room for “faith,” irrational as it is.  It is not so much that people who respect rationality cannot respect faith:  Tocqueville wrote admiringly of the people who believe and of the power of religion in sustaining societies, as did Marx–who, after all, lived in an era that appreciated the good that opiates bring to the suffering.  But, in the end, they could not believe–and Tocqueville tried very hard to foster faith in himself–and, especially in the case of Marx, they bet their view of the future on a “rational” worldview.  Even in game theoretic terms, as long as there are enough people who would persist in behaving irrationally out of faith, a Heavenly equilibrium can be maintained, provided the rest at least do not discourage others from offering their now reduced sacrifice for the common good (even if they do not actively encourage it).  The problem is that it offers no obvious reason as to why anybody should do so in the first place.

This is something that I was trying to explore in my novel-writing venture about Judas Iscariot–which, I suppose, has always been about myself and my own search for that faith.  Judas’ betrayal, as so many have pointed out, is an essential component of the Christian story of Redemption and not something for which he should be condemned–and perhaps his true sin was not that he betrayed Jesus, but that he lacked faith.  And what an extraordinary faith that would have taken–that, against all that is obvious, the act of betrayal is a good thing and that he should do it happily.  The story of Judas in the aftermath, in the Gospel of Matthew, is that he took it badly, that he threw away the 30 pieces of silver–the reward that he did not need or seek–and hanged himself out of shame and guilt, and perhaps it is this shame and guilt that was Judas’ central sin, much more than his betrayal.  In a sense, it is a New Testament version of the story of Job–God did not simply take away Judas’ possessions, health, and family, but He directly subverted the focus of his faith, and this challenge Judas could not overcome.  But how many mortals could keep their faith in the face of such adversity?  Faith is irrational:  you believe in and put trust in things that make no sense, and perhaps even things that, in good sense, you should not.

One person who, I think, actually understood this contradiction is Frank Herbert, the author of the Dune series.  One scene towards the end of God-Emperor of Dune that has always stuck in my mind is how Leto, the near-immortal superhuman-Sandworm hybrid emperor who is the object of the cult, dies:  one of his own worshippers, confident in her faith that she is doing the right thing, obeys Leto’s command to destroy the bridge, which causes him to fall into water–the only thing that could kill him.  In a sense, it is the same action as Judas’–an act of betrayal that leads to the death of the object of worship–and, in the storyline of the Dune series, it leads to an analogous outcome, for Leto’s death leads to the re-emergence of sandworms, in each of which resides “a pearl of his consciousness,” and, in the grand scheme of things, it is part of the same larger plan for the redemption of humanity (you need to follow the entire Dune series for this to make sense).  But it is fundamentally different, at the same time, perhaps, because Judas, when confronted with his test of faith, obeyed at the cost of his faith–well, at least, that’s my novel and story.

In the end, the Heavenly equilibrium is where people do the “right thing” without asking for reward–i.e. feed others with their extra long chopsticks–relying only on the faith that, in the end, God will provide.  When they ask “what is in it for me” and try to stipulate conditions, the path is set for a descent to the Hellish equilibrium, as the arrangements start breaking down.  But without faith, or at least enough people with strong enough–i.e. completely irrational–faith, the Heavenly equilibrium cannot sustain itself.  Beneath this simple characterization lies the fact that what constitutes “the right thing” is hardly obvious:  is handing the other candlestick to Jean Valjean the right thing?  Is betraying Jesus Christ without questioning it the right thing?  Are creationism and other forms of religious extremism the right thing?  Is it simply doing your utmost as you see fit, or is it trying to find out what others want from you and trying to adjust everything to their sense of what they see fit?  Perhaps it is none and all of these things, and the thing to do is to just place trust in the unknown and let things be while doing what you can, without trying to be too “rational”–which brings the point back to Kierkegaard’s original argument, that you “believe in” things that you do not understand, things that make no sense.  Perhaps Pangloss was right, after all–not because the world is objectively the best that it can be, but because cynicism subverts faith, and without faith, the world is set on a swift path to the Hellish equilibrium.

Perot and Trump

One of the phenomena in the history of American politics that never seems to draw enough serious attention is the Third Party phenomenon.  There are plenty of good reasons for this, I suppose:  Third Parties never win (true for all FPTP systems, although, technically, the US Presidency is not FPTP); Third Parties in the US are never consistent (unlike Third Parties in other FPTP systems, where there usually is the same persistent third party); and, most importantly, the politics of third parties don’t fit neatly into the mold of how American politics is conceptualized (particularly in the spatial framework and the estimates of DW-Nominate scores and such).

I had rather a lot of firsthand experience with this, the latest being not too long before I left academia.  I was starting to undertake a fairly sizable project ostensibly about the Perot campaigns of 1992 and 1996, but really about how the ossification of the two-party system was creating a large number of “alienated” voters who find an outlet in an alternative candidate–or, in other words, a theory of Third Party movements rooted in how the institutions of two-party politics (and the overuse of agenda setting by political insiders) operate.  This ran into exactly the three problems noted above:  very few people were interested in the Perot campaign, which was a one-off strange event from years ago; the alleged “uniqueness” of the Perot candidacy meant that there simply wasn’t enough data to generalize the theoretical underpinnings; and those who read the early versions insisted on evidence based on DW-Nominate scores and/or spatial reasoning, in spite of the central point of the theoretical argument being that spatial reasoning and DW-Nominate scores are misleading, with the evidence presented focusing on errors and misclassifications being associated with the strength of the Perot vote (or rather, the disparity between votes for Perot and votes for the local incumbents–almost always Democratic, at least in 1992, which is where I was paying more attention).  To be fair, the evidence was not very strong:  you’d believe it if you bought into the theoretical argument, but if you didn’t, you wouldn’t, and most people just didn’t bother.  A lot of my recent thinking about both Trump and Sanders arises from thinking about Third Party movements at a conceptual level, though.

Drawing an analogy between Perot (and Wallace before him) and Trump/Sanders (or even between Trump and Sanders) is a tricky business.  First, there are more than two decades separating 1992 and 2016 (and the same number of years between 1968 and 1992, ironically).  These are not the same voters.  These are not the same geographies.  Things have changed enough that it is worse than useless to try to match up the county maps between the elections.  The voters may be analogous in the sense that they are alienated and devalued by the political process, but the reasons that they feel alienated and devalued are different.  There are echoes that resonate between any pair of these four insurgent candidates, but there are also significant differences, especially in what they emphasized (and what the outside audiences think they heard from them).  If you will, it is too easy to get carried away with the analogies:  an elephant might be like a spear, but only at a rather limited and superficial level–not many of the properties associated with a spear carry over to an elephant.  (This is something I find troubling with spatial models, incidentally–“left” and “right” make for useful analogies, but to expect that something that is true within the framework of Euclidean assumptions–not universally applicable even within geometry–should apply with equal validity in politics is dangerous.  To think that measurements can be taken naively on the facile assumption that they are applicable is madness.)  What is more, it is not obvious that the specifics of the messages were even all that important (we don’t know what exactly Wallace’s, Perot’s, or even Trump’s programs were, other than that they were addressing the contemporary voters’ grievances–they were NEVER very clear on the specifics.)

To compound matters even more, Trump's success in capturing the Republican nomination means that Trump's electorate was a mixture of Republican voters and Trump voters, of whom the former are far more numerous, even if, without the latter, he'd probably never have won the presidency.  (Conversely, one might imagine that a 2016 version of Perot, with no association with either party but with the same sort of appeal as Trump, might have done better than Trump with the "Trump voters"; many Sanders voters, ironically, especially in the Midwest, although apparently NOT in the Northeast, seem to have supported Clinton after all.  Whether Perot drew more from the Republicans or not was a hot topic then; indeed, it was seemingly the ONLY thing people cared about regarding his candidacy, not what made Perot and his voters tick.)

These suggest a rather complex array of problems inherent in studying Third Party (or Third Party-type) movements, especially in the United States.  Since Third Parties, when they emerge in a form serious enough to be noticed, operate in a realm apart from the usual mode of politics we are accustomed to, we often lack the conceptual framework to place them in, and without that framework, we are either using completely mismatched yardsticks (wow, that mountain sure is high!  how many kilograms is it?) or making stuff up out of thin air, often based on nothing other than our opinions.  In order to make sense of them, we need to step back and rethink the truisms we take for granted about "normal" politics.  For example, rather than simply assume that "the party always wins," we might wonder:  why should people bother with the parties if they know they will always lose by playing through them?   But this, in turn, is difficult to achieve:  we KNOW that the party always wins.  We can predict outcomes better because we know the party will rig things so that it never loses, until the party hits a limit that we don't know, because we never thought about it; that limit is only a "theoretical" that is reached so rarely.  But the ability to "predict" normal politics with greater accuracy comes at a high cost when normal politics falls apart, a sort of intellectual Punctuated Equilibrium.  "Normal politics" is built on a fragile foundation:  enough people trust the institutions to "work," not unlike fiat money, perhaps.  As long as that trust remains sufficiently high, as long as most others can be expected to follow the rules and norms, most people find it to their advantage to respect them as well.  If so, knowing the rules and norms, and that they "work," no matter why they work (or when they might stop working), helps make better sense of the politics.  Better adaptation to the given institutional environment yields greater payoffs, so to speak.

When the trust undergirding the norms and rules is lost among enough people in a society, it becomes foolish for most people to keep following them:  many sets of institutions collapse completely when that happens.  Knowing the "rules and norms" of a bygone era ensures only extinction.  But has the point of mass extinction been reached yet?  When a Third Party actually wins, that is a sign that the point of collapse has arrived (this has happened only once, in 1860).  When one merely performs well, it suggests that dangerously many people are willing to set aside the rules and norms of "normal" politics.  With the strange candidacy of Donald Trump, of course, it is not at all obvious whether a Third Party candidate did or did not win, or whether the politics of the next era will abide by the same rules and norms as before.

There are two entirely different scenarios here, whose prospects depend not only on the motives of Trump but on those of other political actors as well.  The Republican leadership in Congress might be sufficiently emboldened to think that, with a nominally Republican White House, they can do whatever they like.  But Trump won the "Trump voters" (rather than the Republicans who voted for Trump because he's a Republican and all that) on the basis of promises that run completely opposite to the usual Republican agenda:  a big infrastructure program, something "wonderful and beautiful" in place of Obamacare, and so on.  Will these be cast aside just because Congressional Republicans demand it?  That is one possibility, but it is equally, if not more, plausible that Trump will seek to maintain his own agenda rather than reduce himself to Paul Ryan's plaything.  Perhaps these programs will be achieved with big side payments to the Republicans (quite likely), but Trump may just as easily seek to draw in potential allies among the Democrats as well, especially among the alleged "far left" (whose misleading DW-Nominate scores were created by their record of votes against the agenda of the Democratic Party, indicating their outsider status within the party machinery more than their "ideology"; in other words, perfect potential allies for a Trump administration if it should seek to buck the conventional agendas of both Republicans and Democrats).  Odd coalitions that defy the existing rules and norms, were that to take place, could become the norm.  If so, W-Nominate scores for 2017-18, for example, would look dramatically different from DW-Nominate scores.  (In an attempt to create a "universal" scale, DW-Nominate forces a set of constraints across different sessions of Congress, while W-Nominate scores can be calculated separately for separate batches of votes, usually session by session.  If things change dramatically from one session of Congress to the next, the change will show up in W-Nominate with much greater clarity than in DW-Nominate, even though the scores will not be directly comparable.)
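To make the contrast concrete, here is a minimal sketch, in Python, of the difference in spirit between the two procedures.  It is emphatically not the actual NOMINATE estimation, which fits a probabilistic spatial utility model; it uses a plain SVD of simulated roll-call matrices as a crude stand-in, and the chamber size, the "realignment," and every parameter below are hypothetical, chosen only for illustration.  The point is merely that scaling each session separately (as W-Nominate does) lets a sharp between-session change show up, while forcing a single common scale across sessions (loosely analogous to DW-Nominate's constraints) smooths it over.

import numpy as np

rng = np.random.default_rng(0)

def simulate_session(ideal_points, n_votes=200):
    """Simulate a roll-call matrix (rows = members, cols = votes; 1 = yea, 0 = nay)
    from one-dimensional ideal points, using a simple logistic response."""
    cutpoints = rng.normal(size=n_votes)
    slopes = rng.choice([-1.0, 1.0], size=n_votes) * rng.uniform(0.5, 2.0, size=n_votes)
    logits = slopes * np.subtract.outer(ideal_points, cutpoints)
    probs = 1.0 / (1.0 + np.exp(-logits))
    return (rng.random(probs.shape) < probs).astype(float)

def first_dimension_scores(votes):
    """Crude one-dimensional "score": leading left-singular vector of the
    centered roll-call matrix, rescaled to roughly [-1, 1].  Sign is arbitrary."""
    centered = votes - votes.mean(axis=0)
    u, s, _ = np.linalg.svd(centered, full_matrices=False)
    scores = u[:, 0] * s[0]
    return scores / np.abs(scores).max()

# A hypothetical 100-member chamber observed over two sessions; members 90-99
# "realign" (flip sides) between the sessions.
x_session1 = rng.normal(size=100)
x_session2 = x_session1.copy()
x_session2[90:] *= -1.0

votes1 = simulate_session(x_session1)
votes2 = simulate_session(x_session2)

# W-NOMINATE-like: each session scaled on its own, so the realignment is visible
# (the two sets of scores are not directly comparable, as noted above).
w_like_1 = first_dimension_scores(votes1)
w_like_2 = first_dimension_scores(votes2)

# DW-NOMINATE-like (very loosely): a single common scale across sessions,
# approximated here by pooling both sessions' votes into one matrix.
dw_like = first_dimension_scores(np.hstack([votes1, votes2]))

print("member 95, per-session scores:", round(w_like_1[95], 2), round(w_like_2[95], 2))
print("member 95, pooled score:      ", round(dw_like[95], 2))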

When all this happens, will the "Perestroikists" (or, at least, their equivalents among the Trump fans) say "I told you so"?  But they never said any of this:  only that the conventional wisdom, as defined by spatial modelers and others, is wrong and misleading.  Perhaps they might try to explain things by invoking the "Magical Great Man" who, by sheer voodoo, can do things that others can't?  Well, maybe that's not entirely wrong, but if so, it would only have been possible because the existing institutions of politics will have collapsed under the weight of their own internal contradictions.  (Yes, I am deliberately invoking Marx, and he was right:  the institutions of capitalism are internally contradictory and, if taken to their logical extreme, are bound to collapse under their own weight.  Marx himself recognized, though, that the internal contradictions are in fact held together by seemingly "irrational" but in fact very logical, flexible, and powerful sinews; the phrase "religion is the opiate of the masses" came out of an admiring recognition of how powerful religion is at holding together societies that might otherwise be too brittle.  In a sense, multicultural myths might be taking the place of the religious sinews in a modern society, but moderns don't seem to realize, as Marx actually did, incidentally, that religion did not become the sinews of society by fiat:  it took many religious wars, pogroms, and burnt heretics to sustain it.)

 

Bankruptcy of American Social Policy

One of the objections raised repeatedly by the embattled American working class during the 2016 campaign season against present social policy (welfare, Medicaid, and so on) is that the way the policy is structured is "humiliating."  I think this characterization is a little misleading:  it gives the impression that the primary objection is mainly "psychological," that the working class is rejecting a policy that is substantively "good for them" out of sheer obstinate pride.  I think this is a basically wrongheaded characterization that, quite frankly, makes the wonks feel better about themselves and further convinces them that these people are "undeserving of help."  Let's think through the specifics.

One point that was raised repeatedly in the course of the Japanese economic morass is that it is nearly impossible to lower the savings rate of the Japanese public, because Japan is a country with limited social welfare programs and a population that is quite old.  When the government spends money like mad, the money that makes its way into people's pockets finds its way into their savings.  Even when interest rates are lowered absurdly, to the point of effectively punishing savers, people keep saving.  They have no recourse other than accumulating savings:  they want to keep a reasonable standard of living.  They cannot depend on the government (due to the limited nature of the Japanese welfare state coupled with the high cost of living).  They cannot depend on children (partly due to decaying social networks, partly because, to a degree as a consequence of decaying social networks in the previous generation, they have no children).  The magic words here are "a reasonable standard of living."  The Japanese welfare system is adequate, as far as I know, to keep people from starving in the streets.  But very few Japanese are so impoverished that they are in immediate danger of starving.  Indeed, they save desperately precisely so that they will never be in a position where only government aid stands between them and imminent starvation.  A lot of Japanese, no doubt, are in fact quite far from such dire straits.  This does not matter, of course:  they do not want to fall into more desperate circumstances, and they need savings to ward that off.  And this proclivity to save subverts the very point of expansionary fiscal policy:  very little of the money actually gets spent to stimulate the economy.
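The logic can be put in back-of-the-envelope terms with the textbook Keynesian spending multiplier, 1 / (1 - MPC), where MPC is the marginal propensity to consume.  The figures below are hypothetical and say nothing about actual Japanese parameters; they only show how the multiplier collapses toward 1 when nearly every extra unit of income is saved rather than spent.

def simple_multiplier(mpc):
    """Textbook Keynesian spending multiplier: 1 / (1 - MPC)."""
    return 1.0 / (1.0 - mpc)

# Hypothetical marginal propensities to consume, for illustration only.
for mpc in (0.8, 0.5, 0.1):
    stimulus = 100.0  # an arbitrary 100 units of government spending
    total_effect = stimulus * simple_multiplier(mpc)
    print(f"MPC = {mpc:.1f}: 100 of spending -> about {total_effect:.0f} of total demand")

With an MPC of 0.1, a hundred units of stimulus generate only about 111 units of total demand, which is the arithmetic behind the complaint that deficit spending simply refills savings accounts.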

The situation in the United States is analogous on multiple fronts.  First, all the monetary policy undertaken to expand the money supply did little good over the past decade.  Banks, firms, and various wealthy holders of capital are almost literally sitting on piles of money.  They see a lackluster economy that is not worth investing in, so they keep their powder dry.  But because investment activity is sparse, the economy remains depressed, except for short-term infusions that temporarily raise employment as long as firms need make no long-term commitment to the workers; thus employment seems to get better-ish at times, but not at a "fundamental" level that provides long-term work for the many underemployed.  Second, in combination with the fallacies of spatial thinking, the Japanese story underscores why the seemingly beneficent social policy of the American left meets with such disdain among the working class and the lower middle class.  Spatial thinking suggests that, to those who like the welfare state, a little welfare state is better than none, even if not as good as a big one.  This is not so.  The little savings of the working class get in the way of qualifying for the aid offered by the government.  The little bit of resources they have set aside to shore up their own cause has to be abandoned so that they might qualify for a little bit of aid; and even if they do, the fact that they used to have a little bit of resources can disqualify them from receiving aid at the moment they need it.  The pride, in other words, is not simply psychological:  it is the little bit of security that they built with their own hands, which winds up being worse than useless as security at the time of need, coupled with the fact that they have to lose everything they have and throw themselves at the mercy of the bureaucrats, which adds a psychological dimension to the humiliation.
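To see the cliff in concrete terms, consider a stylized means test.  The asset limit and benefit amount below are hypothetical, picked only to show the shape of the problem:  under an asset test, a household that saved somewhat more than the limit ends up with fewer total resources after a shock than one that saved nothing, unless it first spends its savings down.

def resources_after_shock(savings, asset_limit=2000.0, benefit=6000.0):
    """Stylized means-tested program: households with assets at or below the
    limit receive the benefit; everyone else gets nothing.  Returns total
    resources (remaining savings plus any benefit) after an income shock."""
    qualifies = savings <= asset_limit
    return savings + (benefit if qualifies else 0.0)

# Hypothetical households hit by the same shock, differing only in savings.
for savings in (0.0, 2000.0, 3000.0, 5000.0, 8000.0):
    total = resources_after_shock(savings)
    print(f"savings {savings:7.0f} -> total resources {total:7.0f}")

In this toy example the household with 3,000 in savings ends up with half the resources of the household that saved nothing, which is precisely the "punished for your own prudence" dynamic described above.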

My personal opinion about Japan has always been that the Japanese government was mistaken in believing that it could pave the country into economic vitality.  It is not construction projects they need, but a guarantee of a generous pension for the elderly as a fundamental right, something that can pre-empt the need for obsessive saving.  The same idea should be equally applicable to reforming "welfare" in the U.S.:  eliminating means testing so that people can qualify for benefits without having the small savings they worked hard for stripped away.  This is hardly a revolutionary idea:  the same idea, under the name "Share Our Wealth," floated around during the Great Depression, although its association with the demagoguery of Huey Long and Charles Coughlin discredited it quite a bit.   It is, of course, the same basic idea as "universal basic income" at its core.  Hayek, when he advocated for the basic idea, was concerned with preserving the dignity and independence of the individual.  When those who are struggling economically are forced to choose between their dignity and independence (underwritten, in economic terms, by their homes, their savings, and whatever limited means they have to support themselves) and stripping those away under duress in order to escape privation, the society is transformed into one of serfdom.  The serfs may be superficially well-off, even rich.  But without dignity or independence, they are still serfs, with little ability to run the society they occupy, a tyranny that operates at the whim of their masters.   The U.S. and Japanese welfare states further subvert social stability because they not only demand the degradation of their recipients as a qualification, but also offer rather little even after the forced degradation.

In an odd way, this is what should make a successful "welfare state" work:  eliminate welfare and replace it with a "right to dignified livelihood," for which the only qualification is "citizenship," defined in an older sense, a social rather than a legal sense.  This is, in a sense, something that even Hayek would have approved of, even if his less-than-half-brained self-proclaimed descendants would not.