Now, I know it isn’t sexy to blog about complexity, but I am fascinated by it. The more I understand, the less I seem to know; more often than not I cannot explain something, yet I truly believe it to be true.
Nassim Taleb, originator of the black-swan theory, has just had an essay published in The Edge called THE FOURTH QUADRANT: A MAP OF THE LIMITS OF STATISTICS. I have to say it is quite heavy going for a casual read, but near the end it neatly summarises two of the issues I find most interesting about complex systems.
1. The inadequacies of metrics. Increasingly we see organisations and policy makers rely on targets and measures that are wholly inadequate for the job. Taleb details his proof that ‘variance’ and ‘standard deviation’ are far from useful in the complex domain:
“There is a measure called Kurtosis that indicates departure from “Normality”. It is very, very unstable and marred with huge sampling error: 70-90% of the Kurtosis in Oil, SP500, Silver, UK interest rates, Nikkei, US deposit rates, sugar, and the dollar/yen currency rate come from 1 day in the past 40 years, reminiscent of figure 3 [graph shown above]. This means that no sample will ever deliver the true variance. It also tells us anyone using “variance” or “standard deviation” (or worse making models that make us take decisions based on it) in the fourth quadrant [the complex domain] is incompetent.”
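You can see this instability for yourself with a rough simulation. The sketch below is my own illustration, not Taleb's: it draws fat-tailed "daily returns" from a Student-t distribution with 3 degrees of freedom (a stand-in for real market data, which Taleb argues is wilder still) and measures what share of the sample's fourth-moment mass — the quantity kurtosis is built from — comes from the single biggest day.

```python
import math
import random

random.seed(42)

# A Student-t draw built from Gaussian draws: normal / sqrt(chi-squared/df).
# With df=3 the tails are fat enough that the fourth moment is not finite,
# so sample kurtosis never settles down.
def student_t(df):
    z = random.gauss(0, 1)
    chi2 = sum(random.gauss(0, 1) ** 2 for _ in range(df))
    return z / math.sqrt(chi2 / df)

# Roughly 40 years of trading days.
returns = [student_t(3) for _ in range(10_000)]

# Kurtosis is driven by fourth powers of deviations from the mean,
# so one extreme day can dominate the entire sample.
mean = sum(returns) / len(returns)
fourth_powers = [(r - mean) ** 4 for r in returns]
share_of_max_day = max(fourth_powers) / sum(fourth_powers)

print(f"Share of sample kurtosis from the single biggest day: {share_of_max_day:.0%}")
```

Re-run it with different seeds and the share jumps around wildly — which is exactly the point: a statistic dominated by one observation in ten thousand cannot be estimated from any realistic sample.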
In other words, just because the extraordinary event hasn’t happened yet doesn’t mean it won’t — just as no one had seen a black swan until the first one was found. Nor is this all written in hindsight to explain the current economic turmoil. In 2006 Taleb wrote:
“The government-sponsored institution Fannie Mae, when I look at its risks, seems to be sitting on a barrel of dynamite, vulnerable to the slightest hiccup. But not to worry: their large staff of scientists deemed these events “unlikely.””
2. Beware of risk numbers
“Not only we have mathematical problems, but risk perception is subjected to framing issues that are acute in the Fourth Quadrant. Dan Goldstein and I are running a program of experiments in the psychology of uncertainty and finding that the perception of rare events is subjected to severe framing distortions: people are aggressive with risks that hit them “once every thirty years” but not if they are told that the risk happens with a “3% a year” occurrence. Furthermore it appears that risk representations are not neutral: they cause risk taking even when they are known to be unreliable.”
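What makes the framing effect so striking is that the two descriptions are, to a first approximation, the same hazard. A quick back-of-the-envelope calculation (mine, not Taleb's) makes this concrete: a risk with a 3% annual chance of striking has roughly a 60% chance of hitting at least once over thirty years.

```python
# Two framings of the same hazard: "3% a year" vs "once every thirty years".
p_per_year = 0.03
years = 30

# Chance the event strikes at least once over the horizon,
# assuming independent years: 1 - (chance of no hit every year).
p_at_least_once = 1 - (1 - p_per_year) ** years

print(f"P(at least one hit in {years} years) = {p_at_least_once:.0%}")  # → 60%
```

Yet, as the experiments suggest, people treat the two numbers very differently — the "once every thirty years" phrasing feels distant in a way "3% this year" does not.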
Now, I find this all rather unnerving. Major decisions are being taken on our behalf based on inadequate metrics, and true risk levels are being severely misunderstood. Where are all the complexity graduates who have come to terms with uncertainty, understand emergence, feedback and phase changes, and can put a more appropriate system into place? We need them in place now if we are to dampen this increasingly topsy-turvy economic roller coaster before it is too late.