During EXPO REAL 2015, Black Swans were the subject of a panel discussion in the EXPO REAL Forum. Hugh F. Kelly, PhD, CRE, Real Estate Economics, Brooklyn, NY, gave a keynote speech in which he asked just how predictable unpredictable events in fact are, and how we can prepare for them, especially in terms of real estate. For those who were not able to attend this keynote, here is the chance to read a summary:
The term “Black Swan,” popularized by Nassim Nicholas Taleb, names highly improbable events that nobody is able to predict, still less to forecast the impact of. According to Taleb, conventional risk analysis suffers from a triple opacity:
- An illusion of understanding – we think we know more than we actually do
- A retrospective distortion – we tend to ‘explain’ causality only after the fact
- An overvaluation of information – both quantitatively and categorically
Taking three cases to examine the triple opacity at the heart of Taleb’s Black Swan challenge, Hugh Kelly answered the question:
How do we put The Black Swan to work for us?
The Illusion of Understanding
As the first Black Swan event, Hugh Kelly took the 9/11 destruction of New York’s World Trade Center. Immediately after the horror and shock of the terrorism and the awful loss of life settled into our awareness, real estate people – along with everyone else – tried to come to grips with what this “new era” would mean, and how the world had changed.
There was fairly widespread consensus about what 9/11 was going to mean for high-rise office buildings – and the outlook was, in a word, dire. “Experts” understood that it would be impossible to lease high floors in the tallest towers ever again. Downtown New York, in particular, was finished as an office location – and given the toxic cloud that emanated from Ground Zero for months after 9/11, it was probably not going to be very desirable as a place to live, either. And, as a wider consequence, big cities and big properties were listed as prime “targets of opportunity”; therefore, central business districts were going to be eclipsed by less-visible suburban locations. That was the common understanding, and it was nothing if not illusory.
The best guide to what businesses would do after 9/11 was what they actually did do when forced to make the tough choices of relocating in the one to six months after the tragedy. The result: firms did not go far afield, did not go to the locations that happened to have the most space opportunistically available, and did not elect to hedge costs by going to the least expensive locations either. And, with rare exceptions, they did not move more than three miles from Ground Zero. Where they did move was to sites with a combination of characteristics: access to the entire regional labor force (that is, locations toward the region’s center), a volume of modern, highly functional office space, and multiple options in transportation modes. In other words, businesses sought to replicate the attributes that had drawn them to the World Trade Center and its environs in the first place.
In one sense, this approach fits Taleb’s recommendation of a skeptical, bottom-up empiricism. But in another sense it relies upon standard tools of data analysis that Taleb frequently derides in his attack on prediction. The antidote to the illusion of understanding is deeper understanding. The cure for superficial analysis can be simply better analysis.
The Retrospective Distortion
The next step was to confront Nassim Taleb’s controversial position, stated in the title of Part 2 of The Black Swan. This Part is labeled “We Just Can’t Predict,” and leads off with Chapter 10, “The Scandal of Prediction.” In large measure, Taleb believes this failure arises less because we lack clairvoyance than because we fall into three big errors.
First, we dismiss the role of chance in life, in business, and in the course of history.
Second, we succumb to what he calls “the narrative fallacy,” the tendency to look at events after the fact and rationalize – over-simplistically – cause and effect so that we can feel comfortable that we grasp how and why things happen. We tell ourselves stories.
And then, in the third error, those stories become the basis for a mathematics that quantifies those narratives into forecasting models, extrapolating the presumed cause-and-effect relationships – perhaps partially correct, but only partially so – into the future. The result is predictions based upon central tendencies, usually derived from regression equations, that avoid extremes as a deliberate statistical strategy.
This critique is Taleb’s basis for connecting the concept of “improbable” to the concept of “unexpected.” It is extremely germane for real estate investors, who must rely on the Principle of Anticipation in evaluating and pricing individual deals and overall investment strategies. The Principle of Anticipation states that present value is the discounted present worth of expected future benefits – projected cash flow and projected appreciation – of the assets we purchase. Taleb asks us to consider whether we aren’t really foolish to presume we can anticipate future performance in any meaningful sense.
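The Principle of Anticipation can be made concrete with a short discounted-cash-flow sketch. The figures and the `present_value` helper below are illustrative assumptions, not part of Kelly’s talk; they only show the arithmetic of discounting expected future benefits back to today’s value:

```python
# Illustrative discounted-cash-flow (DCF) valuation: present value as the
# discounted worth of expected future benefits (annual cash flows plus a
# projected resale). All figures are hypothetical.

def present_value(cash_flows, sale_price, discount_rate):
    """Sum each year's cash flow, plus the final sale price, discounted to today."""
    pv = sum(cf / (1 + discount_rate) ** t
             for t, cf in enumerate(cash_flows, start=1))
    pv += sale_price / (1 + discount_rate) ** len(cash_flows)
    return pv

# Five years of projected net operating income, then a projected sale.
cash_flows = [100_000, 103_000, 106_000, 109_000, 112_000]
value = present_value(cash_flows, sale_price=1_500_000, discount_rate=0.08)
```

Taleb’s challenge lands on the inputs: every number on the right-hand side – cash flows, sale price, discount rate – is a forecast, so the computed “present value” is only as sound as the anticipation behind it.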
Hugh Kelly used the run-up to the Global Financial Crisis as the case in point. Alan Greenspan, in an apologia published in Foreign Affairs magazine, confesses “I never saw it coming,” but then protects his flank by remarking that the Federal Reserve’s extremely sophisticated models didn’t predict the systemic threat – and neither did the models of the big banks like JP Morgan Chase, nor did the models maintained by the World Bank and the IMF. But rather than throw out the models – which is Taleb’s recommendation – Greenspan advocates tweaking the models, incorporating elements of behavioral economics to account for what Keynes long ago called “animal spirits”: the entry of fear, bringing risk aversion into dominance; the claims of immediacy, short-term thinking, over long-range perspective; and the herd instinct that first drives speculation and then panic, first irrationally inflating prices and then over-correcting that inflation on the downside. But even with a nod toward acknowledging “tail risk” (a key Black Swan argument), Greenspan concludes, “Forecasting will always be something of a coin toss.”
That sounds discouraging, and Hugh Kelly asked:
Can’t we do a better job of anticipating? Shouldn’t we do a better job of anticipating?
And he gave the answer:
You bet we can, and we should!
There are a number of fine books treating the kind of Black Swan that appears as an economic bubble, and the repeated history of bubbles reveals a set of recurrent symptoms that can help us diagnose future bubbles in advance. A few of these symptoms are:
- The proclamation of a “new economy” with new rules, usually with new technology driving the change
- The onset of speculative fever, especially manifested by the dominance of expected future appreciation in setting market prices, as opposed to current operating profits and tangible utility in producing profitable goods and services
- Over-confidence, bred by a specious association of wealth with intelligence
- The rise of an influential generation lacking financial memory
- The rise of leverage: excessive borrowing against weak collateral and the pursuit of one’s own financial gain with “other people’s money”
The confluence of these factors – the emergence of this complex of symptoms – seems to be how that mental disease, the euphoria that inflates financial bubbles, manifests itself in advance of the collapse.
The Overvaluation of Information
At several spots in The Black Swan, Taleb recounts the events of October 19, 1987. That was the day that stocks dropped 22.6% in value on the New York exchange, and markets around the world followed suit.
This is still the market’s largest one-day decline in percentage terms in its long history.
The Narrative Fallacy certainly came into play, as analysts attributed the crash to a market spooked by the escalation of hostilities between Iran and the U.S., to a sense of over-valuation sparking arbitrage selling in New York against the options on the futures markets in Chicago, and even to a shortage of liquidity as the London markets were closed due to the Great Storm of 1987.
But most observers attributed the free fall in prices to computerized program trading, triggered by the parameters of portfolio insurance. The algorithms in the programs incorporated vast amounts of market data very quickly, and were thought to be a measure of protection in signaling reasons to “get out”, to sell, if conditions hit trigger points that had previously been associated with accumulating risk. That is, the programs were supposed to use data, numbers representing real information, to mitigate risk. They exacerbated risk instead.
In an unexpectedly powerful demonstration of the Efficient Market Hypothesis, though, all the major traders had the same information and the same algorithms – meaning that they all sent “sell signals” at the same time. With a huge imbalance of sellers over buyers, there was only one direction for a market seeking a clearing price – down into the abyss. Only the closing bell could halt the cascade – and even then it took hours to book and register the trades that would finally tally the day’s losses.
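The feedback loop described above – identical data, identical trigger rules, simultaneous sell signals – can be sketched as a toy model. The thresholds, price impact, and trader count here are invented assumptions for illustration; no real portfolio-insurance algorithm is reproduced:

```python
# Toy model of the 1987 cascade: every program sells once the market falls
# below the same stop-loss level, and each sale pushes the price down further,
# tripping the next program. All parameters are hypothetical.

def cascade(entry_price, shock, traders, stop_loss=0.95, impact=0.03):
    """Return the price path after an initial shock triggers uniform selling."""
    price = entry_price * (1 - shock)   # exogenous dip before any program fires
    path = [price]
    remaining = traders
    # Every trader shares the same data and the same trigger rule.
    while remaining and path[-1] < stop_loss * entry_price:
        remaining -= 1                         # one more sell signal fires
        path.append(path[-1] * (1 - impact))   # the sale itself moves the market
    return path

# A 6% dip breaches the common 5% stop-loss; ten identical programs then sell
# in turn, mechanically deepening a modest shock into a far larger decline.
path = cascade(entry_price=100.0, shock=0.06, traders=10)
```

Because every program reads the same signal, the selling itself keeps the price below the shared trigger, so each remaining trader sells too – the risk-mitigation rule becomes the amplifier.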
A Black Swan like this has been seen several times since. And Hugh Kelly made the argument that the problem is not too much information or too-sophisticated math.
Rather, the problem is that using algorithms as substitutes for prudent judgment, and valuing speed over careful consideration, is a mistaken approach to thoughtful decision-making.
Data and analysis are always about the past, but good judgment is of the essence in making decisions shaping the future.