So tell me, what will actually happen in the future?

The future is a fascinating thing. Not only our private discussions but also public debates on TV are shaped by questions about what’s to come. How is this policy going to affect favorability ratings? How is your fellow student going to perform on his graduate admission test? Which team will win the UEFA Champions League? When will self-driving cars be allowed on the streets? The range of potential questions spans the entire spectrum: from personal life to politics, economics, business and sports. With the quip “Prediction is very difficult, especially if it’s about the future”, Niels Bohr captured the essence of our fascination with the uncertainty of the future. Only when we challenge ourselves to constantly track and adjust our assumptions about the future can we appreciate the difficulty of the endeavor.

My own personal forecasting challenge

The turn of the year is a well-recognized occasion to reflect on past events and assess what the upcoming year might have in store. This year, as in so many years before, I discussed these questions with my friends. This time, however, we didn’t stop at discussion. To keep ourselves accountable, we agreed on a list of questions that forced us to give precise forecasts about the future. Here are some of them:

  • Who will be the GOP frontrunner for the 2016 US presidential elections?
  • Will the French Air Force fly air strikes in Libya before 1 September 2016?
  • Before 1 May 2016, will Britain set a date for a referendum on EU membership?
  • Which team will win the European Football Championship in France this year?
  • Before 1 November 2016, will it be officially announced that Greece is leaving the eurozone?
  • Will negotiations on the Transatlantic Trade and Investment Partnership (TTIP) be completed before 1 January 2017?
  • What will be the price of oil (Crude Oil Brent) on December 16, 2016?
  • Before 1 November 2016, will UBER have filed for an IPO?
  • On December 16 2016, what will be the stock price of Rocket Internet SE?
  • What will be the price of bitcoin (in Euro) by December 16 2016?

How can forecasting skills be assessed?

While some of these questions are binary (yes-or-no) questions, others have a wider range of potential outcomes. Nevertheless, all of them share two traits:

  • A clear date is associated with the event
  • They enforce clear-cut decision making

Only questions phrased in this way allow for subsequent tracking and evaluation. Simply asking “Is social media going to be more important for business success this year?” will evoke vivid answers, but how could any answer be credibly assessed? What does “important” even mean? Which metric determines the importance of social media for business success? And what threshold should it have surpassed in the course of the year?
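Once questions carry a clear date and a clear-cut outcome, forecasting skill can actually be scored. One standard approach, widely used in forecasting tournaments of the kind Tetlock describes, is the Brier score: the mean squared difference between the probability you assigned and what actually happened. A minimal sketch in Python (the probabilities and outcomes below are made up for illustration):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and binary outcomes.

    0.0 is a perfect score; a constant 50/50 guesser always scores 0.25.
    """
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical example: three yes/no questions, probabilities fixed in advance.
probs = [0.8, 0.3, 0.9]   # forecast probability that each event occurs
happened = [1, 0, 1]      # 1 = event occurred, 0 = it did not

score = brier_score(probs, happened)
print(round(score, 3))    # prints 0.047 – well below the 0.25 of coin-flipping
```

Scoring many such questions over time is exactly what separates trackable forecasting from opinion: a low average Brier score is hard to fake, and it rewards both getting the direction right and being honest about uncertainty.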

Nevertheless, such vague questions still dominate public discussion, and so-called experts are more than happy to share their opinions. It goes without saying that one cannot really lose the ‘expert status’ if there is no way to track performance. Once you are perceived as an expert, you are very likely to remain one. In their book “Superforecasting”, Phil Tetlock and Dan Gardner share a great story of how experts had been asked in the 1980s about the probability of a collapse of the Soviet Union. After the Soviet Union had indeed dissolved, the same experts were asked what probability they had assigned beforehand. On average, they believed they had assigned a high probability – even though they hadn’t. The story is a great reminder of how tough forecasting is and how easily we trick ourselves into believing it’s easy (hindsight bias).

Good Judgement and Superforecasters

Indeed, our small forecasting challenge was inspired by the superb book “Superforecasting”, which shares the wealth of experience Gardner and Tetlock gathered in the course of the “Good Judgement Project” (GJP). The GJP essentially crowdsourced forecasting, providing a platform where interested people could forecast events from the spheres of business, politics, sports and society. Over years of recurring “forecasting tournaments”, they found that some people consistently and significantly outperformed others – even when benchmarked against professional forecasters such as those in the intelligence community. They labelled these people “superforecasters”. Through the GJP, Gardner and Tetlock established a rather bottom-up approach for determining who actually deserves the expert label (an approach with challenges of its own, which the book discusses at length).

Where are all those superforecasters?

Even though the GJP has provided new insights, the predominant method of forecasting remains different. Experts, defined a priori by the institutions interested in the forecast, engage in a dialogue about the future, casting their votes on the importance of macro-trends and global dynamics. The European Union, for example, continuously engages in a foresight process to stay on top of technological and societal trends and shape policy accordingly. Other institutions also aim to leverage their networks of experts. One method regularly employed in this regard is the Delphi method. It draws its name from the oracle and was originally developed by the RAND Corporation in the 1950s. A set of questions is posed to a group of experts, who are asked to share their judgement intuitively and independently. In the following rounds, the method aims to foster consensus among the experts. While the Delphi technique may have advantages in bringing a group of diverse experts into line (and/or identifying outliers and asking for their qualitative feedback), there is no direct way to track the experts’ performance.

This essentially leaves us with the problem described at the outset: without reliably tracking experts’ performance, we cannot credibly determine who the experts are in the first place.

As for our small forecasting challenge, we’ll certainly be able to determine a “winner” at the end of the year. But does that make the winner a superforecaster, or an expert? Probably not – it could have been pure luck. Therefore, forecasting will (and this is also a forecast) for the time being remain a blend of bottom-up, trackable forecasts and ‘expert’ opinion and judgement. As long as we are aware of the pitfalls of both approaches, that need not even be a bad thing.
