Alec Finney, Managing Director of Forecast Insight, explores the challenges forecast creators face in communicating the contextual information that gives stakeholders a full picture of a forecast. The assumptions that drive the forecast, and an expression of confidence in the numerical output, are seen as the most important.
To quote a senior VP in Pharma:
“We see the numbers – but to assess the quality of the forecast we need to see much more.”
The iceberg analogy might be a bit hackneyed – but forecast outputs are like icebergs: 80% of the work needed to produce them is invisible.
We see ‘iceberg’ forecasts every day in the media – from which soccer team will win which trophy, to how interest rates will vary over the next year, to what the weather will be like tomorrow. You will have noticed I have left out political election forecasts, as these seem unforecastable at the moment.
So what is there hidden under the water?
Essentially two things. First, a description of what the future will look like contained in a set of self-consistent assumptions – and second, a modelling component that shows how the changes determined by the assumptions will affect the forecast.
The assumptions can be categorised in several ways: how the cultural and environmental world will change, how the competitive landscape will evolve and what the impact of new technology will be – to name a few.
Forecast models fall into two classes. Those with no back data to work with look for causal links between assumptions and forecasts – for example, the relationship between interest rates and inflation. Models with back data use extrapolation techniques (there are many algorithms available) to project the future. Algorithms alone give a naïve vision of the future, one in which the environment stays the same and the present trend continues. Under the iceberg is the process of identifying future events and assessing their impact on the naïve forecast. This part of the modelling process can get lost in the information cascade, but it should be visible if anyone is to make a judgement call on forecast quality.
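As a minimal sketch of that two-step process – a naïve extrapolation of back data, then an explicit adjustment layer for identified future events – the following might help. The function names, numbers and event structure are my own illustrations, not anything from a real forecasting system.

```python
# Illustrative sketch only: naïve linear extrapolation of back data,
# followed by an explicit, visible adjustment for assessed future events.
# All names, figures and the event structure are hypothetical.

def naive_forecast(history, periods):
    """Extend the average period-on-period change of the back data."""
    step = (history[-1] - history[0]) / (len(history) - 1)
    return [history[-1] + step * (i + 1) for i in range(periods)]

def apply_events(forecast, events):
    """Adjust the naïve forecast for identified future events.

    events maps a period index to a multiplicative impact, e.g. 0.9
    for a competitor entry assessed to cost 10% of volume that period.
    """
    return [value * events.get(i, 1.0) for i, value in enumerate(forecast)]

history = [100, 104, 108, 112]           # back data
base = naive_forecast(history, 3)        # trend alone: [116.0, 120.0, 124.0]
adjusted = apply_events(base, {1: 0.9})  # event impact hits the second period
```

The point of keeping `apply_events` as a separate, named step is exactly the transparency argued for above: the event assumptions stay visible rather than being buried inside the model.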
An example will be useful here.
Here is a forecast:
Interest rates in the UK will stay at 0.25% in 2017 but increase to 2.0% by the end of 2018
But below the water we have:
Consumer spending will be flat over the rest of 2017, maintaining low interest rates (assumption) – but inflationary pressure as a result of a weaker pound will lead to an increase in interest rates to 2% in 2018 (modelled output).
Many conversations with forecast users, creators and approvers point to the same conclusion: it is essential we find ways to make the ‘below the waterline’ forecasting information more available and transparent. The debate about investment opportunities then moves from subjective arguments about the numbers to a more informed and productive discussion of the assumptions that drive the forecasts and the strength of the models that transform qualitative assumptions into a quantifiable output.
The problem facing businesses about to make significant strategic investments is that the assumptions and insights needed to create a quality forecast – those under the water – are scattered across a wide range of diverse, non-aligned systems within the business.
There are also significant problems in bringing this information together to help stakeholders make confident decisions. It is not easy to get interactive systems to combine numerical and narrative information, and even more difficult to keep it all up to date.
The current solution to these problems is around 350 PowerPoint slides answering all 5,000 possible questions that could be asked about the forecast – instead of finding out the 25 real concerns in the minds of the key stakeholders. This over-complication of forecasts comes from a series of discussions with the forecast creator that add up to ‘What could our senior managers possibly want to know about this forecast?’ Everybody makes a contribution. Some of the questions cannot be modelled, but the forecast creator is forced to make a guess anyway. In reality, most senior managers have a small number of questions about a particular asset forecast: how will the competition react, how confident are we on price and (from a pharma viewpoint) are there really that many patients suffering from this condition? By knowing these trigger points, the forecast can focus on modelling these specifics – and enrich the debate on possible investment.
A particular issue that generates much more heat than light arises when two forecasting organisations within one company forecast the same product asset – usually the global view versus the affiliate view. Again, discussions that focus only on the forecast numbers can be somewhat sterile. However, a simple comparator matrix setting the two sets of assumptions side by side shows very clearly where the differences lie. An informed discussion can then take place.
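A comparator matrix of this kind is easy to sketch. The assumption labels and values below are invented for illustration – the only point is that flagging where the global and affiliate views diverge turns an argument about numbers into a discussion about assumptions.

```python
# Illustrative sketch of an assumption comparator matrix.
# All assumption names and values are hypothetical.

global_view = {
    "Peak market share": "25%",
    "Competitor entry": "2019",
    "Price at launch": "$50",
}
affiliate_view = {
    "Peak market share": "18%",
    "Competitor entry": "2018",
    "Price at launch": "$50",
}

def comparator_matrix(a, b):
    """Return (assumption, value_a, value_b, differs) for every assumption."""
    return [
        (key, a.get(key, "-"), b.get(key, "-"), a.get(key) != b.get(key))
        for key in sorted(set(a) | set(b))
    ]

for name, g, aff, differs in comparator_matrix(global_view, affiliate_view):
    flag = "<< differs" if differs else ""
    print(f"{name:20} {g:8} {aff:8} {flag}")
```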
In conclusion, we need to find ways of bringing integrated, interactive and distributed forecasting information into one system – enriching the data and providing decision makers with the narrative information that supports the forecast in a structured and navigable way.
We also have to not only live with forecast uncertainty but embrace it. No matter how many variables we introduce into algorithms or causal models, the forecast will be inaccurate to some degree. We have to broaden the debate: which future looks the most likely, what is the bandwidth of uncertainty – and what will be the impact of our new investment?
Then we can make informed, confident decisions.
And rather than trying to second guess what insights stakeholders need…
Just ask them.