Accepted Manuscript
Title: Looking back on looking forward
Authors: Patrick van der Duin, Martijn van der Steen
PII: S0016-3287(12)00045-6
DOI: 10.1016/j.futures.2012.03.003
Reference: JFTR 1731
To appear in: Futures
Please cite this article as: P. van der Duin, M. van der Steen, Looking back on looking
forward, Futures (2012), doi:10.1016/j.futures.2012.03.003
This is a PDF file of an unedited manuscript that has been accepted for publication.
As a service to our customers we are providing this early version of the manuscript.
The manuscript will undergo copyediting, typesetting, and review of the resulting proof
before it is published in its final form. Please note that during the production process
errors may be discovered which could affect the content, and all legal disclaimers that
apply to the journal pertain.
Introduction
Looking back on looking forward
Futures researchers are mainly involved in two activities. The first, of course, involves looking
towards the future. The second activity involves looking back at the past to see how people, in
particular futures researchers themselves, looked to the future and assessing to what extent reality
relates to the envisaged future(s). The first activity is interesting and important and is the core of the
work of futures researchers. The latter activity is not only interesting and important, but can also be
entertaining. It is amusing to think about all those wrong predictions. Bill Gates, for example,
reportedly claimed at one point that 640 kB of memory would be enough for anybody, and he initially
had little faith in the Internet, a medium he dismissed as hype. Ken Olson, founder and director of
Digital, made a similar mistake by claiming that there was no reason anyone would want a computer
in their home. However, the volume and seriousness of erroneous predictions not only cause immense
social and economic damage, they are also the reason that futures research has such a poor
reputation among many outsiders.
The more often we look back at former predictions, the less inclined we are to take current
studies of the future seriously. The funny mistakes of the past put the use of evaluation in a
problematic perspective. We call this the retrospectivity trap: looking back at past futures
research unintentionally imposes on those studies an undesirable and unproductive accuracy
perspective, which hurts the profession of looking to the future. At the same time, retrospective
exercises are inevitable and crucial for the further development of the field of futures research.
This uneasy relation between the present and the past of studies of the future confronts futures
researchers with a dilemma. On the one hand, looking back at former studies of the future is
productive and necessary; the profession of futures research improves by structurally and
systematically investigating its own performance and effects, and drawing lessons from that. On
the other hand, it has become clear that looking back not only leads to learning how to improve
studies of the future but also to 'bashing' the profession. This leads to the question of how we
can look back at former studies of the future in a way that is both systematic and productive.
This special issue deals with that dilemma.
The necessity of looking back
Futures researchers need to go through a great deal of trouble to explain what they are doing and
often tell people that it is as much about exploring the future as it is about making predictions.
Often, they like to refer to the speed with which developments create a future that is very different
from the past and the present. However, although that is a valid reason to conduct futures research,
it is not sufficient, at least not in cases where futures research fails to produce good and usable results.
The usefulness of scenarios is often defended by referring to the success with which they were used
by Shell in the 1970s and the South African government in the 1990s. Ironically, however, the
success at Shell was due to the fact that one of the predictions happened to come true, while the
South African government decided to pick one scenario and run with it. In both cases, the actual
application clashed with the scenario philosophy, yet both are seen as classic showcases of
futures research! The existence of an uncertain future alone is insufficient justification for looking
to the future. The quality of the results has to be up to scratch, and we feel that that quality is the
result of a rigorous and high-quality futures research process.
Looking back to contribute to predictive and explorative studies of the future
Evaluating futures studies is about more than satisfying our historical curiosity. Its real aim is to
examine their quality, success, and impact and to determine what we can learn from them. There is
a special relationship between quality, success, and impact. The success of a futures study refers to
the accuracy of its results, while its impact is determined by the extent to which it has helped an
organization make a good decision. In this case, a good decision means that the decision that has
been made based on the futures study is better than the one that would have been made without the
help of the futures study and that the organization has made a better decision than a competing
organization that has not conducted or used a futures study.
Evaluating the quality of a futures study is not simple. The evaluation depends, among other things,
on the nature of the method that has been used (for example, is it predictive or exploratory in
nature) and the goal for which it was conducted. The quality then depends on decisions that the
futures researcher has to make about aspects like time horizon, the geographical scope of the futures
study, and the way the results of the study are communicated. The other elements of the futures
study (like time horizon and geographical scope) can reasonably be deduced from the method being
used and the nature of the strategic and organizational decisions. This implies, correctly in our
view, that the method and goal of the study need to be consistent with its subsequent
implementation, which in turn means that the number of elements needed in a futures study is not
extremely high (or unlimited) but is determined by the method and goal of the study. On the other
hand, the sheer diversity of possible methods and goals makes it difficult to determine a
straightforward set of success factors. Although a contingency approach is to be recommended
because of this diversity of methods and goals, any evaluation approach is then bound to have a
certain level of complexity.
In addition, success and impact are not necessarily connected. In other words, a futures study that is
successful does not have to have a major impact if the organization in question decides to ignore it.
Organizations often want to decide for themselves what to do with the results of the study. And if the
results are not satisfactory, they will not be used, regardless of the success of the futures study.
There is also a positive side to the strict division between futures researcher and decision-maker,
because it allows the futures researcher to operate independently. However, in general, we assume
there is a positive relationship between the success of a futures study and its impact.
Summary: looking back on quality, success and impact
Based on the discussion presented above, it is possible to establish the following link between
quality, success and impact:
Figure 1: Quality, success and impact of a futures study.
Quality relates to the futures study itself and the decisions the futures researcher has made with regard
to the elements that make up the futures study (like time horizon and geographical scope); success
relates to how well the futures study manages to predict or explore the future. Prediction in turn
refers to the accuracy of the prediction, while exploration has to do with the extent to which
decision-makers have been surprised and inspired. Impact relates to the extent to which the futures
study has helped the organization to make the right decision. In this sense, a high-quality futures
study produces a successful outcome and has a positive impact on the organization. This reasoning
is based on the assumption that success and impact are not coincidences but the logical outcome of
a high-quality futures research process. In other words, we do not believe in the 'ugly but useful'
adage that golfers use to indicate that a badly hit ball can still end up in a favorable position (for
instance as a result of a lucky bounce). Although luck is always welcome, a professional futures
researcher has to shape the futures research process as best he or she can. But although we don't
'believe' in 'ugly but useful', that doesn't mean it doesn't happen. There can be impact without
quality and success, success without impact and quality, and quality without impact and success.
For instance, Figure 2 shows a possible relationship between (lack of) quality and (lack of) success:
Figure 2: possible relationships between the quality of the futures research process and its success.
In Figure 2, the bottom-left and top-right quadrants are logical from our perspective that the quality
of the futures research process determines its success. The top-left and bottom-right quadrants are
interesting anomalies. The top-left quadrant refers to futures studies that were carried out badly but
that somehow became a success, for instance in cases where a correct prediction was based on a faulty
somehow became a success, for instance in cases where a correct prediction was based on a faulty
analysis. We have labeled the bottom-right quadrant 'tragic' because, despite the high quality of the
futures study, its results were not translated into a (good) decision. This can be the fate of any
adviser or external expert.
Any evaluation of a futures study starts with an evaluation of the futures research process, because
that is considered an important factor in explaining the success and impact of a futures study. In a
sense, the quality of a futures study is a prediction of its success and impact. We call this 'forward
evaluation' (or ex-ante evaluation). Making the right decisions with regard to the subject of a futures
study has to create the conditions for the success and impact of the futures study.
The link from quality to success to impact reflects the reasoning presented above, while the links
from impact to success to quality indicate that the way a futures study was carried out, and its
quality, is often assessed on the basis of the impact that the study has or has not had. We call this
'backward evaluation' (or ex post evaluation). It may be clear that most evaluations are of the latter
category, often because the success and impact of the futures studies in question are unsatisfactory.
These kinds of unsuccessful studies may prompt an
evaluation, but often erroneous decisions are also linked to possible mistakes that have been made
in the design and implementation of the futures studies.
The precarious relationship between the success and impact of futures studies is reflected in the
well-known concepts of self-fulfilling or self-denying prophecies. Self-denying prophecies are
futures studies whose impact has a negative influence on their success. In the case of self-fulfilling
prophecies, an attempt is made to increase the success of the futures study through the intended
impact. The relationship between success and impact is not always a straightforward linear one
(from success to impact), but often there is also a cyclical element.
Figure 3 shows the connection between self-denying and self-fulfilling prophecies on the one hand,
and intentional and unintentional effects on the other. In principle, the relationship between success
and impact is made
deliberately by futures researchers and decision-makers. A bad prospect is intended to scare people
and make them modify their behavior in an attempt to avoid the scary future with which they are
presented. Examples of this are the classic doomsday scenarios. J.F. Kennedy's famous vision about
putting a man on the moon is an example of an attractive future that inspired many people to make
it come true. However, the relationship between success and impact is not always a conscious one,
possibly because a futures study is only intended to map the future as accurately as possible without
wanting to influence a decision-maker or other potential stakeholders. An example of this is the
famous report of the Club of Rome (although one might argue about whether or not the grim future
it presented has been completely avoided). An example of a self-fulfilling prophecy that came
true unintentionally is Peak Oil, where the claim that oil production had reached its peak inspired
people to look for other sources of energy, as a result of which oil production stopped growing.
Figure 3: possible relationships between self-denying and self-fulfilling prophecies, and intentional
and unintentional effects.
Looking to the future is riddled with paradoxes, like the paradox that we want to gather knowledge
about the future based on historical information. This paradox can only be 'solved' by assuming that
historical structures will continue to exist in the future. This paradox also applies to the evaluation
of futures studies. Evaluations are often designed to help us avoid making the same mistakes in the
future and to keep the good things. However, much as it is impossible to predict the future based
on historical information alone, it is very doubtful whether lessons learned from futures studies
conducted in the past will stand us in good stead with futures studies that have yet to be conducted.
This is something that we should always keep in mind when evaluating futures studies and using
the outcomes to make recommendations for the future.
Evaluating the future: a few empirical studies
After exploring some theoretical considerations about the evaluation of futures research, let us
briefly discuss a number of empirical evaluation studies.
Evaluations of futures studies may originate from academia or from popular science. Most
evaluations focus on predictive futures research, we suspect because predictions are fairly easy to
evaluate and because the large number of wrong predictions appeals to our imagination. One of the
first studies focusing on evaluating futures studies is William Ascher's Forecasting: an appraisal for
policymakers and planners (1978) [1]. Ascher's main finding is that people who predict the future
suffer from what he calls ‘assumption drag'. Although new values are added to the predictive model,
the assumptions that form the structure of the models are not adjusted and are still based on
structures that were valid in the past. However, it is more than likely that those structures will not be
valid in the future, which means that assumption drag will lead to erroneous predictions. In his book
Megamistakes, Steven Schnaars (1989) [2] emphasizes that many predictions suffer from a hefty
overdose of technological optimism. It is assumed that whatever is technologically possible will in
fact be applied in the future. However, the adoption of new technologies has a strong social
component. People need time to become accustomed to new technologies and will often make a
cost-benefit analysis before making their decisions. While Ascher warns us about clinging to
principles that applied in the past, because they make it harder to see what is new in the future,
Schnaars argues that the speed with which changes take place, in particular when it comes to
technology, should not be exaggerated and that old structures may be more tenacious than we tend
to assume. The Fortune Sellers by William Sherden (1997) [3] makes it even clearer that predicting
the future is a tricky affair that is rarely successful, whatever the area of investigation (finance,
meteorology). Nevertheless, predicting the future has become a billion dollar industry. It may be the
only industry where there is money to be made from products and services that have little value.
What the future holds. Insights from social science, edited by Richard N. Cooper and Richard
Layard [4], also contains an overview of predictions in various domains, including demographics,
climate change and labour, and again the message is that a great many of them got it wrong. A study
by Orrin H. Pilkey and Linda Pilkey-Jarvis [5] in the area of environmental subjects like fisheries,
plants and coastal management also shows that making accurate predictions is difficult. In fact, the
subtitle of their publication is: Why environmental scientists can't predict the future.
Of course, the question is why all those predictions miss the mark so often. Although we lack the
space to address this question in detail here, one of the reasons has to be that any predictive model,
no matter how good or complex it is, is based on historical data and patterns and the assumption
that they will more or less be present in the future. However, it is precisely the inherent uncertainty
of the future that makes the future different from the past and the present (a prediction in itself …),
which in turn makes the predictive value of historical data and patterns limited. People also often
refer to chaos theory, which states that tiny deviations from starting values may affect the course of
the future in major ways, which means that having a clear picture of the past in itself has little use
for the future, and in addition the required precision of historical data is an unattainable goal.
The analyses by Ascher [1] and Schnaars [2] show that the truth about the future lies somewhere in
the middle. But it hardly comes as a surprise that neither underestimation nor overestimation is very
helpful. Gartner's famous hype cycle places overestimation and underestimation in time by arguing
that many predictions first overestimate and then underestimate future reality.
In addition to the question as to which factors explain the accuracy of certain predictions, it is also
interesting to see who manages to predict the future accurately and who does not. According to
Sherden [3], we should not expect too much from stock exchange analysts: they often become rich
from making a single accurate prediction, and their initial success is usually followed by a growing
inability to predict what the markets will do with any degree of accuracy. According
to Tetlock [6] (see also Gardner [7]), we should not bet on 'hedgehogs', but on 'foxes'. Hedgehogs
know one major thing, but foxes know many things. When it comes to predicting the future, the
generalists beat the specialists. Specialists tend to project the one thing they do know on everything
they encounter, which is unwise in an uncertain and diverse society. Generalists have a greater
ability for learning and are good at combining information. So it is not only the data, but especially
the kind of predictor that offers us a clue as to the accuracy of a prediction. We should add to this
that, when evaluating the accuracy of predictions, it may be better not to do so oneself, as Aldous
Huxley did with his Brave New World [8], because that may get in the way of objectivity.
The evaluations discussed above focus on predictions. There are evaluations that use a different
approach, but they are far outnumbered by those that focus on predictions. An example of an
evaluation of a more exploratory kind of futures study can be found in this special issue. This study
looks back at a futures study that was carried out by the WRR in 1977 [9]. Although this study was
initially predictive in nature, a decision was later made, based on the advice of third parties, to use
two scenarios. This futures study was evaluated 25 years later. One of the results was that it is wise
to carry out a preliminary 'meta-exploration' first, which means that, before proceeding with the
actual futures study, one should first determine to what extent the subject under investigation is
likely to be sufficiently relevant in the future. However, explorations (and predictions) are often
magnifications of issues that are already relevant today. A meta-exploration, difficult and paradoxical
though it may be (after all, it involves an exploration into the question whether or not the subject of
the exploration has any kind of future …), is important to ensure that the proper subjects are being
investigated. In addition, the evaluation study showed that more attention needs to be paid to the
time horizon of an exploration. The different subjects of the 1977 WRR futures study
(including labor, climate, justice, energy and the economy) were all examined using a 25-year time
horizon, even though the subjects in question all have different dynamics.
It turns out that evaluating is not an unknown activity for futures researchers. Nevertheless, we
found it necessary to ask futures researchers to make a contribution to this area, because we still
need more knowledge to improve the theoretical foundation and practical application of the
evaluation of futures studies. In this special issue, we offer six contributions. The paper by
Bouwman, Haaker and De Reuver is aimed at evaluating the high expectations with regard to
developments in the ICT industry. Although it is in line with the evaluations of predictive futures
studies discussed earlier, the article also formulates conclusions and recommendations with regard
to the development and application of methods of futures research. Rijkens-Klomp discusses
futures studies in the local public domain. Her unit of analysis, unlike that of many evaluation
studies, is not the producer of the futures study (in other words, the futures researcher) but its user.
In her evaluation, Rijkens-Klomp draws a distinction between process-related and organizational
factors. In contrast, Rohrbeck looks at futures research in the business community and evaluates not
the futures study itself but the value it may generate for the companies involved. On the basis of 20
case studies, four success factors for corporate foresight are identified. Splint and Van Wijck
evaluate a specific project in the government domain, where scenarios are linked to a so-called
signpost method, which is used to test the content of the scenarios against current developments,
making it possible to expand the scenario method and improve the follow-up of the scenarios.
Piirainen, Gonzalez and Bragge have attempted to design a systematic framework for evaluating
futures explorations, mapping the entire futures exploration process by including input, throughput
(process), output and impact. Finally, Van der Steen and Van Twist focus on the (possible) impact of
futures research on policy-making in a governmental setting. In conclusion, we offer an analysis of
how the contributions to this issue help us in evaluating and understanding policy and how
decisions happen.
Patrick van der Duin
Delft University of Technology, faculty of Technology, Policy and Management
Jaffalaan 5
2628 BX Delft, the Netherlands
0031-15-2781146
[email protected]
and
Martijn van der Steen
The Netherlands School of Public Administration
Lange Voorhout 17
2514 EB The Hague, the Netherlands
0031-70-3024910
[email protected]
References
[1] W. Ascher, Forecasting: an appraisal for policymakers and planners, Johns Hopkins University
Press, Baltimore, 1978
[2] S.P. Schnaars, Megamistakes: forecasting and the myth of rapid technological change, The Free
Press, New York, 1989
[3] W.A. Sherden, The fortune sellers. The big business of buying and selling predictions, John
Wiley & Sons, New York, 1998
[4] R.N. Cooper and R. Layard, What the future holds. Insights from social science, MIT Press,
Cambridge, Massachusetts, 2003
[5] O.H. Pilkey and L. Pilkey-Jarvis, Useless arithmetic. Why environmental scientists can’t predict
the future, Columbia University Press, New York, 2007
[6] P.E. Tetlock, Expert political judgment. How good is it? How can we know?, Princeton
University Press, New Jersey, 2005
[7] D. Gardner, Future babble. Why expert predictions are next to worthless and you can do better,
Dutton, New York, 2011
[8] A. Huxley, Brave New World revisited, Vintage, 2004
[9] P.A. van der Duin, C.A. Hazeu, P. Rademaker & J.S. Schoonenboom, The future revisited.
Looking back at ‘The next 25 years’ by the Dutch Scientific Council for Governmental Policy
(WRR), Futures 38 (2006), 235-246