
Looking back on looking forward

Futures researchers are mainly involved in two activities. The first, of course, involves looking towards the future. The second involves looking back at the past to see how people, in particular futures researchers themselves, looked to the future, and assessing to what extent reality relates to the envisaged future(s). The first activity is interesting and important and is the core of the work of futures researchers. The latter activity is not only interesting and important, but can also be entertaining. It is amusing to think about all those wrong predictions. Bill Gates, for example, at one point felt that 640 kB of memory would be enough for anyone, and he initially had little faith in the Internet, a medium he described as hype. Ken Olsen, founder and director of Digital, made a similar mistake by claiming that there is no reason why people would want to buy a computer. However, the volume and seriousness of erroneous predictions not only cause immense social and economic damage, they are also the reason that futures research has such a poor reputation among many outsiders. The more often we look back at former predictions, the less we are inclined to take current studies of the future seriously. The funny mistakes of the past thus put evaluation in a problematic perspective. We call this the retrospectivity trap: looking back at past futures research unintentionally imposes an undesirable and unproductive accuracy perspective on studies of the future, which hurts the profession of looking to the future. At the same time, retrospective exercises are inevitable and crucial for the further development of the field of futures research.

This uneasy relation between the present and the past of studies of the future presents futures researchers with a dilemma. On the one hand, looking back at former studies of the future is productive and necessary; the profession of futures research improves by structurally and systematically investigating its own performance and effects, and drawing lessons from that. On the other hand, it has become clear that looking back leads not only to learning how to improve studies of the future but also to 'bashing' the profession. This raises the question of how we can look back at former studies of the future in a way that is both systematic and productive. This special issue deals with that dilemma.

The necessity of looking back

Futures researchers have to go to a great deal of trouble to explain what they are doing, and often tell people that it is as much about exploring the future as it is about making predictions. Often, they like to refer to the speed with which developments create a future that is very different from the past and the present. However, although that is a valid reason to conduct futures research, it is not sufficient, at least in cases where futures research fails to produce good and usable results. The usefulness of scenarios is often defended by referring to the success with which they were used by Shell in the 1970s and the South African government in the 1990s. Ironically, however, the success at Shell was due to the fact that one of the predictions happened to come true, while the South African government decided to pick one scenario and run with it. In both cases, the actual application clashed with the scenario philosophy, yet both are seen as classic showcases of futures research! The existence of an uncertain future alone is insufficient justification for looking to the future.
The quality of the results has to be up to scratch, and we feel that that quality is the result of a rigorous, high-quality futures research process.

Looking back to contribute to predictive and explorative studies of the future

Evaluating futures studies is about more than satisfying our historical curiosity. Its real aim is to examine their quality, success, and impact and to determine what we can learn from them. There is a special relationship between quality, success, and impact. The success of a futures study refers to the accuracy of its results, while its impact is determined by the extent to which it has helped an organization make a good decision. In this case, a good decision means that the decision made on the basis of the futures study is better than the one that would have been made without its help, and that the organization has made a better decision than a competing organization that has not conducted or used a futures study.

Evaluating the quality of a futures study is not simple. The evaluation depends, among other things, on the nature of the method that has been used (for example, is it predictive or exploratory in nature?) and the goal for which it was conducted. The quality then depends on decisions that the futures researcher has to make about aspects like the time horizon, the geographical scope of the futures study, and the way the results of the study are communicated. The other elements of the futures study (like time horizon and geographical scope) can reasonably be deduced from the method being used and the nature of the strategic and organizational decisions. This implies, correctly in our view, that the method and goal of the study need to be consistent with its subsequent implementation, which in turn means that the number of elements needed in a futures study is not extremely high (let alone unlimited), but is determined by the method and goal of the study. On the other hand, the sheer diversity of possible methods and goals makes it difficult to determine a straightforward set of success factors. Although a contingency approach is to be recommended, the above-mentioned diversity of methods and goals means that any evaluation approach is bound to have a certain level of complexity.

In addition, success and impact are not necessarily connected. In other words, a successful futures study does not have to have a major impact if the organization in question decides to ignore it. Often, there is a desire to decide independently what to do with the results of the study. And if the results are not satisfactory, they will not be used, regardless of the success of the futures study. There is also a positive side to the strict division between futures researcher and decision-maker, because it allows the futures researcher to operate independently. However, in general, we assume there is a positive relationship between the success of a futures study and its impact.
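To make the notion of 'success as accuracy' concrete, here is a minimal sketch of how the predictive side of a study could be scored ex post. The metric (mean absolute percentage error), the threshold, and the numbers are our own illustrative assumptions, not anything proposed in the studies discussed in this issue:

```python
# Illustrative sketch: scoring the "success" of a predictive futures study
# as forecast accuracy. Metric, threshold and data are hypothetical.

def mape(predicted: list[float], realized: list[float]) -> float:
    """Mean absolute percentage error between forecasts and outcomes."""
    assert len(predicted) == len(realized) and realized, "need paired, non-empty series"
    return sum(abs(p - r) / abs(r) for p, r in zip(predicted, realized)) / len(realized)

# Hypothetical example: values forecast 25 years ago versus realized values.
forecast = [120.0, 45.0, 3.2]
outcome = [150.0, 44.0, 2.1]

error = mape(forecast, outcome)
print(f"MAPE: {error:.1%}")  # 24.9% for these numbers
print("successful" if error < 0.20 else "unsuccessful")  # here: unsuccessful
```

Note that such a score captures success only, not impact: a study can score well on accuracy and still be ignored by the organization that commissioned it.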
Summary: looking back on quality, success and impact

Based on the discussion presented above, it is possible to establish the following link between quality, success and impact (Fig. 1). Quality relates to the futures study itself and the decisions the futures researcher has made with regard to the elements that make up the futures study (like time horizon and geographical scope); success relates to how well the futures study manages to predict or explore the future. For prediction, success refers to the accuracy of the prediction, while for exploration it has to do with the extent to which decision-makers have been surprised and inspired. Impact relates to the extent to which the futures study has helped the organization to make the right decision. In this sense, a high-quality futures study produces a successful outcome and has a positive impact on the organization.

Fig. 1. Quality, success and impact of a futures study.

This reasoning is based on the assumption that success and impact are not coincidences but the logical outcome of a high-quality futures research process. In other words, we do not believe in the 'ugly but useful' adage that golfers use to indicate that a badly hit ball can still end up in a favorable position (for instance as a result of a lucky bounce). Although luck is always welcome, a professional futures researcher has to shape the futures research process as best he or she can. But although we do not 'believe' in 'ugly but useful', that does not mean that it does not happen. There can be impact without quality and success, success without impact and quality, and quality without impact and success. For instance, Fig. 2 shows a possible relationship between (lack of) quality and (lack of) success.

Fig. 2. Possible relationships between the quality of the futures research process and its success.

In Fig. 2, the bottom-left and top-right quadrants are logical from our perspective that the quality of the futures research process determines its success. The top-left and bottom-right quadrants are interesting anomalies. The top-left quadrant refers to futures studies that were carried out badly but that somehow became a success, for instance in cases where a correct prediction was based on a faulty analysis. We have labeled the bottom-right quadrant 'tragic' because, despite the high quality of the futures study, its results were not translated into a (good) decision. This can be the fate of any adviser or external expert.

Any evaluation of a futures study starts with an evaluation of the futures research process, because that is considered an important factor in explaining the success and impact of a futures study. In a sense, the quality of a futures study is a prediction of its success and impact. We call this 'forward evaluation' (or ex ante evaluation). Making the right decisions with regard to the subject of a futures study has to create the conditions for the success and impact of the futures study. The links from quality to success to impact reflect the reasoning presented above, while the links from impact to success to quality indicate that, often on the basis of the impact a futures study may or may not have had, it is determined how the futures study was carried out and what its quality is or has been. We call this 'backward evaluation' (or ex post evaluation).
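As a toy aid to reading Fig. 2, the four combinations can be written out in a few lines. The encoding is our own; the 'tragic' label and the two 'logical' cells follow the text above, while 'lucky' is our own shorthand for the unnamed top-left anomaly:

```python
# Toy encoding of the Fig. 2 quadrants: process quality versus outcome success.

def fig2_quadrant(high_quality: bool, successful: bool) -> str:
    if high_quality and successful:
        return "logical: rigorous process, successful outcome"
    if not high_quality and not successful:
        return "logical: poor process, unsuccessful outcome"
    if not high_quality and successful:
        return "lucky: correct prediction despite a faulty analysis"
    return "tragic: high-quality process, yet without success"

print(fig2_quadrant(high_quality=True, successful=False))  # the adviser's fate
```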
It may be clear that most evaluations are of the latter category, often because the success and impact of the futures studies in question are unsatisfactory. These kinds of unsuccessful studies may prompt an evaluation, but often erroneous decisions are also linked to possible mistakes made in the design and implementation of the futures studies. The precarious relationship between the success and impact of futures studies is reflected in the well-known concepts of self-fulfilling and self-denying prophecies. Self-denying prophecies are futures studies whose impact has a negative influence on their success. In the case of self-fulfilling prophecies, an attempt is made to increase the success of the futures study through the intended impact. The relationship between success and impact is thus not always a straightforward linear one (from success to impact); often there is also a cyclical element. Fig. 3 shows the connection between self-denying and self-fulfilling on the one hand, and intentional and unintentional on the other.

Fig. 3. Possible relationships between self-denying and self-fulfilling prophecies, and intentional and unintentional.

In principle, the relationship between success and impact is made deliberately by futures researchers and decision-makers. A bad prospect is intended to scare people and make them modify their behavior in an attempt to avoid the scary future with which they are presented. Examples of this are the classic doomsday scenarios. J.F. Kennedy's famous vision of putting a man on the moon is an example of an attractive future that inspired many people to make it come true. However, the relationship between success and impact is not always a conscious one, possibly because a futures study is only intended to map the future as accurately as possible without wanting to influence a decision-maker or other potential stakeholders. An example of this is the famous report of the Club of Rome (although one might argue about whether or not the grim future they presented has been completely avoided). An example of a self-fulfilling prophecy that came true unintentionally is Peak Oil, where the statement that oil production had reached its peak inspired people to look for other sources of energy, as a result of which oil production stopped growing.

Looking to the future is riddled with paradoxes, like the paradox that we want to gather knowledge about the future based on historical information. This paradox can only be 'solved' by assuming that historical structures will continue to exist in the future. It also applies to the evaluation of futures studies. Evaluations are often designed to help us avoid making the same mistakes in the future and to keep the good things. However, much as it is impossible to predict the future based on historical information alone, it is very doubtful that lessons learned from futures studies conducted in the past will stand us in good stead with futures studies that have yet to be conducted. This is something we should always keep in mind when evaluating futures studies and using the outcomes to make recommendations for the future.

Evaluating the future: a few empirical studies

After exploring some theoretical considerations about the evaluation of futures research, let us briefly discuss a number of empirical evaluation studies.
Evaluations of futures studies may originate from academia or from popular science. Most evaluations focus on predictive futures research, we suspect because predictions are fairly easy to evaluate and because the large number of wrong predictions appeals to our imagination. One of the first studies focusing on the evaluation of futures studies is William Ascher's Forecasting: An Appraisal for Policymakers and Planners (1978) [1]. Ascher's main finding is that people who predict the future suffer from what he calls 'assumption drag'. Although new values are added to the predictive model, the assumptions that form the structure of the model are not adjusted and are still based on structures that were valid in the past. However, it is more than likely that those structures will not be valid in the future, which means that assumption drag will lead to erroneous predictions. In his book Megamistakes, Steven Schnaars [2] emphasizes that many predictions suffer from a hefty overdose of technological optimism. It is assumed that whatever is technologically possible will in fact be applied in the future. However, the adoption of new technologies has a strong social component: people need time to become accustomed to new technologies and often make a cost-benefit analysis before making their decisions. While Ascher warns us about clinging to principles that applied in the past, because they make it harder to see what is new in the future, Schnaars argues that the speed with which changes take place, in particular when it comes to technology, should not be exaggerated and that old structures may be more tenacious than we tend to assume. The Fortune Sellers by William Sherden [3] makes it even clearer that predicting the future is a tricky affair that is rarely successful, whatever the area of investigation (finance, meteorology). Nevertheless, predicting the future has become a billion-dollar industry. It may be the only industry where there is money to be made from products and services that have little value. What the Future Holds: Insights from Social Science, edited by Cooper and Layard [4], also contains an overview of predictions in various domains, including demographics, climate change and labour, and again the message is that a great many of them got it wrong. A study by Pilkey and Pilkey-Jarvis [5] on environmental subjects like fishery, plants and coastal management also shows that making accurate predictions is difficult. In fact, the subtitle of their publication is: Why Environmental Scientists Can't Predict the Future.

Of course, the question is why all those predictions miss the mark so often. Although we lack the space to address this question in detail here, one of the reasons has to be that any predictive model, no matter how good or complex it is, is based on historical data and patterns and the assumption that they will more or less be present in the future. However, it is precisely the inherent uncertainty of the future that makes the future different from the past and the present (a prediction in itself...), which in turn makes the predictive value of historical data and patterns limited.
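The next point, sensitivity to initial conditions, can be made concrete with a few lines of code. A minimal sketch, using the textbook logistic map in its chaotic regime rather than any model from the studies cited above: two histories that agree to six decimal places diverge completely within a few dozen steps.

```python
# Sensitivity to initial conditions in the logistic map x_{n+1} = r*x_n*(1-x_n),
# with r = 4.0 (its chaotic regime). A perturbation in the sixth decimal of the
# starting value grows to order one, illustrating why ever-more-precise
# historical data buys little long-range predictive power in such systems.

def logistic_trajectory(x0: float, steps: int, r: float = 4.0) -> list[float]:
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000, 50)
b = logistic_trajectory(0.400001, 50)  # perturbed starting value

for n in (0, 10, 25, 50):
    print(f"step {n:2d}: |difference| = {abs(a[n] - b[n]):.6f}")
# The gap grows from 1e-6 at step 0 to order 1 well before step 50.
```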
People also often refer to chaos theory, which states that tiny deviations from starting values may affect the course of the future in major ways. This means that having a clear picture of the past is in itself of little use for predicting the future, and, in addition, that the required precision of historical data is an unattainable goal. The analyses by Ascher [1] and Schnaars [2] show that the truth about the future lies somewhere in the middle. But it hardly comes as a surprise that neither underestimation nor overestimation is very helpful. Gartner's famous hype cycle places overestimation and underestimation in time by arguing that many predictions first overestimate and then underestimate future reality.

In addition to the question as to which factors explain the accuracy of certain predictions, it is also interesting to see who manages to predict the future accurately and who does not. According to Sherden [3], we should not expect too much from stock exchange analysts, who often become rich from making a single accurate prediction and whose initial success is usually followed by a growing inability to predict what the markets will do with any degree of accuracy. According to Tetlock [6] (see also Gardner [7]), we should not bet on 'hedgehogs' but on 'foxes'. Hedgehogs know one major thing, but foxes know many things. When it comes to predicting the future, the generalists beat the specialists. Specialists tend to project the one thing they do know onto everything they encounter, which is unwise in an uncertain and diverse society. Generalists have a greater ability to learn and are good at combining information. So it is not only the data, but especially the kind of predictor, that offers us a clue as to the accuracy of a prediction. We should add that, when evaluating the accuracy of predictions, it may be better not to do so oneself, as Aldous Huxley did with his Brave New World Revisited [8], because that may get in the way of objectivity.

The evaluations discussed above focus on predictions. There are evaluations that take a different approach, but they are far outnumbered by those that focus on predictions. An example of an evaluation of a more exploratory kind of futures study can be found in this special issue. This study looks back at a futures study that was carried out by the WRR in 1977 [9]. Although that study was initially predictive in nature, a decision was later made, based on the advice of third parties, to use two scenarios. The futures study was evaluated 25 years later. One of the results was that it is wise to carry out a preliminary 'meta-study' first, which means that, before proceeding with the actual futures study, one should first determine to what extent the subject under investigation is likely to remain sufficiently relevant in the future. However, studies of the future (and predictions) are often magnifications of issues that are already relevant today. A meta-study, difficult and paradoxical though it may be (after all, it involves a study of the future into the question of whether or not the subject of the study of the future has any kind of future...), is important to ensure that the proper subjects are being investigated.
In addition, the evaluation study showed that more attention needs to be paid to the time horizon of a study of the future. The different subjects of the 1977 WRR futures study (including labor, climate, justice, energy and the economy) were all examined using a 25-year time horizon, even though the subjects in question all have different dynamics.

It turns out that evaluating is not an unknown activity for futures researchers. Nevertheless, we found it necessary to ask futures researchers to contribute to this area, because we still need more knowledge to improve the theoretical foundation and practical application of the evaluation of futures studies. In this special issue, we offer six contributions. The paper by Bouwman, Haaker and De Reuver is aimed at evaluating the high expectations with regard to developments in the ICT industry. Although it is in line with the evaluations of predictive futures studies discussed earlier, the article also formulates conclusions and recommendations with regard to the development and application of methods of futures research. Rijkens-Klomp discusses futures studies in the local public domain. Her unit of analysis, unlike that of many evaluation studies, is not the producer of the futures study (in other words, the futures researcher) but its user. In her evaluation, Rijkens-Klomp draws a distinction between process-related and organizational factors. Rohrbeck, in contrast, looks at futures research in the business community and evaluates not the futures study itself but the value it may generate for the companies involved. On the basis of 20 case studies, he identifies four success factors for corporate foresight. Splint and Van Wijck evaluate a specific project in the government domain, in which scenarios are linked to a so-called signpost method that is used to test the content of the scenarios against current developments, making it possible to expand the scenario method and improve the follow-up of the scenarios. Piirainen, Gonzalez and Bragge have attempted to design a systematic framework for evaluating futures research, mapping the entire futures research process in terms of input, throughput (process), output and impact. Finally, Van der Steen and Van Twist focus on the (possible) impact of futures research on policy-making in a governmental setting. In conclusion, we offer an analysis of how the contributions to this issue help us in evaluating futures research and in understanding how policy and decisions happen.

References

[1] W. Ascher, Forecasting: An Appraisal for Policymakers and Planners, Johns Hopkins University Press, Baltimore, 1978.
[2] S.P. Schnaars, Megamistakes: Forecasting and the Myth of Rapid Technological Change, The Free Press, New York, 1989.
[3] W.A. Sherden, The Fortune Sellers: The Big Business of Buying and Selling Predictions, John Wiley & Sons, New York, 1998.
[4] R.N. Cooper, R. Layard (Eds.), What the Future Holds: Insights from Social Science, MIT Press, Cambridge, Massachusetts, 2003.
[5] O.H. Pilkey, L. Pilkey-Jarvis, Useless Arithmetic: Why Environmental Scientists Can't Predict the Future, Columbia University Press, New York, 2007.
[6] P.E. Tetlock, Expert Political Judgment: How Good Is It? How Can We Know?, Princeton University Press, Princeton, New Jersey, 2005.
[7] D. Gardner, Future Babble: Why Expert Predictions Are Next to Worthless and You Can Do Better, Dutton, New York, 2011.
[8] A. Huxley, Brave New World Revisited, Vintage, 2004.
[9] P.A. van der Duin, C.A. Hazeu, P. Rademaker, J.S. Schoonenboom, The future revisited. Looking back at 'The next 25 years' by the Dutch Scientific Council for Governmental Policy (WRR), Futures 38 (2006) 235–246.

Patrick van der Duin*
Delft University of Technology, Faculty of Technology, Policy and Management, Jaffalaan 5, 2628 BX Delft, The Netherlands
E-mail address: [email protected]
*Corresponding author. Tel.: +31 15 2781146.

Martijn van der Steen1
The Netherlands School of Public Administration, Lange Voorhout 17, 2514 EB The Hague, The Netherlands
E-mail address: [email protected]
1 Tel.: +31 70 3024910.