Information Technology Evaluation: Is It Different?
P. POWELL
Journal of the Operational Research Society (1992)
All investment decisions are problematic. The IT community seems to shy away from evaluation of its
investments. This paper examines information technology investment in order to assess if it is radically
different from other investment decisions. The lack of formal evaluation of IT projects may be due, not
to a deficiency in the tools available to the evaluator, but to other factors. These factors are appraised
and possible ways forward considered.

Correspondence: P. Powell, Department of Accounting and Management Science, University of Southampton, Highfield, Southampton SO9 5NH.
INTRODUCTION
WHY INVEST?
Earl3 suggests four reasons for using information technology as a strategic resource. These are:
to gain competitive advantage; to improve productivity and performance; to enable new ways
of managing and organizing; and to develop new business. Each of these has some economic
underpinning. That is, presumably the organization is investing in information technology in
order to increase its net worth. Overlaying this may be the actions of sub-groups of the organization
trying to acquire non-monetary gains for themselves, but even here there is still a need for a
set of metrics with which to evaluate the investment. If this is the case, the lack of usable and
used techniques seems anomalous. Coombs et al.4, looking at the economics of technical change,
comment, 'The trend then, can be seen as an attempt by firms partly to internalize and control the
potential benefits of technological advances, rather than being a victim of them'. This desire for
control or self-direction presents an alternative view of why an organization might wish to invest
in new technology.
It is necessary to bear in mind the scale of information technology investment which organizations are currently making. Weill and Olson5 quote a figure of 2% of revenue as a mean level of expenditure in 1983. They also argue that this is likely to be an understatement due to the decentralized nature of organizations and the purchasing of end-user equipment from sub-unit revenue rather than capital sources. Evidence for this comes from Plunket6 who estimates that 40% of total IT hardware spending is on personal computing items, likely not to be recorded as a capital expenditure. Price7 offers a figure of 7% of turnover but this is not substantiated; alternatively Sheppard8 quotes an IT spend of 14% of turnover. Whatever the exact figure, it appears that large amounts of discretionary expenditure are being allocated to information technology; much of it, it will be argued, without formal evaluation. The investment in IT should be compared to a level of 3.1% of turnover spent on research and development (Parker9) and 1.2% on advertising (Buzzell et al.10), rising to 6-12% for high-value consumer goods (Kotler11).
PROBLEMS OF IT APPRAISAL
There is no shortage of writers in the IT field who have tackled the problematic task of computer
system investment appraisal. Most are of one opinion: that the costs and benefits associated with computer systems are difficult to quantify. For instance, McRae12, as long ago as 1970, summed up
the mood of IT evaluators concisely, 'The computer is a difficult investment to evaluate because
the income from the computer is not as clearly defined as it is with other investments'. Others
concur: whilst acknowledging the problems of defining the costs associated with information technology, they admit that defining or quantifying the benefits is of greater difficulty.
There are probably very few investments that are easy to evaluate. The well-known,
but possibly not well-used, methodologies for project/investment appraisal rely on assumptions
about a number of possibly interrelated factors. Beyond the simple methodologies lie the areas of
risk assessment and sensitivity analysis. There is a need to distinguish between the existence of these
methodologies and their practical application. Thus, it may be that suitable techniques exist but
are not applied.
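The calculations these methodologies rest on are themselves straightforward; the difficulty lies in the inputs. As a minimal sketch, assuming wholly invented cash flows for a hypothetical IT project, the standard net present value and payback computations, with a crude sensitivity sweep over the discount rate, might look as follows.

```python
# A minimal sketch of standard appraisal arithmetic: net present value (NPV),
# payback period and a simple sensitivity sweep over the discount rate.
# The cash flows are invented purely for illustration.

def npv(rate, cash_flows):
    """Net present value; cash_flows[0] is the initial outlay (negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def payback_period(cash_flows):
    """Number of periods until cumulative (undiscounted) cash flow turns non-negative."""
    cumulative = 0.0
    for t, cf in enumerate(cash_flows):
        cumulative += cf
        if cumulative >= 0:
            return t
    return None  # the outlay is never recovered

flows = [-100_000, 30_000, 35_000, 40_000, 45_000]  # hypothetical IT project

print(f"Payback: {payback_period(flows)} years")
for rate in (0.05, 0.10, 0.15, 0.20):  # how sensitive is the decision to the rate?
    print(f"NPV at {rate:.0%}: {npv(rate, flows):,.0f}")
```

Even this toy example changes its accept/reject signal between a 15% and a 20% discount rate, which is precisely where risk assessment and sensitivity analysis earn their keep.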
Current evaluation techniques are an outgrowth of the traditional cost-benefit methodologies or
are the application of standard accounting techniques to the problem. Zakierski1 dichotomizes
these methods into objective and subjective techniques. Objective measures, generally older, seek
to quantify system inputs and outputs in order to attach values to the items. Subjective methods
acknowledge the frailty of the values so computed and rely instead on the attitudes and opinions
of users and system builders. Certain of the suggested models combine the two approaches (see
Tables 1 and 2). However, these seem to be of the form, 'If you can't justify your system objectively,
then use subjective techniques'. The underlying premise is that the system is worthwhile; it is just difficult to show that this is so.

Key to Tables 1 and 2 (the tables themselves are not reproduced here): 1 Results presented in financial terms; 2 Extensive use of questionnaires in procedure; 3 Probabilities incorporated into the analysis; 4 Large, varied range of personnel involved; 5 Can be used ex ante; 6 Can be used ex post; 7 Can be used continuously; 8 Measures operational performance parameters.
OBJECTIVE METHODS
Quantitative techniques (see Table 1) endeavour to categorize the costs associated with a system
or proposed system. These costs may relate to the functions of the system, to those involved in the
system or to the life-cycle of the system. It is hoped that by careful cost categorization all sources of cost can be identified, and quantified, in a reasonably robust manner. A similar set
of activities is advocated for the attribution of values to benefits.
An example of such a technique is SESAME developed by IBM and described by Lincoln31.
This method attempts to identify all significant relevant costs and the corresponding system
benefits, by asking users how they would manage without the system. To do this the author claims
that questionnaires 'based on a proven standard' are employed. Despite this 'proven standard',
it is interesting to note that IBM has other methodologies (that used by their ATT Group, for
instance) which claim to do the same task. The results Lincoln describes indicate that two-thirds
of all applications break even in less than one year and 80% had positive returns on investment.
Evans32, reporting on a Norton Nolan study, states that returns of 10% can be generated from investment in the 'personal computing phase', 200-500% in the 'connected work group phase' and 1000% in the final 'business transformation phase'. If such startling results are accurate then two
plausible hypotheses follow. Either no type of analysis need be undertaken, since any investment
in information technology is almost bound to be worthwhile, so why expend effort proving what
you already know? Alternatively, the subjective methods are a waste of time since one can easily
point to objective quantifiable benefits to justify IT investment. All that is required is for firms
that need to evaluate expenditure in this field to use a SESAME-type technique. Unless, that is,
there are political motives for engaging in subjective analysis in order, perhaps, to show the
widespread impact of IT and secure even greater funding.
Williams and Scott33 provide a still interesting, yet dated, set of case studies on investment appraisal which serves as a counter-example to the above. Two of these, dealing with computer installation, point to the wide discrepancies between ex ante and ex post returns: the predicted savings failed to materialize. In one case returns were negative compared to a predicted 7-12%, and in the other, an actual return of 10% was achieved whilst forecasts were of 24-44%.
However, even if one blindly accepts that investment in information technology is beneficial
there is a further rationale for sensible assessment. Overly34 identifies it thus: 'Technology-based
programs often result in benefits and costs which were not identified or acknowledged in the
planning and resource allocation process and, as a result, there are beneficiaries who do not pay
for their gains and benefactors who must pay costs they do not incur through their own actions'.
Hawgood35, for instance, identifies five groups who benefit from the introduction of computer technology: the information subjects, users of the system, employees, those financially affected and the decision takers. Further, in the last case the decision maker may not be the one who benefits
from the decision. Thus, if the organization operates in any sort of decentralized or devolved
budgetary mode, there is a need for some mechanism to identify and allocate costs and benefits.
One part of the organization should not be forced to incur higher costs owing to an inefficient
system installed elsewhere. Similarly, unless computing costs are met centrally, a department
should be able to recover some element of the benefit it provides to others by using IT.
There is a documented move towards a charging system for computer use (see, for instance,
Ernest-Jones36) being employed by organizations. The aim is to highlight the expenditures and to enhance user departments' awareness of the costs of IT. This may cause the end-user departments
to wonder whether they get value for money from this budgetary investment, but it side-steps the
issue of how such investment is actually evaluated.
As has been mentioned, the objective methods predate the subjective. This is probably a
reflection of the change in the types of tasks tackled by computer systems. Early systems replaced
largely clerical tasks where differential costs and benefits were easier to identify. The movement
of computing systems to the tackling of more decision-orientated tasks necessitated the widening
of the scope of costs and benefits. With this greater impact on less quantifiable activities, such as
'better' decision taking, came the realization that the accuracy, or indeed traceability, of quantified
costs and benefits was increasingly dubious. Hence the movement to 'soft' or subjective analysis.
However, even this view is open to challenge. Goodwin37 describing the development of the first
ever office computer, Leo, comments, 'It was evident that Leo was not merely a tool which would
enable clerical work to be performed more economically but ... a tool which ... could help
management at all levels to control the business more efficiently'. Yet the Kobler Report (see (7))
suggests leading companies 'Are moving towards long term effectiveness even though such IT investments were harder to justify quantifiably'.
SUBJECTIVE METHODS
Subjective methodologies first arose in the late 1970s (see Table 2). These methodologies were
often propounded as team building ones. The notion was to get the computer system out of the
data processing domain and into that of the manager or user, hence giving the user a sense of
participation, ownership and commitment. In some sense most of the subjective methods are
merely spurious pseudo-quantitative ones. They still try to quantify in order to differentiate
between systems, but the quantification is of feelings, attitudes and perceptions. There is, however,
less emphasis on trying to convert these abstract values into a common monetary denominator.
Schniederjans and Fowler38 highlight the contradictory nature of objective and subjective factors
in investment analysis. Discussing strategic acquisition management they conclude that, 'In
strategic acquisition analysis, the decision to acquire a firm can be based chiefly on subjective
judgement on what are desirable objective factors'.
By their very nature, subjective evaluation techniques may only be used ex post. The decision to invest using such methods can therefore only be taken on the basis of the performance
characteristics of similar systems. Although the same could be said of quantitative techniques, their
objective nature allows cross-sectional system comparison to be more fruitfully undertaken in new
proposals. Despite the tendency for hard analysis to drive out soft (Ijiri39) in a reverse application
of Gresham's Law, subjective methodologies seem to be in the ascendency. Recent research
concentrates almost exclusively on soft analysis, to some extent blinding the practitioner to the
worth of quantifying techniques. The possible ways forward for objective methodologies are briefly
considered later.
Is the evaluation of information technology investment dissimilar from the assessment of other
investments the organization may wish to undertake? There seems to be an almost blind acceptance,
in the literature and in practice, that IT investment is different, because IT is different. This
does not appear to be a tested hypothesis. The problems inherent in many types of investment
consideration are similar and so the solutions, if solutions there are, may be the same.
Part of the argument that IT is different stems from the standpoint of the early researchers
on information technology. These writers were essentially technologists; they did not possess
investment appraisal skills. Later, when the field was encroached by business-related researchers,
who often had little technical background, the myth had already been established and was not
disproved. These researchers took as their starting point the problems of intangible items and their
measurement, and proceeded to suggest methods of attaching values to the intangible benefits
and costs. Note, however, that Aris40 claims there is no such thing as an intangible, offering six
techniques for attaching value to intangibles, but most would agree with Overly34 who comments,
'Current technological assessment programs are limited in their ability to account for all benefits
and costs as they are too technology-oriented'.
Marsden and Pingry41 point to the conflict between user satisfaction and company profitability.
System designers are entreated to enhance system usage and ease implementation by involving
the user. The aim of this is to achieve the Holy Grail of user satisfaction. Yet a satisfied user is
not necessarily a productive or profitable user. Satisfaction probably implies satisficing, which is presumably sub-optimal for the organization as a whole. However, optimization requires a
metric for the measurement of success. There is a sense though, in which IT evaluators have
deliberately misconstrued the role of formal analysis in project appraisal. Quantitative techniques
are postulated as rigid structures, the end result of which is an accept/reject decision. There is no
role for expertise or managerial application in this case. As Gunton42 puts it, 'The problem is not
that end-user system projects cannot be justified but that they cannot be justified in terms which
accountants and many senior managers are prepared to accept at the present time'. The writer goes
on to argue that normal measurement techniques do not work and that one should abandon
financial analysis and use 'gut feelings'. This is echoed by Boddy et al.43 who describe a hotel
system which was justified primarily on the ground of customer service because, 'It was difficult
to identify cost savings to justify the investment in conventional accounting terms'. Thus, whilst
one could argue that the recent ascendency of accountants to the upper echelons of organizations
should point to an increased use of formal methods, it is not demonstrably clear that this is the
case. Arguably, those lower in the organization may prefer to use subjective methods, or no method
at all, in order to get project acceptance because they perceive quantitative methods not to be
successful.
Williams44 points to the same accountants' malaise in other types of investment appraisal;
'Many scientists have alleged that one important reason why we are often slow into new fields is
that the crucial investment decisions are made by bankers and accountants who have a strong
aversion to risk and are unable to comprehend the impact of the scientific revolution'.
The OECD45 joins this attack on the accountant's role in the context of a handbook for project
evaluation in developing countries: 'Some costs are of course easy to calculate, but others entail
estimates that shock the accountant with his excessive and often unrealistic desire for precision,
or the technical expert who is too aware of the wide range of possibilities to give even an
approximate answer'. Long46, too, views rigid insistence on cost justification as a legacy of an
earlier computing era; today attention should be concentrated on the system's effectiveness not
efficiency.
This is a misspecification of the place of formal methods in evaluation. Leaving aside the need
for expert input in the quantification of inputs and outputs as well as the need for thought in
deciding the very nature and scope of those costs and benefits, there has never been a school of
thought which advocates project appraisal performed along the rigid dichotomized lines thus
described. Awad47, for instance, writes, 'Cost/benefit analysis is a tool for evaluating systems, not
a substitute for users' final judgement. Like any tool it has its drawbacks,' and earlier, 'Analysis
of the costs and benefits of alternative systems guides the final solution'.
It might be argued that Awad, a recent researcher on the subject, is presenting a softened tone
since IT is involved. Yet one can go back to early accounting writers who state, whilst considering,
in general terms, investment appraisal or capital budgeting, that accounting figures are only the
starting point for analysis. Anthony and Welsch48 write, 'Few, if any, business problems can be
solved solely by the collection and analysis of figures.' Merrit and Sykes49 concur: 'For this
analysis to be justified despite uncertainty and imperfect estimates, it is merely necessary that the
extra effort involved should yield a worthwhile improvement in the quality of decision making'.
In the same vein, Batty50 argues that, 'Intuition and judgment cannot be replaced by the collection
of facts, but there is no doubt that the decision making mechanism is likely to be strengthened
materially by the systematic collection and analysis of relevant data'. Lastly, Harvey51 argues for
a 'second entrepreneurial test' to be applied to any computer purchase after having weighed up the
proposal 'by traditional means'.
The idea that the measurement of costs and benefits is problematic has surfaced in fields other
than IT. Indeed, there is a whole literature on the cost-benefit analysis of economic and social
investments. Here, values are attached to all the identified consequences, both favourable and
unfavourable, of a particular project.
A traditional area where cost-benefit analysis is common is that of engineering and within this,
civil engineering. The use of economic evaluation is recognized. For instance, Rose52 writes, 'It is
inconceivable that large capital expenditures could be made without an accompanying economic
evaluation'; and later, 'Any decision regarding the expenditure of capital therefore should be
preceded by economic evaluation. However, this is never the whole story and 'non-economic'
factors should also be taken into account before the decision is made'. Antill53 concurs with the
first quote stating that estimation of construction costs is, 'Undoubtedly one of the most important
aspects of civil engineering'. Riggs54 adds, 'Revenue is somewhat harder to estimate than costs for
many industrial projects', indicating that in the calculation of benefits, rather than costs, IT and
engineering suffer the same type of problems. Benefits are recognized to be of various types
and 'Qualitative in nature-more convenience, better living conditions, more leisure time-it is
possible to attempt to express their benefits in money terms . . . [but] such calculations are full of
uncertainty' (Rose)52.
Thompson55 adds weight to the comments of Gunton above. He argues that cost estimations
'Are all predictions . . . and we do not expect them to be accurate in the accounting sense'. Rose52,
whilst acknowledging that, 'Cost estimation can be a serious source of error' feels that, 'It is
surprising how much can be expressed quantitatively given a little thought and imagination'. Antill
agrees, 'It is surely somewhat remarkable that an estimator can produce construction works costs
with an average accuracy as good as ±10%'.
The methodologies propounded by these writers essentially fall into three categories. First, those
based on an 'exponential or factorial' analysis of similar projects scaled up or down as necessary.
Secondly, the use of unit rates. Databases of costs per identifiable unit exist for most civil engineer-
ing tasks. These costs, massaged to take account of atypical circumstances, are multiplied by the
units to give a total project cost. The third method is termed operational estimating. This seems
to be a catch-all method used in unique projects where costs are estimated for the constituent
operations and activities, and aggregated. None of these methods are startling or novel, the only
difference between this field and the information technology one being the existence of a very large
databank of historical information which is readily available in published form.
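As a loose illustration (not drawn from the cited authors), the three approaches reduce to very simple arithmetic; all figures below are hypothetical, and the 0.6 scale exponent is only a common engineering rule of thumb.

```python
# Hypothetical illustration of the three estimating approaches described above.

# 1. Exponential (factorial) scaling: the cost of a similar past project,
#    scaled by relative size raised to a scale exponent (0.6 is assumed here).
def scaled_estimate(reference_cost, reference_size, new_size, exponent=0.6):
    return reference_cost * (new_size / reference_size) ** exponent

# 2. Unit rates: a published cost per unit, massaged by an adjustment factor
#    for atypical circumstances, multiplied by the number of units.
def unit_rate_estimate(units, rate_per_unit, adjustment_factor=1.0):
    return units * rate_per_unit * adjustment_factor

# 3. Operational estimating: cost each constituent operation and aggregate.
def operational_estimate(operations):
    return sum(cost for _, cost in operations)

print(scaled_estimate(2_000_000, reference_size=100, new_size=150))
print(unit_rate_estimate(units=5_000, rate_per_unit=120, adjustment_factor=1.1))
print(operational_estimate([("earthworks", 300_000),
                            ("foundations", 450_000),
                            ("superstructure", 900_000)]))
```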
Writers in the engineering fields talk of accuracy of preliminary estimates of the order of ±15-20% (Antill53) and ±5% for final estimates (Rose52). Such accuracy, if achieved in the IT area, would be astounding. However, in a computer corollary, Smith56 notes three methods of
project costing. These relate firstly to the quantity of code or object instructions, secondly to the use
of past project data and thirdly to an estimation model attributed to Putnam57. Boehm58 offers a
thorough synthesis of such methods and provides one of his own, and, whilst acknowledging that
the models are an aid and not a substitute for management action, he claims that such models can
estimate software costs within 20%, 80% of the time. As an aside, it is interesting to note that the strengths of the software cost estimation models are largely their 'objectivity' and their weaknesses their 'subjectivity'. Norris60 (see Twiss59), reviewing 475 R&D projects, found actual costs of 0.97-1.51
times the estimate.
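Boehm's basic model is simple enough to state directly: estimated effort in person-months is a(KLOC)^b, where the coefficients depend on the class of project. A minimal sketch using the published basic COCOMO coefficients follows; the 50 KLOC project size is invented for illustration.

```python
# Sketch of Boehm's basic COCOMO model: effort (person-months) = a * KLOC ** b.
# Coefficients are the published basic-model values for the three project modes;
# the 50 KLOC size is an invented example.

COEFFICIENTS = {
    "organic":       (2.4, 1.05),  # small teams, familiar environments
    "semi-detached": (3.0, 1.12),  # intermediate
    "embedded":      (3.6, 1.20),  # tight hardware/software constraints
}

def cocomo_effort(kloc, mode="organic"):
    a, b = COEFFICIENTS[mode]
    return a * kloc ** b

for mode in COEFFICIENTS:
    print(f"{mode}: {cocomo_effort(50, mode):.0f} person-months")
```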
Moving out of the pure engineering field, cost-benefit analysis has been used widely for a range
of social projects. Attempts have been made to measure the returns on investment in schooling
(Hansen61), in medical research (Weisbrod62), in advertising and in education and training.
Similarly, analysis has been undertaken in evaluating employment schemes, mental health
programmes and disease control attempts. Although difficulties existed, estimated rates of return of 11-12% were generated for polio research and 10.4-15.3% for different levels of education. The
authors of these studies freely admit the large error levels, yet feel, on balance, that the estimates
are sufficiently accurate to be useful. In comparison with these fields, information technology is,
perhaps, a far better bounded domain with less sensitive cost and benefit estimates.
Some interesting empirical work in the research and development (R&D) field is reported by
Parker9. The expected probability of the technical success of projects was greater than 80% in 75% of cases reviewed by Mansfield63. However, actual success rates were 44%. Cost deviations were 20% under budget for half of the projects whilst, surprisingly, only 15% exceeded budget forecasts by more than 20%. In the same field, Twiss59 provides five methods of setting research
budgets: costing of an agreed programme; comparison with other firms; as a percentage of
turnover; or of profit; and finally as an incremental deviation from previous budgets.
Despite a paucity of empirical research on the issue showing the magnitude of returns, companies
do invest heavily in IT. Not only do they invest, but there is a feeling, often unquantified, that
investment is worthwhile. However, one manager quoted in Ernest-Jones36 asserts, 'I know how
much I spend [on IT] and how much payback I get'. Yet, the only item measured by this evaluator
was staff reductions. Perhaps this organization has not yet progressed to supporting intangible
areas of activity or conversely, even today, IT can be justified on the basis of staff savings
alone.
POSSIBLE SOLUTIONS
What then are the ways forward? Certain possibilities suggest themselves. One would be to accept
that IT investment is not different, that standard techniques are applicable, or at least, as applicable
to information technology as to any other type of project. A second would be to acknowledge that
IT is very different and hence unquantifiable. This view is partially supported by an ASA survey64 in which only 40% of respondents believed office automation could be economically quantified.
A further way forward would be to develop another methodology.
The existence of seventeen currently identified methodologies suggests the field is already a little
crowded and that a 'new' method would be likely to add little. A last attempt might be to investigate
how currently available techniques might be employed and possibly amended to overcome some
of the difficulties raised in the previous sections of this paper. Two techniques would appear to have
some benefit. These are discussed in detail elsewhere. The first is the application of zero-one goal programming to the problem (a small illustrative sketch is given below). Suffice it to say, the problems of quantifying the costs and benefits such an approach needs, even in an ordinal fashion, still exist. A second methodology, developed by Chapman
and Cooper65, uses a parametric discounting technique and may have applicability here. Within,
or perhaps in parallel to, these techniques is a need to identify formally disbenefits of particular
courses of action. Also a need exists to look at alternative future scenarios. A final alternative might
be to attempt to establish a global multiplier for IT investment. Although crude, such a measure
might guide investment strategies in some ordinal sense. That is, there might be a life cycle of
information technology development with expected higher returns being given by investment in
transactions processing systems before investment in decision support, or networks. Empirical and
theoretical support for this type of life cycle exists (see for instance Price66).
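To convey the flavour of the zero-one goal programming idea mentioned above, the sketch below selects among candidate projects by minimizing weighted deviations from a budget goal and a benefit goal. All projects, figures and weights are hypothetical, and a real formulation would hand the model to an integer programming solver rather than enumerate selections.

```python
from itertools import product

# Hypothetical candidate IT projects: (name, cost, benefit score).
projects = [("transactions processing", 400, 7),
            ("decision support", 300, 5),
            ("network", 250, 4),
            ("electronic mail", 100, 2)]

BUDGET_GOAL = 600   # spend at most this (hypothetical units)
BENEFIT_GOAL = 10   # achieve at least this benefit score (hypothetical, ordinal)
W_OVERSPEND, W_SHORTFALL = 2.0, 1.0  # penalty weights on the goal deviations

best = None
for x in product((0, 1), repeat=len(projects)):  # every zero-one selection
    cost = sum(xi * c for xi, (_, c, _) in zip(x, projects))
    benefit = sum(xi * b for xi, (_, _, b) in zip(x, projects))
    # Deviation variables: overspend above the budget goal and
    # shortfall below the benefit goal; underspend and overshoot cost nothing.
    penalty = (W_OVERSPEND * max(0, cost - BUDGET_GOAL)
               + W_SHORTFALL * max(0, BENEFIT_GOAL - benefit))
    if best is None or penalty < best[0]:
        best = (penalty, x, cost, benefit)

penalty, x, cost, benefit = best
chosen = [name for xi, (name, _, _) in zip(x, projects) if xi]
print(f"Select {chosen}: cost {cost}, benefit {benefit}, weighted deviation {penalty}")
```

Note that even this toy formulation presupposes ordinal benefit scores, which is exactly the quantification difficulty raised above.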
No attempts seem to have been made to develop optimization models along the R&D lines.
Parker9 discusses a number of these which, essentially, relate optimal R&D expenditure to the
price elasticity of the product and to market share. The same author also reports that profit
maximization models are fairly good at explaining budget allocations in this discipline. Again,
evidence for this in the information technology area is lacking.
Despite the existence of, albeit partial, methodologies for information technology evaluation,
there is an apparent lack of use of formal techniques. Galliers67 highlights the prime barrier to
strategic information systems planning as measuring the benefits, yet finds that this analysis only
occurs in 16% of cases, and formally in only 9%. Similarly Coulson-Thomas68 identifies issues of
importance to IT directors, the first being 'management issues', which does not include any notion
of return on investment. In nine case studies Sheppard8 found little evidence of formal evaluation
of IT benefits. Organizations or groups within those organizations do not, on occasions, wish to
evaluate IT, at least ex ante. Evidence of information technology budgets being set on some sort
of competitive parity is available (see, for instance, Weill and Olson5). That is, industry-wide
norms are established through an informal information exchange, e.g. the Annual Datamation
Survey69. This can be likened to one of the methods by which firms establish advertising budgets.
Kotler11 points to stable advertising budgets, as percentages of sales, in the cosmetics, confectionery and soap industries. Kay70, too, observes stability in R&D budgets, rationalizing this by reference to a managerial preference for stability. Twiss59 argues similarly that firms tend to
have stable technology support budgets which are only altered following take-over, merger, or the
like. It would be wrong, however, to regard this desire for stability as wholly bad; there are strong
arguments for a need for stability in certain long run budgets.
A number of plausible reasons for the lack of use of formal evaluation methods can be identified.
The first is quite simple: are most projects evaluated formally? The existence of robust reliable tools
for project evaluation does not imply their use. Where they are used, it may be in an ex post
justification mode rather than in a proactive one.
There is a substantial body of evidence to suggest that formal evaluation methods are widely
used in practice. Gurnani71 surveys the empirical literature on capital budgeting methods.
There appears to be a steady growth in the use of sophisticated techniques. The use of discounting
amongst surveyed organizations grew from 10% in 1959 to 60% in 1975 and 86% a year later (a
figure confirmed by Moore and Reichert72 and Pike73). Greater use is found in larger corpora-
tions, chemical and petroleum companies, utilities and capital intensive industries. However, the
organizations using such techniques did not apply them universally. Some applied the techniques
only to large projects, whilst others categorized projects and applied different criteria and tests to
each category. Categories included research and development, expansion, replacement and manda-
tory projects. Computer purchase was not explicitly identified. Pike73 suggests that, in practice,
firms resolve capital rationing problems by restricting the number of proposals entering the decision
process and by using naive techniques such as payback rather than formulating more complex
models. Nearly all (97%) of companies reported approving economically infeasible investments.
Non-quantifiable factors were felt to be of extreme importance: 77% of one sample by Petty (see
Gurnani71), 'Acknowledged that though quantitative factors are dominant ... qualitative factors
do influence the final decision'. Often these non-quantifiable factors were identified as future pos-
sible benefits which might accrue in later years but were currently unassessable. Hence, the use of
relatively sophisticated techniques is apparent, as is a recognition of where these are inappropriate.
What is not established is if information technology considerations do or should fall into the latter
category. As early as 1965, Williams and Scott33 found that their computer acquisition case-
studies all involved quantitative evaluation. This was not the case in other investment decisions.
These findings are mirrored in the R&D literature. Parker9 finds a positive correlation between
the use of quantitative techniques and firm size. On the value placed on quantitative appraisals by
laboratory directors he states, 'A proper attitude is one of scepticism, biased towards an appre-
ciation that quantitative techniques are no more than an aid'. However, he quotes Mansfield et al.'s74 view that 'Only 10-20% of lab directors regarded such estimates as poor or untrustworthy'.
Perhaps what is needed are cost-benefit analyses of doing cost-benefit analyses. Limited evidence
of the benefits derived from undertaking such analysis is available in such areas as US Defense
programmes (Williams44), yet such studies are not widespread. One of Sheppard's respondents
states8 that there is, 'Obviously a need for Email' but it is too expensive to establish a business case
for it. Finally, a recognition that the process of evaluation may be as valuable as the outcome might
be helpful. The procedure is a control system not just a selection technique. Binning75 argues
that the strength of formal techniques lies not in their precise quantified results but in their demonstration that complex situations are amenable to logical analysis.
A final rider to this section might be the ambiguity of the evidence on superior firm performance
and the use of capital budgeting techniques. Haka et al.76 have reviewed the literature and
empirically suggest that although short term performance increases may be correlated with the use
of sophisticated techniques, there is no long term correlation. Despite this, Gordon77 notes the
insistence of the US Federal authorities on benefit-cost analysis for project evaluation; specific
reference is made to IT systems.
Turning aside from the problems of measurement in information technology evaluation, the view
that there are other motivations or reasons for not carrying out such assessments needs to be
addressed. A number of arguments for this are reviewed below. Perhaps one overwhelming reason
is that firms do not have objectives, and hence have no yardstick by which to measure proposed
systems. Williams and Scott33 comment, 'For all firms in our case-studies, we did not find defini-
tions of objectives that acted as a clear-cut basis for investment decisions'. Nevertheless, most
organizational theorists would accept that organizational goals do exist. These may not be stable,
consistent nor beneficial to the majority of participants, yet they do seem to influence organiza-
tional actions. As Coombs et al.4 point out, the work of researchers such as Marris78 and Cyert
and March79 can only explain variations in managerial behaviour, not globally existing or specific
reactions by different firms to different circumstances.
Computerization is seen in some instances as obligatory or strategic. 'Strategic' is the new
defensive avoidance term, possibly a substitute for 'subjective' in quantitative terms. Anything
labelled as 'strategic' bypasses the normal review process. It seems odd that firms seek to engage
in strategic investments, often substantial, which they do not evaluate or quantify. The reasons for
this are likely to be competition or perceived competition. Certainly in the Powell et al.80 survey
of decision support system (DSS) use in accounting, the most frequently cited reason for DSS
purchase was corporate image and the desire not to be seen to be lagging behind the competition.
Sheppard8, however, suggests that the successful use of IT as a strategic weapon requires, inter alia, value-added justification, although the returns are primarily measured by staff reductions, by gaining a competitive edge or by 'act of faith'. Twiss59 dismisses this act of faith argument, arguing that, 'No company is going to invest heavily in technology solely as an act of faith in the hope that by backing the right people "something will turn up"'. Interestingly, he suggests that R&D investment is a strategic decision but, as detailed below, one which is routinely evaluated. Conversely Strassman81 argues against specific justification of computer purchases; rather, justification is via the links to strategic goals.
In other instances the use of shared facilities forces computerization. In the banking community,
for example, many of the payments made are by electronic transfer. It is necessary to have the
requisite systems, in order merely to participate in the current processes. Obligation implies a need
to justify rather than to evaluate. Hence, such justification is going to be of a satisfying rather than
optimizing nature since the aim is to achieve better than a hurdle of some sort set by the
organization. Such hurdles may not be quantified but subsumed into wider organization-based
evaluation. That is, only if the return from the company as a whole is poor, are individual elements,
such as the computer system, subject to scrutiny. It is not clear if these hurdles are even set ex ante.
The effect of setting hurdles ex post must be one of inducing cognitive dissonance into the
evaluation process. Yet hurdle figures set beforehand may also be lacking, since, unless a realistic
view of system objectives is available, the targets are likely to be set in nebulous terms.
The obligatory, or strategic, argument for investment in an activity without formal analysis being
invoked, is one which Cook and Rizzuto82 find empirically supported. In a survey of capital budgeting practices for research and development, the authors find that for basic research only 23.1% of their sample used formal analysis. This figure rises to 76.5% at the development stage.
The rationale put forward is that investment in R&D is strategic, and hence obligatory, if the firm
is to remain competitive. The expectation here would be that firms would differentiate between
investments needed to maintain the industry status quo and those that contribute to increased
profitability. However, as early as 1975, Durand83 argued that informatics should no longer be
supported at any cost, but be asked to account for itself.
Strategic justification is usually defended on the grounds of conformity to organizational objec-
tives but Wassenaar84 points to the difficulty of establishing links between intangible benefits
and corporate objectives. Mitchell85 sees the organizational strategic objective as comprising three
elements: knowledge building, strategic positioning and business investment. Certainly the final
of these, business investment, needs to be formally evaluated and a similar case could be made for
the other two.
In some sense, computer system evaluation must be related to what would occur if one did not computerize. The forces for computerization are such that this alternative is not often considered, except in so far as it is used as a benchmark by computer manufacturers to push dubious
system feasibility studies. Turban86 adds the further problem, not apparent in some other fields,
of partial implementation of the computer project, whilst Keen15 points to the evolutionary nature
of most decision support systems.
There are also scenarios in which cost may not matter, or may not be a dominant motivator. This is illustrated by two quite different situations. The first is where computerization costs are a small proportion of total costs. For instance in the London financial market, post 'Big Bang', the cost of a dealer's
workstation, whilst high, was small in comparison to the costs of the dealer using it and to the
returns available from successful trading in the market. The second situation is the one of R&D
mentioned above. Here funds may be committed without an explicit requirement or even
expectation of a return on the investment. There may also be a psychological sense in which more
attention is paid to small amounts of investment than to large ones. The pennies are more closely
guarded and monitored than the pounds. Indeed, the dominant motivators for IT investment are, according to Sheppard8, that the old system is difficult to maintain, that there is a desire to replace old systems with new technology, and that an opportunity exists to expand services. None of these is
explicitly cost-based. Similarly Twiss59 sees one of the motivations for R&D expenditure to be the
'foot in the door' concept; that is, invest to avoid sudden technological surprises.
Paradoxically, government intervention in new technology may inhibit rigorous evaluation.
In Japan the MITI sponsorship of the Fifth Generation project, the UK Alvey experience and
the European Esprit project have all involved governments supporting team-developed IT applica-
tions. A number of these are pre-competitive, taking the form of 'clubs'. The investment, which
is often government matched, for each member of the club is small and hence easily written
off. Little emphasis is placed on economic evaluation as a criterion for success, hence little is
learned in this area to be carried over to other projects. It must be pointed out that government
sponsorship of new technological advances is not new. There is a history of government inter-
vention back to the industrial revolution, but this intervention tends to be technology-led, not techniques-led.
Misspecification of requirements of computer systems is apparent. The clients, be they internal
users or external customers do not know what they require. This induces a tools-led or analyst-led
scenario. Specifications are more likely to change rapidly in such a case. The analogy here is with
the defence industry. The developer is frequently working with poor or variable specifications,
often at the leading edge of technology. The technical problems of such operations are likely to
be greater than those found in other more stable environments. Coupled with this is the problem
of evaluating the likely benefits of a poorly specified or misspecified system. Yet, it should be borne
in mind that the problems of building a new and probably unique civil engineering undertaking are
no less problematic. However, it could be argued, that with software development one is less aware
of where one is, in terms of the extent to which a project is complete, at any given time. For instance
the amount of effort expended on the user interface in the XCON expert system construction has
been documented (see for instance Bobrow and Stefik87) at almost 50% of the total system (in
terms of coding). This was an unexpectedly high and unforeseen level, and presumably not allowed
for in the system specification.
The computer industry is notorious for its cost overruns. A trawl through any computer-industry newspaper will yield numerous cases. So too, however, would a similar fishing
trip in an engineering counterpart. It is a standard ploy of contractors to oversell their projects.
Thompson, writing from an engineering standpoint, comments 'In many cases particularly when
the sponsors of the project are politically motivated, there is a tendency not to disclose or consider
all the risks and to sanction an optimistic estimate of costs in the knowledge that once the project
is under way it will rarely be stopped'. Freeman88, writing on cost estimation in research pro-
posals, concurs: 'The context of estimation will always be one of political advocacy and clash
of interest groups, whatever the possibility for sober calculation'. Page and Hooper89 offer an
information systems view, 'Actually one of the most common deficiencies is not the determination
of a poor cost figure but rather the complete omission of important costs'. Counteracting this
deficiency, though, is another from the same source: 'It is common to have unanticipated benefits
which are more important than the actually anticipated benefits'. The cost overruns are most
noticeable in large projects. Although smaller projects may have vast percentage cost overruns there
is a tendency to focus on the absolute magnitude of such cost rises. Also the options element in
investment in information technology is large, possibly larger than that associated with other
projects.
Harvey51 offers four reasons why computer purchases are poor. The first is the mesmerizing effect, where benefits are taken for granted. The second is a blinding of the purchaser by science. Third is an underestimation of the human factors involved, and last a lack of purpose in the
purchase decision.
There is no doubt that there are trade-offs amongst the cost/time/performance metrics of project
appraisal and analysis. If too much pressure is applied to any one of these, then the others will
be affected. Boehm et al.90 identified a trade-off between development costs and life-cycle costs
for software production and claim that even if software is on time and on budget it may be
unsatisfactory since it may be hard to understand and modify, difficult to use (or easy to misuse)
or hard to integrate. Maintaining the right balance may be more important than the individual level
of any single item. Norris' sample suggests that project duration is more difficult to estimate than
cost.
The past problems of highly publicized computing projects may engender a feeling of operating within a world of perceived failure. Such a culture may not be conducive to ex ante evaluation in any rigorous sense. The fact that organizations continue to invest, despite this failure, is at odds with most views on the matter. The KPMG91 report on 'runaway' computer projects, for instance, claims that 30% of all major projects become runaways. Williams and Scott33 point to
R&D projects often being judged, not on current quantitative evaluations, but on the track record
of the manager concerned. Coulson-Thomas68 describes how the cynicism resulting from earlier
generations of technology which did not deliver the hoped for benefits has given way to a realization
that it is the use of the technology not the technology itself which is of prime importance. However,
it might be more beneficial, rather than viewing information technology as a means for achieving
a competitive edge through better implementation, to see it as a form of non-price competition.
That is, IT is a corporate image maker and as such cannot be evaluated in the same way as other
projects.
There is also evidence that information technology is a significant barrier to entry to new
competitors in certain fields. The high fixed costs and reluctance of customers to duplicate existing
facilities have proved a barrier to new entrants in such fields as airline reservation systems, the
hospital supply industry and cable television. Information technology can also be used to create
an illusion of entry barriers. Witness also the number of mergers and acquisitions that have been
abandoned due to the incompatibility of the data processing systems of both parties. The UK
Building Society industry, where IT spending is 20% of operating costs, has seen a number of such
incidents, with system incompatibility cited as a major reason for failure.
A further point which may engender caution in any who seek to evaluate information technology
is that of the critical success factors which have been identified for information systems. One of
the major factors to emerge is that of top management support. If IT success is dependent upon
such factors, then a view of the level of such support must be taken ex ante by the evaluator. This
may prove highly problematic and politically unacceptable. Other factors identified by Keen and
Scott Morton19 include early commitment by the user and conscious staff involvement. All these
may be difficult to take a view on before the investment takes place. Interestingly, most of the
critical success factors identified in IT studies surface in other research on R&D success. Parker9
finds top management support to be the most important item followed by such factors as clear
identification of need, good cooperation, and availability of resources.
A final, more cynical, view is that information technology is just 'toys for the boys' and not a
serious corporate tool.
CONCLUSION
The rapid pace of change in information technology poses serious problems for any large investment from the outset. Any long-term, fixed project is almost obsolete before it has started and is certainly passé by the time it is fully installed. This does not, however, negate the need to evaluate projects.
It is clear that the justification of information technology is difficult, yet techniques are available
which give broad indications of success and failure. These standard techniques do not appear to
be widely used, even though they have been employed in other fields and are recognized as useful.
If information technology is to emerge as a beneficial corporate tool, the decision to invest needs
to be examined as rigorously as with any other large investment. However, Currie92 terms IT justification a 'ritual of legitimacy', not a process of really assessing benefits. But then, as Williams44 pointed out a quarter of a century ago, 'The important thing in business is not to make good forecasts but to make them come true'.
REFERENCES
1. P. ZAKIERSKI (1987) A review of new technology investment techniques. Unpublished M.Sc. Dissertation, University
of Southampton.
2. J. EATON, J. SMITHERS and S. CURRAN (1988) This is IT. Philip Allan, Oxford.
3. M. J. EARL (1987) Information systems strategy formation. In Critical Issues in Information Systems Research
(R. J. BOLAND and R. A. HIRSCHHEIM, Eds) pp 157-178. John Wiley, Chichester.
4. R. COOMBS, P. SAVIOTTI and V. WALSH (1987) Economics and Technical Change. Macmillan, London.
5. P. WEILL and M. OLSON (1989) Managing investment in IT. MIS Quarterly 13(1), 3-17.
6. S. PLUNKET (1990) Making computers pay. Today's Computers, March, 8-10.
7. C. PRICE (1989) quoted in T. Ernest-Jones36. Computer Weekly, February 20-21.
8. J. SHEPPARD (1990) The strategic management of IT investment decisions: a research note. Brit. J. Management 1,
171-181.
9. J. E. S. PARKER (1978) The Economics of Innovation. Longman, London.
10. R. BUZZELL, R. NOURSE, J. MATHEWS and T. LEVITT (1972) Marketing: A Contemporary Approach. McGraw-Hill,
New York.
11. P. KOTLER (1976) Marketing Management. Prentice Hall, New Jersey.
12. T. W. MCRAE (1970) The evaluation of investment in computers. Abacus 6, 56-70.
13. P. M. Q. LAY (1985) Beware of the cost/benefit model for I.S. project evaluations. J. Syst. Mgmt. 36(1), 30-35.
14. M. P. MARTIN and J. E. TRUMBLY (1986) Measuring performance of automated systems. J. Syst. Mgmt. 37(2), 7-17.
15. P. G. W. KEEN (1981) Value analysis: justifying decision support systems. MIS Quarterly 5(1), 1-15.
16. E. O. JOSLIN (1965) Application benchmarks: the key to meaningful computer evaluations. Association for Computing Machinery, Proceedings of the National Conference 20, 27-37.
17. J. S. CHANDLER (1982) A multiple criteria approach for evaluating information systems. MIS Quarterly 6(1), 61-75.
18. J. KANTER (1970) Management Guide to Computer System Selection and Use. Prentice-Hall, London.
19. P. G. W. KEEN and M. S. SCOTT MORTON (1978) Decision Support Systems: An Organisational Perspective. Addison-
Wesley, Reading, Mass.
20. G. P. SCHELL (1986) Establishing the value of information systems. Interfaces 16(3), 82-89.
21. R. H. SPRAGUE and E. D. CARLSON (1982) Building Effective Decision Support Systems. Prentice-Hall, Englewood
Cliffs, N.J.
22. S. HAMILTON and N. L. CHERVANY (1981) Evaluating information system effectiveness-part 1: comparing evaluation
approaches. MIS Quarterly 5(3), 55-69.
23. N. AHITUV (1980) A systematic approach toward assessing the value of an information system. MIS Quarterly 4(1),
61-75.
24. R. L. KEENEY and H. RAIFFA (1976) Decisions with Multiple Objectives: Preferences and Value Tradeoffs. John Wiley
and Sons, New York.
25. J. A. SENN (1978) Information Systems in Management. Wadsworth Publishing, New York.
26. J. N. CHAPPLE (1976) Business Systems Techniques for the Systems Professional. Longman, London.
27. E. R. MCLEAN and T. F. RIESING (1977) M.A.P.P.: a decision support system for financial planning. Data Base 3(3),
9-14.
28. C. H. KEPNER and B. B. TREGOE (1965) The Rational Manager. McGraw-Hill, New York.
29. H. Q. NORTH and D. L. PYKE (1969) 'Probes' of the technological future. Harvard Business Review 47(3), 68-82.
30. C. F. GIBSON (1975) A methodology for implementation research. In Implementing Operations Research/Management Science part II (R. L. SCHULTZ and D. P. SLEVIN, Eds) pp 53-73. Elsevier, Amsterdam.
31. T. LINCOLN (1986) Do computer systems really pay off? Info. and Mgmt 11, 25-34.
32. R. EVANS (1989) Why productivity in the office is slowing down. Computer World. October, 64-66.
33. B. R. WILLIAMS and W. P. SCOTT (1965) Investment Proposals and Decisions. George Allen & Unwin, London.
34. D. OVERLY (1973) Introducing societal indicators into technology assessment. In Technology Assessment in a Dynamic
Environment (M. J. CETRON and B. BARTOCHA, Eds) pp 561-590. Gordon and Breach, New York.
35. J. HAWGOOD (1975) Quinquevalent quantification of computer benefits. In Economics of Informatics (A. FRIELINK,
Ed) pp 171-180. North-Holland, Amsterdam.
36. T. ERNEST-JONES (1989) Does your system give value for money? Computer Weekly, February, 20-21.
37. C. GOODWIN (1989) Leo and the Managers. Computing, April, 24-27.
38. M. SCHNIEDERJANS and K. FOWLER (1989) Strategic acquisition management: a multi-objective synergistic approach.
J. Opl Res. Soc. 40, 333-345.
39. Y. IJIRI (1975) Theory of accounting measurement. Studies in Accounting Research 10, American Accounting
Association.
40. J. ARIS (1975) Quantifying the costs and benefits of computer projects. In Economics of Informatics (A FRIELINK, Ed)
pp 15-24. North-Holland, Amsterdam.
41. A. L. MARSDEN and Y. PINGRY (1988) End user-IS designer interaction. Info. and Management 14, 75-80.
42. T. GUNTON (1988) End User Focus. Prentice Hall, New York.
43. D. BODDY, J. MCCALMAN and D. BUCHANAN (Eds) (1988) The New Management Challenge: Information Systems for
Improved Performance. Croom Helm, London.
44. B. R. WILLIAMS (1967) Technology, Investment and Growth. Chapman & Hall, London.
45. OECD (1972) Manual of Industrial Project Analysis in Developing Countries. OECD, Paris.
46. R. LONG (1987) New Office Information Technology: Human and Managerial Implications. Croom Helm,
London.
47. E. M. AWAD (1988) Management Information Systems; Concepts, Structure and Applications. Benjamin Cummings,
California.
48. R. N. ANTHONY and G. L. WELSCH (1974) Fundamentals of Management Accounting. Richard D. Irwin, Homewood,
Illinois.
49. A. J. MERRIT and A. SYKES (1973) The Financing and Analysis of Capital Projects. Longman, London.
50. J. BATTY (1978) Advanced Cost Accounting. MacDonald & Evans, Plymouth.
51. D. HARVEY (1986) The Electronic Office in the Smaller Business. Wildwood House, Aldershot.
52. L. M. ROSE (1976) Engineering Investment Decisions: Planning under Uncertainty. North Holland, Amsterdam.
53. J. M. ANTILL (1973) Civil Engineering Management. Angus & Robertson, Sydney.
54. J. L. RIGGS (1977) Engineering Economics. McGraw-Hill, New York.
55. P. THOMPSON (1981) Organisation and Economics of Construction. McGraw-Hill, Maidenhead.
56. K. SMITH (1988) Corporate Accounting Systems: A Software Engineering Approach. Addison-Wesley, Wokingham.
57. L. N. PUTNAM (1980) Software Cost Estimating and Life Cycle Control. IEEE Computer Society Press, London.
58. B. BOEHM (1981) Software Engineering Economics. Prentice-Hall, New Jersey.
59. B. TWISS (1986) Managing Technological Innovation. Longman, Harlow.
60. K. P. NORRIS (1971) The accuracy of project cost and duration estimates in industrial R and D. R and D Mgmt 2,
25-36.
61. W. L. HANSEN (1963) Total and private rates of return to investment in schooling. J. Polit. Econ. 71, 128-140.
62. B. A. WEISBROD (1971) Costs and benefits of medical research. J. Polit. Econ. 79, 527-544.
63. E. MANSFIELD (1968) Industrial Research and Technological Innovation. Norton, London.
64. Australian Society of Accountants (ASA) (1984) Survey on the Use of Information Technology by Accountants.
65. C. B. CHAPMAN and D. F. COOPER (1985) A programmed equity-redemption approach to the finance of public projects. Managerial and Decision Economics 6, 112-118.
66. R. PRICE (1989) What is the payoff from end user computing (sic). Paper delivered at the British Accounting
Association Conference, March.
67. R. GALLIERS (1991) Strategic I.S. planning: myths, reality and guidelines for successful implementation. Eur. J. Info.
Syst. 1, 55-64.
68. C. COULSON-THOMAS (1991) Directors and IT, and IT Directors. Eur. J. Info. Syst. 1, 45-53.
69. DATAMATION (1991) The Datamation 100, Datamation 37(12), 6-90.
70. N. KAY (1979) The Innovating Firm. Macmillan, London.
71. C. GURNANI (1984) Capital budgeting: theory and practice. The Engineering Economist, 30 (Fall), 19-46.
72. J. MOORE and A. REICHERT (1983) An analysis of the financial management techniques currently employed by large
U.S. corporations. J. Business Finance and Accounting 10, 623-645.
73. R. PIKE (1983) The capital budgeting behaviour and corporate characteristics of capital constrained firms. J. Business
Finance and Accounting 10, 663-671.
74. E. MANSFIELD, J. RAPOPORT, J. SCHNEE, S. WAGNER and M. HAMBURGER (1971) Research and Innovation in the
Modern Corporation. Norton, London.
75. K. BINNING, cited in Twiss59 Managing Technological Innovation. Longman, Harlow.
76. S. HAKA, L. GORDON and G. PINCHES (1985) Sophisticated capital budgeting selection techniques and firm performance. Accounting Rev. 60, 651-669.
77. L. GORDON (1989) Benefit-cost analysis and resource allocation decisions. Accounting, Organisations and Society 14, 247-258.
78. R. MARRIS (1974) The Economic Theory of Managerial Capitalism. Macmillan, London.
79. R. M. CYERT and J. G. MARCH (1963) A Behavioural Theory of the Firm. Prentice Hall, Englewood Cliffs, NJ.
80. P. L. POWELL, N. A. D. CONNELL and J. HOLT (1992) An investigation into the practical uses of decision support and
expert systems in the USA. In Expert systems in finance, (D. O'LEARY and P. WATKINS, Eds), Elsevier, Amsterdam,
forthcoming.
81. P. STRASSMAN (1985) Information Payoff. Free Press, New York.
82. T. J. COOK and R. J. RIZZUTO (1989) Capital budgeting practices for R&D: a survey and analysis of Business Week's R&D scoreboard. Engineering Economist 34 (Summer), 291-304.
83. R. DURAND (1975) Cost analysis of DP centres. In Economics of Informatics (A. FRIELINK, Ed) pp 100-112. North-
Holland, Amsterdam.
84. A. WASSENAAR (1988) Information management in an industrial environment-an education perspective. In The
New Management Challenge: Information Systems for Improved Performance (D. BODDY, J. MCCALMAN and
D. BUCHANAN, Eds) pp 49-57. Croom Helm, London.
85. G. MITCHELL (1990) Alternative frameworks for technology evaluation. Eur. J. Opl Res. 47, 153-161.
86. E. TURBAN (1988) Decision Support and Expert Systems: Managerial Perspectives. Macmillan, New York.
87. D. BOBROW and M. STEFIK (1985) Paper delivered to the IEEE Conference on Expert Systems. London, Easter.
88. C. FREEMAN (1982) The Economics of Industrial Innovation. Frances Pinter, London.
89. J. PAGE and P. HOOPER (1987) Accounting and Information Systems. Prentice-Hall International, Englewood Cliffs.
90. B. BOEHM, J. BRAUN, H. KASPAR, M. LIPOW, C. MACLEOD and M. MERRITT (1978) Characteristics of Software Quality. North-Holland, Amsterdam.
91. KPMG Report in Information Age (1990) 12(3), July.
92. W. CURRIE (1989) The art of justifying new technology to top management. Omega 17, 409-418.