SUPPORT FOR COMPUTER USERS: CONCEPT DEVELOPMENT
AND MEASUREMENT
Dr. Mary Helen Fagan, University of Texas at Tyler,
[email protected]
Dr. Barbara Ross Wooldridge, University of Tampa,
[email protected]
Stern Neill, University of Washington,
[email protected]
ABSTRACT
This study explores how support for computer users can be conceptualized and measured in
information systems research. A number of studies have proposed that support for computer
users plays an important role in the acceptance and utilization of information technology
applications. In these studies, the support concept has been conceptualized in a variety of ways,
and the findings have often not been as hypothesized. The paper provides a conceptual
framework for understanding support for computer users, and then describes the development of
an instrument to measure support for computer users in a business school lab environment. The
paper should help further understanding and measurement of a concept that seems important, as
well as problematic, for information systems research.
Keywords: Computer user support, SERVPERF, instrument development, computer usage
INTRODUCTION
A number of studies have explored various factors that can influence the successful adoption and
utilization of information systems applications. Recently, the Technology Acceptance Model (TAM) has
encouraged a large stream of literature focusing on the role that an individual’s beliefs and attitudes play
in the development of their intentions to accept information technology (IT) and in their subsequent IT
usage (11). However, an intended behavior may not occur if the surrounding environment creates
barriers to carrying out the intention (e.g., difficulty in running the software). Furthermore, an information
technology may be adopted, but then used minimally, due to poor facilitating conditions (e.g., insufficient
training).
In TAM, support is conceptualized as a factor that influences the model’s two key
variables: perceived usefulness and perceived ease of use (11). A number of other researchers, taking
different approaches, have also recognized the importance of support, and have studied its role in a
variety of ways. In order to better understand this factor, this study provides a conceptual framework
for understanding support for computer users based upon a review of the literature. Then, the study
describes the development of an instrument intended to measure support for computer users in a
business school lab environment. A better understanding of how to conceptualize and measure support
for computer users should facilitate future research and enhance understanding of the technology
acceptance process.
CONCEPTUAL FRAMEWORK
A number of key studies illustrate how support has been conceptualized and measured. One early
effort identified 39 items that could affect computer user satisfaction (e.g., technical competence of the
IS staff, convenience of access, and vendor support) (4). Later research grouped factors affecting
computer user success into three groups, one of which comprised factors considered fully
controllable (e.g., user training, end-user support policies) (6). Based upon a review of the literature, it
appears that there are at least four main approaches that researchers use to
measure support/satisfaction in the IT literature. Each of these is briefly described below.
In developing a model of personal computer utilization, researchers (30) built upon the work of Triandis
(31) to hypothesize that behavior cannot occur if objective, facilitating conditions in the environment
prevent it. However, their hypothesis that there would be a positive relationship between facilitating
conditions and personal computer usage was not supported in their study (30). Since prior work
supported the expectation that support would be positively related to computer usage, the researchers
suggested that their findings were due, in part, to the measures they used and that “the operationalization
of facilitating conditions must take context into account” (30, p. 136). Subsequent researchers using the
same measures also expected to find that support would be positively related to self-efficacy and
performance expectations, key factors that the literature suggests influence computer usage (8).
However, contrary to their expectations, they found support was negatively related to self-efficacy and
to performance expectations, and in their discussion they concluded that additional research was needed
to follow up on these findings.
Another research stream has explored the influence that support has directly on usage and on other
factors which are expected to influence usage. In this research stream, organizational support has been
hypothesized to include two categories: 1) end-user support, which includes instruction and guidance in
system applications and 2) management support, which includes encouragement and the allocation of
resources (17). While subsequent work has found support for this conceptualization (2,18), other
studies have found, for example, that internal support was not relevant to personal computing
acceptance in small firms (19) and that organizational support was negatively related to Internet usage in
organizations (3).
A review of the literature also finds that a great deal of prior work has been done regarding service
quality. Information systems research has explored the factors that influence end user satisfaction (13).
An End-User Computing Satisfaction instrument based upon the SERVQUAL and USISF (User
Satisfaction with Information Services Function) scales has been tested and retested in a variety of
studies (20,22). The SERVQUAL scale is well known in the marketing field and was developed to
help service firms ensure ongoing high quality service to consumers (25,26,27). The SERVQUAL
instrument is intended to be relevant to all service sectors and consists of five dimensions (tangibility,
reliability, responsiveness, assurance and empathy). Although problems have been found, such as
questions about the reliability of the instrument’s tangibility construct, the overlap between
SERVQUAL and the other measures of support suggests that SERVQUAL could provide a well-known
and useful basis for assessing the support construct.
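For concreteness, a minimal Python sketch of the SERVQUAL “gap” computation follows. The item-to-dimension grouping assumes the conventional 22-item layout, and the ratings are random stand-ins rather than real responses; it illustrates the mechanics only, not the instrument itself.

    import numpy as np

    # Assumed grouping of the 22 SERVQUAL items into the five dimensions
    # (indices are hypothetical positions in the questionnaire).
    dimensions = {
        "tangibility": [0, 1, 2, 3],
        "reliability": [4, 5, 6, 7, 8],
        "responsiveness": [9, 10, 11, 12],
        "assurance": [13, 14, 15, 16],
        "empathy": [17, 18, 19, 20, 21],
    }

    rng = np.random.default_rng(0)
    E = rng.integers(1, 8, size=22)  # stand-in expectation ratings (7-point)
    P = rng.integers(1, 8, size=22)  # stand-in perception ratings (7-point)

    # SERVQUAL scores each dimension as the mean perception-minus-expectation gap.
    gap_scores = {d: float((P[idx] - E[idx]).mean()) for d, idx in dimensions.items()}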
Another approach to the measurement of support in the IT literature is based upon the development of
unique measures related to the specific context in which the behavior of interest occurs. This approach
is used by studies based upon the Theory of Planned Behavior (1) which propose that intention to
perform a behavior is predicted, in part, by perceived behavioral control. Perceived behavioral
control reflects an individual’s perception of whether the necessary resources are available to
achieve a particular outcome. Perceived behavioral control is thus comparable to the measures of facilitating
conditions that are used in other studies measuring support (31). However, since the Theory of
Planned Behavior “requires a pilot study to identify relevant outcomes, referent groups,
and control variables in every context in which it is used” (23, p. 178), it requires significantly more
effort to employ than measures that are applied in the same way across various applications.
This literature review indicated that the research has found support to be directly related to usage, and
that support is also related to a number of key factors that the literature suggests will influence usage. A
model that summarizes some of these relationships is depicted below (Figure 1). The literature review
also suggests that further work is needed to clarify the concept of support, and that valid and reliable
measures are needed before the nature of the relationships can be better understood.
Figure 1: Role of Computer Support [model diagram: support shown relating to usage directly and through perceived usefulness, self-efficacy, and performance expectations]
MEASUREMENT OF THE SUPPORT CONCEPT
The remainder of this paper describes one effort to measure support for computer users in a
business school environment. Business students who are learning new computer applications and using
software to complete educational assignments often rely upon effective computer labs and support
personnel in order to be successful. Students may be motivated to learn computer applications and
faculty may develop effective learning materials. However, if computer support is problematic (e.g.,
computer lab facilities are not accessible when needed), then pedagogical goals and learning outcomes
may be undermined (12).
First, an exploratory study was conducted using SERVQUAL to assess its usefulness in measuring
support for computer users in a business school lab environment. However, despite the fact that
SERVQUAL was developed with a similar population (business school students), a number of issues
arose when it was used in this context. The concerns raised by the exploratory study included the
following: 1) the expectations section yielded no usable results, 2) the expected five dimensions
were not identified, and 3) the instrument’s twenty-two items did not adequately predict user
satisfaction with the computer lab/service center. The fact that SERVQUAL did not prove to be an
adequate instrument in this study might bolster the contention that measures of support/facilitating
conditions must take the particular context into account in order to adequately understand a particular
usage behavior (28).
Since the exploratory survey using SERVQUAL indicated some potentially serious problems, it was
decided that a new scale needed to be developed to measure support for users in a computer lab
environment. The new University Computer User Evaluation Scale (UCUES) was created as a result of
rigorous scale development procedures (5). Organizations that wish to better understand the role that
computer support plays in user decisions to adopt information technology applications may wish to
consider a similar approach tailored to their needs, or one that builds upon the process used in the
Theory of Planned Behavior (1). The scale development process, which is summarized below, consists
of: 1) instrument format revision, 2) focus group research, 3) dimension and item generation, 4)
purification, and 5) assessment of internal consistency, discriminant validity, and construct validity.
Instrument Format Revision
Due to the problems the exploratory study found with the SERVQUAL measurement of expectations,
the format of the instrument was revised. The format used by the SERVPERF scale was adopted
instead. SERVPERF is an alternative measure that uses SERVQUAL’s five dimensions but omits
the expectations measures (9,10). By modifying the instrument to measure only performance, it
was hoped that the problems found in the exploratory study would be overcome.
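Continuing the hypothetical SERVQUAL sketch above, a SERVPERF-style computation keeps the same dimensions and items but scores performance alone; again the grouping and ratings are invented stand-ins.

    import numpy as np

    # Same assumed item-to-dimension grouping as in the SERVQUAL sketch.
    dimensions = {
        "tangibility": [0, 1, 2, 3],
        "reliability": [4, 5, 6, 7, 8],
        "responsiveness": [9, 10, 11, 12],
        "assurance": [13, 14, 15, 16],
        "empathy": [17, 18, 19, 20, 21],
    }

    rng = np.random.default_rng(0)
    P = rng.integers(1, 8, size=22)  # stand-in performance ratings only

    # SERVPERF omits the expectations battery entirely.
    servperf_scores = {d: float(P[idx].mean()) for d, idx in dimensions.items()}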
Focus Group Research
Focus group research was important to the development of a new and improved instrument since it
allowed the researchers to obtain information in the students’ own words and enabled students to
interact with each other and with the researcher in developing their responses. Three focus group
meetings were held in order to develop a more detailed and specific understanding of the dimensions
that students felt were important to the performance of a computer lab. An understanding of the
dimensions that students felt were relevant was especially important in light of the problems the
exploratory study uncovered with the reliability of the SERVQUAL instrument’s dimensions.
Dimension and Item Generation
Based upon the exploratory study and the input from the focus groups, the researchers identified six
dimensions that appeared relevant to the evaluation of computer service and support in business school
labs. An initial pool of 219 items was generated for these six dimensions from the prior exploratory
study and the focus group input. Three Ph.D. students with knowledge of personal computers and the
computer labs were asked to judge the applicability of the items. An item was retained if at least
two judges agreed that it was very applicable and no judge deemed it non-applicable (a simple
sketch of this retention rule follows the list below). A total of 108 items were retained for the
following dimensions:
a) access - the lab and its services can be utilized when needed (15 items)
b) assistance - the help from lab employees (17 items)
c) atmosphere - the environment of the lab (15 items)
d) facilities - the lab tools used to complete tasks (17 items)
e) reliability - the lab equipment’s dependability and accuracy (19 items)
f) self-learning - the ability of an individual to learn how to complete tasks without help from lab personnel (15 items)
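A minimal sketch of the retention rule, assuming a hypothetical coding in which each judge rates an item 2 (very applicable), 1 (somewhat applicable), or 0 (non-applicable):

    def retain(ratings):
        """Keep an item if at least two judges rate it very applicable (2)
        and no judge deems it non-applicable (0)."""
        return sum(r == 2 for r in ratings) >= 2 and 0 not in ratings

    assert retain([2, 2, 1])       # two judges agree, none object
    assert not retain([2, 2, 0])   # one judge deems the item non-applicable
    assert not retain([2, 1, 1])   # only one judge finds it very applicable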
Purification
In order to purify the scale and assess its reliability, a sample of 255 students was asked to complete
the instrument and evaluate the computer lab’s performance on a ten-point scale. The resulting data
were analyzed using principal components factor analysis with an unconstrained varimax rotation (7),
which resulted in a seven-factor solution consisting of 61 items that accounted for 68.8% of the
variance, with eigenvalues ranging from 20.8 to 2.3. The assistance, atmosphere and self-learning
dimensions were stable. However, the dimensions of access, facilities and reliability were refined into
four dimensions: hours of operation, access to equipment, quality of printed output, and reliability of
equipment/software. A second phase of data gathering was conducted with an additional 188 students,
which identified items that respondents felt were unclear. Approximately 5% of the questions were
re-worded to make them work better with the “poor to excellent” scale. Following these changes to the
questions, a third sample was collected when all students enrolled in business courses (870
respondents) were asked to complete the revised questionnaire. A third factor analysis was conducted
that resulted in a seven-factor solution. The three phases of data gathering and principal components
factor analysis thus reduced the initial 108 items to a 40-item scale representing key aspects of
computer lab performance. The seven dimensions that make up the scale are assistance, access to
equipment, hours of operation, quality of printed output, atmosphere, reliability, and self-learning.
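The extraction and rotation step can be illustrated in Python with a standard varimax routine; the data below are random stand-ins, so this sketch shows the mechanics rather than reproducing the reported 68.8% solution.

    import numpy as np

    def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
        """Rotate a loading matrix toward the varimax criterion."""
        p, k = loadings.shape
        R = np.eye(k)
        d = 0.0
        for _ in range(max_iter):
            L = loadings @ R
            u, s, vt = np.linalg.svd(
                loadings.T @ (L**3 - (gamma / p) * L @ np.diag((L**2).sum(axis=0))))
            R = u @ vt
            if d > 0 and s.sum() / d < 1 + tol:
                break
            d = s.sum()
        return loadings @ R

    # Hypothetical 255 x 108 matrix of ten-point ratings.
    rng = np.random.default_rng(0)
    X = rng.integers(1, 11, size=(255, 108)).astype(float)

    # Principal components of the item correlation matrix.
    eigvals, eigvecs = np.linalg.eigh(np.corrcoef(X, rowvar=False))
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]

    k = 7  # factors retained in the study's first-phase solution
    loadings = eigvecs[:, :k] * np.sqrt(eigvals[:k])
    rotated = varimax(loadings)
    variance_explained = eigvals[:k].sum() / eigvals.sum()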
Assessment of Internal Consistency, Discriminant Validity, and Construct Validity
LISREL VIII (21) was used for confirmatory factor analysis. The intercorrelations among the 40 items
on the scale were input into LISREL in order to examine the internal consistency and discriminant
validity of the seven-factor model. The analysis indicated that the self-learning factor had low
correlations with the other factors, suggesting that self-learning might not be part of the computer
lab performance construct (24). After reconsidering the conceptual reasons for including self-learning
and the analysis results, the researchers determined that self-learning is not a key attribute that
students use to evaluate computer labs, and the factor was dropped. The discriminant validity of the
six-factor model was supported by a number of tests (e.g., analysis of the φ estimates, the average
variance extracted for each construct, and the confidence interval around φ between each pair of
constructs) (14,15,16). This analysis indicates that the six-factor model adequately assesses users’
perceptions of computer lab performance.
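One of the cited tests, the Fornell and Larcker (14) comparison of each construct’s average variance extracted (AVE) against the squared φ between construct pairs, can be sketched as follows; the loadings and the φ value are hypothetical illustrations, not the study’s estimates.

    import numpy as np

    def ave(standardized_loadings):
        """Average variance extracted from a construct's standardized loadings."""
        lam = np.asarray(standardized_loadings, dtype=float)
        return float((lam**2).mean())

    # Hypothetical standardized loadings for two of the six factors.
    ave_assistance = ave([0.82, 0.79, 0.85, 0.77])
    ave_reliability = ave([0.74, 0.81, 0.78])

    phi = 0.52  # hypothetical estimated correlation between the two constructs

    # Discriminant validity holds for the pair when each AVE exceeds phi squared.
    assert min(ave_assistance, ave_reliability) > phi**2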
Finally, multiple regression analysis was performed to determine the importance of each dimension,
and this analysis indicated substantial improvement over the initial exploratory analysis. Those who are
interested in this approach are encouraged to obtain a copy of the instrument and to refer to the detailed
description of the steps and analysis involved in this rigorous scale development process (29).
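As an illustration of this final step, the sketch below regresses a hypothetical overall lab evaluation on six dimension scores; all data and weights are invented for the example.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 870  # respondents in the third data-gathering phase

    # Hypothetical dimension scores (six per student) and an overall rating
    # generated from invented weights plus noise.
    dims = rng.normal(size=(n, 6))
    weights = np.array([0.40, 0.30, 0.20, 0.30, 0.25, 0.15])
    overall = dims @ weights + rng.normal(scale=0.5, size=n)

    # Ordinary least squares: the fitted coefficients indicate each
    # dimension's relative importance to the overall evaluation.
    X = np.column_stack([np.ones(n), dims])
    coefs, *_ = np.linalg.lstsq(X, overall, rcond=None)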
CONCLUSION
Support for computer users can be expected to relate directly to computer usage, as well as to a number
of factors that indirectly influence usage such as self-efficacy, perceived usefulness, and performance
expectations. Information systems researchers have used a variety of measures to operationalize the
support construct and have obtained some mixed results. This study evaluated the usefulness of the
SERVQUAL instrument for measuring support for computer users in a business school lab environment.
When the SERVQUAL instrument did not prove satisfactory, a new instrument was developed to
measure computer user support following rigorous scale development procedures. This research may
assist others who wish to study support for computer users, either by employing existing measures such
as SERVQUAL or by developing a scale that is directly related to the context in which the expected
behavior occurs.
REFERENCES
1. Ajzen, I. (1985). From Intentions to Actions: A Theory of Planned Behavior, in Kuhl, J. and
Beckman, J. (eds.), Action Control: From Cognition to Behavior, New York: Springer-Verlag, 11-39.
2. Anakwe, U. P., Igbaria, M. and Anandarajan, M. (2000). Management Practices Across Cultures:
Role of Support in Technology Usage, Journal of International Business Studies, 31(4), 653-666.
3. Anandarajan, M., Simmers, C. and Igbaria, M. (2000). An Exploratory Investigation of the
Antecedents and Impact of Internet Usage: An Individual Perspective, Behavior and Information
Technology, 19(1), 69-85.
4. Bailey, J. E. and Pearson, S. W. (1983). Development of a Tool for Measuring and Analyzing
Computer User Satisfaction, Management Science, 29(5), 530-545.
5. Bearden, W. O., Netemeyer, R. G., and Mobley, M. F. (1993). Handbook of Marketing Scales:
Multi-Item Measures for Marketing and Consumer Behavior Research. Newbury Park, CA: Sage
Publishing, Inc.
6. Cheney, P. H., Mann, R., and Amoroso, D. L. (1986). Organizational Factors Affecting the Success
of End User Computing, Journal of Management Information Systems, 3, 65-80.
7. Churchill, G. (1979). A Paradigm for Developing Better Measures of Marketing Constructs, Journal
of Marketing Research, 16, 54-73.
8. Compeau, D. R. and Higgins, C. A. (1995). Computer Self-Efficacy: Development of a Measure
and Initial Test, MIS Quarterly, 19(2), 189-211.
9. Cronin, J. and Taylor, S. A. (1992). Measuring Service Quality: A Reexamination and Extension,
Journal of Marketing, 56(3), 55-68.
10. Cronin, J. and Taylor, S. A. (1994). SERVPERF Versus SERVQUAL: Reconciling Performance-Based
and Perceptions-Minus-Expectations Measurement of Service Quality, Journal of Marketing,
58(1), 125-131.
11. Davis, F. D., Bagozzi, R. P., and Warshaw, P. R. (1989). User Acceptance of Computer
Technology: A Comparison of Two Theoretical Models, Management Science, 35(8), 982-1003.
12. DeNisco, J. and Sackmary, B. (1990). Target Marketing for Business Microcomputer Labs,
Proceedings of the 1990 AMA Microcomputers in Marketing Education Conference.
13. Doll, W. J. and Torkzadeh, G. (1988). The Measurement of End-User Computing Satisfaction, MIS
Quarterly, (June), 259-273.
14. Fornell, C. and Larcker, D. F. (1981). Evaluating Structural Equation Models with Unobservable
Variables and Measurement Error, Journal of Marketing Research, 18 (February), 39-50.
15. Gerbing, D. W. and Anderson, J. C. (1984). On the Meaning of Within-Factor Correlated
Measurement Errors, Journal of Consumer Research, 11 (June).
16. Gerbing, D. W. and Anderson, J. C. (1988). An Updated Paradigm for Scale Development
Incorporating Unidimensionality and its Assessment, Journal of Marketing Research, 25 (May), 186-192.
17. Igbaria M. (1990). End-User Computing Effectiveness: A Structural Equation Model, OMEGA,
18(6), 637-652.
18. Igbaria, M., Guimaraes, T. and Davis, G. B. (1995). Testing the Determinants of Microcomputer
Usage via a Structural Equation Model, Journal of Management Information Systems, 11(4), 87-114.
19. Igbaria, M., Zinatelli, N., Cragg, P. and Cavaye, A. L. M. (1997) Personal Computing Acceptance
in Small Firms: A Structural Equation Model, MIS Quarterly, 21(3).
20. Jiang, J. J., Klein, G., and Crampton, S. M. (2000). A Note on SERVQUAL Reliability and
Validity in Information System Service Quality Measurement, Decision Sciences, 31(3), 725-744.
21. Joreskog, K. and Sorbom, D. (1993). LISREL: Analysis of Linear Structural Relations by the
Method of Maximum Likelihood, Version VIII. Chicago: National Education Resources.
22. Kettinger, W. J. and Lee, C. C. (1994). Perceived Service Quality and User Satisfaction with the
Information Services Function, Decision Sciences, 25(5/6), 737-766.
23. Mathieson, K., (1991), Predicting User Intentions: Comparing the Technology Acceptance Model
with the Theory of Planned Behavior, Information Systems Research, 2(3), 173-191.
24. Netemeyer, R. G., Burton, S. and Lichtenstein, D. R. (1995). Trait Aspects of Vanity:
Measurement and Relevance to Consumer Behavior, Journal of Consumer Research, 21(4).
25. Parasuraman, A., Zeithaml, V. A., and Berry, L. L. (1985). A Conceptual Model of Service Quality
and Its Implications for Future Research, Journal of Marketing, 49(4), 41-50.
26. Parasuraman, A., Zeithaml, V. A., and Berry, L. L. (1988). SERVQUAL: A Multiple-Item Scale for
Measuring Consumer Perceptions of Service Quality, Journal of Retailing, 64(1), 12-40.
27. Parasuraman, A., Zeithaml, V. A., and Berry, L. L. (1994). Reassessment of Expectations as a
Comparison Standard in Measuring Service Quality: Implications for Further Research, Journal of
Marketing, 58(1), 111-124.
28. Pitt, L. F., Watson, R. T., and Kavan, C. B. (1995). Service Quality: A Measure of Information
Systems Effectiveness, MIS Quarterly, 19(2), 173-187.
29. Ross, B. J., Maxham, J. G., and Neill, S. (1996). UCUES: An Evaluation Scale for Business
College Personal Computer Labs, in Stuart, E. W., Ortinau, D. J. and Moore, E. M. (eds.),
Marketing: Moving Towards the 21st Century, Southern Marketing Association, 24-29.
30. Thompson, R. L., Higgins, C. A., and Howell, J. M. (1991). Personal Computing: Toward a
Conceptual Model of Utilization, MIS Quarterly, 15(1), 125-143.
31. Triandis, H. C. (1980). Values, Attitudes, and Interpersonal Behavior, in Nebraska Symposium
on Motivation 1979: Beliefs, Attitudes and Values, Lincoln, NE: University of Nebraska Press, 195-259.