EDITORS' COMMENTS
A Critical Look at the Use of PLS-SEM in MIS Quarterly
By: Christian M. Ringle
Professor of Management
Hamburg University of Technology (TUHH) and University of Newcastle (Australia)
[email protected]

Marko Sarstedt
Assistant Professor of Quantitative Methods in Marketing and Management
Ludwig-Maximilians-University Munich and University of Newcastle (Australia)
[email protected]

Detmar W. Straub
Editor-in-Chief, MIS Quarterly
Professor of CIS, Georgia State University
[email protected]
Introduction
Wold's (1974, 1982) partial least squares structural equation modeling (PLS-SEM) approach and the advanced PLS-SEM algorithms by Lohmöller (1989) have enjoyed steady popularity as a key multivariate analysis method in management information systems (MIS) research (Gefen et al. 2011). Chin's (1998b) scholarly work and technology acceptance model (TAM) applications (e.g., Gefen and Straub 1997) are milestones that helped to establish PLS-SEM in MIS research. In light of the proliferation of SEM techniques, Gefen et al. (2011), updating Gefen et al. (2000), presented a comprehensive, organized, and contemporary summary of the minimum reporting requirements for SEM applications.

Such guidelines are of crucial importance for advancing research for several reasons. First, researchers wishing to apply findings from prior studies or wanting to contribute original research must comprehend other researchers' decisions in order to gauge the robustness of their findings. Likewise, when studies arrive at significantly different results, the natural course is to attempt to explain the differences in terms of the theory or concept employed, the empirical data used, and how the research method was applied. A lack of clarity on these issues, including the methodological applications, undermines the goals of such studies (Jackson et al. 2009). Even worse, the misapplication of a technique may result in misinterpretations of empirical outcomes and, hence, false conclusions.

Against this background, rigorous research has a long-standing tradition of critically reviewing prior practices of reporting standards and research method use (e.g., Boudreau et al. 2001). While the use of covariance-based SEM (CB-SEM) techniques has been well documented across disciplines (e.g., Medsker et al. 1994; Shook et al. 2004; Steenkamp and Baumgartner 2000), few reviews to date have investigated usage practices specific to PLS-SEM (see, however, Gefen et al. 2000). Previous reviews of such research practices were restricted to strategic management (Hulland 1999) and, more recently, marketing (Hair et al. 2012; Henseler et al. 2009) and accounting (Lee et al. 2011). The question thus arises as to how authors publishing in top IS journals such as MIS Quarterly have used PLS-SEM thus far, given the SEM recommendations of Gefen et al. (2011). By relating Gefen et al.'s (2011) reporting guidelines to actual practice, we attempt to identify potentially problematic areas in PLS-SEM use, problems which may explain some of the criticism of how it has been applied (e.g., Marcoulides et al. 2009; Marcoulides and Saunders 2006).
By reviewing previous PLS-SEM research in MIS Quarterly, we hope to increase awareness of established reporting standards. The results allow researchers to further improve the already good reporting practices established in MIS Quarterly and other top journals, and could thus serve as blueprints for conducting PLS-SEM analyses in other disciplines such as strategic management and marketing.
Our data sources were the ABI/INFORM Complete, EBSCO Business Source Complete, and JSTOR databases, as well as the journal's online archive (http://www.misq.org/archive/), which we searched using the keywords "partial least squares" and "PLS" in the full text of the articles. The list of results was then examined independently by two professors proficient in the technique in order to identify the studies eligible for inclusion in the review. In this process, conceptual papers and simulation studies on methodological aspects (e.g., Qureshi and Compeau 2009; Wetzels et al. 2009) were removed from the search list. To avoid the biasing effects of single studies, applications with more than 10 models per study were not considered in this analysis (e.g., Venkatesh et al. 2003). The complete list of studies is available in the Online Supplement to this editorial (http://www.misq.org/supplements/).
To shed further light on the quality of PLS-SEM use and results reporting in MIS Quarterly, we benchmarked our results against those obtained from a review of the three marketing journals with the highest journal impact factors according to the Thomson Reuters 2010 Journal Citation Reports: Journal of Marketing (JM), Journal of Marketing Research (JMR), and Journal of the Academy of Marketing Science (JAMS). These journals published 41 empirical PLS-SEM studies (with 60 models) in the 20-year period between 1992 and 2011. Tables 1 through 8 present these results vis-à-vis those from MIS Quarterly.
[Figure: PLS-SEM studies published in MIS Quarterly over time. Legend: the gray lines represent year-by-year totals; the line represents the cumulative number of studies.]
Table 1. Descriptive Statistics of Model Characteristics

[The cell values of Table 1 were not recoverable from the source. For MIS Quarterly and for JM, JMR, and JAMS, the table reports: number of latent variables (mean*, median, range); number of structural model relations (mean*, median, range); mode of measurement models (only reflective, only formative, reflective and formative, not specified); number of indicators per reflective construct** (mean*, median, range); number of indicators per formative construct*** (mean*, median, range); total number of indicators per model (mean*, median, range); number of models with control variables; and number of control variables (mean, median, range).]

*Estimate for 5% trimmed mean. **Includes only models with reflective indicators (N = 79 for MIS Quarterly; N = 50 for JM, JMR, and JAMS). ***Includes only models with formative indicators (N = 35 for MIS Quarterly; N = 50 for JM, JMR, and JAMS).

Table 2. Reporting of Model Characteristics (Proportion of Studies Reporting, %)

Criterion                                                  MIS Quarterly   JM, JMR, JAMS
Single-Item Constructs                                         47.69           51.22
Higher Order Constructs (Hierarchical Component Models)        23.08           36.59
Nonlinear Relationships                                         4.62            9.76
Model Modified in the Course of the Analysis                   27.69           19.51
  If Yes, Comparison with Initial Model                         9.23            0.00
Item Wordings Reported                                         89.23           82.93
Scales Reported                                                84.62           82.93
Scale Means and Standard Deviations Reported                   66.15           65.85
Correlation/Covariance Matrix Reported                         83.08           70.73

Note: The JM, JMR, and JAMS columns are based on the marketing journal review described in the text.
Finally, it is noteworthy that MIS Quarterly has established a high level of transparency through detailed standard reporting practices for measurement models. More than 80% of the studies provide item wordings, report item scales, and include the correlation or covariance matrix (see Table 2), as called for by Gefen et al. (2011). Two-thirds of all studies report scale means and standard deviations. Even though these reporting practices are satisfactory, they still leave room for improvement.
Imprudent Use of Formative Measurement Models

A key argument for employing PLS-SEM relates to the use of formative measurement models, since PLS-SEM readily handles both reflective and formative measures. Technically and implicitly, researchers accept the underlying assumptions of the PLS-SEM method (e.g., predictor specification; Lohmöller 1989; Wold 1982), which allow for the possibility of formative measurement models. However, automatically relying on PLS-SEM when using formative measures is not without its own problems, particularly because PLS-SEM estimates formative constructs without error terms (Diamantopoulos 2011). In practice, this circumstance is hard to defend because scholars cannot really be certain that the indicators account for all possible causes of the latent variable (Diamantopoulos 2006). This is why establishing an acceptable level of measurement validity before analyzing the structural relationships is essential in PLS-SEM studies (e.g., by establishing external validity via a redundancy analysis; Chin 1998b). Our review indicates that the average number of indicators is significantly smaller in formative than in reflective constructs (3.03 versus 3.58; p < 0.01). Given that formative constructs should represent the entire population of indicators (Diamantopoulos et al. 2008), one would generally expect formative measurement models to be more capacious than reflective ones. So this is a puzzle.

The Curse and Blessing of Single-Item Constructs

A much debated subject across disciplines is the use of single-item measures (e.g., Bergkvist and Rossiter 2007; Drolet and Morrison 2001; Wanous et al. 1997). Since PLS-SEM allows for the unrestricted use of single-item constructs, it is not surprising that many models (31 models, 47.69%) deploy them, as shown in Table 2. Single-item constructs have practical advantages (e.g., Fuchs and Diamantopoulos 2009), and there are circumstances in which researchers have no choice but to use them (Straub et al. 2004) and, thus, risk criticism for mono-operationalization bias (Cook and Campbell 1979). However, single-item constructs do not offer "more for less" (Sarstedt and Wilczynski 2009). In terms of psychometric properties, recent research shows that single items perform as well as multi-item scales only under very specific conditions (Diamantopoulos et al. 2012). As Diamantopoulos et al. (2012) point out, opting for single-item measures in most empirical settings is "a risky decision as the set of circumstances that would favor their use is unlikely to be frequently encountered in practice." This conclusion holds even more so for PLS-SEM: the method's tendency to bias estimates (i.e., an overestimation of the measurement model relations and an underestimation of the structural model relations) diminishes as the number of indicators and the number of observations increase (i.e., consistency at large; Lohmöller 1989; Wold 1982), so measuring a construct with few items, and in the extreme with a single item, works against this property.
Despite their ease of implementation in PLS-SEM, researchers should follow Diamantopoulos et al.'s (2012) guideline and consider single items (rather than a multi-item scale) only when (1) the sample is small (N < 50), and (2) expected effect sizes are 0.30 or lower, and (3) the items of the originating multi-item scale are highly homogeneous (Cronbach's alpha > 0.90), and (4) the items are semantically redundant.
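Because these four conditions must hold jointly, they lend themselves to a simple screening check before committing to a single-item measure. The following is a minimal sketch in Python; the function and its inputs (sample size, expected effect size, Cronbach's alpha of the originating scale, and a researcher judgment on semantic redundancy) are our own illustrative constructions mirroring the guideline, not part of Diamantopoulos et al.'s (2012) work.

    def single_item_defensible(n_obs, expected_effect_size, cronbach_alpha,
                               items_semantically_redundant):
        """Screening check mirroring Diamantopoulos et al. (2012): a single-item
        measure is defensible only if ALL four conditions hold."""
        return (n_obs < 50                         # (1) small sample
                and expected_effect_size <= 0.30   # (2) weak expected effects
                and cronbach_alpha > 0.90          # (3) highly homogeneous scale
                and items_semantically_redundant)  # (4) redundant item wordings

    # Example: N = 120 already violates condition (1), so a multi-item scale is advised.
    print(single_item_defensible(120, 0.25, 0.93, True))  # -> False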
Sampling Characteristics
The most prominent argument for choosing PLS-SEM in MIS Quarterly is the use of small sample sizes. This issue has been passionately debated in recent years (e.g., Goodhue et al. 2006; Marcoulides and Saunders 2006), with Gefen et al. (2011, p. iii) noting that "there is an apparent misuse of perceived leniencies such as assumptions about minimum sample sizes." Prior studies appearing in scholarly journals (e.g., Reinartz et al. 2009), including those more critical of the PLS-SEM method (e.g., Lu et al. 2011), indicate that PLS-SEM overcomes problematic model identification issues and is a powerful method for analyzing complex models with smaller samples. Nevertheless, like any other statistical technique, PLS-SEM is not immune to threats from data inadequacies, and researchers should make every effort to provide support for its statistical power in the research setting at hand. If commonly accepted standards of collecting adequate empirical data have been met (e.g., the identification and treatment of outliers and other influential observations, or the handling of missing values), PLS-SEM can indeed be a "silver bullet" in certain research situations (e.g., when models are relatively complex and representative data sets are rather small; Hair et al. 2011; Reinartz et al. 2009). As shown in Table 3, about a quarter of all models have fewer than 100 observations, and six models (5.50%) fail the commonly suggested rule of ten (Hair et al. 2011), which is admittedly only a rough guideline regarding minimum sample size requirements.
It is important to note that this practice cannot supplant additional power analyses (Chin 1998a), which as few as three studies (4.62%) carried out in an effort to provide further support for the adequacy of the resulting sample size. To address this, researchers could use power tables from regression (e.g., Cohen 1992) to determine minimum sample size requirements (Chin 2010). Other important sampling characteristics have also been analyzed infrequently or reported unsatisfactorily. While about one-third of the studies address nonresponse bias, only ten studies (15.38%) report the exact treatment of missing values, four studies (6.15%) broach the detection and treatment of influential observations (e.g., outliers), and only two studies (3.08%) use a holdout sample to validate their findings (see Table 3). Similarly, authors motivate their choice of PLS-SEM with distributional considerations in one-third of the studies, but only four studies (6.15%) specifically analyze the normality of their data. Given that highly skewed data inflate bootstrap standard errors (Hair et al. 2012), and given the well-known tendency of PLS-SEM to slightly underestimate structural model relationships (Dijkstra 1983), one needs to pay close attention to the data's distributional characteristics, especially when using relatively small Ns. In this context, it is important to note that even though PLS-SEM provides precise estimates in situations with extremely non-normal data (Cassel et al. 1999; Reinartz et al. 2009), motivating the use of PLS-SEM primarily on distributional grounds is not advisable in light of the multitude of robust covariance-based estimator options available (Gefen et al. 2011; Reinartz et al. 2009).
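As a hedged illustration of such a power analysis, the sketch below searches for the smallest sample size at which the F-test of a k-predictor regression detects a given f² effect size with 80% power at alpha = 0.05, using the standard noncentral F formulation that underlies Cohen's (1992) tables. The choice of k (e.g., the largest number of paths pointing at any endogenous construct) and of f² are assumptions the researcher must justify.

    from scipy.stats import f as f_dist, ncf

    def min_sample_size(k_predictors, f2=0.15, alpha=0.05, target_power=0.80):
        """Smallest N at which the overall F-test of a regression with
        k predictors reaches the target power for effect size f2."""
        n = k_predictors + 2  # smallest model with positive error df
        while True:
            df1, df2 = k_predictors, n - k_predictors - 1
            f_crit = f_dist.ppf(1 - alpha, df1, df2)       # critical F value
            power = 1 - ncf.cdf(f_crit, df1, df2, f2 * n)  # noncentrality = f2 * N
            if power >= target_power:
                return n
            n += 1

    # Example: four predictors and a medium effect size (f2 = 0.15)
    print(min_sample_size(4))  # roughly 85 observations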
[Tables 3 through 5, which report sampling characteristics and related reporting statistics, were flattened beyond reliable reconstruction in the source. Recoverable fragments: for JM, JMR, and JAMS (N = 60 models), the mean sample size is 210.88, the median 160, and the range 39 to 2,990; 11 of these models (18.33%) rely on fewer than 100 observations and 8 (13.33%) violate the rule of ten, compared with 22.94% and 5.50% in MIS Quarterly. A "Multicollinearity" row label is the only legible remnant of the formative measurement evaluation table.]
Measurement Model Evaluation

Most studies report indicator loadings (70 of 79 models, 88.61%) and measures of internal consistency, whether Cronbach's alpha (8 models, 10.13%), composite reliability (45 models, 56.96%), or both (22 models, 27.85%). All studies provide evidence of convergent validity, and most models assess discriminant validity (see Table 6), using approaches described in, for example, Straub et al. (2004) or Gefen and Straub (2005). Even though the handling of reflective measures suggests that researchers are following good practice, future PLS-SEM studies should continue to improve measurement validation (Boudreau et al. 2001; Straub 1989). Moreover, while researchers frequently use confirmatory factor analyses (CFA) prior to the model evaluation (26 models, 32.91%), it would actually be preferable to avoid this practice: because parameter estimates depend on the specific set-up of the analyzed model, it is more appropriate to evaluate these measures via PLS-SEM statistics.

Structural Model Evaluation

A common argument for using PLS-SEM is that the technique excels at prediction, and almost all model estimations use the coefficient of determination (R² values) to characterize the ability of the model to explain and predict the endogenous latent variables (see Table 7). However, only 13 models (11.93%) use Cohen's (1988) pseudo F-test (the f² effect size), which allows a scholar to evaluate an independent variable's incremental explanation of a dependent variable. Given that PLS-SEM is strong on prediction, it is disconcerting that none of the studies uses Stone's (1974) and Geisser's (1974) cross-validated redundancy measure Q², which allows for assessing the model's predictive relevance (Wold 1982). In addition, changes in Q² allow assessing the relative impact of the structural model on the prediction of the observed measures of an endogenous latent variable via the q² effect size (Chin 1998b). In light of our results, we urge researchers to use statistical criteria such as f², Q², and q² more frequently (Chin 1998b; Hair et al. 2011; Henseler et al. 2009) to make a stronger case for model predictive capabilities. Likewise, researchers should compare the theoretical model with the saturated model, which includes all possible paths, in order to "(1) verify that the significant paths in the theoretical model also remain significant in the saturated model, and (2) that adding the paths via the saturated model does not significantly increase the f²" (Gefen et al. 2011, p. viii). None of the MIS Quarterly studies we examined applied this analysis. Similarly, the estimation of alternative models (Gefen et al. 2011) is the exception, with 18 studies (27.69%) engaging in model modifications and only 6 studies (9.23%) providing a comparison with the initial model. In this context, and as emphasized by Rigdon, Preacher et al. (2011), researchers should think more broadly about the different relationships in the structural model and the measurement models (e.g., linear versus nonlinear relationships; Henseler et al. 2012). Methodological research should, therefore, make greater efforts to develop ways to explore different model set-ups in this respect. PLS-SEM studies in MIS Quarterly address, to some extent, additional hypothesized complexity in the model set-up (see Table 2 and Table 8), for instance, through mediator analyses (15 studies, 23.08%) and hierarchical component models (15 studies, 23.08%).
Only three studies (4.62%) examine nonlinear relationships, even though this kind of analysis can easily be carried out in PLS-SEM (Rigdon et al. 2010). Group analyses by means of continuous or categorical moderators are considered in 24 studies (36.92%), often without conveying any details on the methods deployed. This matters because alternative approaches to moderator analysis perform differently in PLS-SEM (Henseler and Chin 2010), and the limitations of conventional statistical tests in multigroup comparisons have been documented in the literature (Rigdon et al. 2010; Sarstedt, Henseler, and Ringle 2011). Moreover, only three studies (4.62%) address the issue of measurement model invariance in the context of multigroup analyses. Finally, it is noteworthy that far too few of the PLS-SEM studies in MIS Quarterly conducted supplementary analyses (see Table 8), most of which are summarized in the recommendations of Gefen et al. (2011) and Hair et al. (2012). For example, none of the studies addresses the critical issue of unobserved heterogeneity (which can be uncovered using, for example, FIMIX-PLS; Sarstedt, Becker et al. 2011; Sarstedt and Ringle 2010) and which, if not handled properly, can seriously compromise the results, their interpretation, and the conclusions drawn (Rigdon et al. 2010; Rigdon, Ringle et al. 2011; Ringle et al. 2010).
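The f² and q² effect sizes mentioned above are straightforward to compute once the model has been estimated twice, once with and once without the predictor construct in question. The sketch below is a minimal illustration; the R² and Q² inputs are assumed to come from the researcher's own PLS-SEM runs, and values of roughly 0.02, 0.15, and 0.35 are conventionally read as small, medium, and large effects.

    def f_squared(r2_included, r2_excluded):
        """Cohen's (1988) f2: change in R2 when a predictor construct
        is omitted from the structural model."""
        return (r2_included - r2_excluded) / (1.0 - r2_included)

    def q_squared_effect(q2_included, q2_excluded):
        """Analogous q2 effect size based on the cross-validated
        redundancy Q2 from the blindfolding procedure (Chin 1998b)."""
        return (q2_included - q2_excluded) / (1.0 - q2_included)

    # Example: omitting a predecessor construct drops R2 from 0.60 to 0.55.
    print(round(f_squared(0.60, 0.55), 3))  # 0.125, a small-to-medium effect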
Table 6. Measurement Model Evaluation (Proportion Reporting, %)

Criterion                          Empirical Test in PLS-SEM        MIS Quarterly   JM, JMR, JAMS
Indicator Reliability              Indicator Loadings                   88.61           54.00
Internal Consistency Reliability   Composite Reliability Only           56.96           32.00
                                   Cronbach's Alpha Only                10.13           10.00
                                   Both                                 27.85           20.00
Convergent Validity                AVE                                  88.61           56.00
                                   Other                                11.39            2.00
Discriminant Validity              Fornell-Larcker Criterion Only       36.71           40.00
                                   Cross-Loadings Only                   8.86            6.00
                                   Both                                 41.77           20.00
                                   Other                                 3.80            2.00
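For the Fornell-Larcker criterion in Table 6, each construct's AVE should exceed its squared correlation with every other construct in the model. A minimal sketch, assuming the construct correlation matrix and the AVE values have been taken from a PLS-SEM run:

    import numpy as np

    def fornell_larcker_ok(construct_corr, ave):
        """True if every construct's AVE exceeds its squared correlation
        with each other construct (Fornell-Larcker criterion)."""
        squared = np.asarray(construct_corr, dtype=float) ** 2
        np.fill_diagonal(squared, 0.0)  # ignore each construct's self-correlation
        return bool(np.all(np.asarray(ave)[:, None] > squared))

    corr = np.array([[1.00, 0.45, 0.30],
                     [0.45, 1.00, 0.52],
                     [0.30, 0.52, 1.00]])
    ave = np.array([0.61, 0.55, 0.68])
    print(fornell_larcker_ok(corr, ave))  # True: max squared correlation 0.27 < 0.55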
Table 7. Structural Model Evaluation

[The reporting percentages of Table 7 were not recoverable from the source. The criteria and their empirical tests in PLS-SEM are: coefficient of determination (R²) and f² effect size; predictive relevance (cross-validated redundancy Q²) and q² effect size; path coefficients (absolute values); significance of path coefficients (standard errors, significance levels, t-values, p-values); confidence intervals; and total effects.]
Table 8. Supplementary Analyses (Proportion Reporting, %)

Criterion                                                           MIS Quarterly   JM, JMR, JAMS
Common Method Variance                                                  40.00           29.27
Mediator Analysis                                                       23.08           34.15
Moderator Analysis: Continuous                                          12.31            0.00
Moderator Analysis: Categorical, Observed (Multigroup Comparison)       24.62           17.07
Moderator Analysis: Categorical, Unobserved (Model-Based
  Segmentation Techniques; e.g., FIMIX-PLS)                              0.00           12.20
Measurement Model Invariance                                             4.62            0.00
Tetrad Analysis                                                          1.54            7.32

[One additional value (0.00) in the source's JM, JMR, and JAMS column could not be assigned to a row.]
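For the categorical, observed (multigroup) comparisons in Table 8, one widely used option is the parametric test that compares a path coefficient across two groups using the bootstrap standard errors of the group-specific estimates; Sarstedt, Henseler, and Ringle (2011) discuss this test and its limitations alongside alternatives. Below is a minimal sketch of the pooled-variance form; the path coefficients, standard errors, and group sizes are assumed to come from separate group-wise PLS-SEM estimations.

    import math
    from scipy.stats import t as t_dist

    def multigroup_t_test(p1, se1, n1, p2, se2, n2):
        """Parametric two-group comparison of a PLS path coefficient
        using bootstrap standard errors (pooled-variance form)."""
        df = n1 + n2 - 2
        pooled_se = math.sqrt(((n1 - 1) ** 2 * se1 ** 2 +
                               (n2 - 1) ** 2 * se2 ** 2) / df)
        t_stat = (p1 - p2) / (pooled_se * math.sqrt(1.0 / n1 + 1.0 / n2))
        p_value = 2 * (1 - t_dist.cdf(abs(t_stat), df))
        return t_stat, p_value

    # Example: path of 0.45 (SE = 0.08, n = 120) versus 0.20 (SE = 0.10, n = 100)
    t_stat, p_value = multigroup_t_test(0.45, 0.08, 120, 0.20, 0.10, 100)
    print(round(t_stat, 2), round(p_value, 3))  # t ~ 1.98, p ~ 0.048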
In summary, our review suggests that researchers should pay particular attention to (1) aligning the goals of their research with the capabilities of PLS-SEM (that is, using PLS-SEM primarily for exploratory work and for prediction), (2) the suitability of the data used and the reporting of sampling and other statistics (e.g., distributions and statistical power calculations), (3) the use of formative measures and their evaluation, (4) the inclusion of additional structural model evaluation criteria in line with PLS-SEM's prediction-oriented goals, and (5) the reporting of the particular procedures and algorithmic options employed.
Conclusion
Every SEM approach has certain strengths but also exhibits clear constraints, constraints which limit its utility in certain research situations, as discussed and shown by authors such as Jöreskog and Wold (1982) and Reinartz et al. (2009) in their comparisons of CB-SEM and PLS-SEM.³ We thus call for a more informed and rigorous use of PLS-SEM. Much of the criticism currently found in the literature may be less related to PLS-SEM itself than to misuses of the method, or to the belief that a given analytical technique can overcome any challenge researchers face (e.g., the realized N). If correctly applied, PLS-SEM can indeed be a "silver bullet" for estimating causal models in many model and data situations (Hair et al. 2011), especially when complex models and secondary data are involved. Secondary data, whose use is becoming more and more common in business research, are typically collected without the benefit of a theoretical framework and are often not a good match for CB-SEM analysis. In light of CB-SEM's need for high-quality, specially developed manifest variables, PLS-SEM may often be the better choice for structural modeling of secondary data (Rigdon 2012). PLS-SEM is still catching up with the methodological advances made in CB-SEM over more than 25 years. Researchers must improve the method further and provide guidance on appropriate techniques to extend PLS-SEM analyses and apply them correctly. For instance, about a quarter of all studies included hierarchical component models, but only about half of these studies explain exactly how the models were set up (see Table 2). Thus, knowledge of the use of the different types of hierarchical component models in PLS-SEM (e.g., the formative-formative type) remains scant (for additional results on this technique and on how to apply it, see the Online Supplement to this editorial). Researchers should continue to make every possible effort to follow the many avenues for improving and extending the PLS-SEM method in order to make its use even more valuable for empirical researchers.

³This notion holds for every approach to SEM, including the recently proposed generalized structured component analysis (GSCA; Hwang et al. 2010; Hwang and Takane 2005). Henseler (2012) shows that the prominently claimed advantages of GSCA do not hold true when the method is correctly applied and evaluated.
Acknowledgments
We would like to thank Jörg Henseler, Radboud University Nijmegen, The Netherlands, for his helpful comments on earlier versions of this editorial.
References
Bagozzi, R. P. 2011. Measurement and Meaning in Information Systems and Organizational Research: Methodological and Philosophical Foundations, MIS Quarterly (35:2), pp. 261-292.
Baumgartner, H., and Homburg, C. 1996. Applications of Structural Equation Modeling in Marketing and Consumer Research: A Review, International Journal of Research in Marketing (13:2), pp. 139-161.
Bergkvist, L., and Rossiter, J. R. 2007. The Predictive Validity of Multiple-Item Versus Single-Item Measures of the Same Constructs, Journal of Marketing Research (44:2), pp. 175-184.
Boudreau, M.-C., Gefen, D., and Straub, D. W. 2001. Validation in Information Systems Research: A State-of-the-Art Assessment, MIS Quarterly (25:1), pp. 1-16.
Cassel, C., Hackl, P., and Westlund, A. H. 1999. Robustness of Partial Least-Squares Method for Estimating Latent Variable Quality Structures, Journal of Applied Statistics (26:4), pp. 435-446.
Chin, W. W. 1998a. Commentary: Issues and Opinion on Structural Equation Modeling, MIS Quarterly (22:1), pp. xii-xvi.
Chin, W. W. 1998b. The Partial Least Squares Approach to Structural Equation Modeling, in Modern Methods for Business Research, G. A. Marcoulides (ed.), Mahwah, NJ: Erlbaum, pp. 295-358.
Chin, W. W. 2003. PLS Graph 3.0, Soft Modeling Inc., Houston, TX.
Chin, W. W. 2010. How to Write Up and Report PLS Analyses, in Handbook of Partial Least Squares: Concepts, Methods and Applications in Marketing and Related Fields, V. E. Vinzi, W. W. Chin, J. Henseler, and H. Wang (eds.), Berlin: Springer, pp. 655-690.
Cohen, J. 1988. Statistical Power Analysis for the Behavioral Sciences (2nd ed.), Mahwah, NJ: Lawrence Erlbaum Associates.
Cohen, J. 1992. A Power Primer, Psychological Bulletin (112:1), pp. 155-159.
Cook, T. D., and Campbell, D. T. 1979. Quasi-Experimentation: Design and Analysis Issues for Field Settings, Boston: Houghton Mifflin.
Diamantopoulos, A. 2006. The Error Term in Formative Measurement Models: Interpretation and Modeling Implications, Journal of Modelling in Management (1:1), pp. 7-17.
Diamantopoulos, A. 2011. Incorporating Formative Measures into Covariance-Based Structural Equation Models, MIS Quarterly (35:2), pp. 335-358.
Diamantopoulos, A., Riefler, P., and Roth, K. P. 2008. Advancing Formative Measurement Models, Journal of Business Research (61:12), pp. 1203-1218.
Diamantopoulos, A., Sarstedt, M., Fuchs, C., Kaiser, S., and Wilczynski, P. 2012. Guidelines for Choosing Between Multi-Item and Single-Item Scales for Construct Measurement: A Predictive Validity Perspective, Journal of the Academy of Marketing Science, forthcoming.
Diamantopoulos, A., and Winklhofer, H. M. 2001. Index Construction with Formative Indicators: An Alternative to Scale Development, Journal of Marketing Research (38:2), pp. 269-277.
Dijkstra, T. 1983. Some Comments on Maximum Likelihood and Partial Least Squares Methods, Journal of Econometrics (22:1/2), pp. 67-90.
Drolet, A. L., and Morrison, D. G. 2001. Do We Really Need Multiple-Item Measures in Service Research?, Journal of Service Research (3:3), pp. 196-204.
Efron, B., and Tibshirani, R. 1986. Bootstrap Methods for Standard Errors, Confidence Intervals, and Other Measures of Statistical Accuracy, Statistical Science (1:1), pp. 54-75.
Fuchs, C., and Diamantopoulos, A. 2009. Using Single-Item Measures for Construct Measurement in Management Research: Conceptual Issues and Application Guidelines, Die Betriebswirtschaft (69:2), pp. 197-212.
Gefen, D., Rigdon, E. E., and Straub, D. W. 2011. Editor's Comment: An Update and Extension to SEM Guidelines for Administrative and Social Science Research, MIS Quarterly (35:2), pp. iii-xiv.
Gefen, D., and Straub, D. W. 1997. Gender Differences in the Perception and Use of E-Mail: An Extension to the Technology Acceptance Model, MIS Quarterly (21:4), pp. 389-400.
Gefen, D., and Straub, D. W. 2005. A Practical Guide to Factorial Validity Using PLS-Graph: Tutorial and Annotated Example, Communications of the AIS (16:Article 5), pp. 91-109.
Gefen, D., Straub, D. W., and Boudreau, M.-C. 2000. Structural Equation Modeling Techniques and Regression: Guidelines for Research Practice, Communications of the AIS (1:7), pp. 1-78.
Geisser, S. 1974. A Predictive Approach to the Random Effects Model, Biometrika (61:1), pp. 101-107.
Goodhue, D., Lewis, W., and Thompson, R. 2006. PLS, Small Sample Size, and Statistical Power in MIS Research, in Proceedings of the 39th Annual Hawaii International Conference on System Sciences, Los Alamitos, CA: IEEE Computer Society Press.
Gudergan, S. P., Ringle, C. M., Wende, S., and Will, A. 2008. Confirmatory Tetrad Analysis in PLS Path Modeling, Journal of Business Research (61:12), pp. 1238-1249.
Hair, J. F., Ringle, C. M., and Sarstedt, M. 2011. PLS-SEM: Indeed a Silver Bullet, Journal of Marketing Theory and Practice (19:2), pp. 139-151.
Hair, J. F., Sarstedt, M., Ringle, C. M., and Mena, J. A. 2012. An Assessment of the Use of Partial Least Squares Structural Equation Modeling in Marketing Research, Journal of the Academy of Marketing Science, forthcoming.
Henseler, J. 2012. Why Generalized Structured Component Analysis Is Not Universally Preferable to Structural Equation Modeling, Journal of the Academy of Marketing Science, forthcoming.
Henseler, J., and Chin, W. W. 2010. A Comparison of Approaches for the Analysis of Interaction Effects Between Latent Variables Using Partial Least Squares Path Modeling, Structural Equation Modeling (17:1), pp. 82-109.
Henseler, J., Fassott, G., Dijkstra, T. K., and Wilson, B. 2012. Analyzing Quadratic Effects of Formative Constructs by Means of Variance-Based Structural Equation Modelling, European Journal of Information Systems (21:1), pp. 99-112.
Henseler, J., Ringle, C. M., and Sinkovics, R. R. 2009. The Use of Partial Least Squares Path Modeling in International Marketing, Advances in International Marketing (20), pp. 277-320.
Hulland, J. 1999. Use of Partial Least Squares (PLS) in Strategic Management Research: A Review of Four Recent Studies, Strategic Management Journal (20:2), pp. 195-204.
Hwang, H., Malhotra, N. K., Kim, Y., Tomiuk, M. A., and Hong, S. 2010. A Comparative Study on Parameter Recovery of Three Approaches to Structural Equation Modeling, Journal of Marketing Research (47:4), pp. 699-712.
Hwang, H., and Takane, Y. 2005. Generalized Structured Component Analysis, Psychometrika (69:1), pp. 81-99.
Jackson, D. L., Gillaspy Jr., J. A., and Purc-Stephenson, R. 2009. Reporting Practices in Confirmatory Factor Analysis: An Overview and Some Recommendations, Psychological Methods (14:1), pp. 6-23.
Jöreskog, K. G., and Wold, H. 1982. The ML and PLS Techniques for Modeling with Latent Variables: Historical and Comparative Aspects, in Systems Under Indirect Observation, Part I, H. Wold and K. G. Jöreskog (eds.), Amsterdam: North-Holland, pp. 263-270.
Lee, L., Petter, S., Fayard, D., and Robinson, S. 2011. On the Use of Partial Least Squares Path Modeling in Accounting Research, International Journal of Accounting Information Systems (12:4), pp. 305-328.
Lohmöller, J.-B. 1989. Latent Variable Path Modeling with Partial Least Squares, Heidelberg: Physica.
Lu, I. R. R., Kwan, E., Thomas, D. R., and Cedzynski, M. 2011. Two New Methods for Estimating Structural Equation Models: An Illustration and a Comparison with Two Established Methods, International Journal of Research in Marketing (28:3), pp. 258-268.
MacKenzie, S. B., Podsakoff, P. M., and Podsakoff, N. P. 2011. Construct Measurement and Validation Procedures in MIS and Behavioral Research: Integrating New and Existing Techniques, MIS Quarterly (35:2), pp. 293-334.
Marcoulides, G. A., Chin, W. W., and Saunders, C. 2009. Foreword: A Critical Look at Partial Least Squares Modeling, MIS Quarterly (33:1), pp. 171-175.
Marcoulides, G. A., and Saunders, C. 2006. PLS: A Silver Bullet?, MIS Quarterly (30:2), pp. iii-ix.
Medsker, G. J., Williams, L. J., and Holahan, P. J. 1994. A Review of Current Practices for Evaluating Causal Models in Organizational Behavior and Human Resources Management Research, Journal of Management (20:2), pp. 439-464.
Petter, S., Straub, D., and Rai, A. 2007. Specifying Formative Constructs in Information Systems Research, MIS Quarterly (31:4), pp. 623-656.
Qureshi, I., and Compeau, D. R. 2009. Assessing Between-Group Differences in Information Systems Research: A Comparison of Covariance- and Component-Based SEM, MIS Quarterly (33:1), pp. 197-214.
Reinartz, W., Haenlein, M., and Henseler, J. 2009. An Empirical Comparison of the Efficacy of Covariance-Based and Variance-Based SEM, International Journal of Research in Marketing (26:4), pp. 332-344.
Rigdon, E. E. 2012. Partial Least Squares Path Modeling, in Structural Equation Modeling: A Second Course, G. R. Hancock and R. O. Mueller (eds.), Charlotte, NC: Information Age Publishing, forthcoming.
Rigdon, E. E., Preacher, K. J., Lee, N., Howell, R. D., Franke, G. R., and Borsboom, D. 2011. Overcoming Measurement Dogma: A Response to Rossiter, European Journal of Marketing (45:11/12), pp. 1589-1600.
Rigdon, E. E., Ringle, C. M., and Sarstedt, M. 2011. Structural Modeling of Heterogeneous Data with Partial Least Squares, in Review of Marketing Research, N. K. Malhotra (ed.), Armonk, NY: Sharpe, pp. 255-296.
Rigdon, E. E., Ringle, C. M., Sarstedt, M., and Gudergan, S. P. 2011. Assessing Heterogeneity in Customer Satisfaction Studies: Across Industry Similarities and Within Industry Differences, Advances in International Marketing (22), pp. 169-194.
Ringle, C. M., Sarstedt, M., and Mooi, E. A. 2010. Response-Based Segmentation Using Finite Mixture Partial Least Squares: Theoretical Foundations and an Application to American Customer Satisfaction Index Data, Annals of Information Systems (8), pp. 19-49.
Sarstedt, M., Becker, J.-M., Ringle, C. M., and Schwaiger, M. 2011. Uncovering and Treating Unobserved Heterogeneity with FIMIX-PLS: Which Model Selection Criterion Provides an Appropriate Number of Segments?, Schmalenbach Business Review (63:1), pp. 34-62.
Sarstedt, M., Henseler, J., and Ringle, C. M. 2011. Multi-Group Analysis in Partial Least Squares (PLS) Path Modeling: Alternative Methods and Empirical Results, Advances in International Marketing (22), pp. 195-218.
Sarstedt, M., and Ringle, C. M. 2010. Treating Unobserved Heterogeneity in PLS Path Modeling: A Comparison of FIMIX-PLS with Different Data Analysis Strategies, Journal of Applied Statistics (37:8), pp. 1299-1318.
Sarstedt, M., and Wilczynski, P. 2009. More for Less? A Comparison of Single-Item and Multi-Item Measures, Die Betriebswirtschaft (69:2), pp. 211-227.
Shah, R., and Goldstein, S. M. 2006. Use of Structural Equation Modeling in Operations Management Research: Looking Back and Forward, Journal of Operations Management (24:2), pp. 148-169.
Shook, C. L., Ketchen, D. J., Hult, T., and Kacmar, K. M. 2004. An Assessment of the Use of Structural Equation Modeling in Strategic Management Research, Strategic Management Journal (25:4), pp. 397-404.
Steenkamp, J.-B., and Baumgartner, H. 2000. On the Use of Structural Equation Models for Marketing Modeling, International Journal of Research in Marketing (17:2/3), pp. 195-202.
Stone, M. 1974. Cross-Validatory Choice and Assessment of Statistical Predictions, Journal of the Royal Statistical Society (36:2), pp. 111-147.
Straub, D. 1989. Validating Instruments in MIS Research, MIS Quarterly (13:2), pp. 147-169.
Straub, D., Boudreau, M.-C., and Gefen, D. 2004. Validation Guidelines for IS Positivist Research, Communications of the Association for Information Systems (13:Article 24), pp. 380-427.
Venkatesh, V., Morris, M. G., Davis, G. B., and Davis, F. D. 2003. User Acceptance of Information Technology: Toward a Unified View, MIS Quarterly (27:3), pp. 425-478.
Wanous, J. P., Reichers, A., and Hudy, M. J. 1997. Overall Job Satisfaction: How Good Are Single-Item Measures?, Journal of Applied Psychology (82:2), pp. 247-252.
Wetzels, M., Odekerken-Schröder, G., and van Oppen, C. 2009. Using PLS Path Modeling for Assessing Hierarchical Construct Models: Guidelines and Empirical Illustration, MIS Quarterly (33:1), pp. 177-195.
Wold, H. 1974. Causal Flows with Latent Variables: Partings of Ways in the Light of NIPALS Modelling, European Economic Review (5:1), pp. 67-86.
Wold, H. 1982. Soft Modeling: The Basic Design and Some Extensions, in Systems Under Indirect Observations: Part II, K. G. Jöreskog and H. Wold (eds.), Amsterdam: North-Holland, pp. 1-54.
Online Supplement

[List of reviewed MIS Quarterly studies; entries preceding "Iacovou" were not recovered from the source. All articles appeared in MIS Quarterly.]
Iacovou, C. L., Thompson, R. L., and Smith, H. J. 2009. Selective Status Reporting in Information Systems Projects: A Dyadic-Level Investigation (33:4), pp. 785-810.
Igbaria, M., Parasuraman, S., and Badawy, M. K. 1994. Work Experiences, Job Involvement, and Quality of Work Life Among Information Systems Personnel (18:2), pp. 175-201.
Igbaria, M., Zinatelli, N., Cragg, P., and Cavaye, A. L. M. 1997. Personal Computing Acceptance Factors in Small Firms: A Structural Equation Model (21:3), pp. 279-305.
Jiang, Z., and Benbasat, I. 2007. The Effects of Presentation Formats and Task Complexity on Online Consumers' Product Understanding (31:3), pp. 475-500.
Johnson, N. A., and Cooper, R. B. 2009. Power and Concession in Computer-Mediated Negotiations: An Examination of First Offers (33:1), pp. 147-170.
Johnston, A. C., and Warkentin, M. 2010. Fear Appeals and Information Security Behaviors: An Empirical Study (34:3), pp. 549-566.
Kamis, A., Koufaris, M., and Stern, T. 2008. Using an Attribute-Based Decision Support System for User-Customized Products Online: An Experimental Investigation (32:1), pp. 159-177.
Kanawattanachai, P., and Yoo, Y. 2007. The Impact of Knowledge Coordination on Virtual Team Performance Over Time (31:4), pp. 783-808.
Karahanna, E., Straub, D. W., and Chervany, N. L. 1999. Information Technology Adoption Across Time: A Cross-Sectional Comparison of Pre-Adoption and Post-Adoption Beliefs (23:2), pp. 183-213.
Karahanna, E., Agarwal, R., and Angst, C. M. 2006. Reconceptualizing Compatibility Beliefs in Technology Acceptance Research (30:4), pp. 781-804.
Klein, R., and Rai, A. 2009. Interfirm Strategic Information Flows in Logistics Supply Chain Relationships (33:4), pp. 735-762.
Ko, D.-G., Kirsch, L. J., and King, W. R. 2005. Antecedents of Knowledge Transfer from Consultants to Clients in Enterprise System Implementations (29:1), pp. 59-85.
Komiak, S. Y. X., and Benbasat, I. 2006. The Effects of Personalization and Familiarity on Trust and Adoption of Recommendation Agents (30:4), pp. 941-960.
Lee, G., and Xia, W. 2010. Toward Agile: An Integrated Analysis of Quantitative and Qualitative Field Data on Software Development Agility (34:1), pp. 87-114.
Lewis, W., Agarwal, R., and Sambamurthy, V. 2003. Sources of Influence on Beliefs about Information Technology Use: An Empirical Study of Knowledge Workers (27:4), pp. 657-678.
Liang, H., Saraf, N., Hu, Q., and Xue, Y. 2007. Assimilation of Enterprise Systems: The Effect of Institutional Pressures and the Mediating Role of Top Management (31:1), pp. 59-87.
Limayem, M., Hirt, S. G., and Cheung, C. M. K. 2007. How Habit Limits the Predictive Power of Intention: The Case of Information Systems Continuance (31:4), pp. 705-737.
Majchrzak, A., Beath, C. M., Lim, R. A., and Chin, W. W. 2005. Managing Client Dialogues During Information Systems Design to Facilitate Client Learning (29:4), pp. 653-672.
Moores, T. T., and Chang, J. C.-J. 2006. Ethical Decision Making in Software Piracy: Initial Development and a Test of a Four-Component Model (30:1), pp. 167-180.
Nadkarni, S., and Gupta, R. 2007. A Task-Based Model of Perceived Website Complexity (31:3), pp. 501-524.
Pavlou, P. A., and Fygenson, M. 2006. Understanding and Predicting Electronic Commerce Adoption: An Extension of the Theory of Planned Behavior (30:1), pp. 115-143.
Pavlou, P. A., Liang, H., and Xue, Y. 2007. Understanding and Mitigating Uncertainty in Online Exchange Relationships: A Principal-Agent Perspective (31:1), pp. 105-136.
Rai, A., Patnayakuni, R., and Seth, N. 2006. Firm Performance Impacts of Digitally Enabled Supply Chain Integration Capabilities (30:2), pp. 225-246.
Ravichandran, T., and Rai, A. 2000. Quality Management in Systems Development: An Organizational System Perspective (24:3), pp. 381-415.
Sarker, S., and Valacich, J. S. 2010. An Alternative to Methodological Individualism: A Non-Reductionist Approach to Studying Technology Adoption by Groups (34:4), pp. 779-808.
Sia, C. L., Lim, K. H., Leung, K., Lee, M. K. O., Huang, W. W., and Benbasat, I. 2009. Web Strategies to Promote Internet Shopping: Is Cultural-Customization Needed? (33:3), pp. 491-512.
Siponen, M., and Vance, A. 2010. Neutralization: New Insights into the Problem of Employee Information Systems Security Policy Violations (34:3), pp. 487-502.
Spears, J. L., and Barki, H. 2010. User Participation in Information Systems Security Risk Management (34:3), pp. 503-522.
Srite, M., and Karahanna, E. 2006. The Role of Espoused National Cultural Values in Technology Acceptance (30:3), pp. 679-704.
Stewart, K. J., and Gosain, S. 2006. The Impact of Ideology on Effectiveness in Open Source Software Development Teams (30:2), pp. 291-314.
Sykes, T. A., Venkatesh, V., and Gosain, S. 2009. Model of Acceptance with Peer Support: A Social Network Perspective to Understand Employees' System Use (33:2), pp. 371-393.
Teo, H. H., Wei, K. K., and Benbasat, I. 2003. Predicting Intention to Adopt Interorganizational Linkages: An Institutional Perspective (27:1), pp. 19-49.
Thatcher, J. B., and Perrewé, P. L. 2002. An Empirical Examination of Individual Traits as Antecedents to Computer Anxiety and Computer Self-Efficacy (26:4), pp. 381-396.
Turel, O., Serenko, A., and Giles, P. 2011. Integrating Technology Addiction and Use: An Empirical Investigation of Online Auction Users (35:4), pp. 1043-1061.
Venkatesh, V., and Morris, M. G. 2000. Why Don't Men Ever Stop to Ask for Directions? Gender, Social Influence, and Their Role in Technology Acceptance and Usage Behaviors (24:1), pp. 115-139.
Venkatesh, V., Brown, S. A., Maruping, L. M., and Bala, H. 2008. Predicting Different Conceptualizations of System Use: The Competing Roles of Behavioral Intention, Facilitating Conditions, and Behavioral Expectation (32:3), pp. 483-502.
Wang, W., and Benbasat, I. 2009. Interactive Decision Aids for Consumer Decision Making in E-Commerce: The Influence of Perceived Strategy Restrictiveness (33:2), pp. 293-320.
Wasko, M. M., and Faraj, S. 2005. Why Should I Share? Examining Social Capital and Knowledge Contribution in Electronic Networks of Practice (29:1), pp. 35-57.
Wells, J. D., Valacich, J. S., and Hess, T. J. 2011. What Signals Are You Sending? How Website Quality Influences Perceptions of Product Quality and Purchase Intentions (35:2), pp. 373-396.
Wixom, B. H., and Watson, H. J. 2001. An Empirical Investigation of the Factors Affecting Data Warehousing Success (25:1), pp. 17-41.
Yoo, Y., and Alavi, M. 2001. Media and Group Cohesion: Relative Influences on Social Presence, Task Participation, and Group Consensus (25:3), pp. 371-390.
Zhang, T., Agarwal, R., and Lucas Jr., H. C. 2011. The Value of IT-Enabled Retailer Learning: Personalized Product Recommendations and Customer Store Loyalty in Electronic Markets (35:4), pp. 859-881.
Zhu, K., Kraemer, K. L., Gurbaxani, V., and Xu, S. X. 2006. Migration to Open-Standard Interorganizational Systems: Network Effects, Switching Costs, and Path Dependency (30:SI), pp. 515-539.
[A supplement figure illustrating hierarchical component models was not recovered. Its legend: LOC = lower order component; HOC = higher order component; Y1 = exogenous latent variable in the structural model (its measurement model is not further specified in this illustration); Y2 = endogenous latent variable in the structural model; b1 = standardized path coefficient for the structural model relationship between the latent variables Y1 and Y2.]
In half of the formative-formative type and in a quarter of the reflective-formative type of hierarchical component model applications, the higher order component is endogenous. These model set-ups require particular attention when the repeated indicators approach is used, since almost all variance of the higher order component is then explained by its lower order components (R² ≈ 1.0; Figure B2). As a consequence, the path relationship from a predecessor latent variable to the endogenous higher order component is always approximately zero and nonsignificant. In this kind of situation, a mixture of the repeated indicators approach and the use of latent variable scores in a two-stage approach (similar to the two-stage approach in moderator analyses in PLS-SEM; Henseler and Chin 2010) is appropriate. In the first stage, one uses the repeated indicators approach to obtain the latent variable scores for the lower order components, which then, in the second stage, serve as manifest variables in the measurement model of the higher order component (Figure B2). Thereby, the higher order component is embedded in the nomological net in a way that allows other latent variables as predecessors to explain some of its variance, which may result in significant path relationships. Even though these explications further substantiate the use of hierarchical component models in PLS-SEM from a technical perspective, more knowledge is needed to integrate the theoretical and technical underpinnings. Future research on the appropriate use of formative measurement models in PLS-SEM must also address the use of formative-reflective, reflective-formative, and formative-formative types of hierarchical component models.
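The data flow of this mixed two-stage procedure can be summarized in pseudocode. The sketch below assumes a hypothetical estimate_pls() routine that accepts measurement and structural model specifications and exposes latent variable scores and path coefficients; any PLS-SEM implementation that returns construct scores could be substituted. It illustrates the procedure only and is not a reference implementation.

    # Stage 1: repeated indicators approach. The HOC is measured by all
    # indicators of its lower order components (LOCs); keep the LOC scores.
    # NOTE: estimate_pls(), .latent_scores, and .path_coefficients are
    # hypothetical placeholders, not a real library API.
    stage1 = estimate_pls(
        data,
        measurement={"LOC1": loc1_items, "LOC2": loc2_items,
                     "HOC": loc1_items + loc2_items,  # repeated indicators
                     "Y1": y1_items},
        structural=[("LOC1", "HOC"), ("LOC2", "HOC"), ("Y1", "HOC")])
    loc_scores = stage1.latent_scores[["LOC1", "LOC2"]]

    # Stage 2: the LOC scores serve as manifest variables of the HOC, so that
    # a predecessor such as Y1 can explain some of the HOC's variance.
    stage2 = estimate_pls(
        data.join(loc_scores),
        measurement={"HOC": ["LOC1", "LOC2"], "Y1": y1_items},
        structural=[("Y1", "HOC")])
    print(stage2.path_coefficients[("Y1", "HOC")])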
References
Chin, W. W. 1998. Commentary: Issues and Opinion on Structural Equation Modeling, MIS Quarterly (22:1), pp. xii-xvi.
Henseler, J., and Chin, W. W. 2010. A Comparison of Approaches for the Analysis of Interaction Effects Between Latent Variables Using Partial Least Squares Path Modeling, Structural Equation Modeling (17:1), pp. 82-109.
Lohmöller, J.-B. 1989. Latent Variable Path Modeling with Partial Least Squares, Heidelberg: Physica.
Wetzels, M., Odekerken-Schröder, G., and van Oppen, C. 2009. Using PLS Path Modeling for Assessing Hierarchical Construct Models: Guidelines and Empirical Illustration, MIS Quarterly (33:1), pp. 177-195.
Wold, H. 1982. Soft Modeling: The Basic Design and Some Extensions, in Systems Under Indirect Observations: Part II, K. G. Jöreskog and H. Wold (eds.), Amsterdam: North-Holland, pp. 1-54.