10 1002@per 2265

Download as pdf or txt
Download as pdf or txt
You are on page 1of 14

European Journal of Personality, Eur. J. Pers.

(2020)
Published online in Wiley Online Library (wileyonlinelibrary.com) DOI: 10.1002/per.2265

A Psychometric Network Perspective on the Validity and Validation of Personality


Trait Questionnaires

ALEXANDER P. CHRISTENSEN1*, HUDSON GOLINO2 and PAUL J. SILVIA1


1
Department of Psychology, University of North Carolina at Greensboro, Greensboro, NC USA
2
Department of Psychology, University of Virginia, Charlottesville, VA USA

Abstract: This article reviews the causal implications of latent variable and psychometric network models for the vali-
dation of personality trait questionnaires. These models imply different data generating mechanisms that have important
consequences for the validity and validation of questionnaires. From this review, we formalize a framework for assessing
the evidence for the validity of questionnaires from the psychometric network perspective. We focus specifically on the
structural phase of validation, where items are assessed for redundancy, dimensionality, and internal structure. In this
discussion, we underline the importance of identifying unique personality components (i.e. an item or set of items that
share a unique common cause) and representing the breadth of each trait’s domain in personality networks. After, we ar-
gue that psychometric network models have measures that are statistically equivalent to factor models but we suggest that
their substantive interpretations differ. Finally, we provide a novel measure of structural consistency, which provides com-
plementary information to internal consistency measures. We close with future directions for how external validation can
be executed using psychometric network models. © 2020 European Association of Personality Psychology

Key words: assessment; measurement; network analysis; personality scales and inventories; validity

INTRODUCTION One promising model comes from the emerging field of


network psychometrics (Epskamp et al., 2018a). Psychomet-
What are personality traits? Your answer likely implies cer- ric network models have a simple representation: nodes (cir-
tain hypotheses about the existence of traits and their under- cles) represent variables (e.g. questionnaire items), and edges
lying data generating mechanisms. These hypotheses are (lines) represent the unique associations (e.g. partial correla-
usually supported by your choice of psychometric model tions) between nodes (Epskamp and Fried, 2018; Epskamp
(Borsboom, 2006). Psychometric models come with a num- et al., 2018b). This representation supports the theoretical
ber of assumptions such as how traits cause variation in your perspective, often referred to as the network approach
measures and the meaning of scores derived from these mea- (Borsboom, 2008, 2017), that psychological attributes are
sures (Borsboom, Cramer, Kievit, Scholten, and Franić, complex systems of observable behaviours that dynamically
2009; Cramer, 2012). These models also come with a num- and mutually reinforce one another (Schmittmann et al.,
ber of other consequences such as considerations about 2013).
how scales should be developed and validated. From this perspective, personality traits resemble an
For many researchers, personality traits are complex sys- emergent property of the interactions that occur between
tems—that is, traits are systems in that they are composed of unique behavioural components—that is, traits are not any
many components which interact with one another and com- single component of the system but rather a feature of the
plex in that their interactions with other systems are difficult system as a whole (Baumert et al., 2017; Cramer et al.,
to derive because of their dependencies and properties. De- 2012a). This suggests that traits emerge because some char-
spite this view, personality traits are usually not modelled acteristics and processes within individual people tend to co-
this way. Most psychometric models provide parsimonious vary more than others (Mõttus and Allerhand, 2017), and
perspectives on personality traits, which may arbitrarily when these relevant processes are aggregated, they reflect
carve joints into the fuzzy nature of personality. In addition, meaningful differences between people in the population
these models have causal implications that some researchers (e.g. trait domains; Borkenau and Ostendorf, 1998; Cramer
might not agree with. Therefore, there is a need for models et al., 2012a).
that better align with how researchers think about Such an explanation of traits affords a novel context for
personality. how they should be assessed. First, it implies that behaviours
that are associated with one trait may directly influence be-
haviours of another trait—the lines separating traits are more
fuzzy than they are distinct (e.g. comorbidity in psychopa-
*Correspondence to: Alexander P. Christensen, Department of Psychol- thology; Cramer, Waldrop, van der Maas, and Borsboom,
ogy, University of North Carolina at Greensboro, P.O. Box 26170, Greens-
boro, NC 27402-6170, USA.
2010). Second, there is an emphasis on a trait’s components
Email: [email protected] as much as the trait itself—a trait’s observable components

Received 7 December 2019


© 2020 European Association of Personality Psychology Revised 1 April 2020, Accepted 3 April 2020
A. Christensen et al.

do not measure the trait but are instead part of the trait In this section, we provide an overview of what this def-
(Schmittmann et al., 2013). This suggests that a behaviour inition means for the validity of personality trait question-
such as liking to go to parties is only one part of a causal col- naires. Most of the heavy lifting for the relation between a
lection of behaviours that we call extraversion (Borsboom, personality trait and questionnaire is done by a researcher’s
2008; Cramer, 2012). This represents a refocusing on what choice of psychometric model (Borsboom, 2006). The most
parts of a trait are being measured rather than the premise common model used for validation in the personality litera-
that the trait itself is being measured. Finally, this explanation ture is the latent variable model (Flake et al., 2017). There-
proposes that the behavioural components of a trait are fore, we begin our discussion of validity by briefly
unique, meaning they have distinct causes (Cramer et al., reviewing how latent variable models make sense of the test
2012a). This suggests that there should be a shift in how validity criteria (for a more through treatment, see
we model existing questionnaires where many related items Borsboom, Mellenbergh, and van Heerden, 2003). We then
are often used to measure a single attribute. move to psychometric network models, which have emerged
The intent of this paper is to elaborate on what the novel as an alternative explanation for the coherence of traits. In
perspective provided by the network approach means for per- terms of validity, much less has been put forward for psycho-
sonality measurement and assessment. We focus specifically metric network models and personality questionnaires; there-
on the validity and validation of personality trait question- fore, we spend most of this section reviewing its current state
naires with the goal of demonstrating how psychometric net- and discussing its meaning in the context of personality mea-
work models relate to modern psychometric perspectives. surement. Throughout this section, we refer to a hypothetical
We place a particular emphasis on the structural analysis of questionnaire of extraversion to contextualize our points.
validation (e.g. item analysis, dimension analysis, and inter-
nal structure; Flake, Pek, and Hehman, 2017; Loevinger,
1957). Before discussing validation, we first consider what
Latent variable perspective
it means for a questionnaire to be valid—a topic that has a
defining role in the substantive interpretation of personality In personality (and most of psychology), reflective latent var-
measures. iable models, where the indicators are regressed on the latent
variable (i.e. causal arrows point from the latent variable to
the indicators), are the standard conceptualization of mea-
surement (Borsboom et al., 2003). A reflective latent variable
(TEST) VALIDITY OF PERSONALITY TRAIT model holds that the items in our questionnaire are a function
QUESTIONNAIRES of the latent variable, meaning that people’s responses to our
questionnaire are caused by their position on the latent vari-
The trait approach has a long tradition in personality, signif- able (e.g. extraversion). Using this causal explanation, we
icantly shaping the last 30 years of research. Many contem- can evaluate the criteria for test validity.
porary theories of personality are inclined to accept traits as First, does the attribute extraversion exist? That is, does
phenomena that exist in some form (e.g. concrete biological extraversion exist prior to and independently of our question-
entities, abstract population summaries). Across theories, naire? Many researchers would consider this question trivial;
traits seek to provide parsimonious descriptions of broad however, to maintain that extraversion is indeed an attribute
between-person patterns of covariation at the population that exists, then the latent variable must be causally responsi-
level (Baumert et al., 2017). Five or six higher order traits ble for the responses to our questionnaire (Borsboom et al.,
are commonly thought to represent the majority of these 2003). Indeed, this is how many researchers think about the
between-person differences, which are typically assessed relationship between personality traits and their question-
using questionnaires (Lee and Ashton, 2004; McCrae and naires as well as what some theories of personality suggest
Costa, 1987). The validation of these questionnaires is a crit- (McCrae and Costa, 2008; McCrae et al., 2000).
ical part of the research agenda (Flake et al., 2017). The second criterion is the crux of test validity which, as
There are many views on what validity means with the Borsboom and colleagues (2004) point out, is not so straight-
most common perspectives involving the interpretation of forward. This is because it requires a theory for how extra-
test scores (e.g. Cronbach and Meehl, 1955; Kane, 2013; version can be linked to the response processes of our
Messick, 1995). This is not how we view validity; instead, questionnaire. Difficulties arise because it’s plausible (and
we adopt the definition that validity refers to whether a test even likely) that the processes that lead one person to re-
measures what it intends to measure (Borsboom et al., spond with ‘agree’ to an item (e.g. ‘I like to go to parties’)
2009; Cattell, 1946; Kelley, 1927). Borsboom and colleagues and another person to respond ‘agree’ to the same item are
(2004) refer to this as test validity, which states that ‘a test is different (Borsboom and Mellenbergh, 2007). An idealistic
valid for measuring an attribute if and only if (a) the attribute perspective is that people have different response processes
exists and (b) variations in the attribute causally produce var- but they select ‘agree’ on the same item because they are po-
iations in the outcomes of the measurement procedure’ sitioned similarly on extraversion. With this perspective, a
(p. 1061). An attribute refers to a property that exists prior common and implicit interpretation is that people possess
to and independent of measurement (Loevinger, 1957). This some quantity of extraversion and it’s the difference in these
definition of validity involves connecting the structure of an quantities that cause the variation in how people respond to
attribute to the response processes of a measure. our questionnaire. More simply, Alice scores higher on our

© 2020 European Association of Personality Psychology Eur. J. Pers. (2020)


DOI: 10.1002/per
Network psychometrics and trait validation

questionnaire than Bob because Alice is positioned higher on Our questionnaire thus refers to the state of a specific set of
the extraversion continuum than Bob. personality components that are causally dependent on one
From a causal perspective, a defensible account for how another and form a network (Cramer, 2012). The state of
extraversion causes variation in our questionnaire would be the network is determined by the total activation of these
that ‘population differences in position on [extraversion] components and is what we refer to as extraversion—that
cause population differences in the expectation of item re- is, the more personality components that are active, the more
sponses’ (Borsboom et al., 2003, p. 211). This implies that the network is pushed towards an extraverted state (Cramer,
people in the population which occupy the same position 2012).
on extraversion will typically respond similarly to the same In the context of a between-person network model, our
items in our questionnaire. This brings us back to the first cri- questionnaire’s network represents the aggregation of the av-
terion: Does extraversion exist? Or rather, to what extent erage activation of each component across within-person net-
does extraversion exist? One claim would be that extraver- works (i.e. each individual person’s network across several
sion exists as a between-person attribute—that is, as a popu- time points; Cramer et al., 2012a; Epskamp et al., 2018b).
lation attribute. A population attribute is not necessarily Both theory and empirical evidence appear to support this
possessed by any one person in the population but rather rep- claim. From a Whole Trait Theory perspective (Fleeson
resents between-person differences at the population level and Jayawickreme, 2015), people’s responses to items in
(Baumert et al., 2017; Cervone, 2005). This notion aligns self-report questionnaires correspond to their locations and
well with the Allportian view that ‘[a] common trait is a cat- maximums of their density distributions for the respective
egory for classifying functionally equivalent forms of behav- within-person states (Fleeson, 2001; Fleeson and Gallagher,
iour in a general population of people’ (Allport, 1961, 2009). When aggregated, these states tend to correspond to
p. 347). self-reported traits (Rauthmann, Horstmann, and Sherman,
From a noncausal perspective, this means that extraver- 2019), and when compared across people, these states typi-
sion could exist as a useful descriptor for comparing people cally produce the between-person traits (Borkenau and
rather than explaining their behaviours (Hogan and Foster, Ostendorf, 1998; Hamaker, Nesselroade, and Molenaar,
2016; Pervin, 1994). Many personality researchers hold this 2007). This interpretation leaves open an important question:
view of reflective latent variables (e.g. Ashton and Lee, What then do personality components refer to?
2005; Goldberg, 1993). Therefore, researchers need not view
a reflective latent variable as causal but rather as a summary
statistic of the shared variance between items in our question- Personality components
naire. This leaves our questionnaire’s validity as a subject of Cramer and colleagues (2012a) define personality compo-
substantive interpretation—that is, it depends upon what re- nents as ‘every feeling, thought, or act’ that is associated with
searchers think they are measuring: traits as population level a ‘unique causal system’ (p. 415). In most instances, these
positions or traits as descriptive summaries of components refer to items of a questionnaire. A key point
between-person differences in the population. of emphasis in their definition is that these components are
unique in that they are causally autonomous (i.e. distinct
Psychometric network perspective causal processes). For many existing personality question-
naires, items are often not unique. Instead, facets or narrower
Psychometric network models have been proposed as an al- characteristics of a trait (e.g. gregariousness, warmth, and as-
ternative explanation for the emergence of personality traits. sertiveness) are composed of many closely related and some-
From a psychometric network perspective, traits arise not be- times redundant items. In this sense, personality components
cause of a latent common cause but rather from the causal that comprise extraversion may be items but they also may
(bi)directional relationships between observed variables be facets (Costantini and Perugini, 2012).
(Cramer et al., 2012a). This explanation suggests that latent In our view, this represents a key difference between
traits are not necessary to explain how items in our question- facets and components: Facets are a collection of related
naire covary (Borsboom et al., 2009). Moreover, it implies items (not necessarily sharing a unique common cause),
that traits do not exist or at least they do not exist in a classi- while components are an item or set of items that share a
cal sense of measurement (i.e. causing variation in our ques- unique common cause. This distinction is important because
tionnaire; Cramer, 2012). Instead, the relationship between some facets in existing questionnaires reflect a homogeneous
extraversion and our questionnaire is a mereological one— cause (and are therefore considered a component), while
that is, the items in our questionnaire do not measure extra- other facets reflect heterogeneous causes, which must be sep-
version but are part of it (Borsboom, 2008; Cramer et al., arated into unique components.1Therefore, a facet from the
2012a). network perspective would be a set of unique components
Extraversion is therefore a summary statistic for how per- that coalesce into a meaningful suborganization of a trait’s
sonality components are influenced by one another (e.g. lik- domain. From this perspective, it becomes imperative that re-
ing to talk to people → liking to go to parties ↔ liking to searchers determine whether items and facets of an existing
meet new people; Cramer, 2012). In this sense, extraversion questionnaire are distinct autonomous causal components
exists as a state of the network or the stable organization of
dynamic personality components that are mutually activating 1
The use of ‘reflect’ in our language is on purpose and implies a common
one another (Cramer et al., 2012a; Schmittmann et al., 2013). cause that can be associated with a reflective latent variable.

© 2020 European Association of Personality Psychology Eur. J. Pers. (2020)


DOI: 10.1002/per
A. Christensen et al.

or if they reflect a common cause (Hallquist, Wright, and When evaluating whether our questionnaire is a valid
Molenaar, 2019). measure of extraversion from the network perspective, we
Based on this definition, personality components appear must shift the evaluation from the validity of extraversion
to closely resemble attributes. In this way, extraversion as an attribute to the validity of its components. This does
may exist as a composite attribute or an attribute that is com- not rule out the validity of the questionnaire but rather shifts
posed of many other attributes (Borsboom and Mellenbergh, the perspective such that our questionnaire is measuring the
2007). The number of attributes that constitute extraversion state of the network composed of causal connected compo-
then becomes a function of the sampling properties from its nents that we refer to as extraversion. The explanation for
domain of representative attributes (McDonald, 2003). Im- the variation of our measurement thus comes from the recip-
portantly, the selection of attributes will change the composi- rocal cause and effect of other attributes in the network.
tion of the network, meaning that different questionnaires This explanation does not come without consequence.
will have different compositions despite still plausibly refer- The issue of connecting the attribute to response processes
ring to extraversion (Markus and Borsboom, 2013). Extra- is merely side stepped from personality traits to personality
version should then be viewed as a finite universe of components. The response processes in network models are
attributes where there are a limited number of unique attri- assumed to lie in the reciprocal cause and effect of other
butes that can comprise it (McDonald, 2003). Therefore, components. Indirectly, this suggests that the responses pro-
there is a particular need to identify and validate the content cesses of one component has reciprocal causes and effects on
of each personality trait’s domain (Markus and Borsboom, processes of other components. This point, however, is circu-
2013). lar in that it still does not specify what the response processes
It’s tempting to say that researchers should only measure are.
attributes that represent one domain; however, it’s unlikely Although the network perspective avoids introducing la-
that attributes of a personality trait will exist independently tent variables to account for these response processes, it does
of other trait domains (Schmittmann et al., 2013; Schwaba, not avoid the question of how they occur. To this end, it is
Rhemtulla, Hopwood, and Bleidorn, 2020; Sočan, 2000). important for personality researchers from the network per-
An item like ‘enjoys talking to people,’ for example, cer- spective to connect response processes to personality compo-
tainly represents the domain of extraversion, but it may also nents. More specifically, researchers must seek to specify
represent the domain of agreeableness. Common examples of how response processes of one component can cause and ef-
this cross-domain entanglement often occur in psychopatho- fect processes of another component. We do not claim to
logical comorbidity (Cramer et al., 2010). Thus, distinctions have a definitive answer to this issue but highlight it as one
between what attributes constitute the extraversion domain that is particularly perplexing and requires sophisticated re-
become rather fuzzy and a matter of degree because of the search designs (e.g. Costantini, Richetin, et al., 2015).
overlap attributes can have with other domains
(Schmittmann et al., 2013). Such fuzziness is likely common
in personality where attributes may not clearly delineate be- VALIDATION OF TRAIT QUESTIONNAIRES FROM
tween where one trait begins and another ends (Connelly, A PSYCHOMETRIC NETWORK PERSPECTIVE
Ones, Davies, and Birkland, 2014). Indeed, this is exactly
what functionalist and complex system theories of personal- Our discussion of validity to this point has been about
ity suggest (Cramer et al., 2012a; Perugini, Costantini, whether a questionnaire possesses the property of being
Hughes, and De Houwer, 2016; Read et al., 2010; Wood, valid. This discussion sets up how psychometric evaluations
Gardner, and Harms, 2015) and what recent psychometric of a questionnaire should be substantively interpreted during
network analyses of personality traits find (Schwaba et al., validation. Validation differs from validity in that it is an on-
2020). going activity which seeks to describe, classify, and evaluate
the degree that empirical evidence and theoretical rationales
Validity from the network perspective support the validity of the questionnaire (Borsboom et al.,
This leads to the question of how extraversion (as a compos- 2004; Messick, 1989). Validation usually entails three
ite attribute) can cause variation in our questionnaire. Quite phases: substantive, structural, and external (Flake et al.,
simply, it does not: There is no link between the (composite) 2017). Our main focus will be on the structural phase, which
attribute and the questionnaire’s response processes because primarily consists of establishing evidence that our question-
it does not exist (Schmittmann et al., 2013). This is because naire measures what we intend it to through item, dimension,
no single attribute that extraversion is composed of will di- and internal structure (e.g. internal consistency) analyses.
rectly assess extraversion; instead, each attribute assesses Validation from a psychometric network perspective has
parts of the extraversion domain (Borsboom, 2008; received relatively little attention. To date, psychometric net-
Borsboom and Mellenbergh, 2007; Cramer et al., 2012b). work models have mainly been used as a novel measurement
With this perspective, we can say that the variation in our tool, which has led to an alternative account for the formation
questionnaire arises from the sampling of attributes in the of traits (Costantini, Epskamp, et al., 2015; Cramer et al.,
representative domain (Borsboom and Mellenbergh, 2007), 2012a). When it comes to psychometric assessment, the
which is clear from studies that have examined several differ- scope of psychometric networks has been far more limited
ent questionnaires (e.g. Christensen et al., 2019; Schwaba (e.g. dimension reduction methods; Golino and Epskamp,
et al., 2020). 2017; Golino et al., 2020). There does, however, appear to

© 2020 European Association of Personality Psychology Eur. J. Pers. (2020)


DOI: 10.1002/per
Network psychometrics and trait validation

be some potential because networks have been shown to be and breadth of the trait(s) the researcher is trying to measure.
mathematically equivalent to latent variable models Greater specificity leads to scales that have higher internal
(Guttman, 1953; Kruis and Maris, 2016; Marsman et al., consistency, which increases the likelihood that the re-
2018). searcher is measuring the same attribute while reducing idio-
The key to distinguishing network psychometrics models syncrasies specific to each item (DeVellis, 2017). Greater
from latent variable models is to establish how the measures breadth leads to scales that have higher item-specific vari-
of these models differ in their substantive interpretations (i.e. ance, which increases the coverage of the representative do-
hypothesized data generating mechanisms; van Bork et al., main (McCrae, 2015). In many existing trait questionnaires,
2019). We will draw on several points from the previous sec- researchers have focused on achieving a balance of both—
tion on validity to elaborate on these interpretations. In the that is, some facets reflect a single narrow attribute, while
end, the aim of this section is to take the initial steps towards other facets are composites of several attributes.
a formalized framework for the use of psychometric network One recent suggestion for questionnaires aimed at trait
models in the validation of personality questionnaires. domains is to favour breadth in order to maximize informa-
tion and efficiency (McCrae and Mõttus, 2019). Based on
what we’ve outlined in our section on validity, psychometric
Overview network models align well with this suggestion. Indeed, a
To achieve this aim, we divide this section into three parts, key notion of network psychometrics is that personality traits
which represent the order in which researchers should pro- are composed of unique causal components, meaning the
ceed with structural validation from the psychometric net- components are not exchangeable with other components
work perspective. First, we cover the initial phase of of the system (Cramer et al., 2012a). As a consequence, these
redundancy analysis for reducing redundancy in personality components should be unique rather than redundant to re-
questionnaires. Next, we discuss dimension analyses. Within duce latent confounding (Hallquist et al., 2019). This impli-
this section, we connect communities and node strength of cation perhaps marks the biggest validation difference
network models to factors and factor loadings of latent vari- between network and latent variable models.
able models, respectively. Finally, we present a novel mea- Because most existing personality scales have been de-
sure of internal structure that can be used to assess the veloped from a latent variable perspective, researchers must
extent to which a scale (or dimension) is composed of a set make careful considerations about using psychometric net-
items that are homogeneous and interrelated in a multidimen- work models with existing scales because they are likely
sional context.2 to have homogeneous facets (Costantini and Perugini,
In our discussion, it’s important that we make clear that 2012). Take, for example, the SAPA Personality Inventory
we view network models as complementary to latent variable (Condon, 2018) where items ‘Hate being the center of atten-
models and therefore suggest that they can be synergistically tion’, ‘Make myself the center of attention’, ‘Like to attract
leveraged. The main difference between them, as we attention’, and ‘Dislike being the center of attention’ clearly
discussed in our section on validity, is the proposed data gen- have a common underlying attribute: attention seeking. From
erating mechanisms. From a statistical point of view, net- the psychometric networks perspective, these items are not
work models offer additional information about the unique components themselves but comprise a single unique
relationship between variables that are not possible in latent component. This makes the first step of questionnaire valida-
variable models since in the latter the relationship between tion from a psychometric network perspective to identify and
items are accounted by the factor. Analyses about the struc- handle redundancy in scales.
ture of the system (e.g. topological analysis), for example,
can be implemented in network models, helping researchers An approach to statistically identify redundancy
uncover important aspects of the system (Borsboom, Cramer, In the literature, the network measure, clustering coefficient,
Schmittmann, Epskamp, and Waldorp, 2011). Because of has been considered as a measure of redundancy in personal-
this, we connect network models to latent variable models ity networks (Costantini et al., 2019; Dinić, Wertag,
(where applicable) and highlight the substantive differences Tomašević, and Sokolovska, 2020). A node’s clustering co-
that these models imply. As a general point, we recommend efficient is the extent to which its neighbours are connected
at least 500 cases when performing these analyses, which is to each other, forming a triangle. Although this measure is
based on previous simulation studies (Christensen, 2020; useful for describing whether a node is locally redundant, it
Golino et al., 2020). does not provide information about which nodes in particular
a target node is redundant with. Here, we conceptually de-
scribe an approach to identify whether a node is statistically
Redundancy analysis redundant with other nodes in a network.
Our approach begins by first computing a similarity mea-
In scale development, a researcher must establish what items sure between nodes. One method for doing so is called
to include, which involves determining the desired specificity weighted topological overlap (Zhang and Horvath, 2005),
which quantifies how similar two nodes’ connections to other
2
We provide a full walkthrough example of these analyses in the Supporting nodes are. More specifically, it quantifies the similarity be-
Information using R (R Core Team, 2020). Our example uses data that are
freely available in the psychTools package (Revelle, 2019) and assesses the tween the magnitude and direction of two nodes’ connections
five-factor model using the SAPA inventory (Condon, 2018). to all other nodes in the network. In biological networks,

© 2020 European Association of Personality Psychology Eur. J. Pers. (2020)


DOI: 10.1002/per
A. Christensen et al.

these measures have been used to identify genes or proteins situation-specific component of an attribute (e.g. ‘I often ex-
that may have a similar biological pathway or function press my opinions’). Quantitatively, which item has the most
(Nowick, Gernat, Almaas, and Stubbs, 2009). Thus, greater variance? This is a common criterion in traditional psycho-
topological overlap suggests that two genes may belong to metrics because greater variation suggests that this item bet-
the same functional class compared to those with less over- ter discriminates between people on the specific attribute
lap. In the context of a personality network, nodes that have (DeVellis, 2017).
large topological overlap are likely to have shared functional The more straightforward option is to combine items into
or latent influence. From a more traditional psychometrics a single variable. This can be done by estimating a reflective
perspective, one method would be to identify items that have latent variable consisting of the redundant items and using
high residual correlations after the variance of facets and fac- the latent scores. We strongly recommend this latter ap-
tors have been removed.3 proach because it retains all possible information from avail-
Although the weighted topological overlap measure pro- able data. We’ve implemented an interface to manage this
vides numerical values, from no overlap (0) to perfect over- second option using the node.redundant.combine
lap (1), for each node pair in the network, it does not function in the EGAnet package in R. We describe how to
include a test for significance. In order to determine which apply this approach in the supporting information, including
node pairs overlap significantly with one another, we apply heuristics to use when deciding which items are redundant.
the following approach. First, we obtain only the values that
are greater than zero—node pairs that have a topological
Dimension analysis
overlap of zero are not connected in the network and are
therefore not informative for determining significance of Dimensionality assessment is an integral step for validating
overlap. Next, we fit a distribution to these non-zero values the structure of a questionnaire. The general consensus
using the fitdistrplus package (Delignette-Muller and among researchers is that personality traits are hierarchically
Dutang, 2015) in R. The parameters (e.g. μ and σ from a nor- organized at different levels of breadth and depth (John and
mal distribution) from the best fitting distribution (based on Srivastava, 1999; McCrae and Costa, 2008). Usually, trait
Akaike information criterion) are then used as our probability domains are decomposed into facets, which are further bro-
distribution. ken down into items (McCrae and Costa, 1987, 2008). More
For each node pair with a non-zero topological overlap recently, aspects (between traits and facets) were added to the
value, we compute the probability of achieving its corre- hierarchy (DeYoung, Quilty, and Peterson, 2007). Personal-
sponding value from this distribution. These probabilities ity questionnaires tend to follow this structure with most
correspond to p values. Using a multiple comparison method, assessing multiple domains and facets—there typically are
node pairs whose p values are less than the corrected alpha five or six trait domains and for every domain, there are sev-
are considered to be significantly redundant. We’ve imple- eral facets (ranging from two to nine).
mented this approach in the EGAnet package (Golino and In traditional psychometrics, factor models [e.g. explor-
Christensen, 2020) in R under the function node.redun- atory factor analysis (EFA)] are the most common method
dant (see the Node Redundancy section in the Supporting used to assess the dimensionality of a trait domain (Flake
Information). Results from one simulation study found that et al., 2017). In psychometric networks, the main methods
the adaptive alpha multiple comparison correction method used to assess dimensionality of the network are called com-
(Pérez and Pericchi, 2014) had the fewest false positives, munity detection algorithms (Fortunato, 2010). These algo-
false negatives, and highest accuracy of all the methods rithms identify the number of communities (i.e.
tested (Christensen, 2020). dimensions) in the network by maximizing the connections
within a set of nodes while minimizing the connections from
Options for handling redundant nodes the same set of nodes to other sets of nodes in the network.
This approach provides quantitative evidence for whether Rather than these communities forming because of a com-
certain items are redundant. We recommend, however, that mon cause, psychometric network models suggest that di-
researchers verify these redundancies and use theory to deter- mensions emerge from densely connected sets of nodes that
mine whether two or more items represent a single attribute form coherent subnetworks within the overall network.
(i.e. a common cause). There are two options that researchers Despite these frameworks proposing different data gener-
can take when deciding how to handle redundancy in their ating mechanisms, the data structures do not necessarily dif-
questionnaire. The more involved option is to remove all fer (van Bork et al., 2019). Indeed, a researcher can fit a
but one item from the questionnaire. When taking this op- factor model to a data structure generated from a network
tion, there are a few considerations researchers must make. model with good model fit (van der Maas et al., 2006). Sim-
Qualitatively, which item represents the most general case ilarly, a network model with a community detection algo-
of the attribute? Often items are written with certain situa- rithm can be fit to a data structure generated from a factor
tions attached to them (e.g. ‘I often express my opinions in model and identify factors (Golino and Epskamp, 2017;
group meetings’; Lee and Ashton, 2018), which may not ap- Golino et al., 2020). This underlying equivalence follows
ply to all people taking the questionnaire. Therefore, more from the fact that any covariance matrix can be represented
general items may be better because they do not represent a as a latent variable and network model (van Bork et al.,
2019). The statistical equivalence between these models has
3
We thank the anonymous reviewer who pointed out this possibility. been well documented (e.g. Epskamp et al., 2018a; Guttman,

© 2020 European Association of Personality Psychology Eur. J. Pers. (2020)


DOI: 10.1002/per
Network psychometrics and trait validation

1953; Kruis and Maris, 2016; Marsman et al., 2018). There- (e.g. estimation of factor loadings; Browne, 2001; Sass and
fore, factors of a latent variable model and communities of a Schmitt, 2010). For EGA, orthogonal dimensions are
network model are statistically equivalent (Golino and depicted with few or no connections between items of one di-
Epskamp, 2017). mension and items of another dimension. Second, re-
Indeed, Guttman (1953) demonstrates that there is a di- searchers do not need to decide on item allocation—the
rect equivalence between network and factor models. Al- algorithm places items into dimensions without the re-
though network models were not yet specified in the area searcher’s direction. For EFA, in contrast, researchers must
of psychology, Guttman (1953) proposed a new factor ana- decipher a factor loading matrix. Third, the network plot de-
lytic approach termed image structural analysis, which is es- picts some dimensions as more central than others in the net-
sentially a network model with node-wise estimation using work (see Dimensionality section in the Supporting
multiple regression (e.g. Haslbeck and Waldorp, 2015). Information). Thus, EGA can be used as a tool for re-
Guttman mathematically demonstrated how image structural searchers to evaluate whether the items of their questionnaire
analysis relates to factor models and suggested that factor are coalescing into the dimensions they intended and whether
models were a special case of the node-wise network model the organization of the trait’s structure is what they intended.
where the errors of the variables are made to be Finally, the network plot also depicts levels of a trait’s hier-
orthogonal.4Therefore, the difference between the models is archy as continuous—that is, items can connect between dif-
their suggested data generating mechanisms, which is pro- ferent facets and traits. This supports a fuzzy interpretation of
vided by their visual representations. the trait hierarchy where the boundaries between items,
It’s important that we acknowledge that in some cases, facets, and traits are blurred.
factors of a factor model may represent causally dependent With these advantages, it’s important to note their simi-
interactions between components (rather than a common larities to factor analytic methods. For instance, most com-
cause); in other cases, communities of a network model munity detection algorithms used in the literature
may represent a common cause (rather than causally depen- (including the Walktrap) sort items into single dimensions.
dent interactions between components). Moreover, other ex- This creates a structure that is akin to a typical confirmatory
planations could be that external causes such as situational factor analysis (CFA; i.e. items belonging to a single dimen-
factors (Cramer et al., 2012a; Rauthmann and Sherman, sion), which constrains the interpretation of a continuous hi-
2018) or goals and motivations (Read et al., 2010) could lead erarchy. There are, however, algorithms in the broader
to personality dimensions. network literature that allow for overlapping community
membership (e.g. Blanken et al., 2018), which may better
Exploratory graph analysis represent these fuzzy boundaries and how researchers think
The most extensive work on dimensionality in the psycho- about personality. Another limitation is that the factor load-
metric network literature has been with a technique called ex- ing matrix of an EFA model can equivalently represent the
ploratory graph analysis (EGA; Golino and Epskamp, 2017; complexity of items relating to other items and loading onto
Golino et al., 2020). The EGA algorithm works by first esti- other dimensions. Network models, however, provide intui-
mating a Gaussian graphical model (Lauritzen, 1996), using tive depictions of these interactions (Bringmann and Eronen,
the graphical least absolute shrinkage and selection operator 2018). Therefore, even though EFA loading matrices repre-
(GLASSO; Friedman, Hastie, and Tibshirani, 2008), where sent this complexity, it requires a certain level of psychomet-
edges represent (regularized) partial correlations between ric expertise for a researcher to intuitively view the matrix
nodes after conditioning on all other nodes in the network. this way. Moreover, network plots can reveal exactly which
Then, EGA applies the Walktrap community detection algo- items are responsible for the cross-domain relationships in
rithm (Pons and Latapy, 2006), which uses random walks to a way that an EFA loading matrix cannot.
determine the number and content of communities in the net-
work (see Golino et al., 2020, for a more detailed Loadings
explanation).5Several simulation studies have shown that Recent simulation efforts, however, have demonstrated that
EGA has comparable or better accuracy for identifying the network models can be used to estimate an EFA loading ma-
number of dimensions than the most accurate factor analytic trix equivalent. In a series of simulation studies, Hallquist,
techniques (e.g. parallel analysis; Christensen, 2020; Golino Wright, and Molenaar (2019) demonstrated that the network
and Demetriou, 2017; Golino and Epskamp, 2017; Golino measure, node strength (i.e. the sum of a node’s connec-
et al., 2020). tions), was roughly redundant with CFA factor loadings. A
Beyond performance, EGA has several advantages over notable finding in one of their studies was that a node’s
more traditional methods. First, EGA does not require a rota- strength could potentially be a blend of connections within
tion method. Rotations are rarely discussed in the validation and between dimensions. Based on this result, they sug-
gested that researchers should reduce the latent confounding
of the network measure to avoid misrepresenting the relation-
4
We thank Denny Borsboom for pointing us to the Guttman’s (1953) paper. ships between components in the network.
5
A recent simulation study used the EGA approach and examined different Heeding this call, Christensen and Golino (2020) derived
community detection algorithms, finding that the Louvain (Blondel, Guil- a measure called network loadings, which represents the
laume, Lambiotte, and Lefebvre, 2008) and Walktrap algorithms were the
most accurate and least biased of the eight algorithms tested (including standardization of node strength split between dimensions.
two parallel analysis methods; Christensen, 2020). More specifically, a node’s strength was computed for only

© 2020 European Association of Personality Psychology Eur. J. Pers. (2020)


DOI: 10.1002/per
A. Christensen et al.

the connections it had to other nodes in each dimension of consistency measures assess unidimensionality (Flake et al.,
the network. They demonstrated that these network loadings 2017). This misconception likely stems from confusion over
could effectively estimate the simulated population (or true) the difference between internal consistency (the extent to
loadings. Moreover, they found that network loadings more which items are interrelated) and homogeneity (a set of items
closely resembled EFA loadings but also had some loadings that have a common cause; Green, Lissitz, and Mulaik,
of zero like CFA loadings. This suggests that the network 1977). Based on these definitions, internal consistency is
loadings represent a complex structure that is between a sat- necessary but not sufficient for homogeneity (Schmitt, 1996).
urated (EFA) and simple structure (CFA). In sum, they sug- We believe that many misconceptions arise because there
gest that these network loadings can be used as an equivalent is a mismatch between what researchers intend to measure
to factor loadings (e.g. see Table SI3). and what they are actually measuring. Much like validity,
Although these metrics are statistically redundant, they the psychometric concept of internal consistency seems di-
arguably differ in a substantive way. Factor loadings suggest vorced from how researchers think about it (Borsboom
that items ‘load’ onto factors, which is provided by items be- et al., 2004). This is because most researchers know that
ing regressed on the factors. If interpreted in a substantive the items of their scale are interrelated—they were designed
way, they represent how well one observable indicator is re- that way. When framed in this light, internal consistency
lated to the factor—that is, how well an item represents or measures are more of a ‘sanity check’ than an informative
measures the latent factor. The substantive interpretation of measure. To better understand what researchers intend to
node strength does not suggest this, however, they may be measure, we can look at how they use these measures: Re-
epistemologically related. From a substantive standpoint, searchers use them to validate the consistency of the structure
we argue that these network loadings represent each node’s of their scales (Flake et al., 2017). That is, researchers use
contribution to the emergence of a coherent dimension in them to know whether their scale’s structure is consistent
the network. In this sense, we can connect the substantive which implies internal consistency and assumes
meanings of network and factor loadings: the more one item homogeneity.
contributes to a dimension’s coherence, the more the item re- From a latent variable perspective, the solution is
flects the underlying dimension. A researcher’s substantive straightforward: test if a unidimensional model fits and com-
interpretation will favor one interpretation over the other, pute an internal consistency measure (Flake et al., 2017;
but ultimately, they statistically resolve to roughly the same Green et al., 1977). From a psychometric network perspec-
thing (Christensen and Golino, 2020; Guttman, 1953; tive, this is not the case. First, there is an inherent incompat-
Hallquist et al., 2019). ibility with computing an internal consistency measure from
the network perspective. Internal consistency measures are
typically a variant on the ratio between the common covari-
Internal structure analysis
ance between items and the variance of those items
Internal consistency (McNeish, 2018). In the estimation of networks, most of
Analyses that quantify the internal structure of questionnaires the common covariance is removed, leaving only the correla-
have been dominated by internal consistency measures, tions between item-specific variance (Forbes, Wright,
which are almost always measured with Cronbach’s α Markon, and Krueger, 2017, 2019).
(Cronbach, 1951; Flake et al., 2017; Hubley, Zhu, Sasaki, Second, scales and their items in networks are interre-
and Gadermann, 2014). In a review of 50 validation studies lated, usually with cross-connections occurring throughout.
randomly selected from Psychological Assessment and This is more than likely to be true for personality scales
European Journal of Personality Assessment during the (Sočan, 2000). Therefore, it’s important to know whether a
years 2011 and 2012, α was reported in 90% and 100% of set of items are causally dependent and form a unidimen-
the articles, respectively (Hubley et al., 2014). Similar num- sional network but also whether they remain as a coherent
bers were obtained in a review of 35 studies in the Journal of subnetwork nested in the rest of the network. Said differ-
Personality and Social Psychology during 2014, with 79% of ently, questionnaires often contain scales that are assumed
scales that included two or more items (n = 301) reporting α to be unidimensional but it’s unclear whether these scales re-
(Flake et al., 2017). More often than not, α was the sole mea- main unidimensional when other items and scales are added
sure of structural validation. In short, the use of α in valida- (i.e. in a ‘multidimensional context’). Therefore, internal
tion is pervasive (McNeish, 2018). consistency measures do not capture whether scales (or di-
Despite α’s prevalence, there are some serious issues mensions) remain unidimensional within the context of other
(Dunn, Baguley, and Brunsden, 2014; Sijtsma, 2009). These items and dimensions. Regardless of psychometric model,
issues range from improper assumptions about the data (e.g. this seems to be a more informative measure of what most re-
τ equivalent vs. congeneric models; Dunn et al., 2014; searchers want to know and say about their scales—that they
McNeish, 2018) to misconceptions about what it actually are unidimensional and internally consistent.
measures (Schmitt, 1996; Sijtsma, 2009). Although newer
internal consistency measures (e.g. ω; Dunn et al., 2014; Structural consistency
McDonald, 1999; Zinbarg, Yovel, Revelle, and McDonald, We refer to this measure as structural consistency, which we
2006) account for improper assumptions about the data, mis- substantively define as the extent to which causally coupled
conceptions about internal consistency still abound. One of components form a coherent subnetwork within a network.
the more persistent misconceptions is that internal Using extant terminology, structural consistency is the extent

© 2020 European Association of Personality Psychology Eur. J. Pers. (2020)


DOI: 10.1002/per
Network psychometrics and trait validation

to which items in a dimension are homogeneous and interre- replication proportions across dimensions can reveal whether
lated given the multidimensional structure of the question- they are forming a new separate dimension (only replicating
naire. In other words, it is the combination of homogeneity in a new dimension), fit better with another dimension (repli-
and internal consistency in a multidimensional context. We cating more with another dimension), or identify as multidi-
view the inclusion of other dimensions as a particularly im- mensional items (replicating equally across multiple
portant conceptual feature because a dimension could have dimensions). The latter two explanations can be verified
high homogeneity and internal consistency but when placed using the network loading matrix. An example of these anal-
in the context of other related dimensions it’s structure falls yses are provided in the Structural Consistency section of the
apart (i.e. it is no longer unidimensional). This renders the in- Supplementary Information.
terpretation of that dimension in a multidimensional context In practice, the goal of structural consistency is to deter-
relatively ambiguous even when its interpretation is clear in mine the extent in which a dimension is composed of a set
a unidimensional context (i.e. examined in isolation). items that are homogeneous and interrelated in the context
A recently developed approach called bootstrap explor- of other dimensions. The importance that a dimension of a
atory graph analysis (bootEGA; Christensen and Golino, questionnaire has a high structural consistency is up to the re-
2019) can be used to estimate this measure. bootEGA applies searcher’s intent. For many dimensions in personality, items
a parametric and non-parametric bootstrap approach, but for may be multidimensional, which will lead to lower values of
the structural consistency measure, we focus on the paramet- structural consistency. Therefore, lower structural consis-
ric approach. The parametric approach begins by estimating tency is not a bad thing if it is what the researcher intends.
a GLASSO network from the data and taking the inverse of More importantly, the items that are leading to the lower
the network to derive a covariance matrix. This covariance structural consistency can be identified with item stability
matrix is then used to simulate data with the same number statistics, which may help researchers decided whether an
of cases as the original data from a multivariate normal item is multidimensional or fits better with another dimen-
distribution. sion. At this point, it is too early to make recommendations
EGA is then applied to this replicate data, obtaining each for what ‘high’ or ‘acceptable’ structural consistency means.
item’s assigned dimension. This procedure is repeated until Ultimately, simulation studies are necessary to develop such
the desired number of samples is achieved (e.g. 500 standards.
samples).6The result from this procedure is a sampling distri-
bution for the total number of dimensions and each item’s di-
mension allocation. Although a number of statistics can be DISCUSSION
computed, we focus on two: structural consistency and item
stability. To derive both statistics, the original EGA results Questionnaires have been and will likely continue to be a
(i.e. empirically derived dimensions) are used. standard format for the measurement of personality attri-
Structural consistency is derived by computing the pro- butes. The validity of what questionnaires claim to measure,
portion of times that each empirically derived dimension is however, has rarely been explicated in the contemporary per-
exactly (i.e. identical item composition) recovered from the sonality literature. Instead, psychometric models have been
replicate bootstrap samples. If a scale is unidimensional, then applied without much consideration of their causal implica-
structural consistency reduces to the extent that the items in tions. In this paper, we provided a review on the validity of
the scale form a single dimension—that is, the proportion personality trait questionnaires from the latent variable and
of replicate samples that also return one dimension. The psychometric network perspectives. The goal of our review
range that structural consistency can take is from 0 to 1. A di- was not to argue for one approach more than another but to
mension’s structural consistency can only be 1 if the items in elaborate on how questionnaires can be viewed as valid mea-
the empirically derived dimension conform to that dimension sures of personality traits or attributes. These views imply
across all replicate samples. Such a measure leads to an im- different substantive interpretations about the underlying
portant question: What’s happening when a dimension is data generating mechanisms, which are important for under-
structurally inconsistent? standing the meaning of what’s being measured and how
To answer this, item stability or the proportion of times psychometric measures substantively inform our
that each item is identified in each empirically derived di- measurement.
mension across the replicate samples can be computed. This In our review, we took special interest in elaborating on
relatively simple measure not only provides insight into the psychometric network perspective because few articles
which items may be causing structural inconsistency but also have focused on their measurement when applied to person-
the other dimension(s) these items are being placed in. On ality questionnaires. Much like latent variable models, psy-
the one hand, two items of our hypothetical dimension might chometric network models have been readily applied by
be at the root of the structural inconsistency; on the other researchers without much consideration about their causal
hand, it might be multiple items are at the root of the struc- implications or how network measures should be interpreted
tural inconsistency. In either case, examining each item’s in a psychological context (Bringmann et al., 2019). Based
on our review, we propose a substantive interpretation of
6 node strength (network loadings) that is appropriate in the
A total of 500 replicate samples should be an adequate number to achieve
an accurate estimate; however, researchers can increase this number to ob- context of dimensions and overall network—that is, a node’s
tain greater precision. contribution to the emergence of a coherent subnetwork or

© 2020 European Association of Personality Psychology Eur. J. Pers. (2020)


DOI: 10.1002/per
A. Christensen et al.

network. This interpretation is by no means definite; how- likelihood of the sparse network model). Approaches like
ever, we believe that it is a more appropriate interpretation these can be used to determine whether a latent variable
than what has been put forward in the literature. or psychometric network model may be more appropriate
More specific to personality, we explicated an initial for the data. Ultimately, we believe that the choice of
framework for how psychometric networks can be used to model will not significantly affect the outcomes of these
validate the structure of personality trait questionnaires. dimension-related analyses.
One point of emphasis was on reducing the redundancy of Finally, we introduced a novel measure, structural consis-
components in the network. This is because components of tency, to quantify a questionnaire’s internal structure. Part of
a network are defined as ‘unique’ and ‘causally autonomous’ the motivation for this measure was the need to move beyond
(Cramer et al., 2012a). We described a novel approach to de- measures of internal consistency, which we believe do not
tect an item’s redundancy with other items in the network, necessarily align with what researchers intend to measure.
which can aid researchers in this endeavour. Moreover, we Notably, we do not view this measure as incompatible with
provided some general recommendations for removing or internal consistency but rather complementary. As we
combining redundant items. Following from this emphasis, discussed, a dimension could be homogeneous (i.e. unidi-
greater exploration of the unique components that represent mensional) and internally consistent (i.e. interrelated) but it
the domain of personality traits is necessary so that a specific may not remain homogeneous in a multidimensional context.
set of attributes can be defined. This is unlikely to be an easy Such a condition is likely to occur in personality measures
task because personality traits are multifaceted and interre- where components of traits tend to be interrelated. In general,
lated, which suggests that representation of a domain may this measure adds to the internal structure methods that re-
be a matter of degree rather than clear cut definitions searchers can use for validating the structure of their
(Schmittmann et al., 2013; Schwaba et al., 2020). questionnaire.
This puts determining appropriate coverage of each trait’s
domain at the forefront of the psychometric network research
Steps towards external validation
agenda in personality. Indeed, determining appropriate cov-
erage of each trait’s domain is still an active area of research To this point, we’ve described our conceptual framework for
and requires more attention than it’s been given in the past. the structural validation of personality questionnaires from
In many cases, this will require sampling from attributes that the network perspective. This framework leaves open ques-
may lie just outside of a trait’s domain. We recommend that tions related to external validation. How, for example, do
researchers focus more on the extent to which attributes rep- outcome variables relate to the components in personality
resent each domain rather than assuming an existing ques- networks? What about covariates? How does this fit with
tionnaire’s domain coverage is sufficient. One place to start contemporary trends for evaluating the unique predictive
is by examining the unique items of several personality ques- value of items? Moreover, what if the researcher is interested
tionnaires in a single network domain (e.g. Christensen et al., in relating the trait itself to outcomes rather than compo-
2019). Multiple domains and outcome measures could also nents? We briefly discuss these questions in turn.
be included to help determine these boundaries (e.g. Afzali, Personality–outcome relations are a fundamental part of
Stewart, Séguin, and Conrod, 2020; Costantini, Richetin, personality research and the validation of personality assess-
et al., 2015; Schwaba et al., 2020). ment instruments. These relations are just as fundamental to
When it comes to item and dimension analyses, many of the network perspective as more traditional perspectives.
the statistics for latent variable models (i.e. factor loadings Our suggestion for this is relatively simple: include the
and dimensions) are mathematically equivalent to psycho- outcome(s) of interest in the network. Similarly, important
metric network models (Christensen, 2020; Golino and covariates should also be added to the network. Indeed,
Epskamp, 2017; Hallquist et al., 2019). We argued that the Costantini, Richetin et al. (2015) used this approach to eval-
key difference between these models is their substantive in- uate how facets of conscientiousness were related to mea-
terpretations, which suggest very different data generating sures of self-control, working memory, self-report
mechanisms (van Bork et al., 2019). At this point, behaviours related to conscientiousness, and implicit atti-
disentangling these models is a nascent area of research. tudes of conscientiousness descriptors. Similarly, Afzali
Kan, van der Maas, and Levine (2019), for example, et al. (2020) longitudinally examined items of the Substance
show how fit indices can be applied to network models Use Risk Profile Scale and their relations to cannabis and al-
so that they can be compared to confirmatory factor analy- cohol use in adolescences. These studies not only provide a
sis models. They also demonstrated how the comparison of more complex evaluation of the relations between personal-
networks over groups could be achieved (see also ity and outcomes but also provide more targeted item gener-
Epskamp, 2019). van Bork et al. (2019) developed an ap- ation for future measures (e.g. including more items
proach to compare the likelihood that data were generated measuring transgression in sensation-seeking personality in-
from a unidimensional factor model or sparse network dicators; Afzali et al., 2020).
model by assessing the proportion of partial correlations We propose that because networks are often estimated
that have a different sign than the corresponding using the GLASSO approach researchers can interpret the
zero-order correlations and the proportion of partial corre- partial correlation coefficients between outcomes and com-
lations that are stronger than the corresponding zero-order ponents in the network as if they were entered into a reg-
correlations (greater proportions for both increase the ularized regression. Regularized regression has already

© 2020 European Association of Personality Psychology Eur. J. Pers. (2020)


DOI: 10.1002/per
Network psychometrics and trait validation

been effectively used in the literature to evaluate should interpret the measures used to quantify their scales.
personality–outcome relations (Mõttus, Bates, Condon, In this article, we take the initial steps towards how re-
Mroczek, and Revelle, 2018; Seeboth and Mõttus, 2018). searchers can go about this with psychometric network
To achieve a similar model, researchers could compute models. We by no means suggest that our views represent
beta coefficients from the partial correlation coefficients the views of all researchers using these models (including
for the outcome variable (see Haslbeck and Waldorp, latent variable models); however, we have provided a
2018). More directly, researchers could square these same foundation for future work and discussion. Undoubtedly,
partial correlation coefficients to derive partial R2 values the successful application of psychometric network models
(or the residual variation explained by adding the variable in personality psychology requires explicit definition and
to the network), which makes for more interpretable results formalization of their measurement (e.g. components),
(Haslbeck and Waldorp, 2018). which we have provided (Costantini and Perugini, 2012;
Within our proposed framework, networks would be Cramer et al., 2012b). We are optimistic that the continued
composed of personality components rather than specific development of measurement from a psychometric network
items or facets. On the surface, this seems to clash with re- perspective can move the theoretical and substantive as-
cent articles (including this special issue) demonstrating the sessment of personality traits forward.
unique predictive value of items in personality–outcome as-
sociations. In our view, this conflict is relatively minimal; in- SUPPORTING INFORMATION
stead, we think that unique personality components should
be tapping into the very same notion. Items should have Additional supporting information may be found online in
unique predictive value if they have distinct causes beyond the Supporting Information section at the end of the article.
other items—just as personality components are
conceptualized. Supporting info item
On the one hand, when considering the items, ‘Hate be-
ing the center of attention’, ‘Make myself the center of atten-
tion’, ‘Like to attract attention’, and ‘Dislike being the center REFERENCES
of attention’, there is unlikely to be unique predictive value
Afzali, M. H., St`ewart, S. H., Séguin, J. R., & Conrod, P. (2020).
of one item over another. On the other hand, unique items The network constellation of personality and substance use:
that do not have such an obvious overlap should remain as Evolution from early to late adolescence. European Journal of
items and therefore unique components in the personality Personality. https://doi.org/10.1002/per.2245
network. Therefore, we view personality components to be Allport, G. W. (1961). Pattern and growth in personality. Oxford,
completely compatible with the unique predictive value of UK:Holt, Reinhart, & Winston.
Ashton, M. C., & Lee, K. (2005). A defence of the lexical approach
items while reducing homogeneous sets of items to their to the study of personality structure. European Journal of
unique causes. Personality, 19, 5–24. https://doi.org/10.1002/per.541
`Finally, researchers may be interested in the relations Baumert, A., Schmitt, M., Perugini, M., Johnson, W., Blum, G.,
between traits and outcomes. As mentioned in our section Borkenau, P., Costantini, G., Denissen, J. J. A., Fleeson, W.,
on validity, traits are viewed as a summary of the net- Grafton, B., Jayawickreme, E., Kurzius, E., MacLeod, C., Miller,
L. C., Read, S. J., Roberts, B., Robinson, M. D., Wood, D., &
work’s state. Based on this definition, a summary statistic Wrzus, C. (2017). Integrating personality structure, personality
could be computed and used to evaluate the relationship process, and personality development. European Journal of
between traits and outcomes. Using network loadings, Personality, 31, 503–528. https://doi.org/10.1002/per.2115
Christensen and Golino (2020) proposed multiplying the Blanken, T. F., Deserno, M. K., Dalege, J., Borsboom, D., Blanken,
loading matrix by the observed data to derive a weighted P., Kerkhof, G. A., & Cramer, A. O. J. (2018). The role of stabi-
lizing and communicating symptoms given overlapping commu-
composite for each dimension (e.g. facets and traits) in a nities in psychopathology networks. Scientific Reports, 8, 5854.
personality network. These composites could then be used https://doi.org/10.1038/s41598-018-24224-2
in traditional analyses (e.g. zero-order correlation and re- Blondel, V. D., Guillaume, J.-L., Lambiotte, R., & Lefebvre, E.
gression) or as variables in a ‘higher order’ network with (2008). Fast unfolding of communities in large networks. Journal
the outcome (and covariate) variables included. Following of Statistical Mechanics: Theory and Experiment, 2008, P10008.
https://doi.org/10.1088/1742-5468/2008/10/P10008
the same suggestions above, researchers could then square Borkenau, P., & Ostendorf, F. (1998). The Big Five as states: How
the outcome’s partial correlation coefficients to derive the useful is the five-factor model to describe intraindividual varia-
partial R2 for the higher order personality components in tions over time? Journal of Research in Personality, 32,
the network. 202–221. https://doi.org/10.1006/jrpe.1997.2206
Borsboom, D. (2006). The attack of the psychometricians.
Psychometrika, 71, 425–440. https://doi.org/10.1007/s11336-
006-1447-6
CONCLUSION Borsboom, D. (2008). Psychometric perspectives on diagnostic sys-
tems. Journal of Clinical Psychology, 64, 1089–1108. https://doi.
So what are personality traits? At this point, it’s clear that org/10.1002/jclp.20503
how researchers answer this question should affect the psy- Borsboom, D. (2017). A network theory of mental disorders. World
Psychiatry, 16, 5–13. https://doi.org/10.1002/wps.20375
chometric model they choose. In doing so, there are differ- Borsboom, D., Cramer, A. O. J., Kievit, R. A., Scholten, A. Z., &
ent considerations that should be made when developing Franić, S. (2009). The end of construct validity. In Lissitz, R.
and selecting items for their scales as well as how they W. (Ed.), The concept of validity: Revisions, new directions and

© 2020 European Association of Personality Psychology Eur. J. Pers. (2020)


DOI: 10.1002/per
A. Christensen et al.

applications. Charlotte, NC:IAP Information Age Publishing, analysis. European Journal of Personality, 29, 548–567. https://
pp. 135–170. doi.org/10.1002/per.2014
Borsboom, D., Cramer, A. O. J., Schmittmann, V. D., Epskamp, S., Costantini, G., Richetin, J., Preti, E., Casini, E., Epskamp, S., &
& Waldorp, L. J. (2011). The small world of psychopathology. Perugini, M. (2019). Stability and variability of personality net-
PLoS ONE, 6. https://doi.org/10.1371/journal.pone.0027407 works. A tutorial on recent developments in network psychomet-
Borsboom, D., & Mellenbergh, G. J. (2007). Test validity in cogni- rics. Personality and Individual Differences, 136, 68–78. https://
tive assessment. In Leighton, J. P., & Gierl, M. J. (Eds.), Cogni- doi.org/10.1016/j.paid.2017.06.011
tive diagnostic assessment for education: Theory and Cramer, A. O. J. (2012). Why the item “23+ 1” is not in a depres-
applications. New York, NY:Cambridge University Press, sion questionnaire: Validity from a network perspective. Mea-
pp. 85–116. https://doi.org/10.1017/cbo9780511611186.004 surement: Interdisciplinary Research & Perspective, 10, 50–54.
Borsboom, D., Mellenbergh, G. J., & van Heerden, J. (2003). The https://doi.org/10.1080/15366367.2012.681973
theoretical status of latent variables. Psychological review, 110, Cramer, A. O. J., van der Sluis, S., Noordhof, A., Wichers, M.,
203–219. https://doi.org/10.1037/0033-295X.110.2.203 Geschwind, N., Aggen, S. H., Kendler, K. S., & Borsboom, D.
Borsboom, D., Mellenbergh, G. J., & van Heerden, J. (2004). The (2012a). Dimensions of normal personality as networks in search
concept of validity. Psychological Review, 111, 1061–1071. of equilibrium: You can’t like parties if you don’t like people.
https://doi.org/10.1037/0033-295X.111.4.1061 European Journal of Personality, 26, 414–431. https://doi.org/
Bringmann, L. F., Elmer, T., Epskamp, S., Krause, R. W., Schoch, 10.1002/per.1866
D., Wichers, M., Wigman, J., & Snippe, E. (2019). What do cen- Cramer, A. O. J., van der Sluis, S., Noordhof, A., Wichers, M.,
trality measures measure in psychology networks? Journal of Ab- Geschwind, N., Aggen, S. H., Kendler, K. S., & Borsboom,
normal Psychology, 128, 892–903. https://doi.org/10.1037/ D. (2012b). Measurable like temperature or mereological like
abn0000446 flocking? On the nature of personality traits. European Jour-
Bringmann, L. F., & Eronen, M. I. (2018). Don’t blame the model: nal of Personality, 26, 451–459. https://doi.org/10.1002/
Reconsidering the network approach to psychopathology. per.1879
Psychological Review, 125, 606–615. https://doi.org/10.1037/ Cramer, A. O. J., Waldrop, L. J., van der Maas, H. L., & Borsboom,
rev0000108 D. (2010). Comorbidity: A network perspective. Behavioral and
Browne, M. W. (2001). An overview of analytic rotation in explor- Brain Sciences, 33, 137–150. https://doi.org/10.1017/
atory factor analysis. Multivariate Behavioral Research, 36, S0140525X09991567
111–150. https://doi.org/10.1207/S15327906MBR3601_05 Cronbach, L. J. (1951). Coefficient alpha and the internal structure
Cattell, R. B. (1946). Description and measurement of personality. of tests. Psychometrika, 16, 297–334. https://doi.org/10.1007/
New York, NY:World Book Company. BF02310555
Cervone, D. (2005). Personality architecture: Within-person struc- Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psy-
tures and processes. Annual Review of Psychology, 56, 423–452. chological tests. Psychological Bulletin, 52, 281–302. https://
https://doi.org/10.1146/annurev.psych.56.091103.070133 doi.org/10.1037/h0040957
Christensen, A. P. (2020). Towards a network psychometrics ap- Delignette-Muller, M. L., & Dutang, C. (2015). fitdistrplus: An R
proach to assessment: Simulations for redundancy, dimensional- package for fitting distributions. Journal of Statistical Software,
ity, and loadings (Unpublished doctoral dissertation). University 64, 1–34. https://doi.org/10.18637/jss.v064.i04
of North Carolina at Greensboro, Greensboro, NC, USA. https:// Devellis, R. F. (2017). Scale development: Theory and applications
doi.org/10.31234/osf.io/84kgd (4th ed.). Thousand Oaks, CA:SAGE Publications.
Christensen, A. P., Cotter, K. N., & Silvia, P. J. (2019). Reopening Deyoung, C. G., Quilty, L. C., & Peterson, J. B. (2007). Between
openness to experience: A network analysis of four openness to facets and domains: 10 aspects of the Big Five. Journal of
experience inventories. Journal of Personality Assessment, 101, Personality and Social Psychology, 93, 880–896. https://doi.
574–588. https://doi.org/10.1080/00223891.2018.1467428 org/10.1037/0022-3514.93.5.880
Christensen, A. P., & Golino, H. (2020). Statistical equivalency of Dinić, B. M., Wertag, A., Tomašević, A., & Sokolovska, V. (2020).
factor and network loadings. PsyArXiv. https://doi.org/ Centrality and redundancy of the Dark Tetrad traits. Personality
10.31234/osf.io/xakez and Individual Differences, 155, 109621. https://doi.org/
Christensen, A. P., & Golino, H. (2019). Estimating the stability of 10.1016/j.paid.2019.109621
the number of factors via Bootstrap Exploratory Graph Analysis: Dunn, T. J., Baguley, T., & Brunsden, V. (2014). From alpha to
A tutorial. PsyArXiv. https://doi.org/10.31234/osf.io/9deay omega: A practical solution to the pervasive problem of internal
Condon, D. M. (2018). The SAPA personality inventory: An consistency estimation. British Journal of Psychology, 105,
empirically-derived, hierarchically-organized self-report person- 399–412. https://doi.org/10.1111/bjop.12046
ality assessment model. PsyArXiv. https://doi.org/10.31234/osf. Epskamp, S. (2019). Psychonetrics: Structural equation modeling
io/sc4p9 and confirmatory network analysis. R package verson 0.3.3.
Connelly, B. S., Ones, D. S., Davies, S. E., & Birkland, A. (2014). https://CRAN.R-project.org/package=psychonetrics
Opening up openness: A theoretical sort following critical inci- Epskamp, S., & Fried, E. I. (2018). A tutorial on regularized partial
dents methodology and a meta-analytic investigation of the trait correlation networks. Psychological Methods, 23, 617–634.
family measures. Journal of Personality Assessment, 96, 17–28. https://doi.org/10.1037/met0000167
https://doi.org/10.1080/00223891.2013.809355 Epskamp, S., Maris, G., Waldrop, L. J., & Borsboom, D. (2018a).
Costantini, G., Epskamp, S., Borsboom, D., Perugini, M., Mõttus, Network psychometrics. In Irwing, P., Hughes, D., & Booth, T.
R., Waldorp, L. J., & Cramer, A. O. J. (2015). State of the aRt (Eds.), The Wiley handbook of psychometric testing, 2 volume
personality research: A tutorial on network analysis of personal- set: A multidisciplinary reference on survey, scale and test devel-
ity data in R. Journal of Research in Personality, 54, 13–29. opment. New York, NY:Wiley. https://doi.org/10.1002/
https://doi.org/10.1016/j.jrp.2014.07.003 9781118489772.ch30
Costantini, G., & Perugini, M. (2012). The definition of components Epskamp, S., Waldorp, L. J., Mõttus, R., & Borsboom, D. (2018b).
and the use of formal indexes are key steps for a successful appli- The Gaussian graphical model in cross-sectional and time-series
cation of network analysis in personality psychology. European data. Multivariate Behavioral Research, 4, 1–28. https://doi.org/
Journal of Personality, 26, 434–435. https://doi.org/10.1002/ 10.1080/00273171.2018.1454823
per.1869 Flake, J. K., Pek, J., & Hehman, E. (2017). Construct validation in
Costantini, G., Richetin, J., Borsboom, D., Fried, E. I., Rhemtulla, social and personality research: Current practice and recommen-
M., & Perugini, M. (2015). Development of indirect measures dations. Social Psychological and Personality Science, 8,
of conscientiousness: Combining a facets approach and network 370–378. https://doi.org/10.1177/1948550617693063

© 2020 European Association of Personality Psychology Eur. J. Pers. (2020)


DOI: 10.1002/per
Network psychometrics and trait validation

Fleeson, W. (2001). Toward a structure- and process-integrated Hubley, A. M., Zhu, S. M., Sasaki, A., & Gadermann, A. M.
view of personality: Traits as density distributions of states. Jour- (2014). Synthesis of validation practices in two assessment
nal of Personality and Social Psychology, 80, 1011–1027. https:// journals: Psychological Assessment and the European Journal
doi.org/10.1037/0022-3514.80.6.1011 of Psychological Assessment. In Zumbo, B., & Chan, E. (Eds.),
Fleeson, W., & Gallagher, P. (2009). The implications of Big Five Validity and validation in social, behavioral, and health sciences.
standing for the distribution of trait manifestation in behavior: Cham, CH:Springer, pp. 193–213.
Fifteen experience-sampling studies and a meta-analysis. Journal John, O. P., & Srivastava, S. (1999). The Big Five trait taxonomy:
of Personality and Social Psychology, 97, 1097–1114. https:// History, measurement, and theoretical perspectives, (2nd ed.).
doi.org/10.1037/a0016786 In Pervin, L. A, & John, O. P. (Eds.), Handbook of personality:
Fleeson, W., & Jayawickreme, E. (2015). Whole trait theory. Jour- Theory and research. New York, NY:Guilford Press,
nal of Research in Personality, 56, 82–92. https://doi.org/ pp. 159–181.
10.1016/j.jrp.2014.10.009 Kan, K.-J., van der Maas, H. L. J., & Levine, S. Z. (2019). Extend-
Forbes, M. K., Wright, A. C., Markon, K. E., & Krueger, R. F. ing psychometric network analysis: Empirical evidence against g
(2017). Evidence that psychopathology symptom networks have in favor of mutualism? Intelligence, 73, 52–62. https://doi.org/
limited replicability. Journal of Abnormal Psychology, 126, 10.1016/j.intell.2018.12.004
969–988. https://doi.org/10.1037/abn0000276 Kane, M. T. (2013). Validating the interpretations and uses of test
Forbes, M. K., Wright, A. G. C., Markon, K. E., & Krueger, R. F. scores. Journal of Educational Measurement, 50, 1–73. https://
(2019). Quantifying the reliability and replicability of psychopa- doi.org/10.1111/jedm.12000
thology network characteristics. Multivariate Behavioral Re- Kelley, T. L. (1927). Interpretation of educational measurements.
search. https://doi.org/10.1080/00273171.2019.1616526 New York, NY:MacMillan.
Fortunato, S. (2010). Community detection in graphs. Physics Re- Kruis, J., & Maris, G. (2016). Three representations of the Ising
ports, 486, 75–174. https://doi.org/10.1016/j.physrep.2009.11.002 model. Scientific Reports, 6, srep34175. https://doi.org/10.1038/
Friedman, J., Hastie, T., & Tibshirani, R. (2008). Sparse inverse co- srep34175
variance estimation with the graphical lasso. Biostatistics, 9, Lauritzen, S. L. (1996). Graphical models. Oxford, UK:Clarendon
432–441. https://doi.org/10.1093/biostatistics/kxm045 Press.
Goldberg, L. R. (1993). The structure of phenotypic personality Lee, K., & Ashton, M. C. (2004). Psychometric properties of the
traits. American Psychologist, 48, 26–34. https://doi.org/ HEXACO personality inventory. Multivariate Behavioral Re-
10.1037/0003-066X.48.1.26 search, 39, 329–358. https://doi.org/10.1207/
Golino, H., & Christensen, A. P. (2020). EGAnet: Exploratory s15327906mbr3902_8
Graph Analysis: A framework for estimating the number of di- Lee, K., & Ashton, M. C. (2018). Psychometric properties of the
mensions in multivariate data using network psychometrics. HEXACO-100. Assessment, 25, 543–556. https://doi.org/
https://CRAN.R-project.org/package=EGAnet 10.1177/1073191116659134
Golino, H., & Demetriou, A. (2017). Estimating the dimensionality Loevinger, J. (1957). Objective tests as instruments of psychologi-
of intelligence like data using Exploratory Graph Analysis. Intel- cal theory. Psychological Reports, 3, 635–694.
ligence, 62, 54–70. https://doi.org/10.1016/j.intell.2017.02.007 Markus, K. A., & Borsboom, D. (2013). Frontiers of test validity
Golino, H., & Epskamp, S. (2017). Exploratory Graph Analysis: A theory: Measurement, causation, and meaning. New York, NY:
new approach for estimating the number of dimensions in psy- Routledge. https://doi.org/10.4324/9780203501207
chological research. PloS ONE, 12, e0174035. https://doi.org/ Marsman, M., Borsboom, D., Kruis, J., Epskamp, S., van Bork, R.,
10.1371/journal.pone.0174035 Waldorp, L. J., van der Maas, H. L. J., & Maris, G. (2018).
Golino, H., Shi, D., Christensen, A. P., Garrido, L. E., Nieto, M. D., An introduction to network psychometrics: Relating Ising net-
Sadana, R., Thiyagarajan, J. A., & Martinez-Molina, A. (2020). work models to item response theory models. Multivariate Be-
Investigating the performance of Exploratory Graph Analysis havioral Research, 53, 15–35. https://doi.org/10.1080/
and traditional techniques to identify the number of latent factors: 00273171.2017.1379379
A simulation and tutorial. Psychological Methods. https://doi.org/ McCrae, R. R. (2015). A more nuanced view of reliability: Specific-
10.1037/met0000255 ity in the trait hierarchy. Personality and Social Psychology Re-
Green, S. B., Lissitz, R. W., & Mulaik, S. A. (1977). Limitations of view, 19, 97–112. https://doi.org/10.1177/1088868314541857
coefficient alpha as an index of test unidimensionality. Educa- McCrae, R. R., & Costa, P. T. (1987). Validation of the five-factor
tional and Psychological Measurement, 37, 827–838. https:// model of personality across instruments and observers. Journal of
doi.org/10.1177/001316447703700403 Personality and Social Psychology, 52, 81–90. https://doi.org/
Guttman, L. (1953). Image theory for the structure of quantitative 10.1037/0022-3514.52.1.81
variates. Psychometrika, 18, 277–296. McCrae, R. R., & Costa, P. T. (2008). The five-factor theory of per-
Hallquist, M., Wright, A. C. G., & Molenaar, P. C. M. (2019). Prob- sonality, (3rd ed.). In John, O. P., Robins, R. W., & Pervin, L. A.
lems with centrality measures in psychopathology symptom net- (Eds.), Handbook of personality: Theory and research. New
works: Why network psychometrics cannot escape psychometric York, NY:Guilford Press, pp. 159–181.
theory. Multivariate Behavioral Research. https://doi.org/ McCrae, R. R., Costa, P. T., Ostendorf, F., Angleitner, A.,
10.1080/00273171.2019.1640103 Hřebíčková, M., Avia, M. D., Sanz, J., Sanchez-Bernardos, M.
Hamaker, E. L., Nesselroade, J. R., & Molenaar, P. C. M. (2007). L., Kusdil, M. E., Woodfield, R., Saunders, P. R., & Smith, P. B.
The integrated trait–state model. Journal of Research in Person- (2000). Nature over nurture: Temperament, personality, and life
ality, 41, 295–315. https://doi.org/10.1016/j.jrp.2006.04.003 span development. Journal of Personality and Social Psychology,
Haslbeck, J. M. B., & Waldorp, L. J. (2015). mgm: Structure esti- 78, 173–186. https://doi.org/10.1037//0022-3514.7S.1.17
mation for time-varying mixed graphical models in McCrae, R. R., & Mõttus, R. (2019). A new psychometrics: What
high-dimensional data. arXiv. https://arxiv.org/abs/1510.06871 personality scales measure, with implications for theory and as-
Haslbeck, J. M. B., & Waldorp, L. J. (2018). How well do network sessment. Current Directions in Psychological Science.
models predict observations? On the importance of predictability McDonald, R. P. (1999). Test theory: A unified treatment. New
in network models. Behavior Research Methods, 50, 853–861. York, NY:Taylor & Francis. https://doi.org/10.4324/
https://doi.org/10.3758/s13428-017-0910-x 9781410601087
Hogan, R., & Foster, J. (2016). Rethinking personality. Interna- McDonald, R. P. (2003). Behavior domains in theory and in prac-
tional Journal of Personality Psychology, 2, 37–43. https://ijpp. tice. Alberta Journal of Educational Research, 49, 212–230.
rug.nl/article/view/25245

© 2020 European Association of Personality Psychology Eur. J. Pers. (2020)


DOI: 10.1002/per
A. Christensen et al.

McNeish, D. (2018). Thanks coefficient alpha, we’ll take it from Revelle, W. (2019). psychTools: Tools to accompany the ‘psych’
here. Psychological Methods, 23, 412–433. https://doi.org/ package for psychological research, Northwestern University,
10.1037/met0000144 Evanston, Illinois. https://CRAN.R-project.org/package=
Messick, S. (1989). Meaning and values in test validation: The sci- psychTools
ence and ethics of assessment. Educational Researcher, 18, 5–11. Sass, D. A., & Schmitt, T. A. (2010). A comparative investigation
https://doi.org/10.3102/0013189X018002005 of rotation criteria within exploratory factor analysis. Multivari-
Messick, S. (1995). Standards of validity and the validity of ate Behavioral Research, 45, 73–103. https://doi.org/10.1080/
standards in performance asessment. Educational measurement: 00273170903504810
Issues and practice, 14, 5–8. https://doi.org/10.1111/j.1745- Schmitt, N. (1996). Uses and abuses of coefficient alpha. Psycho-
3992.1995.tb00881.x logical Assessment, 8, 350–353. https://doi.org/10.1037/1040-
Mõttus, R., & Allerhand, M. (2017). Why do traits come together? 3590.8.4.350
the underlying trait and network approaches. In Ziegler-Hill, V., Schmittmann, V. D., Cramer, A. O. J., Waldorp, L. J., Epskamp, S.,
& Shackelford, T. K. (Eds.), SAGE handbook of personality Kievit, R. A., & Borsboom, D.(2013). Deconstructing the con-
and individual differences: The science of personality and indi- struct: A network perspective on psychological phenomena.
vidual differences. London, UK:SAGE Publications, pp. 1–22. New Ideas in Psychology, 31, 43–53. https://doi.org/10.1016/j.
Mõttus, R., Bates, T., Condon, D. M., Mroczek, D., & Revelle, W. newideapsych.2011.02.007
(2018). Leveraging a more nuanced view of personality: Narrow Schwaba, T., Rhemtulla, M., Hopwood, C. J., & Bleidorn, W.
characteristics predict and explain variance in life outcomes. (2020). The facet atlas: Using network analysis to describe the
PsyArXiv. https://doi.org/10.31234/osf.io/4q9gv blends, cores, and peripheries of personality structure. PsyArXiv.
Nowick, K., Gernat, T., Almaas, E., & Stubbs, L. (2009). Differ- https://doi.org/10.31234/osf.io/zskfu
ences in human and chimpanzee gene expression patterns define Seeboth, A., & Mõttus, R. (2018). Successful explanations start
an evolving network of transcription factors in brain. Proceedings with accurate descriptions: Questionnaire items as personality
of the National Academy of Sciences, 106, 22358–22363. https:// markers for more accurate predictions. European Journal of Per-
doi.org/10.1073/pnas.0911376106 sonality, 32, 186–201. https://doi.org/10.1002/per.2147
Pérez, M.-E., & Pericchi, L. R. (2014). Changing statistical signifi- Sijtsma, K. (2009). On the use, the misuse, and the very limited use-
cance with the amount of information: The adaptive α signifi- fulness of Cronbach’s alpha. Psychometrika, 74, 107–120.
cance level. Statistics & Probability Letters, 85, 20–24. https:// https://doi.org/10.1007/S11336-008-9101-0
doi.org/10.1016/j.spl.2013.10.018 Sočan, G. (2000). Assessment of reliability when test items are not
Perugini, M., Costantini, G., Hughes, S., & De Houwer, J. (2016). essentially τ-equivalent. Developments in Survey Methodology,
A functional perspective on personality. International Journal 15, 23–35.
of Psychology, 51, 33–39. https://doi.org/10.1002/ijop.12175 van Bork, R., Rhemtulla, M., Waldorp, L. J., Kruis, J., Rezvanifar, S.,
Pervin, L. A. (1994). A critical analysis of current trait theory. Psy- & Borsboom, D. (2019). Latent variable models and networks: Sta-
chological Inquiry, 5, 103–113. https://doi.org/10.1207/ tistical equivalence and testability. Multivariate Behavioral Re-
s15327965pli0502_1 search, 1–24. https://doi.org/10.1080/00273171.2019.1672515
Pons, P., & Latapy, M. (2006). Computing communities in large net- van der Maas, H. L. J., Dolan, C. V., Grasman, R. P. P. P., Wicherts,
works using random walks. Journal of Graph Algorithms and Ap- J. M., Huizenga, H. M., & Raijmakers, M. E. J. (2006). A dynam-
plications, 10, 191–218. https://doi.org/10.7155/jgaa.00185 ical model of general intelligence: The positive manifold of intel-
R Core Team (2020). R: A language and environment for statistical ligence by mutualism. Psychological Review, 113, 842–861.
computing, R Foundation for Statistical Computing, Vienna, https://doi.org/10.1037/0033-295X.113.4.842
Austria. https://www.R-project.org/ Wood, D., Gardner, M. H., & Harms, P. D. (2015). How function-
Rauthmann, J. F., Horstmann, K. T., & Sherman, R. A. (2019). Do alist and process approaches to behavior can explain trait covari-
self-reported traits and aggregated states capture the same thing? ation. Psychological Review, 122, 84–111. https://doi.org/
A nomological perspective on trait-state homomorphy. Social 10.1037/a0038423
Psychological and Personality Science, 10, 596–611. https:// Zhang, B., & Horvath, S. (2005). A general framework for weighted
doi.org/10.1177/1948550618774772 gene co-expression network analysis. Statistical Applications in
Rauthmann, J. F., & Sherman, R. A. (2018). The description of sit- Genetics and Molecular Biology, 4, 17. https://doi.org/10.2202/
uations: Towards replicable domains of psychological situation 1544-6115.1128
characteristics. Journal of Personality and Social Psychology, Zinbarg, R. E., Yovel, I., Revelle, W., & McDonald, R. P. (2006).
482–488. https://doi.org/10.1037/pspp0000162 Estimating generalizability to a latent variable common to all of
Read, S. J., Monroe, B. M., Brownstein, A. L., Yang, Y., Chopra, a scale’s indicators: A comparison of estimators for ωh. Applied
G., & Miller, L. C. (2010). A neural network model of the struc- Psychological Measurement, 30, 121–144. https://doi.org/
ture and dynamics of human personality. Psychological Review, 10.1177/0146621605278814
117, 61–92. https://doi.org/10.1037/a0018131

© 2020 European Association of Personality Psychology Eur. J. Pers. (2020)


DOI: 10.1002/per

You might also like