SA Journal of Human Resource Management
ISSN: (Online) 2071-078X, (Print) 1683-7584
Original Research

Job applicants’ attitudes towards cognitive ability and personality testing

Authors: Rachelle Visser1, Pieter Schaap1
Affiliations: 1Department of Human Resources Management, University of Pretoria, South Africa
Corresponding author: Pieter Schaap, [email protected]
Dates: Received: 16 Oct. 2016; Accepted: 08 Aug. 2017; Published: 05 Oct. 2017
How to cite this article: Visser, R., & Schaap, P. (2017). Job applicants’ attitudes towards cognitive ability and personality testing. SA Journal of Human Resource Management/SA Tydskrif vir Menslikehulpbronbestuur, 15(0), a877. https://doi.org/10.4102/sajhrm.v15i0.877
Copyright: © 2017. The Authors. Licensee: AOSIS. This work is licensed under the Creative Commons Attribution License.

Orientation: Growing research has shown that not only test validity considerations but also the test-taking attitudes of job applicants are important in the choice of selection instruments, as these can contribute to test performance and the perceived fairness of the selection process.

Research purpose: The main purpose of this study was to determine the test-taking attitudes of a diverse group of job applicants towards personality and cognitive ability tests administered conjointly online as part of employee selection in a financial services company in South Africa.

Motivation for the study: If users understand how job applicants view specific test types, they will know which assessments are perceived more negatively and how this situation can potentially be rectified.

Research design, approach and method: A non-experimental and cross-sectional survey design was used. An adapted version of the Test Attitude Survey was used to determine job applicants’ attitudes towards tests administered online as part of an employee selection process. The sample consisted of a group of job applicants (N = 160) who were diverse in terms of ethnicity, age and the educational levels applicable to sales and supervisory positions.

Main findings: On average, the job applicants responded equally positively to the cognitive ability and personality tests. The African job applicants had a statistically significantly more positive attitude towards the tests than the other groups, and candidates applying for the sales position viewed the cognitive ability tests significantly less positively than the personality test.

Practical and managerial implications: The choice of selection tests used in combination, as well as the applicable testing conditions, should be considered carefully, as these factors can potentially influence the test-taking motivation and general test-taking attitudes of job applicants.

Contribution: This study consolidated the research findings on the determinants of attitudinal responses to cognitive ability and personality testing and produced valuable empirical findings on job applicants’ attitudes towards both test types when administered conjointly.
Introduction
The sensible application of psychological assessment tools could play an important role in the
transformation of organisations, in particular organisations in post-apartheid South Africa
(Donald, Thatcher & Milner, 2014). Despite the finding that occupational assessment has the
potential to provide valid and reliable performance predictions in an occupational setting,
participants’ perceptions may still adversely impact the results (McCarthy & Goffin, 2003).
Variables, such as individuals’ attitudes towards an assessment process, should be considered in
order to come to an understanding of individual performance levels in assessments (Chu, Guo &
Leighton, 2014). Many research studies support the notion that attitude has a profound impact on
the performance of an individual in a wide variety of assessment tasks (Schmitt, 2013; Smith,
1997). Gilliland and Steiner (2012) have identified some broad implications of positive reactions to
testing; for instance, positive reactions can significantly enhance the attractiveness of a job and an organisation and result in positive intentions regarding the job offer.
The assessments chosen for inclusion in a selection battery may impact the job applicants’
reactions to the selection process as a whole. In general, practitioners and users of psychometric
tools agree that the use of a combination of different assessment types serves the purpose of
giving the user a more objective view of candidates’ abilities and preferences (Saville, Nyfield,
McCarthy & Gibbons, 1997). Numerous studies and meta-analyses have provided evidence of the
validity of cognitive and personality tests as predictors of job performance. Although personality
tests have low validity compared with cognitive ability tests, evidence has shown that when these
are used in combination, personality tests have significant
incremental validity (Schmitt, 2013). In addition, cognitive
ability and personality tests are popular choices for inclusion
in selection processes as these measures have shown
remarkable validity generalisation and cost-effectiveness
(Gilliland & Steiner, 2012). However, test takers’ attitudes
and perceptions may affect the validity of selection tools
(Schmitt, 2013). Research has shown that job applicants
generally evaluate cognitive ability and personality tests less favourably than other competing selection tools (e.g. interviews and work samples) (Anderson, Salgado & Hülsheger, 2010). If users understand how job applicants
from diverse backgrounds view specific assessments and
what potential impact their views could have on performance,
they could get to know which assessments are perceived
more negatively and how this perception could potentially
be rectified. Evoking positive reactions to testing during the
selection process holds promise for attracting and retaining
qualified employees, in particular from underrepresented or
previously disadvantaged groups (De Jong & Visser, 2000;
Donald et al., 2014; Muchinsky, Kriek & Schreuder, 2002).
Research purpose
The main purpose of this study was to determine the attitudes
of a diverse group of job applicants towards personality and
cognitive ability tests administered conjointly in a supervised
online test session as part of a selection process in a financial
services company in South Africa. A secondary purpose of
the study was to determine the effect of the group’s
demographics (representing the group’s diversity) on the
test-taking attitudes (TA) of the group.
In the following section, a background to TAs will be given,
followed by relevant research questions that were formulated
based on relevant literature that was studied.
Literature review
Theoretical background
The study of job applicants’ attitudes towards testing forms
part of a broad study field known as ‘applicant reactions
to tests’, which entails a multitude of different theoretical
perspectives. Although no single overarching theoretical
framework has directed research, there has been substantial
cross-referencing and integration of various models
(Gilliland & Steiner, 2012; Hülsheger & Anderson, 2009).
Research on test-taking attitudinal and motivational
components and their influence on test performance and test
validity (Arvey, Strickland, Drauden & Martin, 1990; O’Neill,
Goffin & Gellatly, 2012) is one of the recognised research
streams relating to applicants’ reactions to tests. Test attitude
can be defined as the extent to which participants in
assessments demonstrate focus, effort and diligence in
completing the instrument (Arvey et al., 1990). Different
aspects of TAs, such as test anxiety, motivation during tests,
belief in tests, concentration and test ease during assessments,
have been observed (Arvey et al., 1990).
A second well-known research stream that has a bearing on
job applicants’ reactions to selection practices is that of
organisational justice (Gilliland, 1993). Organisational justice
includes the concepts of procedural justice, interactional
justice and distributive justice, which are considered central
in promoting test fairness. The research issues tackled are job
relatedness, test validity, equity, equality and communication
about the assessment process and results (Donald et al.,
2014). Gilliland’s justice constructs (procedural and
interactional justice) that promote test fairness have been
shown to be related to test attitude constructs (e.g. motivation
during tests, belief in tests) (McCarthy, Hrabluik & Jelley,
2009). Lievens, De Corte and Brysse (2003) have found that
the attitudinal component known as belief in tests relates
significantly to overall perceptions of the fairness of cognitive
ability tests and personality tests. Research conducted by
these authors shows that perceived job relatedness and test
validity may positively influence test takers’ belief in tests,
resulting in a positive attitude.
According to Gilliland and Steiner (2012) and Nikolaou,
Bauer and Truxillo (2015), social psychological theoretical
models help explain reactions to tests. These models suggest
that a job applicant’s attitude towards a test may be influenced
by the job applicant’s perception of the level of congruence
between his or her own identity and what the organisation
stands for in terms of culture, values and beliefs. Where
congruence exists with respect to valued justice principles,
the applicant’s attitude towards selection measures may be
swayed to become more positive.
Ployhart and Harold (2004) have developed the Applicant
Attribution-Reaction Theory (AART), a theory which integrates
attribution theory (Weiner, 1985), the selection justice model
(Gilliland, 1993) and the test attitude model (Arvey et al., 1990).
The AART suggests that blatant mismatches between
situational perceptions and expectations of fair treatment act as
attribution triggers, leading to increased awareness of the
violation of justice rules and standards of conduct in testing
situations. In terms of the AART, unfavourable testing
experiences may trigger a critical attitude towards tests and
possibly counter-productive behavioural reactions such as
withdrawal or litigation.
A fundamental assumption that applies to test attitude
research is that self-interest consciously and unconsciously
drives attempts to maximise the likelihood of favourable
outcomes, resulting in positive or negative reactions
depending on the individual’s performance and the
perceived fairness of the process. Testing procedures tend to
be seen as more fair if they are consistent, accurate, ethical,
free from bias, open to challenge and created to provide
opportunity for input (Gilliland, 2008). Gilliland and Steiner
(2012, p. 633) have formulated justice rules that contribute
to the perceived fairness of a selection process. These rules
are job relatedness, opportunity to perform, consistency of
administration, reconsideration opportunity, selection
information, feedback timeliness, honesty, interpersonal
effectiveness of administrator, two-way communication
and propriety of questions. A detailed discussion of each of the rules is not justified in this article because of limited space; therefore, only the rules and supporting research
evidence specific to standardised proctored testing of
cognitive ability and personality tests will be further
discussed. Meta-analytical studies have demonstrated that
job applicants are inclined to react only moderately
favourably to cognitive ability and personality tests but
more favourably to work sample and interview selection
techniques (Anderson et al., 2010). This represents a
challenge as these two off-the-shelf tests (cognitive ability
and personality tests) are popular choices for inclusion in a
selection battery because of their generalised validity
evidence for predicting performance with respect to many
jobs and because of their relative cost-effectiveness (Gilliland
& Steiner, 2012). Furthermore, there is a notable tendency to
rate personality tests less favourably than cognitive ability
tests (Anderson et al., 2010; Hausknecht, Day & Thomas,
2004). Therefore, understanding the determinants of
negative or positive reactions towards personality and
cognitive ability tests could be considered important as
these reactions could influence TAs in specific contexts.
Determinants of personality and cognitive ability test-taking attitudes
The ‘opportunity to perform’ rule of Gilliland and Steiner
(2012, p. 633), which refers to the opportunity to demonstrate
the required competencies within a testing and selection
context, is also applicable to cognitive and personality tests.
According to Hausknecht et al. (2004), opportunity to
perform correlates with test-taking motivation (TM)
(r = 0.32). For instance, question steering (ability to ‘fake’ a
response) in personality tests may elicit positive or negative
responses depending on the extent to which the applicant is
able to identify the job profile requirements in the items.
According to Van Vianen, Taris, Scholten and Schinkel (2004), personality tests are likely to be perceived as less fair by job applicants when the responses needed to obtain a favourable test outcome are less transparent (e.g. a forced-choice-item format as opposed to a Likert format). Job
applicants react favourably to tests in which their perceived
and actual performance is good, and the opposite is true if
they perform badly (Whitman, Kraus & Van Rooy, 2014).
Positively worded warnings not to fake on tests (e.g.
emphasising the advantages of responding honestly)
increase test takers’ motivation, whereas negative warnings
(e.g. emphasising the negative outcomes of dishonesty)
increase test takers’ levels of anxiety, resulting in inaccurate
responses (Burns, Fillipowski, Morris & Shoda, 2015).
Research has shown that anxiety has a differential impact on
applicants’ cognitive test performance within a personnel
selection context (Proost, Derous, Schreurs, Hagtvet & De
Witte, 2008). Meta-analyses have shown that test performance
and test anxiety are negatively related (r = -0.31), but
causality is unclear. Surprisingly though, Lievens et al.
(2003) have found that test anxiety fails to predict test
fairness perceptions. However, Hausknecht et al. (2004) have
found test anxiety to be related to negative TAs.
The justice rule of ‘propriety of questions’ formulated by
Gilliland and Steiner (2012) refers to invasive, inappropriate
and biased questions that infringe an applicant’s privacy.
This rule may be more of an issue in structured personality
tests than in cognitive ability tests. Inappropriate questioning
is associated with less job relatedness and produces lower
justice, validity perceptions and fairness reactions, which are
likely to have an impact on the applicant’s belief in tests
(Bauer et al., 2006; Gilliland & Steiner, 2012, p. 633).
Perceptions of question invasiveness have been shown to be
strongly (and negatively) related to perceptions of fairness
(Gilliland, 1993).
‘Providing selection information and explanations’ to job
applicants forms part of information justice rules (Gilliland &
Steiner, 2012, p. 633). This rule refers to the job relatedness of
the test, the validity of the test, the procedure to be followed,
the way the test will be scored and the length of time it will
take, the format of the test and the expected processes and
timelines for feedback (Gilliland & Steiner, 2012; Ryan &
Huth, 2008). Although research findings are inconsistent
(Lahuis, Perreault & Ferguson, 2003; Rolland & Steiner, 2007),
Truxillo, Bodner, Bertolino, Bauer and Yonce (2009) have
found that explanations affect performance in cognitive
ability tests as they have a mediating effect on the motivation
of the applicants taking the test. A further finding of their
study is that selection information has a stronger effect on reactions to personality tests than on reactions to cognitive ability tests, because personality tests show less job relatedness and offer the applicant less transparency and controllability.
‘The interpersonal effectiveness of the test administrator’ is
another justice rule that relates to ‘the degree to which
applicants are treated with warmth and respect’ (Gilliland &
Steiner, 2012, p. 633). Interpersonal effectiveness is a factor
applicants often cite as one that makes selection processes as
a whole fair or unfair. However, overall research findings
demonstrate less consistent results relating to the influence of
interpersonal effectiveness on test-taker reactions. This may
especially be true in standardised testing situations where
interpersonal contact may be limited because of the nature of
the testing circumstances (Gilliland & Steiner, 2012). Although
no research that has explored the differential effect of
interpersonal treatment on TAs towards personality and
cognitive tests could be identified, in a meta-analysis
conducted by Hausknecht et al. (2004) interpersonal
treatment shows a strong average effect size (r = 0.34) with testing attitudes in general.
According to Zibarras and Patterson (2015, p. 333), ‘job
relatedness’ is considered the procedural justice principle
that has the greatest influence on overall fairness perceptions
compared to any other characteristics of a selection method.
Steiner and Gilliland (1996) argue that people implicitly
judge widely used testing techniques to be valid, resulting in
a favourable view of tests (belief in tests). Lievens et al. (2003)
indicate a significant relation between the belief in tests and
the perceived scientific value of cognitive ability tests but not
of personality tests. However, belief in tests shows a
significantly positive relationship with job relatedness in
personality tests. Furthermore, reactions towards cognitive
ability tests containing concrete items are more positive than
are reactions towards tests containing abstract items
(Gilliland & Steiner, 2012).
The findings of studies about the effect of demographic
variables on TAs are generally mixed (Rosse, Miller & Stecher,
1994). The effect of race on cognitive ability test performance
is found to be mediated partially by motivation: a portion of
the difference in the test performance of African and white
people may be explained through differences in TM
(Whitman et al., 2014). De Jong and Visser (2000) point out
that black population groups are inclined to view personality
tests as less fair than do white population groups. Rynes and
Connerly (1993) have found that demographic variables are
unrelated to job applicants’ attitudes. Similarly, Hausknecht
et al. (2004) in their meta-analytical study have found no
significant relationship between applicants’ perceptions of
tests and personal characteristics, age, gender and ethnic
background.
To date, studies have found that test takers’ attitudinal
reactions to a wide range of web-based administered tests are
overwhelmingly positive. Online testing is no longer a
novelty and many people are frequently exposed to tests on
Internet portals in academic and employment settings
(Anderson, 2003; Reynolds & Lin, 2003; Wiechmann & Ryan,
2003). This includes the online administration of personality
and cognitive ability tests (Baron & Austin, 2000; Reynolds,
Sinar & McClough, 2000). Konradt, Warszta and Ellwart
(2013) point out that Gilliland’s (1993) organisational justice
rules for promoting positive job applicant reactions apply
equally to online testing platforms. The key factors that
influence job applicants’ reactions to online testing are
perceived to be efficiency and user-friendliness (e.g. system
usability and speed), provision of information (e.g. tutorials
and clear instructions), perceived process fairness (e.g. clarity
about selection criteria) and the company’s technological
image on the Internet (i.e. job applicants’ image of companies
that use modern and progressive online testing systems for
selection purposes).
To summarise, the theory suggests that job applicants’
attitude (e.g. test anxiety, motivation during tests and belief
in tests) towards cognitive ability and personality tests will
most likely be affected by the perceived opportunity that
the tests provide for optimal performance (Zibarras &
Patterson, 2015), interpersonal effectiveness during testing
(Schleicher, Venkataramani, Morgeson & Campion, 2006),
explanations and selection information provided (Truxillo
et al., 2009), question invasiveness (Nikolaou et al., 2015)
and the efficiency and user-friendliness of an online test
administration platform (Konradt et al., 2013). The
favourability of testing situations may trigger a positive or
negative attitude (according to attribution theory) towards
tests, supported by the perceived congruence (according
to social psychological models) between the applicants’
self-identity and the company’s culture and values.
Although research evidence generally points to a difference
in job applicants’ favourability rating of cognitive ability
and personality tests, research carried out by Rosse et al.
(1994) suggests that measures from the same category (e.g.
psychological tests) applied conjointly may compensate for
each other in terms of perceived relevance and fairness,
resulting in job applicants reacting similarly to these
measures. In line with the findings of
previous studies, it is argued that the experience of tests of
a demographically diverse group of job applicants should
affect TM and general TAs similarly irrespective of group
membership. However, research findings to support this
notion have been inconsistent.
The specific research questions that this study intends to
answer with regard to personality and cognitive ability tests
administered online under supervision as part of the relevant
financial services company’s selection process are the
following:
• Question 1: What are the job applicants’ attitudinal
responses to supervised online personality and cognitive
ability tests administered conjointly?
• Question 2: Are there significant differences in job
applicants’ attitudinal responses to supervised online
personality and cognitive ability tests administered
conjointly?
• Question 3: Do demographical differences (e.g. ethnic
origin, gender, educational level and position applied for)
relate to job applicants’ attitudinal responses to
supervised online personality and cognitive ability tests
administered conjointly?
• Question 4: Do demographical differences (e.g. ethnic
origin, gender, educational level and position applied for)
relate to significant differences in job applicants’
attitudinal responses to supervised online personality
and cognitive ability tests administered conjointly?
The main purpose of the research was to determine the test-taking attitudinal responses of a demographically diverse
sample of job applicants to personality and cognitive ability
tests administered conjointly online under supervision and
whether there are differences in test-taking attitudinal
responses with respect to the type of test and the
demographical subgroupings.
Method
Research approach
A non-experimental and cross-sectional quantitative
survey research design was used in this study. More
specifically, a descriptive and associational research
approach was taken for the purposes of addressing the
research questions (Gliner, Morgan & Leech, 2009). The
research questions can be considered explorative in
nature and current research to support clear directional
hypotheses on the association between variables is lacking
(Gliner et al., 2009).
Participants
The initial sample, which was a non-probability convenience
sample, consisted of 175 respondents. However, after the
exclusion of incomplete data, the data of 160 respondents
were retained. The final sample consisted of 68 (42%) males,
91 (57%) females and one person of unspecified gender.
Equity was achieved in terms of racial distribution by
including 116 (73%) African respondents, 22 (13%)
respondents of mixed race, 12 (8%) white respondents and 7
(4%) Indian respondents. The races of three participants were
not specified. In respect of respondents’ highest educational
qualifications, the data indicated that 56 (35%) respondents
had completed Grade 12, 43 (27%) had post-matric certificates,
52 (31%) had degrees, 8 (6%) had postgraduate qualifications
and that the qualification of one person was not specified.
Age distribution data showed that 80 (50%) people were
younger than 30, 75 (47%) were older than 30, whereas the
age of six respondents was not specified. The distribution
according to position applied for was as follows: 118 (74%)
were sales consultants, 26 (16%) were team leaders and 16
(9%) were branch managers.
Measuring instrument
In this study, a shorter and adapted version (20 items) of the
Test Attitude Survey (TAS) by Smith (1997) was used as
limited time was available to gather information on TAs
during the proctored testing session. The original version
(45 items) of the TAS, developed by Arvey et al. (1990), was specifically designed to assess job applicants’ motivational and attitudinal dispositions towards standardised tests in selection contexts (Chu et al., 2014). As in the case of the original TAS, the current survey utilised a five-point Likert-type rating scale that ranged from 1 – Strongly Disagree to 5 – Strongly Agree (Arvey et al., 1990).
The areas included in the final survey distributed to the
candidates of the current survey were based on Smith’s
adapted version of the TAS (Smith, 1997), namely:
• motivation (five items, e.g. doing well on the test[s] was
important to me)
• lack of concentration (three items, e.g. I was bored while
taking the test[s])
• belief in tests (four items, e.g. the test[s] is[are] probably a
good way of selecting people for jobs)
• comparative anxiety (five items, e.g. I felt nervous when
taking the test[s])
• external attribution (one item, I was ill or in a bad mood
when I took the test[s])
• future effects (two items, e.g. the way I answered the
test[s] should help me).
Procedure
The study utilised secondary data obtained from a talent management and assessment solutions organisation, collected in the form of a TAS administered after the completion of supervised online cognitive ability and personality assessments. The
assessments in this study were conducted for the purpose of
selecting employees to occupy different positions at a South
Africa-based financial services organisation. The complexity
levels of the cognitive tests were different for different
positions, but the types of assessment (verbal and numerical
cognitive ability tests and a forced-choice-item–type
personality questionnaire) were the same for all three
positions. The process followed was highly structured and
the justice rules for conditions conducive to selection and
favourable test practices (Gilliland & Steiner, 2012, p. 633)
were taken into due consideration. Individuals who had
applied for the positions at the organisation were personally
contacted and invited to be participants in the online
assessment stage. The online assessments were determined
through a job profiling process to ensure job relatedness. The
participants were given detailed information about the
process to be followed on assessment day, the venue where
assessments would be completed and what would be
expected of participants.
For the data collection process, trained test administrators
gave participants a standardised verbal set of instructions
related to the purpose of the assessments (for selection), the
types of assessment to be completed for relevant positions
as well as the reason for inclusion (how the assessments
provided job-related information). In addition to the verbal
instructions, the online assessments included detailed
written instructions as well as a test tutorial that outlined
the benefit of honest responses in the personality test.
Participants were informed that feedback on the outcome of
their assessment phase would be provided within a week of
completing the assessments. The same process was followed
for the completion of the survey. A verbal explanation was
given of the purpose of the survey (to research participants’
attitudes), written instructions were given for the
completion of the survey, and participants’ informed
consent was obtained.
Data analyses
The data analysis in this study was performed using the IBM
Statistical Package for the Social Sciences (SPSS) Version 23.
Descriptive statistics, including frequency analyses of ethnicity, age, qualification and job role, as well as descriptive statistics for the study variables, were provided. Confirmatory factor
analysis (CFA) techniques for determining model fit were
untenable in this study as only 160 candidates completed the
TAS with respect to the cognitive ability and personality
tests. A larger sample size was required to avoid inadequate
model specification and to provide sufficient statistical power
for the number of free parameters (40) that needed to be
estimated (Kline, 2011). In some cases, the scales contained
too few items (some consisting of only two) and consequently
these scales did not reflect acceptable reliability scores.
Therefore, exploratory factor analysis (EFA) using principal
axis factoring (PAF) was applied to determine a more
appropriate factor structure for the TAS that would
appropriately represent the data. For further analyses of the
TAS variables and the demographical subgroups, the analysis
of variance (ANOVA) test, t-test, Wilcoxon’s z-statistic and
the Kruskal–Wallis test were used for the purposes of this
study.
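For readers who wish to reproduce these steps outside SPSS, the following is a minimal, illustrative Python sketch of the analysis pipeline described above; it is not the study’s actual code. The file name and the DataFrame tas_items are hypothetical stand-ins for the matrix of TAS item responses, and the open-source factor_analyzer package is used in place of SPSS.

```python
# Illustrative sketch only; the study itself used IBM SPSS Version 23.
# `tas_responses.csv` and `tas_items` are hypothetical stand-ins for the
# 160 x 17 matrix of TAS item responses.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity, calculate_kmo)

tas_items = pd.read_csv('tas_responses.csv')

# Sampling adequacy: Bartlett's test of sphericity and the KMO measure
chi2, p_value = calculate_bartlett_sphericity(tas_items)
kmo_per_item, kmo_total = calculate_kmo(tas_items)

# Exploratory factor analysis: principal axis factoring with an oblique
# (oblimin) rotation, retaining two factors
fa = FactorAnalyzer(n_factors=2, method='principal', rotation='oblimin')
fa.fit(tas_items)
pattern_matrix = fa.loadings_                   # cf. Table 2
variance_explained = fa.get_factor_variance()   # cf. Table 1
```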
Ethical considerations
This study is based on a master’s dissertation by the first
author, which was approved by the ethics committee of the
Department of Human Resources Management in the Faculty
of Economic and Management Sciences at the University of
Pretoria. Permission to conduct this study was obtained from
the talent management and assessment solutions organisation
and the financial institution in question. Participants
completed an informed consent form online prior to
participating in the survey. They also gave their informed
consent that the survey data could be used for research and
noted that participation was voluntary and anonymity was
assured.
Results
The Kaiser–Meyer–Olkin measure of sampling adequacy (KMO) and Bartlett’s test of sphericity were used to determine the adequacy of the sample for an EFA to yield distinct and reliable factors (Field, 2009), and both measures indicated the suitability of the data for factor analysis. Bartlett’s test of sphericity was highly significant (p < 0.001) and the KMO measures exceeded 0.85 for the use of the TAS with respect to both the cognitive ability and personality tests. Scree plots and Horn’s parallel analysis were used to determine the number of significant factors to be retained (Field, 2009). As indicated in Figure 1, only two factors could be considered significant, based on a comparison of the raw-data eigenvalues with the 95th percentile of the random-data eigenvalues.
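Horn’s parallel analysis retains a factor only if its observed eigenvalue exceeds the corresponding eigenvalue obtained from random data. The following is a minimal NumPy sketch of this criterion, assuming an (n respondents x k items) array of responses; the function name is hypothetical.

```python
import numpy as np

def horn_parallel_analysis(data, n_iter=1000, percentile=95, seed=0):
    """Count the factors whose raw-data eigenvalues exceed the chosen
    percentile of eigenvalues from random normal data (Horn's criterion)."""
    rng = np.random.default_rng(seed)
    n, k = data.shape
    # Eigenvalues of the observed correlation matrix, largest first
    obs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    # Eigenvalues of correlation matrices computed from random normal data
    rand = np.empty((n_iter, k))
    for i in range(n_iter):
        sim = rng.standard_normal((n, k))
        rand[i] = np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False))[::-1]
    threshold = np.percentile(rand, percentile, axis=0)
    return int((obs > threshold).sum())

# Hypothetical usage with a 160 x 17 item-response matrix:
# n_factors = horn_parallel_analysis(tas_items.to_numpy())
```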
FIGURE 1: Scree plot and Horn’s parallel test for the number of significant factors to be retained on the Test Attitude Survey – an adapted version of the measure. [Figure not reproduced: it plots the eigenvalues of the raw data against the 95th percentile of the random-data eigenvalues for Factors 1–17.]

Three items that did not load significantly on any of the factors were excluded; therefore, the final version of the adapted TAS consisted of 17 items. In Table 1, the percentage of variance that can be accounted for by each of the factors is presented. The total variance accounted for by the two factors was 43.96% for the cognitive ability tests and 51.81% for the personality test.

TABLE 1: Total variance of extracted factors explained for the Test Attitude Survey – an adapted version of the measure.

Factor   Cognitive ability tests: Initial eigenvalues     Personality test: Initial eigenvalues
         Total    % of variance    Cumulative %           Total    % of variance    Cumulative %
1        4.73*    27.80            27.80                  5.66*    33.32            33.32
2        2.75*    16.15            43.96                  3.14*    18.50            51.81

*, Significant factors (95th percentile level).

Factor 1 accounted for the most variance in the overall scale and can be described as the attitudinal response of motivation (TM) related to the testing experience. TM factors relating to individuals’ responses included the following: feeling that performing well in the assessment was important, doing well in the assessment was possible and trying one’s best was important. Factor 1 also included individuals’ applying high levels of concentration in doing the assessment, expecting to do well in the assessment and caring about the outcome of the assessment. It should be noted that in the original version of the TAS (Arvey et al., 1990) the motivation dimension also accounted for the most variance in the overall scale.
Factor 2 represented the attitudinal response towards
the tests in general (TA), which included the following:
general feelings towards the content of questions and the
manifestation of nervousness during testing, levels of apathy,
dissatisfaction or satisfaction with the testing experience. The
second factor also included belief that the assessments
reflected a person’s ability to perform in an occupational
setting, the perceived fairness of the assessment and whether such measurement tools should be used for selection purposes.
Table 2 presents the pattern matrix based on oblique rotation, which contains item loadings that represent unique contributions to specific factors (Field, 2009). Oblique rotation is an appropriate rotation method if factors correlate. The factors were identified as test-taking motivation (TM) (Factor 1) and general test attitude (TA) (Factor 2).

TABLE 2: Rotated factor pattern matrix and congruence coefficients for the Test Attitude Survey – an adapted version of the measure.

Items                                                                              Ability tests:        Personality test:
                                                                                   Factor 1   Factor 2   Factor 1   Factor 2
Doing well on the test(s) was important to me (M).                                 0.66a      0.03       0.84a      0.16
I was bored while taking the test(s) (C).                                          -0.17      0.43a      -0.26      0.51a
The test(s) is (are) probably a good way of selecting people for jobs (B).         0.26       -0.42a     0.28       -0.46a
I am not good at taking test(s) (A).                                               -0.03      0.40a      -0.06      0.56a
I felt nervous when taking the test(s) (A).                                        0.02       0.37a      -0.09      0.42a
I answered the questions of the test(s) as well as I could (M).                    0.79a      0.12       0.88a      0.12
Questionnaire(s) like the test(s) should not be used (B).                          0.14       0.50a      0.15       0.65a
I usually do pretty well on test(s) (A).                                           0.50a      -0.04      0.60a      -0.09
The way I answered the test(s) should help me (F).                                 0.73a      -0.14      0.59a      -0.23
I don’t like answering questions like those in the test(s) (A).                    0.00       0.60a      -0.04      0.81a
I tried my best in the test(s) (M).                                                0.84a      0.15       0.86a      0.14
I concentrated well when answering the test(s) questions (C).                      0.70a      -0.02      0.75a      -0.08
I do not believe the test(s) can show how well a person could do in the job (B).   0.17       0.68a      0.22       0.64a
I expected to do well on the test(s) (A).                                          0.73a      -0.01      0.85a      -0.02
I get tense when answering questions about myself (A).                             -0.10      0.32a      -0.03      0.33a
The test(s) is (are) unfair to some applicants (B).                                -0.04      0.73a      0.01       0.75a
I just did not care how well I did on the test(s) (M).                             -0.40a     0.13       -0.38a     0.13
Tucker’s congruence coefficient between factors                                    -          -          0.96       0.97

( ) represents original scale names (Arvey et al., 1990) associated with the adapted items: M, motivation; C, concentration; A, anxiety; B, belief in tests; F, future effects.
a, represents the salient loadings (> 0.30 or < -0.30) on the respective factors.

Tucker’s congruence coefficient after targeted rotation was calculated to compare the pattern matrices of the TAS for the cognitive ability and personality tests (Van de Vijver & Leung, 1997). Values higher than 0.95 imply that the pattern matrices can be considered equivalent. The TAS showed factorial equivalence, which is important for making valid comparisons on the TAS across the respective tests. The correlation between the rotated factors was low, which indicated that the factors were distinct. The correlations found were -0.261 for the personality assessment and -0.234 for the cognitive ability assessments.
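Tucker’s congruence coefficient between two loading vectors x and y is phi = sum(x*y) / sqrt(sum(x^2) * sum(y^2)). A minimal sketch of this computation is shown below; the targeted-rotation step reported above is omitted, and the variable names in the usage comment are hypothetical.

```python
import numpy as np

def tucker_congruence(x, y):
    """Tucker's congruence coefficient between two factor-loading vectors:
    phi = sum(x*y) / sqrt(sum(x**2) * sum(y**2))."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.sum(x * y) / np.sqrt(np.sum(x ** 2) * np.sum(y ** 2)))

# Hypothetical usage with matching factor columns from the two pattern
# matrices in Table 2:
# phi_tm = tucker_congruence(ability_loadings[:, 0], personality_loadings[:, 0])
```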
The reliability (Cronbach’s alpha) of the TAS scales was
generally of an acceptable standard (see Table 3), with
reliabilities that far exceeded the coefficient of 0.60 or above
that can be utilised for group comparisons (Field, 2009; Owen
& Taljaard, 1996).
TABLE 3: Descriptive and Wilcoxon’s statistics of the Test Attitude Survey – an adapted version of the measure.

Scales                                  N     Mean    SD     Skewness   Kurtosis   Reliability α   Significance tests: a-b
Ability – Test-taking motivation        160   4.16a   0.93   -2.00      5.40       0.87            -
Personality – Test-taking motivation    160   4.21b   0.87   -2.01      5.96       0.90            z = -1.29 (p = 0.20)
Ability – Test-taking attitudec         160   3.48a   1.10   -0.01      -0.22      0.75            -
Personality – Test-taking attitudec     160   3.53b   1.07   -0.02      -0.15      0.82            t = 1.77 (p = 0.078)

The five-point scale used to rate the items: 1, strongly disagree; 2, disagree; 3, neither disagree nor agree; 4, agree; 5, strongly agree.
a, Test Attitude Survey scale scores for the ability test; b, Test Attitude Survey scale scores for the personality test; c, the scales were reversed for purposes of the analysis so that the direction and meaning of items and scale scores are similar for all scales.
α, Cronbach’s alpha.

The skewness and kurtosis coefficients reported for the TM scale represented non-normal distributions with respect to both the cognitive ability and personality tests (skewness and kurtosis between -2 and +2 are considered acceptable for assuming a normal univariate distribution) (George & Mallery, 2010). Consequently, the Shapiro–Wilk test was performed to test the normality assumption statistically, and the assumption was rejected (p < 0.001). The score distributions appeared to be the same for both applications of the measure. For this reason, the non-parametric alternative to the dependent t-test (the Wilcoxon signed-rank test) was used to determine the statistical difference between scale scores (Field, 2009). However, the TA scale scores were normally distributed according to the Shapiro–Wilk test and the dependent t-test was applied.
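A minimal SciPy sketch of the distributional checks and paired comparisons reported here is given below; the study used SPSS, and the synthetic arrays merely stand in for the 160 paired scale scores.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical paired scale scores standing in for the study's data (N = 160)
ability_tm, personality_tm = rng.normal(4.2, 0.9, (2, 160))
ability_ta, personality_ta = rng.normal(3.5, 1.1, (2, 160))

# Shapiro-Wilk test of the normality assumption for a scale
print(stats.shapiro(ability_tm))

# Non-normal TM scores: Wilcoxon signed-rank test on the paired scores
print(stats.wilcoxon(ability_tm, personality_tm))

# Normally distributed TA scores: dependent (paired) t-test
print(stats.ttest_rel(ability_ta, personality_ta))
```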
The statistical results are further reported per research question, as formulated earlier in the article.

Question 1: What are the job applicants’ attitudinal responses to supervised online personality and cognitive ability tests administered conjointly? When considering the means of the TAS scales presented in Table 3, it can be seen that both the cognitive ability and personality assessments showed high mean scores on the TM scale (4.16 and 4.21, respectively; high mean scores represent a positive test-taking attitude). With regard to the means for the TA scale, indications were that both tests were experienced predominantly positively, with mean scores of 3.48 and 3.53, respectively.

Question 2: Are there significant differences in job applicants’ attitudinal responses to supervised online personality and cognitive
ability tests administered conjointly? The Wilcoxon z-statistic and the t-test results presented in Table 3 showed non-significant differences between the TAS scores for the two test types (cognitive ability and personality) with respect to which comparisons were made [TM: z(160) = -1.29, p = 0.20; TA: t(160) = 1.77, p = 0.08].
Question 3: Do demographical differences (e.g. ethnic origin, gender, educational level and position applied for) relate to job applicants’ attitudinal responses to supervised online personality and cognitive ability tests administered conjointly? The demographic data from this study showed a significant deviation from the assumptions required for performing parametric statistical analysis techniques such as the ANOVA test (Field, 2009). The Kruskal–Wallis test is a non-parametric version of the one-way ANOVA test and can be used when normality assumptions are violated, when cell sizes are small (n < 40) and when differences in cell sizes are large. The Kruskal–Wallis test (Chi-square statistic) was therefore used to compare the total ranks between the demographic groups with respect to each of the TAS scales in this study.
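A minimal SciPy sketch of such a Kruskal–Wallis comparison is given below; the synthetic scores are hypothetical stand-ins for the study’s group data, with group sizes matching the ethnic-group comparison reported below.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical TM scale scores for the African group (n = 116) and the
# composite of the remaining groups (n = 40)
african = rng.normal(4.3, 0.8, 116)
composite = rng.normal(4.0, 0.9, 40)

# Kruskal-Wallis rank test comparing the groups' total ranks
h_stat, p_value = stats.kruskal(african, composite)
print(h_stat, p_value)
```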
With respect to each individual subscale, the Chi-square statistics showed that age did not have a significant effect on any of the TAS subscale scores for the cognitive ability tests [TM: χ2(1,160) = 1.44, p = 0.23; TA: χ2(1,160) = 2.9, p = 0.09] or the personality test [TM: χ2(1,160) = 0.44, p = 0.51; TA: χ2(1,160) = 3.17, p = 0.08].

Educational level did not have a significant effect on the TAS subscale scores for the cognitive ability tests [TM: χ2(1,160) = 5.43, p = 0.07; TA: χ2(1,160) = 3.34, p = 0.19] or the personality test [TM: χ2(1,160) = 2.72, p = 0.51; TA: χ2(1,160) = 1.52, p = 0.47], nor did the position applied for have a significant effect on the TAS subscale scores for the cognitive ability tests [TM: χ2(1,160) = 0.17, p = 0.68; TA: χ2(1,160) = 0.12, p = 0.72] or the personality test [TM: χ2(1,160) = 0.5, p = 0.82; TA: χ2(1,160) = 0.39, p = 0.53].

With respect to ethnic groups, the African group differed statistically significantly on all the TAS scales when compared to the composite of the remaining groups (mixed race, Indian and white). The Kruskal–Wallis test showed that the remaining groups’ TAS scores did not differ significantly from each other with respect to both tests (p-values between 0.56 and 0.87), which justified the grouping formed to increase the power of the comparative statistics between the composite group (n = 40) and the African group (n = 116). The African group appeared to be significantly more motivated than the rest of the group with respect to completing both the cognitive ability tests [TM: χ2(1,160) = 5.63, p = 0.02] and the personality test [TM: χ2(1,160) = 5.75, p = 0.016] and had a significantly more positive general attitude towards both the cognitive ability tests [TA: χ2(1,160) = 7.16, p = 0.007] and the personality test [TA: χ2(1,160) = 7.73, p = 0.005].

Question 4: Do demographical differences (e.g. ethnic origin, gender, educational level and position applied for) relate to significant differences in job applicants’ attitudinal responses to supervised online personality and cognitive ability tests administered conjointly? The Chi-square statistics showed that, with respect to the differences between the cognitive ability and personality tests, age did not significantly affect score differences on the two TAS subscales, namely TM [χ2(1,160) = 3.52, p = 0.07] and TA [χ2(1,160) = 0.575, p = 0.51].

Similarly, the Chi-square statistics showed that educational level did not significantly affect score differences on the two TAS subscales [TM: χ2(2,160) = 1.80, p = 0.40; TA: χ2(2,160) = 2.895, p = 0.206].

Furthermore, the Chi-square statistics showed that ethnicity did not significantly affect score differences on the two TAS subscales [TM: χ2(1,160) = 0.21, p = 0.65; TA: χ2(1,160) = 2.863, p = 0.09].

However, the Chi-square statistics showed that the factor ‘position applied for’ had a non-significant effect on score differences on the TM subscale [χ2(1,160) = 0.84, p = 0.36] but a significant effect on the TA subscale [χ2(1,160) = 4.759, p = 0.03]. The applicants for the leadership positions (team leaders and branch managers) reacted statistically significantly more positively towards the cognitive ability tests than towards the personality test, when compared to the applicants for sales positions.
Discussion
Outline of the results
The main purpose of the research was to determine the test-taking attitudinal responses of a diverse sample of job
applicants towards a personality test and cognitive ability
test administered conjointly in a supervised online test
session. The job applicants showed attitudinal responses that
were generally positive towards both the cognitive ability
and personality tests that were used in a selection drive by a
financial services company. On average, the respondents
reacted positively towards both the measures in terms of TM
and general TA. This is a promising finding as personality
and cognitive ability tests are popular choices for inclusion in
test batteries because of their generalisability of validity
evidence and cost-effectiveness (Gilliland & Steiner, 2012).
The research found that the sample group did not respond
significantly differently to the cognitive ability test and the
personality test used conjointly in the selection process.
These results support the findings of Rosse et al. (1994) that
the compartmentalisation of selection practices (e.g.
interview, reference checks, psychological tests and work
sample tests) does not occur as readily within categories (e.g.
psychological tests) and that tests may compensate for each
other in terms of perceived relevancy and fairness, resulting
in job applicants forming a heuristic evaluation of the test
battery. As Hausknecht et al. (2004) have pointed out, the
positive evaluation of a test battery and the forming of
positive TAs and belief in tests appear to be mostly situated
in procedural justice perceptions of job relatedness, face
validity and perceived predictive validity of tests. The tests
used in this study had been carefully aligned to the
requirements of the relevant jobs using job profiling processes
to ensure job relatedness and high face validity.
In the literature, a variety of additional factors that influence
TM and attitudes towards tests are identified and these
include interpersonal treatment during testing (Schleicher et
al., 2006), explanations and selection information (Truxillo et
al., 2009), test format, question steering (ability to fake) and
question invasiveness (Nikolaou et al., 2015). The TAs of the
job applicants in this study may have been positively
influenced as care was taken before and during the
administration of the tests to adhere to Gilliland and Steiner’s
(2012, p. 633) version of justice rules for enhancing the
perceived fairness of the selection process. With reference to
the invasion of privacy theory of Bauer et al. (2006), the propriety of questions may be more of an issue in personality tests than in cognitive ability tests and may become salient only when blatantly violated (Gilliland, 2008). In terms of the AART of Ployhart and Harold (2004), favourable testing conditions (selection procedure justice) may counteract the attribution trigger that would otherwise result in a critical stance towards individual tests. The
forced-choice-item format of the personality test used in this
study had the potential of eliciting negative responses as the
perceived influence over the outcome of the assessment was
reduced (Van Vianen et al., 2004). However, it appears that
this may not have differentially skewed TAs in favour of the
cognitive ability tests because of the compensatory effects of
using the tests in combination (Rosse et al., 1994). Cognitive
ability tests generally elicit more positive responses from job
applicants because of the perceived scientific validity of these
tests (Lievens et al., 2003) and the use of concrete test items
instead of abstract test items (as was the case in this study)
(Gilliland & Steiner, 2012). The three-option forced-choice-item format of the personality test that was applied in this
study can be considered less laborious and cognitively
challenging than a four-option format as far as making
choices is concerned, leading to less anxiety and a more
positive experience of the testing process (Vasilopoulos,
Cucina, Dyomina, Morewitz & Reilly, 2006).
The research finding of Hausknecht et al. (2004) that
applicants’ perceptions of tests are not related to their
personal characteristics (i.e. age, gender and ethnic
background) is confirmed in this study with respect to all the
demographical variables except ethnic group and job type.
The finding of statistically significant higher TM and positive
TA of the African group compared to the other ethnic groups
refutes earlier findings of negative perceptions held by black
groups in South Africa and abroad (De Jong & Visser, 2000;
Foxcroft & Roodt, 2013; Whitman et al., 2014). The general
drive of companies towards the restitution of previously
disadvantaged groups and the use of tests that do not unfairly
exclude any group, which are principles stipulated in the
Employment Equity Act (Republic of South Africa, 2013), may
have raised the employment expectations of the African
group and resulted in a higher TM and more positive TA.
This proposition is supported by social identity theoretical perspectives suggesting that when a job applicant’s social identity matches a perceived organisational identity (such as culture, values and beliefs), a positive expectation of the selection outcome follows, which is associated with high TM and a positive TA (Gilliland & Steiner, 2012; Nikolaou et al., 2015).
Compared to the job applicants applying for leadership
positions, job applicants applying for the sales positions had
a statistically significantly more negative attitude towards the
cognitive ability test than towards the personality test. The
reason for this difference is not clear and needs further
investigation. However, it may be argued that the more
negative TA of sales position applicants may be attributable
to self-serving bias formed by a negative perception of
perceived performance in a test that they may have
experienced as difficult (Whitman et al., 2014). Job applicants
generally view personality tests as more controllable than
cognitive ability tests, which may contribute to self-serving
bias manifesting as a negative TA towards a cognitive ability
test that is perceived as particularly difficult and consequently
less valid (Van Vianen et al., 2004).
Steiner and Gilliland (1996) argue that people judge implicitly
that widely used testing techniques must be valid, resulting
in a favourable view of tests (belief in tests). So far, almost universally positive applicant reactions have been reported for Internet-based test batteries, and this in all likelihood also positively influenced the motivation and attitudes of this study’s job applicants towards the personality and cognitive ability tests they completed (Anderson, 2003; Mead, 2001; Sylva & Mol, 2009).
Practical implication
The findings of this study hold promise for the continued
use of the relevant instruments in the financial services
company because positive TAs have been found to relate to
perceived or actual performance and the perceived fairness
of the testing process, which in turn promotes positive
attitudes towards the selection process and the company in
general (Burns et al., 2015; Schmitt, 2013). Furthermore, a favourable perception among job applicants of the instruments a company uses for selection purposes is advantageous for business in
that it plays an important role in attracting and retaining
workers from different ethnic groups, influences societal
perceptions of the company’s commitment to fair selection
practice and reduces the likelihood of legal action instituted
by unsuccessful applicants (Gilliland & Steiner, 2012;
Hausknecht et al., 2004). Therefore, investing resources in
ensuring that the choice of tests and the testing conditions
meet the selection justice rules for perceived fairness of the
selection process (Gilliland & Steiner, 2012, p. 633) may be
considered a good investment that will benefit the company
and broader society.
Limitations and future directions
The most important limitations of the study were the
relatively small convenience sample and the context-specific
nature of the study as these have an impact on the
generalisability of the findings to the broader population and
other testing contexts. The design of the study was non-experimental; therefore, the impact of different test conditions on differences in job applicants’ TM and attitudinal reactions towards cognitive ability and personality tests was not determined. Such controls would, however, pose ethical
and legal challenges in field studies as job applicants have a
right to prescribed best practice testing conditions in terms of
the Constitution of South Africa and this right is guarded by
law and enforced by the Health Professions Council of South
Africa (Foxcroft & Roodt, 2013).
It is proposed that future studies should focus on experimental
research designs and simulations allowing for the systematic
control of the effect of testing conditions using subjects
who participate on a voluntary basis under mock testing
conditions.
Conclusion
Research on how job applicants perceive and respond to tests
has grown rapidly in popularity in recent years as
practitioners have come to realise the importance of eliciting
positive applicant reactions towards selection processes.
Research has shown that the choice of tests for a selection
process and the test conditions that apply during the
administration of the tests may influence the TAs and
motivational reactions adopted by the job applicants. This
study produced evidence that a diverse group of job
applicants applying for positions in a South African financial
services company showed the same level of positive
attitudinal and motivational responses towards personality
and cognitive ability selection tests used conjointly in a
selection battery. This finding supports the further use of the
instruments in the relevant company provided that the
psychometric properties of the specified measures have been
shown to be acceptable for selection purposes in the company.
Acknowledgements
This work is based on the research supported in part by the
National Research Foundation of South Africa (Grant
Number 103796).
Competing interests
The authors declare that they have no financial or personal
relationships that may have inappropriately influenced them
in writing this article.
Authors’ contributions
R.V. conceptualised the study, did the initial literature review,
planned and executed the empirical survey, did the initial
statistical analyses and presented the findings as part of a
master’s degree study under supervision of the second
author. R.V. also reviewed and adjusted the final draft of the
article. P.S. wrote the article for publication, refined the
literature review, refined the methodology and statistical
analyses and presented the findings from the study.
References
Anderson, N. (2003). Applicant and recruiter reactions to new technology in selection:
A critical review and agenda for future research. International Journal of Selection
and Assessment, 11(2–3), 121–136. https://doi.org/10.1111/1468-2389.00235
Anderson, N., Salgado, J.F., & Hülsheger, U.R. (2010). Applicant reactions in selection:
Comprehensive meta-analysis into reaction generalization versus situational
specificity. International Journal of Selection and Assessment, 18(3), 291–304.
https://doi.org/10.1111/j.1468-2389.2010.00512.x
Arvey, R.D., Strickland, W., Drauden, G., & Martin, C. (1990). Motivational components
of test taking. Personnel Psychology, 43(4), 695–716. https://doi.org/10.1111/
j.1744-6570.1990.tb00679.x
Baron, H., & Austin, J. (2000, April). Measuring ability via the Internet: Opportunities
and issues. Paper presented at the 15th Annual Conference of the Society for
Industrial and Organizational Psychology, New Orleans, LA.
Bauer, T.N., Truxillo, D.T., Tucker, J.S., Weathers, V., Bertolino, M., Erdogan, B., et al.
(2006). Selection in the information age: The role of personal information privacy
concerns and computer use in understanding applicant reactions. Journal of
Management, 32, 601–621. https://doi.org/10.1177/0149206306289829
Burns, G.N., Fillipowski, J.N., Morris, M.B., & Shoda, E.A. (2015). Impact of electronic
warnings on online personality scores and test-taker reactions in an applicant
simulation. Computers in Human Behavior, 48(February), 163–172. https://doi.
org/10.1016/j.chb.2015.01.051
Chu, M.-W., Guo, Q., & Leighton, J.P. (2014). Students’ interpersonal trust and
attitudes towards standardised tests: Exploring affective variables related to
student assessment. Assessment in Education: Principles, Policy and Practice,
21(2), 167–192. https://doi.org/10.1080/0969594X.2013.844094
De Jong, A., & Visser, D. (2000). Organisational justice rules as determinants of black
and white employees’ fairness perceptions of personnel selection techniques.
South African Journal of Industrial Psychology, 26(1), 29–38. https://doi.org/
10.4102/sajip.v26i1.696
Donald, F., Thatcher, A., & Milner, K. (2014). Psychological assessment for redress in
South African organisations: Is it just? South African Journal of Psychology, 44(3),
333–349. https://doi.org/10.1177/0081246314535685
Field, A. (2009). Discovering statistics using SPSS. London: SAGE.
Foxcroft, C., & Roodt, G. (2013). An introduction to psychological assessment in the
South African context. (2nd edn.). Oxford, UK: Oxford University Press.
George, D., & Mallery, P. (2010). SPSS for Windows step by step: A simple guide and
reference, 17.0 update. (10th edn.). Boston, MA: Pearson.
Gilliland, S. (2008). The tails of justice: A critical examination of the dimensionality of
organizational justice constructs. Human Resource Management Review, 18,
271–281. https://doi.org/10.1016/j.hrmr.2008.08.001
Gilliland, S.W. (1993). The perceived fairness of selection systems: An organizational
justice perspective. Academy of Management Review, 18(4), 694–734.
Gilliland, S.W., & Steiner, D.D. (2012). Applicant reactions to testing and selection. In
N. Schmitt (Ed.), The Oxford handbook of personnel assessment and selection (pp.
629–666). New York: Oxford University Press. https://doi.org/10.1093/
oxfordhb/9780199732579.013.0028
Gliner, J.A., Morgan, G.A., & Leech, N.L. (2009). Research methods in applied settings:
An integrated approach to design and analysis. (2nd edn.). New York: Routledge
Taylor & Francis Group.
Hausknecht, J.P., Day, D.V., & Thomas, S.C. (2004). Applicant reactions to selection
procedures: An updated model and meta-analysis. Personnel Psychology, 57(3),
639–683. https://doi.org/10.1111/j.1744-6570.2004.00003.x
Hülsheger, U.R., & Anderson, N. (2009). Applicant perspectives in selection: Going
beyond preference reactions. International Journal of Selection and Assessment,
17(4), 335–345. https://doi.org/10.1111/j.1468-2389.2009.00477.x
Kline, R.B. (2011). Principles and practice of structural equation modelling. (3rd edn.).
New York, NY: Guilford Press.
Konradt, U., Warszta, T., & Ellwart, T. (2013). Fairness perceptions in web-based
selection: Impact on applicants’ pursuit intentions, recommendation intentions,
and intentions to reapply. International Journal of Selection and Assessment,
21(2), 155–169. https://doi.org/10.1111/ijsa.12026
LaHuis, D.M., Perreault, N.E., & Ferguson, M.W. (2003). The effect of legitimizing
explanations on applicants’ perceptions of selection assessment fairness. Journal
of Applied Social Psychology, 33(10), 2198–2215. https://doi.org/10.1111/j.1559-1816.2003.tb01881.x
Lievens, F., De Corte, W., & Brysse, K. (2003). Applicant perceptions of selection
procedures: The role of selection information, belief in tests, and comparative
anxiety. International Journal of Selection and Assessment, 11(1), 65–75.
https://doi.org/10.1111/1468-2389.00227
McCarthy, J., Hrabluik, C., & Jelley, R.B. (2009). Progression through the ranks:
Assessing employee reactions to high-stakes employment testing. Personnel
Psychology, 62(4), 793–832. https://doi.org/10.1111/j.1744-6570.2009.01158.x
McCarthy, J.M., & Goffin, R.D. (2003). Is the test attitude survey psychometrically
sound? Educational and Psychological Measurement, 63(3), 446–464. https://doi.
org/10.1177/0013164402251038
Mead, A.D. (2001, April). How well does web-based testing work? Results of a survey
of users of NetAssess. In F.L. Oswald (Chair), Computers = Good? How test-user
and test-taker perceptions affect technology-based employment testing.
Symposium conducted at the 16th Annual Conference of the Society for Industrial
and Organizational Psychology, San Diego, CA.
Muchinsky, P., Kriek, H., & Schreuder, A. (2002). Personnel psychology. (2nd edn.).
Cape Town, South Africa: Oxford University Press.
Nikolaou, I., Bauer, T.N., & Truxillo, D.M. (2015). Applicant reactions to selection
methods: An overview of recent research and suggestions for the future. In I.
Nikolaou & J.K. Oostrom (Eds.), Employee recruitment, selection, and assessment:
Contemporary issues for theory and practice (Current issues in work and
organizational psychology) (pp. 80–96). New York: Psychology Press.
O’Neill, T., Goffin, R.D., & Gellatly, I.R. (2012). The knowledge, skill, and ability
requirements for teamwork: Revisiting the Teamwork-KSA test’s validity.
International Journal of Selection and Assessment, 20, 36–52. https://doi.org/
10.1111/j.1468-2389.2012.00578.x
Owen, K., & Taljaard, J. (1996). Handbook for the use of psychological and scholastic
tests of the HSRC. Pretoria: Human Sciences Research Council.
Ployhart, R.E., & Harold, C.M. (2004). The applicant attribution-reaction theory
(AART): An integrative theory of applicant attributional processing. International
Journal of Selection and Assessment, 12, 84–98. https://doi.org/10.1111/j.0965-075X.2004.00266.x
Proost, K., Derous, E., Schreurs, B., Hagtvet, K.A., & De Witte, K. (2008). Selection test
anxiety: Investigating applicants’ self- vs other-referenced anxiety in a real
selection setting. International Journal of Selection and Assessment, 16(1), 14–26.
https://doi.org/10.1111/j.1468-2389.2008.00405.x
Republic of South Africa. (2013). Employment Equity Amendment Act, No. 47 of 2013.
Government Gazette. (Vol. 583, No. 37238 of January 16, 2014). Pretoria:
Government Printers.
Reynolds, D.H., & Lin, L.F. (2003, April). An unfair problem? Subgroup reactions to
Internet selection techniques. Paper presented at the Annual Conference of the
Society for Industrial and Organizational Psychology, Orlando, FL.
Reynolds, D.H., Sinar, E.F., & McClough, A.C. (2000, April). Evaluation of an Internet-based
selection procedure. In N.J. Mondragon (Chair), Beyond the demo:
The empirical nature of technology-based assessments. Symposium conducted at
the 15th Annual Conference of the Society for Industrial and Organizational
Psychology, New Orleans, LA.
Rolland, F., & Steiner, D.D. (2007). Test-taker reactions to the selection process: Effects
of outcome favorability, explanations, and voice on fairness perceptions. Journal
of Applied Social Psychology, 37(12), 2800–2826. https://doi.org/10.1111/j.1559-1816.2007.00282.x
Rosse, J.G., Miller, J.L., & Stecher, M.D. (1994). A field study of job applicants’ reactions
to personality and cognitive ability testing. Journal of Applied Psychology, 79(6),
987–992. https://doi.org/10.1037/0021-9010.79.6.987
Ryan, A.M., & Huth, M. (2008). Not much more than platitudes? A critical look at the
utility of applicant reactions research. Human Resource Management Review,
18(3), 119–132. https://doi.org/10.1016/j.hrmr.2008.07.004
Rynes, S.L., & Connerley, M.L. (1993). Applicant reactions to alternative selection
procedures. Journal of Business and Psychology, 7, 261–277. https://doi.org/10.1007/BF01015754
Saville, P., Nyfield, G., McCarthy, T., & Gibbons, G. (1997). Occupational testing course
notes. Thames Ditton, Surrey: SHL Group.
Schleicher, D.J., Venkataramani, V., Morgeson, F.P., & Campion, M.A. (2006). So you
didn’t get the job … Now what do you think? Examining opportunity-to-perform
fairness perceptions. Personnel Psychology, 59, 559–590. https://doi.org/
10.1111/j.1744-6570.2006.00047.x
Schmitt, N. (2013). Personality and cognitive ability as predictors of effective
performance at work. Annual Review of Organizational Psychology and
Organizational Behavior, 1(1), 45–65. https://doi.org/10.1146/annurev-orgpsych-031413-091255
Smith, J.A. (1997). An examination of test-taking attitudes and response distortion on
a personality test. Unpublished doctoral thesis, Virginia Polytechnic Institute and
State University, Blacksburg, VA.
Steiner, D.D., & Gilliland, S.W. (1996). Fairness reactions to personnel selection
techniques in France and the United States. Journal of Applied Psychology, 81,
134–141. https://doi.org/10.1037/0021-9010.81.2.134
Sylva, H., & Mol, S.T. (2009). E-recruitment: A study into applicant perceptions of an
online application system. International Journal of Selection and Assessment,
17(3), 311–323. https://doi.org/10.1111/j.1468-2389.2009.00473.x
Truxillo, D.M., Bodner, T.E., Bertolino, M., Bauer, T.N., & Yonce, C.A. (2009). Effects of
explanations on applicant reactions: A meta-analytic review. International Journal
of Selection and Assessment, 17(4), 346–361. https://doi.org/10.1111/j.1468-2389.
2009.00478.x
Van de Vijver, F.J.R., & Leung, K. (1997). Methods and data analysis for cross-cultural
research. Newbury Park, CA: SAGE.
Van Vianen, A.E.M., Taris, R., Scholten, E., & Schinkel, S. (2004). Perceived fairness in
personnel selection: Determinants and outcomes in different stages of the
assessment procedure. International Journal of Selection and Assessment,
12(1/2), 149–159. https://doi.org/10.1111/j.0965-075X.2004.00270.x
Vasilopoulos, N.L., Cucina, J.M., Dyomina, N.V., Morewitz, C.L., & Reilly, R.R. (2006).
Forced-choice personality tests: A measure of personality and cognitive ability?
Human Performance, 19(3), 175–199. https://doi.org/10.1207/s15327043hup1903_1
Weiner, B. (1985). An attribution theory of achievement motivation and emotion.
Psychological Review, 92, 548–573. https://doi.org/10.1037/0033-295X.92.4.548
Whitman, D.S., Kraus, E., & Van Rooy, D.L. (2014). Emotional intelligence among black
and white job applicants: Examining differences in test performance and test
reactions. International Journal of Selection and Assessment, 22(2), 199–210.
https://doi.org/10.1111/ijsa.12069
Wiechmann, D., & Ryan, A.M. (2003). Reactions to computerized testing in selection
contexts. International Journal of Selection and Assessment, 11(2–3), 215–229.
https://doi.org/10.1111/1468-2389.00245
Zibarras, L.D., & Patterson, F. (2015). The role of job relatedness and self-efficacy in
applicant perceptions of fairness in a high-stakes selection setting. International
Journal of Selection and Assessment, 23(4), 332–344. https://doi.org/10.1111/
ijsa.12118