Measurement and Scaling
Scaling is the procedure of measuring objects and assigning numbers to them
according to specified rules. In other words, scaling is the process of locating
the measured objects on a continuum, that is, a continuous sequence of numbers
to which the objects are assigned.
In research, numbers are usually assigned to the qualitative traits of an
object because quantitative data facilitate statistical analysis and make it
easier to communicate measurement rules and results.
Levels of Measurements
There are four different scales of measurement, and any data set can be
classified as belonging to one of them. The four types of scales are:
Nominal Scale
Ordinal Scale
Interval Scale
Ratio Scale
Nominal Scale
A nominal scale is the 1st level of measurement, in which numbers serve only
as “tags” or “labels” to classify or identify objects. A nominal scale usually
deals with non-numeric variables, or with numbers that carry no quantitative
value.
Characteristics of Nominal Scale
Ordinal Scale
The ordinal scale is the 2nd level of measurement; it reports the ordering and
ranking of data without establishing the degree of variation between them.
Ordinal represents “order.” Ordinal data are known as qualitative or
categorical data; they can be grouped, named, and also ranked.
Characteristics of the Ordinal Scale
Assessing the frequency of occurrence
o Very often
o Often
o Not often
o Not at all
Assessing the degree of agreement
o Totally agree
o Agree
o Neutral
o Disagree
o Totally disagree
Interval Scale
The interval scale is the 3rd level of the measurement scale. It is defined as
a quantitative scale in which the difference between two values is meaningful
and measured in equal units. In other words, the variables are measured in an
exact manner rather than a relative one; however, the zero point of an
interval scale is arbitrary (as with temperature in degrees Celsius).
Common examples of interval-scale techniques include:
Likert Scale
Net Promoter Score (NPS)
Bipolar Matrix Table
Ratio Scale
The ratio scale is the 4th level of the measurement scale and is quantitative.
It is a type of variable measurement scale that allows researchers to compare
differences or intervals. The ratio scale has a unique feature: it possesses a
true zero point (origin).
Characteristics of Ratio Scale:
Rank order scaling is often used to measure preference for a brand and its
attributes. Ranking data are typically obtained from respondents in conjoint
analysis (a statistical technique used to determine how a brand and the
combination of its attributes, such as features, functions, and benefits,
influence a person's decision-making), as it forces respondents to
discriminate among the stimulus objects. Rank order scaling yields ordinal
data.
Once the points are allocated, the attributes are scaled by totalling the
points assigned by the respondents to each attribute and then dividing the
total by the number of respondents under analysis. This type of information
cannot be obtained from rank order data unless they are transformed into
interval data. Constant sum scaling is considered an ordinal scale because of
its comparative nature and lack of generalization.
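The scoring rule described above (sum each attribute's points, then divide by the number of respondents) can be sketched in a few lines. The attribute names and point allocations below are hypothetical, assuming each respondent distributes a fixed budget of 100 points:

```python
# Constant sum scaling: each respondent allocates a fixed budget of
# points (here 100) across the attributes being compared.
# Hypothetical data: three respondents rating three attributes.
allocations = [
    {"features": 50, "price": 30, "design": 20},
    {"features": 40, "price": 40, "design": 20},
    {"features": 60, "price": 25, "design": 15},
]

n = len(allocations)
# Scale each attribute by summing its points and dividing by the
# number of respondents, as described above.
mean_points = {
    attr: sum(resp[attr] for resp in allocations) / n
    for attr in allocations[0]
}
print(mean_points)  # features: 50.0, price: ~31.7, design: ~18.3
```

The resulting means lie on a common 0–100 scale, which is what allows the attributes to be compared directly.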
Thus, Q-Sort Scaling helps in assigning ranks to different objects within the
same group, and the differences among the groups (piles) are visible.
The continuous rating scale is also called a graphic rating scale. Here the
respondent can place a mark anywhere on the line based on his or her opinion
and is not restricted to selecting from values preset by the researcher. The
continuous scale can take many forms: it can be either vertical or horizontal;
scale points, in the form of numbers or brief descriptions, may be provided,
and if they are, the scale points might be few or many.
Once the ratings are obtained, the researcher splits up the line into several
categories and then assigns the scores depending on the category in which
the ratings fall. We can say that the continuous rating scale possesses the
characteristics of description, order, and distance. By description, we mean
the unique tags, names, or labels used to designate each scale value. Order
refers to the relative position of the descriptors, and distance means that an
absolute difference between the descriptors is known and can be expressed in
unitary terms.
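The scoring step described above, splitting the line into categories and assigning a score by where the mark falls, can be sketched as follows. The line length (100 units) and the number of categories (five) are hypothetical choices:

```python
# Continuous (graphic) rating scale: a respondent marks a point on a
# 0-100 line; the researcher later splits the line into equal-width
# categories and assigns a score based on where the mark falls.
# Hypothetical setup: five equal-width categories scored 1-5.

def score_mark(position: float, line_length: float = 100.0,
               n_categories: int = 5) -> int:
    """Map a mark's position on the line to a category score."""
    if not 0.0 <= position <= line_length:
        raise ValueError("mark must lie on the line")
    width = line_length / n_categories
    # A mark exactly at the right end falls into the last category.
    return min(int(position // width) + 1, n_categories)

print(score_mark(7.0))    # 1  (first fifth of the line)
print(score_mark(55.0))   # 3
print(score_mark(100.0))  # 5
```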
The following are the most commonly used itemized rating scales:
Reliability
Test-Retest Reliability
When researchers measure a construct that they assume to be consistent across time, then the
scores they obtain should also be consistent across time. Test-retest reliability is the extent to
which this is actually the case. For example, intelligence is generally thought to be consistent
across time. A person who is highly intelligent today will be highly intelligent next week.
This means that any good measure of intelligence should produce roughly the same scores for
this individual next week as it does today. Clearly, a measure that produces highly
inconsistent scores over time cannot be a very good measure of a construct that is supposed to
be consistent.
Assessing test-retest reliability requires using the measure on a group of
people at one time, using it again on the same group at a later time, and then
examining the test-retest correlation between the two sets of scores. This is
typically done by graphing the data in a scatterplot and computing Pearson's
r. In general, a test-retest correlation of +.80 or greater is considered to
indicate good reliability.
Again, high test-retest correlations make sense when the construct being measured is
assumed to be consistent over time, which is the case for intelligence, self-esteem, and the
Big Five personality dimensions. But other constructs are not assumed to be stable over time.
The very nature of mood, for example, is that it changes. So a measure of mood that produced
a low test-retest correlation over a period of a month would not be a cause for concern.
Internal Consistency
Internal consistency is the consistency of people's responses across the items
on a multiple-item measure. Like test-retest reliability, it can only be
assessed by collecting and analyzing data. One approach is to look at a
split-half correlation: the items are split into two sets, such as the first
and second halves of the items or the even- and odd-numbered items; a score is
computed for each set, and the relationship between the two sets of scores is
examined.
Perhaps the most common measure of internal consistency used by researchers in psychology
is a statistic called Cronbach’s α (the Greek letter alpha). Conceptually, α is the mean of all
possible split-half correlations for a set of items. For example, there are 252 ways to split a
set of 10 items into two sets of five. Cronbach’s α would be the mean of the 252 split-half
correlations. Note that this is not how α is actually computed, but it is a correct way of
interpreting the meaning of this statistic. Again, a value of +.80 or greater is generally taken
to indicate good internal consistency.
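In practice, α is computed from item and total-score variances rather than by averaging all split-half correlations. A minimal sketch of the standard computational formula, using the same kind of hypothetical 6-item data:

```python
from statistics import pvariance

# Hypothetical responses of six people to a 6-item questionnaire
# (each item scored 1-5); one row per person.
responses = [
    [4, 5, 4, 4, 5, 4],
    [2, 2, 3, 2, 2, 3],
    [5, 4, 5, 5, 4, 5],
    [3, 3, 2, 3, 3, 2],
    [4, 4, 4, 5, 4, 4],
    [1, 2, 1, 2, 1, 2],
]

k = len(responses[0])  # number of items
# Variance of each item across respondents, and of the totals.
item_vars = [pvariance([r[i] for r in responses]) for i in range(k)]
total_var = pvariance([sum(r) for r in responses])

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance)
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(round(alpha, 2))
```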
Interrater Reliability
Interrater reliability is the extent to which different observers are
consistent in their judgments, for example when multiple raters code the same
behaviour. It is often assessed using Cronbach's α when the judgments are
quantitative, or Cohen's κ when they are categorical.
Validity
Validity is the extent to which the scores from a measure represent the
variable they are intended to represent. But how do researchers make this
judgment? We have already considered one
factor that they take into account—reliability. When a measure has good test-retest reliability
and internal consistency, researchers should be more confident that the scores represent what
they are supposed to. There has to be more to it, however, because a measure can be
extremely reliable but have no validity whatsoever. As an absurd example, imagine someone
who believes that people’s index finger length reflects their self-esteem and therefore tries to
measure self-esteem by holding a ruler up to people’s index fingers. Although this measure
would have extremely good test-retest reliability, it would have absolutely no validity. The
fact that one person’s index finger is a centimetre longer than another’s would indicate
nothing about which one had higher self-esteem.
Discussions of validity usually divide it into several distinct “types.” But a good way to
interpret these types is that they are other kinds of evidence—in addition to reliability—that
should be taken into account when judging the validity of a measure. Here we consider three
basic kinds: face validity, content validity, and criterion validity.
Face Validity
Face validity is the extent to which a measurement method appears “on its face” to measure
the construct of interest. Most people would expect a self-esteem questionnaire to include
items about whether they see themselves as a person of worth and whether they think they
have good qualities. So a questionnaire that included these kinds of items would have good
face validity. The finger-length method of measuring self-esteem, on the other hand, seems to
have nothing to do with self-esteem and therefore has poor face validity. Although face
validity can be assessed quantitatively—for example, by having a large sample of people rate
a measure in terms of whether it appears to measure what it is intended to—it is usually
assessed informally.
Face validity is at best a very weak kind of evidence that a measurement method is measuring
what it is supposed to. One reason is that it is based on people’s intuitions about human
behaviour, which are frequently wrong. It is also the case that many established measures in
psychology work quite well despite lacking face validity. The Minnesota Multiphasic
Personality Inventory-2 (MMPI-2) measures many personality characteristics and disorders
by having people decide whether each of its 567 statements applies to them—
where many of the statements do not have any obvious relationship to the construct that they
measure. For example, the items “I enjoy detective or mystery stories” and “The sight of
blood doesn’t frighten me or make me sick” both measure the suppression of aggression. In
this case, it is not the participants’ literal answers to these questions that are of interest, but
rather whether the pattern of the participants’ responses to a series of questions matches those
of individuals who tend to suppress their aggression.
Content Validity
Content validity is the extent to which a measure “covers” the construct of interest. For
example, if a researcher conceptually defines test anxiety as involving both sympathetic
nervous system activation (leading to nervous feelings) and negative thoughts, then his
measure of test anxiety should include items about both nervous feelings and negative
thoughts. Or consider that attitudes are usually defined as involving thoughts, feelings, and
actions toward something. By this conceptual definition, a person has a positive attitude
toward exercise to the extent that he or she thinks positive thoughts about exercising, feels
good about exercising, and actually exercises. So to have good content validity, a measure of
people’s attitudes toward exercise would have to reflect all three of these aspects. Like face
validity, content validity is not usually assessed quantitatively. Instead, it is assessed by
carefully checking the measurement method against the conceptual definition of the
construct.
Criterion Validity
Criterion validity is the extent to which people’s scores on a measure are correlated with
other variables (known as criteria) that one would expect them to be correlated with. For
example, people’s scores on a new measure of test anxiety should be negatively correlated
with their performance on an important school exam. If it were found that people’s scores
were in fact negatively correlated with their exam performance, then this would be a piece of
evidence that these scores really represent people’s test anxiety. But if it were found that
people scored equally well on the exam regardless of their test anxiety scores, then this would
cast doubt on the validity of the measure.
A criterion can be any variable that one has reason to think should be correlated with the
construct being measured, and there will usually be many of them. For example, one would
expect test anxiety scores to be negatively correlated with exam performance and course
grades and positively correlated with general anxiety and with blood pressure during an
exam. Or imagine that a researcher develops a new measure of physical risk taking. People’s
scores on this measure should be correlated with their participation in “extreme” activities
such as snowboarding and rock climbing, the number of speeding tickets they have received,
and even the number of broken bones they have had over the years. When the criterion is
measured at the same time as the construct, criterion validity is referred to as concurrent
validity; however, when the criterion is measured at some point in the future (after the
construct has been measured), it is referred to as predictive validity (because scores on the
measure have “predicted” a future outcome).
Criteria can also include other measures of the same construct. For example, one would
expect new measures of test anxiety or physical risk taking to be positively correlated with
existing measures of the same constructs. This is known as convergent validity.
Assessing convergent validity requires collecting data using the measure. Researchers John
Cacioppo and Richard Petty did this when they created their self-report Need for Cognition
Scale to measure how much people value and engage in thinking (Cacioppo & Petty, 1982)[1].
In a series of studies, they showed that people’s scores were positively correlated with their
scores on a standardized academic achievement test, and that their scores were negatively
correlated with their scores on a measure of dogmatism (which represents a tendency toward
obedience). In the years since it was created, the Need for Cognition Scale has been used in
literally hundreds of studies and has been shown to be correlated with a wide variety of other
variables, including the effectiveness of an advertisement, interest in politics, and juror
decisions (Petty, Briñol, Loersch, & McCaslin, 2009)[2].
Discriminant Validity
When they created the Need for Cognition Scale, Cacioppo and Petty also provided evidence
of discriminant validity by showing that people’s scores were not correlated with certain
other variables. For example, they found only a weak correlation between people’s need for
cognition and a measure of their cognitive style—the extent to which they tend to think
analytically by breaking ideas into smaller parts or holistically in terms of “the big picture.”
They also found no correlation between people’s need for cognition and measures of their test
anxiety and their tendency to respond in socially desirable ways. All these low correlations
provide evidence that the measure is reflecting a conceptually distinct construct.