Reliability (psychometrics)
For other uses, see Reliability (disambiguation). In psychometrics, reliability is used to describe the overall consistency of a measure. A measure is said to have a high reliability if it produces similar results under consistent conditions. For example, measurements of people's height and weight are often extremely reliable.[1][2]
Types
There are several general classes of reliability estimates:
Inter-rater reliability assesses the degree to which test scores are consistent when measurements are taken by different people using the same methods.
Test-retest reliability assesses the degree to which test scores are consistent from one test administration to the next. Measurements are gathered from a single rater who uses the same methods or instruments and the same testing conditions.[2] This includes intra-rater reliability.
Inter-method reliability assesses the degree to which test scores are consistent when there is a variation in the methods or instruments used. This allows inter-rater reliability to be ruled out. When dealing with forms, it may be termed parallel-forms reliability.[3]
Internal consistency reliability assesses the consistency of results across items within a test.[3]
Difference from validity
Reliability does not imply validity. That is, a reliable measure that is measuring something consistently may not be measuring what you want to be measuring. For example, while there are many reliable tests of specific abilities, not all of them would be valid for predicting, say, job performance. In terms of accuracy and precision, reliability is analogous to precision, while validity is analogous to accuracy.
While reliability does not imply validity, a lack of reliability does place a limit on the overall validity of a test. A test that is not perfectly reliable cannot be perfectly valid, either as a means of measuring attributes of a person or as a means of predicting scores on a criterion. While a reliable test may provide useful valid information, a test that is not reliable cannot possibly be valid.[4]
An example often used to illustrate the difference between reliability and validity in the experimental sciences involves a common bathroom scale. If someone who weighs 200 pounds steps on a scale 10 times and gets readings of 15, 250, 95, 140, etc., the scale is not reliable. If the scale consistently reads "150", then it is reliable but not valid. If it reads "200" each time, then the measurement is both reliable and valid.
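The scale example can be made concrete with a short simulation. In the sketch below (the reading lists are invented for illustration), the spread of repeated readings indexes reliability (precision), while the distance of the mean from the true 200-pound weight indexes validity (accuracy):

```python
import statistics

# Hypothetical readings for a person whose true weight is 200 lb.
unreliable = [15, 250, 95, 140, 310, 60, 180, 220, 40, 130]
reliable_invalid = [150, 150, 151, 149, 150, 150, 150, 151, 150, 149]
reliable_valid = [200, 200, 199, 201, 200, 200, 200, 199, 200, 201]

for name, readings in [("unreliable", unreliable),
                       ("reliable but not valid", reliable_invalid),
                       ("reliable and valid", reliable_valid)]:
    mean = statistics.mean(readings)
    sd = statistics.stdev(readings)
    # Low spread (sd) indicates reliability; a mean near 200 indicates validity.
    print(f"{name}: mean={mean:.1f}, sd={sd:.1f}")
```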
General model
In practice, testing measures are never perfectly consistent. Theories of test reliability have been developed to estimate the effects of inconsistency on the accuracy of measurement. The basic starting point for almost all theories of test reliability is the idea that test scores reflect the influence of two sorts of factors:[4]
1. Factors that contribute to consistency: stable characteristics of the individual or the attribute that one is trying to measure
2. Factors that contribute to inconsistency: features of the individual or the situation that can affect test scores but have nothing to do with the attribute being measured. These include:
Temporary but general characteristics of the individual: health, fatigue, motivation, emotional strain
Temporary and specific characteristics of the individual: comprehension of the specific test task, specific tricks or techniques of dealing with the particular test materials, fluctuations of memory, attention or accuracy
Aspects of the testing situation: freedom from distractions, clarity of instructions, interaction of personality, sex, or race of examiner
The goal of estimating reliability is to determine how much of the variability in test scores is due to errors in measurement and how much is due to variability in true scores.[4] A true score is the replicable feature of the concept being measured. It is the part of the observed score that would recur across different measurement occasions in the absence of error. Errors of measurement are composed of both random error and systematic error; they represent the discrepancies between scores obtained on tests and the corresponding true scores. This conceptual breakdown is typically represented by the simple equation:

X = T + E

where X is the observed score, T is the true score, and E is the error of measurement.
Reliability theory shows that the variance of obtained scores is simply the sum of the variance of true scores plus the variance of errors of measurement:[4]

σ²_X = σ²_T + σ²_E
This equation suggests that test scores vary as the result of two factors:
1. Variability in true scores
2. Variability due to errors of measurement
The reliability coefficient provides an index of the relative influence of true and error scores on attained test scores. In its general form, the reliability coefficient is defined as the ratio of true score variance to the total variance of test scores, or, equivalently, one minus the ratio of error score variance to observed score variance:

ρ_xx' = σ²_T / σ²_X = 1 − σ²_E / σ²_X
Unfortunately, there is no way to directly observe or calculate the true score, so a variety of methods are used to estimate the reliability of a test. Some examples of the methods to estimate reliability include test-retest reliability, internal consistency reliability, and parallel-test reliability. Each method comes at the problem of figuring out the source of error in the test somewhat differently.
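To make the variance decomposition concrete, the following sketch simulates observed scores as true scores plus uncorrelated random error and compares the theoretical reliability, σ²_T / (σ²_T + σ²_E), with the ratio recovered from the simulated data (the variance values are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # number of simulated examinees

# Arbitrary illustrative values for true-score and error variance.
var_true, var_error = 9.0, 3.0

true_scores = rng.normal(50, np.sqrt(var_true), n)   # T
errors = rng.normal(0, np.sqrt(var_error), n)        # E, uncorrelated with T
observed = true_scores + errors                      # X = T + E

# Theoretical reliability: ratio of true-score variance to total variance.
rho_theory = var_true / (var_true + var_error)
# Empirical counterpart: var(X) should approximate var(T) + var(E).
rho_empirical = true_scores.var() / observed.var()

print(f"theoretical reliability: {rho_theory:.3f}")
print(f"empirical reliability:   {rho_empirical:.3f}")
```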
Estimation
The goal of estimating reliability is to determine how much of the variability in test scores is due to errors in measurement and how much is due to variability in true scores. Four practical strategies have been developed that provide workable methods of estimating test reliability.[4]

1. Test-retest reliability method: directly assesses the degree to which test scores are consistent from one test administration to the next.
It involves:
Administering a test to a group of individuals
Re-administering the same test to the same group at some later time
Correlating the first set of scores with the second
The correlation between scores on the first test and the scores on the retest is used to estimate the reliability of the test using the Pearson product-moment correlation coefficient; see also item-total correlation. (A short computational sketch of this correlation appears after the parallel-forms discussion below.)

2. Parallel-forms method: The key to this method is the development of alternate test forms that are equivalent in terms of content, response processes and statistical characteristics. For example, alternate forms exist for several tests of general intelligence, and these tests are generally seen as equivalent.[4] With the parallel test model it is possible to develop two forms of a test that are equivalent in the sense that a person's true score on form A would be identical to their true score on form B. If both forms of the test were administered to a number of people, differences between scores on form A and form B may be due to errors in measurement only.[4] It involves:
Administering one form of the test to a group of individuals
At some later time, administering an alternate form of the same test to the same group of people
The correlation between scores on the two alternate forms is used to estimate the reliability of the test. This method provides a partial solution to many of the problems inherent in the test-retest reliability method. For example, since the two forms of the test are different, the carryover effect is less of a problem. Reactivity effects are also partially controlled: although taking the first test may change responses to the second test, it is reasonable to assume that the effect will not be as strong with alternate forms of the test as with two administrations of the same test.[4] However, this technique has its disadvantages:
It may be very difficult to create several alternate forms of a test
It may also be difficult, if not impossible, to guarantee that two alternate forms of a test are parallel measures
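As referenced above, both the test-retest and parallel-forms strategies reduce to computing a Pearson product-moment correlation between two sets of scores from the same individuals. A minimal sketch, using invented scores for ten examinees:

```python
import numpy as np

# Hypothetical scores for ten examinees on two administrations
# (test-retest) or on two equivalent forms (parallel forms).
scores_a = np.array([12, 18, 25, 30, 22, 15, 28, 20, 17, 26])
scores_b = np.array([14, 17, 27, 29, 21, 16, 26, 22, 18, 25])

# Pearson product-moment correlation serves as the reliability estimate.
r = np.corrcoef(scores_a, scores_b)[0, 1]
print(f"estimated reliability: {r:.3f}")
```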
3. Split-half method: This method treats the two halves of a measure as alternate forms. It provides a simple solution to the problem that the parallel-forms method faces: the difficulty in developing alternate forms.[4] It involves:
Administering a test to a group of individuals
Splitting the test in half
Correlating scores on one half of the test with scores on the other half of the test
The correlation between these two split halves is used in estimating the reliability of the test. This half-test reliability estimate is then stepped up to the full test length using the Spearman–Brown prediction formula.

There are several ways of splitting a test to estimate reliability. For example, a 40-item vocabulary test could be split into two subtests, the first made up of items 1 through 20 and the second made up of items 21 through 40. However, the responses from the first half may be systematically different from responses in the second half due to an increase in item difficulty and fatigue.[4] In splitting a test, the two halves would need to be as similar as possible, both in terms of their content and in terms of the probable state of the respondent. The simplest method is to adopt an odd-even split, in which the odd-numbered items form one half of the test and the even-numbered items form the other. This arrangement guarantees that each half will contain an equal number of items from the beginning, middle, and end of the original test.[4]

4. Internal consistency: assesses the consistency of results across items within a test. The most common internal consistency measure is Cronbach's alpha, which is usually interpreted as the mean of all possible split-half coefficients.[6] Cronbach's alpha is a generalization of an earlier form of estimating internal consistency, the Kuder–Richardson Formula 20.[6] Although it is the most commonly used estimate, there are some misconceptions regarding Cronbach's alpha.[7][8]

These measures of reliability differ in their sensitivity to different sources of error and so need not be equal. Also, reliability is a property of the scores of a measure rather than the measure itself, and is thus said to be sample dependent. Reliability estimates from one sample might differ from those of a second sample (beyond what might be expected due to sampling variations) if the second sample is drawn from a different population, because the true variability is different in this second population. (This is true of measures of all types: yardsticks might measure houses well yet have poor reliability when used to measure the lengths of insects.)

Reliability may be improved by clarity of expression (for written assessments), lengthening the measure,[6] and other informal means. However, formal psychometric analysis, called item analysis, is considered the most effective way to increase reliability. This analysis consists of computation of item difficulties and item discrimination indices, the latter index involving computation of correlations between the items and the sum of the item scores of the entire test. If items that are too difficult, too easy, and/or have near-zero or negative discrimination are replaced with better items, the reliability of the measure will increase.
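A sketch of the split-half and internal consistency estimates described above, using an invented 0/1 item-response matrix of shape (examinees, items). It applies the Spearman–Brown prediction formula for doubling test length, r_full = 2·r_half / (1 + r_half), and Cronbach's alpha, α = (k / (k − 1)) · (1 − Σ σ²_item / σ²_total):

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented data: 200 examinees answering 20 items scored 0/1.
# Each examinee's latent ability shifts all item probabilities together,
# so items are positively correlated within a person.
ability = rng.normal(0, 1, size=(200, 1))
items = (rng.normal(0, 1, size=(200, 20)) < ability).astype(float)

# --- Split-half with an odd-even split ---
odd_half = items[:, 0::2].sum(axis=1)
even_half = items[:, 1::2].sum(axis=1)
r_half = np.corrcoef(odd_half, even_half)[0, 1]
# Spearman-Brown step-up from half-test to full-test reliability.
r_full = 2 * r_half / (1 + r_half)

# --- Cronbach's alpha ---
k = items.shape[1]
item_vars = items.var(axis=0, ddof=1)
total_var = items.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)

print(f"split-half r: {r_half:.3f}, stepped up: {r_full:.3f}")
print(f"Cronbach's alpha: {alpha:.3f}")
```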