Unit 6 Reliability Group 8
Reliability refers to the consistency with which a test yields the same rank for individuals who
take the test more than once (Kubiszyn and Borich, 2007). That is, it is the degree to which test
results or other assessment results remain consistent from one measurement to another. We can
say that a test is reliable when it yields practically the same scores when it is administered
twice to the same group of students, with a reliability index of 0.60 or above.
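A minimal sketch of this idea (the scores below are made-up; only the 0.60 benchmark comes from the text) estimates reliability as the correlation between two administrations of the same test to the same students, using Python's statistics module (3.10+):

from statistics import correlation

first_administration  = [38, 42, 35, 47, 40, 33, 45, 39]   # hypothetical first scores
second_administration = [40, 41, 34, 48, 42, 31, 46, 38]   # same students, retested

r = correlation(first_administration, second_administration)
print(f"test-retest reliability = {r:.2f}")
print("acceptable" if r >= 0.60 else "below the 0.60 benchmark")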
3. Split-half Method. Administer the test once and score two equivalent halves of the test.
To split the test into equivalent halves, the usual procedure is to score the
even-numbered and the odd-numbered test items separately. This provides two
scores for each student. The two sets of half-scores are correlated, and the
Spearman-Brown formula is applied to that correlation; the resulting coefficient
provides a measure of internal consistency. It indicates the degree to which
consistent results are obtained from the two halves of the test. (See the first
computational sketch after this list.)
4. Kuder-Richardson Formula. Administer the test once, score the total test, and apply
the Kuder-Richardson formula. The Kuder-Richardson 20 (KR-20) formula is applicable only
in situations where students’ responses are scored dichotomously, and is therefore
most useful with traditional test items scored as right or wrong, true or false,
or yes or no. KR-20 reliability estimates indicate the degree to which the items in
the test measure the same characteristic. (KR-20 is a special case of coefficient
alpha, a correlation coefficient that is not limited to dichotomously scored items.)
Another formula for estimating the internal consistency of a test is the KR-21
formula, a simpler version that assumes all items are of equal difficulty. (See the
second computational sketch after this list.)
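A minimal computational sketch of the split-half method (the 0/1 response matrix is hypothetical; the odd/even split and the Spearman-Brown correction follow the procedure described in item 3):

from statistics import correlation

responses = [                       # 6 students x 10 items, 1 = right, 0 = wrong
    [1, 1, 0, 1, 1, 0, 1, 1, 1, 0],
    [1, 0, 0, 1, 0, 0, 1, 0, 1, 0],
    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    [0, 1, 0, 0, 1, 0, 0, 1, 0, 0],
    [1, 1, 1, 0, 1, 1, 1, 0, 1, 1],
    [0, 0, 1, 0, 0, 1, 0, 0, 0, 1],
]

odd_half  = [sum(row[0::2]) for row in responses]   # items 1, 3, 5, ...
even_half = [sum(row[1::2]) for row in responses]   # items 2, 4, 6, ...

r_half = correlation(odd_half, even_half)           # correlation of the two half-scores
r_full = (2 * r_half) / (1 + r_half)                # Spearman-Brown correction
print(f"half-test r = {r_half:.2f}, full-test estimate = {r_full:.2f}")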
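A companion sketch for the Kuder-Richardson formulas, reusing the same hypothetical response matrix. KR-20 works item by item from each item's proportion of correct answers, while KR-21 needs only the mean and variance of the total scores; the use of the population variance here is an assumption, since textbooks state the formula with slightly different variance definitions.

from statistics import pvariance, mean

responses = [                       # 6 students x 10 items, 1 = right, 0 = wrong
    [1, 1, 0, 1, 1, 0, 1, 1, 1, 0],
    [1, 0, 0, 1, 0, 0, 1, 0, 1, 0],
    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    [0, 1, 0, 0, 1, 0, 0, 1, 0, 0],
    [1, 1, 1, 0, 1, 1, 1, 0, 1, 1],
    [0, 0, 1, 0, 0, 1, 0, 0, 0, 1],
]

k = len(responses[0])                               # number of items
totals = [sum(row) for row in responses]            # each student's total score
var_total = pvariance(totals)                       # variance of total scores

# KR-20: sum of p*q over items, where p = proportion answering the item correctly
pq_sum = sum((p := mean(col)) * (1 - p) for col in zip(*responses))
kr20 = (k / (k - 1)) * (1 - pq_sum / var_total)

# KR-21: only the mean and variance of total scores are needed (equal-difficulty assumption)
m = mean(totals)
kr21 = (k / (k - 1)) * (1 - (m * (k - m)) / (k * var_total))

print(f"KR-20 = {kr20:.2f}, KR-21 = {kr21:.2f}")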
RELIABILITY COEFFICIENT
The reliability coefficient is a measure of the amount of measurement error associated with
test scores: the higher the coefficient, the smaller the error component in the scores.
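In classical test theory terms (a standard identity, not spelled out in the text), observed-score variance is the sum of true-score variance and error variance, so the reliability coefficient can be read as the proportion of score variance that is free of error:

reliability = true-score variance / observed-score variance = 1 - (error variance / observed-score variance)

The closer the coefficient is to 1.00, the smaller the share of the score variance that is due to measurement error.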