Chapter 4
Key Terms
item analysis
difficulty index
discrimination index
measures of attractiveness
positive discrimination
negative discrimination
zero discrimination
miskeyed item
guessing item
ambiguous item
Learning Objectives
After designing the assessment tools, package the test, administer it to the students, check the test papers, then score and record the results. Return the test papers and give the students feedback on the results of the test.
Assuming that you have already assembled the test, that is, you have written the instructional objectives, prepared the table of specifications, and written test items that match the instructional objectives, the next step is to package the test and reproduce it as discussed in the previous chapter.
After constructing the test items and putting them in order, the next step is to administer the test to the students. The administration procedures greatly affect the students' performance on the test. Test administration does not simply mean giving the test questions to the students and collecting the test papers after the given time. Below are guidelines for administering the test before, during, and after the examination.
After the examination, the next activity the teacher needs to do is to score the test papers, record the results of the examination, return the test papers, and finally discuss the test items in class so that you can analyze and improve the items for future use.
1. Grade the papers (and add comments if you can); do test analysis (see the
module on test analysis) after scoring and before returning papers to
students if at all possible. If it is impossible to do your test analysis before
returning the papers, be sure to do it at another time. It is important to do
both the evaluation of your students and the improvement of your tests.
2. If you are recording grades or scores, record them in pencil in your class record before returning the papers. If there are errors or adjustments in grading, grades recorded in pencil are easier to change.
3. Return papers in a timely manner.
4. Discuss test items with the students. If students have questions, agree to look over their papers again, as well as the papers of others who have the same question. It is usually better not to agree to make changes in grades on the spur of the moment while discussing the tests with the students, but to give yourself time to consider what action you want to take. The test analysis may have already alerted you to a problem with a particular question that is common to several students, and you may already have made a decision regarding that question (to disregard the question and reduce the highest possible score accordingly, to give all students credit for that question, among others).
After administering and scoring the test, the teacher should also analyze the quality of each item in the test. Through item analysis you can identify which items are good, which need improvement, and which should be removed from the test. But when do we consider a test good? How do we evaluate the quality of each item in the test? Why is it necessary to evaluate each item? Lewis Aiken (1997), an author on psychological and educational measurement, pointed out that a “postmortem” is just as necessary in classroom assessment as it is in medicine.
There are two kinds of item analysis, quantitative item analysis and
qualitative item analysis (Kubiszyn and Borich, 2007).
Item Analysis
1. Item analysis data provide a basis for efficient class discussion of the
test results.
2. Item analysis data provide a basis for remedial work.
3. Item analysis data provide a basis for general improvement of
classroom instruction.
4. Item analysis data provide a basis for increased skills in test construction.
5. Item analysis procedures provide a basis for constructing a test bank.
There are three common types of quantitative item analysis which provide
teachers with three different types of information about individual test items.
These are difficulty index, discrimination index, and response options analysis.
1. Difficulty Index
It refers to the proportion of students in the upper and lower groups who answered an item correctly. The larger the proportion, the more students have learned the subject matter measured by the item. To compute the difficulty index of an item, use the formula:
DF = n / N, where

DF = difficulty index
n = number of students selecting the correct answer in the upper group and in the lower group
N = total number of students who answered the test
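The formula above can be sketched in Python (the function name is illustrative, not from the text):

```python
def difficulty_index(correct_upper, correct_lower, total_students):
    """DF = n / N: proportion of students in the upper and lower
    groups combined who answered the item correctly."""
    n = correct_upper + correct_lower
    return n / total_students

# Example: 6 correct in the upper group, 4 in the lower group, N = 28
print(round(difficulty_index(6, 4, 28), 2))  # 0.36
```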
Level of Difficulty
2. Discrimination Index
The discrimination index is the power of the item to discriminate between the students who scored high and those who scored low on the overall test. In other words, it is the power of the item to discriminate between the students who know the lesson and those who do not.

It is computed as the number of students in the upper group who got an item correct minus the number of students in the lower group who got the item correct, divided by either the number of students in the upper group or the number of students in the lower group (use the higher number if the groups are not equal in size).

The discrimination index is one basis for measuring the validity of an item. This index can be interpreted as an indication of the extent to which overall knowledge of the content area or mastery of the skills is related to the response on an item.
Level of Discrimination
Ebel and Frisbie (1986) as cited by Hetzel (1997) recommended the use of
Level of Discrimination of an Item for easier interpretation.
Index Range      Discrimination Level
0.19 and below   Poor item, should be eliminated or revised
0.20 – 0.29      Marginal item, needs some revision
0.30 – 0.39      Reasonably good item, but possibly open to improvement
0.40 and above   Very good item
DI = (CUG − CLG) / D, where

DI = discrimination index
CUG = number of students selecting the correct answer in the upper group
CLG = number of students selecting the correct answer in the lower group
D = number of students in the upper group or in the lower group

Note: Use the higher number in case the sizes of the upper and lower groups are not equal.
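The discrimination-index formula can be sketched in the same way (again, the function name is illustrative):

```python
def discrimination_index(correct_upper, correct_lower, upper_size, lower_size):
    """DI = (CUG - CLG) / D, where D is the size of the upper or
    lower group (the higher number if the groups are unequal)."""
    d = max(upper_size, lower_size)
    return (correct_upper - correct_lower) / d

# Example: 6 correct in the upper group, 4 in the lower group,
# 14 students per group
print(round(discrimination_index(6, 4, 14, 14), 2))  # 0.14
```

A positive value means the item favors the upper group; a negative value means more low scorers than high scorers got the item right.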
Yes No
1. Does the key discriminate positively?
2. Do the incorrect options discriminate negatively?

If the answers to questions 1 and 2 are both YES, retain the item.
If the answer to one question is YES and the other is NO, revise the item.
If the answers to questions 1 and 2 are both NO, eliminate or reject the item.
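The retain/revise/reject checklist above can be expressed as a small decision function (a sketch; the names are illustrative, not from the text):

```python
def item_decision(key_positive, distracters_negative):
    """Apply the two-question checklist: retain if both answers are
    YES, revise if one is YES and one is NO, reject if both are NO."""
    if key_positive and distracters_negative:
        return "retain"
    if key_positive or distracters_negative:
        return "revise"
    return "reject"

print(item_decision(True, True))    # retain
print(item_decision(True, False))   # revise
print(item_decision(False, False))  # reject
```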
Distracter Analysis
1. Distracter
Distracter is the term used for the incorrect options in the muliple-choice
type of test while the correct answer represents the key. It is very
important for the test writer to know if the distracters are effective or good
distracters. Using quantitative item analysis we can determine if the
options are good or if the distracters are effective.
Item analysis can identify non-performing test items, but this item seldom
indicates the error or the problem in the given item. There are factors to
be considered why students failed to get the correct answer in the given
question.
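A simple quantitative check, sketched below, flags a distracter as effective when more lower-group than upper-group students choose it (that is, it discriminates negatively). The counts are those of Example 2 later in this chapter; the function name is illustrative.

```python
def effective_distracters(upper, lower, key):
    """For each incorrect option, report whether more lower-group
    than upper-group students chose it (an effective distracter)."""
    return {opt: lower[opt] > upper[opt]
            for opt in upper if opt != key}

upper = {"A": 3, "B": 1, "C": 2, "D": 6, "E": 2}
lower = {"A": 5, "B": 0, "C": 4, "D": 4, "E": 1}
print(effective_distracters(upper, lower, key="D"))
# {'A': True, 'B': False, 'C': True, 'E': False}
```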
Consider the following examples in analyzing test items, with some notes on how to improve the items based on the results of item analysis.
Example 2. A class is composed of 50 students. Use 27% to get the upper and lower groups. Analyze the item given the following results. Option D is the correct answer. What will you do with the test item?

Option              A    B    C    D*   E
Upper Group (27%)   3    1    2    6    2
Lower Group (27%)   5    0    4    4    1
1. Compute the difficulty index:
   n = 6 + 4 = 10
   N = 28
   DF = n / N = 10 / 28 = 0.36 or 36%

2. Compute the discrimination index:
   CUG = 6, CLG = 4, D = 14
   DI = (6 − 4) / 14 = 0.14 or 14%
3. Make an analysis.
   a. Only 36% of the examinees got the answer correctly; hence, the item is difficult.
   b. More students from the upper group got the answer correctly; hence, the item has positive discrimination.
   c. Modify options B and E because more students from the upper group chose them compared with the lower group; hence, they are not effective distracters, since students who performed well in the overall examination selected them as their answer.
   d. Retain options A and C because more of the students who did not perform well in the overall examination selected them as the correct answer. Hence, options A and C are effective distracters.
4. Conclusion: Revise the item by modifying options B and E.
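The whole analysis of Example 2 can be reproduced in a few lines (a sketch; variable names are illustrative):

```python
upper = {"A": 3, "B": 1, "C": 2, "D": 6, "E": 2}
lower = {"A": 5, "B": 0, "C": 4, "D": 4, "E": 1}
key = "D"

n_upper, n_lower = sum(upper.values()), sum(lower.values())  # 14 and 14
df = (upper[key] + lower[key]) / (n_upper + n_lower)         # 10 / 28
di = (upper[key] - lower[key]) / max(n_upper, n_lower)       # 2 / 14

print(f"difficulty = {df:.2f}, discrimination = {di:.2f}")
# difficulty = 0.36, discrimination = 0.14
```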
Example 3. A class is composed of 50 students. Use 27% to get the upper and lower groups. Analyze the item given the following results. Option E is the correct answer. What will you do with the test item?

Option              A    B    C    D    E*
Upper Group (27%)   2    3    2    2    5
Lower Group (27%)   2    2    1    1    8
1. Compute the difficulty index:
   n = 5 + 8 = 13
   N = 28
   DF = n / N = 13 / 28 = 0.46 or 46%

2. Compute the discrimination index:
   CUG = 5, CLG = 8, D = 14
   DI = (5 − 8) / 14 = −0.21 or −21%
3. Make an analysis.
   a. 46% of the students got the answer to the test item correctly; hence, the test item is moderately difficult.
   b. More students from the lower group got the item correctly; therefore, it has negative discrimination. The discrimination index is −21%.
   c. There is no need to analyze the distracters individually because the item discriminates negatively.
   d. Modify all the distracters because they are not effective. Most of the students in the upper group chose the incorrect options. The options are effective only if most of the students in the lower group choose the incorrect options.
4. Conclusion: Reject the item because it has a negative discrimination index.
Example 4. Potential Miskeyed Item. Make an item analysis of the table below. What will you do with the test item that is potentially miskeyed?

Option        A*   B    C    D    E
Upper Group   1    2    3    10   4
Lower Group   3    4    4    4    5
Example 5. Ambiguous Item. Make an item analysis of the table below. Option E is the correct answer.

Option        A    B    C    D    E*
Upper Group   7    1    1    2    8
Lower Group   6    2    3    3    6
Example 6. Guessing Item. Below is the result of item analysis for a test item that students answered mostly by guessing. Are you going to reject, revise, or retain the test item?

Option        A    B    C*   D    E
Upper Group   4    3    4    3    6
Lower Group   3    4    3    4    5
3. Make an analysis.
   a. Only 18% of the students got the answer to the test item correctly; hence, the test item is very difficult.
   b. More students from the upper group got the correct answer to the test item; therefore, the test item has positive discrimination. The discrimination index is 5%.
   c. Students responded about equally to all alternatives, an indication that they were guessing.

   Three possible reasons why students guess the answer to a test item:
      The content of the test item had not yet been discussed in class because the test was designed in advance;
      The test item was so badly written that students had no idea what the question was really about; and
      The test item was very difficult, as shown by the difficulty index and low discrimination index.

4. Conclusion: Reject the item because it is very difficult; reteach the material to the class.
Example 7. Ineffective Distracter. The table below shows an item analysis of a test item with an ineffective distracter. What can you conclude about the test item?

Option        A    B    C*   D    E
Upper Group   5    3    9    0    3
Lower Group   6    4    6    0    4
3. Make an analysis.
   a. Only 38% of the students got the answer to the test item correctly; hence, the test item is difficult.
   b. More students from the upper group answered the test item correctly; as a result, the item has positive discrimination. The discrimination index is 15%.
   c. Options A, B, and E are attractive distracters.
   d. Option D is ineffective; therefore, replace it with a more realistic option.
4. Conclusion: Revise the item by changing option D.
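The conclusion for Example 7 can be checked mechanically: a distracter that nobody selects contributes nothing to the item. The snippet below (a sketch using Example 7's counts) flags any option chosen by no one in either group.

```python
upper = {"A": 5, "B": 3, "C": 9, "D": 0, "E": 3}
lower = {"A": 6, "B": 4, "C": 6, "D": 0, "E": 4}
key = "C"

for opt in upper:
    if opt == key:
        continue  # skip the key; only distracters are checked
    chosen = upper[opt] + lower[opt]
    status = "ineffective" if chosen == 0 else "attractive"
    print(opt, status)
# A attractive
# B attractive
# D ineffective
# E attractive
```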
Chapter Exercises