Criteria of Test

CRITERIA OF TEST:

1. Reliability: a reliable test is one which will produce the same results if it is administered again.
- Circumstances: make an effort to ensure that conditions (noise levels, room temperature, level of distraction, etc.) are kept stable.
- Marking: the more subjectively a test is marked, the more carefully the markers must be standardized; the fewer the markers, the easier this is to do.
=> Ensure at least double marking of most work (see the correlation sketch after this list).
- Uniformity: use parallel versions of the same test.
- Quantity: the more evidence, the more reliable the judgement.
Disadvantages: marking takes longer and the test-takers get tired.
- Constraints: free tasks let test-takers avoid errors, whereas controlled tasks allow you to gauge more reliably whether the targets can be successfully achieved.
In a structured-response test, the rubric forces test-takers to produce language in specific areas.
- Make rubrics clear: any misunderstanding of what is required undermines reliability.
Learners vary in their familiarity with certain types of task, so making the rubric clear contributes to leveling the playing field.
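
Consistency between markers can also be checked numerically. Below is a minimal sketch (not part of the original notes, using invented marks) that correlates the scores two markers award to the same scripts; a correlation close to 1 suggests the marking scheme is being applied consistently.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical marks awarded by two markers to the same ten scripts.
marker_a = [14, 11, 17, 9, 15, 12, 18, 10, 13, 16]
marker_b = [13, 12, 16, 9, 14, 13, 17, 11, 12, 15]

print(f"Inter-marker correlation: {pearson(marker_a, marker_b):.2f}")
```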

2. Validity: ensuring that the test is testing what you think it is testing, so that the results are meaningful.
- Face validity
Students won't perform at their best in a test they do not believe is properly assessing what they can do.
The environment also contributes (a formal event held in silence, with no cooperation between test-takers).
- Content validity
The test should contain only what has been taught and no extraneous material. Coverage plays a role here.
- Predictive validity
The test tells you how well learners will perform in the tasks set for them and in the lessons that help them prepare for the examination.
- Concurrent validity
Administer both tests (the new one and an established one) to a large group and compare the results; parallel results are a sign of good concurrent validity (a minimal comparison sketch follows after this list).
- Construct validity
The construct is something that happens in the brain; it has nothing to do with constructing a test.
To have high construct validity, a test-maker must be able to answer succinctly and consistently the question of what the test is actually measuring.
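
As a rough illustration of the concurrent-validity check above, the sketch below (not from the original notes, using invented scores) correlates the same learners' results on a new test and an established benchmark test; a Pearson r close to 1 indicates parallel results.

```python
from statistics import correlation  # Python 3.10+

# Hypothetical scores for the same group of learners on two tests:
# the new test and an established benchmark test.
new_test  = [52, 67, 80, 45, 73, 60, 88, 55, 70, 64]
benchmark = [50, 70, 78, 48, 75, 58, 90, 53, 68, 66]

r = correlation(new_test, benchmark)
print(f"Concurrent validity (Pearson r): {r:.2f}")  # near 1 => parallel results
```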

3. Practicality: the test must be deliverable in practice.


- Administration
+ The test should not be too complicated or complex to conduct.
+ It should be quite simple to administer.
- Scoring/evaluation
+ The scoring/evaluation process should fit into the time allocation.
+ The test should be accompanied by scoring rubrics, key answers, and so on to make it easy to evaluate.
- Design: based on time, money, space, and equipment
+ The test appropriately utilizes the available material resources.
+ The test can be created quickly (10-15 minutes) with a different code (version), keeping it lightweight and scalable.

4. Discrimination:
- It refers to the ability of a test to distinguish clearly and quite finely between different levels of learner.
- If a test is too simple, most of the learners in a group will get most of it right, which is good for boosting morale but poor if you want to know who is best and worst at certain tasks.
- If a test is too difficult, most of the tasks will be poorly achieved and your ability to discriminate between learners' abilities in any area will be compromised (a minimal discrimination-index sketch follows below).
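
One common way to quantify discrimination is the classical item discrimination index: the proportion of a high-scoring group answering an item correctly minus the proportion of a low-scoring group doing so. The sketch below is illustrative only (invented data, not from the original notes).

```python
def discrimination_index(item_scores, total_scores, fraction=0.27):
    """Classical discrimination index for one test item.

    Proportion correct in the top group minus proportion correct in the
    bottom group, where groups are formed by ranking learners on their
    total test score. Values near 0 mean the item does not separate
    stronger from weaker test-takers.
    """
    ranked = sorted(zip(total_scores, item_scores), reverse=True)
    k = max(1, int(len(ranked) * fraction))
    top = [item for _, item in ranked[:k]]
    bottom = [item for _, item in ranked[-k:]]
    return sum(top) / k - sum(bottom) / k

# Hypothetical data: 1 = item answered correctly, 0 = answered incorrectly,
# alongside each learner's total score on the whole test.
item   = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]
totals = [38, 35, 20, 32, 18, 30, 22, 15, 34, 19]

print(f"Discrimination index: {discrimination_index(item, totals):.2f}")
```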

TYPES OF TEST
Aptitude test
1. not a test for which a person can study.
2. generally used for job placement, college program entry.
3. used to determine an individual's skill or propensity.
4. used to assess how learners are likely to perform in an area in which they
have no prior training or knowledge.
5. used to test a learner’s general ability to learn a language rather than the
ability to use a particular language.

Achievement test
1. an assessment of developed knowledge or skill.
2. used to assess the learners' cognitive abilities in the course.
3. an end-of-course or end-of-week test (even a mid-lesson test).
4. measures a learner's performance at the end of a period of study to evaluate the effectiveness of the programme.
5. evaluates a learner's language knowledge to show how their learning has progressed.

Diagnostic test
1. a test set early in a program to plan the syllabus.
2. used to determine a learner's proficiency level in English before they begin a
course.
3. Quizzes, surveys, checklists, discussion boards, etc.
4. a test that helps the teacher to know the gap in learners’ understanding.
5. discovers the learner's strengths and weaknesses for planning purposes.

Proficiency test
1. used to test learners' abilities.
2. used to determine the language level of the learner.
3. happens regardless of which course the learner has taken.
4. has the same form as the public examination.
5. a test used for placement.

Progress test
An assessment that enables you to accurately measure how your school and your students are performing: student by student, class by class, and year by year.
