At various stages during their learning, students may need or want to be tested on their ability in the
English language. If they arrive at a school and need to be put in a class at an appropriate level, they
may do a placement test. This often takes the form of a number of discrete (indirect) items (see
below), coupled with an oral interview and perhaps a longer piece of writing. The purpose of the test
is to find out not only what students know, but also what they don’t know. As a result, they can be
placed in an appropriate class.
At various stages during a term or semester, we may give students progress tests. These have the
function of seeing how students are getting on with the lessons, and how well they have assimilated
what they have been taught over the last week, two weeks or a month.
At the end of a term, semester or year, we may want to do a final achievement test (sometimes
called an exit test) to see how well students have learnt everything. Their results on this test may
determine what class they are placed in next year (in some schools, failing students have to repeat a
year), or may be entered into some kind of school-leaving certificate. Typically, achievement tests
include a variety of test types and measure the students’ abilities in all four skills, as well as their
knowledge of grammar and vocabulary.
Many students enter for public examinations such as those offered by the University of Cambridge
ESOL, Pitman or Trinity College in the UK and, in the US, the University of Michigan, TOEFL and
TOEIC. These proficiency tests are designed to show what level a student has reached at any one
time, and are used by employers and universities, for example, who want a reliable measure of a
student’s language abilities.
So far in this chapter we have been talking about testing in terms of ‘one-off’ events, usually taking
place at the end of a period of time (except for placement tests). These ‘sudden death’ events
(where ability is measured at a particular point in time) are very different from continuous
assessment, where the students’ progress is measured as it is happening, and where the measure of
a student’s achievement is the work done all through the learning period and not just at the end.
One form of continuous assessment is the language portfolio, where students collect examples of
their work over time, so that these pieces of work can all be taken into account when an evaluation is
made of their language progress and achievement. Such portfolios (called dossiers in this case) are
part of the CEF (Common European Framework), which also asks language learners to complete
language passports (showing their language abilities in all the languages they speak) and language
biographies (describing their experiences and progress).
There are other forms of continuous assessment, too, which allow us to keep an eye on how well our
students are doing. Such continuous recording may involve, among other things, keeping a record of
who speaks in lessons and how often they do it, how compliant students are with homework tasks
and how well they do them, and also how well they interact with their classmates.
Some students seem to be well suited to taking progress and achievement tests as the main way of
having their language abilities measured. Others do less well in such circumstances and are better
able to show their abilities in continuous assessment environments. The best solution is probably a
judicious blend of both.
Good tests
Good tests are those that do the job they are designed to do and convince the people taking
and marking them that they work. Good tests also have a positive rather than a negative effect on
both students and teachers.
A good test is valid. This means that it does what it says it will. In other words, if we say that a certain
test is a good measure of a student’s reading ability, then we need to be able to show that this is the
case. There is another kind of validity, too, in that when students and teachers see the test, they
should think it looks like the real thing - that it has face validity. As they sit in front of their test paper
or in front of the screen, the students need to have confidence that this test will work (even if they
are nervous about their own abilities). However reliable the test is (see below), face validity demands
that the students think it is reliable and valid.
A good test should have marking reliability. Not only should it be fairly easy to mark, but anyone
marking it should come up with the same result as someone else. However, since different people
can (and do) mark differently, there will always be the danger that where tests involve anything other
than computer-scorable questions, different results will be given by different markers. For this
reason, a test should be designed to minimise the effect of individual marking styles.
When designing tests, one of the things we have to take into account is the practicality of the test.
We need to work out how long it will take both to sit the test and also to mark it. The test will be
worthless if it is so long that no one has the time to do it. In the same way, we have to think of the
physical constraints of the test situation. Some speaking tests, especially for international exams, ask
not only for an examiner but also for an interlocutor (someone who participates in a conversation
with a student). But this is clearly not practical for teachers working on their own.
Tests have a marked washback/backwash effect, whether they are public exams or institution-
designed progress or achievement tests. The washback effect occurs when teachers see the form of
the test their students are going to have to take and then, as a result, start teaching for the test. For
example, they concentrate on teaching the techniques for answering certain types of question rather
than thinking in terms of what language students need to learn in general. This is completely
understandable since teachers want as many of their students as possible to pass the test. Indeed,
teachers would be careless if they did not introduce their students to the kinds of test item they are
likely to encounter in the exam. But this does not mean that teachers should allow such test
preparation to dominate their lessons and deflect from their main teaching aims and procedures.
Washback has a negative effect on teaching if the test fails to mirror our teaching, because then
we will be tempted to make our teaching fit the test, rather than the other way round. Many
modern public examinations have improved greatly on their more traditional versions, so that they
often do reflect contemporary teaching practice. As a result, the washback effect need not have the
baleful influence on teaching which we have been discussing.
When we design our own progress and achievement tests, we need to try to ensure that
we are not asking students to do things which are completely different from the activities they have
taken part in during our lessons. That would clearly be unfair.
Finally, we need to remember that tests have a powerful effect on student motivation. Firstly,
students often work a lot harder than normal when there is a test or examination in sight. Secondly,
they can be greatly encouraged by success in tests, or, conversely, demotivated by doing badly. For
this reason, we may want to try to discourage students from taking public examinations that they are
clearly going to fail, and when designing our own progress and achievement tests, we may want to
consider the needs of all our students, not just the ones who are doing well. This does not mean
writing easy tests, but it does suggest that when writing progress tests, especially, we do not want to
design the test so that students fail unnecessarily - and are consequently demotivated by the
experience.