Papers in Language Testing and Assessment Vol. 6, Issue 1, 2017 41
Dina Tsagari 1
University of Cyprus
Karin Vogt
University of Education Heidelberg
The setting
In the field of Language Testing and Assessment (LTA), the area of Classroom Based
Language Assessment (CBLA) has received unprecedented attention with relevant
research activity (e.g. Colby-Kelly, 2014, 2015; Hamp-Lyons, 2009; Hill, 2012; Hill &
McNamara, 2012; Leung, 2014; Tsagari & Csépes, 2011; Yin, 2010). The scope of the
research undertaken in CBLA shows, among other things, that there is a need for
constant professional development for language teachers in order to keep up to date
with the challenges and expectations that arise in classroom-based foreign language
assessment.
Coombe, Troudi and Al-Hamly (2012) estimate that about 30 to 50 per cent of
language teachers’ everyday activities are assessment-related. However, evidence
from empirical studies (Fulcher, 2012; Lam, 2014; Mendoza & Arandia, 2009; Yan,
2010) suggests a lack of sufficient knowledge and skills on assessment matters, which
implies a low level of language assessment literacy (LAL). The present paper discusses
the notion of LAL and relates the results of a mixed-methods study on current LAL
levels as perceived by pre- and in-service language teachers. The focus is on
the findings of the qualitative part of the study undertaken to explore three European
contexts in depth, based on findings of a questionnaire (n=853) administered in seven
European countries.
Given that the concept of literacy has expanded considerably to other domains,
e.g. computer literacy and visual literacy (NCREL/Metiri Group, 2003; Taylor, 2013), the
addition of “assessment” literacy was to be expected (Engelsen & Smith, 2014; Taylor,
2013). The term “assessment literacy” was first coined by Stiggins (1991) and taken up
by Falsgraf (2005, p. 6) as “… the ability to understand, analyze and apply information
on student performance to improve instruction”. Similarly, Pill and Harding (2013, p.
382) maintain that LAL comprises a variety of competencies that enable the individual
to “understand, evaluate and create language tests and analyze test data”. O’Loughlin
(2013, p. 363) equally takes a skills-based perspective that views LAL as “…a range of
skills related to test production, test score interpretation and use, and test evaluation
in conjunction with the development of a critical understanding about the roles and
functions of assessment within society”. The critical stance of individuals concerned
with language assessment is considered in Inbar-Lourie’s (2008, p. 389) definition,
which highlights “the capacity to ask and answer critical questions about the purpose
for assessment, about the fitness of the tool being used, about testing conditions, and
about what is going to happen on the basis of the test results”. The most
comprehensive definition to date has been suggested by Fulcher (2012, p. 125).
Finally, our own definition of LAL, used also in this paper, is “the ability to design,
develop and critically evaluate tests and other assessment procedures, as well as the
ability to monitor, evaluate, grade and score assessments on the basis of theoretical
knowledge” (Vogt & Tsagari, 2014, p. 377).
LAL as a concept seems to be more closely associated with language testers (Popham,
2009; Malone, 2013) and consequently much of the research that has been carried out
is related to this important group of stakeholders. However, research seems to suggest
that the levels of teachers’ LAL leave room for improvement. For example, Scarino (2013), who
continuously worked with teachers at schools and used ‘collaborative dialogues’ with
teachers as the major source of data, observes that teachers were struggling with the
construct of assessment on a theoretical, practical and institutional level. She stressed
the need to develop LAL for teachers in in-service training informed by a holistic
approach that goes beyond a mere knowledge-based concept of LAL.
Hasselgreen, Carlsen and Helness (2004) focused on previous training in LTA, a key
parameter of LAL, among three types of stakeholders, as well as their perceived
training needs in various areas of LTA. Among the stakeholders were teachers, teacher
trainers and testing experts (n=914) from Europe and beyond, with the majority of
respondents coming from Finland, Sweden and the UK. The results of the online
survey revealed that all stakeholders seemed to lack formal education in LTA and
expressed a need for training across the board.
In a more recent study that partly replicated the research instruments used by
Hasselgreen et al. (2004), Kvasova and Kavytska (2014) targeted Ukrainian university
foreign language teachers in order to gauge their LAL needs and to investigate their
training needs in the field. Despite good competence levels in assessment-related tasks
such as using ready-made tests or providing feedback, Kvasova and Kavytska
conclude that “overall assessment literacy has not yet reached an appropriately high
level” (2014, p. 175).
Finally, the aim of Fulcher’s (2012) study was to identify the training needs of foreign
language teachers (n=278) using an online survey. His findings suggest that language
teachers are aware of assessment requirements that are not currently covered in
existing training materials and that there is a need for comprehensible and practical
materials in order to enhance their assessment literacy.
The aim of the main study was to gauge LAL levels of regular European foreign
language teachers and identify training needs to enhance their LAL on a European
and local level. The study was a response to the call for more investigation in the
training needs of FL teachers around Europe expressed by Hasselgreen et al. (2004) to
gain a clearer picture of the assessment literacy of practicing FL educators. Our large-
scale, mixed-methods study focused on “regular” teachers (that is, teachers whose
only assessment experience was within the classroom) as an important stakeholder
group. This is in contrast to other European studies, which have included teacher
trainers and testing experts or teachers who also perform assessment roles outside the
classroom context (e.g., as item writers or examiners).
The next section will detail the aims and design for the study. The Results section will
include a brief overview of the questionnaire findings for the three focus countries
followed by a detailed discussion of the results of the interview data, which is the
focus of this paper.
Research Focus
As mentioned before, the aim of the main study was to investigate the level of LAL of
regular foreign language teachers across Europe. By ‘regular’ foreign language
teachers we understand those who have undergone standard training and who teach
foreign languages at state tertiary institutions, colleges and schools, and have no
additional assessment roles e.g. working as item writers or as examiners in
standardized tests. The aim of the study was to explore levels of training received as
well as investigate training needs. It also looked into teachers’ compensation strategies
in case of low LAL levels. The research questions for the main study were formulated
to address these aims.
This paper focuses on the qualitative part of the study, which examined the
educational contexts of Cyprus, Germany and Greece in depth.
Study design
The main study was a mixed-methods study (Creswell, 2015; Tashakkori & Teddlie,
1998; Teddlie & Tashakkori, 2009) that aimed at triangulating qualitative and
quantitative data in order to enhance the validity of the study (Elsner & Viebrock,
2014; Turner, 2014). The questionnaire represented the quantitative part of the study
and it was administered to foreign language teachers based in Cyprus, the Former
Yugoslavian Republic of Macedonia (FYROM), Germany, Greece, Italy, Poland and
Turkey. It was important for us that the informants only served as language teachers
and did not have additional assessment-related roles such as item writers. English was
the main language that was taught by the informants. The questionnaire was
distributed to 878 informants, and 853 completed copies (97.2%) were returned.
The qualitative part of the study, which is the focus of this paper, set out to investigate
individual teachers’ training biographies, their perceptions of their LAL levels and
their individual needs in terms of in-service teacher training in their respective local
contexts. To do this, semi-structured interviews were conducted with primary and
secondary state school teachers teaching EFL in Cyprus (n=16), Germany (n=25) and
Greece (n=22). All informants consented to take part in the study. While the German
teachers taught in secondary schools (age 10-19), the Cypriot and Greek sample
covered both the primary and secondary sectors of education. Their teaching
experience ranged from two to 34 years.
The questionnaire used for the main study was based on the one used by Hasselgreen
et al. (2004) excluding the questions on large-scale testing, an aspect that teachers are
not typically engaged with in their teaching lives (see Appendix 1). The first part of
the questionnaire obtained background information about the informants, e.g.
teaching qualifications, current institution, age of target learner group, etc. The
remainder of the questionnaire was divided into three sections, namely classroom-
focused LTA (n=12 questions), purposes of testing (n=8 questions) and content and
concepts of LTA (n=16 questions) (Table 1). Each part was further subdivided into one
section enquiring about the training that respondents had received and one relating
to the training needs that respondents felt were necessary for them. As in the
Hasselgreen et al. (2004) questionnaire, a three-point Likert scale was used for the
answers. The questionnaire was found to be highly internally consistent with
Cronbach’s alpha coefficient ranging from .80 to .93 for the individual scales (see
Dörnyei, 2010, p. 94). Descriptive statistics such as frequencies and percentages were
calculated for the data.
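As an illustration of the reliability check reported above, the following Python sketch computes Cronbach’s alpha for a single questionnaire scale. This is not the authors’ analysis code, and the response matrix is invented example data: four items answered on the questionnaire’s three-point scale (coded 0–2) by six hypothetical respondents.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for one scale.

    items: list of per-item response lists, all covering the same
    respondents in the same order.
    """
    k = len(items)                      # number of items in the scale
    n = len(items[0])                   # number of respondents

    def var(xs):                        # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    sum_item_vars = sum(var(item) for item in items)
    # Each respondent's total score across the scale's items.
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum_item_vars / var(totals))

# Invented data: 4 items, responses coded "no"=0, "a little"=1, "more advanced"=2.
responses = [
    [0, 1, 2, 1, 0, 2],
    [0, 1, 2, 2, 0, 2],
    [1, 1, 2, 1, 0, 2],
    [0, 2, 2, 1, 1, 2],
]
print(round(cronbach_alpha(responses), 2))  # prints 0.93
```

With this example data the scale would fall at the upper end of the .80–.93 range reported for the study’s scales; values above roughly .80 are conventionally read as high internal consistency.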
1. During your pre-service teacher training, did you learn about language testing and assessment
(LTA)?
2. Did you feel appropriately prepared for your LTA tasks after pre-service training?
3. If not, how did you learn about LTA?
4. Do you know about more recent LTA methods, e.g. portfolio assessment, self- or peer-assessment?
Have you ever tried them?
5. Have you ever worked with standardized tests or have you advised learners in this area? What do
you think of them?
6. What types of LTA do you use in your school / institute?
7. Have you received in-service training in LTA? If yes, what was the focus of this training?
8. How satisfied are you with in-service training offered in LTA? What LTA training would you like
in the short-term?
After seeking the relevant permissions from the persons and institutions involved, the interviews
were timetabled and conducted at venues and times convenient for the teachers.
Interviews lasted between 30 and 45 minutes. They were audiotaped and transcribed
before being sent out to informants for communicative validation (Dörnyei, 2007).
Informants were invited to comment on any ambiguous elements in the transcripts,
and the transcripts were edited accordingly. The final versions were then content
analysed. The categories of analysis were both deductive (using the interview
questions) and inductive with emergent themes and patterns arising from the data
(Aguado, 2014).
The analysis of the data was undertaken separately by the researchers in order to
enhance cross-verification of data by way of investigator triangulation. As a last step,
researchers collated the group tendencies before comparing the data across the three
regional contexts.
Since the focus of this paper is on the qualitative part of the study, we will briefly
summarise the questionnaire results for the three focus countries before relating and
discussing the interview data in greater depth. More detailed discussions of the
quantitative part of the study can be found in Vogt and Tsagari (2014).
Looking at the overall trends (see Table 3), we generally observe low LAL levels. On
average, the majority of foreign language teachers in our sample either received “no”
(35%) or “a little” (32.5%) training. Only certain aspects of LTA were developed, e.g.
in testing microlinguistic aspects and language skills, while training in other aspects
of LTA was largely lacking.
To further investigate individual teachers’ LAL profiles and needs in their local
contexts, findings from the interview data will be discussed in more detail, following
the order of the research questions.
Assessment procedures
Teachers were asked what assessment procedures they make use of. The data clearly
show that the regulations of the national or regional educational authorities strongly
influence teachers’ assessment practices and procedures. Most of the Cypriot and
Greek teachers (n=12 and 20 respectively) and all of the German (n=25) teachers said
that they used pencil-and-paper tests to assess their learners’ foreign language
performance. Primary school teachers in Cyprus and Greece, in particular, stressed
that they were not obliged to do so but that they did administer classroom tests
occasionally since this gives them the opportunity to find out about students’
progress. In secondary schools in Germany, pencil-and-paper tests are very much part
of the local foreign language assessment culture and the number of tests per semester
(usually three to six depending on grade level) is regulated by the regional
school authorities and/or by the schools themselves. Teachers in Germany also
administer these classroom tests on a regular basis, with additional shorter tests, e.g.
vocabulary tests. The Greek and Cypriot data indicated that teachers test their
students usually every three to four units in the textbook whereas only two teachers
(GRT3, GRT4) test their students as soon as they finish a unit. This helps them not only
understand whether they have taught the unit successfully (“that tells me how to plan the
next unit”, CYT1), but also identify any possible problems students face so that
remedial work can be planned accordingly.
In the class tests themselves, teachers indicated they used a variety of task types,
almost all of which are rather traditional. The most frequent are sentence completion
and answering questions for grammar and vocabulary while more open-ended tasks
such as essay writing are used for more advanced learners. German teachers
uniformly added vocabulary tests, mostly bilingual, i.e. asking for translation of
isolated words or sentences into English: “Sure, class tests and vocabulary tests after
a unit has been finished” (GERT19). Although most teachers also evaluated their
learners’ oral performance in some way, the data showed a clear focus
on pencil-and-paper tests and therefore on the receptive skills of reading and
sometimes listening, with writing as the only productive skill tested. Speaking
hardly played any role in classroom tests.
Teachers tended to rely heavily on the information written tests yield about their
learners’ performance. A Greek informant maintained, “[Tests] tell me almost
everything. They clearly show each student’s progress” (GRT2) and a very
experienced German teacher did not question the use of written tests at all: “In
secondary school you have to administer more written class tests” (GERT6). One
German respondent estimated that 80% of colleagues at his school relied on written
tests for their final evaluation in the school reports (GERT3).
Regarding feedback from test results, teachers provided various types of feedback to
students once the class tests were marked. The majority of informants in our sample
talked about test results in class while the test papers are being handed back to the
learners. However, a deficit-oriented approach seems to prevail in some contexts, as
the majority also emphasized the mistakes learners made and identified
problematic areas, as one Greek teacher suggested: “I show them the tests and then
we have an in-class discussion about how to avoid making the same mistakes”
(GRT1). One Cypriot teacher said: “If I see that there is some kind of difficulty, I will
indicate that on the test paper” (CYT3). This same teacher, however, also praised
positive aspects of the students’ performance on the test. Others addressed learners’
language problems that are obvious from their test performance by offering remedial
work. In other words the test results were fed back into their teaching. All in all, we
can conclude that the ways teachers provide feedback are rather individualized.
Marking criteria
With regard to the marking criteria of tests, the majority of informants confirmed that
they were strict with the correction of tests (“When I mark, I’m strict with the things I
have taught and the objectives of the test tasks”, CYT2). All of the Greek and Cypriot
teachers except GRT4 indicated that the marking criteria are also transparent to the
learners. One Cypriot teacher asserted that his students “know what the test exercises
ask from them and the way I mark [sic]” (CYT4). The German teachers, however, were
not that positively minded in this respect. As one informant (GERT6) suggested, he
lacked knowledge about criteria for classroom tests, having always relied on a gut
feeling: “how you can give marks, how to design class tests, how to prepare for class
tests ... one has always acted intuitively”. Thus basic elements of classroom test design and
related scoring rubrics did not seem to be that clear to some informants, which is
reflected in the in-service teacher training needs regarding language testing and
assessment.
When asked about the extent to which the respondents used alternative forms of
assessment, e.g. portfolio assessment, self- or peer-assessment, they mainly indicated
that they were aware of them but that they were not necessarily part of their
assessment routines. In Greece, primary school teachers were still far from
implementing them with their young learners. For example, GRT2 showed some
awareness of other forms of assessment besides tests and she said she used games and
various other playful activities. Also, GRT3 said she uses self-assessment and peer-
assessment to assess her students but could not explain details. The majority of the
Cypriot teachers acknowledged the use of games and activities as assessment
methods, e.g.: “If I want to assess vocabulary I will have a Bingo activity where
through the use of a dice students will ask and answer or find out things. Through
this, for instance, I can check their vocabulary” (CYT3). CYT2 also referred to the use
of projects for assessment purposes, e.g.: “We usually have some mini-scale projects.
For instance, we work with a small story, ‘The Three Little Pigs’, and then I prepare a
small project for them which I then use as a form of evaluation”. CYT2 also used plays
performed during the lesson as a way to assess her students’ performance: “Some
units are ideal for using ‘theatre plays’ because they are very indicative of what they
know. They are more natural, more spontaneous, and as a result you can understand
their level and progress”. CYT1 said she “checks on students through singing
activities” while other Cypriot teachers stressed that they use either pair or group
work as a means of assessment. With regard to portfolio assessment, only CYT4 said
that he had partly used it, while CYT2 was planning to implement it the
following year.
The results suggest that while the CBLA research community has long embraced non-
traditional forms of assessment, the reality in frontline foreign language teaching still
often looks different. However, some application of innovative assessment forms was
reported by teachers in our sample, and their use often reflected a considered
attempt to make Assessment for Learning (Black & Wiliam, 1998) possible, as the
following example from Germany suggests: “You gave peer-assessment as an
example. Well, I love doing writing conferences with my pupils ... so that pupils learn
how to give fair feedback and get fair feedback in the same way ... as a very
constructive way of feedback” (GERT6). In the German context, innovative forms of
assessment were often implemented by a few dedicated teachers or they were used
because the school required the teachers to do so. When their first experience with a
new form of assessment was negative, they tended to refrain from further use. This
result shows that teachers mainly need support with the concrete, practical
implementation of more recent assessment procedures in order to help them expand
their repertoire of assessment procedures.
Our data showed that once teachers had included alternative forms of assessment in
their repertoires, they saw them as valuable additions or even a good alternative to
tests, e.g. Cypriot teacher CYT4 who was convinced that they were “better ways
because they are more indicative of what teachers can do”. Some Greek teachers in
our sample were positive: “I think they are equally useful and I wish I had more time
to discuss them with the students” (GRT1).
When asked why they used alternative forms of assessment, the informants
mentioned various functions, namely making their teaching more effective and
adjusting it to the students’ needs and helping them overcome language difficulties.
Once teachers embraced alternative forms of assessment as a tool for Assessment for
Learning, they held them in high esteem because they had come to believe that these
methods enhanced both their teaching methods and students’ learning. As one Greek
teacher suggested: “I take them into account and try to adjust my teaching to my
students’ needs” (GRT2). Thus, professionalization in LAL also needs to consider
teachers’ subjective theories on assessment (Scarino, 2013) and encourage them to
include the perspective of the learners in their assessment practices.
Standardized tests
To the question of whether teachers have ever worked with standardized tests (e.g.
Cambridge Preliminary English Test, Key English Test or English Young Learners, etc.) or if
they have advised learners in this area, the majority of the teachers said they had
never done so. This is because there is no mandate to do so in public school
education, and this holds for all three educational contexts. But even if there were,
teachers said that they are not qualified or trained to assist students in preparing
for such tests.
In the German context, some schools offer clubs that prepare learners to sit a
standardised test, e.g. Cambridge Preliminary English Test, on a voluntary basis. This
is offered as an extracurricular activity and is totally separate from regular classes. At
one school in our German sample this was done, and the teacher used past papers to
prepare learners for the test, albeit without critically evaluating them. This result is
not surprising as teachers are not trained in the area of standardised tests and,
consequently, are not in a position to critically evaluate them.
The interview data mostly corroborated the findings from the questionnaire in that
generally low levels of LAL were observed. However, one of the purposes
of the interview was to look into the perceptions of teachers, or more precisely to
investigate whether teachers feel prepared for their everyday assessment-related
activities at school.
When asked whether they had learned anything about LTA during their pre-service
teacher training, the majority of respondents answered that they had learned
nothing or only very little during the initial stages of their career. Teachers
stressed that training in language assessment was neglected during their
undergraduate studies and pre-service training, e.g. “... definitely not. We didn’t
receive any such information” (CYT4), “As far as I can remember, we learned as good
as nothing about it” (GERT1). One typical statement that stresses how little our
respondents felt prepared for their assessment activities was made by a Cypriot
teacher: “I do not feel prepared… There’s no preparation in such matters for the
primary English school teacher” (CYT2). Voices from the German context confirmed
this unease: “... you started your practical phase and you were really clueless ... you
are at an utter loss in the beginning, you get some advice from colleagues ..., adopt
things from colleagues and then work your way through it” (GERT21); “it was in at the
deep end ... we all experienced this” (GERT19).
Teachers seem to learn about LTA on the job; they rely on mentors, colleagues and
published assessment materials in order to survive. This is a general tendency that
seems to be valid across educational contexts, as the statement of a Greek informant
(GRT11) suggests: “... to be honest with you, I was heavily dependent on the methods ...”.
Teachers were further asked about their in-service teacher training in LTA in order a)
to investigate whether they were satisfied with current in-service teacher training
provision in the field and b) to identify their individual training needs in their
respective contexts. Our hope was to be able to devise effective topics and formats that
would enhance teachers’ professionalization in terms of LAL. The latter proved to be
impossible, however, since the majority of respondents in our sample had not taken
part in in-service teacher training in the field (e.g. 72% of the teachers asked in the
German sample). They were thus not able to clearly identify training needs. Some
attended professional development activities only when they were required by
school authorities to use a certain assessment format or scoring rubrics, e.g. in school-
leaving exams. The general feeling was that work needs to be done in LTA because
teachers lack the appropriate knowledge and they need to improve their overall
competence in LTA practices, skills and knowledge.
The informants were also asked about their personal professionalization needs. Again,
the wishes they expressed were vague most of the time. One of the teachers from
Cyprus commented: “I believe that there should be better professional training and
orientation in language assessment because we may actually use some individual and
group assessment but this is not enough…” (CYT3). When informants did identify
concrete training needs, which happened only in the German sample, they were rather
general (“concrete things for everyday assessment”, GERT24; “how marks come
about”, GERT5), which shows that they lacked a systematic overview of the field or
that they did not see the need to take action. Among the concrete needs expressed
were two discernible trends, namely oral assessment practices and criterion-oriented
assessment.
The present study explored European foreign language teachers’ perceived LAL
levels generally and in their local educational contexts as well as their expressed
training needs in the field. The results of both the questionnaire and interview data
indicate that the informants’ perceived LAL profiles are not sufficient for the
assessment activities they have to accomplish in their professional field. This main
result corroborates findings from other studies such as Boraie (2015), Cheng, Rogers
and Hu (2004), Hasselgreen et al. (2004), Inbar-Lourie & Levi (2015), Lam (2014),
Kvasova and Kavytska (2014) and Xu (2015). The interview data additionally suggests
that teachers in our sample do not feel effectively prepared, which adds a
psychological component that might have an impact on teachers’ job satisfaction as
well, supporting Scarino’s (2013) call for a holistic approach to LAL.
All sources of data clearly showed that teacher education programs do not provide
adequate training in LTA (albeit with individual exceptions), which drives teachers to
use compensation strategies such as reliance on published assessment materials or the
uncontested adoption of mentors’ or colleagues’ assessment practices. Such a
tendency to “test as you were tested” (in analogy to “teach as you were taught”)
implies the danger of innovations in assessment not being implemented in practice
due to a lack of assessment routines that are firmly based on published knowledge.
Generally, approaches like learning on the job, which many informants displayed,
inhibit effective teacher development.
The interview data yielded individual insights into perceptions and sensitivities of
teaching professionals in their local contexts. Summarizing the findings from the
interviews, one can see that respondents in our study tend to revert to traditional
assessment procedures that are essentially written and typically use similar
assessment formats. Opinions are divided on the transparency of marking criteria for
learners, ranging from rather positive statements to doubts and uncertainty
caused by a lack of LAL in this area. Feedback procedures seem to reflect a deficit-
oriented approach rather than the more positively worded feedback that is inherent
in the Common European Framework of Reference and its descriptors that value
competencies even on low levels instead of highlighting deficits (Vogt, 2004).
Alternative forms of assessment have not yet entered mainstream assessment
practices although once teachers in our sample have tried them out successfully, they
realise their potential for Assessment for Learning. There is little experience with
standardized tests, and teachers are not in a position to evaluate them critically.
Concepts related to LTA remain fuzzy to a substantial number of respondents in our
study, which is attributable to low LAL levels. Consequently, teachers have difficulty
specifying their personal professional development needs, although teachers in some
contexts do identify urgent areas for training.
The evidence from the study showed, however, that foreign
language teachers are seriously thinking about their place within language
assessment, and are ready for greater levels of involvement in training initiatives to
broaden and diversify their assessment literacy with varying priorities depending on
contextual assessment requirements. Teachers have pointed out repeatedly that their
assessment competency has not yet reached a level that would allow them to feel
confident enough about their language assessment activities. Rather, they realize that
professional training is required and this aspect of their teaching is one that definitely
needs improvement. What will benefit teachers is professional development in this
area.
References
Aguado, K. (2014). Triangulation: Möglichkeiten, Grenzen, Desiderate.
[Triangulation: possibilities, limitations, desiderata.] In D. Elsner & B. Viebrock
Stiggins, R. J. (1999a). Learning teams can help educators develop their collective
assessment literacy. Journal of Staff Development, 20(3). Retrieved April 11, 2017
from http://www.nsdc.org/library/publications/jsd/stiggins203.cfm.
Stiggins, R. J. (1999b). Teams. Journal of Staff Development, 20(3). Retrieved April 11, 2017
from http://www.nsdc.org/library/publications/jsd/stiggins203.cfm.
Stiggins, R. J. (2001). Student-Involved Classroom Assessment (3rd ed.). Upper Saddle
River, NJ: Prentice Hall.
Stiggins, R. J. (2006). Assessment for Learning: A key to motivation and achievement.
Edge, 2(2), 3-19.
Tashakkori, A. & Teddlie, C. (1998). Mixed Methodology: Combining Qualitative and
Quantitative Approaches. Thousand Oaks, CA: Sage.
Taylor, L. (2009). Developing Assessment Literacy. Annual Review of Applied Linguistics
29, 21-36. DOI: 10.1017/S0267190509090035.
Taylor, L. (2013). Communicating the Theory, Practice and Principles of Language
Testing to Test Stakeholders: Some Reflections. Language Testing 30(3), 403-412.
DOI: 10.1177/0265532213480338.
Teddlie, C. & Tashakkori, A. (2009). Foundations of Mixed Methods Research: Integrating
Quantitative and Qualitative Approaches in the Social and Behavioral Sciences. Los
Angeles, CA: Sage.
Tsagari, D. (2016). Assessment orientations of primary state school EFL teachers in
two Mediterranean countries. CEPS Journal (Center for Educational Policy
Studies Journal), special issue ‘Assessment of Young Foreign Language
Learners’, 6(1), 9-30. Retrieved April 11, 2017 from
http://www.cepsj.si/doku.php?id=en:volumes:2016-vol6-no1
Tsagari, D. (2014). Unplanned LOA in EFL classrooms: Findings from an empirical study.
Paper presented at the Roundtable on Learning-Oriented Assessment in
Language Classrooms and Large-Scale Assessment Contexts, Teachers
College, Columbia University, New York, October 10-12, 2014.
Tsagari, D. & Csépes, I. (Eds.) (2011). Classroom-based Language Assessment. Frankfurt /
Main et al: Peter Lang.
Turner, C. E. (2014). Mixed Methods Research. In A. J. Kunnan (Ed.), The Companion to
Language Assessment (pp. 1-15). Hoboken, NJ: John Wiley & Sons, Inc.
Vogt, K. (2004). Der Gemeinsame Europäische Referenzrahmen für Sprachen: Inhalte,
Ziele, Diskussion. [The Common European Framework of Reference for
Languages: Contents, Aims, Discussion.] Der Fremdsprachliche Unterricht
Englisch, 38(2004), 69, 48–51.
2. Purposes of testing
2.1. Please specify if you were trained in the following domains.