Conference Presentations by Janna Fox
The use of diagnostic assessment in the post-admission context of first-year engineering requires finding the right balance in the interplay of marketing (Read, 2008), task, rater, scale, and feedback. This longitudinal study investigates the potential of diagnostic assessment to identify students-at-risk, provide more effective academic support, and help prevent failure.
Increasingly, universities are using post-entry diagnostic assessment (Alderson, 2005, 2007) to identify entering students at risk and provide early, individualized academic support. However, whether such support should be mandatory has remained a question of considerable debate (Read, 2008). This paper presents findings from a longitudinal study in a Canadian university which utilizes diagnostic assessment to better inform its undergraduate engineering program. In the study’s first year, 489 students (50% of the engineering cohort) were assessed with a modified, engineering-based version of the University of Auckland’s Diagnostic English Language Needs Assessment (DELNA). Students were informed of their results and invited on a voluntary basis to meet with peer mentors. Only 12 students (2%) sought additional feedback, including 3 of the 27 students (11%) who were identified as at risk. At the end of the year, 10 from the at-risk group had dropped out or were failing; 7 were borderline failures; and 10 were performing well (including the 3 at-risk students who had sought feedback). In the study’s second year, 899 students (95% of the cohort) were assessed, but only 33 students (4%) voluntarily followed up on their results. However, there was evidence that 3 of these students remained in the engineering program because of early diagnosis and pedagogical intervention. In year three, the diagnostic assessment was embedded within a required engineering course, and the number of students seeking feedback increased dramatically. Findings suggest that voluntary uptake of diagnostic feedback may not work, whereas embedding uptake in the context of a required course dramatically increases the potential of reaching students-at-risk.
When the goal of a diagnostic assessment is positive washback on learning (Cheng, 2005), finding the right balance in the interplay of marketing (Read, 2009), task, rater, scale, and feedback that optimizes the use of assessment information is a critical issue. This paper presents initial findings from a longitudinal study investigating the potential of diagnostic assessment to identify students-at-risk early in their first year of engineering, to provide academic support, and to help prevent failure. In the first year of the study, 489 students (50% of the first-year engineering cohort) were assessed with a modified version of the DELNA, which reflected the engineering context in: 1) tasks (i.e., the writing prompt used an engineering graph; a mathematics diagnostic was added); and 2) raters (i.e., 4 were drawn from engineering and 4 from writing studies backgrounds). Raters were trained to use the generic DELNA scale and to write detailed feedback for students-at-risk. Students were informed of their results by e-mail and invited on a voluntary basis to discuss them with engineering and writing studies peer mentors. Only 12 (2%) of 489 students sought additional feedback, including 3 (11%) of the 27 students who were identified as at risk. At the end of the year, 10 from the at-risk group had dropped out or were failing; 7 were borderline failures; and 10 were performing well. Case studies (Creswell, 1998; Merriam, 1998) conducted with first-year students identified three additional risk factors: second language (L2) learners without language support, transfers from other contexts, and students repeating first year. Amongst at-risk students who were successful in their first year, 2 had sought additional feedback on their diagnostic assessment results. Other key factors were: evidence of social networks, making connections with learning support, and strategic management of course demands. In the second year of the study, 899 students (95% of the cohort) were assessed. While changes to the scale, raters, and tasks increased the consistency and quality of feedback, these changes failed to resolve the problem of voluntary uptake. Only 33 students (4%) followed up on their results. Proposed revisions to the diagnostic process include using it as a placement test. Implications of this change in purpose and use will be discussed.
Papers by Janna Fox
The research literature is full of examples of how better understanding of our students’ language learning experiences can improve the impact and effectiveness of our teaching and, as a result, enhance learning (see, for example, Gottlieb, 2006; Ivanič, 2010; Zamel and Spack, 2004). In Chapter 5, we discuss assessment approaches and practices that can help us to better understand our students’ learning by addressing the following questions about who our individual students are: What is each student in my class bringing to the learning of the language? How does each of my students intend to use the language in the future? What are my students’ learning goals? When we recognize the unique and varied cognitive, cultural, educational and emotional differences of our students, we are in a far better position to address gaps in their knowledge, skills and understandings and to develop their strengths in learning (Cheng, 2013; Fox and Cheng, 2007).
Les Presses de l’Université d’Ottawa | University of Ottawa Press eBooks, 2007
It should be noted that there is no one way to design a test, but there are commonly agreed standards that should apply to all testing activities, whether they occur within your classroom as a result of your own test development, across a programme, a system, a nation, or around the world. In the sections below, we will ‘walk through’ a process of designing a test. First, we will consider some of the key steps in developing a test. The higher the stakes of the test, the more time and effort will go into work on each of these steps. However, as teachers, we also need to clearly understand the importance of each step and do our best, working alone or with our colleagues, to ensure that we have designed a test that measures what we intend it to measure; adequately represents or samples the outcomes, content, skills, abilities, or knowledge we are measuring; and elicits information that is useful in informing our teaching and in supporting the learning of our students.
Les Presses de l’Université d’Ottawa | University of Ottawa Press eBooks, 2007
As we discussed in Chapter 5, the research literature is full of examples of how better understanding of our students’ language learning experiences can improve the impact and effectiveness of our teaching and, as a result, enhance learning (see, for example, Gottlieb, 2006; Ivanič, 2010; Zamel and Spack, 2004). In this chapter, we again discuss assessment approaches and practices that can help us to better understand our students’ learning by addressing the following questions about who our individual students are: What are my students’ learning goals? What motivates their learning? How can our feedback support their learning? In this chapter we take a closer look at feedback in assessment practice and how we can shape that feedback to support our students’ learning as part of day-to-day classroom activity.
Les Presses de l’Université d’Ottawa | University of Ottawa Press eBooks, 2007
Language Assessment Quarterly, Jan 2, 2015
Why would Language Assessment Quarterly have a special issue on trends in language assessment in Canada? As the articles in this special issue suggest, Canada is a living laboratory for language assessment, whose unique language history, policies, and resulting complexity have provided extraordinarily fertile ground for fundamental considerations of language teaching, learning, and testing. Historically, Canada was the site of colonial conquest in which intergroup linguistic and cultural rights were often contested and conflictual, and throughout its history, language has been a driving force of social policy in Canada. Some language policies have been decidedly unjust. For example, from 1755 to 1762, French-speaking Acadians were forced from their homes and deported by English-speaking colonial authorities; from the late 19th through the mid-20th century, a government policy of aggressive assimilation forced First Nations and aboriginal children to enrol in residential boarding schools that attempted to erase home languages and cultures. Arguably, such dark periods in Canada’s history have led to the current heightened awareness of linguistic sensitivities, rights, and freedoms. However, shadows from Canada’s historical past continue to influence current linguistic and cultural policy initiatives, discussion, and debate. Three policy and policy-making areas dominate Canadian language concerns and concomitant research on and practices in language assessment, namely, aboriginal and First Nations affairs or “indigeneity” (Haque & Patrick, 2015, p. 28), bilingualism, and multiculturalism. A consideration of language assessment within the Canadian context requires some background regarding each of these three key language policy fields.
System, Sep 1, 2013
Despite being one of the most widely used proficiency measures of English for Academic Purposes, the newly-designed Internet-based Test of English as a Foreign Language (TOEFL iBT) remains under-researched by external researchers compared to previous TOEFL versions. Specifically, research is needed on factors that affect student performance within this new testing context. The purpose of this research was to identify and raise potential issues associated with the TOEFL iBT as explicitly linked to construct-dependent and construct-irrelevant variance factors. Through a key informant method, four language testing researchers sat for the TOEFL iBT and reported their experiences through two externally mediated focus groups. The testing researchers' experiences were analyzed using a standard thematic analysis and then deductively grouped based on their form of measurement variance. Results provide valuable considerations for measuring English for academic purposes and serve to identify specific, practical issues related to the language testing conditions, question design, and the testing protocol of the TOEFL iBT.
As we discussed in Chapter 1, teachers routinely deal with large-scale testing which is external to their classrooms, often with more at stake (or higher stakes). They routinely engage in small-scale testing, which is internal to their classrooms and measures achievement at the end of a unit or course with less at stake (or lower stakes). Such testing, often referred to as assessment of learning, tends to be a special event, a signpost or marker in the flow of activity within a course. On the other hand, assessment for and as learning is part of ongoing classroom assessment practices. In Chapter 2, we examined how defining learning goals and outcomes, and designing our learning activities and assessment tasks in relation to those goals and outcomes, can both support our students’ learning and inform and focus our teaching. Before discussing the processes and procedures of classroom assessment planning and practices, we will highlight some of the key differences between large-scale testing and classroom assessment practices. We will then walk you through classroom test development in Chapter 4.
This chapter addresses some of the most challenging aspects in teaching: deciding what to teach, what to assess, and how to align assessment in the classroom to the learning goals and outcomes for our students. Such goals and outcomes may be explicitly defined for a whole programme by benchmarks and standards, curriculum and external tests, or implicitly defined by teachers through textbooks at the classroom level. Teachers may also define outcomes by eliciting information from their students through needs analysis, student–student and student–teacher interaction, student self-assessment, and so forth. Look at Figure 2.1 and think about what it means to us as teachers to achieve instructional goals or outcomes through teaching and assessment. In the centre of this triangle is our students’ learning. The first question we need to ask relates to the learning goals or outcomes we have for our students: What do I want my students to learn?
Large-scale language assessment affects the lives of millions of test takers every year. As a means of achieving fairness in large-scale tests and testing practices, language testers have traditionally focused on uniformity and technical quality, and large-scale language assessment has increasingly been associated with political, bureaucratic, or accountability agendas. However, some scholars suggest that it can serve pedagogic and learning purposes as well. More recently, considerations of large-scale language assessment have been shaped by a more critical view of the contexts in which it occurs and of the sociopolitical motives it serves. Keywords: Assessment evaluation; Assessment methods; Classroom assessment
Springer eBooks, 2017
Alternative assessment has often been cast in opposition to traditional testing (particularly, high-stakes, discrete-point, multiple-choice testing). However, some (e.g., Bailey, K. Learning About Language Assessment: Dilemmas, Decisions, and Directions. Pacific Grove: Heinle & Heinle, 1998) have argued that it is more accurate to regard such tests and testing practices as one end of a continuum of assessment possibilities or alternatives in assessment. Although there are many alternative assessment approaches (see, e.g., criterion-referenced observational checklists, reading response journals, learning logs, poster presentations), the literature reviewed for this chapter suggests that over the past few years, portfolio assessment has become the most pervasive and prominent alternative assessment approach. Although portfolios take different forms and serve different purposes, they share in common the ongoing selection and collection of work as evidence of learning and development over time. Portfolio assessment initiatives have become increasingly used in language teaching and learning contexts, and their potential benefits have been widely promoted (e.g., Little, Lang Test 22(3):321-336, 2005), particularly when they serve formative/learning purposes; they have been less successful in summative assessment contexts (e.g., Fox, Contact Spec Res Symp Issue, 40(2), 68-83, 2014). Discussions of alternative assessment (which is at times viewed as more authentic, because it is closer to and has more in common with classroom practices) have continued to prompt lively discussions of validity and reliability. Arguably, however, the most substantive changes in alternative assessment have occurred as a result of the widespread use of increasingly sophisticated technologies. For example, e-portfolios have emerged as an important assessment alternative, which can provide a more flexible, less cumbersome, and longer-term record of learning and development.
Language Testing, Oct 1, 2004
This study took a grounded theory approach in investigating the relationship between the criteria indigenously drawn for an English for academic purposes (EAP) test and the outcomes of test decisions over time. Test outcomes were examined across three theoretical samples: (1) an EAP teacher-identified sample of misplaced students; (2) a database-generated sample that matched the key variables in the EAP teachers’ sample; and (3) a random sample of test-takers drawn from a single administration of the test. Although analysis revealed two sources of testing error, namely, under-specification of bands in the writing sub-test scale and an under-valuing of the listening sub-test in the overall weighted average of the test, it also revealed the relationship between key patterns of performance on the test and performance in university classrooms that signposted students at risk. As such, the study provides evidence of the usefulness of inquiry that examines the relationship between EAP test performance and the use of English as a mediating tool in academic performance.
Language Testing, May 9, 2023
Routledge eBooks, Feb 16, 2022