Audrey Amrein-Beardsley
Audrey Amrein-Beardsley is a Professor in the Mary Lou Fulton Teachers College. Her research interests include educational policy, educational measurement, and research methods, with a focus on high-stakes tests and value-added methodologies and systems. She is the author of over 50 peer- and editorially reviewed journal articles and, most recently, an academic book published in 2014 titled "Rethinking Value-Added Models in Education: Critical Perspectives on Tests and Assessment-Based Accountability."
Given her scholarly contributions in the areas of educational research and policy, from 2011-2014 she was honored as one of the top edu-scholars in the nation, recognized as a university-based academic contributing most substantially to public debates about the nation's educational system. Her research has also been highlighted in popular press outlets including National Public Radio (NPR), The New York Times, USA Today, the Washington Post, Education Week, The Huffington Post, and The Economist; in local news outlets (e.g., The Boston Globe, the Houston Chronicle, the Arizona Republic); and in television media, including Public Broadcasting Service (PBS) shows and, most recently, HBO's Last Week Tonight with John Oliver.
Relatedly, she is the creator of the blog VAMboozled! She is also the creator and host of a show titled Inside the Academy, through which she interviews and archives the personal and professional histories of some of the top educational researchers in the academy of education. Follow her on Twitter at @amreinbeardsley or @vamboozled_
Phone: 602-561-4731
Papers by Audrey Amrein-Beardsley
Four separate standardized and commonly used tests that overlap the same domain as state tests were examined: the ACT, SAT, NAEP, and AP tests. Archival time series were used to examine the effects of each state's high-stakes testing program on each of these different measures of transfer. If scores on the transfer measures went up as a function of a state's imposition of a high-stakes test, we considered that evidence of student learning in the domain and support for the belief that the state's high-stakes testing policy was promoting transfer, as intended.
The uncertainty principle is used to interpret these data. That principle states: "The more important that any quantitative social indicator becomes in social decision-making, the more likely it will be to distort and corrupt the social process it is intended to monitor." Analyses of these data reveal that if the intended goal of high-stakes testing policy is to increase student learning, then that policy is not working. While a state's high-stakes test may show increased scores, there is little support in these data that such increases are anything but the result of test preparation and/or the exclusion of students from the testing process. These distortions, we argue, are predicted by the uncertainty principle. The measure of success for a high-stakes testing policy is whether it affects student learning, not whether it can increase student scores on a particular test. If student learning is not affected, the validity of a state's test is in question.
Evidence from this study of 18 states with high-stakes tests is that in all but one analysis, student learning is indeterminate, remains at the same level it was before the policy was implemented, or actually goes down when high-stakes testing policies are instituted. Because clear evidence for increased student learning is not found, and because there are numerous reports of unintended consequences associated with high-stakes testing policies (increased drop-out rates, teachers' and schools' cheating on exams, teachers' defection from the profession, all predicted by the uncertainty principle), it is concluded that there is a need for debate and transformation of current high-stakes testing policies.