BLENDED LEARNING IN PRACTICE | Spring 2020
Decoding the rubric for dissertation writing: a pilot workshop
Laura Urbano
[email protected]
Abstract
Discussion of exemplars of student work is a productive means of explaining tacit knowledge and guiding students towards the requirements of academic writing. Through a pilot workshop on dissertation writing, this study examines how marking exemplars can be used to build a better understanding of grading criteria and quality, and to promote positive transfer of skills from exemplars to the assessment task.
In a student-led workshop, groups of students analysed and annotated a series of example extracts showing both good practice and common mistakes from different sections of a scientific dissertation. For each section of the report, the students were shown the assessment criteria, asked to mark the extract, and additionally asked to make the marking criteria explicit by rephrasing them as a list of do's and don'ts.
Student perception of the usefulness of the workshop was positive and was reflected in
improved assessment outcomes. Teaching implications of these results are discussed, and
some avenues for future workshop applications are outlined.
Keywords:
Exemplars; quality; deconstructing; peer discussion; teacher guidance
Introduction
A writing crisis?
The recent expansion of 'writing intensive' courses across a wide range of disciplines, including but not limited to the STEM disciplines (Science, Technology, Engineering, Mathematics), has prompted discussion of how the benefits and mechanisms of writing within the disciplines encourage learning and socialisation and radically change students' attitudes towards the discipline (Carter, Ferzli, and Wiebe 2007). However, many students do not have extensive experience of writing lab reports in a professional, "publishable" style. Indeed, it is not uncommon to hear undergraduate instructors lament the weak writing skills of many of their students, and the notion of a "writing crisis" has been circulating for many years, often perpetuating a long-standing
‘moral panic’ about the poor quality of students' academic writing (French 2013; Kim et al.
2009).
One proposed solution is a dialogic approach to student writing pedagogy, which places dialogue at the centre of an academic literacies stance (Lillis 2003). The more the subject lecturer is involved in this integrated writing instruction, the better the opportunities to elicit student perspectives and to consider 'the resources that writers bring to the academy' (Lillis and Scott 2015). Fernsten and Reda
support the idea that writer self-awareness provides students with a better understanding
of the writing process, additional tools with which to attempt writing assignments, and
greater confidence to move through the multiple literacy tasks of the academy and beyond.
By inviting students to examine their beliefs about writing, these activities are useful in any
classroom and across disciplines (Fernsten and Reda 2011).
Decoding the rubric
One tool available to help students consider their beliefs about writing and to decode the discipline is the rubric. Educators tend to define the word 'rubric' in slightly different ways; a commonly used definition is a document that articulates the expectations for an assignment by listing the criteria, or 'what counts', and describing levels of quality from outstanding to poor.
The number of studies in this field is limited, and the results are complex to interpret. On the one hand, some studies (Greenberg 2015; Petkov and Petkova 2006; Reitmeier, Svendsen, and Vrchota 2006) suggest that including students in the development and use of rubrics, or sharing the rubric prior to an assignment, is associated with improved assessment outcomes; on the other hand, other studies have shown no differences between students' marks with and without rubrics (Green and Bowser 2006). This would appear to suggest that
simply circulating a rubric to the students cannot be expected to have significant impact on
student work and perception - students must actively make use of a rubric (e.g. in
assessments or revision) in order to gain benefits.
However, if students do not understand the rubric terminology and cannot differentiate
between academic standards, rubrics have little value for either preparation or feedback.
Such 'barriers to learning' can be particularly significant for students from unusual backgrounds, as students at all levels do not necessarily 'know what to do' in response to conventional assessment tasks, essay criteria, or instructions about styles of referencing. Many of the problems experienced by learners are at least partly caused by the cultural values and assumptions which underpin different aspects of pedagogy and assessment. In particular, "Problems in decoding and responding to expectations appear to be particularly acute in relation to assessment criteria" (Haggis 2006), and terms such as
‘critical analysis’ are often unclear to students and need further explanation (Reddy and
Andrade 2010). Haggis (2006) also makes a cogent argument against treating these findings as a reason for 'dumbing down' or as an indication of the erosion of standards; she highlights instead the need to shift the framing of the 'problem' from a static, condition-based view of the individual learner ('what is wrong with this student?') towards a dynamic, process-based view which tries to identify problematic aspects of higher education discourse and practice ('what elements of the curriculum are preventing some students from being able to access this subject?').
An example of such an approach is the work undertaken as part of the 'What Works? Student Retention and Success' change programme recently implemented at the University of Wolverhampton. The initiative focused on implementing and evaluating an inclusive assessment intervention, which included a student-led assignment unpacking session in which students discussed in groups their understanding of the assignment requirements and fed this back to the class and the lecturer (Cureton 2012). The initiative treated students as active participants and co-creators in the further development of the inclusive assessment curricula, empowering them and resulting in improved attainment and confidence (Curran 2017; Cureton 2012).
The support of exemplars
While the rubric can improve students' attainment by clarifying the outcomes, it relies on the assumption that the description of such outcomes is clear and unequivocal, taking the 'tools of the trade' for granted (Lillis and Scott 2015). Students often find it difficult to
understand assessment criteria and the nature of good quality work in their discipline.
Under these circumstances, they face challenges in identifying and providing what teachers
are looking for in an assessment task (Sadler 1987).
Royce Sadler has been a remarkably influential promoter of the value of using exemplars,
and he defined exemplars as key examples chosen so as to be typical of designated levels of
quality or competence (Sadler 1989, 2002). Exemplars are therefore provided examples of best or worst practice, designed to assist students in increasing their understanding of competences, content or knowledge and to explicate established criteria and standards (Greenberg 2015). In contrast with model answers, which are single "perfect" answers,
exemplars often show a grade range and can indicate how the exemplar satisfies the stated
criteria for assessment or they may simply be presented as they were submitted for
assessment by the former student (Huxham 2007; Newlyn 2013).
Handley and Williams have observed that exemplars can be effective tools in increasing
students' engagement with feedback (Handley and Williams 2011). In addition, Scoles, Huxham, and McArthur observed that students showed strong support for the use of exemplars. They identified exemplars as a practical tool that students can access to help close the gap between feedback and exams, allowing students to take control of the feedback process and increasing exam marks (Scoles, Huxham, and McArthur 2013). They state that the exemplars helped students "understand what was wanted from their lecturers", especially "in conjunction with conversations with lecturers" (Scoles, Huxham, and McArthur 2013, p. 6-7).
Issues of time and consent are very important considerations in the argument against the
use of exemplars, as well as the idea that providing exemplars 'gives students the answer'
and may lead to plagiarism (Newlyn 2013; Newlyn and Spencer 2009). Such issues need to be considered and could potentially be addressed by making the exemplar as generic as possible, thereby ensuring that it can be reused with multiple cohorts.
An interesting example of an intervention which merged the benefits of exemplars with those of rubrics has been reported by Jones et al. Their intervention comprised (1) deconstruction of the rubric and standardisation of the marking method; (2) examples and exemplars; (3) peer review; (4) self-review; and (5) a reflective diary, and it resulted in improved marks and student confidence (Jones et al. 2017).
A specific UH perspective
The BSc Pharmaceutical Sciences programme at the University of Hertfordshire was designed with extensive input from external stakeholders from pharmaceutical and biotechnology companies to meet the needs of the ever-evolving pharma industry in the UK and worldwide, and to produce graduates able to contribute to research, discovery, development and production in the pharmaceutical industry and related areas (LMS 2019).
To “communicate effectively both orally and in written form” is included among the
intended learning outcomes for the programme and is supported through exercises on
report writing, feedback on written assignments from personal academic tutors, and seminars at Level 6 (LMS 2019, p.5). However, in line with the wider higher education context, students appear to struggle with writing extended reports in an academic style, and this can lead to negative perceptions of the student writer identity and to anxiety, in particular for students from varied backgrounds (Fernsten and Reda 2011). At Level 6 the BSc in Pharmaceutical
Sciences includes two written assignments for which the assessment criteria are available,
but often misunderstood, as exemplified by questions received in relation to the assignment such as "How much detail the critical analysis should be?" and "I just wanted to ask how much detail and information do we need to include".
This study presents a preliminary evaluation of a workshop designed to offer these students a chance to bridge this lack of clarity, so that they can achieve the learning outcomes more easily and perform better.
Methods
A student-led workshop was piloted as part of the Advances in Pharmaceutical Formulations and Drug Delivery (APFDD) optional Level 6 module (n = 15 students). The module
assessment includes a written report on lab-based activities which accounts for 30% of the
final module mark.
The workshop was designed as a tool to clarify the assessment criteria through group analysis, discussion and annotation of a series of example extracts showing both good practice and common mistakes from different sections of a scientific dissertation. The exemplars were based on published research papers, amended to include variations and mistakes frequently observed by the author in previous years' marking. The workshop was delivered in 1.5 hours, and students were provided with printed versions of the exemplars to annotate. For each section of the report, the students were shown the assessment criteria, asked to annotate the exemplar (Figure 1 and Figure 2), and additionally asked to make the marking criteria explicit by rephrasing them as a list of do's and don'ts. Each group compiled a list in real time on post-it notes and then handed it to the teacher, who transcribed the criteria onto the slides (Figure 3). The updated slides were subsequently circulated to the students to be used as a support during the write-up.
Immediate evaluation was performed using Brookfield's critical incident questionnaire ("What was the most engaged/disengaged moment?", "What was the most confusing moment?", "What was the most useful moment?"), collecting anonymous feedback on post-it notes on a voluntary basis. Delayed evaluation was based on comparing the mid-module feedback questionnaire responses and written assignment marks of the cohort that received the workshop with those of the previous year's cohort, which did not receive the workshop (2018/19, n = 15; 2017/18, n = 16). Welch's t-test was used to compare marks, given the unequal sample sizes.
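As an illustration of this comparison (a minimal sketch only: the marks below are invented placeholders, not the study data, and the analysis tool actually used is not stated in this article), the Welch's t-test between the two cohorts' written assignment marks could be run in Python as follows.

# Minimal sketch of the between-cohort comparison using Welch's t-test,
# which does not assume equal variances or equal group sizes.
# The marks below are hypothetical placeholders, not the study's data.
from scipy import stats

marks_2017_18 = [55, 60, 48, 62, 58, 51, 65, 57, 54, 59, 61, 50, 56, 63, 52, 58]  # n = 16
marks_2018_19 = [68, 72, 60, 65, 70, 74, 63, 69, 66, 71, 62, 67, 73, 64, 70]      # n = 15

t_stat, p_value = stats.ttest_ind(marks_2018_19, marks_2017_18, equal_var=False)
print(f"Welch's t = {t_stat:.2f}, p = {p_value:.3f}")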
Figure 1. Example of annotated exemplar (1)
Figure 2. Example of annotated exemplar (2)
Figure 3. Example of “translated” assessment criteria
Results and discussion
Brookfield’s critical incident questionnaire has been reported to be a beneficial instrument
for educators to assess their own teaching, make adjustments to class delivery based on
student feedback to engender greater student engagement, and encourage future teachers
to engage in the process of self-reflection (Jacobs 2015). A simplified version of the
questionnaire was answered by 8 of the 15 participants (53%), and the results are represented as a word cloud in Figure 4. The immediate feedback was overwhelmingly positive in terms of perceived usefulness and engagement (e.g. "engaged throughout the workshop", "would have been useful before dissertation"). No elements of confusion were identified, and the only criticism concerned the generic character of the workshop ("not very specific to lab report, to reports in general", "should be more specific to the actual assessment"). The latter is not unexpected, as the exemplars were purposely designed to be generic, both to avoid providing a report "template" and to be potentially administrable to multiple cohorts (Newlyn 2013; Newlyn and Spencer 2009).
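A word cloud such as the one in Figure 4 can be generated from the pooled free-text responses; the sketch below assumes the open-source wordcloud Python package (the article does not state which tool was actually used) and uses the quoted comments as placeholder input.

# Sketch of generating a word cloud from pooled questionnaire responses.
# Assumes the open-source `wordcloud` package; input strings are placeholders.
import matplotlib.pyplot as plt
from wordcloud import WordCloud

responses = [
    "engaged throughout the workshop",
    "would have been useful before dissertation",
    "not very specific to lab report, to reports in general",
    "should be more specific to the actual assessment",
]

cloud = WordCloud(width=800, height=400, background_color="white").generate(" ".join(responses))

plt.imshow(cloud, interpolation="bilinear")
plt.axis("off")
plt.savefig("critical_incident_wordcloud.png", dpi=150, bbox_inches="tight")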
The positive perception of the intervention was also mirrored in improved assessment outcomes, as shown in Figure 5: the introduction of the workshop corresponded to a 12% increase in written assignment marks. This increase was observed across the average marks for the written coursework (written assignment: 12% increase, lab report: 28% increase) but not in the end-of-module examination, which on the contrary showed a 7.5% decrease in final marks. This suggests that the increased coursework marks cannot be explained by an overall higher academic strength of the cohort and appear instead to reflect the effectiveness of the workshop.
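The percentage changes quoted above are relative changes in cohort mean marks, i.e. (new mean - old mean) / old mean, expressed as a percentage; the short sketch below illustrates the calculation with invented cohort means chosen only to roughly match the reported magnitudes.

# Illustration of the percentage-change calculation behind the reported figures.
# The cohort mean marks below are invented placeholders, not the study's data.
def percent_change(old_mean: float, new_mean: float) -> float:
    """Relative change of the 2018/19 mean against the 2017/18 mean, in percent."""
    return (new_mean - old_mean) / old_mean * 100

cohort_means = {
    "written assignment": (58.0, 65.0),
    "lab report": (50.0, 64.0),
    "end-of-module exam": (60.0, 55.5),
}

for assessment, (before, after) in cohort_means.items():
    print(f"{assessment}: {percent_change(before, after):+.1f}%")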
A final point of investigation was the effect of the workshop on students' understanding of the assessment processes and of the module structure and learning outcomes. While no direct metric was available to measure this, the mid-module feedback for APFDD shows an increase in understanding of the assessment and of the module structure and learning outcomes (Table 1). This is, however, an indirect measure and suffers from two main limitations: it is a global evaluation of the module rather than of the individual assessments, and it is based on a reduced number of participants (2017/18: n = 10; 2018/19: n = 7). This calls for further investigation; nonetheless, it does not contradict a positive effect of the workshop's introduction.
Figure 4. Word cloud summarising the answers to critical incidence questions for immediate feedback assessment
Figure 5. Written assignment results before and after workshop trial (* indicates p<0.05)
Table 1. Average responses to mid-module feedback questionnaire before and after workshop trial
Conclusions & future developments
This study focused on the preliminary evaluation of a workshop designed to bridge the gap
between defined criteria (the rubric) and standards to help students to achieve the learning
outcomes more easily and perform better. The positive outcomes, namely the students' favourable response to the introduction of the workshop and the increase in student marks, suggest that this workshop represents a promising strategy for achieving such aims, probably due to the active role of the students in "decoding" the rubric, in line with previous research (Curran 2017; Cureton 2012; Green and Bowser 2006; Jones et al. 2017).
Another advantage of the intervention is that it is versatile and time-effective, and could easily be integrated into most higher education courses, in contrast to other studies that involve 11-week interventions (Jones et al. 2017). The main limitations are the small number of participants, which makes it difficult to draw firm conclusions, and the limited reflection element in this work. Cultivating reflective and critical practice with rubrics has been reported to support the development of engaged, self-regulated learners capable of applying their knowledge and skills to new tasks, so the incorporation of a reflective journal should be considered for future iterations of the workshop (Bryan and Clegg 2006; Race 2007).
Future work will focus on continued monitoring of the workshop within the APFDD module and potentially on widening its application to larger cohorts in adjacent disciplines, for example Levels 6-7 of the Master of Pharmacy course. In addition, since the rubric is an integral and routine component of assessment and the goal of educators is to enhance academic performance, it is in students' interest to support their understanding of how to use a rubric effectively early in their university career; similar interventions could therefore be piloted across the whole duration of a course, not just in the final year.
References
Bryan, Cordelia and Karen Clegg. 2006. Innovative Assessment in Higher Education.
Routledge.
Carter, Michael, Miriam Ferzli, and Eric N. Wiebe. 2007. “Writing to Learn by Learning to
Write in the Disciplines.” Journal of Business and Technical Communication 21(3):278–302.
Curran, Roisin. 2017. “Staff-Student Partnership: A Catalyst for Staff-Student Engagement.”
Cureton, Debra. 2012. “Inclusive Assessment Approaches: Giving Students Control in
Assignment Unpacking.”
Fernsten, Linda A. and Mary Reda. 2011. “Helping Students Meet the Challenges of
Academic Writing.” Teaching in Higher Education 16(2):171–82.
French, Amanda. 2013. “‘Let the Right Ones In!’: Widening Participation, Academic Writing
and the Standards Debate in Higher Education.” Power and Education 5(3):236–47.
Green, Rosemary and Mary Bowser. 2006. “Observations from the Field.” Journal of Library
Administration 45(1–2):185–202.
Greenberg, Kathleen P. 2015. “Rubric Use in Formative Assessment.” Teaching of
Psychology 42(3):211–17.
Haggis, Tamsin. 2006. “Pedagogies for Diversity: Retaining Critical Challenge amidst Fears of
‘Dumbing Down.’” Studies in Higher Education 31(5):521–35.
Handley, Karen and Lindsay Williams. 2011. “From Copying to Learning: Using Exemplars to
Engage Students with Assessment Criteria and Feedback.” Assessment & Evaluation in
Higher Education 36(1):95–108.
Huxham, Mark. 2007. “Fast and Effective Feedback: Are Model Answers the Answer?”
Assessment & Evaluation in Higher Education 32(6):601–11.
Jacobs, Mary Ann. 2015. “By Their Pupils They’ll Be Taught: Using Critical Incident
Questionnaire as Feedback.” Journal of Invitational Theory and Practice 21:9–22.
Jones, Lorraine, Bill Allen, Peter Dunn, and Lesley Brooker. 2017. “Demystifying the Rubric: A
Five-Step Pedagogy to Improve Student Understanding and Utilisation of Marking Criteria.”
Higher Education Research & Development 36(1):129–42.
Kim, Eunjung, Jaemoon Yang, Jihye Choi, Jin-Suck Suh, Yong-Min Huh, and Seungjoo Haam.
2009. “Synthesis of Gold Nanorod-Embedded Polymeric Nanoparticles by a
Nanoprecipitation Method for Use as Photothermal Agents.” Nanotechnology
20(36):365602.
Lillis, Theresa. 2003. “Student Writing as ‘Academic Literacies’: Drawing on Bakhtin to Move
from Critique to Design.” Language and Education 17(3):192–207.
Lillis, Theresa and Mary Scott. 2015. “Defining Academic Literacies Research: Issues of
Epistemology, Ideology and Strategy.” Journal of Applied Linguistics and Professional
Practice 4(1):5–32.
LMS. 2019. “Programme Specification BSc Pharmaceutical Science.”
Newlyn, David. 2013. “Providing Exemplars in the Learning Environment: The Case for and
Against.” Universal Journal of Educational Research 1(1):26–32.
Newlyn, David and Liesel Spencer. 2009. “Using Exemplars in an Interdisciplinary Law Unit :
Listening to the Students’ Voices.” Journal of the Australasian Law Teachers Association
121–33.
Petkov, Doncho and Olga Petkova. 2006. “Development of Scoring Rubrics for Projects as an
Assessment Tool across an IS Program.” Issues in Informing Science and Information
Technology 3:499–510.
Race, Philip. 2007. The Lecturer’s Toolkit : A Practical Guide to Assessment, Learning and
Teaching. Routledge.
Reddy, Y. Malini and Heidi Andrade. 2010. “A Review of Rubric Use in Higher Education.”
Assessment & Evaluation in Higher Education 35(4):435–48.
Reitmeier, C. A., L. K. Svendsen, and D. A. Vrchota. 2006. “Improving Oral Communication
Skills of Students in Food Science Courses.” Journal of Food Science Education 3(2):15–20.
Sadler, D. Royce. 1987. “Specifying and Promulgating Achievement Standards.” Oxford
Review of Education 13(2):191–209.
Sadler, D. Royce. 1989. “Formative Assessment and the Design of Instructional Systems.”
Instructional Science 18(2):119–44.
Sadler, D. Royce. 2002. “Ah! ... So That’s ‘Quality’.” In Schwartz, P. and Webb, G. (Eds.), Assessment Case Studies: Experience and Practice from Higher Education.
Scoles, Jenny, Mark Huxham, and Jan McArthur. 2013. “No Longer Exempt from Good
Practice: Using Exemplars to Close the Feedback Gap for Exams.” Assessment & Evaluation
in Higher Education 38(6):631–45.