Introduction To SLO Assessment
OUTCOMES ASSESSMENT

Introduction
  Assessing Student Learning at the Program Level
  Purpose of this Guide
Frequently Asked Questions
  What is a “learning outcome”? How is an outcome different from a goal or objective?
  What do you mean by “assessment”? Don’t we already assess individual students’ performances in our classes, labs, internships, etc.?
  What is a “Program”?
  What are Department and Program Faculty Required to Do?
Developing Assessment Plans
  Begin with a brief statement of the mission and general goals for the program
  Identify the intended student learning outcomes of the program
  Describe how data will be collected and analyzed to measure student learning outcomes
  Make decisions about the logistics for each assessment
  Develop a tentative schedule for assessing all outcomes
Summary: Recommendations for Developing An Assessment Plan
For More Information or Assistance
Appendix A: UNC-Chapel Hill’s Policy on Outcomes Assessment of Academic Programs and Non-Instructional Unit Outcomes
Appendix B: Action Verbs That Can Be Used in Writing Learning Outcomes Statements
Appendix C: Rubric Used by Grant Review Panels at the National Institutes of Health to Evaluate Research Proposals
Appendix D: Helpful References
Assessing Student Learning at the Program Level
There are a number of reasons for measuring and assessing student learning outcomes at the program level:
• Curriculum Evaluation: To confirm that the actual knowledge students acquire by completing the
requirements of the major is consistent with the intended goals of the curriculum.
• Student Success: To monitor student success across the program, identify gaps, and suggest initiatives to
enhance the educational experience for all students.
• Alumni Success: To ensure that graduates demonstrate competencies such as critical thinking
and communication skills that employers in all fields consistently identify as prerequisites for success in a
rapidly changing economic environment.
• Effectiveness: To gather and aggregate evidence across the program, not just in individual
courses, to measure effectiveness and guide efforts to continuously improve the quality of the program.
• Accountability: To respond to the increasing pressure from the public and our constituents to be
accountable and to demonstrate the value students receive from participating in our programs and
services.
• Accreditation: To meet regional and professional accreditation requirements. UNC-Chapel Hill’s regional
accreditor, the Southern Association of Colleges and Schools Commission on Colleges (SACSCOC), requires
that:
“The institution identifies expected outcomes, assesses the extent to which it achieves these
outcomes, and provides evidence of improvement based on analysis of the results in…educational
programs, to include student learning outcomes.” 1
2 “Self-Study Outline and Required Key Elements,” Program Review webpage on The Graduate School website:
http://gradschool.unc.edu/policies/faculty-staff/program-review/outline.html
What is a “Program”?
For purposes of student learning outcomes assessment, the University of North Carolina at Chapel Hill has defined a
“program” as a credit-bearing course of study that results in a degree or a stand-alone professional certificate 3.
The following guidance is provided to help determine what programs are required to submit assessment plans and
reports:
• A program with multiple degrees at the same level and a common core curriculum (e.g., BA and BS in
Biology) may submit one report, but should include at least one unique measure for each degree.
• Graduate programs that only admit students to pursue a doctoral degree but are approved to award a
master’s degree as students progress toward the doctorate may prepare one report. The outcomes
should reflect what students know or can do upon completion of the doctoral degree.
• Programs with residential and distance education versions of the same degree may submit joint or
separate reports, but either way, need to present evidence that graduates demonstrate equivalent
knowledge and skills, regardless of mode of delivery.
3 A professional certificate program is defined here as one that admits non-degree students whose objective is the
development of a specialization in a specific field (for example, Dental Assisting).
Developing Assessment Plans
At a minimum, an assessment plan includes:
1. a mission statement,
2. the intended student learning outcomes of the program, and
3. a description of the methods that will be used to gather data to measure student
achievement of each outcome.
Begin with a brief statement of the mission and general goals for the program
• A brief description of the purpose of the program (usually a paragraph)
o Educational values;
o What the program prepares students for (e.g., graduate study, professional positions)
Identify the intended student learning outcomes of the program
• Focus on selecting 3-6 of the most important learning outcomes. More are acceptable, but the
practical ability of program faculty to adequately measure, analyze, and reflect upon the results becomes
compromised when there are too many. For example, many doctoral programs identify four major
outcomes that, with some variations, describe: (1) advanced knowledge of the discipline, (2) research
skills, (3) college teaching skills, and (4) professional development such as presentation skills, ethics, grant
writing, etc.
• Discipline-based societies and professional associations can be good sources for identifying learning
outcomes for specific majors. Many of these organizations have already articulated outcomes and
competencies at each degree level. Below is an example from the American Psychological Association.
• Knowledge Base of Psychology: Students will demonstrate knowledge and comprehension of the major concepts,
theoretical perspectives, historical trends, and empirical findings and the ability to discuss how psychological
principles apply to behavioral problems.
• Scientific Inquiry and Critical Thinking: Students will demonstrate scientific reasoning and problem-solving skills,
focusing on the use of theory in the design and execution of research plans that address psychological questions
through appropriate research methods, data analysis, and interpretation.
• Ethical and Social Responsibility in a Diverse World: Students will demonstrate adherence to professional values to
optimize their contributions and work effectively, even with those who do not share their heritage and traditions, as
well as the adoption of personal and professional values that can strengthen community relationships.
• Communication: Students will demonstrate competence in writing and in oral and interpersonal communication
skills by producing a research study or other psychological project, explaining scientific results, and presenting
information to a professional audience.
• Professional Development: Students will apply psychology-specific content and skills and demonstrate effective
self-reflection, project-management skills, teamwork skills, and career preparation.
Describe how data will be collected and analyzed to measure student learning outcomes
• Direct assessments examine student work products that come straight from the source: the student. Focus on
student work as the primary source for assessing outcomes, that is, what your majors are actually producing as
part of your department’s curriculum on their way to graduation. For example:
• They take tests or comprehensive exams in certain content domains required for majors.
• They participate in certain experiences that are valued by your department, such as capstone
experiences or field or service learning work.
• They prepare portfolios to summarize their work at the end of the year.
• They write thought papers where they reflect on what, how, and why they learned.
• Indirect assessments examine secondary information about what students have learned (e.g., student
opinions about what they learned, or course-taking patterns within a department). Often, indirect
assessments provide feedback that is useful in interpreting results of direct assessments or suggesting
how processes might be improved to enhance learning. For example, if direct methods revealed that
students were not achieving the desired outcomes in a specific area of the curriculum, surveys
or focus groups with students might provide clues for improving learning conditions.
o Graduation Rates: Completing the program is not a measure of what students learned.
However, completions might be another type of goal that the program sets for itself.
o Course Grades: Course grades are poor measures of learning outcomes for several reasons:
1. Since grading criteria and standards are matters decided by the individual
instructor, the grades in one course cannot be assumed to be equivalent to grades in
other courses. (There is no “gold standard” to which all teachers adhere.)
2. The tests, assignments, projects, and papers in a course may not measure the
program outcomes of interest to the department. However, it is possible with
careful planning to map specific course assignments to program-level learning
outcomes and to develop standard measures of performance that allow the results
to be aggregated and reviewed as outcomes data. For example, final papers in
capstone courses can be graded using a common rubric that anchors the ratings to
specific performances on certain dimensions, such as critical thinking.
We recommend that you attempt to use mostly direct assessment techniques, but encourage you to
supplement those with indirect assessment methods to the extent that you find the data useful in
improving learning in your discipline.
Make decisions about the logistics for each assessment
• If a sample of work or papers will be evaluated, what size sample will be drawn, and how will it be drawn? (A sketch of one way to draw such a sample follows this list.)
• What steps will be taken to protect the identity of students whose work will be judged?
• Who will conduct the assessment? How many judges will there be, and how will these judges be selected?
• Who will ensure that the assessments will take place in a timely way?
• Who will store and analyze the data once the assessments have been made?
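To make these logistics concrete, here is a minimal sketch, in Python, of one way a program might draw a documented, repeatable random sample and mask student identities before review. The file names, sample size, seed, and coding scheme are all hypothetical illustrations, not required practice.

```python
# A minimal sketch of drawing a reproducible random sample of student work and
# replacing identifying file names with neutral codes before judges see them.
import random

# Pretend these are the capstone papers submitted this year (names invented).
submissions = [f"capstone_paper_{i:03d}.pdf" for i in range(1, 81)]

random.seed(2024)  # fixed seed so the same draw can be documented and repeated
sample = random.sample(submissions, k=20)  # e.g., review 25% of 80 submissions

# Assign neutral codes so judges never see student-identifying names; the key
# linking codes to files is kept by someone other than the judges.
key = {f"W{idx:02d}": filename for idx, filename in enumerate(sample, start=1)}
for code in sorted(key):
    print(code)  # judges receive only W01, W02, ... W20
```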
If you choose to have judges rate student work, develop a clear rubric for these evaluations. (A sketch of how such ratings can be aggregated follows the questions below.)
• What 5-10 common dimensions or attributes should be present in the student work?
• What skills (consistent with the learning goals) should students have demonstrated by completing the
assignment, project, or course?
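To illustrate how rubric ratings can become aggregated, program-level outcomes data (as noted under Course Grades above), here is a minimal sketch in Python. The dimensions, the 1-4 scale, and the target value are hypothetical examples; any rubric and success criterion your faculty adopt would work the same way.

```python
# A minimal sketch of turning judges' rubric ratings into outcomes data:
# average the judges' scores per student, then summarize each dimension.
from statistics import mean

# ratings[student][dimension] = scores from each judge on a 1-4 rubric scale
ratings = {
    "S01": {"critical_thinking": [3, 4], "written_communication": [2, 3]},
    "S02": {"critical_thinking": [2, 2], "written_communication": [4, 4]},
    "S03": {"critical_thinking": [4, 3], "written_communication": [3, 3]},
}

TARGET = 3.0  # example criterion: an average rating of 3 ("proficient") or higher

dimensions = sorted({dim for scores in ratings.values() for dim in scores})
for dim in dimensions:
    per_student = [mean(scores[dim]) for scores in ratings.values()]
    met = sum(s >= TARGET for s in per_student)
    print(f"{dim}: mean {mean(per_student):.2f}; "
          f"{met} of {len(per_student)} students met the {TARGET} target")
```

Averaging across judges before summarizing keeps each student's result on the same scale regardless of how many judges rated each piece of work.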
Develop a tentative schedule for assessing all outcomes, either annually or in multi-year cycles
This plan can and should be revised by faculty as often as needed. With experience, faculty sometimes recognize
that certain methods they had planned to use to collect data on student outcomes were not feasible or did not
generate the information they needed to make improvements to the program.
If you make significant changes to your assessment plan, send a copy of the revised version to the dean’s office.
SUMMARY: RECOMMENDATIONS FOR DEVELOPING AN ASSESSMENT PLAN
• Develop (or pull from existing catalog or website materials) a brief mission statement for the program; a
paragraph is sufficient.
• Choose 3-6 of the most important learning outcomes for each degree program.
• Design at least one direct assessment of student learning for each learning outcome. Multiple methods
are most informative, but keep time constraints in mind. The plan does not have to include complete
methodological details on each assessment, but should include a basic description of where in the
curriculum the outcome will be assessed (e.g., capstone course, internship, etc.), what work products will
be assessed (papers, presentations, performances, etc.), how the work will be evaluated (e.g., with a
rubric, by a team of external reviewers, etc.), and any known criteria that would define success or signal
the need for action. The first year of assessing outcomes might be used to gather baseline data that would
then be used to set criteria for later administrations or chart improvement.
• Indirect assessment methods can be included in your plan along with direct methods. The Office of
Institutional Research and Assessment can help you in taking advantage of existing survey data collected
on your majors, or help you design and administer other types of data collection such as alumni surveys,
student interviews, etc.
• Determine roughly when you plan to conduct each of the assessments and over what period of time. The
goal should be to assess at least one of your major learning outcomes per year, so that you are collecting
and reflecting upon manageable portions of feedback about program quality on an ongoing basis.
Summary Steps: (1) Mission Statement; (2) Define Learning Outcomes; (3) Design Methods; (4) Determine Schedule.
For More Information or Assistance
For questions or assistance with completing assessment plans and reports, please contact:
Dr. Lynn Williford, Assistant Provost for Institutional Research & Assessment
[email protected], 919-962-1339
UNIVERSITY POLICY
Introduction
PURPOSE
Consistent with its mission statement, UNC-Chapel Hill embraces “…an unwavering
commitment to excellence” and as such is committed to continuous improvement informed by
assessment of institutional effectiveness across all areas and levels. In addition to institution-
level planning and evaluation, assessment of the outcomes of academic programs and non-
instructional units is required by the University’s regional accreditor, the Southern Association of
Colleges and Schools Commission on Colleges (SACSCOC).
The purpose of this policy is to articulate requirements for assessment of outcomes and use of
results for improvement purposes in academic and non-academic units and to specify the roles
and responsibilities for implementing and overseeing assessment processes to ensure
compliance with this policy and with the requirements of SACSCOC.
This policy replaces “UNC-Chapel Hill Guidelines for Student Learning Outcomes Assessment”
approved by the Executive Vice Chancellor and Provost in 2004 and last revised and approved
in 2007, and codifies existing practices for assessment in non-instructional units.
SCOPE OF APPLICABILITY
This policy applies to the following types of programs and units at UNC-Chapel Hill:
Policy
Policy Requirements
The University requires academic programs and non-academic units defined above to prepare
and submit the following to the Executive Vice Chancellor and Provost, through their respective
deans or vice chancellors:
• An assessment plan that contains a mission statement; expected outcomes (that include
student learning outcomes for educational programs); appropriate evaluation methods or
metrics to assess these outcomes; and performance targets.
• An annual assessment report that describes the outcomes measured during the past year,
the findings from those assessments, and how the results were used to make decisions and
improvements.
These assessment plans and annual reports are required in addition to any other evaluation-
related reporting obligations, such as those for Program Review, specialized accreditation,
administrator reviews, five-year reviews of centers and institutes, and sponsored research.
Each plan and report must meet standards that address required elements and appropriate
assessment methodology developed from best practices for assessment of institutional
effectiveness in higher education. These standards, as well as procedures for reporting,
submission timelines, and review and approval processes, are described in the “Standards and
Procedures Related to the Policy on Assessment of Academic and Non-Academic Units”
document available on the website of the Office of Institutional Research and Assessment
(http://oira.unc.edu/institutional-effectiveness/).
The Executive Vice Chancellor and Provost has overall responsibility and oversight for
outcomes assessment processes for academic programs and non-instructional units.
Deans and vice chancellors are responsible for ensuring that all of the academic programs and
non-instructional units within their respective organizations have assessment plans, carry out
assessments that meet prescribed standards, and submit annual reports that document
improvements made based on assessment results.
Each dean and vice chancellor will appoint one or more Assessment Coordinators to manage
internal assessment processes and to serve as liaisons to the Office of Institutional Research and
Assessment. Coordinators of academic program assessment must be full-time faculty members.
Assessment Coordinators will be responsible for collecting and reviewing assessment plans and
reports, providing feedback to faculty and staff to improve the quality of their assessments, and
providing the plans and reports to the dean or vice chancellor for approval prior to submission to
the Executive Vice Chancellor and Provost. Assessment Coordinators must participate in
periodic training and professional development activities sponsored by the Office of the
Executive Vice Chancellor.
The Office of Institutional Research and Assessment will offer training and consultation to
Assessment Coordinators and program faculty about effective assessment practices. They will
publish the annual calendar of due dates for plans and reports and provide templates and other
assessment resources through their website. In addition to maintaining a central repository for
assessment plans and reports, they will also review these documents for compliance with
standards, provide feedback to Assessment Coordinators on necessary changes, and report to
the Executive Vice Chancellor and Provost concerning policy compliance and opportunities for
process improvement.
Definitions
Non-Instructional Unit: An organization with a mission that does not include offering credit-
bearing courses that lead to a degree or certificate but instead provides services and
operational support in fulfillment of the University’s mission.
Outcomes: Statements that describe what should occur as a result of a program or unit’s
work. Outcomes are often synonymous with goals and objectives; however, they are typically
focused on the quality and impact of the unit’s work as opposed to completion of tasks.
Student Learning Outcomes: Statements that describe what students should know, think, and
be able to do upon completion of an academic program.
Assessment Plan: A document that articulates the program or unit’s mission, the intended
outcomes of its work, methods to be used to measure these outcomes, and targets for
determining success.
Assessment Report: An annual report from a program or unit that describes the outcomes
measured during the past year, the findings from those assessments, and how the results were
used to make decisions and improvements.
Related Requirements
Contact Information
POLICY CONTACTS
Important Dates
• Effective Date: March 1, 2017. Approved by the Executive Vice Chancellor and Provost.
• Replaces “UNC-Chapel Hill Guidelines for Student Learning Outcomes Assessment”
approved by the Executive Vice Chancellor and Provost in 2004 and last revised and
approved in 2007.
Approved by:
*Source: Anderson, L. W., & Krathwohl, D. R. (2001). A Taxonomy for Learning, Teaching, and Assessing
(Abridged edition). Boston, MA: Allyn and Bacon.
Note: Avoid vague terms such as “become familiar with,” “learn about,” and “appreciate,” which are
difficult to measure.
Helpful References
Allen, Mary J., Assessing Academic Programs in Higher Education, Anker Publishing Company, Inc., 2004.
Anderson, Lorin W. and Krathwohl, David R. (Eds.), with Airasian, Peter W., Cruikshank, Kathleen A., Mayer, Richard
E., Pintrich, Paul R., Raths, James, and Wittrock, Merlin C., A Taxonomy for Learning, Teaching, and Assessing: A
Revision of Bloom’s Taxonomy of Educational Objectives, Addison Wesley Longman, Inc., 2001.
Bloom, Benjamin S. (Ed.), Engelhart, Max D., Furst, Edward J., Hill, Walker H., and Krathwohl, David R., Taxonomy
of Educational Objectives, The Classification of Educational Goals, Handbook I: Cognitive Domain, David McKay
Company, Inc., New York, 1956.
Eder, Douglas J., “General Education Assessment Within the Disciplines,” The Journal of General Education, Vol. 53,
No. 2, pp. 135-157, 2004.
Hernon, Peter and Dugan, Robert E. (Eds.), Outcomes Assessment in Higher Education: Views and Perspectives,
Libraries Unlimited (a member of the Greenwood Publishing Group, Inc.), 2004.
Huba, Mary E. and Freed, Jann E., Learner-Centered Assessment on College Campuses: Shifting the Focus from
Teaching to Learning, Allyn & Bacon, 2000.
Nichols, James O. and Nichols, Karen W., A Road Map for Improvement of Student Learning and Support Services
Through Assessment, Agathon Press, 2005.
Pagano, Neil, “Defining Outcomes for Programs and Courses,” June 2005 Higher Learning Commission Workshop,
Making a Difference in Student Learning: Assessment as a Core Strategy, available at
http://www.ncahigherlearningcommission.org/download/Pagano_DefiningOutcomes.pdf
Palomba, Catherine A. and Banta, Trudy W., Assessment Essentials: Planning, Implementing, and Improving
Assessment in Higher Education, Jossey-Bass, John Wiley & Sons, Inc., 1999.
Pellegrino, James W., Chudowsky, Naomi, and Glaser, Robert (Eds.), Knowing What Students Know: The Science
and Design of Educational Assessment, Committee on the Foundations of Assessment, National Research Council,
National Academy Press, 2001.
Stevens, Dannelle D. and Levi, Antonia J., Introduction to Rubrics: An Assessment Tool to Save Grading Time,
Convey Effective Feedback, and Promote Student Learning, Stylus Publishing, 2005.
Suskie, Linda, Assessing Student Learning: A Common Sense Guide, Anker Publishing Company, 2004.
Walvoord, Barbara E., Assessment Clear and Simple, John Wiley & Sons, 2004.