
Program Evaluation Plan

Week 7 Course Project: Program Evaluation Plan


Lauren Wagner
Walden University

Dr. Burke
Program Evaluation (EDUC - 6130 - 2)
August 14, 2015


Program Analysis
Math Madness was initiated in the fall of the 2013-2014 school year in grades
five and six at Bellwood-Antis Middle School in Bellwood, Pennsylvania. Dr.
Donald Wagner, the principal, came up with the idea after a colleague and
professor at Penn State University suggested that students love to compete (D.
Wagner, personal communication, June 30, 2015). The goal of the program was to
improve Math fundamentals, mainly multiplication (D. Wagner, personal
communication, June 30, 2015). The stakeholders involved in or affected by the
program include Dr. Wagner, fifth and sixth grade Math teachers, fifth and sixth
grade students, the students' parents, upper grade level teachers, the
community, and the Superintendent.
Dr. Wagner created and initiated the program. Fifth and sixth grade teachers
facilitate the program, and fifth and sixth grade students participate in it.
Students' parents may or may not be involved, depending on whether they help
their children practice their multiplication facts at home. If students strive
to memorize their multiplication facts for Math Madness, upper grade level
teachers could see an improvement in the Math skills or test scores of incoming
students. If people in the community are aware of and pleased with Math Madness,
it could have a positive impact on Bellwood-Antis School District's reputation.
Because the Superintendent wants students in Bellwood-Antis School District to
be learning and having fun, he could share Math Madness with parents,
colleagues, and community members as an example of how students are having fun
while learning. The same is true for Dr. Wagner, and the program is a direct
reflection of Dr. Wagner's professional ability and initiative. The interests of
these stakeholders therefore connect and overlap. It is a domino effect:
students learn while having fun, which leads to improved test scores, which in
turn builds students' confidence, helps students and teachers as they move into
upper grades, and pleases parents, administrators, teachers, and the community.
Why was Math Madness started? In the past few years, fifth and sixth grade
teachers at BAMS have noticed a decline in students' memorization of
multiplication facts. This lack of knowledge is detrimental to students'
learning of more advanced Math concepts. Multiple teachers voiced their concern
at grade level team meetings, which Dr. Wagner holds once a week. Dr. Wagner and
the middle school teachers brainstormed different ideas to try to improve
students' multiplication skills while making it fun for the students. This led
to the development and implementation of Math Madness.
So what is Math Madness? Here is a brief overview. Students in each Math class
(four classes in grade 5 and four classes in grade 6) are placed into teams for
each semester. (It is at the teachers' discretion how students are grouped.) The
teacher randomly gives students practice sheets or activities (word problems,
fast facts, Challenge 24, etc.), and students have a certain amount of time to
complete them. For every correct problem, students earn points for their team.
At the end of the semester, the team from each Math class with the most points
earns a pizza party lunch and movie afternoon as a reward.
Here are some contextual factors that impact the program. There are
approximately 100 students in sixth grade and 100 in fifth grade. Math classes
are not all the same size because students are ability grouped; classes range
from 10 to more than 30 students. The majority of students at BAMS are
Caucasian, with approximately 2% African American, 1% Hispanic, and 1% Asian (D.
Wagner, personal communication, July 7, 2015). There is a wide range in the
socioeconomic status of BAMS students, with 26% receiving free or reduced
lunches (D. Wagner, personal communication, July 7, 2015). There is no funding
for this program, and teachers may use whatever resources they see fit. It is a
small school district with a good reputation throughout the Bellwood community
as well as surrounding communities.
I believe the potential ethical challenges involved in an evaluation of this
program are limited. Because I am a teacher at BAMS and help facilitate Math
Madness, I must be aware of and reflect on my biases in order to remain
impartial and honest throughout the evaluation. If teachers' ratings were
dependent on how well their students scored in Math Madness (as with the new
teacher evaluation system tied to state standardized tests), there could be
potential ethical challenges such as cheating or skewing results, but such is
not the case. The students' Math Madness scores are not graded; they are used to
motivate students to learn their times tables and earn the reward. In my
opinion, there are no political factors or interpersonal dynamics that could
affect the program evaluation.
Overall, Math Madness is perceived as a very positive program. I conducted
email interviews with the three other sixth grade teachers on my team at BAMS to
find out their perceptions and opinions of Math Madness. Here are the program's
strengths as noted by the teachers: students work together, it gives the teacher
a chance to review Math concepts, it can be used as formative assessment,
students enjoy it, growth can be measured within one's own classroom, students
support one another, students like the competition aspect of it, and, because of
the reward, it doesn't seem like work to the students (D. Davis, T. Trexler, &
S. Conlon, personal communication, July 1, 2015). Here are the program's
weaknesses as noted by the teachers: each teacher does Math Madness differently,
and the reward is scheduled weeks after the completion of the program (D. Davis,
T. Trexler, & S. Conlon, personal communication, July 1, 2015).
References
D. Davis. Personal communication. June 30 and July 1, 2015.
D. Wagner. Personal communication. July 7, 2015.
S. Conlon. Personal communication. July 1, 2015.
T. Trexler. Personal communication. July 1, 2015.


Evaluation Model Table

Expertise and Consumer-Oriented Approaches
Advantages:
- emphasize the central role of expert judgment, experience, and human wisdom in the evaluation process
- audience is not directly known to the evaluator
- explicit criteria valued by the consumer are used in judging the product
Disadvantages:
- presumed expertise of experts
- evaluators make judgments that reflect little more than personal biases

Program-Oriented Evaluation Approaches
Advantages:
- easily understood, easy to follow and implement; produces information that program directors agree is relevant to their mission
- program is held accountable for what its designers said it was going to accomplish
- requires dialogue and achieving an understanding and clear articulation of the reasoning behind the program
Disadvantages:
- focus on objectives can cause other important outcomes to be ignored
- neglects program description and gaining an understanding of program context
- desire to test theory as a whole may prompt evaluators to neglect values or information needs of stakeholders

Decision-Oriented Evaluation Approaches
Advantages:
- provides information that helps people make decisions
- evaluator works closely with stakeholders to identify information needs and conduct the study
- the personal factor
Disadvantages:
- evaluator can become the "hired gun"
- social equity and equality not addressed
- programs that lack decisive leadership are not likely to benefit

Participant-Oriented Evaluation Approaches
Advantages:
- valid data result from involving stakeholders in thinking about the meaning of various constructs to be measured, conceptualizing them, and considering how to measure them or collect information
- stakeholders gain trust in the evaluation, begin to understand it, and consider how they might use it
Disadvantages:
- feasibility of implementing a successful participative study
- credibility of results to those who do not participate in the process
- competence of stakeholders to perform certain tasks is questionable
- adds a political element inasmuch as it fosters and facilitates the activism of recipients of program services

Explain your choice of model for your program evaluation: Participant-oriented. Throughout this
course, I have often asked for my principal's input, as well as that of my fellow sixth grade team of
teachers. In my opinion, the participant-oriented approach fits the mold for this particular evaluation
based on the advantages listed above in the table. It is necessary to have an understanding of the
program context (which the program-oriented approach neglects), and the purpose of this evaluation is
not decision-oriented. The expertise and consumer-oriented approaches would not work because I am
not relying on expertise and the audience certainly is known.


Evaluative Criteria
Here are the five evaluation questions that determine specifically what this
program evaluation is going to answer:
1. Has Math Madness improved students' automaticity with multiplication facts?
2. Has Math Madness improved students' attitude toward Math?
3. Has Math Madness improved students' scores on the PSSA?
4. Does Math Madness lead to improved automaticity in relation to students who do not participate (control group)?
5. Does Math Madness lead to improved attitude in relation to students who do not participate (control group)? (D. Wagner, personal communication, July 29, 2015)

In reference to The Program Evaluation Standards, I believe the following standards are
reflected in the evaluation questions: U2 - Attention to Stakeholders, U5 - Relevant
Information, F2 - Practical Procedures, and P4 - Clarity and Fairness (Fitzpatrick et al., 2010).
In addition, scores from previous PSSA tests will be established as benchmarks to
measure success.
In short, the purpose of this evaluation is to find out if the program is having the
desired effect (Do students who participate in Math Madness improve basic math skills
and their attitudes toward Math?). Let me explain how these evaluation questions will
impact the program evaluation. Students in the Math Madness program may improve
math skills and attitude; but if they don't improve to a greater degree than those who do
not participate (students in the general curriculum), the program may not be worth the
effort. If they do show greater growth and the program appears to have merit, then
additional studies may be conducted to identify strengths and weaknesses within the
program (Do students work better on a team or individually? Does the competition
aspect lead to greater gains? Do improved attitudes correlate with improved scores,
i.e., the better the attitude, the higher the score?). Based on the results, we can use
this information to improve efficiency and effectiveness.
This evaluation is limited as to what exactly is being evaluated. For example,
teacher effectiveness in facilitating Math Madness and comparisons of students'
scores across different classes are not being evaluated. It was decided (through
dialogue with stakeholders) that the purpose of the evaluation is to measure outcomes
(automaticity and attitude change), so I will not be assessing needs and challenges or
evaluating processes. As Fitzpatrick et al. (2010) state, "The evaluation questions
provide the direction and foundation for the evaluation" (p. 314). Therefore, the
examples I just provided would take away from the focus of the evaluation based on
the five questions listed at the beginning of this essay.
"Evaluators' primary responsibility is to work with stakeholders and to use their
own knowledge of and expertise in research and evaluation to develop questions that
are meaningful, important, feasible to answer within the given resources, and likely to
provide useful information to primary intended users and stakeholders" (Fitzpatrick et
al., 2010, p. 314). I think the stakeholders who should be involved in determining
evaluation questions are the principal and the fifth and sixth grade teachers. Dr.
Wagner should contribute because he was the one who created and started the
program; he may also be aware of other studies that could be used for comparison.
The teachers facilitate the program and can provide valuable input based on their
experience with it in the classroom. I do not think involving the students, parents,
community, or Superintendent is necessary for this step.


References
D. Wagner. Personal communication. July 29, 2015.
Fitzpatrick, J., Sanders, J., & Worthen, B. (2010). Program evaluation: Alternative
approaches and practical guidelines (4th ed.). Boston, MA: Pearson.


Data Collection Design and Sampling Strategy


Here are the five evaluation questions that determine specifically what this program
evaluation is going to answer:
1. Has Math Madness improved students' automaticity with multiplication facts?
2. Has Math Madness improved students' attitude toward Math?
3. Has Math Madness improved students' scores on the PSSA?
4. Does Math Madness lead to improved automaticity in relation to students who do not participate (control group)?
5. Does Math Madness lead to improved attitude in relation to students who do not participate (control group)? (D. Wagner, personal communication, July 29, 2015)

I would use a mixed methods approach for my data collection, including both qualitative
and quantitative design.
I could measure success and gather data with pre- and post-tests. Since the
program only involves approximately 100 students per grade, I could test them all.
Because Math Madness is intended to improve automaticity, I would give students a
timed pre-test to determine how many multiplication problems they could complete
correctly in a set amount of time, then give the same test at the halfway point and at
the end of the program to measure improvement. This would address Evaluation
Questions #1 and #4.
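To make the pre-/post-test comparison concrete, here is a small sketch in Python of how the gains of participants could be compared against a non-participating group. All scores, group sizes, and names below are made up for illustration; they are not actual BAMS data, and the statistical test (Welch's t) is one reasonable choice, not a prescribed part of the plan.

```python
from statistics import mean, stdev
from math import sqrt

def gain_scores(pre, post):
    """Per-student improvement: post-test minus pre-test (problems correct)."""
    return [b - a for a, b in zip(pre, post)]

def welch_t(group_a, group_b):
    """Welch's t statistic comparing the mean gains of two independent groups."""
    na, nb = len(group_a), len(group_b)
    va, vb = stdev(group_a) ** 2, stdev(group_b) ** 2
    return (mean(group_a) - mean(group_b)) / sqrt(va / na + vb / nb)

# Hypothetical timed-test scores (problems correct in the set time)
madness_pre  = [22, 18, 25, 20, 17, 24]
madness_post = [30, 26, 33, 27, 25, 31]
control_pre  = [21, 19, 24, 20, 18, 23]
control_post = [24, 21, 27, 22, 20, 25]

madness_gain = gain_scores(madness_pre, madness_post)
control_gain = gain_scores(control_pre, control_post)
print(mean(madness_gain), mean(control_gain))      # average improvement per group
print(round(welch_t(madness_gain, control_gain), 2))  # how far apart the groups are
```

A positive t statistic well above zero would suggest the Math Madness group improved more than the control group; with the real data, a p-value and effect size would also be reported.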
Another way I could gather data is by comparing PSSA results on those skills,
before and after the program, for students who participated in Math Madness and
students who did not. I could use scores from previous PSSA tests to establish
benchmarks for measuring success. I could randomly select fifth and sixth grade
students as my sample. I would consider using about 30 students from fifth grade who
participated in Math Madness and 30 fifth grade students who did not participate, and I
would do the same for sixth grade. This would help answer Evaluation Question #3.
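The random selection of 30 students from each group could be done as in the sketch below. The roster sizes and ID scheme are invented for illustration; a fixed seed is used only so the draw can be reproduced and audited.

```python
import random

def draw_sample(roster, size, seed=2015):
    """Randomly select `size` students from a grade-level roster."""
    rng = random.Random(seed)  # fixed seed: the same draw can be re-run and checked
    return rng.sample(roster, size)

# Hypothetical rosters of student IDs (~100 students per grade, split by participation)
fifth_participants    = [f"5P{i:03d}" for i in range(52)]
fifth_nonparticipants = [f"5N{i:03d}" for i in range(48)]

sample_participants    = draw_sample(fifth_participants, 30)
sample_nonparticipants = draw_sample(fifth_nonparticipants, 30)
print(len(sample_participants), len(sample_nonparticipants))
```

The same two calls would be repeated with the sixth grade rosters to complete the sample.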


I could also conduct interviews with students and teachers. Qualitative interviews
are useful for learning the perspectives, attitudes, behaviors, and experiences of
others. Interviews would allow for clarification and probing, giving me insight into the
program and into students' and teachers' opinions of it. Interviews would help answer
Evaluation Question #2.
Limitations would include the willingness of students to work on their skills outside
the school day, a lack of funds to purchase a pre-packaged program to deliver to
students, and the limited instructional time available to spend on basic math skills (due
to the need to address all state standards). Other possible limitations would be student
attendance and student apathy.
If I were to conduct a survey, the students would be my sources. Again, because
there are about 100 students in each grade, I could survey all of them. The survey
could be completed during homeroom or during Math class, time permitting. I would
not have the students put their names on the surveys, but I may use a coding system
to determine which Math class students are in. The survey would address students'
attitudes toward Math Madness and Math in general. This would help answer
Evaluation Questions #2 and #5. The item types in the survey would be Likert-scale
items because the questions would be about attitudes. In order to receive honest
feedback, the surveys would need to be clear and concise to hold the students'
attention and interest. Questions would not be lengthy, and I would not include any
open-ended items.
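As a rough sketch of how the Likert-scale responses could be turned into an attitude score per (anonymous) student, consider the following. The item wordings and responses are invented examples, and the reverse-scoring of negatively worded items is a standard survey practice assumed here rather than a stated feature of the plan.

```python
# Hypothetical 5-point Likert scale for attitude toward Math / Math Madness
SCALE = {"Strongly disagree": 1, "Disagree": 2, "Neutral": 3,
         "Agree": 4, "Strongly agree": 5}

def attitude_score(responses, reverse_items=()):
    """Average one student's survey into a single 1-5 attitude score.
    Items listed in `reverse_items` are negatively worded and flipped."""
    points = []
    for item, answer in responses.items():
        value = SCALE[answer]
        if item in reverse_items:
            value = 6 - value  # flip negatively worded items
        points.append(value)
    return sum(points) / len(points)

# One hypothetical (anonymous, class-coded) student response
survey = {"I enjoy Math class": "Agree",
          "Math Madness makes practice fun": "Strongly agree",
          "I dread multiplication drills": "Disagree"}
print(attitude_score(survey, reverse_items={"I dread multiplication drills"}))
```

Averaging these scores by Math class (via the coding system) would give the group-level attitude comparison needed for Questions #2 and #5.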
I could also conduct a survey of the fifth and sixth grade teachers who facilitate
Math Madness. This would be a total of eight teachers, so all of them could participate
in the survey. I would have the teachers put their names on the surveys as long as
they felt comfortable doing so. Teachers observe students every day in the classroom
and observe them as they participate in Math Madness. The survey would focus on
teacher observations of students' automaticity and attitude. This would address
Evaluation Questions #1, #2, #4, and #5. The item types in the survey would be
multiple choice because the questions would be about teachers' opinions on students'
attitudes and behaviors. I could include a few open-ended questions on this survey
because the teachers may have more comments to share that could be useful to the
collection of data.
References
D. Wagner. Personal communication. July 29, 2015.
Fitzpatrick, J., Sanders, J., & Worthen, B. (2010). Program evaluation: Alternative
approaches and practical guidelines (4th ed.). Boston, MA: Pearson.


Reporting Strategy

Dr. Wagner
Reporting Strategy: One-on-one meeting with an oral report (use of visual aids, dialogue, and interaction) and a written report
Implications: Interested in details of program operations, outputs, and outcomes
Stakeholder Involvement: Opportunities for interaction, questions/answers, and comments

Fifth and Sixth Grade Students
Reporting Strategy: Personal discussion
Implications: Interested in outcomes and impact
Stakeholder Involvement: Opportunities for interaction, questions/answers, and comments

Fifth and Sixth Grade Teachers
Reporting Strategy: Oral report at a team meeting (grade levels have team meetings with Dr. Wagner once a week)
Implications: Interested in outcomes and impact
Stakeholder Involvement: Opportunities for interaction, questions/answers, and comments

Students' Parents
Reporting Strategy: School newsletter
Implications: Interested in a brief program description, outcomes, and impact
Stakeholder Involvement: Discuss at parent/teacher conferences

Community
Reporting Strategy: School newsletter and oral report at a school board meeting
Implications: Interested in a brief program description, outcomes, and impact
Stakeholder Involvement: Opportunities for interaction, questions/answers, and comments

Superintendent
Reporting Strategy: Oral report at a school board meeting
Implications: Interested in outcomes and impact
Stakeholder Involvement: Opportunities for interaction, questions/answers, and comments

Values, Standards, and Criteria:
Values: An audience analysis is completed for each stakeholder group so the information of interest is
conveyed clearly and in a way that establishes its credibility.
Program Evaluation Standards: U2 - Attention to Stakeholders, U5 - Relevant Information, F2 - Practical
Procedures, P4 - Clarity and Fairness.
Criteria: The success of the program will be measured using the evaluation questions.

Potential ethical issues: Evaluator and stakeholder biases must be mitigated. The program evaluation
must be objective, credible, and fair. The report will be reviewed by a neutral third party to ensure
objectivity and validity.


References
Fitzpatrick, J., Sanders, J., & Worthen, B. (2010). Program evaluation: Alternative
approaches and practical guidelines (4th ed.). Boston, MA: Pearson.
