
McLemore, A. (2009). The CIPP Model. Retrieved 10/9/2010 from www.americanchronicle.com/articles/89917.

The CIPP Model


Amy McLemore
February 04, 2009
In the mid-1960s, Daniel L. Stufflebeam recognized the shortcomings of available
evaluation approaches. Working to expand and systematize thinking about
administrative studies and educational decision making, he and others built on concepts
only hinted at in the much earlier work of educational leaders such as Henry Barnard,
Horace Mann, William Torrey Harris, and Carleton Washburne (Fitzpatrick, Sanders, &
Worthen, 2004). Stufflebeam and his colleagues developed the CIPP Model in the
1960s. The CIPP Model is a comprehensive framework for guiding formative and
summative evaluations of projects, programs, personnel, products, institutions, and
systems (Stufflebeam, 1968). The model is configured for use in internal evaluations
conducted by an organization's evaluators; self-evaluations conducted by project
teams or individual service providers; and contracted or mandated external
evaluations. According to Stufflebeam (1999), the model has been employed
throughout the United States and around the world in short-term and long-term
investigations both small and large.

The CIPP framework was developed as a means of linking evaluation with
programmed decision-making. It aims to provide an analytic and rational basis for
programmed decision-making, based on a cycle of planning, structuring,
implementing, and reviewing and revising decisions, each examined through a
different aspect of evaluation: context, input, process, and product evaluation.
Stufflebeam (1999) viewed evaluation in terms of the types of decisions it served and
categorized it according to its functional role within a system of planned social
change. The CIPP model is an attempt to make evaluation directly relevant to the
needs of decision-making during the different phases and activities of a program.

In the CIPP approach, for an evaluation to be useful, it must address the questions
that key decision-makers are asking, and it must address those questions in ways
and in language that decision-makers will easily understand (Cronbach, 1982). The
approach aims to involve the decision-makers in the evaluation planning process as a
way of increasing the likelihood that the evaluation findings will be relevant and will be
used. Stufflebeam thought that evaluation should be a process of delineating,
obtaining, and providing useful information to decision-makers, with the overall goal
of program or project improvement (Cronbach, 1982).

There are many different definitions of evaluation, but one in particular reflects the
CIPP approach: program evaluation is the systematic collection of information about the
activities, characteristics, and outcomes of programs, for use by specific people, to
reduce uncertainties, improve effectiveness, and make decisions with regard to what
those programs are doing and affecting (Patton, 2004). Stufflebeam sees evaluation's
purposes as establishing and providing useful information for judging decision
alternatives, assisting an audience to judge and improve the worth of some
educational program or object, and assisting the improvement of policies and
programs (Stufflebeam, 1983).
Based on Stufflebeam's theory, there are four aspects of CIPP evaluation that assist
decision-making. Context evaluation determines what needs a program should
address and what programs already exist, which helps in defining objectives for the
program. Input evaluation determines what resources are available, what alternative
strategies for the program should be considered, and what plan seems to have the best
potential for meeting needs, which facilitates the design of program procedures.
Process evaluation assesses the implementation of plans, to help staff carry out
activities and later to help the broad group of users judge program performance and
interpret outcomes. Product evaluation identifies and assesses outcomes, intended
and unintended, short term and long term, to help staff keep an enterprise focused on
achieving important outcomes and ultimately to help the broader group of users gauge
the effort's success in meeting targeted needs (Stufflebeam, 1999).

One of the problems with evaluation in general is getting its findings used. Through its
focus on decision-making, CIPP aims to ensure that its findings are used by the
decision-makers in a project. CIPP also takes a holistic approach to evaluation, aiming
to paint a broad picture of a project, its context, and the processes at work. It has the
potential to act in a formative as well as a summative way, helping to shape
improvements while the project is in process, as well as providing a summative
or final evaluation overall. The formative aspect should also, in theory, be able to
provide a well-established archive of data for a final or summative evaluation of the
whole project (Stufflebeam, 2003).

Critics of CIPP have said that it holds an idealistic notion of what the process should
be rather than reflecting its actuality, and that it is too top-down or managerial in
approach, depending on an ideal of rational management rather than recognizing its
messy reality. In practice, the informative relationship between evaluation and
decision-making has proved difficult to achieve, and the model perhaps does not
sufficiently take into account the politics of decision-making within and between
organizations (Stufflebeam, 2003).

As a way of overcoming top-down approaches to evaluation, stakeholder
and participative approaches have developed (for example, the approaches of Robert
Stake and of Lincoln and Guba). These argue that all stakeholders have a right to be
consulted about concerns and issues and to receive reports that respond to their
information needs; in practice, however, it can be difficult to serve or prioritize the
needs of a wide range of stakeholders (Worthen & Sanders, 1987). In stakeholder or
participative approaches, evaluation is seen as a service to all involved, in contrast to
the administrative approach (such as CIPP), where the focus is on rational
management and the linkage is between researchers and managers or decision-
makers. In the stakeholder approach, decisions emerge through a process of
accommodation (or democracy based on pluralism and the diffusion of power), so the
shift in this type of approach is from decision-maker to audience. Cronbach (1982)
argues that the evaluator's mission is to facilitate a democratic, pluralist process by
enlightening all the participants. However, some of the commissioning agencies who
receive reports from participative evaluations say they do not always find them
helpful in decision-making, because of the nature of the reports produced, their lack of
clear indications for decision-making, or their conflicting conclusions.

The CIPP Model treats evaluation as an essential concomitant of improvement and
accountability, within a framework of appropriate values and a quest for clear,
unambiguous answers. It responds to the reality that evaluations of innovative,
evolving efforts typically cannot employ controlled, randomized experiments or work
from published evaluation instruments, both of which yield far too little information
anyway. It cannot be overemphasized, however, that the model is and must be subject
to continuing assessment and further development.

References

Cronbach, L. J. (1982). Designing Evaluations of Educational and Social Programs. San
Francisco: Jossey-Bass.

Fitzpatrick, J., Sanders, J., & Worthen, B. (2004). Program Evaluation: Alternative
Approaches and Practical Guidelines (3rd ed.). Boston: Allyn & Bacon.

Patton, M. Q. (2004). On evaluation use: Evaluative thinking and process use. The
Evaluation Exchange, IX(4).

Stufflebeam, D. L. (1968). Evaluation as enlightenment for decision-making.
Columbus, OH: Evaluation Center, Ohio State University.

Stufflebeam, D. L. (1983). The CIPP Model for program evaluation. In G. F. Madaus,
M. Scriven, & D. L. Stufflebeam (Eds.), Evaluation Models: Viewpoints on
Educational and Human Services Evaluation. Boston: Kluwer-Nijhoff.

Stufflebeam, D. L. (1999). Foundational models for 21st century program evaluation.

Stufflebeam, D. L. (2003). The CIPP model for evaluation: An update, a review of the
model's development, a checklist to guide implementation. Paper presented at the
Oregon Program Evaluators Network Conference, Portland, OR.
http://www.wmich.edu/evalctr/pubs/CIPP-ModelOregon10-03.pdf

Worthen, B. R., & Sanders, J. R. (1987). Educational Evaluation: Alternative
Approaches and Practical Guidelines. New York: Longman.
