Unit 9
LEADERSHIP AND
MANAGEMENT
Evaluating
Management System
Introduction
The major purpose of all schools and other
educational institutions should be to contribute
to the development of a dynamic, self-renewing
society by assuming a major role in preparing
citizens, and especially children and youth, to
participate in and contribute effectively and
constructively to the orderly development of
that society.
1.1 Need And Importance Of
Evaluation In Management
Evaluation is often overlooked in the day-to-day
affairs of the school system. In reality, the
ongoing evaluation of programs, personnel, and
activities may be one of the more important
aspects of the quality of effort being extended
by the organization.
Evaluation Process
The following questions should be examined:
Is the target population being served?
Is the program producing the desired results?
Is the program cost-effective?
Is the program compatible with other programs?
Does the program support the mission of the
school?
1.2 System of Evaluation
(CIPP)
According to Stufflebeam’s theory, the four types
of evaluation serve general decision-making categories:
Context evaluation
Input evaluation
Process evaluation
Product evaluation
Context Evaluation
Context evaluation is the most important to a
practicing school administrator. In general, its
importance focuses on three factors, which
often affect the success or failure of
decisions related to school programmes. First,
context evaluation serves short- and long-range
planning decisions.
Planning in many school districts becomes an
academic exercise of exchanging ideas among
colleagues, which tends toward reinforcement of
the key decision-maker’s position on any one
of many issues.
Secondly, context evaluation is ongoing; it
continues throughout the life of an
educational programme or service.
Thirdly, context evaluation continues to
provide a reference point, or baseline of
information, designed to examine the initial
programme goals and objectives.
Input Evaluation
As one moves from context evaluation, the focus
shifts from planning decisions to the allocation
of resources needed to meet programme goals.
Such careful evaluation will provide important
data on “what is” in terms of existing programmes
and activities. It also provides a good analysis of
the efficacy of the existing programmes. If, for
example, a school’s input analysis shows a great
emphasis on highly academic, advanced
instructional programmes while the context
evaluation identifies a great need for a basic-skills
emphasis, the mismatch between existing resources
and identified needs becomes evident.
Process Evaluation
Once a course of action has been approved and
implementation has begun, process evaluation is
necessary to provide periodic feedback to persons
responsible for implementing plans and
procedures. Process evaluation has three main
objectives: the first is to detect or predict defects
in the procedural design or its implementation
stage, the second is to provide information for
programmed decisions, and the third is to maintain
a record of the procedure as it occurs
(Stufflebeam et al., 1971, 229).
Product Evaluation
The fourth type of evaluation is product
evaluation. Its purpose is to measure and
interpret attainments not only at the end of a
programme cycle, but as often as necessary
during the project term.
While product evaluation gives an understanding
of “what is”, policy setters often use product
expectations to establish goals and objectives for
particular projects and programmes. The
establishment of a product objective or
expectation by a board of trustees or board of
education certainly adds a dimension to the
reality of context, input, and process evaluation.
Summary of CIPP Model
To summarize, the CIPP model for decision-
making provides the best utilization of the
data and the most flexible parameters for
adjustment while maintaining the
integrity of the evaluation process. CIPP
also allows decision alternatives to be
explored and the decision-maker to
project the cost-effectiveness of a particular
project. The use of the CIPP model can
simplify the planning process while
strengthening the result.
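
A decision-maker can picture the four stages above as a simple record that is filled in as evidence accumulates. The following Python sketch is purely illustrative: the class and field names are our own invention, not Stufflebeam’s, and the sample data are made up. It shows one way to check which CIPP stages still lack findings before a decision is taken.

# Illustrative sketch only: models the four CIPP stages as a simple
# record so a pending decision can be checked against each stage.
# All names here are invented for illustration, not a standard API.
from dataclasses import dataclass, field

@dataclass
class CIPPEvaluation:
    """Findings gathered for one programme, keyed by CIPP stage."""
    context: list[str] = field(default_factory=list)   # needs, goals, baseline data
    input: list[str] = field(default_factory=list)     # resources and alternatives
    process: list[str] = field(default_factory=list)   # implementation feedback
    product: list[str] = field(default_factory=list)   # attainments and outcomes

    def missing_stages(self) -> list[str]:
        """Return the stages for which no findings have been recorded yet."""
        return [name for name in ("context", "input", "process", "product")
                if not getattr(self, name)]

# Usage: a reading programme evaluated, so far, at only two stages.
evaluation = CIPPEvaluation(
    context=["district survey shows need for basic-skills emphasis"],
    input=["current budget concentrates on advanced instruction"],
)
print(evaluation.missing_stages())  # ['process', 'product'] -- still to gather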
Criteria of Evaluation
The key consideration in the organizational chart is
the direct line relationship between the
director of evaluation services and the
superintendent. The implication is not that the
evaluator has no working relationships
with other central office and district
administrators, but that the evaluator must have
the freedom to focus, gather, and report useful
information as close as possible to the
individual having ultimate responsibility for
decisions affecting the school.
Information Criteria
1. Practical Criteria
Stufflebeam et al. (1971, 28) identify five practical
criteria, in addition to credibility, for judging the value
or worth of evaluative information (a brief checklist
sketch follows the list):
1. Relevance: evaluative data are collected to meet
certain purposes and, if the data do not relate to
those purposes, they are useless.
2. Importance: a great deal of information can be
collected which is nominally relevant for some purpose
...evaluative information must be culled to eliminate
or disregard the least important information and
highlight the most important information.
3. Scope: information may be relevant and
important but lack sufficient breadth or depth to be
useful.
4. Timeliness: the best information is useless if it
comes too late (or too soon); providing perfect
information late has no utility, but providing
reasonably good information at the time it is
needed can make a great deal of difference.
5. Pervasiveness: evaluation designs should contain
provisions to disseminate the evaluation findings to
all persons who need to know them.
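
To make the five criteria concrete, here is a minimal checklist sketch. It is an illustration under our own assumptions: the criterion names come from the text, but the boolean rating scheme and the function name are invented. It screens one item of evaluative information and reports which criteria it fails.

# Illustrative sketch only: screens a piece of evaluative information
# against Stufflebeam's five practical criteria. The rating scheme is
# our own simplification; real judgements are rarely simple yes/no calls.
PRACTICAL_CRITERIA = ("relevance", "importance", "scope",
                      "timeliness", "pervasiveness")

def failed_criteria(ratings: dict[str, bool]) -> list[str]:
    """Return the practical criteria this information fails to meet."""
    return [c for c in PRACTICAL_CRITERIA if not ratings.get(c, False)]

# Usage: a report that is relevant, important, and broad enough,
# but arrives too late and is not disseminated to those who need it.
report = {"relevance": True, "importance": True, "scope": True,
          "timeliness": False, "pervasiveness": False}
print(failed_criteria(report))  # ['timeliness', 'pervasiveness']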
2. Scientific Criteria
The following scientific criteria, according to Stufflebeam et
al. (1971, 27-28), are equally important:
1. Internal validity: the information must be “true”.
A more accurate way to state this is that there must
be a close, if not one-to-one, correspondence between
the information and the phenomena it represents.
2. External validity: refers to the “generalizability”
of the information. Does the information
hold only for the sample from which it was collected,
or for other groups and for the same group at other
times as well?
3. Reliability: refers to the consistency of
the information. If new data were gathered,
would the same finding result? Reliability
depends to a great extent on the nature of
the instruments used in gathering the
information (a small numeric sketch follows this list).
4. Objectivity: is concerned with the “publicness”
of the information. Would everyone
competent to judge agree on the meaning of
the data?
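
The reliability criterion in particular lends itself to a numeric illustration. The sketch below shows one common approach (test-retest correlation), not a method prescribed by the text: the scores are invented, and statistics.correlation requires Python 3.10 or later.

# Illustrative sketch only: quantifies the reliability question ("if new
# data were gathered, would the same finding result?") by correlating two
# administrations of the same instrument to the same group of students.
from statistics import correlation  # Pearson's r; Python 3.10+

first_administration = [72, 85, 64, 90, 78]   # scores, first round
second_administration = [70, 88, 66, 91, 75]  # same group, second round

# A coefficient near 1.0 suggests the instrument yields consistent findings.
r = correlation(first_administration, second_administration)
print(f"test-retest reliability: r = {r:.2f}")  # about 0.97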
In general, the use of practical and scientific
criteria ensures that an evaluation
process (CIPP) will reveal and communicate
accurate information about the programme or
service being studied. The degree of
accuracy by which information is judged is
the responsibility not only of the evaluator,
but also of the decision-maker who uses, or
chooses not to use, any or all of the
information provided.
Improving Management
Through Evaluation
Evaluation is itself a methodological activity
which is essentially similar whether we are trying to
evaluate coffee machines or teaching machines,
plans for a house or plans for a curriculum.
One of the values of this concept of evaluation is
the emphasis on goals and goal justification.
Must Be Goal-Oriented