Models of Curriculum Evaluation


Developed by Dr. G.A. Rathy, Assistant Professor, Electrical Engineering Department, NITTTR, Chennai

Models of Curriculum Evaluation

1. Concept of a model
2. Need for models
3. Models of curriculum evaluation
   1. Tyler's Model
   2. CIPP Model
   3. Stake's Model
   4. Roger Kaufman's Model
   5. Scriven's Model
   6. Kirkpatrick's Model
4. Criteria for judging evaluation studies

Concept of a Model

Theory: explains a process (Why?)
Model: describes a process (How?)

A model is a representation of reality, presented with a degree of structure and order.

Classification of Models

- Mathematical models
- Graphical models
  - Diagrams
  - Flow charts
- Three-dimensional models
  - Static models (small scale or large scale)
  - Working models (small scale or large scale)

Types of Models

- Conceptual models describe what is meant by a concept.
- Procedural models describe how to perform a task.
- Mathematical models describe the relationship between the various elements of a situation or process.

Why do we need a model for curriculum evaluation?

To provide a conceptual framework for designing a particular evaluation, depending on the specific purpose of the evaluation.

1. Tyler's Model (1949)

Key emphasis: instructional objectives
Purpose: to measure students' progress towards objectives
Method:
1. Specify instructional objectives
2. Collect performance data
3. Compare the performance data with the specified objectives/standards
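As a concrete illustration of these three steps, here is a minimal Python sketch; the objective names, mastery standards and class scores are invented for illustration and do not come from the slides:

```python
# Illustrative sketch only: Tyler's model does not prescribe code.

# Step 1: specify instructional objectives with a mastery standard (%).
objectives = {
    "solve linear equations": 80,
    "interpret graphs": 70,
}

# Step 2: collect performance data (average class scores, %).
performance = {
    "solve linear equations": 85,
    "interpret graphs": 62,
}

# Step 3: compare performance data with the specified standards.
for objective, standard in objectives.items():
    score = performance[objective]
    status = "attained" if score >= standard else "not attained"
    print(f"{objective}: {score}% vs standard {standard}% -> {status}")
```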

Limitations of Tyler's Model

1. Ignores the process
2. Not useful for diagnosing the reasons why a curriculum has failed

Tyler's Planning Model (1949)

1. Objectives: What educational goals should the school seek to attain?
2. Selecting learning experiences: How can learning experiences be selected which are likely to be useful in attaining these objectives?
3. Organising learning experiences: How can learning experiences be organised for effective instruction?
4. Evaluation of students' performance: How can the effectiveness of learning experiences be evaluated?

[Print, M. (1993) p 65]

2. CIPP Model (1971)

The CIPP model of evaluation concentrates on:
- Context of the programme
- Input into the programme
- Process within the programme
- Product of the programme

It focuses on decision making.

The four decision types arise from crossing ends/means with intended/actual:

- Planning decisions (intended ends): to determine objectives (policy makers and administrators)
- Structuring decisions (intended means): to design procedures (administrators, principals and HODs)
- Implementing decisions (actual means): to utilise, control and refine procedures (teachers, HODs and principals)
- Recycling decisions (actual ends): to judge and react to attainments (policy makers, administrators, teachers, HODs and principals)

Types of decisions:
- Intended ends (goals)
- Intended means (procedural designs)
- Actual means (procedures in use)
- Actual ends (attainments)

The CIPP matrix maps the same ends/means and intended/actual distinctions onto the four evaluation types:

- Context evaluation (intended ends). Qn: What? Concerns the environment and needs.
- Input evaluation (intended means). Qn: How? Concerns procedural designs, strategies and resources.
- Process evaluation (actual means). Qn: Are we? Concerns procedures in use, monitoring and implementation.
- Product evaluation (actual ends). Qn: Have we? Concerns attainments and outcomes, both their quality and significance.
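To keep the matrix straight, here is a minimal sketch that encodes the four evaluation types as a lookup table; the field names and the expanded question wordings are paraphrases chosen for illustration, not part of the model itself:

```python
# Illustrative sketch: the CIPP matrix as a small lookup table.
CIPP = {
    "context": {"focus": "intended ends", "question": "What should we do?",
                "concerns": ["environment", "needs"]},
    "input": {"focus": "intended means", "question": "How should we do it?",
              "concerns": ["procedural designs", "strategies", "resources"]},
    "process": {"focus": "actual means", "question": "Are we doing it as planned?",
                "concerns": ["procedures in use", "monitoring", "implementation"]},
    "product": {"focus": "actual ends", "question": "Have we succeeded?",
                "concerns": ["attainments", "outcome quality", "outcome significance"]},
}

for stage, info in CIPP.items():
    print(f"{stage.title()} evaluation ({info['focus']}): {info['question']}")
```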

Types of Evaluation
- Context evaluation
- Input evaluation
- Process evaluation
- Product evaluation

Context Evaluation

Objective:
- To determine the operating context
- To identify and assess needs and opportunities in the context
- To diagnose the problems underlying the needs and opportunities

Method: Comparing the actual and the intended inputs and outputs

Relation to decision making:
- For deciding upon the settings to be served
- For changes needed in planning

Typical context factors:
- Needs of industry and society
- Future technological developments
- Mobility of the students

Input Evaluation

Objective: To identify and assess system capabilities, available input strategies, and designs for implementing the strategies
Method: Analysing resources, solution strategies and procedural designs for relevance, feasibility and economy
Relation to decision making: For selecting sources of support, solution strategies and procedural designs for structure-changing activities

Typical input factors:
- Entry behaviour of students
- Curriculum objectives
- Detailed contents
- Methods and media
- Competencies of teaching faculty
- Appropriateness of teaching/learning resources

Process Evaluation

Objective: To identify defects in the procedural design or its implementation
Method: Monitoring the known procedural barriers, remaining alert to unanticipated ones, and describing the actual process
Relation to decision making: For implementing and refining the programme design and procedures for effective process control

Feedback to judge:
- The effectiveness of teaching-learning methods
- Utilisation of physical facilities
- Utilisation of the teaching-learning process
- Effectiveness of the system for evaluating students' performance

Product Evaluation

Objective: To relate outcome information to the objectives and to the context, input and process information
Method: Measuring outcomes against standards and interpreting them
Relation to decision making: For deciding to continue, terminate, modify, build on or refocus a change activity

Typical product indicators:
- Employability of technician engineers
- Social status of technician engineers
- Comparability of wage and salary structures
- Job adaptability and mobility

Stufflebeam's CIPP Model (1971)

Context, Input, Process and Product evaluation
Key emphasis: Decision making
Purpose: To facilitate rational and continuing decision making
Strengths: (a) Sensitive to feedback; (b) Supports rational decision making among alternatives
Evaluation activity: Identify potential alternatives; set up quality control systems

Limitations of the CIPP Model

1. Overvalues efficiency
2. Undervalues students' aims

CIPP View of Institutionalized Evaluation

The CIPP approach recommends:
- Multiple observers and informants
- Mining existing information
- Multiple procedures for gathering data; cross-checking qualitative and quantitative findings
- Independent review by stakeholders and outside groups
- Feedback from stakeholders

3. Stake's Model (1969)

- Antecedents: any condition existing prior to teaching and learning which may relate to outcomes.
- Transactions: the countless encounters of students with teachers, student with student, author with reader, parent with counsellor.
- Outcomes: measurements of the impact of instruction on learners and others.

Stake's framework pairs two data matrices with the programme rationale:

- Description matrix: intents and observations
- Judgement matrix: standards and judgements

Each matrix is filled in for three rows of data: antecedents, transactions and outcomes.

Antecedents
Conditions existing prior to curriculum implementation:
- Students' interests or prior learning
- Learning environment in the institution
- Traditions and values of the institution

Transactions
Interactions that occur between:
- Teachers and students
- Students and students
- Students and curricular materials
- Students and the educational environment

Transactions = the process of education

Outcomes
- Learning outcomes
- Impact of curriculum implementation on students, teachers, administrators and the community
- Immediate outcomes vs long-range outcomes

Three Sets of Data

1. Antecedents: conditions existing before implementation
2. Transactions: activities occurring during implementation
3. Outcomes: results after implementation

Describe the programme fully; judge the outcomes against external standards.
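One way to picture how the three data sets pair with the description and judgement matrices is the sketch below; the record structure and all sample entries are assumptions made up for illustration:

```python
# Illustrative sketch: recording Stake's three data sets for one programme.
from dataclasses import dataclass, field

@dataclass
class StakeRecord:
    intents: list = field(default_factory=list)       # description matrix
    observations: list = field(default_factory=list)  # description matrix
    standards: list = field(default_factory=list)     # judgement matrix
    judgements: list = field(default_factory=list)    # judgement matrix

programme = {
    "antecedents": StakeRecord(
        intents=["students have prerequisite algebra"],
        observations=["40% lack prerequisite algebra"],
    ),
    "transactions": StakeRecord(
        intents=["weekly lab sessions"],
        observations=["labs held fortnightly"],
    ),
    "outcomes": StakeRecord(
        intents=["80% pass rate"],
        observations=["72% pass rate"],
        standards=["external board norm: 75%"],
        judgements=["below standard; review lab schedule"],
    ),
}
```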

Stake's Model
Key emphasis:
Description and judgement of data

Purpose:
To report the ways different people see the curriculum

Focus is on responsive evaluation, which:
1. Responds to audience needs for information
2. Orients more toward programme activities than results
3. Presents all audience viewpoints (multi-perspective)

Limitations:
1. Stirs up value conflicts
2. Ignores causes

4. Roger Kaufman's Model

Needs assessment:
- Where are we now?
- Where are we to be?

Discrepancy:
The discrepancy between the current status and the desired status. Discrepancies should be identified in terms of products or actual behaviours (ends), not in terms of processes (means).
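A needs-assessment discrepancy stated in terms of ends can be as simple as a subtraction; the sketch below assumes two invented performance indicators and values:

```python
# Illustrative sketch: a needs-assessment discrepancy stated in terms of ends.
desired_status = {"graduate employment rate": 90, "certification pass rate": 85}
current_status = {"graduate employment rate": 70, "certification pass rate": 80}

for indicator, desired in desired_status.items():
    gap = desired - current_status[indicator]
    print(f"{indicator}: current {current_status[indicator]}%, "
          f"desired {desired}%, discrepancy {gap} points")
```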

Deduction
The drawing of a particular truth from a general, antecedently known truth (rule → examples).

Induction
Rising from particular truths to a generalisation (examples → rules).

5. Goal-Free Evaluation (1973)

Proponent: Michael Scriven

Goals are only a subset of anticipated effects:
Effects = intended effects + unintended effects

Roles of curriculum evaluation:

Scriven differentiates between two major roles of curriculum evaluation: the formative and the summative.
- Formative evaluation takes place during the development of the programme.
- Summative evaluation takes place at its conclusion.

Formative evaluation
It is carried out during the process of curriculum development. The evaluation results may contribute to the modification or formation of the curriculum.

For example, results of formative evaluation may help in:
1. Selection of programme components
2. Modification of programme elements


Summative evaluation
It is carried out after offering the curriculum once or twice. Such an evaluation summarises the merits and demerits of the programme.

A curriculum that operates satisfactorily over a period of time may become obsolete. To prevent this from occurring, a permanent follow-up of the curriculum and quality control of the programme should be maintained.

Methodology:

1. Determine what effects the curriculum had, and evaluate them whether or not they were intended.
2. Evaluate the actual effects against a profile of demonstrated needs.
3. Notice something that everyone else overlooked, or produce a novel overall perspective.
4. Do not be under the control of the management; choose the variables of the evaluation independently.

Criteria for judging evaluation studies:

1. Validity
2. Reliability
3. Objectivity / Credibility
4. Importance / Timeliness
5. Relevance
6. Scope
7. Efficiency

Kirkpatrick's Four Levels of Evaluation

In Kirkpatrick's four-level model, each successive evaluation level is built on information provided by the lower level.

Assessing training effectiveness often entails using the four-level model developed by Donald Kirkpatrick (1994).

According to this model, evaluation should always begin with level one and then, as time and budget allow, move sequentially through levels two, three and four. Information from each prior level serves as a base for the next level's evaluation.

Level 1 - Reaction
Evaluation at this level measures how participants in a training program react to it. It attempts to answer questions regarding the participants' perceptions: Was the material relevant to their work? This type of evaluation is often called a smile sheet. According to Kirkpatrick, every program should at least be evaluated at this level to provide for the improvement of the training program.

Level 2 - Learning
Assessment at this level moves the evaluation beyond learner satisfaction and attempts to assess the extent to which students have advanced in skills, knowledge, or attitude.

To assess the amount of learning that has occurred due to a training program, level-two evaluations often use tests conducted before training (pre-test) and after training (post-test).
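As a minimal sketch of this pre-test/post-test comparison, assuming invented scores on a 100-point scale:

```python
# Illustrative sketch: comparing pre-test and post-test scores for level 2.
# Scores are invented; a real evaluation would use validated instruments.
pre_test = [45, 60, 55, 70, 50]
post_test = [75, 80, 85, 90, 65]

mean_pre = sum(pre_test) / len(pre_test)
mean_post = sum(post_test) / len(post_test)

# Average gain, plus a gain normalized by the room left to improve.
gain = mean_post - mean_pre
normalized_gain = gain / (100 - mean_pre)

print(f"mean pre-test:  {mean_pre:.1f}")
print(f"mean post-test: {mean_post:.1f}")
print(f"average gain:   {gain:.1f} points (normalized gain {normalized_gain:.2f})")
```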

Level 3 - Transfer
This level measures the transfer that has occurred in learners' behavior due to the training program. Are the newly acquired skills, knowledge, or attitudes being used in the everyday environment of the learner?

Level 4 - Results
This level measures the success of the program in terms that managers and executives can understand: increased production, improved quality, decreased costs, reduced frequency of accidents, increased sales, and even higher profits or return on investment.

Level-four evaluation attempts to assess training in terms of business results. For example, sales transactions improved steadily after training for sales staff occurred in April 1997.
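Since level four often reduces to arithmetic on costs and benefits, the sketch below shows one common return-on-investment calculation; all monetary figures are invented for illustration:

```python
# Illustrative sketch: a simple training ROI calculation for level 4.
training_cost = 50_000     # total cost of the programme
monetary_benefit = 80_000  # estimated benefit attributed to the training

roi_percent = (monetary_benefit - training_cost) / training_cost * 100
print(f"ROI: {roi_percent:.0f}%")  # -> ROI: 60%
```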

Methods for Long-Term Evaluation

- Send post-training surveys
- Offer ongoing, sequenced training and coaching over a period of time
- Conduct follow-up needs assessments
- Check metrics to measure whether participants achieved training objectives
- Interview trainees and their managers, or their customer groups
