Journal of Institutional Research in South East Asia - Vol. 15 No. 1 May/June 2017
INTERNAL BENCHMARKING SYSTEM FOR
HEIs' PERFORMANCE EXCELLENCE
Teay Shawyun
King Saud University, Saudi Arabia
Abstract
Benchmarking has long been held as a potentially important and inherent
process whereby academic programs or HEIs compare their progress or
performance against a comparable external entity that is equal, better, or the
best, within the same sector or across different sectors. While this type of
benchmarking has long been practiced, it is practically external to the
institution, and external benchmarks are the norm that is favored and very
widely used. Though external benchmarking is the norm, there appears to be a
lack of research into internal benchmarking systems, whereby academic
programs within the same school, programs across different schools, or schools
across the whole institution can be benchmarked within the same institution.
The call for internal benchmarking is for continuous improvement, sharing,
and learning across the different units, fostering competitive collaboration
towards the institution's mission collectively. While most accreditation bodies
emphasize the importance of external benchmarks, the NCAAA (National
Commission on Academic Accreditation and Assessment) of Saudi Arabia
dictates that all academic programs or HEIs seeking national accreditation
must provide evidence of both external and internal benchmarks. This has led
to a serious dilemma for HEIs, as most IQA (Internal Quality Assurance)
systems were not established with internal benchmarks, and many institutions
lack an inherent IQA for the HEI or its programs for internal quality
management. Based on this requirement, this paper advocates an IQA system
that focuses on process-based and results-based criteria and their inherent
performance indicators as a strong and valid composite set of fundamentals for
incorporating an internal benchmarking system within the IQA management
system. The paper provides a broad literature review of benchmarking
applications in HEIs and uses a case study of the IQA system of a leading
Middle East university to exemplify and realize this internal benchmarking
mechanism.
Key words: IQA (Internal Quality Assurance) System, Internal Benchmarking,
HEIs (Higher Education Institutions)
Introduction
While it is normal practice for any institution or organization to know its performance and how
well it is doing, top executives would also like to know how well they are doing relative to
others. What are their comparators? How well do they measure up? These are the essential
questions to which University Councils, Boards of Trustees, Boards of Governors, and indeed all
key executives of HEIs seek answers about their performance relative to others. This is where
comparisons of performance leading to the ultimate "ranking" games are still the name of the
game. But effectively, it would be too idealistic for an institution still muddling along to compare
itself with those in the top tiers; it needs to find its best nearby comparators, which is where
benchmarking figures highly.
What exactly is benchmarking? Webster's Dictionary defines it as "a standard or reference by
which others can be measured or judged". Dew and Nearing (2004) provided a more definitive
statement: "Benchmarking means finding out who is the best in an area, studying how they
work, and adopting the best practices that are suitable to your organization". Benchmarking is
normally used for improving core processes important to the delivery and accomplishment of
educational value, by examining these processes or models relative to other schools and adapting
their techniques and approaches (Camp, 1989, 1995; Chaffee and Scherr, 1992; Clark, 1993).
Kempner's (1993) concise statement was that "benchmarking is an ongoing process for
measuring and comparing the work processes of one organization to another, by bringing an
external focus to internal activities, functions or operations", and it can bring about cost
improvements (Shafer & Coate, 1992; Watson, 1993). McNair and Leibfried (1992) analogized
benchmarking to a human learning process, describing it as a method of teaching an institution
or program how to improve.
Innovation Network (1994) emphatically stated that benchmarking is not just a comparative
analysis of how one measures up to another on some indicators, as such comparisons do not
"drive change" and "do not specifically focus on practices that create superior performances". In
addition, it is not process reengineering, where processes are examined and improved; nor just a
survey, where data is collated and aggregated; nor a "three-hour show and tell" comparison with
another institution. It is not a one-off event (Spendolini, 1992): it is a continuous process that
provides valuable information rather than simple answers; it is a process of learning from others
rather than copying, adopting or adapting; it is a time-consuming and labor-intensive process
rather than a quick and easy one; and it is a viable tool for improvement and innovation rather
than a fad or buzzword. These definitions practically mean that these "processes" should be
evaluated against a set of criteria that can determine their "progress in performance" as an active,
dynamic, longitudinal, continuously improving set of actions, as opposed to static, one-off proxy
performance indicators that are mere snapshots of performance.
Practices of Benchmarking in HEIs
Schofield (1998) stated that the key reasons for benchmarking include: 1) greater international
competitiveness; 2) development of interest in enhancing quality and the growth of the quality
"movement"; and 3) the rapid growth of IT, making data collection and management possible.
Practically, the key rationale for benchmarking in HEIs is that it allows an HEI to
"leapfrog" by learning from others and bringing about innovative changes or adaptations of
best practices, which offers a way towards cost containment and enhanced quality and value-
added educational offerings in a cost-effective and quality-oriented framework (APQC, 1993;
Shafer & Coate, 1992). Typical mistakes to avoid when implementing benchmarking include:
ineffective leadership; poor team selection and preparation; inadequate resources and support
mechanisms; imprecise objectives; unrealistic time and cost expectations, as benchmarking is
time consuming and costly in capturing and managing data; inadequate understanding of
both data and practices; and inappropriate follow-through (Innovation Network, 1997). These are
supported by Fielden (1997), who observed the misconception that benchmarking is a quick and
inexpensive process.
According to Tuominen (1993; 1997), there are four types of benchmarking: 1) Strategic
benchmarking, which examines how organizations compete by analyzing strategic goals in
search of alternative activities as part of the strategic planning process, through comparison of
the strategic choices and dispositions made by other companies or organizations, for the purpose
of collecting information to improve one's own strategic planning and positioning (Andersen &
Pettersen, 1996, 5; Watson, 1993); 2) Performance benchmarking, which compares
organizational key processes, products and services, focusing on elements of price, technical
quality, product or service features, speed, reliability and other performance characteristics,
leading to the assessment of competitive positions (Bogan & English, 1994, 7-9); 3) Process
benchmarking, which deals with learning to improve one's own selected processes by identifying
the most effective operating practices from several organizations performing similar operational
functions, with the comparison and analysis aimed at focusing on and describing the methods
and activities that lie behind the identified performance improvement; and 4) Competence
benchmarking, which advocates the idea that the foundation of organizational change lies in
changing the actions and behavior of individuals and teams; Karlöf & Östblom (1996, 197) use
the term "benchlearning", which also refers to the cultural changes involved in efforts to become
a learning organization.
Most literature identifies five main techniques of benchmarking: 1) Internal, where comparisons
are made of different schools, programs, campuses, or departments within the same university to
identify best practices in the institution; 2) External competitive, where comparison of
performance in key areas is based on information from institutions seen as competitors; 3)
External collaborative functional/industry, involving comparisons with a large group of
institutions that are not competitors; 4) External trans-industry, generic, or best-in-class, which
looks across multiple industries for new and innovative "best practices" regardless of type or
source; and 5) Implicit benchmarking, where benchmarking variants are applied through market
pressures of privately produced data, central funding, or coordinating agencies within individual
systems (Alstete, 1995; Camp, 1989; Schofield, 1998).
Though there are several types of benchmarking, there are basically five main methodological
approaches: 1) Ideal type or "gold" standards, whereby a model like the MBNQA (Malcolm
Baldrige National Quality Award) is created based on best practices and used as the basis to
assess institutions on the extent of their "fit" with the model; 2) Activity based, where a selected
number of activities typical of or representative of the range of institutional provision are
analyzed and compared against selected institutions; 3) Vertical, which seeks to quantify costs,
workloads, productivity and performance in a defined functional area; 4) Horizontal, which
analyzes costs, workloads, productivity and performance of a single process that cuts across
different functional areas in an institution; and 5) Use of comparative indicators, based on
privately or centrally collected and published datasets (Camp, 1989, 1995; Spendolini, 1992;
Schofield, 1998).
Underlying these methodologies is basically a five-step process of: 1) determining what to
benchmark; 2) forming a benchmarking team and planning the benchmarking study; 3)
identifying benchmarking partners and conducting the research; 4) analyzing the data and
information; and 5) taking action and adapting the findings to the home institution (Watson,
1989; Spendolini, 1992). Zairi (1996), on the other hand, presented a 16-step, two-phase
approach that distinguishes the actions that ensure the effectiveness of existing processes from
those that gain benefits from: 1) transfer of knowledge from and to employees/organizations;
2) acquisition of specific knowledge lacking in the organization; and 3) development of the
performance of managers by providing them with increasingly demanding tasks (Zairi, 1992, 6).
While benchmarking has made some inroads into HEIs in the USA and Canada, "benchmarking"
there is often not benchmarking in its true sense, but "typically a systematic generation of
management information of performance indicators that can lead to identification of
benchmarks, but do not extend to the identification of best practices within the institution's
context, or 'outside of the box'"; such practices are sporadic and continue to be "halting"
(Schofield, 1998). The Australian experience is limited and confined to individual practices; it
needs more commitment to be seen as part of the core business of the HEI, and must overcome
problems arising from the cultural and definitional differences implied in comparisons as
interpreted by professionals (Massaro, 1998). In the UK, while benchmarking is alive, it is still in
an infancy that suggests worthwhile benefits (Lund, 1998). In Europe, benchmarking in HE is
not common, nor does it show the strict pattern of benchmarking described in textbooks
(Schreiterer, 1998). In summary, "benchmarking" in HEIs has diverse definitional and practical
meanings across institutions and countries, depending on their context of "what is important"
and what they want, which are influenced by the socio-political context of differing needs and
expectations.
Performance Measurement and Benchmarking in HEIs
In most HEIs, it is undisputed that "management through measurement" is an imperative
performance measurement and management approach that accentuates informed decisions or
actions on the status of performance "as it stands at a specific point of time when measured",
normally based on a set of measures that includes standards, criteria or performance indicators.
These institutional, collegiate or programmatic performance measures are assumed to provide
pre- or post-performance status to determine increased or deteriorating performance over a period
of time. They should be SMART (Specific, Measurable, Achievable, Relevant and Time-bound)
measures of the efficiency or effectiveness of inputs, processes, outputs and outcomes. As
highlighted in Serraf's "The Perils of Performance Measurement" (2010), performance
indicators can be misinterpreted, misunderstood or misused due to: (1) diverse definitions of
indicators, ranging from an objective "numerical measure of the degree to which an objective is
being achieved" to a more subjective "observable change or event that provides evidence that
something has happened"; (2) complicated cause-effect relationships arising from real-world
complexity beyond the control of an agency's processes, which outcome-focused approaches
such as outcome mapping (Earl, Carden, and Smutylo, 2010), "a methodology that can be used
to create planning, monitoring, and evaluation mechanisms enabling organizations to document,
learn from, and report on their achievements", or the BSC (Balanced Scorecard) of Kaplan and
Norton (1992) attempt to address; and (3) the fact that, according to Behn (2003), performance
measurement can serve eight specific purposes, namely to evaluate, control, budget, motivate,
promote, celebrate, learn, and improve, and there is no single metric that serves all eight of these
arch purposes.
Stated or unstated, most HEIs aim for performance excellence or desired excellence. In
performance excellence, there are two main proponent models, the MBNQA and the EFQM,
plus other adapted/hybrid versions, which are inherently based on two sets of performance
criteria. These underlying performance criteria are the process or enabler criteria, which
subscribe more to Deming's PDCA (Plan, Do, Check and Act) through the ADLI (Approach,
Deployment, Learning and Integration) evaluators of processes or enablers, and the results
criteria (LeTCI: Level, Trend, Comparison and Integration), which are normally performance
indicators or statistical performance data that are inevitably benchmarked through their
comparatives.
As discussed above, while there are perils in the use of performance indicators, they should be
defined within the context of the HEI, with an identified purpose, to best measure what they are
supposed to measure. In addition, key performance indicators can be defined for the key
educational and service support processes that create and deliver educational value, and for the
results-based criteria aimed at excellence. This inherently means that the performance measures
are based on the performance scoring of ADLI for the evaluation of processes and LeTCI for the
evaluation of results, or on the numerical values of the results indicators themselves.
These standpoints open up another dimension of internal benchmarking in HEIs, one that needs
re-thinking and re-design so that it can be used to compare the performance of different colleges
within the institution, or programs within a college, based on a generic set of process and results
performance criteria. This, as such, is the premise of the proposed approach for internal
benchmarking, which can be used for units under the same institutional umbrella that
collectively work towards a generic, high-level institutional or organizational mission, as
advocated in the following sections of this paper. The underlying assumption is that all units
within the same institution or organization work towards the same institutional or organizational
mission, and can be measured with the same international best-practice yardstick as selected for
that institution. Units are evaluated on a set of generic process- and results-based criteria applied
across the board, using the same ADLI and LeTCI performance scoring system on the same
process and results yardstick.
IQA Benchmarking and CQI (Continuous Quality Improvements)
At the core of any accreditation is CQI (Continuous Quality Improvement), which is a key
outcome, and this outcome needs to be compared, albeit with peer programs/institutions, to
understand how well one is faring. As such, another key rationale for benchmarking is to ensure
that CQI remains the focus and locus of learning how the learning and educational service
support systems function and can be improved to bring about better performance. It allows
HEIs to identify, strategize, and implement activities, improvements or innovations in delivering
educational value, on par or above par compared with their peers, and to know that they are
moving in the right direction. As such, CQI is basically the heart of the IQA of any HEI. Most
IQA quality management evaluation can be approached in four ways: measuring against defined
objectives or standards set internally or externally; against measures of customer satisfaction;
against expert and professional judgment; and against comparator institutions, all over a defined
time-scale (Schofield, 1998).
Dew and Nearing (2004) stated that changing the CQI "system" of an institution requires the
implementation of:
• a scientific process of "developing an accurate description of the process, via
flowcharting, the collection of data regarding the performance of the process, and the
development of theories as to how the process can be changed to achieve the desired
result";
• a socio-political process of "assembling the right group of people to study and modify
the process, giving their efforts legitimacy on the campus, and providing them with the
necessary time and access to data", which is affected by the socio-cultural aspects of the
university system, its transforming leadership, staff empowerment, organizational
learning (Schofield, 1998), organizational and individual capacity and capability (Teay,
2008, 2009), its resources and so on; and
• a technical process of a "QMS (Quality Management System), which is a set of
coordinated activities to direct and control an organization in order to continually
improve the effectiveness and efficiency of performance" (Department of Trade and
Industry UK, 1998).
The QMS of an HEI normally constitutes the IQA of the HEI, and to support better performance,
the internal processes can also be internally benchmarked across the different programs within a
school, or the different schools within an institution. While benchmarking, as discussed earlier, is
mostly external in nature, there is a paucity of research into, or development of, internal
benchmarking systems for an HEI based on its process- and results-based criteria. As such, this
paper advocates that setting up an internal benchmarking system will contribute to better
learning and sharing within the HEI itself, towards better coordination, cooperation, sharing and
learning, and thus more efficient and effective performance in the creation and delivery of
educational value through its process and results criteria.
The IQA Case Study
KSA (the Kingdom of Saudi Arabia) launched its national accreditation requirement in 2007,
and it was strongly enforced from 2011. Illustrating this mandatory accreditation, the case-study
institution is the leading and oldest university in KSA and a top-ranked institution (in the 200 to
300 groupings) on all the world ranking systems; it was the first to be accredited in 2010 and was
re-accredited in 2017. To comply with the previous NCAAA (National Commission on
Academic Accreditation and Assessment) and the present EEC-HES (Education Evaluation
Commission - Higher Education Sector), the institution established its own unique IQA system,
the KSU-QMS (King Saud University Quality Management System). It follows the 11 Standards
and 58 Sub-Standards of the NCAAA, adapted to a set of "process-based" criteria and "results-
based" criteria in the manner of the MBNQA (Malcolm Baldrige National Quality Award) and
the EFQM (European Foundation for Quality Management).
The resulting KSU-QMS model (KSU, 2012) comprises a set of 58 process criteria and 56
results criteria (composed of 42 quantitative KPIs and 14 qualitative KPIs). It also adopted the
MBNQA performance evaluation approach of using ADLI (Approach, Deployment, Learning
and Integration) for the "process" criteria and LeTCI (Level, Trend, Comparison and
Integration) for the "results" criteria. The "process" and "results" criteria were assigned
weights, and the scoring is based on a 100% scoring range, all of which leads to a weighted
score out of 1000 points. This literally means that there is a "performance score" for each of the
process and results criteria. The protocol is for a program to assess its own process and results
performance scoring in its SSR (Self-Study Report). The SSR is then submitted to the
independent, university-appointed KSU-BOA (Board of Assessors) for performance assessment,
to ascertain the variances or differentials in performance scoring assessment. This ultimately
leads to the QPAR (Quality Performance Assessment Report) developed by the KSU-BOA,
which identifies the performance scores for each of the criteria, the overall performance score
out of 1000 points, the strengths, and the OFIs (opportunities for improvement). The assessed
program then closes the CQI loop through the development of action plans, implementation of
those plans to address the OFIs, and updating of the SSR, which leads to the second cycle of
internal audit and assessment by the KSU-BOA. This is recursive and is nurtured through a
mentoring system with an appointed mentor for each school.
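The weighted-scoring mechanics described above can be sketched as follows. This is a minimal illustration, not the actual KSU-QMS implementation: the criterion names, weights and achievement percentages below are hypothetical, and the real rubric spreads 1000 points across all 58 process and 56 results criteria.

```python
# Sketch of a weighted scoring scheme in the style of KSU-QMS: each
# criterion is evaluated on a 0-100% scale (ADLI for process criteria,
# LeTCI for results criteria) and carries a point weight; the weights
# across the full rubric sum to 1000 points. Names, weights and
# percentages here are illustrative only.

def weighted_score(criteria):
    """criteria: iterable of (name, weight_points, percent_achieved)."""
    return sum(weight * pct / 100.0 for _, weight, pct in criteria)

sample_ssr = [
    ("Mission and Objectives (process, ADLI)", 40, 60),   # hypothetical
    ("Learning and Teaching (process, ADLI)", 250, 45),   # hypothetical
    ("Research results (LeTCI)", 200, 50),                # hypothetical
]

print(weighted_score(sample_ssr))  # points earned out of the 490 weighted here
```

Summing weight times achievement in this way is what makes scores comparable across programs assessed on the same rubric.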
The Proposed Internal Benchmarking System
This methodology provides KSU with an objective performance assessment of the good practices
within the 58 core processes of the 11 Standards, and a set of 56 quantifiable, institution-
prescribed KPIs. This serves as the foundation of the proposed Internal Benchmarking System
(Teay, 2013). With these quantifiable and objective performance scores of the process- and
results-based criteria, based on the ADLI and LeTCI evaluation scheme, and with the actual
KPIs as computed, performance comparisons can be made on a generic set of Standards and
Criteria across:
• all Colleges within KSU as an institution;
• all programs within the same College as an entity;
• Colleges and their programs within a categorical grouping of the Health Science Group,
Humanities Group and Science Group.
The use of the 1000-point performance scoring system and the 22 sets of Results Criteria with
their 56 KPIs (42 quantitative and 14 qualitative Key Performance Indicators) are the basis of the
internal benchmarking system aimed at:
1. Providing an objective performance scoring of the process and results criteria, which can
be used as a set of internal benchmarks to compare the performance of the Colleges in the
institution or the programs within a college as a whole.
2. Providing an objective set of 56 institution-prescribed KPIs that can be used to compare
the performance of the Colleges in the institution or the programs within a college as a
whole.
3. Providing a composite of comparative performance based on selected Standards and
KPIs to compare the performance of the Colleges in the institution or the programs
within a college as a whole.
4. Providing an objective quality management system based on (1), (2) and (3) for
continuous improvement, and as an "internal ranking system" for the allocation of
resources, awards or capital resources, namely:
a. financial support for quality initiatives;
b. financial incentives for quality motivation.
Based on the above rationale, the five different types of computation and analysis proposed
(combined with the purpose of the above clustering) can be used for internal benchmarking, or as
the basis of "ranking" programs/schools within KSU (depending on the selection of the KPIs,
areas, or process or results performance), based on objective performance evaluation as follows:
• Type 1 analysis and internal benchmarks (with 3 sub-types), based on the performance
scoring of all 11 Standards, 58 processes and 56 KPIs;
• Type 2 analysis and internal benchmarks (with 3 sub-types), based on the performance
scoring of a selected group of Standards that the institution / college / categorical group
desires, e.g. only the criteria of Standards 1, 2, 3, 4 and 10;
• Type 3 analysis and internal benchmarks (with 3 sub-types), based on selected
institutional KPIs of the Standards, without using the performance scores of the
processes, e.g. 14 selected KPIs from Standards 4 (Learning and Teaching) and 10
(Research);
• Type 4 composite analysis and internal benchmarks (with 3 sub-types), based on the
performance scoring of selected Standards as a related categorical grouping plus
selected prescribed institutional KPIs, e.g. the process criteria and KPIs of Standards 4
(Learning and Teaching) and 10 (Research);
• Type 5 composite analysis and internal benchmarks (with 4 sub-types), based on the
performance scoring of all Standards, or selected Standards for a related categorical
grouping, and selected prescribed institutional KPIs within a group of similar Colleges,
e.g. only the Colleges in the Health Science Group.
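The selection logic behind these analysis types can be sketched as a filter over per-Standard scores followed by a ranking. The units and score values below are illustrative sample data, not actual KSU results.

```python
# Sketch of Type 1 vs Type 2 internal benchmarking: Type 1 ranks units
# on their totals over all Standards, Type 2 over a selected subset.
# Units and scores below are illustrative sample data only.

scores = {  # unit -> {standard number: weighted performance score}
    "College 1": {1: 5, 4: 48, 10: 61},
    "College 2": {1: 21, 4: 60, 10: 80},
    "College 3": {1: 25, 4: 90, 10: 110},
}

def rank(scores, standards=None):
    """Rank units by total score over the selected Standards (all if None)."""
    def total(unit):
        return sum(v for s, v in scores[unit].items()
                   if standards is None or s in standards)
    return sorted(scores, key=total, reverse=True)

print(rank(scores))           # Type 1 style: all Standards
print(rank(scores, {4, 10}))  # Type 2 style: Standards 4 and 10 only
```

The same filter-then-rank pattern covers Types 3 to 5 by swapping in KPI scores or restricting the units to one categorical group.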
Using the analyses and approaches above, KSU can use this Internal Benchmarking System to
determine the performance of each College / Program / categorical group and use the
performance scoring and KPIs as the basis for "ranking" and "allocation of resources", i.e. the
basis of informed decision making by the institution / college management. This illustration
provides only a sample discussion of the internal benchmarking system based on Type 1 and
Type 4 analyses. Though the types of analysis discussed are different, they use the same set of
sample data for illustration purposes. Across all types of analysis, there are 16 sub-types.
a) Type 1 Analysis based on the performance scoring of all Standards
Table 1: Sample of Performance Scoring Comparisons of INSTITUTION / COLLEGE /
PROGRAM AS A WHOLE across different academic years (for trend analysis)

Performance Achievement (Institution / College / Program), scaled scoring performance:

Standard (weight)                                                  2013   2014   2015   2016
Standard 1: Mission and Objectives (40)                               5     21     25     29
Standard 2: Governance and Administration (50)                       10     21     25     32
Standard 3: Management of quality assurance and improvement (70)     12     26     35     40
Standard 4: Learning and Teaching (250)                              48     60     90    120
Standard 5: Student administration and support services (70)         23     33     40     45
Standard 6: Learning resources (60)                                  26     32     35     35
Standard 7: Facilities and equipment (60)                            22     35     37     37
Standard 8: Financial planning and management (40)                   15     19     20     20
Standard 9: Employment processes (80)                                28     36     40     45
Standard 10: Research (200)                                          61     80    110    130
Standard 11: Institutional relationships with the community (80)      5     11     30     35
Standards Overall Performance Score (1000)                          261    374    487    568
Discussion: This analysis is based on all 11 Standards over a period of four years, to provide a
trend analysis of the comparative performance of the institution as a whole. It shows progressive
improvement in all Standards across the institution over the four-year period.
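The trend reading above amounts to computing year-over-year changes in the scores; a minimal sketch using the overall scores from the sample table:

```python
# Year-over-year improvement in the overall performance score
# (overall scores taken from the sample illustration above).
overall = {2013: 261, 2014: 374, 2015: 487, 2016: 568}

years = sorted(overall)
deltas = {later: overall[later] - overall[earlier]
          for earlier, later in zip(years, years[1:])}
print(deltas)  # positive values indicate improvement
```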
Table 2: Sample of Performance Scoring Comparisons of ALL COLLEGES WITHIN THE
INSTITUTION in an academic year

Performance Achievement for AY 2015/2016, scaled scoring performance:

Standard (weight)                                              College 1  College 2  College 3  College 4
Standard 1: Mission and Objectives (40)                              5         21         25         29
Standard 2: Governance and Administration (50)                      10         21         25         32
Standard 3: Management of quality assurance and improvement (70)    12         26         35         40
Standard 4: Learning and Teaching (250)                             48         60         90        120
Standard 5: Student administration and support services (70)        23         33         40         45
Standard 6: Learning resources (60)                                 26         32         35         35
Standard 7: Facilities and equipment (60)                           22         35         37         37
Standard 8: Financial planning and management (40)                  15         19         20         20
Standard 9: Employment processes (80)                               28         36         40         45
Standard 10: Research (200)                                         61         80        110        130
Standard 11: Institutional relationships with the community (80)     5         11         30         35
Standards Overall Performance Score (1000)                         261        374        487        568
Discussion: This analysis is based on all 11 Standards, to provide a snapshot of the annual
comparative performance of all the colleges in the institution in the academic year 2015/2016. It
shows that, of the four colleges analyzed, compared and internally benchmarked, the best-
performing college is College 4, while the worst-performing is College 1. Based on this
internally benchmarked performance on the same 11 Standards, the institution can take
corrective or remedial actions, or allocate resources and financial incentives accordingly.
Table 3: Sample of Performance Scoring Comparisons WITHIN SAME COLLEGE OF ITS
DIFFERENT PROGRAMS in an academic year

Performance Achievement for AY 2015/2016, scaled scoring performance:

Standard (weight)                                              Program 1  Program 2  Program 3  Program 4
Standard 1: Mission and Objectives (40)                              5         21         25         29
Standard 2: Governance and Administration (50)                      10         21         25         32
Standard 3: Management of quality assurance and improvement (70)    12         26         35         40
Standard 4: Learning and Teaching (250)                             48         60         90        120
Standard 5: Student administration and support services (70)        23         33         40         45
Standard 6: Learning resources (60)                                 26         32         35         35
Standard 7: Facilities and equipment (60)                           22         35         37         37
Standard 8: Financial planning and management (40)                  15         19         20         20
Standard 9: Employment processes (80)                               28         36         40         45
Standard 10: Research (200)                                         61         80        110        130
Standard 11: Institutional relationships with the community (80)     5         11         30         35
Standards Overall Performance Score (1000)                         261        374        487        568
Discussion: This analysis is based on all 11 Standards, to provide a snapshot of the annual
comparative performance of all the programs in the College in the academic year 2015/2016. It
shows that, of the four programs analyzed, compared and internally benchmarked, the best-
performing program is Program 4, while the worst-performing is Program 1. Based on this
internally benchmarked performance on the same 11 Standards, the institution or college
administration can take corrective or remedial actions, or allocate resources and financial
incentives accordingly.
b) Type 4 Composite Analysis based on performance scoring of selected Standards as a related categorical grouping and selected prescribed Institutional KPIs
Table 4: Sample of Performance Scoring Comparisons related to Governance and quality of educational offers of INSTITUTION / COLLEGE / PROGRAM AS A WHOLE across different academic years for trend analysis

| Scaled Scoring Performance | Weights | 2013 | 2014 | 2015 | 2016 |
| Standards related to quality of educational offers | | | | | |
| o Standard 4: Learning and Teaching | 250 | 61 | 80 | 110 | 120 |
| o Standard 10: Research | 200 | 48 | 60 | 90 | 130 |
| Performance scores of quality of educational offers | 450 | 109 | 140 | 200 | 250 |
| Selected prescribed Institutional KPI (Score / KPI value) | | | | | |
| o Proportion of full-time equivalent students in proportion to the total number of full-time faculty members | 3 | 0.08 / 60:1 | 1.09 / 50:1 | 1.5 / 30:1 | 1.89 / 10:1 |
| o Percentage of the full-time faculty members obtaining academic or professional awards at the national or international level (%) | 3 | 0.60 / 0.02 | 0.62 / 0.03 | 0.62 / 0.03 | 1.28 / 0.05 |
| o Proportion of students entering undergraduate programs who complete those programs in minimum time | 3 | 0.08 / 0.12 | 0.49 / 0.33 | 0.9 / 0.55 | 1.77 / 0.85 |
| o Proportion of full-time teaching staff with at least one refereed publication during the year | 5 | 0.38 / 0.05 | 0.69 / 0.25 | 0.69 / 0.25 | 2.19 / 0.35 |
| o Number of citations in refereed journals in the previous year per full-time equivalent teaching staff | 5 | 0.31 / 0.60 | 1.19 / 1.20 | 1.31 / 1.30 | 2.75 / 1.90 |
| Performance scores of quality of educational offers and research | 19 | 1.45 | 4.08 | 5.02 | 9.88 |
| Overall Performance Score | 469 | 110.45 | 144.08 | 205.02 | 259.88 |
Discussion: This type of analysis is a composite of selected Standards and selected prescribed KPIs over a period of 4 years to provide a trend analysis of the comparative performance of the institution as a whole. It shows that there are progressive improvements in the selected Standards 4 and 10, which relate to "quality of educational offers", across the whole institution over the 4-year period. This is also supported by the analysis of the selected prescribed institutional KPIs, which provides an objective internal benchmark based on selected Standards and KPIs; in this case, the KPIs relate to "quality of teaching, learning and research". Here too, the analysis shows progressive improvements over the 4-year trend period.
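The Type 4 trend computation just discussed amounts to simple arithmetic: add the Standards subtotal and the KPI subtotal per year, then inspect the year-over-year change. A minimal sketch using the figures from Table 4 (variable names are illustrative):

```python
# Composite trend analysis using the Table 4 figures: the Standards subtotal
# (Standards 4 + 10, weight 450) and the KPI subtotal (five KPIs, weight 19)
# are combined per year, and year-over-year deltas show the direction of travel.

years = [2013, 2014, 2015, 2016]
standards_score = {2013: 109, 2014: 140, 2015: 200, 2016: 250}
kpi_score = {2013: 1.45, 2014: 4.08, 2015: 5.02, 2016: 9.88}

# Overall composite score per year (matches the "Overall Performance Score" row).
composite = {y: round(standards_score[y] + kpi_score[y], 2) for y in years}

# Year-over-year deltas: consistently positive values mean progressive improvement.
trend = {y1: round(composite[y1] - composite[y0], 2)
         for y0, y1 in zip(years, years[1:])}

print(composite)  # {2013: 110.45, 2014: 144.08, 2015: 205.02, 2016: 259.88}
print(trend)      # {2014: 33.63, 2015: 60.94, 2016: 54.86}
```

A consistently positive delta is the quantitative signature of the "progressive improvement" the discussion describes.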
Table 5: Sample of Performance Scoring Comparisons related to Governance and quality of educational offers of ALL COLLEGES WITHIN THE INSTITUTION in an academic year

| Scaled Scoring Performance | Weights | College 1 | College 2 | College 3 | College 4 |
| Performance Achievement for AY 2015/2016 | | | | | |
| Standards related to quality of educational offers | | | | | |
| o Standard 4: Learning and Teaching | 250 | 61 | 80 | 110 | 120 |
| o Standard 10: Research | 200 | 48 | 60 | 90 | 130 |
| Performance scores of quality of educational offers | 450 | 109 | 140 | 200 | 250 |
| Selected prescribed Institutional KPI (Score / KPI value) | | | | | |
| o Proportion of full-time equivalent students in proportion to the total number of full-time faculty members | 3 | 0.08 / 60:1 | 1.09 / 50:1 | 1.5 / 30:1 | 1.89 / 10:1 |
| o Percentage of the full-time faculty members obtaining academic or professional awards at the national or international level (%) | 3 | 0.60 / 0.02 | 0.62 / 0.03 | 0.62 / 0.03 | 1.28 / 0.05 |
| o Proportion of students entering undergraduate programs who complete those programs in minimum time | 3 | 0.08 / 0.12 | 0.49 / 0.33 | 0.9 / 0.55 | 1.77 / 0.85 |
| o Proportion of full-time teaching staff with at least one refereed publication during the previous year | 5 | 0.38 / 0.05 | 0.69 / 0.25 | 0.69 / 0.25 | 2.19 / 0.35 |
| o Number of citations in refereed journals in the previous year per full-time equivalent teaching staff | 5 | 0.31 / 0.60 | 1.19 / 1.20 | 1.31 / 1.30 | 2.75 / 1.90 |
| Performance scores of quality of educational offers and research | 19 | 1.45 | 4.08 | 5.02 | 9.88 |
| Overall Performance Score | 469 | 110.45 | 144.08 | 205.02 | 259.88 |
Discussion: This type of analysis is a composite of selected Standards and selected prescribed KPIs to provide a snapshot of the comparative performance of all the colleges in the institution for academic year 2015/2016. It shows that College 4 is the "best performing" while College 1 is the "worst performing" based on the selected Standards 4 and 10, which relate to "quality of educational offers", across the whole institution. This is also supported by the analysis of the selected prescribed institutional KPIs, which provides an objective internal benchmark based on selected Standards and KPIs; in this case, the KPIs relate to "quality of teaching, learning and research". The KPI analysis supports the comparative performance of the colleges, both on its own and in combination with the selected Standards analysis.
Table 6: Sample of Performance Scoring Comparisons related to Governance and quality of educational offers WITHIN SAME COLLEGE OF ITS DIFFERENT PROGRAMS in an academic year

| Scaled Scoring Performance | Weights | Program 1 | Program 2 | Program 3 | Program 4 |
| Performance Achievement for AY 2015/2016 | | | | | |
| Standards related to quality of educational offers | | | | | |
| o Standard 4: Learning and Teaching | 250 | 61 | 80 | 110 | 120 |
| o Standard 10: Research | 200 | 48 | 60 | 90 | 130 |
| Performance scores of quality of educational offers | 450 | 109 | 140 | 200 | 250 |
| Selected prescribed Institutional KPI (Score / KPI value) | | | | | |
| o Proportion of full-time equivalent students in proportion to the total number of full-time faculty members | 3 | 0.08 / 60:1 | 1.09 / 50:1 | 1.5 / 30:1 | 1.89 / 10:1 |
| o Percentage of the full-time faculty members obtaining academic or professional awards at the national or international level (%) | 3 | 0.60 / 0.02 | 0.62 / 0.03 | 0.62 / 0.03 | 1.28 / 0.05 |
| o Proportion of students entering undergraduate programs who complete those programs in minimum time | 3 | 0.08 / 0.12 | 0.49 / 0.33 | 0.9 / 0.55 | 1.77 / 0.85 |
| o Proportion of full-time teaching staff with at least one refereed publication during the previous year | 5 | 0.38 / 0.05 | 0.69 / 0.25 | 0.69 / 0.25 | 2.19 / 0.35 |
| o Number of citations in refereed journals in the previous year per full-time equivalent teaching staff | 5 | 0.31 / 0.60 | 1.19 / 1.20 | 1.31 / 1.30 | 2.75 / 1.90 |
| Performance scores of quality of educational offers and research | 19 | 1.45 | 4.08 | 5.02 | 9.88 |
| Overall Performance Score | 469 | 110.45 | 144.08 | 205.02 | 259.88 |
Discussion: This type of analysis is a composite of selected Standards and selected prescribed KPIs to provide a snapshot of the comparative performance of all the programs in the college for academic year 2015/2016. It shows that Program 4 is the "best performing" while Program 1 is the "worst performing" based on the selected Standards 4 and 10, which relate to "quality of educational offers", across the college as a whole. This is also supported by the analysis of the selected prescribed institutional KPIs, which provides an objective internal benchmark based on selected Standards and KPIs; in this case, the KPIs relate to "quality of teaching, learning and research". The KPI analysis supports the comparative performance of the programs within the same college, both on its own and in combination with the selected Standards analysis.
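Each "Score / KPI" pair in Tables 4 through 6 couples a raw KPI value with a scaled score counted towards that KPI's weight. The KSU-QMS scaling rubric itself is not reproduced in this paper, so the sketch below only illustrates the general shape of such a conversion, assuming a simple linear scale against a hypothetical target; the actual rubric would produce different numbers.

```python
# Hypothetical linear scaling of a raw KPI value into a weighted score.
# The target value and the linear rule are assumptions for illustration;
# the rubric behind Tables 4-6 is not specified in the paper.

def scaled_score(raw_value, target, weight):
    """Map a raw KPI value onto [0, weight], linearly against a target."""
    achievement = min(max(raw_value / target, 0.0), 1.0)  # clamp to [0, 1]
    return round(achievement * weight, 2)

# Example: proportion of students completing in minimum time, weighted 3 points,
# against an assumed target of 1.0 (all students completing in minimum time).
print(scaled_score(0.55, 1.0, 3))  # 1.65
print(scaled_score(0.85, 1.0, 3))  # 2.55
```

Whatever the actual rubric, the essential property is the same: raw KPI values on incomparable scales (ratios, percentages, citation counts) are normalized into weighted points that can be summed with the Standards scores into one composite.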
Implications and Discussion
Quality in HEIs has mostly tended towards "qualitative" rather than "quantitative" evaluation, especially of the processes in most of the areas of governance, leadership, management, teaching and learning, research, learning resources, infrastructure and facilities, the IQA system, financial and risk management, human resources management, and societal responsibility, partnerships and
collaborations (Teay, 2012; CUC, 2006 and 2008), where most areas of quality performance are more similar than dissimilar. This is in addition to certain "proxy measures" that were developed to provide a measure of performance in terms of KPIs such as progression rates, faculty-student ratio, research publications per faculty, number of societal responsibility projects, etc. Teay (2014) conducted a comprehensive review of strategic and operational KPIs as used in HEIs and as identified in strategic plans, supported by CUC (2006) and Pollard et al. (2013), which identified 29 key areas where KPIs for HEI performance measures could be developed.
In practice, though, while standards for academic performance and management are defined in most accreditation systems, their performance assessment is most likely to be qualitative and supported by evidence, rather than an objective evaluation of processes or results as used in the MBNQA or EFQM for performance excellence assessment. Only one case of a scoring process used for benchmarking was found, that of the Commonwealth University Benchmarking and Management Club (Wragg, 1998). These accreditation systems are still externalized systems, and normally most HEIs' IQAs mirror the externalized accreditation Standards without a formalized internal system, remaining mostly qualitative in nature.
While aiming for "world-class" education in its KSU 2030 Strategic Plan, KSU has also set up an IQA system (KSU-QMS) that integrates the national standards and sub-standards with the MBNQA's approach to performance evaluation of ADLI and LeTCI and its scoring methodology, aimed at comparison with "best practices", which is a fundamental foundation of the MBNQA. Inadvertently, this IQA approach provided a "good practice", objective framework with strong fundamentals in the evaluation and assessment of its processes and results, and a quantifiable approach towards an internal benchmarking system as discussed earlier. Though this paper has advocated a unique approach to developing an internal benchmarking system, there are key fundamentals that any HEI applying it needs to consider:
1. The HEI's socio-political system - Fundamentally, all HEIs, like any organization, are powered and led by humans (McNair and Leibfried, 1992). Capitalizing on the HEI's human capital means strengthening and sustaining its organizational and individual capacity and capabilities (Teay, 2008 and 2009), the strategic context and content of its direction, and its commitment towards society and delivering on its educational value. This calls for a close look into the leadership, its human empowerment (Schofield, 1998), the socio-cultural dimensions of humanizing teaching and learning, and the social environment of collaboration and cooperation, sharing and learning, all of which fundamentally constitute organizational and individual learning. It also calls for a strong and sustainable CQI (Dew and Nearing, 2004). HEIs need to avoid the typical mistakes made in benchmarking as discussed earlier (Innovation Network, 1997). Failure due to these mistakes, or downplaying the importance and imperatives of the socio-political aspects of the HEI, can undermine any initiative or endeavor towards an efficient and effective IQA system for education sustainability.
2. The HEI's scientific system (Dew and Nearing, 2004) - Education value creation and delivery is generic regardless of discipline. As discussed earlier and evidenced by CUC (2006 and 2008), the key areas of focus in all HEIs' educational systems revolve around the same focused areas. This inadvertently means that most of the core processes (Camp,
1989 and 1995; Chaffee and Sherr, 1992; Clark, 1993) defining, supporting and accomplishing educational value creation and delivery should be designed and developed to maximize value and minimize cost in delivering educational value to the students by the academics and support staff (Shafer & Coate, 1992; Watson, 1993). The education "value chain" that cuts across academic and administrative units could be re-focused or re-engineered to bring about maximal process value.
3. The HEI's technical system (Dew and Nearing, 2004) - If the strong fundamentals of (1) and (2) above are developed, the technical aspects are less of an issue, as information technology (APQC, 1993; Shafer & Coate, 1992) can make the number crunching, the academic process management, and the data and information management less of a burden. This has been accomplished with KSU's development of the electronic version of its IQA system in the form of the E-QMS, which incorporates most aspects of the course specifications and reports, program specifications and reports, the faculty portfolio system, the performance metrics system, the internal audit and assessment system, and the SSR development system, all of which are interfaced with the university data warehouse. In the upcoming four years, KSU will move into a performance management system targeted for 2020. This will be the hallmark whereby the proposed internal benchmarking system will underscore performance comparatives and management across the whole university.
Conclusion
While benchmarking is recognized as a must, benchmarking activities remain disparate and sporadic, and few systematic benchmarking systems exist across the different education systems of the world. Though much progress in benchmarking has been made in the US, owing to its institutional research approach and central data systems, it is still a work in progress when most HEIs are asked to provide benchmarking data or information. In developing countries, much needs to be done, especially at the central or national level, to politically and socially instill this benchmarking culture and practice.
Though benchmarking is normally externalized in nature, there is no reason why an HEI cannot or will not set up its own internal benchmarking system to drive performance through sharing and learning within its own system. This paper has illustrated that such a system can be established, provided the socio-political and scientific aspects are dealt with conscientiously, since the technical aspects supporting the internal benchmarking system are technology based. Such an internal benchmarking system can be created through "out of the box" thinking and openness to innovative change. It calls for strong leadership and commitment from one and all within the system.
In conclusion, the fundamentals of the internal benchmarking system are not much different from those of external benchmarks, as both can offer "good or best practices" to be shared and learned. Both the internal and external benchmarking systems, used collectively, can represent a very strong approach towards CQI, which is what quality management in HEIs is about.
References
Alstete, J.W., (1995), Benchmarking in Higher Education: Adapting Best Practices to Improve Quality, ASHE-ERIC Higher Education Report No. 5

Andersen, P. & Pettersen, P.G., (1996), The Benchmarking Handbook: Step-by-Step Instructions, London: Chapman & Hall

APQC (American Productivity & Quality Center) (1993), The Benchmarking Management Guide, Productivity Press, Cambridge, MA, USA

Bogan, C.E. & English, M.J., (1994), Benchmarking for Best Practices: Winning through Innovative Adaptation, McGraw-Hill, August 1, 1994

Behn, R., (2003), Why Measure Performance? Different Purposes Require Different Measures, Public Administration Review, September-October, Vol. 63, No. 5, pp. 586-606

Camp, R.C. (1989), Benchmarking: The Search for Industry Best Practices that Lead to Superior Performance, Milwaukee, WI, The American Society for Quality Control Press

Camp, R.C. (1995), Business Process Benchmarking: Finding and Implementing Best Practices, Milwaukee, WI, The American Society for Quality Control Press

Chaffee, E. & Sherr, L., (1992), Quality: Transforming Postsecondary Education, ASHE-ERIC Higher Education Report No. 3, Washington DC, ERIC Clearinghouse on Higher Education

CUC (Committee of University Chairs) (2006), CUC Report on the Monitoring of Institutional Performance and the Use of Key Performance Indicators (November 2006)

CUC (Committee of University Chairs) (2008), Report on the Implementation of Key Performance Indicators: Case Study Experiences (June 2008)

Department of Trade and Industry UK, (1998), From Quality to Excellence, www.dti.gov.uk/quality/qms

Dew, J.R. & Nearing, M.M., (2004), Continuous Quality Improvement in Higher Education, American Council on Education / Praeger Series on Higher Education, Praeger Publishers, Westport CT, August 30, 2004, pp. 101

Earl, S., Carden, F., and Smutylo, T., (2010), Outcome Mapping: Building Learning and Reflection into Development Programs, International Development Research Centre, Ottawa, Canada
Fielden, J., (1997), Benchmarking University Performance, CHEMS monograph, 1997

Innovation Network, (1994), Applying Benchmarking in Higher Education, Insight Series, Brentwood, Tennessee, 1994

Pollard, E., Williams, M., Williams, Joy., Bertram, C. and Buzzeo, J., of IES (Institute for Employment Studies) and Drever, E., Griggs, J. and Coutinho, S., of NatCen (National Center for Social and Economic Research), entitled:
a. How Should We Measure Higher Education? A Fundamental Review of the Performance Indicators. Part One: The Synthesis Report (November 2013)
b. How Should We Measure Higher Education? A Fundamental Review of the Performance Indicators. Part Two: The Evidence Report (November 2013)

McNair, Carol J. and Leibfried, Kathleen H.J., (1992), Benchmarking: A Tool for Continuous Improvement, HarperCollins, 1992

Kaplan, R.S. and Norton, D.P. (1992), "The Balanced Scorecard: Measures That Drive Performance", Harvard Business Review (January-February): 71-79

Karlöf, B. & Östblom, S. (1995), Benchmarking. Tuottavuudella ja laadulla mestariksi, Gummerus

Kempner, D.E., (1993), The Pilot Years: The Growth of the NACUBO Benchmarking Project, NACUBO Business Officer 27(6), pp. 21-31

KSU (King Saud University) (2012), KSU-QMS King Saud University Quality Management System, 3rd Edition, King Saud University, Riyadh, Kingdom of Saudi Arabia, 2012

Serrat, O. (2010), The Perils of Performance Measurement, Washington, DC: Asian Development Bank

Shafer, B.S. & Coate, L.E., (1992), Benchmarking in Higher Education: A Tool for Improving Quality and Reducing Cost, Business Officer, 26(5), pp. 28-35

Spendolini, M., (1992), The Benchmarking Book, New York: Amacom, 1992

UNESCO, (1998), Benchmarking in Higher Education, a study conducted by the Commonwealth Higher Education Management Service, UNESCO, Paris, including:
Schofield, A., An Introduction to Benchmarking in Higher Education
Schofield, A., Benchmarking: An Overview of Approaches and Issues in Implementation
Farquhar, R., Higher Education Benchmarking in Canada and the United States
Massaro, V., Benchmarking in Australian Higher Education
Lund, H., Benchmarking in UK Higher Education
Schreiterer, U., Benchmarking in European Higher Education
Wragg, C., The Commonwealth University Management Benchmarking Club

Teay, S., (2008), Sufficiency and Sustainability: Institutional Capacity Building for HE, Proceedings of the 8th Annual SEAAIR Conference on Institutional Capacity Building toward HE Competitive Advantage, Surabaya, Indonesia, 4th-6th November 2008

Teay, S., (2008), Sufficiency and Sustainability: Individual Capacity Building for HE, scholarly paper presented at the 48th Annual 2008 Association for Institutional Research Forum, Seattle, Washington, 24th-29th May 2008

Teay, S., (2009), Institution and Individual Sufficiency and Sustainability of the Future HEI, Proceedings of the 9th Annual SEAAIR (South East Asia Association for Institutional Research) Conference, 13th-15th October 2009, Penang, Malaysia

Teay, S., (2012), Commonalities in Diversity, Proceedings of the APQN 2012 Conference, Siem Reap, Cambodia, 28th February to 2nd March 2012

Teay, S. (2013), KSU Internal Benchmarking System, Internal Research Paper of Deanship of Quality DoQ 1/2013, King Saud University, Riyadh, Kingdom of Saudi Arabia, 2013

Teay, S., (2014), Developing Strategic KPIs of King Saud University, King Saud University Research Report, Deanship of Quality DoQ #1/2014, January-April 2014

Tuominen, K., (1993), Benchmarking-prosessiopas. Opi ja kehitä kilpailijoita nopeammin, Metalliteollisuuden kustannus, Tammer-Paino

Tuominen, K., (1997), Muutoshallinnan mestari. Kuinka toteuttaa strategiset suunnitelmat kilpailijoita nopeammin?, Suomen Laatuyhdistys ry

Watson, G.H., (1993), Strategic Benchmarking: How to Rate Your Company's Performance against the World's Best, John Wiley, New York, USA

Zairi, M., (1992), Competitive Benchmarking: An Executive Guide, Technical Communications (Publishing) Ltd., Letchworth, UK

Zairi, M., (1996), Effective Benchmarking: Learning from the Best, London: Chapman & Hall