
Quantifier Guided Aggregation Using

OWA Operators
Ronald R. Yager
Machine Intelligence Institute, Iona College, New Rochelle,
New York 10801

We consider multicriteria aggregation problems where, rather than requiring that all the
criteria be satisfied, we need only satisfy some portion of the criteria. The proportion
of the criteria required is specified in terms of a linguistic quantifier such as most. We
use a fuzzy set representation of these linguistic quantifiers to obtain decision functions
in the form of OWA aggregations. A methodology is suggested for including importances
associated with the individual criteria. A procedure for determining the measure of
"orness" directly from the quantifier is suggested. We introduce an extension of the
OWA operators which involves the use of triangular norms. © 1996 John Wiley & Sons, Inc.

I. INTRODUCTION
Starting with the classic work of Bellman and Zadeh [1], fuzzy set theory
has been used as a tool to develop and model multicriteria decision problems.
In this framework the criteria are represented as fuzzy subsets over the space of
decision alternatives and fuzzy set operators are used to aggregate the individual
criteria to form the overall decision function. As originally suggested by Bellman
and Zadeh, the criteria are combined by the use of an intersection operation
which implicitly implies a requirement that all the criteria be satisfied by a
solution to the problem. As noted by Yager [2,3], this condition may not always
be the appropriate relationship between the criteria. For example, a decision
maker may be satisfied if most of the criteria are satisfied. In this work we look
at the issue of the formulation of these softer decision functions which we call
quantifier guided aggregations. In Ref. 4 we suggested the use of the Ordered
Weighted Averaging (OWA) operators as a tool to implement these kinds of
aggregations. Here we develop this approach further by considering environ-
ments in which the individual criteria have importances associated with them.
We also provide for an extension of the basic OWA aggregation which allows
us to include triangular norm operations. A number of other related issues such
as the measure of orness and determination of weights in OWA aggregation
are discussed.

INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, VOL. 11, 49-73 (1996)


© 1996 John Wiley & Sons, Inc. CCC 0884-8173/96/010049-25

II. LINGUISTIC QUANTIFIERS


In natural language we find many examples of what Zadeh [5] called linguistic
quantifiers. These objects are exemplified by terms such as most, many, at
least half, some, and few. Classical logic uses only two of these terms, the
existential quantifier, there exists, and the universal quantifier, all, in forming
logical propositions. In Ref. 5 Zadeh suggested a formal representation of
these linguistic quantifiers using fuzzy sets. Furthermore, Zadeh distinguished
between two classes of linguistic quantifiers: absolute and relative. Absolute
quantifiers can be represented as a fuzzy subset of the non-negative reals. In
particular, if we have an absolute quantifier, such as about 10, we can represent
it as a fuzzy subset Q of the non-negative reals, R⁺. In this representation, for
any x ∈ R⁺ we use Q(x) to indicate the degree to which x satisfies the concept
conveyed by the linguistic quantifier. A relative or proportional linguistic quantifier
indicates a proportional quantity such as most, few, or about half. Zadeh
suggested that any relative quantifier can be expressed as a fuzzy subset Q of
the unit interval, I. Again, in this representation, for any proportion y ∈ I, Q(y)
indicates the degree to which y satisfies the concept conveyed by the term Q.
In Ref. 6 Yager further distinguished three categories of these relative
quantifiers. A fuzzy subset Q of the real line is called a Regular Increasing
Monotone (RIM) quantifier if

1. Q(0) = 0;  2. Q(1) = 1;  3. Q(x) ≥ Q(y) if x > y.

Examples of this kind of quantifier are all, most, many, at least α.


A fuzzy subset Q of the real line is called a Regular Decreasing Monotone
(RDM) quantifier if

1. Q(0) = 1;  2. Q(1) = 0;  3. Q(y) ≥ Q(x) if x > y.

Examples of these kinds of quantifier are at most one, few, at most α.


A fuzzy subset Q of the real line is called a Regular UniModal (RUM)
quantifier if

1. Q(0) = 0
2. Q(1) = 0
3. there exist two values a, b ∈ I, with a < b, such that
(i) For y < a, Q(x) ≤ Q(y) if x < y.
(ii) For x ∈ [a, b], Q(x) = 1.
(iii) For x > b, Q(x) ≥ Q(y) if x < y.
An example of this class is about α.
Some interesting relationships exist between these three classes of relative
quantifiers. Noting that the antonym P of a fuzzy subset F [7] on the real line is defined
as P(x) = F(1 − x), we see that if Q is a RIM quantifier, then its antonym is a
RDM quantifier and vice versa. Examples of these antonym pairs are few and
many, and at least α and at most α.

Furthermore, any RUM quantifier can be expressed as the intersection of
a RIM and a RDM quantifier. Assume Q is a unimodal quantifier such that
Q(x) = 1 when x ∈ [a, b]. Let Q_1(x) be a quantifier such that

(1) Q_1(x) = Q(x) for x ≤ b;  (2) Q_1(x) = 1 for x > b.

It is easily seen that Q_1(x) is a monotone increasing quantifier. Let Q_2(x) be
a quantifier such that

(1) Q_2(x) = 1 for x < a;  (2) Q_2(x) = Q(x) for x ≥ a.

This can easily be seen to be a monotone decreasing quantifier. Furthermore, if
we define

Q⁺ = Q_1 ∩ Q_2

where

Q⁺(x) = T[Q_1(x), Q_2(x)]

and T is any t-norm [8], e.g., min or product, then Q⁺(x) = Q(x). This holds for
any t-norm because at every x at least one of Q_1(x), Q_2(x) equals one, and
T(a, 1) = a for any t-norm.
The quantifier for all is represented by the fuzzy subset Q_*, where

Q_*(1) = 1 and Q_*(x) = 0 for all x ≠ 1.

The quantifier there exists, not none, is defined as

Q^*(0) = 0 and Q^*(x) = 1 for all x ≠ 0.

Both of these are examples of RIM quantifiers.
The antonym of all is the quantifier Q̂ with Q̂(0) = 1 and Q̂(x) = 0 for all
x ≠ 0. It is semantically equivalent to the linguistic term none.
Consider the parameterized fuzzy subset defined on I such that

Q(r) = r^α,  α ≥ 0.

We can see that this formulation defines a family of RIM quantifiers. Three
special cases of this family are worth noting:

1. For α = 1 we get Q(r) = r. This is called the unitor quantifier.
2. For α → ∞ we get Q_*, the universal quantifier.
3. For α → 0 we get Q^*, the existential quantifier.

Another parameterized class of fuzzy subsets that provides a family of
RIM quantifiers is

Q(r) = 2/(1 + e^(−λr/(1−r))) − 1,

λ ∈ [0, ∞]. Again, here as λ → ∞ we get Q^*, while λ → 0 gives us Q_*. The
following theorem shows that all RIM quantifiers are bounded by Q^* and Q_*.

THEOREM. Assume Q is any RIM quantifier. Then for all x ∈ I,

Q_*(x) ≤ Q(x) ≤ Q^*(x).

Proof. Since 0 ≤ Q(x) ≤ 1, the result follows from the definitions of RIM, Q^*,
and Q_*.

III. QUANTIFIER GUIDED AGGREGATION


Assume we are faced with a decision problem in which we have a collection
of n criteria of interest. We denote these criteria as A_1, . . . , A_n. For any
solution x we can evaluate the degree to which it satisfies each of the criteria;
we shall denote this as A_i(x) ∈ [0, 1]. In this framework A_i can be viewed as
a fuzzy subset over the set of alternatives. In order to determine the appropriateness
of a particular alternative x as the solution to our problem, we must
aggregate its scores on the individual criteria to find some overall single value
to associate with the alternative. In order to obtain this overall evaluation, to
implement the aggregation, some information must be provided on the relationship
between the criteria that are to be aggregated. In their classic work Bellman
and Zadeh [1] suggested an approach to this problem which uses

Agg(A_1(x), A_2(x), . . . , A_n(x)) = Min_i[A_i(x)].

Essentially, this approach assumes that we desire all the criteria be satisfied
by an acceptable solution. One then selects, as the best solution, the alternative
with the highest aggregated value.
In Ref. 2 Yager suggested a generalization of this approach which he called
quantifier guided aggregation. In order to formally express this technique we
must first recall the OWA aggregation operator [4,9].

DEFINITION. An aggregation operator F,

F: Iⁿ → I,

is called an Ordered Weighted Averaging (OWA) operator of dimension n if it
has associated with it a weighting vector W = [w_1, . . . , w_n]ᵀ such that

1. w_i ∈ [0, 1] and 2. Σ_{i=1}^{n} w_i = 1,

and where

F(a_1, . . . , a_n) = Σ_{j=1}^{n} w_j b_j,

in which b_j is the jth largest of the a_i.
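As an illustration of the definition, the reorder-then-weight step can be sketched in a few lines of Python; the function name and the sample vectors here are ours, not the paper's:

```python
def owa(weights, args):
    """Ordered Weighted Averaging: the weights are applied to the
    descending-sorted arguments, not to particular criteria."""
    assert abs(sum(weights) - 1.0) < 1e-9 and all(w >= 0 for w in weights)
    b = sorted(args, reverse=True)          # b_j = jth largest argument
    return sum(w * bj for w, bj in zip(weights, b))

# W^* = [1,0,0] recovers the max, W_* = [0,0,1] the min.
print(owa([1, 0, 0], [0.2, 0.9, 0.5]))   # -> 0.9
print(owa([0, 0, 1], [0.2, 0.9, 0.5]))   # -> 0.2
```

With the uniform vector [1/3, 1/3, 1/3] the same function returns the simple average, matching the special cases discussed below.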



An essential feature of this aggregation is the reordering operation, a nonlinear
operator, that is used in the process. Thus in the OWA aggregation the
weights are not associated with a particular argument but with the ordered
position of the arguments.
In Ref. 4 Yager shows that OWA aggregation has the following properties:

(1) Commutativity: the indexing of the arguments is irrelevant.
(2) Monotonicity: if a_i ≥ d_i for all i, then F(a_1, . . . , a_n) ≥ F(d_1, . . . , d_n).
(3) Idempotency: F(a, . . . , a) = a.
(4) Boundedness: Max_i[a_i] ≥ F(a_1, . . . , a_n) ≥ Min_i[a_i].

We note that these conditions imply that the OWA operator is a mean operator
[10, 11].
The form of the aggregation is very strongly dependent upon the weighting
vector used. In Ref. 12 Yager investigates various different families of OWA
aggregation operators. A number of special cases of weighting vector are worth
noting. The weighting vector W^* defined such that

w_1 = 1 and w_j = 0 for all j ≠ 1

gives us the aggregation F^*(a_1, . . . , a_n) = Max_i[a_i]. Thus W^* provides the
largest possible aggregation.
The weighting vector W_* defined such that

w_n = 1 and w_i = 0 for i ≠ n

gives us the aggregation F_*(a_1, . . . , a_n) = Min_i[a_i]. This weighting provides
the smallest aggregation of the arguments.
The weighting vector W_ave defined such that w_i = 1/n for all i gives us the
simple average

F_ave(a_1, . . . , a_n) = (1/n) Σ_{i=1}^{n} a_i.

The weighting vector W^k defined such that w_k = 1 and w_i = 0 for i ≠ k gives
us F(a_1, . . . , a_n) = b_k, where b_k is the kth largest of the a_i.
One other weighting worth noting is one we shall call the Olympic aggregate.
In this case

w_1 = 0, w_n = 0, and w_i = 1/(n − 2) for i ≠ 1 or n.
Having introduced the OWA aggregation operator we are now in a position
to describe the process of quantifier guided aggregation. Again consider that

we have a collection A_1, . . . , A_n of criteria. These criteria are represented
as fuzzy subsets over the set of alternatives X. In the process of quantifier guided
aggregation the decision maker provides a linguistic quantifier Q indicating the
proportion of criteria he feels is necessary for a good solution. Essentially, in this
framework the decision maker is providing an agenda indicating the structure to
be used to aggregate the individual criteria to get an overall decision function.
The form of decision function implicit in this approach is
Q criteria are satisfied by a good solution.
The formal procedure used to evaluate this decision function is expressed
in the following. The quantifier is used to generate an OWA weighting vector
W of dimension n. This weighting vector is then used in an OWA aggregation
to determine the overall evaluation for each alternative. For each alternative
the arguments of this OWA aggregation are the satisfactions of the alternative to
each of the criteria, A_i(x), i = 1, . . . , n. Thus the process used in quantifier
guided aggregation is as follows:

(1) Use Q to generate a set of OWA weights, w_1, . . . , w_n.
(2) For each alternative x in X calculate the overall evaluation

D(x) = F(A_1(x), A_2(x), . . . , A_n(x)),

where F is an OWA aggregation using the weights found in (1).
The procedure used for generating the weights from the quantifier depends
upon the type of quantifier provided. We shall here consider the case in which
Q is a RIM quantifier. In this case the weights are generated as

w_i = Q(i/n) − Q((i − 1)/n)  for i = 1, . . . , n.

Because of the nondecreasing nature of Q it follows that w_i ≥ 0. Furthermore,
from the regularity of Q, Q(1) = 1 and Q(0) = 0, it follows that Σ_i w_i = 1. Thus
we see that the weights generated are an acceptable class of OWA weights.
The use of a RIM quantifier to guide the aggregation essentially implies
that the more criteria satisfied, the better the solution. This condition seems to
be one that is naturally desired in criteria aggregation. Thus most quantifier
guided aggregation would seem to be based upon the use of these types of
quantifiers. Notwithstanding the above observation, the technique of quantifier
guided aggregation can be applied to other types of quantifiers. In Refs. 9 and
13 Yager describes the process used in the case in which we have RDM and
RUM quantifiers. We shall not pursue this issue here and in the following
assume all quantifiers are RIM.
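The two-step procedure can be sketched as follows; here the quantifier most is modeled, as in the example later in the paper, by Q(r) = r², and the satisfaction scores are illustrative (a sketch, not the paper's code):

```python
def rim_weights(Q, n):
    """Step 1: OWA weights from a RIM quantifier, w_i = Q(i/n) - Q((i-1)/n)."""
    return [Q(i / n) - Q((i - 1) / n) for i in range(1, n + 1)]

def quantifier_guided_eval(Q, scores):
    """Step 2: overall evaluation D(x) = F(A_1(x), ..., A_n(x))."""
    w = rim_weights(Q, len(scores))
    b = sorted(scores, reverse=True)          # b_j = jth largest satisfaction
    return sum(wi * bi for wi, bi in zip(w, b))

most = lambda r: r ** 2                       # "most" as a RIM quantifier
scores = [0.7, 1.0, 0.5, 0.6]                 # satisfactions A_i(x), our example
print(round(quantifier_guided_eval(most, scores), 3))   # -> 0.6
```

Because Q(r) = r² puts most of its weight mass on the later (smaller) ordered positions, the result 0.6 sits below the simple average of the scores, as expected of an "and-like" quantifier.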

IV. MEASURE OF ORNESS OF A QUANTIFIER


As we previously noted, the class of RIM quantifiers is bounded by the quantifiers
"there exists," Q^*, and "for all," Q_*. Thus for any quantifier of this type
Q_*(r) ≤ Q(r) ≤ Q^*(r). As we have indicated, Q^* leads to the weighting vector

W^* = [1, 0, . . . , 0]ᵀ,

which results in the OWA aggregation D(x) = Max_i[A_i(x)]. This can be seen as
an "oring" of the criteria. At the other extreme, Q_* leads to the weighting vector

W_* = [0, . . . , 0, 1]ᵀ,

which results in the OWA aggregation D(x) = Min_i[A_i(x)]. This can be seen as
an "anding" of the criteria. Thus we see that this family of quantifiers provides
for aggregation of the satisfaction to the criteria lying between an "anding"
and an "oring."
In Ref. 4 Yager associated with any OWA aggregation a measure of its
degree of orness. In particular, if we have a weighting vector W of dimension
n, then the measure of orness is defined as

orness(W) = (1/(n − 1)) Σ_{j=1}^{n} w_j (n − j).

It is easy to show that this measure lies in the unit interval. Furthermore, it
was shown in Ref. 4 that orness(W^*) = 1, orness(W_ave) = 0.5, and
orness(W_*) = 0. We can easily extend this measure to the case in which the weights
are generated by any quantifier. Given a linguistic quantifier Q, if we generate
the weights by w_j = Q(j/n) − Q((j − 1)/n), then we can associate with this
quantifier a degree of orness as

orness(Q) = (1/(n − 1)) Σ_{j=1}^{n} (n − j)[Q(j/n) − Q((j − 1)/n)].

Algebraic manipulation of this formula leads to the form

orness(Q) = (1/(n − 1)) Σ_{j=1}^{n−1} Q(j/n).

Furthermore, if we let n → ∞ then we can show that

orness(Q) = ∫₀¹ Q(r) dr.

Figure 1. The threshold quantifier: Q(r) = 0 for r < g and Q(r) = 1 for r ≥ g.

Thus the nominal degree of orness associated with a RIM linguistic quantifier
is equal to the area under the quantifier.
This definition of the measure of orness of a quantifier provides a
simple, useful method of obtaining this measure. Consider, for example (see
Fig. 1), the class of quantifiers defined by

Q(r) = 0 for r < g,
Q(r) = 1 for r ≥ g.

In this case, calculating the degree of orness as the area under the quantifier,
we get

orness(Q) = 1 − g.

We note that in the special case when g = 0 we get the pure "or" with
orness(Q) = 1, and when g = 1 we get the pure "and" with orness(Q) = 0.
If we consider the quantifier Q(r) = r^α, then

orness(Q) = ∫₀¹ r^α dr = 1/(α + 1).
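Both forms of the measure are easy to check numerically; the following is a sketch (the quadrature step count is our choice, not the paper's):

```python
def orness_w(w):
    """Degree of orness of a weighting vector W of dimension n."""
    n = len(w)
    return sum(wj * (n - 1 - j) for j, wj in enumerate(w)) / (n - 1)

def orness_q(Q, steps=10_000):
    """Limiting orness of a RIM quantifier: the area under Q on [0, 1],
    approximated here by the trapezoid rule."""
    h = 1.0 / steps
    return h * sum((Q(k * h) + Q((k + 1) * h)) / 2 for k in range(steps))

print(orness_w([1, 0, 0]))                     # W^*: pure "or"  -> 1.0
print(orness_w([0, 0, 1]))                     # W_*: pure "and" -> 0.0
print(round(orness_q(lambda r: r ** 2), 3))    # -> 0.333, i.e., 1/(α+1) for α = 2
```

The last line confirms the closed form above: for Q(r) = r^α the area under the quantifier is 1/(α + 1).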

A number of interesting properties can be associated with this measure of
orness. If Q_1 and Q_2 are two quantifiers such that Q_1(x) ≥ Q_2(x) for all x, then
orness(Q_1) ≥ orness(Q_2). In addition, since any regular quantifier is normal, we
see that orness(Q) = 0 iff Q = Q_*.
In Refs. 14 and 15 Yager associated with any fuzzy subset a measure called
the degree of specificity. The specificity measures the degree to which the fuzzy
subset consists of exactly one element. In Ref. 16 he describes a number of
properties required of any measure of specificity. Using these properties it can
be shown that for the class of RIM quantifiers the measure of orness is inversely
related to the measure of specificity. Thus

Sp(Q) = 1 − orness(Q);

increasing the orness decreases the specificity. In particular, the pure "or"
quantifier, Q^*, has maximal orness and minimal specificity. On the other hand,
the pure "and" quantifier, Q_*, has minimal orness and maximal specificity. In
this way we can introduce an idea of specificity of aggregation, with the "and"
being the most specific and the "or" the least specific.
In some applications of fuzzy subsets, particularly in the theory of approximate
reasoning, considerable use is made of the principle of minimal specificity [17-20].
This principle says that if A is some operation on fuzzy subsets which
can be implemented in various different ways, then select the implementation
that leads to a resulting fuzzy subset with minimal specificity. Under the imperative
of this principle, when no further requirements are made on a quantifier
other than it be RIM, the preferred choice is the quantifier Q^* in that it leads
to the minimal specificity.

V. ALTERNATIVE GENERATION OF WEIGHTS


In the framework of this article we are mainly focusing on the issue of
developing aggregation structures in situations in which we have at our disposal
some linguistic quantifier to guide the aggregation. In this framework, as we
indicated, we use w_j = Q(j/n) − Q((j − 1)/n) to provide the
weights associated with the quantifier. In other situations, where a quantifier is
not specified, we may have to use different methods to generate the weights.
One very useful approach to this problem was suggested by O'Hagan [21]. In this
approach, rather than starting with a quantifier Q, we have at our disposal a
measure of orness, α, associated with the aggregation process, and we generate
the weights by solving the following constrained optimization problem:
Max: − Σ_{i=1}^{n} w_i ln w_i

such that

1. w_i ≥ 0
2. Σ_{i=1}^{n} w_i = 1
3. (1/(n − 1)) Σ_{i=1}^{n} w_i (n − i) = α.

In this approach, constraints one and two assure us that the weights satisfy
the OWA conditions. Constraint three assures us that the weights have an
orness value of α. The objective function is a measure of the entropy or dispersion
associated with the weights. O'Hagan calls the weights generated by this technique
ME-OWA weights, indicating maximal entropy weights. In choosing this
objective function we are essentially selecting the weights in a manner that
makes maximal use of the information in the arguments. Consider a situation
in which α = 0.5. This degree of orness can be obtained in a number of different
ways; among these are: w_1 = 0.5 and w_n = 0.5; w_{(n+1)/2} = 1; and w_i = 1/n for all
i. The case in which w_i = 1/n for all i, the one selected by the ME-OWA algorithm,
is the one which most uniformly uses the information in the arguments.
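O'Hagan's program can be solved without a general-purpose optimizer: the Lagrangian conditions force the maximal-entropy weights into the geometric form w_i ∝ q^(n−i), and the orness of such weights is monotone in q, so a bisection on q suffices. The sketch below assumes that reduction; it is our implementation, not O'Hagan's original algorithm:

```python
def me_owa(n, alpha):
    """Maximal-entropy (ME-OWA) weights for a target orness alpha in (0, 1).
    The entropy maximizer has the geometric form w_i proportional to q^(n-i);
    we locate q by bisection on the (monotone) orness constraint."""
    def weights(q):
        raw = [q ** (n - i) for i in range(1, n + 1)]
        s = sum(raw)
        return [r / s for r in raw]
    def orness(w):
        return sum(wi * (n - i) for i, wi in zip(range(1, n + 1), w)) / (n - 1)
    lo, hi = 1e-9, 1e9
    for _ in range(200):
        q = (lo * hi) ** 0.5            # bisect on a log scale
        if orness(weights(q)) < alpha:
            lo = q
        else:
            hi = q
    return weights((lo * hi) ** 0.5)

print([round(x, 3) for x in me_owa(4, 0.5)])   # -> [0.25, 0.25, 0.25, 0.25]
```

For α = 0.5 the bisection drives q to 1 and recovers exactly the uniform weights, matching the discussion above; for α > 0.5 the weights decay geometrically toward the later positions.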
Another class of problems involving the generation of the weights arises in
situations in which we have observations, each consisting of a collection of
arguments and an associated aggregated value, and we want to use these to
generate the weights. In Ref. 22 Filev and Yager suggested an approach to
solving this problem. In the following, we shall consider some alternatives to
that approach.
Assume we have an observation consisting of a set of arguments,
A_1(x), . . . , A_n(x), and an associated aggregated value D(x) = d. We can now
consider the ordering of these scores to give us b_1, . . . , b_n along with d. In
the spirit of the approach suggested by O'Hagan, we can consider the generation
of the weights underlying this aggregation process by the following mathematical
programming problem:

Max: − Σ_{i=1}^{n} w_i ln w_i

such that

1. w_i ≥ 0
2. Σ_{i=1}^{n} w_i = 1
3. Σ_{i=1}^{n} w_i b_i = d.

In the above we replaced condition three by the requirement that the aggregation
equals d.
In situations in which we have a collection of observations of the above
type we can proceed as follows. Assume we have a collection of data items, each
of the type (B_j, d_j), where B_j is an ordered vector B_j = [b_{j1}, b_{j2}, . . . , b_{jn}]
consisting of the arguments and d_j is its aggregated value. For each observation we can
solve the preceding mathematical programming problem to obtain a weighting vector
W_j. We can then apply some further procedure to obtain a weighting vector W that
best matches this collection of vectors. For example, we could use a least
squares fit.

Another approach could be to convert each of these W_j into its associated
measure of orness, orness(W_j) = α_j. We can then find the average of these
degrees of orness. Assuming we have K samples, then

ᾱ = (1/K) Σ_{j=1}^{K} α_j.

Using O'Hagan's original approach we can then use ᾱ to generate an OWA
weighting vector W which can be considered as representative of the aggregation
process generating the data.
As we noted above, in many applications of fuzzy set theory an often
used imperative for furnishing missing information is the principle of minimal
specificity. In Refs. 19 and 20 Dubois and Prade discuss the use of this principle
in considerable detail. Motivated by this principle we can suggest another
procedure for generating weights from observations. Assume we have an observation
b_1, . . . , b_n and d. The b_i are the ordered arguments and d the
aggregated value. As we have already indicated, the measure of orness is
inversely related to the measure of specificity. Using this relationship we can
consider the following mathematical programming problem to generate the
underlying weights:

Max: (1/(n − 1)) Σ_{i=1}^{n} w_i (n − i)

such that

1. w_i ≥ 0
2. Σ_{i=1}^{n} w_i = 1
3. Σ_{i=1}^{n} w_i b_i = d.
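Because this program is linear with two equality constraints, an optimal solution concentrates the weight on at most two ordered positions, so for small n it can be solved by exact enumeration of index pairs. The sketch below uses that observation; the function name and the sample data are ours:

```python
def max_orness_weights(b, d):
    """Minimal-specificity (maximal-orness) OWA weights consistent with one
    observation: ordered arguments b (descending) aggregating to the value d.
    The LP optimum uses at most two positions, so we enumerate pairs."""
    n = len(b)
    best, best_w = -1.0, None
    for i in range(n):
        for j in range(i, n):
            if i == j:                       # all weight on one position
                if abs(b[i] - d) > 1e-12:
                    continue
                w = [0.0] * n; w[i] = 1.0
            else:                            # weight split over positions i, j
                if abs(b[i] - b[j]) < 1e-12:
                    continue
                wi = (d - b[j]) / (b[i] - b[j])
                if not (0.0 <= wi <= 1.0):
                    continue
                w = [0.0] * n; w[i], w[j] = wi, 1.0 - wi
            orness = sum(wk * (n - 1 - k) for k, wk in enumerate(w)) / (n - 1)
            if orness > best:
                best, best_w = orness, w
    return best_w

# ordered arguments b with observed aggregate d = 0.6 (illustrative data)
w = max_orness_weights([0.9, 0.7, 0.4, 0.2], 0.6)
print([round(x, 3) for x in w])              # -> [0.4, 0.0, 0.6, 0.0]
```

The recovered weights reproduce the observation (0.4·0.9 + 0.6·0.4 = 0.6) while pushing as much weight as possible toward the early, "or-like" positions.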

In the previous section we described a procedure for obtaining the OWA
weights of dimension n from a given RIM quantifier; the inverse problem is that
of determining a quantifier associated with a given weighting vector. Assume W
is an OWA vector of dimension n; we shall associate with this vector a quantifier
Q. Furthermore, we can assign to this quantifier some fixed values:

Q(0) = 0 and Q(i/n) = Σ_{k=1}^{i} w_k for i = 1, . . . , n.

We must now provide for the values of Q(r) between these fixed points. One
approach, in the spirit of maximal entropy, is to use a piecewise linear construction
of Q. Thus in this case

Q(r) = Q((i − 1)/n) + w_i(nr − i + 1)  for (i − 1)/n ≤ r ≤ i/n.

A second approach, in the spirit of minimal specificity (maximal orness), is to
generate Q from the weights as the step function

Q(r) = Q(i/n) = Σ_{k=1}^{i} w_k  for (i − 1)/n < r ≤ i/n,

with Q(0) = 0, the largest monotone quantifier consistent with the fixed points.

One important application of the generation of quantifiers from weighting
vectors is in the extension of OWA operators. Assume we have an OWA
operator of dimension n with weighting vector W. Consider now that we are
interested in extending this aggregation to the case in which our dimension of
aggregation is m. In this case we can proceed as follows. Assume there exists
some underlying quantifier Q. Generate the form of the quantifier by the preceding
approach, for example

Q(r) = Σ_{k=1}^{i−1} w_k + w_i(nr − i + 1)  for (i − 1)/n ≤ r ≤ i/n.

We then can generate the weights associated with the aggregation of dimension
m as

v_j = Q(j/m) − Q((j − 1)/m)  for j = 1, . . . , m.
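The piecewise linear construction and the redimensioning step can be sketched together; the code and the illustrative 3-dimensional vector stretched to dimension 6 are ours:

```python
def quantifier_from_weights(w):
    """Piecewise-linear RIM quantifier interpolating the fixed points
    Q(i/n) = w_1 + ... + w_i (the maximal-entropy construction)."""
    n = len(w)
    cum = [0.0]
    for wi in w:
        cum.append(cum[-1] + wi)             # cumulative sums Q(i/n)
    def Q(r):
        if r >= 1.0:
            return 1.0
        i = int(r * n)                       # segment [i/n, (i+1)/n]
        return cum[i] + (r * n - i) * (cum[i + 1] - cum[i])
    return Q

def redimension(w, m):
    """Extend an n-dimensional OWA weighting vector to dimension m via its
    generated quantifier: v_j = Q(j/m) - Q((j-1)/m)."""
    Q = quantifier_from_weights(w)
    return [Q(j / m) - Q((j - 1) / m) for j in range(1, m + 1)]

print([round(v, 3) for v in redimension([0.5, 0.3, 0.2], 6)])
# -> [0.25, 0.25, 0.15, 0.15, 0.1, 0.1]
```

Doubling the dimension simply splits each original weight in half here, which is the behavior one would expect of a shape-preserving extension.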

VI. IMPORTANCE WEIGHTED QUANTIFIER GUIDED AGGREGATION
In this section we turn to the problem of quantifier guided aggregation in
environments in which the criteria to be aggregated have importances associated
with them. Related approaches can be found in Refs. 4 and 23. In this environment
we shall again assume we have a set of n criteria expressed as fuzzy
subsets over the space of alternative solutions X. We again denote these criteria
as A_i, where A_i(x) is the satisfaction of alternative x to the ith criterion. In introducing
quantifier guided aggregation we essentially considered as our overall decision
function the statement Q criteria are satisfied by x, where Q is some RIM
linguistic quantifier. We now additionally assume that we can associate with
each criterion a value V_i indicating the importance of that criterion. We shall
consider the V_i to lie in the unit interval, V_i ∈ [0, 1], with the understanding
that the larger the value, the more important the criterion. We make no restrictions
on the total value of the importances; that is, they need not sum to one.
Again considering Q to be some RIM quantifier, we now assume the overall
evaluation function to be Q important criteria are satisfied by x. In the following
we describe the procedure to evaluate the overall satisfaction of alternative x.
First, we note that for a given alternative x we have a collection of n pairs (V_i,
A_i(x)). The first step in this process is to order the A_i(x) in descending order.
Thus we let b_j be the jth largest of the A_i(x). Furthermore, we let u_j denote the
importance associated with the criterion that has the jth largest satisfaction to
x. Thus if A_k(x) is the largest of the A_i(x), then b_1 = A_k(x) and u_1 = V_k. At this point
we can consider our information regarding the alternative x to be a collection of
n pairs (u_j, b_j) where the b_j are in descending order.
Our next step is to obtain the OWA weights associated with this aggregation.
To obtain these weights we proceed as follows:

w_j(x) = Q((Σ_{k=1}^{j} u_k)/T) − Q((Σ_{k=1}^{j−1} u_k)/T),

where T = Σ_{k=1}^{n} u_k, the total sum of the importances. Having obtained the
weights we can now calculate the evaluation associated with x, which we
denote as D(x):

D(x) = F_W(b_1, . . . , b_n) = Σ_{j=1}^{n} w_j(x) b_j.

We emphasize that the weights used in this aggregation will generally be different
for each x. This is due to the fact that the ordering of the A_i will be different
and in turn lead to different u_j. The following example illustrates the application
of the above method.

EXAMPLE. Assume we have two alternatives x and y. We shall assume four
criteria A_1, A_2, A_3, A_4. The importances associated with these criteria are
V_1 = 1, V_2 = 0.6, V_3 = 0.5, and V_4 = 0.9. Furthermore, the satisfaction of each
of the criteria by the alternatives is given by the following:

A_1(x) = 0.7  A_2(x) = 1  A_3(x) = 0.5  A_4(x) = 0.6
A_1(y) = 0.6  A_2(y) = 0.3  A_3(y) = 0.9  A_4(y) = 1

We shall assume the quantifier guiding this aggregation to be "most," which is
defined by Q(r) = r². We first consider the aggregation for x. In this case the
ordering of the criteria satisfactions gives us

      b_j   u_j
A_2   1     0.6
A_1   0.7   1
A_4   0.6   0.9
A_3   0.5   0.5

We note T = Σ_{j=1}^{4} u_j = 3. Calculating the weights associated with x, which we
denote w_i(x), we get

w_1(x) = Q(0.6/3) − Q(0) = (0.2)² − 0 = 0.04



w_2(x) = Q(1.6/3) − Q(0.6/3) = 0.28 − 0.04 = 0.24

w_3(x) = Q(2.5/3) − Q(1.6/3) = 0.69 − 0.28 = 0.41

w_4(x) = Q(3/3) − Q(2.5/3) = 1 − 0.69 = 0.31

To obtain D(x) we calculate

D(x) = Σ_{i=1}^{4} w_i(x) b_i = (0.04)(1) + (0.24)(0.7) + (0.41)(0.6) + (0.31)(0.5) = 0.609

To calculate the evaluation for y we proceed as follows. In this case the ordering
of the criteria satisfactions is

      b_j   u_j
A_4   1     0.9
A_3   0.9   0.5
A_1   0.6   1
A_2   0.3   0.6

The weights associated with the aggregation are:

w_1(y) = Q(0.9/3) − Q(0) = 0.09 − 0 = 0.09

w_2(y) = Q(1.4/3) − Q(0.9/3) = 0.22 − 0.09 = 0.13

w_3(y) = Q(2.4/3) − Q(1.4/3) = 0.64 − 0.22 = 0.42

w_4(y) = Q(1) − Q(2.4/3) = 1 − 0.64 = 0.36

To obtain D(y) we calculate

D(y) = Σ_{i=1}^{4} w_i(y) b_i = (0.09)(1) + (0.13)(0.9) + (0.42)(0.6) + (0.36)(0.3) = 0.567

Hence in this example x is the preferred alternative.


It is important to observe, as we previously noted, that the weights are different
for the two aggregations.
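The worked example can be reproduced mechanically. The sketch below recomputes D(x) and D(y); it matches the values above up to the two-decimal rounding of the intermediate weights used in the text:

```python
def importance_owa(Q, pairs):
    """Importance weighted quantifier guided aggregation.
    pairs is a list of (importance V_i, satisfaction A_i(x)) tuples."""
    # order by satisfaction, descending, carrying the importances along
    ordered = sorted(pairs, key=lambda p: p[1], reverse=True)
    u = [p[0] for p in ordered]              # u_j: reordered importances
    b = [p[1] for p in ordered]              # b_j: jth largest satisfaction
    T = sum(u)                               # total importance
    w, S = [], 0.0
    for uj in u:                             # w_j = Q(S_j/T) - Q(S_{j-1}/T)
        w.append(Q((S + uj) / T) - Q(S / T))
        S += uj
    return sum(wj * bj for wj, bj in zip(w, b))

most = lambda r: r ** 2
V = [1.0, 0.6, 0.5, 0.9]                     # importances of A_1..A_4
Dx = importance_owa(most, list(zip(V, [0.7, 1.0, 0.5, 0.6])))
Dy = importance_owa(most, list(zip(V, [0.6, 0.3, 0.9, 1.0])))
print(round(Dx, 3), round(Dy, 3))            # x is preferred
```

Run with full precision the two evaluations come out near 0.610 and 0.566, confirming the ranking obtained in the example.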

In the calculation of the weights we make considerable use of summations
of the form Σ_{k=1}^{j} u_k; we shall find it convenient to denote these as S_j, thus

S_j = Σ_{k=1}^{j} u_k.

Using this notation we get

w_j = Q(S_j/T) − Q(S_{j−1}/T),

where S_0 = 0 and T = S_n.
VII. CHARACTERISTICS OF IMPORTANCE WEIGHTED AGGREGATION
In using this approach for the inclusion of importances associated with the
criteria, we should point out that any criterion having zero importance plays no
role in the formulation of the overall evaluation function. We see this from the
following. Assume that A_i is a criterion with importance V_i and assume that A_i(x)
is the jth largest of the satisfactions. In this case b_j = A_i(x) and u_j = V_i. The
weight w_j associated with this component in the OWA aggregation is

w_j = Q(S_j/T) − Q(S_{j−1}/T).

Since S_j − S_{j−1} = u_j, and in the case of zero importance u_j = 0, we get S_j =
S_{j−1} and hence w_j = 0. Thus a component with zero importance has zero weight
and provides no contribution to the aggregation. As a matter of fact, we can
remove it from the whole process.
A second technical consideration we must investigate is the situation in
which two criteria have the same satisfaction. Without loss of generality assume
A_1(x) = A_2(x) = α. Assume that in the ordering process these turn out to be
the jth and (j + 1)th largest of the satisfactions. Thus we have b_j = α and b_{j+1} = α.
However, if these two criteria have different importances we have two different
ways of assigning the u_j:

u_j = V_1 and u_{j+1} = V_2

or

u_j = V_2 and u_{j+1} = V_1.

In the following we shall show it does not make a difference which of these
we use. Consider the first assignment. In this case

w_j = Q((S_{j−1} + V_1)/T) − Q(S_{j−1}/T)

w_{j+1} = Q((S_{j−1} + V_1 + V_2)/T) − Q((S_{j−1} + V_1)/T).

In calculating the overall satisfaction, the sum of the contributions we get from
these two components is

w_j b_j + w_{j+1} b_{j+1};

since b_j = b_{j+1} = α we get

w_j α + w_{j+1} α = (w_j + w_{j+1}) α.

However,

w_j + w_{j+1} = Q((S_{j−1} + V_1 + V_2)/T) − Q(S_{j−1}/T),

which is symmetric in V_1 and V_2. Thus we see that the choice does not make
any difference.
We shall now look at some special cases of this importance weighted
quantifier guided aggregation in order to get a better understanding of the
process. We first consider the special case when all the criteria have the same
importance. In this situation we have V_i = a for all i. In this case, independent
of the ordering of the A_i(x), the resulting u_j will all be equal to a; thus u_j = a
for all j. Hence in this case T = na and

w_j(x) = Q(ja/na) − Q((j − 1)a/na),

and hence

w_j(x) = Q(j/n) − Q((j − 1)/n).

Thus the w_j are independent of x and are obtained in the same manner as those
obtained when we did not include the importances. Thus the case in which we
have equal importances for all the arguments results in the same structure as
the case in which we do not consider importances at all. In this case the weights
are the same for all x.
We now consider the special case when the linguistic quantifier Q is the
unitor quantifier, Q(r) = r. Again, starting with the satisfactions to the criteria,
A_i(x), and their importances V_i, we order these satisfactions to give us b_j and
u_j. We then calculate the associated weights,

w_j(x) = Q(S_j/T) − Q(S_{j−1}/T).

Since Q(r) = r we get

w_j(x) = S_j/T − S_{j−1}/T = u_j/T.

Calculating the OWA aggregation using these weights we obtain

D(x) = Σ_{j=1}^{n} (u_j/T) b_j.

Recalling that u_j is the importance associated with the criterion that provided
the value for b_j, we see that in this case we get

D(x) = (Σ_{i=1}^{n} V_i A_i(x)) / (Σ_{i=1}^{n} V_i),

the ordinary weighted average. Thus the ordinary weighted average, the sum
of the products of the relative importances times the scores, is a special case of
the above method when the quantifier is the unitor quantifier.
We now consider the special case when the quantifier guiding the aggregation
is defined as

Q(r) = 0 for r < g,
Q(r) = 1 for g ≤ r ≤ 1.

Semantically this quantifier corresponds to at least g percent. To find the
OWA weights we again order the arguments and bring along their associated
importances; this process gives us b_j and u_j. We then calculate the associated
OWA weights as w_j = Q(S_j/T) − Q(S_{j−1}/T). From the form of the quantifier Q
we see that

w_j = 0 for all j for which S_j/T < g,
w_j = 1 for the first j for which S_j/T ≥ g,
w_j = 0 for all other j.

We note that in this case our OWA weighting vector always consists of one
value equal to one and all other values equal to zero. Since the aggregated
value for alternative x is D(x) = Σ_j w_j(x) b_j, we see that D(x) = b_{j*}, where j* is
the first value of j for which S_j/T ≥ g.

The effective aggregation process in this case can be seen to be the following
simple process. We order the criteria satisfactions and carry along their associated
importances. This results in a table of the kind shown below.

Score  Importance  Proportion
b_1    u_1         S_1/T
b_2    u_2         S_2/T
b_3    u_3         S_3/T
. . .
b_n    u_n         S_n/T

We then select as our aggregated score the value b_{j*} for which the value S_{j*}/T
equals or exceeds g for the first time. A number of special cases of this
situation are worth noting. In the case when g = 1 we effectively have

Q(r) = 0 for r < 1,
Q(r) = 1 for r = 1.

This is a situation corresponding to the quantifier for all. In this case we see
that our approach has as its evaluation the smallest criterion satisfaction that
has nonzero importance. Thus in this case

D(x) = Min_{all i s.t. V_i ≠ 0} A_i(x).

We now consider the case for which

Q(r) = 1 for all r ≠ 0,
Q(0) = 0.

This corresponds to the case in which g = ε → 0. In this case we get

D(x) = Max_{all i s.t. V_i ≠ 0} A_i(x).

Thus in this case our value is the largest satisfaction that has any nonzero
importance.
Another special case of the above is the situation in which

Q(r) = 0   for r < 1/2
Q(r) = 1   for r ≥ 1/2

This case, which is a kind of median aggregation, has g = 1/2. Thus we select the ordered criteria value b_{j*} for which S_{j*}/T equals or exceeds 0.5 for the first time. That is, we begin adding up the importances of the ordered satisfactions and stop as soon as the total equals or exceeds one half the total importance. The value of the ordered satisfaction that occurs at this point is our overall evaluation. In this situation we have provided a methodology for obtaining a weighted median aggregation.
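The selection procedure described above, together with its three special cases, can be sketched in a few lines of Python (the helper name `at_least_g` and the sample values are ours, not the paper's):

```python
def at_least_g(scores, importances, g):
    """Select b_{j*}: the first ordered score whose cumulative
    importance proportion S_j/T reaches the threshold g."""
    pairs = sorted(zip(scores, importances), key=lambda p: -p[0])
    T = sum(importances)
    S = 0.0
    for b, u in pairs:
        S += u
        if S / T >= g:
            return b
    return pairs[-1][0]   # guard against floating-point shortfall

scores      = [0.9, 0.2, 0.6, 0.8]
importances = [1.0, 2.0, 1.0, 1.0]

# g -> 0+ : largest score with nonzero importance (the "exists" case)
assert at_least_g(scores, importances, 1e-9) == 0.9
# g = 1   : smallest score with nonzero importance (the "for all" case)
assert at_least_g(scores, importances, 1.0) == 0.2
# g = 1/2 : the weighted median described above
assert at_least_g(scores, importances, 0.5) == 0.6
```

With equal importances the g = 1/2 case reduces to the ordinary median of the scores.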

In an earlier section we introduced a measure of orness associated with a given weighting vector W, defined as

orness(W) = (1/(n − 1)) Σ_{j=1}^n w_j (n − j)

In the situation in which we have importances associated with a quantifier, the weighting vector depends upon the ordering of the objects, and therefore the measure of orness is different for each alternative. Using the formulation

w_j = Q(S_j(x)/T) − Q(S_{j−1}(x)/T)

where S_j(x) is the sum of the importances of the j highest scoring criteria under alternative x, we get for the orness associated with the quantifier Q for the alternative x

orness(Q|x) = (1/(n − 1)) Σ_{j=1}^n (Q(S_j(x)/T) − Q(S_{j−1}(x)/T)) (n − j)

Doing some simple algebraic manipulations we can show that

orness(Q|x) = (1/(n − 1)) Σ_{j=1}^{n−1} Q(S_j(x)/T)
VIII. TRIANGULAR NORM TYPE OWA OPERATORS


In using the quantifier guided aggregation technique we are essentially building a decision function by balancing two factors. The first factor is related to our quantifier Q. This factor stipulates that Q(i/n) is the degree of satisfaction we attain if we satisfy any i of the criteria. With Q a RIM quantifier this is an increasing function of i. Furthermore, w_i = Q(i/n) − Q((i − 1)/n) is the increase in satisfaction we get in going from satisfying i − 1 criteria to satisfying i criteria. The second concern in the construction of a quantifier guided decision function is the degree to which we can find i criteria that are satisfied. We shall let G(i) be the degree to which i criteria are satisfied. Combining these two factors we obtain an aggregation function of the form

D(x) = Σ_{i=1}^n w_i G(i).

In the above, the value of the G(i) term is obtained from the satisfaction of the individual criteria,

G(i) = F_i(A_1(x), . . . , A_n(x)).

Since G(i) just stipulates that we need find any i criteria that are satisfied, we only need consider the i most satisfied criteria. If we assume that b_j is the jth largest of the criteria scores, then

G(i) = F(b_1, . . . , b_i).

Since G(i) requires that all the i criteria considered are satisfied, the appropriate formulation for the construction of G(i) is a t-norm aggregation of the i most satisfied criteria scores, b_1, . . . , b_i. Thus

G(i) = T(b_1, . . . , b_i)

where T is some t-norm operator. We recall that a t-norm is a mapping

T : [0, 1] × [0, 1] → [0, 1]

such that

(1) T(a, b) = T(b, a)                      Commutativity
(2) T(a, b) ≥ T(c, d) if a ≥ c and b ≥ d   Monotonicity
(3) T(a, T(b, c)) = T(T(a, b), c)          Associativity
(4) T(1, a) = a                            One as identity

As extensively discussed in the literature, the t-norm provides a general class of and aggregation operators.
Using these operators we can provide a general class of OWA operators.

DEFINITION. An aggregation operator

F_T : I^n → I

is called a T type ordered weighted averaging operator of dimension n if it has associated with it a weighting vector W = [w_1, . . . , w_n] such that

(1) w_j ∈ [0, 1]
(2) Σ_j w_j = 1

and where

F_T(a_1, . . . , a_n) = Σ_{j=1}^n w_j T(B_j)

where T is any t-norm, B_j = (b_1, b_2, . . . , b_j), and b_j is the jth largest of the a_i. We note that T(B_1) is defined as b_1. In the above we call B_j the top j dimension ordered bag of (a_1, . . . , a_n).
A number of special cases of this operator are worth noting. Assume T(a, b) = Min[a, b]. In this case

T_Min(B_j) = Min[b_1, . . . , b_j] = b_j

and thus

F_Min(a_1, . . . , a_n) = Σ_{j=1}^n w_j b_j

which is the ordinary OWA operator.
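A T type OWA operator of this kind can be sketched generically; the helper name `owa_t` and the sample values are ours, not the paper's. Passing Python's built-in min as the t-norm recovers the ordinary OWA operator, while a product t-norm gives the product-type aggregation:

```python
def owa_t(args, weights, t_norm):
    """F_T(a_1..a_n) = sum_j w_j * T(B_j), where B_j is the bag of the
    j largest arguments and T(B_1) is defined as b_1.  The running
    value T(B_j) is built incrementally using associativity."""
    b = sorted(args, reverse=True)
    total, running = 0.0, None
    for w, bj in zip(weights, b):
        running = bj if running is None else t_norm(running, bj)
        total += w * running
    return total

args, w = [0.3, 0.9, 0.5], [0.2, 0.5, 0.3]

# T = min recovers the ordinary OWA operator: sum_j w_j b_j.
f_min = owa_t(args, w, min)
assert abs(f_min - (0.2*0.9 + 0.5*0.5 + 0.3*0.3)) < 1e-12

# T = product gives the product-type OWA aggregation.
f_prod = owa_t(args, w, lambda a, c: a * c)
assert abs(f_prod - (0.2*0.9 + 0.5*0.9*0.5 + 0.3*0.9*0.5*0.3)) < 1e-12
```

Any associative, commutative, monotone t-norm with identity 1 can be dropped in for `t_norm` without changing the surrounding code.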


If T(a, b) = a·b then

T(B_j) = Π_{k=1}^j b_k

and

F_Π(a_1, . . . , a_n) = Σ_{j=1}^n w_j (Π_{k=1}^j b_k)

This product type OWA aggregation can be expressed in the nested form

F_Π(a_1, . . . , a_n) = b_1[w_1 + b_2(w_2 + b_3(w_3 + b_4(· · · + b_n w_n)))]
Another special case is when we use the t-norm

T_L(a, b) = (a + b − 1) ∨ 0.

In this case

T_L(B_j) = (Σ_{k=1}^j b_k − (j − 1)) ∨ 0.

For this operator we note that if for some m, T_L(B_m) = 0, then T_L(B_j) = 0 for all j ≥ m. Thus if m* is the smallest integer for which T_L(B_{m*}) = 0, then

F_L(a_1, . . . , a_n) = Σ_{j=1}^{m*−1} w_j T_L(B_j)

Furthermore, we note that

a + b − 1 = a − (1 − b) = a − b̄

where b̄ = 1 − b, and hence we can express T_L(B_j) as

T_L(B_j) = (b_1 − Σ_{k=2}^j b̄_k) ∨ 0.

In this case we see that T_L(B_j) = 0 for all j such that Σ_{k=2}^j b̄_k ≥ b_1. Thus if m* is the smallest integer for which Σ_{k=2}^{m*} b̄_k ≥ b_1, then

F_L(a_1, . . . , a_n) = Σ_{j=1}^{m*−1} w_j (b_1 − R_j)

where R_j = Σ_{k=2}^j b̄_k.
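The early-stopping evaluation implied by this observation can be sketched as follows (the name `owa_lukasiewicz` and the sample values are ours). The loop stops at the first j where the accumulated deficit R_j reaches b_1, since every remaining term of the sum is zero:

```python
def owa_lukasiewicz(args, weights):
    """F_L with the Lukasiewicz t-norm T_L(a,b) = max(a + b - 1, 0).
    Uses T_L(B_j) = max(b_1 - R_j, 0), R_j = sum_{k=2}^{j} (1 - b_k),
    so the sum may be truncated at the first j with R_j >= b_1."""
    b = sorted(args, reverse=True)
    total, R = 0.0, 0.0
    for j, (w, bj) in enumerate(zip(weights, b), start=1):
        if j > 1:
            R += 1.0 - bj
        if R >= b[0]:
            break                 # all remaining T_L(B_j) are zero
        total += w * (b[0] - R)
    return total

args, w = [0.9, 0.8, 0.2], [0.3, 0.3, 0.4]
# T_L(B_1) = 0.9; T_L(B_2) = max(0.9 + 0.8 - 1, 0) = 0.7;
# T_L(B_3) = max(0.7 + 0.2 - 1, 0) = 0, so the last term drops out.
assert abs(owa_lukasiewicz(args, w) - (0.3*0.9 + 0.3*0.7)) < 1e-12
```

The truncation matters in practice: once the ordered scores become small, the remaining terms contribute nothing and need not be evaluated.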


It is well established in the literature (Ref. 10) that Min is the largest of the t-norms. Thus for any t-norm T

Min[B_j] ≥ T(B_j)

and hence for any weighting vector W

F_Min(a_1, . . . , a_n) ≥ F_T(a_1, . . . , a_n).

Thus the ordinary OWA is the largest of the class of T type OWA operators.
If we consider the weighting vector W* where w_1 = 1 and w_i = 0 for all other i, then for any T

F_T(a_1, . . . , a_n) = T(B_1) = b_1 = Max_i[a_i].

If we consider the weighting vector W_* where w_n = 1 and w_i = 0 for all other i, then for any T

F_T(a_1, . . . , a_n) = T(B_n) = T(a_1, . . . , a_n).

If we consider the weighting vector W_ave where w_i = 1/n for all i, then for any T

F_T(a_1, . . . , a_n) = (1/n) Σ_{j=1}^n T(B_j).
The following theorem puts some bounds on T type OWA operators.

THEOREM. Assume T is any t-norm; then for any weighting vector W

Max_i[a_i] ≥ F_T(a_1, . . . , a_n) ≥ T(a_1, . . . , a_n).

Proof. Since for any t-norm T

T(x_1, x_2, . . . , x_j) ≥ T(x_1, x_2, . . . , x_j, x_{j+1})

it follows that

T(B_j) ≥ T(B_i) for i > j.

From this it follows that

F_T(a_1, . . . , a_n) ≤ F*_T(a_1, . . . , a_n) ≤ Max(a_1, . . . , a_n)

and

F_T(a_1, . . . , a_n) ≥ F_{*T}(a_1, . . . , a_n) ≥ T(a_1, . . . , a_n)

where F*_T and F_{*T} use the weighting vectors W* and W_*, respectively.

THEOREM. Assume T is any t-norm; then for any weighting vector W, F_T(a_1, . . . , a_n) is monotonic with respect to the arguments a_i.

This theorem follows directly from the monotonicity of the t-norm operators.
Inspired by the T type OWA operators we can consider a class of S type OWA operators. Let S be any t-conorm (Ref. 10); we recall that S satisfies the same properties as a t-norm except that instead of (4) it has S(0, a) = a. The prototypical example of a t-conorm is the Max operator. Assume (a_1, . . . , a_n) is a bag of arguments to be aggregated and again let b_j be the jth largest of these. We shall let D_j = (b_j, b_{j+1}, . . . , b_n). Thus D_j is the subbag of the n + 1 − j smallest values to be aggregated. We now consider the aggregation

F_S(a_1, . . . , a_n) = Σ_{j=1}^n w_j S(D_j)

where S(D_j) is a t-conorm aggregation of the elements in D_j. We shall call this an S type OWA aggregation.
If we consider the special case when S is equal to the Max we note that

S(D_j) = Max[b_j, b_{j+1}, . . . , b_n] = b_j

Thus for S = Max

F_S(a_1, . . . , a_n) = Σ_{j=1}^n w_j b_j

which is the ordinary OWA operator. Recalling that Max is the smallest t-conorm (Ref. 10), it follows that for any S

Max(D_j) ≤ S(D_j)

and hence for any W and S

F_Max(a_1, . . . , a_n) ≤ F_S(a_1, . . . , a_n).
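An S type OWA operator can be sketched symmetrically to the T type case (the helper name `owa_s` and the sample values are ours): with S = Max it collapses to the ordinary OWA, while a strictly larger t-conorm such as the probabilistic sum S(a, b) = a + b − ab dominates it, as the inequality above requires.

```python
def owa_s(args, weights, t_conorm):
    """F_S(a_1..a_n) = sum_j w_j * S(D_j), where D_j is the subbag
    (b_j, ..., b_n) of the n + 1 - j smallest arguments."""
    b = sorted(args, reverse=True)
    n = len(b)
    # Compute S(D_j) for every j by folding the t-conorm from the tail.
    s_tail = [0.0] * n
    running = None
    for j in range(n - 1, -1, -1):
        running = b[j] if running is None else t_conorm(b[j], running)
        s_tail[j] = running
    return sum(w * s for w, s in zip(weights, s_tail))

args, w = [0.3, 0.9, 0.5], [0.2, 0.5, 0.3]

# S = max recovers the ordinary OWA: S(D_j) = b_j.
assert abs(owa_s(args, w, max) - (0.2*0.9 + 0.5*0.5 + 0.3*0.3)) < 1e-12

# The probabilistic sum dominates the Max version, as stated above.
assert owa_s(args, w, lambda a, c: a + c - a*c) >= owa_s(args, w, max)
```

Folding from the tail keeps the evaluation linear in n, since S(D_j) = S(b_j, S(D_{j+1})) by associativity.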
The following theorem mirrors the one for T type OWA aggregation operators.

THEOREM. Assume S is any t-conorm; then for any weighting vector W

Min_i[a_i] ≤ F_S(a_1, . . . , a_n) ≤ S(a_1, . . . , a_n).

Furthermore, it can be easily shown that these operators are monotonic.

IX. CONCLUSION
We have looked at the issue of quantifier guided multicriteria decision
making. We have suggested that the OWA operators provide an appropriate
tool for the construction of these types of decision functions. We have suggested
a method for including importances associated with the different criteria to be
aggregated.

References
1. R.E. Bellman and L.A. Zadeh, "Decision-making in a fuzzy environment," Manage. Sci., 17(4), 141-164 (1970).
2. R.R. Yager, "Quantifiers in the formulation of multiple objective decision functions," Inf. Sci., 31, 107-139 (1983).
3. R.R. Yager, "Aggregating criteria with quantifiers," in Proceedings of the International Symposium on Methodologies for Intelligent Systems, Knoxville, TN, pp. 183-189, 1986.
4. R.R. Yager, "On ordered weighted averaging aggregation operators in multicriteria decision making," IEEE Trans. Syst. Man Cybern., 18, 183-190 (1988).
5. L.A. Zadeh, "A computational approach to fuzzy quantifiers in natural languages," Comput. Math. Appl., 9, 149-184 (1983).
6. R.R. Yager, "Connectives and quantifiers in fuzzy sets," Fuzzy Sets Syst., 40, 39-76 (1991).
7. L.A. Zadeh, "The concept of a linguistic variable and its application to approximate reasoning: Part 1," Inf. Sci., 8, 199-249 (1975).
8. E.P. Klement, "Characterization of fuzzy measures constructed by means of triangular norms," J. Math. Anal. Appl., 86, 345-358 (1982).
9. R.R. Yager and D.P. Filev, Essentials of Fuzzy Modeling and Control, Wiley, New York, 1994.
10. D. Dubois and H. Prade, "A review of fuzzy sets aggregation connectives," Inf. Sci., 36, 85-121 (1985).
11. R.R. Yager, "On mean type aggregation," IEEE Trans. Syst. Man Cybern., to appear.
12. R.R. Yager, "Families of OWA operators," Fuzzy Sets Syst., 59, 125-148 (1993).
13. R.R. Yager, "Some extensions of constraint propagation of label sets," Int. J. Approximate Reasoning, 3, 417-436 (1989).
14. R.R. Yager, "Entropy and specificity in a mathematical theory of evidence," Int. J. General Syst., 9, 249-260 (1983).
15. R.R. Yager, "Measuring the quality of linguistic forecasts," Int. J. Man Mach. Stud., 21, 253-257 (1984).
16. R.R. Yager, "On the specificity of a possibility distribution," Fuzzy Sets Syst., 50, 279-292 (1992).
17. L.A. Zadeh, "A theory of approximate reasoning," in Machine Intelligence, Vol. 9, J. Hayes, D. Michie, and L.I. Mikulich, Eds., Halstead Press, New York, 1979, pp. 149-194.
18. R.R. Yager, "Deductive approximate reasoning systems," IEEE Trans. Knowl. Data Eng., 3, 399-414 (1991).
19. D. Dubois and H. Prade, "Fuzzy sets in approximate reasoning Part I: Inference with possibility distributions," Fuzzy Sets Syst., 40, 143-202 (1991).
20. D. Dubois and H. Prade, "The principle of minimum specificity as a basis for evidential reasoning," in Uncertainty in Knowledge-Based Systems, B. Bouchon and R.R. Yager, Eds., Springer-Verlag, Berlin, Germany, 1987, pp. 75-84.
21. M. O'Hagan, "Aggregating template or rule antecedents in real-time expert systems with fuzzy set logic," in Proceedings of the 22nd Annual IEEE Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, pp. 681-689, 1988.
22. D.P. Filev and R.R. Yager, "Learning OWA operator weights from data," in Proceedings of the Third IEEE International Conference on Fuzzy Systems, Orlando, FL, pp. 468-473, 1994.
23. R.R. Yager, "Fuzzy quotient operators for fuzzy relational data bases," in Proceedings of the International Fuzzy Engineering Symposium, Yokohama, Japan, pp. 289-296, 1991.
