Quantifier Guided Aggregation Using OWA Operators
Ronald R. Yager
Machine Intelligence Institute, Iona College, New Rochelle,
New York 10801
We consider multicriteria aggregation problems where, rather than requiring all the
criteria be satisfied, we need only satisfy some portion of the criteria. The proportion
of the criteria required is specified in terms of a linguistic quantifier such as most. We
use a fuzzy set representation of these linguistic quantifiers to obtain decision functions
in the form of OWA aggregations. A methodology is suggested for including importances
associated with the individual criteria. A procedure for determining the measure of
“orness” directly from the quantifier is suggested. We introduce an extension of the
OWA operators which involves the use of triangular norms. © 1996 John Wiley & Sons, Inc.
I. INTRODUCTION
Starting with the classic work of Bellman and Zadeh,1 fuzzy set theory
has been used as a tool to develop and model multicriteria decision problems.
In this framework the criteria are represented as fuzzy subsets over the space of
decision alternatives and fuzzy set operators are used to aggregate the individual
criteria to form the overall decision function. As originally suggested by Bellman
and Zadeh, the criteria are combined by the use of an intersection operation
which implicitly implies a requirement that all the criteria be satisfied by a
solution to the problem. As noted by Yager,2,3 this condition may not always
be the appropriate relationship between the criteria. For example, a decision
maker may be satisfied if most of the criteria are satisfied. In this work we look
at the issue of the formulation of these softer decision functions which we call
quantifier guided aggregations. In Ref. 4 we suggested the use of the Ordered
Weighted Averaging (OWA) operators as a tool to implement these kinds of
aggregations. Here we develop this approach further by considering environ-
ments in which the individual criteria have importances associated with them.
We also provide for an extension of the basic OWA aggregation which allows
us to include triangular norm operations. A number of other related issues such
as the measure of orness and determination of weights in OWA aggregation
are discussed.
1. Q(0) = 0
2. Q(1) = 0
3. There exist two values a and b ∈ I, with a < b, such that
(i) For x < a, Q(x) ≤ Q(y) if x < y.
(ii) For x ∈ [a, b], Q(x) = 1.
(iii) For x > b, Q(x) ≥ Q(y) if x < y.
An example of this class is about a.
Some interesting relationships exist between these three classes of relative
quantifiers. Noting that the antonym P of a fuzzy subset F7 on the real line is defined
as P(x) = F(1 − x), we see that if Q is a RIM quantifier, then its antonym is a
RDM quantifier and vice versa. Examples of these antonym pairs are few and
many, and at least a and at most a.
THEOREM.
Assume Q is any RIM quantifier; then for all x ∈ I

Q_*(x) ≤ Q(x) ≤ Q*(x).

Proof. Since 0 ≤ Q(x) ≤ 1, the result follows from the definitions of RIM, Q*,
and Q_*.
DEFINITION.
An aggregation operator F,

F: I^n → I

is called an Ordered Weighted Averaging (OWA) operator of dimension n if it
has associated with it a weighting vector W = [w_1, . . . , w_n]^T such that
w_i ∈ [0, 1] and Σ_{i=1}^n w_i = 1, and

F(a_1, . . . , a_n) = Σ_{j=1}^n w_j b_j

where b_j is the jth largest of the a_i.
We note that these conditions imply that the OWA operator is a mean operator
[10, 11].
The form of the aggregation is very strongly dependent upon the weighting
vector used. In Ref. 12 Yager investigates various different families of OWA
aggregation operators. A number of special cases of weighting vector are worth
noting. The weighting vector W* defined such that

w_1 = 1 and w_j = 0 for all j ≠ 1

gives us the aggregation F*(a_1, . . . , a_n) = Max_i[a_i]. Thus W* provides the
largest possible aggregation.
The weighting vector W_* defined such that

w_n = 1 and w_i = 0 for i ≠ n

gives us the aggregation F_*(a_1, . . . , a_n) = Min_i[a_i]. This weighting provides
the smallest aggregation of the arguments.
The weighting vector W_ave defined such that w_i = 1/n for all i gives us the
simple average

F_ave(a_1, . . . , a_n) = (1/n) Σ_{i=1}^n a_i.
Another weighting vector worth noting is defined such that

w_1 = 0, w_n = 0, and w_i = 1/(n − 2) for i ≠ 1 or n.

This weighting averages all the arguments except the largest and smallest.
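As a concrete illustration, the OWA aggregation and the special weighting vectors above can be sketched in Python (the function and variable names are ours, not from the text):

```python
def owa(args, weights):
    """OWA aggregation: the weights are applied to the arguments sorted in
    descending order, not to the arguments in their given positions."""
    b = sorted(args, reverse=True)          # b_j = jth largest argument
    return sum(w * bj for w, bj in zip(weights, b))

a = [0.3, 0.9, 0.5]
n = len(a)

w_star = [1.0] + [0.0] * (n - 1)                     # W*: w_1 = 1 -> Max
w_low  = [0.0] * (n - 1) + [1.0]                     # W_*: w_n = 1 -> Min
w_ave  = [1.0 / n] * n                               # simple average
w_oly  = [0.0] + [1.0 / (n - 2)] * (n - 2) + [0.0]   # drop highest and lowest

print(owa(a, w_star))   # 0.9 (Max)
print(owa(a, w_low))    # 0.3 (Min)
print(owa(a, w_oly))    # 0.5 (the middle score)
```

Note that sorting the arguments first is what distinguishes an OWA operator from an ordinary weighted average.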
Having introduced the OWA aggregation operator we are now in a position
to describe the process of quantifier guided aggregation. Given a regular
quantifier Q, the OWA weights used to aggregate the criteria are obtained as
w_i = Q(i/n) − Q((i − 1)/n)   for i = 1, . . . , n     (9)
Because of the nondecreasing nature of Q it follows that w_i ≥ 0. Furthermore,
from the regularity of Q, Q(1) = 1 and Q(0) = 0, it follows that Σ_i w_i = 1. Thus
we see that the weights generated are an acceptable class of OWA weights.
The use of a RIM quantifier to guide the aggregation essentially implies
that the more criteria satisfied the better the solution. This condition seems to
be one that is naturally desired in criteria aggregation. Thus most quantifier
guided aggregation would seem to be based upon the use of these types of
quantifiers. Notwithstanding the above observation, the technique of quantifier
guided aggregation can be applied to other types of quantifiers. In Refs. 9 and
13 Yager describes the process used in the case in which we have RDM and
RUM quantifiers. We shall not pursue this issue here and in the following
assume all quantifiers are RIM.
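The weight-generation rule of Eq. (9) is easy to sketch in code (the function name is ours):

```python
def quantifier_weights(Q, n):
    """Weights of Eq. (9): w_i = Q(i/n) - Q((i-1)/n) for a RIM quantifier Q.
    Regularity (Q(0) = 0, Q(1) = 1) guarantees the weights sum to one."""
    return [Q(i / n) - Q((i - 1) / n) for i in range(1, n + 1)]

# A RIM quantifier of the kind used later in the text: Q(r) = r^2
w = quantifier_weights(lambda r: r * r, 4)
print(w)          # [0.0625, 0.1875, 0.3125, 0.4375]
print(sum(w))     # 1.0
```

Because Q(r) = r^2 rises slowly at first, the generated weights emphasize the smaller (later-ordered) arguments, pulling the aggregation toward an "and".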
The quantifier Q* leads to the weighting vector

W* = [1, 0, . . . , 0]^T

which results in the OWA aggregation D(x) = Max_i[A_i(x)]. This can be seen as
an “oring” of the criteria. At the other extreme, Q_* leads to the weighting vector

W_* = [0, . . . , 0, 1]^T

which results in the OWA aggregation D(x) = Min_i[A_i(x)]. This can be seen as
an “anding” of the criteria. Thus we see that this family of quantifiers provides
for aggregations of the satisfactions to the criteria lying between an “anding”
and an “oring.”
In Ref. 4 Yager associated with any OWA aggregation a measure of its
degree of orness. In particular, if we have a weighting vector W of dimension
n then the measure of orness is defined as

orness(W) = (1/(n − 1)) Σ_{j=1}^n (n − j) w_j.

It is easy to show that this measure lies in the unit interval. Furthermore, it
was shown in Ref. 4 that orness(W*) = 1, orness(W_ave) = 0.5, and
orness(W_*) = 0. We can easily extend this measure to the case in which the weights
are generated by any quantifier. Given a linguistic quantifier Q, if we generate
the weights by w_j = Q(j/n) − Q((j − 1)/n), then we can associate with this
quantifier a degree of orness as

orness(Q) = (1/(n − 1)) Σ_{j=1}^n (n − j) [Q(j/n) − Q((j − 1)/n)].
Algebraic manipulation of the formula leads to the form

orness(Q) = (1/(n − 1)) Σ_{j=1}^{n−1} Q(j/n)

and, letting n → ∞,

orness(Q) = ∫_0^1 Q(r) dr.
[Figure 1. Quantifier: a step function equal to 0 for r < g and 1 for r ≥ g.]
Thus the nominal degree of orness associated with a RIM linguistic quantifier
is equal to the area under the quantifier.
This standard definition for the measure of orness of a quantifier provides a
simple, useful method of obtaining this measure. Consider, for example (see
Fig. 1), the class of quantifiers defined by

Q(r) = 0   r < g
Q(r) = 1   r ≥ g

In this case, calculating the degree of orness as the area under the quantifier
we get

orness(Q) = 1 − g.
We note in the special case when g = 0, we get the pure “or” with
orness(Q) = 1, and when g = 1 we get the pure “and” with orness(Q) = 0.
If we consider the quantifier Q(r) = r^a, then

orness(Q) = ∫_0^1 r^a dr = [r^{a+1}/(a + 1)]_0^1 = 1/(a + 1).
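The discrete orness of quantifier-generated weights converges to the area under the quantifier, which can be checked numerically (function names are ours):

```python
def orness(w):
    """Degree of orness of a weighting vector: (1/(n-1)) sum_j (n-j) w_j."""
    n = len(w)
    return sum((n - j) * wj for j, wj in enumerate(w, start=1)) / (n - 1)

def quantifier_weights(Q, n):
    # w_i = Q(i/n) - Q((i-1)/n), as in Eq. (9)
    return [Q(i / n) - Q((i - 1) / n) for i in range(1, n + 1)]

print(orness([1.0, 0.0, 0.0]))   # 1.0: pure "or" (Max)
print(orness([0.0, 0.0, 1.0]))   # 0.0: pure "and" (Min)

# For Q(r) = r^a the area under Q is 1/(a+1); the discrete orness of the
# generated weights approaches that value as n grows.
a = 2.0
w = quantifier_weights(lambda r: r ** a, 1000)
print(orness(w))                 # ~1/3
```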
It can be shown that for the class of RIM quantifiers the measure of orness is inversely
related to the measure of specificity. Thus

S_p(Q) = 1 − orness(Q);

increasing the orness decreases the specificity. In particular the pure “or”
quantifier, Q*, has maximal orness and minimal specificity. On the other hand,
the pure “and” quantifier, Q_*, has minimal orness and maximal specificity. In
this way we can introduce an idea of specificity of aggregation, with the “and”
being the most specific and the “or” the least specific.
In some applications of fuzzy subsets, particularly in the theory of approximate
reasoning, considerable use is made of the principle of minimal specificity.17-20
This principle says that if A is some operation on fuzzy subsets which
can be implemented in various different ways, then select the implementation
that leads to a resulting fuzzy subset with minimal specificity. Under the imperative
of this principle, when no further requirements are made on a quantifier
other than that it be RIM, the preferred choice is the quantifier Q* in that it leads
to the minimal specificity.
To generate weights having a given degree of orness α, O’Hagan21 suggested
the following mathematical programming problem:

Max: −Σ_{i=1}^n w_i ln w_i

such that

1. w_i ≥ 0
2. Σ_{i=1}^n w_i = 1
3. (1/(n − 1)) Σ_{i=1}^n w_i(n − i) = α
In this approach constraints one and two assure us that the weights satisfy
the OWA conditions. Constraint three assures us that the weights have an
orness value of α. The objective function is a measure of entropy or dispersion
associated with the weights. O’Hagan calls the weights generated by the technique
ME-OWA weights, indicating maximal entropy weights. In choosing this
objective function we are essentially selecting the weights in a manner that
makes maximal use of the information in the arguments. Consider a situation
in which α = 0.5. This degree of orness can be obtained in a number of different
ways; among these are: w_1 = 0.5 and w_n = 0.5; w_{(n+1)/2} = 1; and w_i = 1/n for all
i. The case in which w_i = 1/n for all i, the one selected by the ME-OWA algorithm,
is the one which most uniformly uses the information in the arguments.
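A known consequence of the Lagrangian analysis of this entropy program is that the ME-OWA weights form a geometric progression; under that assumption the program can be solved by a one-dimensional search rather than a general optimizer. The sketch below (function name and search bounds are ours) bisects in log space on the progression ratio until the orness constraint is met:

```python
def me_owa_weights(n, alpha, tol=1e-12):
    """Sketch of O'Hagan's ME-OWA weights for a target orness alpha in (0, 1),
    assuming the maximal-entropy weights are geometric: w_j proportional to
    h**(n - j) for some h > 0.  Orness is increasing in h; h = 1 gives 0.5."""
    def weights(h):
        raw = [h ** (n - j) for j in range(1, n + 1)]
        s = sum(raw)
        return [r / s for r in raw]

    def orness(w):
        return sum((n - j) * wj for j, wj in enumerate(w, start=1)) / (n - 1)

    lo, hi = 1e-9, 1e9
    while hi - lo > tol * max(1.0, lo):
        h = (lo * hi) ** 0.5            # bisection in log space
        if orness(weights(h)) < alpha:
            lo = h
        else:
            hi = h
    return weights((lo * hi) ** 0.5)

print(me_owa_weights(4, 0.5))   # ~[0.25, 0.25, 0.25, 0.25]: uniform weights
```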
Another class of problems involving the generation of the weights arises in
situations in which we have observations, each consisting of a collection of
arguments and an associated aggregated value, and we want to use these to
generate the weights. In Ref. 22 Filev and Yager suggested an approach to
solving this problem. In the following, we shall consider some alternatives to
that approach.
Assume we have an observation consisting of a set of arguments,
A_1(x), . . . , A_n(x), and an associated aggregated value D(x) = d. We can now
consider the ordering of these scores to give us b_1, . . . , b_n along with d. In
the spirit of the approach suggested by O’Hagan, we can consider the generation
of the weights underlying this aggregation process by the following mathematical
programming problem:
Max: −Σ_{i=1}^n w_i ln w_i

such that

1. w_i ≥ 0
2. Σ_{i=1}^n w_i = 1
3. Σ_{i=1}^n w_i b_i = d
In the above we replaced condition three by the requirement that the aggregation
equals d.
In situations in which we have a collection of observations of the above
type we can proceed as follows. Assume we have a collection of data each of
the type (B_j, d_j) where B_j is an ordered vector B_j = [b_{j1}, b_{j2}, . . . , b_{jn}] consisting
of the arguments and d_j is its aggregated value. For each observation we can
solve the preceding mathematical programming problem to obtain a weighting vector
W_j. We can then apply some further procedure to obtain a weighting vector W that
best matches this collection of vectors. For example, we could use a least
squares fit.
Using O’Hagan’s original approach we can then use the resulting orness value
to generate an OWA weight W which can be considered as representative of the
aggregation process generating the data.
As we noted above, in many applications of fuzzy set theory an often
used imperative for furnishing missing information is the principle of minimal
specificity. In Refs. 19 and 20 Dubois and Prade discuss the use of this principle
in considerable detail. Motivated by this principle we can suggest another
procedure for generating weights from observations. Assume we have an observation
b_1, . . . , b_n and d. The b_i’s are the ordered arguments and d the
aggregated value. As we have already indicated, the measure of orness is
inversely related to the measure of specificity. Using this relationship we can
consider the following mathematical programming problem to generate the
underlying weights:
Max: (1/(n − 1)) Σ_{i=1}^n w_i(n − i)

such that

1. w_i ≥ 0
2. Σ_{i=1}^n w_i = 1
3. Σ_{i=1}^n w_i b_i = d
We must now provide for the values of Q(r) between these fixed points. One
approach, in the spirit of maximal entropy, is to use a piecewise linear construction
of Q. Thus in this case

Q(r) = w_i(nr − i) + Q(i/n)   for (i − 1)/n ≤ r ≤ i/n.
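The piecewise linear construction can be sketched as follows (the function name is ours; the interpolation formula is the one given above):

```python
def piecewise_linear_Q(weights):
    """Build a quantifier by linear interpolation through the fixed points
    Q(i/n) = w_1 + ... + w_i: on [(i-1)/n, i/n], Q(r) = w_i*(n*r - i) + Q(i/n)."""
    n = len(weights)
    cum, s = [], 0.0
    for w in weights:
        s += w
        cum.append(s)                      # cum[i-1] = Q(i/n)
    def Q(r):
        if r <= 0.0:
            return 0.0
        if r >= 1.0:
            return 1.0
        i = min(int(r * n) + 1, n)         # segment [(i-1)/n, i/n] containing r
        return weights[i - 1] * (n * r - i) + cum[i - 1]
    return Q

Q = piecewise_linear_Q([0.1, 0.2, 0.3, 0.4])
print(Q(0.25))   # 0.1 = w_1, the fixed point Q(1/4)
print(Q(0.5))    # 0.3 = w_1 + w_2
print(Q(0.3))    # 0.14, linear between Q(1/4) = 0.1 and Q(1/2) = 0.3
```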
importance associated with the criterion that has the jth largest satisfaction to
x. Thus if A_k(x) is the largest of the A_i(x), then b_1 = A_k(x) and u_1 = V_k. At this point
we can consider our information regarding the alternative x to be a collection of
n pairs (u_j, b_j) where the b_j’s are in descending order.
Our next step is to obtain the OWA weights associated with this aggregation.
To obtain these weights we proceed as follows:

w_j(x) = Q((Σ_{k=1}^j u_k)/T) − Q((Σ_{k=1}^{j−1} u_k)/T).

In the above, T = Σ_{k=1}^n u_k, the total sum of the importances. Having obtained the
weights we can now calculate the evaluation associated with x, which we
denote as D(x):

D(x) = F(b_1, . . . , b_n),

thus

D(x) = Σ_{j=1}^n w_j(x) b_j.
EXAMPLE.
Assume we have two alternatives x and y. We shall assume four
criteria A_1, A_2, A_3, A_4. The importances associated with these criteria are:
V_1 = 1, V_2 = 0.6, V_3 = 0.5, and V_4 = 0.9. Furthermore, the satisfaction of each
of the criteria by the alternatives is given by the following:

A_1(x) = 0.7   A_2(x) = 1   A_3(x) = 0.5   A_4(x) = 0.6
A_1(y) = 0.6   A_2(y) = 0.3   A_3(y) = 0.9   A_4(y) = 1

We shall assume the quantifier guiding this aggregation to be “most,” which is
defined by Q(r) = r^2. We first consider the aggregation for x. In this case the
ordering of the criteria satisfactions gives us

       b_j   u_j
A_2    1     0.6
A_1    0.7   1
A_4    0.6   0.9
A_3    0.5   0.5

We note T = Σ_j u_j = 3. Calculating the weights associated with x, which we
denote w_i(x), we get

w_1(x) = Q(0.6/3) = 0.04
w_2(x) = Q(1.6/3) − Q(0.6/3) = 0.24
w_3(x) = Q(2.5/3) − Q(1.6/3) = 0.41
w_4(x) = Q(1) − Q(2.5/3) = 0.31

and thus D(x) = (.04)(1) + (.24)(.7) + (.41)(.6) + (.31)(.5) = 0.61.
To calculate the evaluation for y we proceed as follows. In this case the ordering
of the criteria satisfactions is

       b_j   u_j
A_4    1     0.9
A_3    0.9   0.5
A_1    0.6   1
A_2    0.3   0.6

Calculating the weights we get

w_1(y) = Q(0.9/3) = 0.09
w_2(y) = Q(1.4/3) − Q(0.9/3) = 0.13
w_3(y) = Q(2.4/3) − Q(1.4/3) = 0.42
w_4(y) = Q(1) − Q(2.4/3) = 0.36

To obtain D(y) we calculate

D(y) = Σ_{i=1}^4 w_i(y) b_i = (.09)(1) + (.13)(.9) + (.42)(.6) + (.36)(.3) = 0.567
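The example can be checked with a short script (function and variable names are ours; working with exact rather than rounded weights shifts the totals slightly in the third decimal place):

```python
def quantifier_guided(scores, importances, Q):
    """Importance weighted quantifier guided aggregation: order the criteria
    by score, carry the importances along, and weight the jth largest score
    by w_j = Q(S_j/T) - Q(S_{j-1}/T), with S_j the running importance sum."""
    pairs = sorted(zip(scores, importances), reverse=True)   # (b_j, u_j)
    T = sum(importances)
    total, S_prev = 0.0, 0.0
    for b, u in pairs:
        S = S_prev + u
        total += (Q(S / T) - Q(S_prev / T)) * b
        S_prev = S
    return total

Q = lambda r: r * r                  # the quantifier "most"
V = [1.0, 0.6, 0.5, 0.9]             # importances of A1..A4
x = [0.7, 1.0, 0.5, 0.6]             # criteria satisfactions for x
y = [0.6, 0.3, 0.9, 1.0]             # criteria satisfactions for y
print(quantifier_guided(x, V, Q))    # ~0.610
print(quantifier_guided(y, V, Q))    # ~0.566 (0.567 in the text, which rounds the weights)
```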
For notational convenience we let

S_j = Σ_{k=1}^j u_k.

Using this notation we get

w_j = Q(S_j/T) − Q(S_{j−1}/T)

where S_{j−1} = Σ_{k=1}^{j−1} u_k.
u_j = V_k and u_{j+1} = V_l

or

u_j = V_l and u_{j+1} = V_k.

In the following we shall show it does not make a difference which of these
we use. Consider the first assignment. In this case the combined contribution
of the two tied criteria is

w_j b_j + w_{j+1} b_{j+1} = [Q(S_{j+1}/T) − Q(S_{j−1}/T)] b_j

since b_j = b_{j+1}. However, the second assignment gives the same value, since
S_{j+1} = S_{j−1} + V_k + V_l under either assignment. Thus we see that the choice
does not make any difference.
We shall now look at some special cases of this importance weighted
quantifier guided aggregation in order to get a better understanding of the
process. We first consider the special case when all the criteria have the same
importance. In this situation we have V_i = a for all i. In this case, independent
of the ordering of the A_i(x), the resulting u_j will all be equal to a; thus u_j = a
for all j. Hence in this case

w_j(x) = Q(ja/(na)) − Q((j − 1)a/(na))

and hence

w_j(x) = Q(j/n) − Q((j − 1)/n).

Thus the w_j are independent of x and are obtained in the same manner as those
obtained when we did not include the importances. Thus the case in which we
have equal importances for all the arguments results in the same structure as
the case in which we do not consider importances at all. In this case the weights
are the same for all x.
We now consider the special case when the linguistic quantifier Q is the
unitor quantifier Q(r) = r. Again starting with the satisfactions to the criteria,
in this case the weights become w_j = S_j/T − S_{j−1}/T = u_j/T.
Recalling that u_j is the importance associated with the criterion that provided
the value for b_j, we see that in this case we get

D(x) = Σ_{j=1}^n (u_j/T) b_j,

the ordinary weighted average. Thus the ordinary weighted average, the sum
of the products of the relative importances times the scores, is a special case of
the above method when the quantifier is the unitor quantifier.
We now consider the special case when the quantifier guiding the aggregation
is defined as

Q(r) = 0   r < g
Q(r) = 1   g ≤ r ≤ 1

Semantically this quantifier corresponds to at least g percent. To find the
OWA weights we again order the arguments and bring along their associated
importances; this process gives us the b_j and u_j. We then calculate the associated
weights.
The effective aggregation process in this case can be seen to be the following
simple process. We order the criteria satisfactions and carry along their associated
importances. This results in a table of the kind shown below.

Score   Importance   Proportion
b_1     u_1          S_1/T
b_2     u_2          S_2/T
b_3     u_3          S_3/T
...     ...          ...
b_n     u_n          S_n/T
We then select as our aggregated score the value b_{j*} for which the value S_{j*}/T
exceeds or is equal to g for the first time. A number of special cases of this
situation are worth noting. In the case when g = 1 we effectively have

Q(r) = 0   r < 1
Q(r) = 1   r = 1

This is the situation corresponding to the quantifier for all. In this case we see
that our approach has as its evaluation the smallest criteria satisfaction that
has nonzero importance. Thus in this case

D(x) = Min_{all i s.t. V_i ≠ 0} A_i(x).

At the other extreme, as g approaches zero the quantifier approaches the
existential quantifier there exists; thus in this case our value is the largest
satisfaction that has any nonzero importance.
Another special case of the above is the situation in which

Q(r) = 0   r < 1/2
Q(r) = 1   r ≥ 1/2

This case, which is a kind of median aggregation, has g = 1/2. Thus we select
the ordered criterion b_{j*} for which S_{j*}/T equals or exceeds 0.5 for the first time.
Thus we begin adding up the importances of the ordered satisfactions and stop
as soon as the total equals or exceeds one half the total importance. The value
of the ordered satisfaction that occurs at this point is our overall evaluation. In this
situation we have provided a methodology for obtaining a weighted median
aggregation.
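The selection procedure just described reduces to a short loop (function name is ours; data reuses the earlier example):

```python
def at_least_g_aggregation(scores, importances, g):
    """Aggregation guided by the step quantifier Q(r) = 0 for r < g, 1 for
    r >= g: walk down the ordered scores, accumulating importance, and return
    the first score whose cumulative importance proportion S_j/T reaches g."""
    pairs = sorted(zip(scores, importances), reverse=True)
    T = sum(importances)
    S = 0.0
    for b, u in pairs:
        S += u
        if S / T >= g:
            return b
    return pairs[-1][0]          # guard against rounding when g = 1

V = [1.0, 0.6, 0.5, 0.9]
x = [0.7, 1.0, 0.5, 0.6]
print(at_least_g_aggregation(x, V, 0.5))   # 0.7: the weighted median of the scores
print(at_least_g_aggregation(x, V, 1.0))   # 0.5: the smallest score (all importances nonzero)
```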
orness(W) = (1/(n − 1)) Σ_{j=1}^n w_j(n − j).

If the weights are generated from the importances, where S_j(x) is the sum of the
importances for the j highest scoring criteria under alternative x, we get for the
orness associated with the quantifier Q for the alternative x

orness(Q|x) = (1/(n − 1)) Σ_{j=1}^n [Q(S_j(x)/T) − Q(S_{j−1}(x)/T)](n − j).
In the above, the value of the G(i) term is obtained from the satisfactions of the
individual criteria,

G(i) = F(A_1(x), . . . , A_n(x)).

Since G(i) just stipulates that we need find any i criteria that are satisfied, we only
need consider the i most satisfied criteria. If we assume that b_j is the jth largest
of the criteria scores, then

G(i) = F(b_1, . . . , b_i).

Since G(i) requires that all the i criteria considered are satisfied, the appropriate
formulation for the construction of G(i) is to use a t-norm aggregation of the i
most satisfied criteria scores, b_1, . . . , b_i. Thus

G(i) = T(b_1, . . . , b_i)
where T is some t-norm operator. We recall that a t-norm is a mapping

T: [0, 1] × [0, 1] → [0, 1]

s.t.

(1) T(a, b) = T(b, a)   Commutativity
(2) T(a, b) ≥ T(c, d) if a ≥ c and b ≥ d   Monotonicity
(3) T(a, T(b, c)) = T(T(a, b), c)   Associativity
(4) T(1, a) = a   Identity of One

As extensively discussed in the literature, the t-norms provide a general class
of “and” aggregation operators.
Using these operators we can provide a general class of OWA operators.

DEFINITION.
An aggregation operator

F_T: I^n → I

is called a T type ordered weighted averaging operator of dimension n if it has
associated with it a weighting vector W such that w_j ∈ [0, 1] and Σ_{j=1}^n w_j = 1,
and where

F_T(a_1, . . . , a_n) = Σ_{j=1}^n w_j T(B_j)

where T is any t-norm and B_j = (b_1, b_2, . . . , b_j), with b_j the jth largest
of the a_i. We note T(B_1) is defined as b_1. In the above we call B_j the top j
dimension ordered bag of (a_1, . . . , a_n).
A number of special cases of this operator are worth noting. Assume T(a,
b) = Min[a, b]. In this case

T_Min(B_j) = Min[b_1, . . . , b_j] = b_j

and thus

F_Min(a_1, . . . , a_n) = Σ_{j=1}^n w_j b_j,

the ordinary OWA operator. If T is the product t-norm, T(a, b) = ab, then

F_Π(a_1, . . . , a_n) = Σ_{j=1}^n w_j (Π_{k=1}^j b_k).
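The T type OWA aggregation can be sketched as follows (function name is ours; by associativity, T(B_j) is built incrementally from T(B_{j-1})):

```python
def t_type_owa(args, weights, t_norm):
    """T type OWA: F_T(a_1, ..., a_n) = sum_j w_j * T(B_j), where B_j is the
    bag of the j largest arguments and T is a binary t-norm."""
    b = sorted(args, reverse=True)          # b_j = jth largest argument
    total, acc = 0.0, None
    for w, bj in zip(weights, b):
        acc = bj if acc is None else t_norm(acc, bj)   # acc = T(B_j); T(B_1) = b_1
        total += w * acc
    return total

a = [0.9, 0.5, 0.7]
w = [0.2, 0.3, 0.5]

# T = Min recovers the ordinary OWA operator, since T_Min(B_j) = b_j:
print(t_type_owa(a, w, min))                   # 0.2*0.9 + 0.3*0.7 + 0.5*0.5 = 0.64
# T = product: the jth term uses the running product b_1 * ... * b_j
print(t_type_owa(a, w, lambda p, q: p * q))    # 0.2*0.9 + 0.3*0.63 + 0.5*0.315 = 0.5265
```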
For this operator we note that if for some m, T(B_m) = 0, then T(B_j) = 0 for
all j ≥ m. Thus if m* is the smallest integer for which T(B_{m*}) = 0, then

F_T(a_1, . . . , a_n) = Σ_{j=1}^{m*−1} w_j T(B_j).
For the bounded difference t-norm, T(a, b) = Max[0, a + b − 1], we see that
T(B_j) = 0 for all j such that Σ_{k=2}^j (1 − b_k) ≥ b_1. Thus if m* is the smallest
integer for which Σ_{k=2}^{m*} (1 − b_k) ≥ b_1, then the aggregation again reduces
to a sum over the first m* − 1 terms.

THEOREM.
Assume T is any t-norm; then for any weighting vector W

T(a_1, . . . , a_n) ≤ F_T(a_1, . . . , a_n) ≤ Max_i[a_i].

This theorem follows directly from the monotonicity of the t-norm operators.
Inspired by the T type OWA operators we can consider a class of S type
OWA operators. Let S be any t-conorm;10 we recall that S satisfies the same
properties as a t-norm except that instead of (4) it has S(0, a) = a. The prototypical
example of a t-conorm is the Max operator. Assume (a_1, . . . , a_n) is a bag of
arguments to be aggregated and again let b_j be the jth largest of these. We shall
let D_j = (b_j, b_{j+1}, . . . , b_n). Thus D_j is the subbag of the n + 1 − j smallest
values to be aggregated. We now consider the aggregation

F_S(a_1, . . . , a_n) = Σ_{j=1}^n w_j S(D_j)

where S(D_j) is a t-conorm aggregation of the elements in D_j. We shall call this
the S type OWA aggregation.
If we consider the special case when S is equal to the Max we note that

S(D_j) = Max[b_j, b_{j+1}, . . . , b_n] = b_j.

Thus for S = Max

F_Max(a_1, . . . , a_n) = Σ_{j=1}^n w_j b_j,

which is the ordinary OWA operator. Recalling that Max is the smallest
t-conorm,10 it follows that for any S

Max(D_j) ≤ S(D_j)

and hence for any W and S

F_Max(a_1, . . . , a_n) ≤ F_S(a_1, . . . , a_n).
The following theorem mirrors the one for T type OWA aggregation operators.

THEOREM.
Assume S is any t-conorm; then for any weighting vector W

Min_i[a_i] ≤ F_S(a_1, . . . , a_n) ≤ S(a_1, . . . , a_n).

Furthermore, it can be easily shown that these operators are monotonic.
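The S type aggregation mirrors the T type sketch, except that S(D_j) is built from the tail of the ordered arguments (function name is ours):

```python
def s_type_owa(args, weights, s_conorm):
    """S type OWA: F_S = sum_j w_j * S(D_j), where D_j is the bag of the
    n + 1 - j smallest arguments (the tail of the descending ordering)."""
    b = sorted(args, reverse=True)
    n = len(b)
    # S(D_j) computed right-to-left: S(D_n) = b_n, S(D_j) = S(b_j, S(D_{j+1}))
    tails = [0.0] * n
    acc = None
    for j in range(n - 1, -1, -1):
        acc = b[j] if acc is None else s_conorm(b[j], acc)
        tails[j] = acc
    return sum(w * t for w, t in zip(weights, tails))

a = [0.9, 0.5, 0.7]
w = [0.2, 0.3, 0.5]
print(s_type_owa(a, w, max))                         # 0.64: S = Max gives the ordinary OWA
print(s_type_owa(a, w, lambda p, q: p + q - p * q))  # probabilistic sum t-conorm
```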
IX. CONCLUSION
We have looked at the issue of quantifier guided multicriteria decision
making. We have suggested that the OWA operators provide an appropriate
tool for the construction of these types of decision functions. We have suggested
a method for including importances associated with the different criteria to be
aggregated.
References
1. R.E. Bellman and L.A. Zadeh, “Decision-making in a fuzzy environment,” Manage.
Sci., 17(4), 141-164 (1970).
2. R.R. Yager, “Quantifiers in the formulation of multiple objective decision func-
tions,” Inf. Sci., 31, 107-139 (1983).
3. R.R. Yager, “Aggregating criteria with quantifiers,” in Proceedings of the Interna-
tional Symposium on Methodologies for Intelligent Systems, Knoxville, TN, pp.
183-189, 1986.
4. R.R. Yager, “On ordered weighted averaging aggregation operators in multi-criteria
decision making,” IEEE Trans. Syst. Man Cybern., 18, 183-190 (1988).
5. L.A. Zadeh, “A computational approach to fuzzy quantifiers in natural languages,”
Comput. Math. Appl., 9, 149-184 (1983).
6. R.R. Yager, “Connectives and quantifiers in fuzzy sets,” Fuzzy Sets Syst., 40,
39-76 (1991).
7. L. Zadeh, “The concept of a linguistic variable and its application to approximate
reasoning: Part 1,” Inf. Sci., 8, 199-249 (1975).
8. E.P. Klement, “Characterization of fuzzy measures constructed by means of trian-
gular norms,” J. Math. Anal. Appl., 86, 345-358 (1982).
9. R.R. Yager and D.P. Filev, Essentials of Fuzzy Modeling and Control, Wiley, New
York, 1994.
10. D. Dubois and H. Prade, “A review of fuzzy sets aggregation connectives,” Inf.
Sci., 36, 85-121 (1985).
11. R.R. Yager, “On mean type aggregation,” IEEE Transactions on Systems, Man
and Cybernetics, to appear.
12. R.R. Yager, “Families of OWA operators,” Fuzzy Sets Syst., 59, 125-148 (1993).
13. R.R. Yager, “Some extensions of constraint propagation of label sets,” Int. J.
Approximate Reasoning, 3, 417-436 (1989).
14. R.R. Yager, “Entropy and specificity in a mathematical theory of evidence,” Int.
J. General Syst., 9, 249-260 (1983).
15. R.R. Yager, “Measuring the quality of linguistic forecasts,” Int. J. Man Mach.
Stud., 21, 253-257 (1984).
16. R.R. Yager, “On the specificity of a possibility distribution,” Fuzzy Sets Syst., 50,
279-292 (1992).
17. L.A. Zadeh, “A theory of approximate reasoning,” in Machine Intelligence, Vol.
9, J. Hayes, D. Michie, and L. I. Mikulich, Eds., Halstead Press, New York, 1979,
pp. 149-194.
18. R.R. Yager, “Deductive approximate reasoning systems,” IEEE Trans. Knowl.
Data Eng., 3, 399-414 (1991).
19. D. Dubois and H. Prade, “Fuzzy sets in approximate reasoning Part I: Inference
with possibility distributions,” Fuzzy Sets Syst., 40, 143-202 (1991).
20. D. Dubois and H. Prade, “The principle of minimum specificity as a basis for
evidential reasoning,” in Uncertainty in Knowledge-Based Systems, B. Bouchon
and R. R. Yager, Eds., Springer-Verlag, Berlin, Germany, 1987, pp. 75-84.
21. M. O’Hagan, “Aggregating template or rule antecedents in real-time expert systems
with fuzzy set logic,” in Proceedings of the 22nd Annual IEEE Asilomar Conference
on Signals, Systems and Computers, Pacific Grove, CA, pp. 681-689, 1988.
22. D.P. Filev and R.R. Yager, “Learning OWA operator weights from data,” in Pro-
ceedings of the Third IEEE International Conference on Fuzzy Systems, Orlando,
FL, pp. 468-473, 1994.
23. R.R. Yager, “Fuzzy quotient operators for fuzzy relational data bases,” in Proceed-
ings of the International Fuzzy Engineering Symposium, Yokohama, Japan, pp.
289-296, 1991.