
Applied Mathematics and Computation 175 (2006) 1375–1384

www.elsevier.com/locate/amc

An algorithmic method to extend TOPSIS for decision-making problems with interval data

G.R. Jahanshahloo, F. Hosseinzadeh Lotfi, M. Izadikhah *

Department of Mathematics, Science and Research Branch, Islamic Azad University, P.O. Box 14515-775, Tehran, Iran

Abstract

In this paper, from among the multi-criteria models for making complex decisions and the multiple attribute models for selecting the most preferable choice, the technique for order preference by similarity to ideal solution (TOPSIS) approach is dealt with. In some cases, determining the exact values of the attributes precisely is difficult, and as a result their values are considered as intervals. The aim of this paper is therefore to extend the TOPSIS method to decision-making problems with interval data. By extending the TOPSIS method, an algorithm to determine the most preferable choice among all possible choices, when the data are interval, is presented. Finally, an example is given to illustrate the procedure of the proposed algorithm.
© 2005 Elsevier Inc. All rights reserved.

Keywords: MCDM; TOPSIS; Interval data; Positive ideal solution; Negative ideal solution

* Corresponding author. E-mail address: [email protected] (M. Izadikhah).

0096-3003/$ - see front matter © 2005 Elsevier Inc. All rights reserved.
doi:10.1016/j.amc.2005.08.048

1. Introduction

A decision-making problem is the process of finding the best option from all of the feasible alternatives. In almost all such problems the multiplicity of criteria for judging the alternatives is pervasive. That is, for many such problems, the decision maker wants to solve a multiple criteria decision making (MCDM) problem. Multiple criteria decision making may be considered as a complex and dynamic process including one managerial level and one engineering level [4]. The managerial level defines the goals and chooses the final "optimal" alternative. The multi-criteria nature of decisions is emphasized at this managerial level, at which public officials called "decision makers" have the power to accept or reject the solution proposed by the engineering level. These decision makers, who provide the preference structure, are "off line" from the optimization procedure done at the engineering level. An MCDM problem can be concisely expressed in matrix format as

$$
D = \begin{array}{c|cccc}
      & C_1    & C_2    & \cdots & C_n    \\ \hline
A_1   & x_{11} & x_{12} & \cdots & x_{1n} \\
A_2   & x_{21} & x_{22} & \cdots & x_{2n} \\
\vdots& \vdots & \vdots & \ddots & \vdots \\
A_m   & x_{m1} & x_{m2} & \cdots & x_{mn}
\end{array},
\qquad
W = [w_1, w_2, \ldots, w_n],
$$
where $A_1, A_2, \ldots, A_m$ are possible alternatives among which decision makers have to choose, $C_1, C_2, \ldots, C_n$ are criteria with which alternative performance is measured, $x_{ij}$ is the rating of alternative $A_i$ with respect to criterion $C_j$, and $w_j$ is the weight of criterion $C_j$.
The main steps of multiple criteria decision making are the following:

(a) Establishing system evaluation criteria that relate system capabilities to goals.
(b) Developing alternative systems for attaining the goals (generating alternatives).
(c) Evaluating alternatives in terms of criteria (the values of the criterion functions).
(d) Applying a normative multi-criteria analysis method.
(e) Accepting one alternative as "optimal" (preferred).
(f) If the final solution is not accepted, gather new information and go into the next iteration of multi-criteria optimization.

Steps (a) and (e) are performed at the upper level, where decision makers have the central role, and the other steps are mostly engineering tasks. For step (d), a decision maker should express his/her preferences in terms of the relative importance of criteria, and one approach is to introduce criteria weights. These weights in MCDM do not have a clear economic significance, but their use provides the opportunity to model the actual aspects of decision making (the preference structure).
In classical MCDM methods, the ratings and the weights of the criteria are known precisely [5,6]. A survey of these methods has been presented in Hwang and Yoon [6]. The technique for order preference by similarity to ideal solution (TOPSIS) [7], one of the known classical MCDM methods, was first developed by Hwang and Yoon [6] for solving MCDM problems. It is based upon the concept that the chosen alternative should have the shortest distance from the positive ideal solution and the farthest distance from the negative ideal solution. A similar concept has also been pointed out by Zeleny [8]. In the process of TOPSIS, the performance ratings and the weights of the criteria are given as exact values. Recently, Abo-Sinna and Amer [1] extended the TOPSIS approach to solve multi-objective nonlinear programming problems. Chen [2] extended the concept of TOPSIS to develop a methodology for solving multi-person multi-criteria decision-making problems in a fuzzy environment.
Under many conditions, exact data are inadequate to model real-life situations. For example, human judgements, including preferences, are often vague, and a decision maker cannot express his or her preference with exact numerical data; therefore, these data may have structures such as bounded data, ordinal data, interval data, and fuzzy data. In this paper, considering that in some cases it is difficult to determine the exact values of the attributes precisely, so that their values are instead given as intervals, we extend the concept of TOPSIS to develop a methodology for solving MCDM problems with interval data.
The rest of the paper is organized as follows: the next section briefly introduces the original TOPSIS method. In Section 3, we first introduce MCDM problems with interval data and then present an algorithm that extends TOPSIS to deal with interval data. In Section 4 we illustrate the proposed algorithmic method with an example. The final section concludes.

2. TOPSIS method

The TOPSIS (technique for order preference by similarity to an ideal solution) method is presented in Chen and Hwang [3], with reference to Hwang and Yoon [6]. TOPSIS is a multiple criteria method to identify solutions from a finite set of alternatives. The basic principle is that the chosen alternative should have the shortest distance from the positive ideal solution and the farthest distance from the negative ideal solution. The procedure of TOPSIS can be expressed in a series of steps:

(1) Calculate the normalized decision matrix. The normalized value $n_{ij}$ is calculated as
$$n_{ij} = x_{ij}\Big/\sqrt{\sum_{j=1}^{m} x_{ij}^{2}}, \quad j = 1, \ldots, m; \ i = 1, \ldots, n.$$

(2) Calculate the weighted normalized decision matrix. The weighted normalized value $v_{ij}$ is calculated as
$$v_{ij} = w_i\, n_{ij}, \quad j = 1, \ldots, m; \ i = 1, \ldots, n,$$
where $w_i$ is the weight of the $i$th attribute or criterion, and $\sum_{i=1}^{n} w_i = 1$.
(3) Determine the positive ideal and negative ideal solutions:
$$A^{+} = \{v_1^{+}, \ldots, v_n^{+}\} = \Big\{\big(\max_j v_{ij} \mid i \in I\big),\ \big(\min_j v_{ij} \mid i \in J\big)\Big\},$$
$$A^{-} = \{v_1^{-}, \ldots, v_n^{-}\} = \Big\{\big(\min_j v_{ij} \mid i \in I\big),\ \big(\max_j v_{ij} \mid i \in J\big)\Big\},$$
where $I$ is associated with benefit criteria, and $J$ is associated with cost criteria.
(4) Calculate the separation measures, using the $n$-dimensional Euclidean distance. The separation of each alternative from the ideal solution is given as
$$d_j^{+} = \left\{\sum_{i=1}^{n} \big(v_{ij} - v_i^{+}\big)^{2}\right\}^{1/2}, \quad j = 1, \ldots, m.$$
Similarly, the separation from the negative ideal solution is given as
$$d_j^{-} = \left\{\sum_{i=1}^{n} \big(v_{ij} - v_i^{-}\big)^{2}\right\}^{1/2}, \quad j = 1, \ldots, m.$$

(5) Calculate the relative closeness to the ideal solution. The relative closeness of the alternative $A_j$ with respect to $A^{+}$ is defined as
$$R_j = d_j^{-}\big/\big(d_j^{+} + d_j^{-}\big), \quad j = 1, \ldots, m.$$
Since $d_j^{-} \geq 0$ and $d_j^{+} \geq 0$, clearly $R_j \in [0, 1]$.
(6) Rank the preference order. Alternatives (DMUs) are ranked in decreasing order of this index.

The basic principle of the TOPSIS method is that the chosen alternative should have the "shortest distance" from the positive ideal solution and the "farthest distance" from the negative ideal solution. The TOPSIS method introduces two "reference" points, but it does not consider the relative importance of the distances from these points.

3. TOPSIS method with interval data

Considering that, in some cases, it is difficult to determine the exact values of the attributes precisely, so that their values are given as intervals, we now try to extend TOPSIS to such interval data. Suppose $A_1, A_2, \ldots, A_m$ are $m$ possible alternatives among which decision makers have to choose, $C_1, C_2, \ldots, C_n$ are criteria with which alternative performance is measured, and $x_{ij}$ is the rating of alternative $A_i$ with respect to criterion $C_j$; this rating is not known exactly, and we only know that $x_{ij} \in [x_{ij}^{L}, x_{ij}^{U}]$. An MCDM problem with interval data can be concisely expressed in matrix format as
$$
D = \begin{array}{c|cccc}
      & C_1 & C_2 & \cdots & C_n \\ \hline
A_1   & [x_{11}^{L}, x_{11}^{U}] & [x_{12}^{L}, x_{12}^{U}] & \cdots & [x_{1n}^{L}, x_{1n}^{U}] \\
A_2   & [x_{21}^{L}, x_{21}^{U}] & [x_{22}^{L}, x_{22}^{U}] & \cdots & [x_{2n}^{L}, x_{2n}^{U}] \\
\vdots& \vdots & \vdots & \ddots & \vdots \\
A_m   & [x_{m1}^{L}, x_{m1}^{U}] & [x_{m2}^{L}, x_{m2}^{U}] & \cdots & [x_{mn}^{L}, x_{mn}^{U}]
\end{array},
\qquad
W = [w_1, w_2, \ldots, w_n],
$$
where $w_j$ is the weight of criterion $C_j$.

3.1. The proposed algorithmic method

A systematic approach to extend TOPSIS to interval data is proposed in this section.
First we calculate the normalized decision matrix as follows. The normalized values $\bar{n}_{ij}^{L}$ and $\bar{n}_{ij}^{U}$ are calculated as
$$\bar{n}_{ij}^{L} = x_{ij}^{L}\Big/\sqrt{\sum_{j=1}^{m}\big[(x_{ij}^{L})^{2} + (x_{ij}^{U})^{2}\big]}, \quad j = 1, \ldots, m; \ i = 1, \ldots, n, \qquad (1)$$
$$\bar{n}_{ij}^{U} = x_{ij}^{U}\Big/\sqrt{\sum_{j=1}^{m}\big[(x_{ij}^{L})^{2} + (x_{ij}^{U})^{2}\big]}, \quad j = 1, \ldots, m; \ i = 1, \ldots, n. \qquad (2)$$
The interval $[\bar{n}_{ij}^{L}, \bar{n}_{ij}^{U}]$ is then the normalized form of the interval $[x_{ij}^{L}, x_{ij}^{U}]$. This normalization method preserves the property that the ranges of the normalized interval numbers belong to [0, 1].
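As a small worked illustration of formulas (1) and (2) (the numbers are hypothetical and not taken from the paper), consider one criterion $i$ rated on two alternatives by the intervals $[1, 2]$ and $[2, 4]$. The common denominator is
$$\sqrt{\sum_{j=1}^{2}\big[(x_{ij}^{L})^{2} + (x_{ij}^{U})^{2}\big]} = \sqrt{1^2 + 2^2 + 2^2 + 4^2} = \sqrt{25} = 5,$$
so the normalized intervals are $[\bar{n}_{i1}^{L}, \bar{n}_{i1}^{U}] = [1/5,\ 2/5] = [0.2,\ 0.4]$ and $[\bar{n}_{i2}^{L}, \bar{n}_{i2}^{U}] = [2/5,\ 4/5] = [0.4,\ 0.8]$, both of which indeed lie within [0, 1] as claimed.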
Considering the different importance of each criterion, we can construct the weighted normalized interval decision matrix as
$$\bar{v}_{ij}^{L} = w_i\, \bar{n}_{ij}^{L}, \quad j = 1, \ldots, m; \ i = 1, \ldots, n, \qquad (3)$$
$$\bar{v}_{ij}^{U} = w_i\, \bar{n}_{ij}^{U}, \quad j = 1, \ldots, m; \ i = 1, \ldots, n, \qquad (4)$$

where $w_i$ is the weight of the $i$th attribute or criterion, and $\sum_{i=1}^{n} w_i = 1$.
Then, we can identify the positive ideal solution and the negative ideal solution as
$$A^{+} = \{v_1^{+}, \ldots, v_n^{+}\} = \Big\{\big(\max_j \bar{v}_{ij}^{U} \mid i \in I\big),\ \big(\min_j \bar{v}_{ij}^{L} \mid i \in J\big)\Big\}, \qquad (5)$$
$$A^{-} = \{v_1^{-}, \ldots, v_n^{-}\} = \Big\{\big(\min_j \bar{v}_{ij}^{L} \mid i \in I\big),\ \big(\max_j \bar{v}_{ij}^{U} \mid i \in J\big)\Big\}, \qquad (6)$$
where $I$ is associated with benefit criteria, and $J$ is associated with cost criteria.
The separation of each alternative from the positive ideal solution, using the $n$-dimensional Euclidean distance, can be calculated as
$$d_j^{+} = \left\{\sum_{i \in I} \big(\bar{v}_{ij}^{L} - v_i^{+}\big)^{2} + \sum_{i \in J} \big(\bar{v}_{ij}^{U} - v_i^{+}\big)^{2}\right\}^{1/2}, \quad j = 1, \ldots, m. \qquad (7)$$
Similarly, the separation from the negative ideal solution can be calculated as
$$d_j^{-} = \left\{\sum_{i \in I} \big(\bar{v}_{ij}^{U} - v_i^{-}\big)^{2} + \sum_{i \in J} \big(\bar{v}_{ij}^{L} - v_i^{-}\big)^{2}\right\}^{1/2}, \quad j = 1, \ldots, m. \qquad (8)$$

A closeness coefficient is defined to determine the ranking order of all alternatives once the $d_j^{+}$ and $d_j^{-}$ of each alternative $A_j$ have been calculated. The relative closeness of the alternative $A_j$ with respect to $A^{+}$ is defined as
$$R_j = d_j^{-}\big/\big(d_j^{+} + d_j^{-}\big), \quad j = 1, \ldots, m. \qquad (9)$$
Obviously, an alternative $A_j$ is closer to $A^{+}$ and farther from $A^{-}$ as $R_j$ approaches 1. Therefore, according to the closeness coefficient, we can determine the ranking order of all alternatives and select the best one from among a set of feasible alternatives.
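The computation of formulas (1)-(9) can be expressed compactly in code. The Python sketch below is our own illustrative implementation under the conventions of this section (rows index alternatives, columns index criteria); the function and parameter names are hypothetical and not part of the paper.

import numpy as np

def interval_topsis(XL, XU, w, benefit):
    """Extended TOPSIS for interval data, formulas (1)-(9).

    XL, XU  : (m, n) arrays of lower/upper bounds x_ij^L and x_ij^U.
    w       : (n,) array of criterion weights.
    benefit : (n,) boolean array; True marks benefit criteria (I),
              False marks cost criteria (J).
    Returns the closeness coefficients R_j of the m alternatives.
    """
    XL, XU = np.asarray(XL, float), np.asarray(XU, float)
    w, benefit = np.asarray(w, float), np.asarray(benefit, bool)

    # (1)-(2): normalize both bounds by the same column denominator.
    denom = np.sqrt((XL ** 2 + XU ** 2).sum(axis=0))
    NL, NU = XL / denom, XU / denom
    # (3)-(4): weighted normalized interval decision matrix.
    VL, VU = w * NL, w * NU
    # (5)-(6): positive and negative ideal solutions.
    v_pos = np.where(benefit, VU.max(axis=0), VL.min(axis=0))
    v_neg = np.where(benefit, VL.min(axis=0), VU.max(axis=0))
    # (7): separation from the positive ideal
    #      (lower bounds on benefit criteria, upper bounds on cost criteria).
    d_pos = np.sqrt(np.where(benefit, (VL - v_pos) ** 2,
                                      (VU - v_pos) ** 2).sum(axis=1))
    # (8): separation from the negative ideal (bounds swapped).
    d_neg = np.sqrt(np.where(benefit, (VU - v_neg) ** 2,
                                      (VL - v_neg) ** 2).sum(axis=1))
    # (9): relative closeness; rank alternatives by decreasing R_j.
    return d_neg / (d_pos + d_neg)

Note that, mirroring formulas (7) and (8), the lower bounds are compared with $A^{+}$ and the upper bounds with $A^{-}$ on benefit criteria, and vice versa on cost criteria.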

3.2. The presented algorithm

In sum, an algorithm based on the extended TOPSIS approach to determine the most preferable choice among all possible choices, when the data are interval, is given in the following:

Step 1: Establishing system evaluation criteria that relate system capabilities to goals (identification of the evaluation criteria).
Step 2: Developing alternative systems for attaining the goals (generating alternatives).
Step 3: Evaluating alternatives in terms of criteria (the values of the criterion functions, which are intervals).
Step 4: Identifying the weights of the criteria.
Step 5: Construct the interval decision matrix and the interval normalized decision matrix (using formulas (1) and (2)).
Step 6: Construct the interval weighted normalized decision matrix (using formulas (3) and (4)).
Step 7: Determine the positive ideal solution and the negative ideal solution (identification of $A^{+}$ and $A^{-}$, using formulas (5) and (6)).
Step 8: Calculate the separation of each alternative from the positive ideal solution and the negative ideal solution, respectively (identification of $d_j^{+}$ and $d_j^{-}$, using formulas (7) and (8)).
Step 9: Calculate the relative closeness of each alternative to the positive ideal solution (identification of $R_j$, using formula (9)).
Step 10: Rank the preference order of all alternatives according to the closeness coefficient.
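As a usage illustration of the sketch given at the end of Section 3.1 (with hypothetical data, not the bank data of Section 4), Steps 5-10 for three alternatives rated on one benefit criterion and one cost criterion could be run as follows:

# Hypothetical interval ratings: rows = alternatives, columns = criteria.
XL = [[3.0, 10.0],          # lower bounds x_ij^L
      [5.0,  6.0],
      [4.0,  8.0]]
XU = [[4.0, 12.0],          # upper bounds x_ij^U
      [7.0,  7.0],
      [6.0,  9.0]]
w = [0.5, 0.5]              # Step 4: criteria weights
benefit = [True, False]     # first criterion is benefit, second is cost

R = interval_topsis(XL, XU, w, benefit)               # Steps 5-9
ranking = sorted(range(len(R)), key=lambda j: -R[j])  # Step 10
print(R, ranking)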

4. Numerical example

In this section, we work out a numerical example to illustrate the TOPSIS method for decision-making problems with interval data. A case study comparing 15 bank branches (A1, A2, ..., A15) in Iran was conducted to examine the applicability of this TOPSIS method with interval data. Four financial ratios (C1, C2, C3, C4) were identified as the evaluation criteria for these banks. (Note that Steps 1, 2, and 3 are thus done.)
Table 1
The interval decision matrix of 15 alternatives (entries are intervals [x_ij^L, x_ij^U])

      C1                   C2                    C3                C4
A1    [500.37, 961.37]     [2696995, 3126798]    [26364, 38254]    [965.97, 6957.33]
A2    [873.7, 1775.5]      [1027546, 1061260]    [3791, 50308]     [2285.03, 3174]
A3    [95.93, 196.39]      [1145235, 1213541]    [22964, 26846]    [207.98, 510.93]
A4    [848.07, 1752.66]    [390902, 395241]      [492, 1213]       [63.32, 92.3]
A5    [58.69, 120.47]      [144906, 165818]      [18053, 18061]    [176.58, 370.81]
A6    [464.39, 955.61]     [408163, 416416]      [40539, 48643]    [4654.71, 5882.53]
A7    [155.29, 342.89]     [335070, 410427]      [33797, 44933]    [560.26, 2506.67]
A8    [1752.31, 3629.54]   [700842, 768593]      [1437, 1519]      [58.89, 86.86]
A9    [244.34, 495.78]     [641680, 696338]      [11418, 24108]    [1070.81, 2283.08]
A10   [730.27, 1417.11]    [453170, 481943]      [2719, 2955]      [375.07, 559.85]
A11   [454.75, 931.24]     [309670, 342598]      [2016, 2617]      [936.62, 1468.45]
A12   [303.58, 630.01]     [286149, 317186]      [14918, 27070]    [1203.79, 4335.24]
A13   [658.81, 1345.58]    [321435, 347848]      [6616, 8045]      [200.36, 399.8]
A14   [420.18, 860.79]     [618105, 835839]      [24425, 40457]    [2781.24, 4555.42]
A15   [144.68, 292.15]     [119948, 120208]      [1494, 1749]      [282.73, 471.22]

Table 2
The interval normalized decision matrix (entries are intervals [n_ij^L, n_ij^U])

      C1                  C2                  C3                  C4
A1    [0.0856, 0.1645]    [0.5176, 0.6001]    [0.1974, 0.2865]    [0.0706, 0.5086]
A2    [0.1495, 0.3038]    [0.1972, 0.2037]    [0.0283, 0.3768]    [0.1670, 0.2320]
A3    [0.0164, 0.0336]    [0.2198, 0.2329]    [0.1720, 0.2010]    [0.0152, 0.0373]
A4    [0.1451, 0.2999]    [0.0750, 0.0758]    [0.0036, 0.0090]    [0.0046, 0.0067]
A5    [0.0100, 0.0206]    [0.0278, 0.0318]    [0.1352, 0.1352]    [0.0129, 0.0271]
A6    [0.0794, 0.1635]    [0.0783, 0.0799]    [0.3036, 0.3643]    [0.3403, 0.4300]
A7    [0.0265, 0.0586]    [0.0643, 0.0787]    [0.2531, 0.3365]    [0.0409, 0.1832]
A8    [0.2999, 0.6211]    [0.1345, 0.1475]    [0.0107, 0.0113]    [0.0043, 0.0063]
A9    [0.0418, 0.0848]    [0.1231, 0.1336]    [0.0855, 0.1805]    [0.0782, 0.1669]
A10   [0.1249, 0.2425]    [0.0869, 0.0925]    [0.0203, 0.0221]    [0.0274, 0.0409]
A11   [0.0778, 0.1593]    [0.0594, 0.0657]    [0.0151, 0.0196]    [0.0684, 0.1073]
A12   [0.0519, 0.1078]    [0.0549, 0.0608]    [0.1117, 0.2027]    [0.0880, 0.3169]
A13   [0.1127, 0.2302]    [0.0616, 0.0667]    [0.0495, 0.0602]    [0.0146, 0.0292]
A14   [0.0719, 0.1473]    [0.1186, 0.1604]    [0.1829, 0.3030]    [0.2033, 0.3330]
A15   [0.0247, 0.0500]    [0.0230, 0.0230]    [0.0111, 0.0131]    [0.0206, 0.0344]

Table 3
The interval weighted normalized decision matrix (entries are intervals [v_ij^L, v_ij^U])

      C1                   C2                    C3                  C4
A1    [0.0107, 0.0205]     [0.06471, 0.07502]    [0.0246, 0.0358]    [0.0088, 0.0635]
A2    [0.0186, 0.0379]     [0.0246, 0.0254]      [0.0035, 0.0471]    [0.0208, 0.0290]
A3    [0.0020, 0.0042]     [0.0274, 0.0291]      [0.0215, 0.0251]    [0.0019, 0.0046]
A4    [0.0181, 0.0374]     [0.0093, 0.0094]      [0.0004, 0.0011]    [0.0005, 0.0008]
A5    [0.0012, 0.0025]     [0.0034, 0.0039]      [0.0169, 0.0169]    [0.0016, 0.0033]
A6    [0.0099, 0.0204]     [0.0097, 0.0099]      [0.0379, 0.0455]    [0.0425, 0.0537]
A7    [0.0033, 0.0073]     [0.0080, 0.0098]      [0.0316, 0.0420]    [0.0051, 0.0229]
A8    [0.0374, 0.07766]    [0.0168, 0.0184]      [0.0013, 0.0014]    [0.0005, 0.0007]
A9    [0.0052, 0.0106]     [0.0153, 0.0167]      [0.0106, 0.0225]    [0.0097, 0.0208]
A10   [0.0156, 0.0303]     [0.0108, 0.0115]      [0.0025, 0.0027]    [0.0034, 0.0051]
A11   [0.0097, 0.0199]     [0.0074, 0.0082]      [0.0018, 0.0024]    [0.0085, 0.0134]
A12   [0.0064, 0.0134]     [0.0068, 0.0076]      [0.0139, 0.0253]    [0.0110, 0.0396]
A13   [0.0140, 0.0287]     [0.0077, 0.0083]      [0.0061, 0.0075]    [0.0018, 0.0036]
A14   [0.0089, 0.0184]     [0.0148, 0.0200]      [0.0228, 0.0378]    [0.0254, 0.0416]
A15   [0.0030, 0.0062]     [0.0028, 0.0028]      [0.0013, 0.0016]    [0.0025, 0.0043]

Table 4
Distance of each alternative from the positive ideal solution

d_1^+ = 0.063    d_2^+ = 0.087    d_3^+ = 0.082    d_4^+ = 0.108    d_5^+ = 0.099
d_6^+ = 0.071    d_7^+ = 0.090    d_8^+ = 0.123    d_9^+ = 0.088    d_10^+ = 0.102
d_11^+ = 0.099   d_12^+ = 0.093   d_13^+ = 0.103   d_14^+ = 0.077   d_15^+ = 0.105

Table 5
Distance of each alternative from the negative ideal solution

d_1^- = 0.122    d_2^- = 0.083    d_3^- = 0.083    d_4^- = 0.059    d_5^- = 0.078
d_6^- = 0.097    d_7^- = 0.088    d_8^- = 0.043    d_9^- = 0.079    d_10^- = 0.062
d_11^- = 0.069   d_12^- = 0.085   d_13^- = 0.064   d_14^- = 0.089   d_15^- = 0.074

Table 6
Closeness coefficient and ranking
Alternative      R_j            Rank
A1 0.659352269 1
A2 0.48911912 6
A3 0.505445965 4
A4 0.355647462 14
A5 0.440416214 9
A6 0.57596554 2
A7 0.494120485 5
A8 0.258369495 15
A9 0.473078522 8
A10 0.379417215 13
A11 0.409684296 11
A12 0.477519948 7
A13 0.38233013 12
A14 0.538211999 3
A15 0.415388351 10


Step 4: Suppose that the vector of corresponding weights of the criteria is as follows:
W = [0.125, 0.125, 0.125, 0.125].
Step 5: The interval decision matrix and the interval normalized decision matrix are shown in Tables 1 and 2, respectively.
Step 6: The interval weighted normalized decision matrix is shown in Table 3.
Step 7: The positive ideal solution and the negative ideal solution are then determined as
$$A^{+} = [0.001255569,\ 0.075023386,\ 0.047105492,\ 0.063583238],$$
$$A^{-} = [0.077647586,\ 0.002877994,\ 0.00046068,\ 0.000538197].$$

Step 8: A comparison between the normalized performance ratings of each alternative $A_j$ and $A^{+}$ by Eq. (7) (shown in Table 4), and between those of $A_j$ and $A^{-}$ by Eq. (8) (shown in Table 5), indicates how each bank is performing compared with the best and the worst performance of all the bank branches with respect to each criterion.
Step 9: Calculate the relative closeness of each alternative to the positive ideal solution, as given in Table 6.
Step 10: According to the closeness coefficient, the preference order of all alternatives is ranked as in Table 6.

5. Conclusion

Considering that, in some cases, it is difficult to determine the exact values of the attributes precisely, so that their values are given as intervals, in this paper TOPSIS has been extended to interval data. Also, an algorithm to determine the most preferable choice among all possible choices, when the data are interval, has been presented. In this algorithmic method, in addition to the distance of a DMU from the positive ideal solution, its distance from the negative ideal solution is also considered. That is to say, the smaller the distance of the DMU under evaluation from the positive ideal solution and the larger its distance from the negative ideal solution, the better its ranking.

References

[1] M.A. Abo-Sinna, A.H. Amer, Extensions of TOPSIS for multi-objective large-scale nonlinear
programming problems, Applied Mathematics and Computation 162 (2005) 243–256.
[2] C.T. Chen, Extensions of the TOPSIS for group decision-making under fuzzy environment,
Fuzzy Sets and Systems 114 (2000) 1–9.
[3] S.J. Chen, C.L. Hwang, Fuzzy Multiple Attribute Decision Making: Methods and Applications,
Springer-Verlag, Berlin, 1992.
[4] L. Duckstein, S. Opricovic, Multiobjective optimization in river basin development, Water
Resources Research 16 (1) (1980) 14–20.
[5] J.S. Dyer, P.C. Fishburn, R.E. Steuer, J. Wallenius, S. Zionts, Multiple criteria decision making,
Multiattribute utility theory: The next ten years, Management Science 38 (5) (1992) 645–654.
[6] C.L. Hwang, K. Yoon, Multiple Attribute Decision Making: Methods and Applications, Springer, Berlin Heidelberg, 1981.
[7] Y.J. Lai, T.Y. Liu, C.L. Hwang, TOPSIS for MODM, European Journal of Operational
Research 76 (3) (1994) 486–500.
[8] M. Zeleny, Multiple Criteria Decision Making, McGraw-Hill, New York, 1982.
