EFFICIENT DECISION SUPPORT SYSTEMS – PRACTICE AND CHALLENGES IN MULTIDISCIPLINARY DOMAINS
Edited by Chiang S. Jao
Efficient Decision Support Systems –
Practice and Challenges in Multidisciplinary Domains
Edited by Chiang S. Jao
Published by InTech
Janeza Trdine 9, 51000 Rijeka, Croatia
Statements and opinions expressed in the chapters are those of the individual contributors
and not necessarily those of the editors or publisher. No responsibility is accepted
for the accuracy of information contained in the published articles. The publisher
assumes no responsibility for any damage or injury to persons or property arising out
of the use of any materials, instructions, methods or ideas contained in the book.
Series Preface
This series is directed to diverse managerial professionals who are leading the
transformation of individual domains by using expert information and domain
knowledge to drive decision support systems (DSSs). The series offers a broad range of
subjects addressed in specific areas such as health care, business management,
banking, agriculture, environmental improvement, natural resource and spatial
management, aviation administration, and hybrid applications of information technology aimed at interdisciplinary issues.
This book series is composed of three volumes: Volume 1 consists of general concepts
and methodology of DSSs; Volume 2 consists of applications of DSSs in the biomedical
domain; Volume 3 consists of hybrid applications of DSSs in multidisciplinary
domains. The book shapes decision support strategies within a new infrastructure that assists readers in making full use of creative technology to manipulate input data and to transform information into useful decisions for decision makers. This book series is dedicated to supporting professionals and series readers in the emerging field of DSS.
Preface
Section 2, comprising Chapters 10 through 13, presents a set of DSSs aimed at water resource management and planning issues. Chapter 10 focuses on improving water delivery operations in irrigation systems through the innovative use of water DSSs. Chapter 11 uses one of the latest biophysical watershed-level modeling tools to estimate the effects of land use change on water quality. Chapter 12 presents a flood prediction model with visual representation for decision making in early flood warning and impact analysis. Chapter 13 presents an integrated sustainability analysis model that provides holistic decision evaluation and support addressing the environmental and social issues in green operations management.
Section 3, comprising Chapters 14 and 15, presents two DSSs applied in the agricultural domain. Chapter 14 integrates agricultural knowledge and data representation using a fuzzy logic methodology that generalizes decision tree algorithms when uncertainty (missing data) exists. Chapter 15 presents Web-based DSS models dealing respectively with surface irrigation and sustainable pest management in the agricultural domain.
Section 4, comprising Chapters 16 and 17, illustrates two spatial DSSs applied to multidisciplinary areas. Chapter 16 presents a DSS for location decisions of taxicab stands in an urban area with the assistance of geographical information system (GIS) and fuzzy logic techniques. Chapter 17 presents a spatial DSS integrated with GIS data to assist managers in identifying and managing impending crisis situations.
Section 5, comprising Chapters 18 and 19, emphasizes the importance of DSS applications in risk analysis and crisis management. In a world experiencing recurrent risks and crises, it is essential to establish intelligent risk and crisis management systems at the managerial level that support appropriate decision making strategies and reduce the resulting losses.
The book concludes with Section 6, which covers a set of DSS applications adopted in aviation, power management, warehousing, and climate monitoring respectively. Chapter 20 presents an aviation maintenance DSS to promote the levels of airworthiness, safety and reliability of aircraft and to reduce indirect costs due to frequent maintenance. Chapter 21 introduces a fuzzy logic DSS to assist decision makers in better ranking power consumption in the rock sawing process. Chapter 22 presents a DSS for efficient storage allocation that integrates management decisions in a warehousing system. Chapter 23 investigates challenges in climate-driven DSSs in forestry. By including data on prior damage caused by changes in weather and forest structure, together with forest management activities, a DSS model is presented to assist forest managers in assessing the damage and projecting future risk factors while monitoring climate change.
Chiang S. Jao
Transformation, Inc., Rockville, Maryland
University of Illinois (Retired), Chicago, Illinois
Part 1
Applications in Business
1
Application of Decision Support System in Improving Customer Loyalty: From the Banking Perspectives
1. Introduction
The focus of this research is on customer relationship management (CRM) and its adoption
in Taiwan’s banking industry. The concept of CRM and its benefits have been widely
acknowledged. Kincaid (2003, p. 47) said, “CRM deliver value because it focuses on
lengthening the duration of the relationship (loyalty).” Motley (2005) found that satisfiers
keep customers with the bank while dissatisfiers eventually chase them out. Earley (2003)
pointed out the necessity of holistic CRM strategies for every company because today even
the most established brands no longer secure lasting customer loyalty. It seems clear that
customer relationship management is critical for all service firms, including the banks.
2. Literature review
A body of previous studies on this topic lends a solid basis to the present investigation. This
literature covers the following sub-areas: (1) introduction of customer relationship management, (2) measurement of success with CRM, and (3) CRM technologies and success with CRM.
Once a CRM system has been implemented, an ultimate question arises: whether the CRM adoption can be considered a success. Since there exist many measures to assess IT adoption success, care must be taken when selecting the appropriate approach for analyzing CRM implementations in banks. In the present study, the author chose to assess CRM implementation success by process efficiency and IT product quality. These approaches make it possible to highlight the goals that need to be managed more actively during the CRM introduction process to make the adoption of CRM systems a success.
assessing and enhancing users’ subjective evaluations of the system (Yahaya, Deraman,
and Hamdan, 2008; Ortega, Perez, and Rojas, 2003). A key management objective when
dealing with information products is to understand the value placed by users on these IT
products. In contrast to the technical focus of CRM system’s quality assurance research,
customer satisfaction is an important objective of TQM (total quality management)
initiatives. Customers have specific requirements, and products/services that effectively
meet these requirements are perceived to be of higher quality (Deming, 1986; Juran, 1986). A
similar perspective is evident in the IS management studies, where significant attention has
been paid to understanding user requirements and satisfying them. Research has focused
on identifying the dimensions of this construct and on developing reliable and valid instruments for its measurement (Bailey and Pearson, 1983; Galletta and Lederer, 1989; Ives,
Olson, and Baroudi, 1983). It seems clear that, for Taiwan’s banking industry, product
quality is an important variable in determining the success of CRM.
modular and flexible CRM/ERP systems could mitigate some of the legacy-application
problems with the banks since CRM/ERP systems may enable the banks to align their
business processes more efficiently. Although the CRM systems are able to provide huge
direct and indirect benefits, potential disadvantages, e.g., lack of appropriate CRM/ERP
package, can be enormous and can even negatively affect a bank's success with CRM
adoption. Scott and Kaindl (2000) found that approximately 20% of functionalities/modules
needed are not even included in the systems. This rate may arguably be even higher in
banking, possibly creating the impression that CRM/ERP systems with appropriate
functionality coverage are not available at all. In addition to the system package, the
pressure from CRM/ERP vendors to upgrade is an issue. This pressure has become a
problem due to recent architectural changes that systems of major CRM/ERP vendors face.
That is, with the announcement of mySAP CRM as the successor of SAP R/3 and its arrival
in 2004, SAP’s maintenance strategy has been extensively discussed while users fear a strong
pressure to upgrade (Davenport, 2000; Shang & Seddon, 2002).
According to Hsu and Rogero (2003), in Taiwan, the major CRM vendors are the well-
known brand names, such as IBM, Oracle, Heart-CRM, and SAP. Confronted with so many
vendors, organizations that wish to implement a CRM system will need to choose wisely. It
has been recognized that the service ability of vendors would affect the adoption of
information technologies (Thong and Yap, 1996). Lu (2000) posited that the better the
training programs and technical support supplied by a CRM vendor, the lower the failure
rate will be in adopting a CRM system. It is generally believed that CRM vendors should
emphasize a bank’s CRM adoption factors such as costs, sales, competitors, expertise of
CRM vendors, managers’ support, and operational efficiency (H. P. Lu, Hsu & Hsu, 2002).
For banks that have not yet established a CRM system, CRM vendors should not only
convince them to install CRM systems, but also supply omnifarious solutions and complete
consulting services. For those banks that have already established a CRM system, in
addition to enhancing after-sales services, the CRM vendors should strive to integrate the
CRM systems into the legacy information systems. When companies choose their CRM
vendors, they face many alternatives (Tehrani, 2005). Borck (2005) believed that the current
trend among CRM vendors is moving toward producing tools that satisfy the basic needs of
sales and customer support. The ultimate goal of a bank in choosing a CRM vendor is to
enable the IT application intelligence to drive the revenues to the top line. Based on these
views, it may be concluded that choosing a reputable CRM vendor and benefiting from its
expertise are prerequisites for a successful CRM adoption in Taiwan's banking industry.
2.4 Summary
In conclusion, this author, based on the previous studies, would like to test the relationship
between CRM technologies and success with CRM. Table 1 sums up the factors selected
from the literature review to be tested in the present study.
Factors tested in the present study:

Measurements of CRM success
  A. CRM deployment
  B. Process efficiency
  C. Product quality

CRM technologies
  CT1  Conducting the decision support system (DSS)
  CT2  Customizing CRM functions/modules
  CT3  Choosing reputed CRM vendors
  CT4  Drawing on the expertise of CRM vendors
  CT5  Pressure from CRM/ERP vendors to upgrade

Table 1. Factors selected from the literature review
3. Research methodology
The objective of this study is to examine the impact of a variety of relevant factors on the
CRM success in Taiwan’s banking industry. This section presents the research question,
hypothesis, and statistical techniques for hypothesis testing.
3.3.1 Introduction
Figure 1 summarizes the processes of the two statistical methods used in the present study.
4. Results
In the present study, this author would like to test the causal relationship between CRM
technologies and success with CRM. The hypothesis of the study, the variables involved and
the statistical methods are described in the following.
Tested hypothesis: H1: The CRM technology will be positively associated with successful
CRM adoption in Taiwan's Banking industry
Observed variables: CT1 (Conducting the decision support system), CT2 (Customizing
CRM functions/modules), CT3 (Choosing reputed CRM vendors), CT4
(Drawing on the expertise of CRM vendors), CT5 (Pressure from CRM/ERP vendors to upgrade), CT6 (Non-availability of appropriate CRM/ERP
packages), CRM deployment, Process efficiency, and Product quality
Latent variables: CRM technologies; success with CRM
Statistical method: Structural-equation modeling
The writing in the remaining part of this section is organized into the following subsections:
(a) offending estimates, (b) construct reliability and average variance extracted and (c)
goodness-of-fit.
a. Offending Estimates
As shown in Table 1, the standard error ranges from 0.001 to 0.006. There is no negative
standard error in this model.
                                              Estimate
Success with CRM    <--- CRM technologies     .381
CT1                 <--- CRM technologies     .563
CT2                 <--- CRM technologies     .665
CT3                 <--- CRM technologies     .863
CT4                 <--- CRM technologies     .983
CT5                 <--- CRM technologies     .892
CT6                 <--- CRM technologies     .774
CRM Deployment      <--- Success with CRM     .791
Process Efficiency  <--- Success with CRM     .895
Product Quality     <--- Success with CRM     .738
Note: Estimate = Standardized Coefficients
Table 2. Standardized Regression Weights: Default model
b. Construct reliability and average variance extracted
The construct reliability of CRM technologies was calculated (with a suggested lower limit of 0.70) using

    ρ_c = (Σ λ_i)² / [ (Σ λ_i)² + Σ ε_i ]                                (1)

where ρ_c is the construct reliability of CRM technologies, λ_i are the standardized loadings (the standardized coefficients) for CRM technologies, and ε_i are the corresponding error variances. Based on the data in Table 3, the construct reliability of CRM technologies is

    ρ_c = 0.9863                                                         (2)

The construct reliability of success with CRM in this model was calculated (with a suggested lower limit of 0.70) using the same formula,

    ρ_c = (Σ λ_i)² / [ (Σ λ_i)² + Σ ε_i ]                                (3)

where ρ_c is the construct reliability of success with CRM, λ_i are the standardized loadings for success with CRM, and ε_i are the corresponding error variances. Based on the data in Table 3, the construct reliability of success with CRM in this model is

    ρ_c = 0.9882                                                         (4)

The average variance extracted of CRM technologies was calculated (with a suggested lower limit of 0.50) using

    AVE = Σ λ_i² / ( Σ λ_i² + Σ ε_i )                                    (5)

Based on the data in Table 3, the average variance extracted of CRM technologies is

    AVE = 0.9708                                                         (6)

The average variance extracted of success with CRM in this model was calculated (with a suggested lower limit of 0.50) using the same formula (7). Based on the data in Table 3, the average variance extracted of success with CRM is

    AVE = 0.9657                                                         (8)
To sum up, the construct reliability and average variance extracted in this model are considered acceptable, as all of them are much higher than the suggested values (0.70 and 0.50). This means the inner quality of this model is acceptable and the model deserves further analysis.
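As a concrete illustration of Equations 1 and 5, the following sketch computes construct reliability and average variance extracted from lists of standardized loadings and error variances. The loadings are the CT1–CT6 coefficients from Table 2; the error variances are placeholders, since the Table 3 values are not reproduced here.

```python
def construct_reliability(loadings, errors):
    """Equation 1: (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    s = sum(loadings)
    return s ** 2 / (s ** 2 + sum(errors))

def average_variance_extracted(loadings, errors):
    """Equation 5: sum of squared loadings / (sum of squared loadings + sum of error variances)."""
    sq = sum(l ** 2 for l in loadings)
    return sq / (sq + sum(errors))

# Standardized loadings of CT1..CT6 on CRM technologies (Table 2);
# the error variances below are illustrative placeholders, not the Table 3 data.
loadings = [0.563, 0.665, 0.863, 0.983, 0.892, 0.774]
errors = [0.05] * 6

print(construct_reliability(loadings, errors))
print(average_variance_extracted(loadings, errors))
```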
c. Goodness-of-fit
Note: Chi-Square =Discrepancy; P = Probability Levels; GFI = Goodness of Fit Index; AGFI =
Adjusted Goodness of Fit Index; NFI = Normed Fit Index; TLI = Tucker-Lewis Index; RMSEA =
Root Mean Square Error of Approximation
Fig. 2. Path Diagram
Based on the findings described above, it seems clear that H1 — The CRM technology will
be positively associated with successful CRM adoption in Taiwan's Banking industry — is
accepted, and the null hypothesis is rejected.
Chou (2006) took artificial neural network (ANN) technology as one of the most popular methods for evaluating credit-card loan decisions in Taiwan's banking industry, and singled out the support vector machine (SVM), an ANN technology. Both Chou (2006) and Hung (2005) held the predictive power of the support vector machine to be better than that of other tools. Not only does it outperform the traditional statistical approach in precision, it also achieves better results than the neural network. With the incorporation of an SVM into individual credit-card loan systems, banks will be able to render a highly effective evaluation of customers' credit conditions while decreasing the probability of bad debts. Equipped with complete mathematical theories, SVM can comply with the principle of risk minimization and deliver better performance. It is anticipated that properly applying SVM theory to a bank's loan decisions will be a great asset. Clearly, conducting decision support systems could help Taiwan's banking industry analyze its customers more accurately. CRM gathers customer data while tracking the customers; what is important here is using those data to effectively manage customer relations. The foregoing discussion, in fact, resonates with the statistical results of the present study.
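As a hedged illustration of how an SVM might support credit-card loan screening, the sketch below trains a scikit-learn support vector classifier on synthetic applicant data. The feature names and data are invented for illustration and are not taken from Chou (2006) or Hung (2005).

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic applicant records: [monthly income, existing debt, years with the bank]
X = rng.normal(loc=[50_000, 20_000, 5], scale=[15_000, 10_000, 3], size=(500, 3))
# Synthetic label: 1 = good credit risk, 0 = likely bad debt (toy rule plus noise)
y = ((X[:, 0] - 0.8 * X[:, 1] + rng.normal(0, 5_000, 500)) > 30_000).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Scale the features, then fit an RBF-kernel SVM
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))
```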
5. Conclusion
In the present study, CRM technologies, containing six observed variables [i.e., CT1 (conducting the decision support system), CT2 (customizing CRM functions/modules), CT3 (choosing reputed CRM vendors), CT4 (drawing on the expertise of CRM vendors), CT5 (pressure from CRM/ERP packages), and CT6 (non-availability of appropriate CRM/ERP packages)], indeed help CRM adoption in Taiwan's banking industry. Moreover, as mentioned in the research results, four variables with special (high or negative) correlation coefficients need to be singled out; among them are CT1 (conducting the decision support system) and CT2 (customizing CRM functions/modules), both with a high coefficient weight (0.59). Thus, if the banks in Taiwan wish to deploy a CRM program, they should consider the following suggestion:
Conducting the decision support systems (DSS): If the goal of implementing a CRM system is to improve process efficiency and IT product quality, managers in Taiwan's banking industry should place more emphasis on conducting decision support systems (DSS) (e.g., artificial neural network (ANN) technology) as one of the customized modules in the whole CRM system.
6. References
Bailey, J. E. & Pearson, S. W. (1983). Development of a tool for measuring and analyzing computer user satisfaction. Management Science, Vol.29, pp. 519-529.
Baldock, R. (2001). Cold comfort on the CRM front. The Banker, Vol.151, pp. 114.
Fadlalla, A. & Lin, C. (2001). An analysis of the applications of neural networks in finance.
Interfaces, Vol.31, pp. 112-122.
Galletta, F. G. & Lederer, A. L. (1989). Some cautions on the measurement of user
information satisfaction. Decision Sciences, Vol.20, pp. 419-438.
Hormazi, A. M. & Giles, S. (2004). Data mining: a competitive weapon for banking and retail
industries. Information Systems Management, Vol.21, pp. 62-71.
Huang, Y. H. (2004). E-commerce technology – CRM implementation and its integration
with enterprise systems. Unpublished Doctoral dissertation, Golden Gate
University, California.
Huang, C. C. & Lu, C. L. (2003). The processing functions and implementation factors of
Customer Relationship Management Systems in the local banking industry. Journal
of Information Management, Vol.5, pp. 115-127.
Hung, L. M. (2005). An application of support vector machines and neural networks on
banks loan evaluation. Unpublished Master’s thesis, National Taiwan University of
Science and Technology, Taipei, Taiwan, R.O.C.
Hsu, B. & Rogero, M. (2003). Taiwan CRM implementation survey report--2002. Available
from http://www.rogero.com/articles/taiwan-crm-report-article1.htm
Ives, B., Olson, M.H. & Baroudi, J. J. (1983). The measurement of user information
satisfaction. Communications of the ACM, Vol. 26, pp. 785-793.
Jutla, D. & Bodorik, P. (2001). Enabling and measuring electronic customer relationship
management readiness. In The 34th Hawaii International Conference on System
Science, 3-6 January 2001. Hawaii, U.S.A.
Juran, J. M. (1986). Quality control handbook. McGraw-Hill, New York.
Kincaid, J. W. (2003). Customer relationship management getting it right! Prentice Hall, NJ.
Kohli, R., Piontek, F., Ellington, T., VanOsdol, T., Shepard, M. & Brazel, G. (2001). Managing
customer relationships through E-business decision support applications: A case of
hospital-physician collaboration. Decision Support System, Vol. 32, pp. 171-187.
Lai, K. Y. (2006). A study of CRM system from its current situation and effective factors
perspectives. Unpublished Master’s thesis, National Central University, Chung Li,
Taiwan.
Levitt, T. (1960). Marketing myopia. Harvard Business Review, July-August, pp. 3-23.
Li, M. C. (2003). Object-oriented analysis and design for service-oriented customer
relationship management information systems. Unpublished Master’s thesis,
Chaoyang University of Technology, Taichung, Taiwan, R.O.C.
Liao, S. J. (2003). The study of decision factors for customer relationship management
implementation and customer satisfaction in Taiwan banking industry.
Unpublished Master’s thesis, National Taipei University, Taipei, Taiwan, R.O.C.
Lin, F. H. & Lin, P. Y. (2002). A Case study of CRM implementation: Cathay United Bank as
an example. In The 13th International Conference on Information Management, 25
May 2002. Taipei, Taiwan, Republic of China.
Lu, K. L. (2000). The study in CRM systems adoption effective factors in Taiwan’s business.
Unpublished Master’s thesis, National Taiwan University, Taipei, Taiwan, R.O.C.
Lu, H. P., Hsu, H. H. & Hsu, C. L. (2002). A study of the bank’s establishment on customer
relationship management. In The 13th International Conference on Information
Management, 25 May 2002. Taipei, Taiwan, Republic of China.
Luo, W. S., Ye, R. J. & Chio, J. C. (2003). The integration of CRM and Web-ERP system. In
The 2003 Conference of Information Technology and Applications in Outlying
Islands, 30 May 2003. Peng-Hu, Taiwan, Republic of China.
Moormann, J. (1998). Status quo and perspectives of information processing in banking, Working Paper, Hochschule für Bankwirtschaft, Frankfurt, Germany.
Motley, L. B. (1999). Are your customers as satisfied as your bank's shareholders? Bank Marketing, Vol.31, pp. 44.
Motley, L. B. (2005). The benefits of listening to customers. ABA Banking Marketing, Vol. 37,
pp. 43.
Ortega, M., Perez, M. A., & Rojas, T. (2003). Construction of a systemic quality model for
evaluating a software product. Software Quality Journal, Vol.11, pp. 219-242.
Peppers, D. & Rogers, M. (1993). The one to one future: Building relationships one customer at a
time. Bantam Doubleday Dell Publishing Group, New York.
Rebouillon, J., & Muller, C. W. (2005). Cost management as a strategic challenge. In Z.
Sokolovsky & S. Loschenkohl (Eds.), Handbuch Industrialisierung der
Finanzwirtschaft (pp. 693-710). Wiesbaden: Gabler Verlag.
Scott, J. E., & Kaindl, L. (2000). Enhancing functionality in an enterprise software package.
Information and Management, Vol.37, pp. 111-122.
Shang, S., & Seddon, P. B. (2002). Assessing and managing the benefits of enterprise
systems: The business manager’s perspective. Information Systems Journal, Vol.12,
pp. 271-299.
Shermach, K. (2006). Online banking: Going above and beyond security. Available from
http://www.crmbuyer.com/story/53901.html
Sheshunoff, A. (1999). Winning CRM strategies. American Bankers Association ABA Banking
Journal, Vol.91, pp. 54-66.
Swift, R. S. (2001). Accelerating customer relationships: Using CRM and relationship technologies.
Prentice Hall PTR, NJ.
Tehrani, R. (2005). A host of opinions on hosted CRM. Customer Inter@ction Solutions,
Vol.23, pp. 14-18.
Thong, J. Y. L. & Yap, C. S. (1995). CEO characteristics, organizational characteristics and
information technology adoption in small business. Omega, Vol.23, pp. 429-442.
Trepper, C. (2000). Customer care goes end-to-end. Information Week, March, pp. 55-73.
Turban, E., Aronson, J. E. & Liang, T. P. (2004). Decision support systems and intelligent systems
(7th ed.). Prentice Hall PTR, NJ.
Veitinger, M., & Löschenkohl, S. (2005). Applicability of industrial practices to the business processes of banks. In Z. Sokolovsky & S. Löschenkohl (Eds.), Handbuch Industrialisierung der Finanzwirtschaft, (pp. 397-408). Wiesbaden: Gabler Verlag.
Yahaya, J. H., Deraman, A., & Hamdan, A. R. (2008). Software quality from behavioural and
human perspectives. International Journal of Computer Science and Network Security,
Vol.8, pp. 53-63.
2
Intelligent Agent Technology in Modern Production and Trade Management
1. Introduction
Technological progress is constant, and each year presents new growth opportunities for different people and companies. Modern production and trade management process terabytes of gathered statistical data to gain useful information for different management tasks. The most common tasks that arise are forecasting the demand for a new product, knowing only the product's descriptive data, and creating a production plan for a product in its different product life cycle phases. As the amount of data grows, it becomes impossible to analyse it without using modern intelligent technologies.
Trade management is strictly connected with sales forecasting. Achieving accurate sales
forecasting will always remain among the most important tasks. Because of that, different solutions are introduced each year in order to surpass the efficiency of traditional sales forecasting methods. Let us discuss several recently introduced solutions. A sales
forecasting model based on clustering methods and fuzzy neural networks was proposed by
(Chang et al., 2009). The model groups demand time series applying clustering algorithms,
then a fuzzy neural network is created for each cluster and used to forecast sales of a new
product. Another interesting solution for sales forecasting was proposed by (Ni & Fan, 2011).
The model combines the Auto-Regressive trees and artificial neural networks to dynamically
forecast a demand for a new product, using historical data. Both models apply modern
intelligent information technologies and, as the authors state, are more efficient and accurate compared to traditional forecasting methods. Nevertheless, the models presume that a product has already been introduced on the market and some sales data are already available. This raises a question that may be important for retailers, who supply the market with items but do not produce them themselves. The question may be stated as "How to forecast the demand for a product before purchasing it for resale?". A system capable of making such forecasts will bring retailers useful and valuable information, enabling them to create a list of stock products before any investments are made. A possible solution to such a task was proposed by
(Thomassey & Fioraliso, 2006). The authors proposed a hybrid sales forecasting system based
on a combination of clustering and classification methods. The system applies clustering
methods to gain product demand profiles from historical data, then, using product descriptive
data, a classifier is built and used to forecast a sales curve for a new product, for which only
descriptive data are available. The system was tested on textile item sales and, as authors
state, the results were relatively good. The authors continued their research, and in (Thomassey & Happiette, 2007) a new system based on the idea of combining clustering and classification methods was introduced. The authors used artificial neural network technologies to cluster
and classify sales time series. As they state, the proposed system increases accuracy of
mid-term forecasting in comparison with the mean sales profile predictor.
The production management itself combines several tasks of forecasting and planning, one
of which is product life cycle (PLC) management. A product life cycle is the time of a product's existence on the market and can be viewed as a sequence of known phases with bounds. Each PLC phase differs in its sales profile, which is why different production planning strategies and, more importantly, different advertising strategies are applied. Traditionally a product life cycle has four phases, namely "Growth", "Introduction", "Maturity" and "End-of-Life". Models with a larger number of phases are also possible and are used (Aitken et al., 2003). For products with a significantly long PLC, the growth and introduction phases can be merged into one phase - "Introduction". This PLC phase implies high investment in advertising, as potential customers need to know about the new product. This gives rise to the task of defining bounds for the PLC phases, which is one of the important tasks in PLC management. Let us discuss several possible solutions introduced for this task. Most of
the solutions concentrate around the idea of forecasting the sales curve for a product and then
defining in which particular PLC phase product currently is. Often the Bass diffusion model or
its modifications are used to forecast sales, as it was proposed by (Chien et al., 2010). Authors
present a demand forecasting framework for accurately forecasting product demand, thus
providing valuable information to assist managers in decision-making regarding capacity
planning. The framework is based on the diffusion model and, as authors state, has a practical
viability to forecast demands and provide valuable forecasting information for supporting
capacity planning decisions and manufacturing strategies. Other researchers (Venkatesan &
Kumar, 2002) applied a genetic algorithm to estimate the Bass diffusion model and forecast
sales for products when only a few data points are available. Both solutions make accurate forecasts within the scope of the tasks solved, but they still answer what the demand will be, not when the PLC phase will change. To answer that, the forecasted sales curves need to be analysed by a manager, who sets the PLC phase bounds using personal experience. Because of that, the efficiency of the discussed models may fall as the number of monitored products grows.
The present research is oriented towards bringing intelligent agent technology for supporting
manager’s decisions in such tasks as forecasting demand for a new product and defining the
end of the introduction/beginning of the maturity PLC phase for the monitored products. The
chapter proposes two multi-agent systems designed directly for solving the tasks defined.
Intelligent agent technology was chosen as it gives a new modern look at the forecasting
and planning tasks and brings new possibilities in designing solutions to support modern
production and trade management.
The conceptual and functional aspects of the proposed systems are revealed in Sections
2 and 3, but first, some agent terminology needs to be defined to eliminate possible
misunderstandings. There are no unified definitions of what an agent is (Wooldridge, 2005).
One of the reasons for that is that agents are mostly defined as a task specific concept.
Nevertheless, some definitions are useful (Weiss, 1999; Wooldridge, 2005) and also applicable
to present research:
• Agent is a computer program that exists in some environment and is capable of performing
an autonomous activity in order to gain its desired objectives.
• Intelligent agent is an agent capable of learning, capable of mining and storing knowledge
about the environment it is situated in.
• Multi-agent system contains agents and intelligent agents capable of interaction through
an information exchange.
• Agent community is an indivisible unit of agents with similar attributes, abilities and
behaviour, aimed at gaining the desired objectives. It differs from a Multi-agent system
in that agents are not interacting with one another.
The proposed definitions are task specific and are oriented to making the ideas proposed in
the multi-agent systems simple and clear.
[Figure 1: the Data Management Agent functioning algorithm — 1. querying demand time series (with a switching point) from the database; 2. data pre-processing (2.1 data cleaning, ...), producing pre-processed data; 3. dataset formation for the clustering agents 1..n.]
The environment the Data Management Agent is situated in is a raw data repository. The environment affects the agent with new data, forcing the Data Management Agent to perform the data pre-processing actions. By performing those actions, the agent affects the environment by changing the raw data into pre-processed data. The agent functioning algorithm, displayed in Figure 1, contains three main processes: first get data from the database, then pre-process it and prepare datasets for system training and testing. The first two processes are performed in the autonomous mode; the actions in the third process are performed reactively by responding to the Data Mining Agent's requests.
The data repository contains data of a unique structure that may not fit the desired data format. The processes in the first block of the Data Management Agent algorithm return the set of sales data obtained during a specified product life cycle phase as a time series, summing sales over each period (day, week, month), which is defined by the user. For each sales time series a value of a target attribute (transition point) is supplied, excluding the new data, for which the information on the transition point is not yet available. The next step is pre-processing; it is necessary because the raw data may contain noise and missing or conflicting data. The defined task calls for several steps of data pre-processing:
• Excluding the outliers. The definition of an outlier is dynamic and changes according to the desired tasks, which makes the step of outlier exclusion highly task specific. The present research foresees the following action: time series with a number of periods (length) less than the user-defined minimum lmin, or with missing values, are excluded from the raw dataset.
• Data normalization. This step is performed in order to bring the sales time series to one level, lessening possible domination effects. One normalization method that suits sales time series is Z-score normalization with the standard deviation. The normalized value x̃_i for the i-th period of a time series X is calculated with Equation 1, where X̄ is the average of X:

    x̃_i = (x_i − X̄) / s_X                                               (1)

To calculate the standard deviation s_X, Equation 2 is used. As the number of periods in the normalized sales time series is not large, it is recommended to use (n − 1) as the denominator in order to obtain an unbiased standard deviation. As an option, the standard deviation can be replaced with a mean absolute deviation. (A short code sketch of this step is given after this list.)

    s_X = sqrt( (1 / (n − 1)) · Σ_{i=1..n} (x_i − X̄)² )                  (2)
• Imitation of a data flow. The sales data become available over time, and this must be taken into account in order to obtain an accurate transition point forecast. This step is very important: if the data flow imitation is not performed, the proposed system will be limited to forecasting a binary result - either the transition has occurred or it has not. The imitation of the data flow ensures that the proposed system is able to forecast transition points in different time periods, having only the first lmin periods available.
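The short sketch referenced in the normalization bullet above illustrates Equations 1 and 2 and a simple imitation of the data flow (replaying a series from its first lmin periods onward); the function and variable names are illustrative only.

```python
import math

def z_normalize(series):
    """Z-score normalization (Equations 1 and 2) with the unbiased (n - 1) denominator."""
    n = len(series)
    mean = sum(series) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in series) / (n - 1))
    return [(x - mean) / std for x in series]

def imitate_data_flow(series, l_min):
    """Imitation of a data flow: yield the series as it would arrive over time,
    starting with the first l_min periods."""
    for k in range(l_min, len(series) + 1):
        yield series[:k]

sales = [12, 15, 21, 30, 42, 40, 33]
print(z_normalize(sales))
for prefix in imitate_data_flow(sales, l_min=4):
    print(prefix)
```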
each time new data are obtained. The agent itself may induce changes in the environment by requesting the Data Management Agent to produce new datasets. The Data Mining Agent changes the environment by creating the knowledge base, which contains relations between the sales time series and transition point values. The proposed structure of the Data Mining Agent contains two main steps: first clustering the time series with different durations, and then extracting profiles from the clusters and mapping the profiles to PLC phase switching points.
The time series clustering is performed by the clustering agents in the agent community controlled by the Data Mining Agent (see Fig. 1). Each clustering agent implements a clustering algorithm and processes a defined part of the input data. The number of clustering agents, as well as the part of the dataset each one will process, is defined by the parameter Q, called the load of a clustering agent. It is an integer parameter containing the number of different time series lengths li ∈ L each clustering agent can handle. The set L contains all possible lengths of the time series in the input data, as defined in the task statement at the beginning of this section. For example, assume that lmin = 4 periods and Q = 3; then the first clustering agent will handle the part of the input data with time series 4, 5 and 6 periods long. The load of a clustering agent is defined by the user and is also used for calculating nca, the number of clustering agents in the Controlled Agent Community (CAC). Equation 3 is used for obtaining the value of nca.
    n_ca = Roundup( |L| / Q )                                            (3)
The input data is distributed among nca clustering agents cai ∈ CAC according to the defined
value of Q. Given a uniform clustering agent load distribution, Equation 4 can be used for
splitting the input data among clustering agents.
    l_{i,min} = l_min                 for i = 1
    l_{i,min} = l_{i−1,max} + 1       for i > 1                          (4)
    l_{i,max} = l_{i,min} + Q − 1
where l_{i,min} and l_{i,max} are the bounds for the number of periods in the time series the clustering agent cai will process. One of the proposed system objectives is to support clustering of time series with different numbers of periods. A number of clustering methods exist, but the majority use the Euclidean distance to calculate the similarity of two objects. The simple Euclidean distance will not allow a clustering agent to process time series with different numbers of periods, but it may still be applied when Q = 1.
- Self Organising Maps (SOM) (Kohonen, 2001) and Gravitational Clustering algorithm (GC)
(Gomez et al., 2003; Wright, 1977), were chosen to support the data mining process. To gain
an ability to cluster time series with different duration we suggest using different distance
measures in each of the methods. The proposed system implements two different distance
measures - Dynamic Time Warping (DTW) (Keogh & Pazzani, 2001; Salvador & Chan, 2007)
and a modified Euclidean distance MEuclidean.
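A minimal sketch of Equations 3 and 4 above: it computes the number of clustering agents n_ca for a given load Q and assigns each agent its consecutive range of time-series lengths. The helper name and the example values are illustrative.

```python
import math

def split_by_load(l_min, l_max, Q):
    """Assign ranges of time-series lengths to clustering agents (Equations 3 and 4)."""
    lengths = range(l_min, l_max + 1)          # the set L of possible lengths
    n_ca = math.ceil(len(lengths) / Q)         # Equation 3: roundup(|L| / Q)
    ranges = []
    lo = l_min
    for _ in range(n_ca):                      # Equation 4: consecutive blocks of Q lengths
        hi = min(lo + Q - 1, l_max)
        ranges.append((lo, hi))
        lo = hi + 1
    return ranges

# Example from the text: l_min = 4 and Q = 3 gives the first agent lengths 4, 5 and 6
print(split_by_load(l_min=4, l_max=12, Q=3))   # [(4, 6), (7, 9), (10, 12)]
```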
2.1.2.1 Distance measures
The DTW creates a warping path W(X, Y) between two time series X and Y and uses it to calculate the distance, which is computed as follows. Let X and Y have durations equal to n and m periods, respectively. First a distance matrix n × m is created, where each cell (i, j) contains the distance d(x_i, y_j) between two points x_i and y_j (Keogh & Pazzani, 2001). The warping path W(X, Y) is a sequence of steps w_k in the distance matrix, starting with the cell (1, 1) and
ending in (n, m). Each time a new step w_k is performed, the direction with the minimal distance is chosen. Other strategies for choosing the next step are possible and widely used (Salvador & Chan, 2007). An example of a warping path is graphically displayed in Figure 2.
[Fig. 2. An example warping path between X = x1..x6 and Y = y1..y4: (a) the warping path in the distance matrix; (b) a warping map M.]
The total path distance d(X, Y) between two time series X and Y using Dynamic Time Warping is calculated as shown in Equation 5. The denominator K, the total number of steps w_k in the warping path W(X, Y), is used in order to calculate the distance proportionally per step w_k, as the number of steps may differ between warping paths.

    d(X, Y) = ( Σ_{k=1..K} w_k² ) / K                                     (5)
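A small sketch of the DTW distance of Equation 5. It follows the greedy step selection described above (at each step the neighbouring cell with the smallest point distance is chosen) rather than the full dynamic-programming formulation, so it is an illustration of the idea, not of the exact implementation in Keogh & Pazzani (2001).

```python
def dtw_distance(x, y):
    """Greedy warping-path DTW (Equation 5): walk the distance matrix from (0, 0) to
    (n-1, m-1), stepping to the neighbouring cell with the smallest point distance,
    then average the squared step distances over the path length K."""
    n, m = len(x), len(y)
    i = j = 0
    path_sq = [(x[0] - y[0]) ** 2]
    while (i, j) != (n - 1, m - 1):
        candidates = []
        if i + 1 < n:
            candidates.append((i + 1, j))
        if j + 1 < m:
            candidates.append((i, j + 1))
        if i + 1 < n and j + 1 < m:
            candidates.append((i + 1, j + 1))
        i, j = min(candidates, key=lambda c: abs(x[c[0]] - y[c[1]]))
        path_sq.append((x[i] - y[j]) ** 2)
    return sum(path_sq) / len(path_sq)

print(dtw_distance([1, 2, 3, 4, 5, 6], [1, 3, 2, 4]))
```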
The second proposed distance measure is MEuclidean - Modified Euclidean. The concept of
this measure is to calculate the distance d( X, Y ) between two time series X = x1 , . . . , xi , . . . , xn
and Y = y1 , . . . , y j , . . . , ym only by first z periods of each time series, where z = min{n, m}.
Speaking in the terminology of DTW, the warping path W ( X, Y ) in the case of MEuclidean will
always begin in the cell (1, 1) and will always end in the cell (z, z). The example of a warping
path for a distance measure MEuclidean is displayed in Figure 3.
y4 (1,4) (6,4)
Y: y1 y2 y3 y4
y3
y2 W(X,Y)
y1 (6,1) X: x1 x2 x3 x4 x5 x6
x1 x2 x3 x4 x5 x6
(a) Warping path in the distance matrix (b) A warping map M
The total path distance d(X, Y) between two time series X and Y using the MEuclidean distance measure is calculated using Equation 6.

    d(X, Y) = Σ_{i=1..z} (x_i − y_i)²                                     (6)
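A one-function sketch of the MEuclidean measure of Equation 6, comparing only the first z = min(n, m) periods of the two series; as with the other sketches, it illustrates the formula rather than reproducing the authors' code.

```python
def m_euclidean(x, y):
    """Modified Euclidean distance (Equation 6): sum of squared differences
    over the first z = min(len(x), len(y)) periods."""
    z = min(len(x), len(y))
    return sum((x[i] - y[i]) ** 2 for i in range(z))

print(m_euclidean([1, 2, 3, 4, 5, 6], [1, 3, 2, 4]))
```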
where n is the iteration number; W (n) - vector of weights of a j-th neuron; η is a learning
coefficient.
The learning coefficient decreases over time, but remains higher than a defined minimal value ηmin. This can be achieved by using Equation 8 (Kohonen, 2001), where τ2 is a time constant, which can be assigned the total number of iterations. The value of the topological neighbourhood function h_{j,i(X)}(n) at the moment n is calculated by Equation 9.
    η(n) = η₀ · exp( −n / τ₂ )                                            (8)

    h_{j,i(X)}(n) = exp( − d²_{j,i(X)} / (2σ²(n)) ),                      (9)
where d is the lateral distance between the j-th neuron and the winning neuron; parameter σ
defines the width of a topological neighbourhood function.
In Subsection 2.1.2.1 it was shown that both of the proposed distance measures can produce a warping path in order to compare two time series. If the distance measure MEuclidean is used, then Equation 7 is modified and Equation 10 is used for the synaptic adaptation of the neuron weights.
2. The length lW of the weight vector Wj of the j-th neuron is greater than lX, the length of the time series X. In this case only the first lX weights wk ∈ Wj will be adapted according to the warping map M. The rest of the neuron weights will remain unchanged.
If the distance measure DTW is used, then cases are possible when one weight wk should be adapted to several values of a time series X. An example of this case is shown in Figure 2.b. Assuming that Y is a weight vector, the weights y1, y2 and y4 must be adapted to two or three different values at one time. In this case each of the mentioned weights will be adapted to the average of all the time series X values it must be adapted to, according to the warping map M.
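A sketch of the SOM update step built from the decaying learning rate of Equation 8 and the Gaussian neighbourhood of Equation 9, with weight adaptation limited to the first min(lW, lX) positions as in the MEuclidean case above. The exponential shrinking of the neighbourhood width and all parameter values are standard Kohonen-style assumptions, not quantities given in the text.

```python
import math

def learning_rate(n, eta0=0.5, tau2=1000, eta_min=0.01):
    """Equation 8: exponentially decaying learning coefficient, bounded below by eta_min."""
    return max(eta0 * math.exp(-n / tau2), eta_min)

def neighbourhood(lateral_dist, n, sigma0=2.0, tau1=1000):
    """Equation 9: Gaussian topological neighbourhood; the width sigma(n) shrinks over time
    (the decay form is an assumption)."""
    sigma = sigma0 * math.exp(-n / tau1)
    return math.exp(-lateral_dist ** 2 / (2 * sigma ** 2))

def adapt_weights(weights, x, lateral_dist, n):
    """Move a neuron's weights toward the time series x over the first
    min(len(weights), len(x)) positions; the remaining weights stay unchanged."""
    eta, h = learning_rate(n), neighbourhood(lateral_dist, n)
    z = min(len(weights), len(x))
    return [w + eta * h * (x[k] - w) if k < z else w for k, w in enumerate(weights)]

print(adapt_weights([0.2, 0.4, 0.6, 0.8, 1.0], [0.5, 0.5, 0.5], lateral_dist=1, n=10))
```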
This concludes the modifications of the SOM algorithm; the rest of the subsection is devoted to the second chosen clustering algorithm - the gravitational clustering algorithm. This algorithm is an unsupervised clustering algorithm (Gomez et al., 2003; Wright, 1977). Each clustering agent in the CAC represents a multidimensional space, where each time series from the input dataset is represented as an object in the n-dimensional space, and each object can be moved by the gravitational force of the other objects. If two or more objects are close enough, that is, the distance d between them is less than or equal to dmax, then those objects are merged into a cumulative object. By default, the masses of all non-cumulative objects are equal to 1. Depending on the user's choice, masses of the merged objects can be summed or remain equal to 1. A movement of an object X, induced by the gravitational force of an object Y at the time moment t, can be calculated by Equation 11.
    X(t + Δt) = X(t) + v(t) · Δt + d⃗(t) · ( G · m_Y · Δt² ) / ( 2 · ‖d⃗(t)‖³ ),      (11)

where v(t) is the object velocity; G is the gravitational constant, which can also be set by the user; m_Y is the mass of the object Y; and the vector d⃗(t) represents the direction of the gravity force induced by the object Y and is calculated as d⃗(t) = X(t) − Y(t).
Setting the velocity v(t) to a null vector and taking Δt = 1, Equation 11 can be simplified as shown in Equation 12.

    X(t + Δt) = X(t) + d⃗(t) · ( G · m_Y ) / ( 2 · ‖d⃗(t)‖³ )                           (12)
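A sketch of the simplified movement rule of Equation 12 for two objects that may have different numbers of dimensions (the MEuclidean case): only the shared dimensions of X are moved, the rest stay unchanged. The direction vector follows the definition d(t) = X(t) − Y(t) given above; flip its sign if movement toward Y (attraction) is intended. G and the masses are illustrative values.

```python
import math

def gc_step(x, y, m_y=1.0, G=1e-3):
    """One simplified gravitational-clustering move (Equation 12, v(t) = 0, dt = 1)."""
    d = [xi - yi for xi, yi in zip(x, y)]      # direction over the shared dimensions only
    norm = math.sqrt(sum(di ** 2 for di in d))
    factor = G * m_y / (2 * norm ** 3)
    moved = [xi + di * factor for xi, di in zip(x, d)]
    return moved + list(x[len(d):])            # extra dimensions of X stay unchanged

print(gc_step([1.0, 2.0, 3.0, 4.0], [1.2, 1.8, 2.5]))
```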
The GC uses the Euclidean distance to measure the spacing between two objects. Support for clustering time series with different numbers of periods can be gained by using the distance measures proposed in Subsection 2.1.2.1, but the first of the gravitational clustering algorithm's concepts should be redefined. In the classical algorithm this concept is defined as follows: "In the n-dimensional space all objects have values in all n dimensions". The proposed redefined version is: "In the n-dimensional space, m-dimensional objects can exist, where m ≤ n". This means that a 10-dimensional space can contain an object with four dimensions, which will be present only in the first four dimensions.
In the case of using the distance measure MEuclidean, two possible cases can occur while calculating the induced gravitational force:
1. The number of the object X's dimensions nX is less than or equal to the number of the inducing object Y's dimensions nY. In this case the gravitational force is calculated for all dimensions of the object X and it is moved according to the calculated gravitational force.
2. The number of the object X's dimensions nX is greater than the number of the inducing object Y's dimensions nY. In this case the gravitational force for X is calculated only for the first nY dimensions of X. The object X is moved only in those dimensions for which the gravitational force was calculated.
Using the dynamic time warping, an m-dimensional object X will always be moved in all m dimensions. Situations may take place when, for some dimension xi, a gravitational force should be calculated using several dimensions of an object Y. This is similar to the case with SOM when a single weight should be adapted to multiple time series values. In such a situation, the gravitational force induced by the object Y is calculated using the average of all yj dimensions inducing gravitational force on the object X's dimension xi.
2.1.2.3 Creating a knowledge base
The knowledge base contains a model representing the connections between sales profiles and the transition points, induced from the input data. The knowledge base is level-structured; the number of levels coincides with the number of clustering agents in the clustering agent community. Each level contains clusters from the corresponding clustering agent. Let us define the meaning of a cluster for each of the clustering algorithms described in the previous subsection.
The cluster extraction in the SOM algorithm begins after the network organisation and convergence processes are finished, and consists of the following three steps:
1. Each record X from the training set is sent to the system.
2. The winning neuron for each record X is found. For each winning neuron, transition point statistics S_j are collected, containing each transition point value and the frequency of its appearance in this neuron, which is used as the rank of the transition point.
3. A cluster is a neuron that at least once became a winning neuron during the cluster extraction process. The weight vector of the neuron is taken as the centroid of the cluster.
For the gravitational clustering algorithm a cluster is defined as follows: "The cluster is a cumulative object having at least two objects merged". Using the merged objects, transition point statistics S_j are collected for each cluster c_j. Each transition point included in S_j is assigned a rank equal to the frequency of its appearance in the cluster c_j. The centroid of a cluster is set as the position of the cumulative object in the clustering space at the moment of cluster extraction.
Fig. 4. Learning error (MAE) in periods for the system with the gravitational clustering algorithm, plotted against the clustering agent load Q for the DTW and MEuclidean distance measures, for the two strategies of calculating the mass of a cumulative object (panels a and b).
As may be seen from Figure 4, the results received using the distance measure MEuclidean are more precise than those obtained with the DTW. The MEuclidean dominates DTW for all possible Q values, and that is the reason to exclude the DTW distance measure from further experiments in testing the system with the gravitational clustering algorithm. As can also be seen, the results obtained using MEuclidean in Figures 4.a and 4.b are close to equal. This indicates that both strategies for calculating the mass of a cumulative object perform equally. However, the strategy where masses are summed after merging the objects has a weakness. Merging the objects results in the growth of the mass of a cumulative object, consequently increasing the gravity force of that object. This could lead to a situation where the merging of objects is affected more by the masses of the objects than by the closeness of those objects.
This could prevent some potentially good combinations of clusters from appearing during the clustering process. For that reason, the strategy of summing object masses will not be used in further experiments with the gravitational clustering algorithm.
The transition point forecast error (using MEuclidean) decreases as the clustering agent load increases. For this reason, further experiments are performed using only the five values of Q that returned the smallest learning error. Those values of the clustering agent load are 16, 17, 18, 19 and 20, that is, Q = {16, 17, 18, 19, 20}.
The system with the SOM algorithm was tested using three neural network topologies: a quadratic topology with eight neighbours in the first lateral level, a cross-type topology with four neighbours, and a linear topology with two neighbours. The neural network contained 100 neurons, organized as a 10×10 matrix for the quadratic and cross-type topologies and as a one-dimensional line for the linear topology. The obtained learning errors for each topology with the distance measure MEuclidean are displayed in Figure 5.
Fig. 5. Learning error (MAE) in periods for the system with the SOM algorithm, using MEuclidean, plotted against the clustering agent load Q for the linear, cross-type and quadratic topologies.
The SOM algorithm returned a much larger learning error than the gravitational clustering algorithm. Due to that, only three values of Q, Q = {5, 6, 7}, will be used in further experiments. Table 1 contains the learning error obtained with the SOM algorithm while using DTW and the selected Q values.
Topology      Q = 5   Q = 6   Q = 7
Linear        3.381   3.320   3.323
Cross-type    3.366   3.317   3.323
Quadratic     3.354   3.317   3.323
Table 1. Learning error (MAE) in periods for the SOM algorithm, using DTW
Comparing the efficiency of the chosen topologies (see Figure 5), it may be concluded that none of the three topologies dominates the other two. For further experiments with the SOM algorithm all three topologies were chosen.
The learning error evaluation was used to lessen the number of experiments needed for a thorough system evaluation. To evaluate the systems, the 10-fold cross-validation method was applied. The proposed system with gravitational clustering was tested applying only the MEuclidean distance measure and using the five values of the clustering agent load selected at the beginning of this section while evaluating the system learning error. Table 2 shows the obtained system testing results; each cell contains a 10-fold average mean absolute error in periods. Figure 6 displays the data from Table 2.
Q = 16 Q = 17 Q = 18 Q = 19 Q = 20
MAE 2.021 1.995 2.026 2.019 2.010
Table 2. Mean absolute error in periods for the system with GC algorithm
As can be seen, the error remains at the level of two periods; the best result, 1.995 periods, was obtained with the clustering agent load Q = 17.
Fig. 6. MAE in periods for the system with gravitational clustering, using MEuclidean, plotted against the clustering agent load Q.
The same testing method and data were used for testing the system with the SOM algorithm. While testing the system, all three neural network topologies (linear, cross-type and quadratic) were applied, combined with the two proposed distance measures (see Subsection 2.1.2.1). The system with SOM was tested using only three clustering agent load values, Q = {5, 6, 7}, selected while evaluating the learning error. Table 3 summarises the obtained testing results and Figure 7 displays them.
      Linear topology       Cross-type topology    Quadratic topology
Q     DTW     MEuclidean    DTW     MEuclidean     DTW     MEuclidean
5     3.475   3.447         3.362   3.437          3.434   3.578
6     3.267   3.267         3.332   3.404          3.353   3.353
7     3.344   3.411         3.372   3.513          3.371   3.509
Table 3. Mean absolute error in periods for the system with SOM algorithm
The best results for all three topologies were obtained when testing the system with clustering agent load Q = 6 and remain at about 3.3 periods. That supports the conclusion, stated while evaluating the learning error, that none of the three topologies dominates the other two. The best result was obtained using the linear topology and is equal to 3.267 periods. Comparing the efficiency of the applied distance measures, we may conclude that for the proposed system with the SOM algorithm it is not crucial whether the DTW or the MEuclidean distance measure is chosen, as the DTW returned a better result only once, using the cross-type topology. For the other topologies both distance measures returned equal results.
Fig. 7. Mean absolute error in periods for the system with the SOM algorithm, plotted against the clustering agent load Q for the linear, cross-type and quadratic topologies: (a) using distance measure DTW; (b) using distance measure MEuclidean.
The third step of pre-processing is feature selection. During this process a set of attributes with descriptive data is produced as an aggregation of the available attributes. The objective of this process is to lessen the attributes' negative impact on the classification results (Thomassey & Fioraliso, 2006). Attributes can be selected using the information gain or simply chosen based on user experience. The third process, dataset formation, is performed reactively by responding to the Data Mining Agent's requests.
[Figure: the Data Management Agent algorithm of the sales forecasting system — querying data; pre-processing (including normalization and feature selection), producing pre-processed data; dataset formation.]
3. Cluster centroids are recalculated using the records in the appropriate cluster. Then the
algorithm returns to Step 2.
Having all data distributed among the defined number of clusters, the clustering error should
be calculated in order to show the efficiency of the chosen number of clusters. The clustering
error is calculated as a mean absolute error. First, Equation 14 is used to calculate the absolute
error AEi for each cluster ci . Then the mean absolute error is calculated by Equation 15.
    AE_i = ( 1 / n(c_i) ) · Σ_{j=1..n(c_i)} d_j ,                        (14)

where n(c_i) is the number of records in the cluster c_i and d_j is the absolute distance between the j-th record in the cluster c_i and the centroid of this cluster.

    MAE = ( 1 / m ) · Σ_{i=1..m} AE_i                                     (15)
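A short sketch of the clustering-error computation of Equations 14 and 15. The absolute distance d_j is taken here as the sum of absolute attribute differences between a record and its centroid; swap in another distance if preferred. The data structure is illustrative.

```python
def clustering_mae(clusters):
    """Equations 14 and 15: per-cluster absolute error, then the mean over all clusters.
    `clusters` maps a centroid (tuple) to the list of records assigned to it."""
    cluster_errors = []
    for centroid, records in clusters.items():
        dists = [sum(abs(r - c) for r, c in zip(record, centroid)) for record in records]
        cluster_errors.append(sum(dists) / len(records))   # Equation 14
    return sum(cluster_errors) / len(cluster_errors)       # Equation 15

clusters = {
    (0.1, 0.4, 0.8): [(0.0, 0.5, 0.7), (0.2, 0.3, 0.9)],
    (0.6, 0.2, 0.1): [(0.5, 0.2, 0.2)],
}
print(clustering_mae(clusters))
```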
The result of clustering the sales data is the set P, containing a number of sales profiles (cluster centroids), an example of which is displayed in Figure 9, where the centroid is marked with a bold line. Each sales profile p is assigned a unique identification number, which will be used in the classification rule set induction step. The data clustering step finishes with the
Fig. 9. Example of a sales profile: normalized sales values of the cluster records over periods T1-T12, with the cluster centroid marked by a bold line.
The Data Mining Agent builds the inductive decision tree using the training dataset, merged with the found sales profiles. It uses the C4.5 classification algorithm (Quinlan, 1993). The C4.5 algorithm is able to process both discrete and numeric attributes and to handle missing values. The decision tree built is the model representing the relations between the descriptive attributes and the sales profiles. The internal nodes represent the descriptive attributes and the leaves point to one of the sales profiles found.
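The chapter uses C4.5; as an approximation only, the sketch below trains scikit-learn's entropy-based DecisionTreeClassifier to map descriptive attributes to sales profiles. The attribute encoding and profile labels are invented for illustration.

import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical training data: Kind (0 = regular, 1 = seasonal), Type code, Price.
X = np.array([
    [1, 7, 45.00],
    [1, 8, 60.00],
    [0, 2, 12.00],
    [0, 9, 80.00],
    [1, 1,  3.99],
    [0, 6, 55.00],
])
# Sales profile assigned to each product by the clustering step.
y = ["C4", "C8", "C2", "C8", "C4", "C6"]

# Entropy-based splitting approximates C4.5's information-gain criterion.
tree = DecisionTreeClassifier(criterion="entropy", random_state=0).fit(X, y)
print(export_text(tree, feature_names=["Kind", "Type", "Price"]))

# Classifying a new product yields the sales profile used as its forecast shape.
print(tree.predict([[1, 7, 39.0]]))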
[Figure: fragment of the induced decision tree - internal nodes test the descriptive attributes Kind, Type and Price; leaves point to sales profiles such as C4, C6 and C8.]
testing. As descriptive data, the following three parameters were chosen: Kind, showing whether the product is seasonal or not; the Price of the product; and the Type, showing what the product actually is - jeans, t-shirt, bag, shorts, belt, etc.
The data clustering was powered by the k-means algorithm, and the number of clusters was chosen by evaluating the clustering error for cluster numbers starting with two and continuing up to 21, as the square root of 423 records is approximately equal to 20.57. Figure 11 shows the results of the clustering error evaluation with different numbers of clusters.
Fig. 11. Clustering error evaluated for the number of clusters from 2 to 21.
[Figure: the decision tree induced from the case-study data, with tests on Type and Price and leaves pointing to sales profiles C1-C8.]
                 Classifiers
Error measures   ZeroR   OneR    JRip    Naive Bayes   IBk     C4.5
RMSE             0.295   0.375   0.284   0.282         0.298   0.264
MAE              0.174   0.141   0.161   0.156         0.160   0.129
Table 4. Error measures for various classification methods
4. Conclusions
Modern production and trade management remains a complicated research field and, as technology evolves, it becomes even more complicated. This field combines different tasks of forecasting, planning, management, etc. In the present research two of them - PLC management and sales forecasting - were discussed and possible solutions were proposed. Looking back in time, those tasks were efficiently solved by standard statistical methods and a single user was able to perform them. Today the level of technological evolution, the amount of gathered data and complicated planning objectives require solutions that are able to lessen the human workload and are capable of performing different data analysis tasks in an autonomous mode. In order to gain these capabilities, agent technology and combinations of intelligent data analysis methods were chosen as the basis. Using the mentioned technologies, the structures of the PLC management support multi-agent system and the multi-agent sales forecasting system were proposed, and the concepts of each system were described and discussed. The structures of the proposed systems were intentionally kept general in order to demonstrate the advantages of agent technology. The modularity of the proposed systems allows one to adapt individual processes to the specific field of application without breaking the structure of the system.
Both systems were tested using real-life data, and the results obtained show the effectiveness of the proposed multi-agent systems and of the technologies chosen for creating them. The conclusion of the present research is that intelligent agent technology and other intelligent data analysis technologies can be efficiently applied to tasks connected with supporting modern production and trade management. The two proposed multi-agent systems give an example of such applications.
5. Acknowledgement
This work has been supported by the European Social Fund within the project "Support for
the implementation of doctoral studies at Riga Technical University".
6. References
Aitken, J., Childerhouse, P. & Towill, D. (2003). The impact of product life cycle on supply
chain strategy, International Journal of Production Economics Vol. 85(2): 127–140.
Armstrong, J. S., Collopy, F. & Yokum, J. T. (2005). Decomposition by causal forces: A
procedure for forecasting complex time series, International Journal of Forecasting Vol.
21(1): 25–36.
Chang, P.-C., Liu, C.-H. & Fan, C.-Y. (2009). Data clustering and fuzzy neural network for sale
forecasting: A case study in printed circuit board industry, Knowledge-Based Systems
Vol. 22(5): 344–355.
Chien, C.-F., Chen, Y.-J. & Peng, J.-T. (2010). Manufacturing intelligence for semiconductor
demand forecast based on technology diffusion and product life cycle, International
Journal of Production Economics Vol. 128(2): 496–509.
Gomez, J., Dasgupta, D. & Nasraoui, O. (2003). A new gravitational clustering algorithm,
Proceedings of the Third SIAM International Conference on Data Mining, pp. 83–94.
Keogh, E. J. & Pazzani, M. J. (2001). Derivative dynamic time warping, Proceedings of the First
SIAM International Conference on Data Mining, pp. 1–11.
Kohonen, T. (2001). Self-Organizing Maps, 3rd edn, Springer-Verlag Berlin Heidelberg.
Ni, Y. & Fan, F. (2011). A two-stage dynamic sales forecasting model for the fashion retail,
Expert Systems with Applications Vol. 38(3): 1529–1536.
Quinlan, J. R. (1993). C4.5: Programs for Machine Learning, Morgan Kaufmann publishers.
Salvador, S. & Chan, P. (2007). Fastdtw: Toward accurate dynamic time warping in linear time
and space, Intelligent Data Analysis Vol. 11(5): 561–580.
Tan, P. N., Steinbach, M. & Kumar, V. (2006). Introduction to Data Mining, Addison-Wesley.
Thomassey, S. & Fioraliso, A. (2006). A hybrid sales forecasting system based on clustering
and decision trees, Decision Support Systems Vol. 42(1): 408–421.
Thomassey, S. & Happiette, M. (2007). A neural clustering and classification system for sales
forecasting of new apparel items, Applied Soft Computing Vol. 7(4): 1177–1187.
Venkatesan, R. & Kumar, V. (2002). A genetic algorithms approach to growth phase forecasting
of wireless subscribers, International Journal of Forecasting Vol. 18(4): 625–646.
Weiss, G. (ed.) (1999). Multiagent Systems. A Modern Approach to Distributed Artificial
Intelligence, MIT Press.
Wooldridge, M. (2005). An Introduction to MultiAgent Systems, 3rd edn, John Wiley & Sons,
Ltd.
Wright, W. E. (1977). Gravitational clustering, Pattern recognition Vol. 9: 151–166.
3
Modeling Stock Analysts Decision Making: An Intelligent Decision Support System
1. Introduction
Over 60% of families in the United States have money invested in mutual funds, amounting to billions of dollars. Consequently, portfolio managers are under tremendous pressure to make critical investment decisions in dynamically changing financial markets. Experts have been forecasting and trading financial markets for decades, using their knowledge and expertise in recognizing patterns and interpreting current financial data. This paper describes a knowledge-based decision support system with analogical and fuzzy reasoning capabilities to be used in financial forecasting and trading. In an attempt to maximize a portfolio's return and avoid costly losses, the portfolio manager must decide when to enter trades as well as when to exit and thus must predict the duration as well as the direction of stock price movements. A portfolio manager is faced with a daunting information management task and voluminous amounts of rapidly changing data. It is simply impossible to manually and simultaneously follow many investment vehicles, stocks and market moves effectively.
To assist portfolio managers' decision-making processes, tens of millions of dollars are spent every year on Wall Street to automate the process of stock analysis and selection. Many attempts have been made at building decision support systems by employing mathematical models and databases (Liu et al., 2010; Tan, 2010). These traditional decision support systems are composed of three components: a database, a user interface, and a model base. The user interface allows the end user (in most cases, a manager) to select appropriate data and models to perform an analysis for a given stock. The database contains the data necessary to perform the analysis, such as market capitalization, shares outstanding, price ratios, return on equity, and price-to-book ratio. The model base consists of mathematical models that deal with the present value of expected future payoffs from owning the stock, such as the dividend discount model, price-to-earnings ratio (P/E) models, and their variants. Standard decision support systems employ quantitative approaches to decision making. Specific quantities in the form of payoffs and probabilities are used to arrive at a quantitative expected value, and the decision maker simply selects the alternative that has the highest expected value. Despite the mathematical soundness and systematic approach of traditional decision support systems, researchers have discovered compelling reasons for developing qualitative approaches to decision making. It is inappropriate to suggest that people reason about decisions in a purely quantitative fashion. In the domain of security analysis and trading, if we expect a stock to go up in value, we are likely to invest in the stock despite not knowing quantitatively how much or how soon the stock will rise. A qualitative analysis of a decision problem is usually logically prior to a quantitative analysis (Sen, 2009).
In order to develop a decision support system with the capabilities of qualitative analysis,
considerable progress has been made toward developing “knowledge-based decision
support systems”, sometimes known as expert systems (Silverman, 1992). An expert system
designed in the domain of security analysis is an intelligent decision support system whose
behavior duplicates, in some sense, the ability of security analysts. Expert systems are
computer programs with characteristics that include the ability to perform at an expert
level, representing domain-specific knowledge and incorporating an explanation of
conclusions reached. Traditional expert systems consist of five major components: an end
user interface, an inference engine, a working memory, a knowledge base, and an
explanation mechanism. The interface allows the end user to input data to and receive
explanations or conclusions from the expert system. The working memory keeps track of
specific data, facts and inferences relevant to the current status of the problem solving. The
knowledge base contains problem-solving knowledge of a particular domain collected from
an expert. The inference engine matches data in the working memory with knowledge in the
knowledge base, and acts as the interpreter for the knowledge base. By combining the five
components together, an expert system is able to act as a knowledgeable assistant and
provide “expert quality” advice to a decision maker in a specific domain.
Despite the impressive performance of expert systems, they have a number of inherent
flaws, especially when operating in a dynamically changing environment. An expert system
solves problems by taking input specifications and then “chaining” together an appropriate
set of rules from the knowledge base to arrive at a solution. Given exactly the same problem
situation, the system will go through the same amount of work to come up with a solution.
In addition, traditional expert systems are “brittle” in the sense that they require substantial
human intervention to compensate for even slight variations in descriptions, and break
down easily when they reach the edge of their knowledge(Zhou, 1999).
In response to the weaknesses associated with traditional decision support systems and expert systems, this paper presents a knowledge-based and case-based intelligent decision support system with fuzzy reasoning capabilities, called TradeExpert, designed to assist portfolio managers in making investment decisions based not only on mathematical models, but also on facts, knowledge, experience, and prior episodes.
EMH. A variety of empirical tests led to new and often elaborate theories that sought to explain subtle security-pricing irregularities and other anomalies. Market strategist Jeremy
Grantham has stated flatly that the EMH is responsible for the current financial crisis,
claiming that belief in the hypothesis caused financial leaders to have a "chronic
underestimation of the dangers of asset bubbles breaking"(Nocera, 2009).
More recently, institutional investors and computer-driven trading programs appear to be resulting in more anomalies. Anomalies may also arise from the market's incomplete assimilation of news about particular securities, lack of attention by market participants, differences of opinion about the meaning and significance of available data, response inertia and methodological flaws in valuation (Robertson & Wolff, 2006). Findings like the preceding suggest that the EMH is not a perfect representation of market realities, which behave as complex systems. Because security markets are complex, time-variant, and probably nonlinear dynamic systems, one simple theory cannot adequately represent their behavior; from time to time there may arise anomalies that are exploitable. In the design of TradeExpert, technical analysis rules, trend analysis rules, and fundamental analysis rules are used to exploit whatever inefficiencies may exist and unearth profitable situations.
Technical analysis is the systematic evaluation of price, volume and patterns for price forecasting (Wilder, 1978). There are hundreds of thousands of market participants buying
and selling securities for a wide variety of reasons: hope of gain, fear of loss, tax
consequences, short-covering, hedging, stop-loss triggers, price target triggers, broker
recommendations and a few dozen more. Trying to figure out why participants are buying
and selling can be a daunting process. Technical analysis puts all buying and selling into
perspective by consolidating the forces of supply and demand into a concise picture. As a
complete pictorial record of all trading, technical analysis provides a framework to analyze
the battle raging between bulls and bears. More importantly, it can help determine who is
winning the battle and allow traders and investors to position themselves accordingly.
Technical analysis uses trading indicators and stock trading prices to analyze the stock
trend. Trading indicators, e.g. Moving Average, Bollinger Bands, Relative Strength Index,
and Stochastic Oscillator, provide trading signals which can be used to determine when to
trade stocks.
Trend analysis, also known as momentum trading (Simple, 2003), is the process by which major and minor trends in security prices are identified over time. The process of trend analysis provides investors with the ability to identify possible changes in price trends. As a security's price changes over time it will exhibit one of three states: uptrend, downtrend or sideways consolidation. It is the job of the technical analyst to identify when a security may change from one trend to another, as such changes may provide valid trading signals. Trend analysis uses a fixed trading mechanism in order to take advantage of long-term market moves without regard to past price performance. It is a simple trading strategy that tries to take advantage of stock price movements that seem to play out in various markets. Trend analysis aims to work with the market trend mechanism and to take benefits from both sides of the market, gaining profits from both ups and downs of the stock market. Traders who use this approach can use current market price calculations, moving averages and channel breakouts to determine the general direction of the market and to generate trade signals.
Fundamental analysis maintains that markets may misprice a security in the short run but that the "correct" price will eventually be reached. Profits can be made by trading the mispriced security and then waiting for the market to recognize its "mistake" and reprice the security. One of the most important areas for any investor to look at when researching a company is that company's financial statements. Fundamental analysis is a technique that attempts to determine a security's value by focusing on the underlying factors that affect a company's actual business and its future price.
To summarize, technical analysis, fundamental analysis, and trend analysis do not really follow the Efficient Market Hypothesis. They attribute the imperfections of financial markets to a combination of cognitive biases such as overconfidence, overreaction, representativeness bias, information bias, and various other predictable human errors in reasoning and information processing.
3. System architecture
TradeExpert is a hybrid system that uses both quantitative and qualitative analysis to
exploit changing market conditions and to grind through mountains of economic, technical
and fundamental data on the market and on individual stocks. Furthermore, TradeExpert incorporates a case-based reasoning mechanism which enables it to recall similar episodes from past trading experience, together with the consequences of the actions taken. The goal, of course, is to ferret out patterns from which future price action can be deduced, to meaningfully assimilate vast quantities of information automatically, and to provide support intelligently for portfolio managers in a convenient, timely and cost-effective manner.
TradeExpert assumes the role of a hypothetical securities analyst who makes a
recommendation for a particular stock in terms of strong buy, buy, hold, sell and strong sell
based on chart reading, fundamental analysis, investment climate, and historical
performance. TradeExpert is designed and implemented based on principles consistent with
those used by a securities analyst (Kaufman, 2005). It is capable of reasoning, analyzing, explaining, and drawing inferences. In terms of system architecture, TradeExpert consists of the following major components: a user interface, databases, knowledge bases, a case base, a similarity evaluator, an explanation synthesizer, a working memory, and an inference engine, as shown in Figure 1. The user interface provides communication
between the user and the system. Input data from the user to the system contains the
parameters of a stock under consideration, such as Yield, Growth rate, P/E, Return on net
worth, Industry sector, Profit margins, Shares outstanding, Estimated earning per share,
Price to book ratio, Market capital, Five-year dividend growth rate, Five-year EPS growth
rate, and Cash flow per share. If a stock being analyzed is already stored in the database,
TradeExpert retrieves data and fills in the parameter fields on the input screen
automatically. The fuzzy inference component employs fuzzy logic and reasoning to
measure partial truth values of matched rules and data. It makes reasoning processes more
robust and accurate. The output from the system is a trading recommendation with a list of
reasons justifying the conclusion. With human expertise in the knowledge base, past trading
experience in the case base, and stock’s data in the database, TradeExpert is capable of
analyzing a stock, forecasting future price movements, and justifying the conclusion
reached.
TradeExpert contains three knowledge bases: a technical analysis knowledge base, a fundamental analysis knowledge base and a trend analysis knowledge base. TradeExpert can use any or all of these different but somewhat complementary methods for stock picking. For example, it uses technical analysis for deciding entry and exit points, trend analysis for deciding the timing of a trade, and fundamental analysis for limiting its universe of possible stocks to solid companies. As a justification of the expert system serving as a model of human thought processes, it is believed that a single rule corresponds to a unit of human knowledge. In the design of an intelligent decision support system, knowledge representation of human problem-solving expertise is a critical and complex task.
Fig. 1. TradeExpert architecture: the user interface communicates with the inference engine, working memory and similarity evaluator, which draw on the knowledge bases, the case bases (stock history, company performance, industry group impacts) and the databases (industry, economic and political data).
In what follows, the knowledge representation of stock analysis and selection in
TradeExpert is described along with a rule frame as an example (Trippi & Lee 1992).
RULE <ID>
(INVESTMENT HORIZON: <statement>)
(ASSUMPTION: <statement>)
(INVESTMENT OBJECTIVES: <statement>)
(IF <statement> <statement> ... <statement>
THEN <statement>)
(REASONS: <statement>)
(UNLESS: <statement>)
(CERTAINTY FACTOR: <real number>)
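As a minimal illustration (not the chapter's actual implementation), such a rule frame could be represented by a small Python structure whose fields mirror the frame's keywords; the sample rule content below is hypothetical:

from dataclasses import dataclass, field

@dataclass
class RuleFrame:
    # One unit of stock-selection knowledge, mirroring the rule frame above.
    rule_id: int
    investment_horizon: str
    assumption: str
    investment_objectives: str
    conditions: list                 # the IF part: statements that must all hold
    conclusion: str                  # the THEN part: the recommendation
    reasons: list = field(default_factory=list)
    unless: list = field(default_factory=list)   # exception conditions
    certainty_factor: float = 1.0

def fires(rule, facts, exceptions):
    # A rule fires when every IF condition is a known fact and no UNLESS condition holds.
    return all(c in facts for c in rule.conditions) and not any(u in exceptions for u in rule.unless)

rule = RuleFrame(31, "day trading", "bull market", "trading profits",
                 conditions=["internet sector is hot", "new 52-week high"],
                 conclusion="Short term Strong Buy",
                 reasons=["technical analysis signals a new high"],
                 unless=["more than 2 analyst downgrades in last 5 days"],
                 certainty_factor=0.85)
print(fires(rule, {"internet sector is hot", "new 52-week high"}, set()))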
The key word INVESTMENT HORIZON designates the period of the investment: day trading, short term or long term. The key word INVESTMENT OBJECTIVES indicates the investment objective of the user, such as trading for profits, income, or capital appreciation. The key word REASONS provides justifications and explanations of how a rule arrives at a conclusion. The key word UNLESS excludes the stock even when the IF conditions are satisfied. An example rule is shown below:
RULE <31>
(INVESTMENT HORIZON: <day trading>)
(ASSUMPTION: <bull market>)
(INVESTMENT OBJECTIVES: <trading profits>)
(IF
<Company is in internet-sector> AND
<internet-sector = Hot> AND
<Avg-peer-earnings = beat the estimate> AND
<estimated EPS = increase over last two months> AND
<product = high demand> AND
<average-daily-volume >= 1.5 average> AND
<price-range = new 52-week high> AND
<liquidity = good>
THEN <Short term Strong Buy> )
(REASONS:
<technical analysis signals a new high> AND
<price movement trend is favorable> AND
<stock momentum carries over> AND
<small number of outstanding shares> AND
<history performance of hot stocks> )
(UNLESS:
<downgrades-by-analysts > 2 in last 5 days> OR
<earning-warning-by-peers > 1 in last 5 days>)
(CERTAINTY-FACTOR: 0.85)
With a set of reasons associated with the conclusion of a rule, TradeExpert is able to justify its reasoning by providing evidence and the rationale for its conclusion, in addition to a certainty factor. In order to retrieve appropriate rules efficiently, the rules in TradeExpert are organized in a hierarchical structure. They are classified in terms of their scope and attributes. One rule designed for day trading may not be important at all for long-term investment and vice versa. A technical breakout of a stock price or a daily price fluctuation may not have an impact on long-holding investors. Classifying rules into different categories makes logical sense in terms of knowledge retrieval and maintenance. Of course, one rule may be indexed by multiple categories so that the match and evaluation steps during the inference process can be completed efficiently. Another factor for consideration is an investor's subjective judgmental call, such as the assessment and long-term view of a particular market. Given current economic data and the political environment, a question such as "Will the market be up or down over the next 6 months?" will definitely elicit several answers from the investor community. A partial diagram that shows the knowledge organization (Trippi & Lee, 1992) in TradeExpert is presented in Figure 2.
RULE <16>
(INVESTMENT HORIZON: <long term>)
(ASSUMPTION: <bull or bear market>)
(INVESTMENT OBJECTIVES: <return on capital>)
(IF
<PE < similar_stock_average> AND
<earning_history = stable> AND
<dividend pay-out history = consistent> AND
<market_value_to_book < 1.05> AND
<Company is believed to be profitable> AND
<credit-rating > A or better> AND
<debt ratios = low>
THEN <Long term Strong Buy> )
(REASONS:
<good credit rating> AND
<its intrinsic value = attractive> AND
<consistent dividend payout> )
(UNLESS:
<law suit = pending> OR
<In the middle of financial_scandal> OR
<Bankruptcy = likely> )
(CERTAINTY-FACTOR: 0.73)
Fundamental analysis is a technique that attempts to determine a security's value by focusing on the underlying factors that affect a company's actual business and its future prospects. It is based on the belief that the intrinsic value of the stock eventually drives long-term stock prices. It looks at revenue, expenses, assets, liabilities and all the other financial aspects of a company to gain insight into the company's future performance and, therefore, the stock's future price.
Many technical indicators have been developed to gain insight into market behavior. Each of them may embody a different concept of when to buy and sell (trading signals). Some well-known and widely used technical indicators are briefly described below. The Moving Average (MA) is used to remove market noise and find the direction of prices; it is calculated by summing up the stock prices over n days and dividing by n. The Relative Strength Index (RSI) compares the magnitude of the stock's recent gains to its recent losses and turns this into a number ranging from 0 to 100, in an attempt to determine overbought and oversold conditions of an asset. It is calculated using the following formula: RSI = 100 - 100/(1 + RS), where RS = average of x days' up closes / average of x days' down closes. An RSI below 30 indicates an oversold market condition. The Stochastic Oscillator (SO) measures the relative position of the closing price within a past high-low range. The Volatility Index (VIX) is derived from the prices of all near-term at-the-money and out-of-the-money call and put options. It is a measure of fear and optimism among option writers and buyers: when a large number of traders become fearful, the VIX rises; conversely, when complacency reigns, the VIX falls. The VIX is sometimes used as a contrarian indicator and can therefore be used as a measurement of how oversold or overbought the market is. It usually increases as the market decreases, and its usefulness is in showing when a trend reversal is about to take place.
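The following Python sketch computes two of these indicators on a made-up price series; the RSI uses the simple-average form of the formula quoted above:

def moving_average(prices, n):
    # Simple n-day moving average for each day with enough history.
    return [sum(prices[i - n + 1:i + 1]) / n for i in range(n - 1, len(prices))]

def rsi(prices, x=14):
    # RSI = 100 - 100 / (1 + RS), RS = average up close / average down close over x days.
    changes = [prices[i + 1] - prices[i] for i in range(len(prices) - 1)]
    window = changes[-x:]
    avg_up = sum(c for c in window if c > 0) / x
    avg_down = -sum(c for c in window if c < 0) / x
    if avg_down == 0:
        return 100.0
    return 100.0 - 100.0 / (1.0 + avg_up / avg_down)

# Illustrative closing prices (hypothetical values).
closes = [44.0, 44.3, 44.1, 44.6, 45.1, 45.4, 45.2, 46.0, 46.3, 46.1,
          45.8, 46.4, 46.7, 46.5, 46.9, 47.2]
print(round(moving_average(closes, 10)[-1], 2))   # latest 10-day MA
print(round(rsi(closes, 14), 2))                  # latest 14-day RSI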
One popular form of technical analysis is charting, in which graphic displays of past price performance, moving-average trends, cycles, and intra- or inter-day stock price ranges are studied in an attempt to discern cues for profitable trading. Stock price pattern analysis is the basis of the technical analysis of stocks. It relies on the interpretation of typical configurations of the ups and downs of price movements such as "head and shoulders", "top and bottom formations" or resistance lines. Stock price pattern analysis comes down to comparing known patterns with what is evolving on the chart. Charting makes use of techniques such as moving indices, trend analysis, turning-point indicators, cyclical spectral analysis, and the recognition of various formations or patterns in prices in order to forecast subsequent price behavior. Some examples of patterns are given names such as "flag", "triangle", "double bottom or top", "symmetrical triangles", and "head-and-shoulders".
The rules in the technical analysis knowledge base represent the expertise of chart reading without concern for the fundamentals of a company. By analyzing the MA, RSI, SO, VIX, and trading volumes of a stock, the technical analysis rules suggest entry points as well as exit points, taking advantage of market fluctuations instead of being victimized by them. One technical analysis rule used by TradeExpert is presented below:
RULE <92>
(INVESTMENT HORIZON: <short term>)
(ASSUMPTION: <bull market>)
(INVESTMENT OBJECTIVES: <trading for profits>)
(IF
<RSI < 35> AND
<10_day MA > 30_day MA> AND
<VIX > 70> AND
<Chart reading shows a near complete W shape> AND
<average-daily-volume >= 1.5 average>
THEN <Short term Strong Buy> )
(REASONS:
The length j of the short-term moving average MAS usually lies between 1 and 10 days, while the length k of the long-term moving average MAL usually lies between 10 and 30 days (if one uses intraday data, e.g. 30-minute bars, the same rule applies). The trading rule is: buy (go long) when the short-term (faster) moving average crosses the long-term (slower) moving average from below, and sell (go short) when the converse occurs. Equivalently: open a long position when the difference (MASj - MALk) becomes positive, otherwise open a short position. If one expresses this difference as a percentage of MALk, one gets the moving average oscillator, MAO = 100 * (MASj - MALk) / MALk.
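A compact sketch of this crossover rule and the oscillator, on hypothetical prices (the MAS/MAL lengths j and k are free parameters):

def sma(prices, length):
    # Simple moving average of the most recent `length` prices.
    return sum(prices[-length:]) / length

def ma_signal(prices, j=5, k=20):
    # Returns +1 (long) when the short MA is above the long MA, otherwise -1 (short),
    # together with the moving average oscillator expressed as a percentage of MAL.
    mas, mal = sma(prices, j), sma(prices, k)
    oscillator = 100.0 * (mas - mal) / mal
    return (1 if mas > mal else -1), oscillator

# Hypothetical closing prices trending upwards.
closes = [100 + 0.3 * i for i in range(40)]
position, mao = ma_signal(closes, j=5, k=20)
print(position, round(mao, 2))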
RULE <45>
(ASSUMPTION: <bull market>)
(INVESTMENT OBJECTIVES: <trading for profits>)
(IF
<growth_rate > PE> AND
<Momentum > 0> AND
<the industrial sector = Hot> AND
<reported-peer-earnings = beat the estimate> AND
<floating-shares = small> AND
<estimated EPS = increase over last two months> AND
<earning_release_day < 5 days> AND
<average-daily-volume >= 1.5 average>
THEN <Short term Strong Buy> )
(REASONS:
<possible short squeeze> AND
<Stock price is on uptrend> AND
<momentum before earning release> AND
<expectation of good earning report> AND
<historical performance of hot stocks before
earning> )
(UNLESS:
<downgrades-by-analysts > 2 in last 5 days> OR
<earning-warning-by-peers > 1 in last 5 days>)
(CERTAINTY-FACTOR: 0.78)
Trend analysis observes a security's price movement, times entry and exit points, analyzes price change directions, and jumps on a trend and rides it for the purpose of making quick profits. It tries to profitably exploit the frequent occurrence of asset price trends ("the trend is your friend"). Hence, this trading technique derives buy and sell signals from the most recent price movements which indicate the continuation of a trend or its reversal.
5. Object-oriented databases
The object-oriented database contains financial data on industries and companies, such as the company name, debt ratio, price-earnings ratio, annual sales, value to book value and projected earnings (Trippi & Lee, 1992). An object in the database is defined by a vector of three components: <I, P, M>. The component I is a unique ID or name for the object, which may represent an industry or a company. The component P is a set of properties belonging to the object, represented as a vector; this vector characterizes the object in terms of its attributes. The component M is a set of methods implemented by member functions. In addition to direct input, these functions are the only means of accessing and changing the attributes of objects. There are two kinds of objects in our database: industry-based objects and company-based objects. The company-based objects contain company-specific data, such as the P/E ratio, projected growth rate, debt ratio, and annual sales. The industry-based objects contain data and information related to a specific industry, such as tax benefits, stage of the life cycle, sensitivity to interest rates, legal liability, and industry strength.
Examples of such objects are shown below:
An industry-based object:
CASE 24:
(
{ATTRIBUTES:
<time-frame: 1962>
<industry-sector: steel makers>
<R/D spending: insignificant>
<price-competition: fierce>
<long-term growth rate: < 10% >
<management/worker-relation: intense>
.
.
}
{RECOMMENDATION: Sell}
{PERFORMANCE: down 50% on average in 6 months}
{JUSTIFICATIONS: Caused by JFK’s policy}
{METHODS: calculating industry-related data}
)
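A minimal Python sketch of the <I, P, M> object structure (class and attribute names are illustrative, not the chapter's code):

class DSSObject:
    # An object <I, P, M>: identifier, property vector, and member functions that are
    # (besides direct input) the only way to read or change the attributes.

    def __init__(self, identifier, properties):
        self._id = identifier                  # I: unique ID or name
        self._properties = dict(properties)    # P: attribute vector

    # M: member functions for accessing and changing attributes.
    def get(self, name):
        return self._properties.get(name)

    def set(self, name, value):
        self._properties[name] = value

# Hypothetical industry-based object echoing CASE 24.
steel_1962 = DSSObject("steel makers (1962)", {
    "time-frame": 1962,
    "R/D spending": "insignificant",
    "price-competition": "fierce",
    "recommendation": "Sell",
})
print(steel_1962.get("recommendation"))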
In 1962, President John Kennedy declared war on the steel companies, blaming higher labor costs and management. The sudden change in federal policy caused steel stock prices to drop more than 50% in a very short period of time, and they remained low for months to come. This case remembers the consequences of an unfavorable, drastic change in federal policy toward a particular industrial sector, and the dramatic price decline in the months ahead. In 1992, President Bill Clinton proposed a national healthcare reform that was widely seen as criticism of the pharmaceutical industry's practices and pricing. Considering the corresponding pairs - presidents (John Kennedy, Bill Clinton) and federal policies (against the steel industry, against the pharmaceutical industry) - one can easily see the consequence of political pressure on the price of the affected stocks. When given a drug manufacturer's stock, TradeExpert is able to draw inferences from its past experience and to make a SELL recommendation based on the historical lessons learned. Indeed, pharmaceutical stocks declined significantly over the following 12 to 18 months. Without the case-based reasoning mechanism, a decision support system would not be able to make such a prediction, since these two industry sectors are very different in terms of basic attributes, such as industry sector, R/D spending, price competition and long-term growth rate.
In addition to the external impact evaluator, TradeExpert also employs a feature matching process. The similarity score is determined by a combination of the features in common and the relative importance, expressed by weights, of these features (Simpson, 1985). Formally, the feature evaluator can be described as follows.
Let N be a new case with m features,
N = {n1, n2, ..., nm},
and O an old case with k features,
O = {o1, o2, ..., ok}.
CF denotes the set of common features,
CF = {c1, c2, ..., ct},
or, more specifically,
CF = {c | c belongs to N and c belongs to O}.
Thus, the similarity score S(N,O) of a new case N with respect to an old case O is given as:

S(N,O) = ( Σ_{i=1..k} ω_i · c_i ) / m ,   1 ≤ i ≤ k,

where ω_i is the weight assigned to the i-th feature of a case.
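Read as a weighted overlap between the new case and the old case, the score can be sketched as follows; the feature names and weights are invented, and this is only one plausible reading of the formula above:

def similarity(new_features, old_features, weights):
    # Weighted share of the new case's features that also appear in the old case.
    common = new_features & old_features
    matched_weight = sum(weights.get(f, 1.0) for f in common)
    return matched_weight / len(new_features)

# Hypothetical cases: an internet stock (new) vs. a casino-stock episode (old).
new_case = {"high growth rate", "high P/E", "above-average volume", "favorable sentiment"}
old_case = {"high growth rate", "high P/E", "above-average volume", "better-than-average earnings"}
weights = {"high growth rate": 1.0, "high P/E": 0.5,
           "above-average volume": 0.8, "favorable sentiment": 0.7}

print(round(similarity(new_case, old_case, weights), 2))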
With the case-based reasoning mechanism built in, TradeExpert is able to accumulate vast
and specialized expertise in the domain of stock analysis and trading, and to apply it to new
situations.
A case in point is the similar price movements of casino stocks in the early 90's and internet stocks in the period 1997-2000. Both sectors had many attributes in common, such as expected high growth rates, high P/Es, daily volumes far exceeding average trading volumes, a record of better-than-average earnings, and favorable investor sentiment. By employing the feature similarity evaluator, TradeExpert is able to identify the upward price movements of internet-related stocks, while ignoring the unbelievably high P/Es associated with them. As hot stocks in different periods of the 90's, many stocks in these two sectors generated 3-digit returns in a matter of 12 months. Another example of analogical reasoning can be found in the banking sector, both in the Great Depression of the 30's and in the Great Recession of 2007-2010. During these years, the stocks of all banks went down and stayed low for a long period of time. With regard to particular stocks, let us consider the examples of Google (GOOG) and Baidu (BIDU), a Chinese search engine. Having observed an impressive run by GOOG, it could be predicted with confidence that an IPO of BIDU would do very well, considering its dominant market presence and customer base. Indeed, BIDU actually went from $12 (split adjusted) in 2005 to a high of $118 in 2011.
In general, given a stock, TradeExpert searches for similar cases in the case base, establishes
correspondence between past experience and the stock currently under consideration, and
then transforms related experience to a recommendation for the stock.
"fairly" with the value 0.81, then the semantic difference becomes obvious. The probabilistic
calculation would yield the statement
If x is a high-growth company and x is an investor-preferred company, then x is a fairly high-
growth, investor-preferred company.
The fuzzy calculation, however, would yield
If x is a high-growth company and x is an investor-preferred company, then x is a very high-growth,
investor-preferred company.
Another problem arises as we incorporate more factors into our equations (such as the fuzzy set of actively-traded companies, etc.). We find that the ultimate result of a series of ANDs approaches 0.0, even if all factors are initially high. Fuzzy theorists argue that this is wrong: five factors of the value 0.90 (let us say, "very") ANDed together should yield a value of 0.90 (again, "very"), not 0.59 (perhaps equivalent to "somewhat").
Similarly, the probabilistic version of A OR B is (A + B - A*B), which approaches 1.0 as additional factors are considered. Fuzzy theorists argue that a string of low membership grades should not produce a high membership grade. Instead, the limit of the resultant membership grade should be the strongest membership value in the collection.
Another important feature of fuzzy systems is the ability to define "hedges", or modifiers of fuzzy values (Radecki, 1982). These operations are provided in an effort to maintain close ties to natural language, and to allow for the generation of fuzzy statements through mathematical calculations. As such, the initial definition of hedges and of the operations upon them is a subjective process and may vary from one statement to another. Nonetheless, the system ultimately derived operates with the same formality as classic logic.
For example, let us assume x is a company. To transform the statement "x is an expensive company in terms of its p/e" into the statement "x is a very expensive company in terms of its p/e", the hedge "very" can be defined as follows:
Very_A(x) = (A(x))^2, for any fuzzy set A.
Thus, if Expensive(x) = 0.8, then Very_Expensive(x) = 0.64. Similarly, the hedge "more or less" can be defined as Sqrt(Expensive(x)). Other common hedges such as "somewhat", "rather", and "sort of" can be handled in a similar way. Again, their definition is entirely subjective, but their operation is consistent: they serve to transform membership/truth values in a systematic manner according to standard mathematical functions. From the above discussion, it is clear that fuzzy logic can describe the investor's decision-making process in a more natural and accurate way than probability theory.
The p/e fuzzy sets are defined by piecewise membership functions over the breakpoints 10, 20, 30 and 40:
PEverylow(x) is non-zero for 0 < x < 10 and equals 0 for x >= 10;
PElow(x) is non-zero for 0 < x < 20 (with branches on 0 < x < 10 and 10 < x < 20) and equals 0 for x >= 20;
PEavg(x) equals 0 for x <= 10, is non-zero for 10 < x < 30, and equals 0 for x >= 30;
PEhigh(x) equals 0 for x <= 20, is non-zero for 20 < x < 40, and equals 0 for x >= 40;
PEveryhigh(x) equals 0 for x <= 30, rises for 30 < x < 40, and equals 1 for x >= 40.
[Figure: the overlapping fuzzy membership functions plotted over the p/e input variable (0 to 70), with the fuzzy membership value on the vertical axis.]
Fuzzy logic is a form of mathematics that lets computers deal with shades of gray. It handles the concept of partial truth: truth values between completely true and completely false. As an example, consider a p/e of 17; it yields the following fuzzy values:
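The individual values are not reproduced in the source; the sketch below assumes simple linear ramps between the breakpoints listed above (an assumption consistent with the Yieldhigh function defined later) and evaluates them for a p/e of 17, together with the "very" hedge:

def ramp_up(x, a, b):
    # Membership rising linearly from 0 at a to 1 at b (assumed shape).
    return max(0.0, min(1.0, (x - a) / (b - a)))

def triangle(x, a, b, c):
    # Triangular membership peaking at b (assumed shape).
    return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

# Assumed p/e fuzzy sets with breakpoints at 10, 20, 30 and 40.
pe_sets = {
    "verylow":  lambda x: max(0.0, min(1.0, (10 - x) / 10)),
    "low":      lambda x: triangle(x, 0, 10, 20),
    "avg":      lambda x: triangle(x, 10, 20, 30),
    "high":     lambda x: triangle(x, 20, 30, 40),
    "veryhigh": lambda x: ramp_up(x, 30, 40),
}

def very(membership):
    # The "very" hedge squares the membership value.
    return membership ** 2

pe = 17
for name, mu in pe_sets.items():
    value = mu(pe)
    if value > 0:
        print("PE" + name + "(17) =", round(value, 2), "very:", round(very(value), 2))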
Fa(x) is Contained in Fb(x) if and only if for all x: Fa(x) <= Fb(x).
With these fuzzy operators, a decision support system can provide structured ways of
handling uncertainties, ambiguities, and contradictions.
Because rule conditions are matched through fuzzy membership functions, it is possible that several rules may be fired concurrently with different certainty factors.
The following diagram shows how the fuzzy reasoning component works in TradeExpert:
[Figure: the fuzzy reasoning flow - input data is fuzzified (fuzzy membership and fuzzy match), fuzzy inference evaluates the rules of the fuzzy rule base and the cases of the fuzzy case base, and defuzzification produces the crisp output.]
To illustrate this process, let us consider the following simplified rules stored in the fundamental knowledge base, the fuzzy functions PE(x) described before, and Yield(x), which is defined below:

Yieldhigh(x) = 0 for x <= 1.5; (x - 1.5)/1.5 for 1.5 < x < 3; 1 for x >= 3
Given a stock with a p/e of 13 and dividend yield of 2.7%, the following membership
functions return non-zero values in the process of fuzzification:
Yieldavg(2.7) = 0.2
Yieldhigh(2.7) = 0.8
8. Conclusion
The development of TradeExpert demonstrates the effectiveness and value of using analogy in an intelligent decision support system, particularly in situations of imprecision, dynamics, or lack of perfectly-matched knowledge. TradeExpert shows its ability to manipulate two kinds of knowledge over time: episodic information and evolving experience. The system becomes "wiser" as more cases are added to the case base. A research area as new as case-based reasoning in intelligent decision support systems often raises more questions than answers. Future research includes, but is not limited to, the following topics: how to dynamically adjust the weights associated with the features of cases, how to automatically index and store cases to facilitate efficient retrieval and search, and how to modify the rules in the knowledge bases with minimal intervention from human experts. In conclusion, the work reported in this paper discussed several important issues in the design of intelligent decision support systems, proposed and implemented solutions to these problems, and demonstrated the usefulness and feasibility of these solutions through the design of TradeExpert.
9. References
Baldwin, J.F. "Fuzzy logic and fuzzy reasoning," in Fuzzy Reasoning and Its Applications, E.H.
Mamdani and B.R. Gaines (eds.), London: Academic Press, 1981.
Carter, John. “Mastering the Trade”. McGraw-Hill. 2006.
Edwards, Robert and Magee, John, “Technical Analysis of Stock Trends”. John Magee Inc.
1992.
Eng, W. F. “The Day Trader’s Manual”. John Wiley & Sons, Inc. 1993. New York.
Fama, Eugene (1965). "The Behavior of Stock Market Prices". Journal of Business 38: 34–105.
Fox, Justin, “The Myth of the Rational Market”. E-book. ISBN: 9780061885778.
Liu, Shaofeng; Duffy, Alex; Whitfield, Robert; Boyle, Iain. Knowledge & Information Systems,
Mar2010, Vol. 22 Issue 3, p261-286, 26p.
Nocera, Joe. “Poking holes in a Theory on Markets”. June 5, 2009. New York Times.
http://www.nytimes.com/2009/06/06/business/06nocera.html?scp=1&sq=efficient%20market&st=cse. Retrieved 8 June 2009.
Kaufman, P.J. “New Trading System and Methods”, John Wiley & Sons, Inc., 2005.
Kolodner, Janet. "Reconstructive Memory: A Computer Model," Cognitive Science 7 (1983): 4
Malkiel, Burton G.(1987). "Efficient market hypothesis," The New Palgrave: A Dictionary of
Economics, v. 2, pp. 120–23.
Covel, Michael, Trend Following: “How Great Traders Make Millions in Up or Down Markets,
Financial Times”, Prentice Hall Books, 2004
Radecki, T. "An evaluation of the fuzzy set theory approach to information retrieval," in R. Trappl,
N.V. Findler, and W. Horn, “Progress in Cybernetics and System Research”, Vol.
11: Proceedings of a Symposium Organized by the Austrian Society for Cybernetic
Studies, Hemisphere Publ. Co., NY: 1982.
Robertson, C. S. Geva, R. Wolff, "What types of events provide the strongest evidence that the stock
market is affected by company specific news", Proceedings of the fifth Australasian
conference on Data mining and analytics, Vol. 61, Sydney, Australia, 2006, pp. 145-
153.
Şen, Ceyda Güngör; Baraçlı, Hayri; Şen, Selçuk; Başlıgil, Hüseyin. “An integrated decision
support system dealing with qualitative and quantitative objectives for enterprise software
selection”. Expert Systems with Applications, Apr 2009, Part 1, Vol. 36 Issue 3, p5272-5283.
Silverman, Barry G. “Survey of Expert Critiquing Systems: Practical and Theoretical Frontiers”.
Communications of the ACM, Apr92, Vol. 35 Issue 4, p106-127.
Simple,J. James, “Trend-following strategies in currency trading”. Quantitative Finance 3, 2003,
C75-C77.
Simpson, R. L. “A Computer Model of Case-based Reasoning in Problem Solving: An Investigation
In The Domain of Dispute Mediation”. Ph.D. Dissertation, Georgia Tech. 1985
Schulmeister, Stephan. “Technical Trading Systems and Stock Price Dynamics”. WIFO-Studie
mit Unterstützung des Jubiläumsfonds der Österreichischen Nationalbank, 2002
Schulmeister. Stephan. Review of Financial Economics 18 (2009) 190–201.
Tan; Hock-Hai Teo; Izak Benbasat. “Assessing Screening and Evaluation Decision Support
System: A Resource-Matching Approach”. Information Systems Research, Jun2010, Vol.
21 Issue 2, p305-326.
4
Business Intelligence - CMDB: Implementing BI-CMDB to Lower Operation Cost Expenses and Satisfy Increasing User Expectations
1. Introduction
Since the beginning of Decision Support Systems (DSS), benchmarks published by hardware
and software vendors show that DSS technologies are more efficient by offering better
performance for a stable cost and are aiming at lowering operations costs with resources
mutualisation functionalities. Nevertheless, as a paradox, whereas data quality and
company politics improves significantly, user dissatisfaction with poor performances
increases constantly (Pendse, 2007).
In reality, this surprising result points the difficulties in globally defining the efficiency of a DSS.
As a matter of fact, we see that whereas a “performance/cost” ratio is good for a technician, from
the user perspective the “real performance/ expected performance” ratio is bad. Therefore,
efficient may mean all or nothing, due to the characteristic of user perspective: a DSS may be very
efficient for a developers needs, and completely inefficient for production objectives.
In order to improve the global DSS efficiency, companies can act upon (i) costs and/or (ii) expectations. They must be capable of precisely defining and exhaustively taking into account the costs and expectations related to their DSS.
Concerning the first aspect, cost, the Total Cost of Ownership (TCO) provides a good assessment. It contains two principal aspects: hardware/software costs and the cost of operations. With hardware/software, companies are limited by market offers and technological evolutions. The cost of operations, in addition to the potential gains from resource mutualisation, strongly influences the organization's costs depending on its pertinence. Over this problematic, companies have implemented rationalization based on best-practice guidelines, such as the Information Technology Infrastructure Library (ITIL). Nevertheless, an internal study we conducted at SP2 Solutions in 2007 showed that these guidelines are little known or unknown to the DSS experts within or outside company IT departments. Our experience shows that this has hardly changed since then.
Concerning the second aspect, we state that user expectations cannot be ignored when evaluating the efficiency of a DSS. We consider that the expected results must be based on the Quality of Service (QoS) and not on raw technical performance. In this way we define the DSS efficiency as the ratio between the DSS QoS and the DSS TCO, which is the foundation of our propositions.
Having described the context in which we position ourselves, we have developed and we
offer several solutions to improve the DSS efficiency.
(1) First, we note that an improvement process cannot exist without a prior monitoring process. We propose to adapt and apply best practices already deployed for operational systems, by implementing a system which permits measuring the DSS efficiency. Following the ITIL guidelines, this kind of system is known as the Configuration Management Data Base (CMDB), and relates to the ITIL specifications for (i) service support and (ii) service offering (Dumont, 2007). The CMDB integrates all the information required to manage an information system. This information varies from incidents and known problems, configuration and change management, to the management of availability, capacity, continuity and levels of service.
(2) Second, we show the necessity of taking into account the specifics of DSSs (in contrast with operational systems), with the purpose of elaborating the right measures for the efficiency of a DSS. These measures are different from the ones used with operational systems, which focus on the problematic of real-time utilization. DSSs are often defined in comparison with operational information systems, as the distinction between On-Line Analytical Processing (OLAP) and On-Line Transaction Processing (OLTP) shows (Inmon, 2005), (Inmon, 2010). The two are part of fundamentally different worlds over aspects such as: the contained data (raw/aggregated), data organization (databases/data warehouses), orientation (application/subject) or utilization (continuous/periods of utilization). DSSs are classically aimed at managers and rely on analytical data, which is derived from the primitive data provided by operational systems.
Our major proposition presented here is the elaboration of the Business Intelligence CMDB (BI-CMDB) for the supervision of a DSS. It serves as a "decisional of the decisional" solution and, to our knowledge and experience, it has never before been approached with an integrated proposition. The BI-CMDB follows the principles of ITIL's CMDB and contains all the elements required for managing the DSS: architecture, configuration, performances, service levels, known problems, best practices, etc. It integrates both structured information (e.g. configuration) and non-structured information (e.g. best practices). With the help of the BI-CMDB, SLA/Os (Service Level Agreements / Objectives) are taken into account when computing performance, thus focusing on improving user satisfaction rather than raw technical performance. Following this, a second contribution is to optimize the value offered by the CMDB. We combine BI, OLAP and semantic web technologies in order to improve DSS management efficiency, while offering autonomic self-management capabilities, with references from (Nicolicin-Georgescu et al., 2009).
The chapter is organized as follows. Section 2 presents how monitoring of the DSS is done so far and what elements are followed in concordance with the ITIL guidelines. In Section 3 we focus on the description of the DSS specifics in comparison with operational systems, and on how performance measurement changes with these specifics. The subjective aspect of user-perceived performance is equally discussed. Further, we present our proposition, the BI-CMDB, in Section 4. The A-CMDB and the integrated monitoring and analysis solutions for DSS management are some of the described elements. We equally note that the presented work has already been implemented and the solutions are available for use (SP2 Solutions, 2010). Section 5 consists of a series of possible shortcomings and limitations of the BI-CMDB, so that we can conclude in Section 6 with an overview of the chapter and the future doors that the BI-CMDB opens for DSS management.
account their particularities (Inmon, 2005). Nevertheless, ITIL provides a good foundation with the CMDB over service sustainability and offering (Dumont, 2007).
Service sustainability corresponds to the interface between the service user and the system providing the service. The CMDB integrates monitored data on incidents, known problems and errors, configuration elements, changes and new versions. Each of the five data types corresponds to a management module (e.g. incident management, configuration management, etc.). In turn, each of the management modules is used by the user to ensure service maintenance.
Service offering formalizes the interface between the clients and the system. This time, it refers to offering new services in accordance with the defined policies. Starting from management and infrastructure software (like the CMDB), the service levels are assessed based on SLA/Os. The elaboration of these agreements and objectives includes availability management, capacity management, financial service management, service continuity, management of the levels of service and the assurance of a high QoS.
combined data parameters. To provide a very simple example, consider a data mart with two size indicators: the index file size and the data file size. We can obtain the two separately, but we have no indicator of the total size of the data mart. This is obtained by adding the two and 'creating' a new, combined monitored indicator.
warehouse. The executed calculation and restructuring operations are very resource and time consuming. Moreover, the new data must be integrated before the start of the next day in order for the users to have the correct reports.
One last DSS characteristic we want to discuss here is the motto of the decision expert: "Give me what I say I want, so I can tell you what I really want" (Inmon et al., 1999). This expresses the fact that data warehouse users access a large variety of data from the data warehouse, always being in search of explanations for the observed reports. Therefore anticipation and prediction of the requested data, data usage or system load is very hard to implement, leading to (not necessarily a good thing) more relaxed performance requirements.
3.2 Performance
Performance measurement for DSSs brings into discussion, along with the technical aspect (from operational systems), the user satisfaction aspect (from the business-objective-oriented point of view of analytical systems). There are two main types of performance (Agarwala et al., 2006):
Raw technical performance - related to the technical indicators for measuring system performance (such as response times for recovering data from the data warehouse, report generation, data calculation, etc.).
Objective performance - related to user expectations. These measures are based on predefined policies and rules that indicate whether or not satisfactory levels are achieved by the system (e.g. response time relative to the importance of the data warehouse or to the user activity).
Retaking the example shown by the authors of (Agarwala et al., 2006), a detailing of the QoS
formula is shown below.
QoS = MonitoredValue / ObjectiveValue    (1)

and

QoS_global = ( Σ_{i=1..n} QoS_i * ScalingFactor_i ) / ( Σ_{i=1..n} ScalingFactor_i )    (2)
The basic single parameter QoS is computed by the ratio between the monitored values and
the specified expected values. If several parameters are responsible for a global QoS, then a
scaling factor is multiplied with each individual QoS, and, in the end, an average of all the
scaled QoS is computed to obtain the overall system QoS value.
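A small Python sketch of equations (1) and (2), with made-up monitored values, objectives and scaling factors:

def qos(monitored, objective):
    # Equation (1): single-parameter QoS as monitored value over objective value.
    return monitored / objective

def qos_global(parameters):
    # Equation (2): scaled average of the individual QoS values.
    # `parameters` is a list of (monitored, objective, scaling_factor) tuples.
    weighted = sum(qos(m, o) * s for m, o, s in parameters)
    return weighted / sum(s for _, _, s in parameters)

# Hypothetical indicators: (monitored value, objective value, scaling factor).
params = [
    (0.8, 1.0, 2.0),   # query response ratio, weighted heavily
    (1.2, 1.0, 1.0),   # report generation ratio
    (0.9, 1.0, 0.5),   # nightly calculation ratio
]
print(round(qos_global(params), 3))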
The data warehouse quality model is goal oriented and has its roots in the goal-question-metric model. Whether or not the metrics satisfy the goal determines the quality of the data warehouse (Vassiliadis et al., 1999). Data warehouses must provide a high QoS, translated into features such as coherency, accessibility and performance, by building a quality meta-model and establishing goal metrics through quality questions. The main advantage of a quality-driven model is the increase in the service levels offered to the user and, implicitly, the increase in user satisfaction. QoS specifications are usually described by the SLAs and the SLOs.
We have seen what SLAs stand for. In a similar manner, SLOs describe service objectives, but at a more specific level. For example, an SLA would state that an application is critical in the first week of each month, whereas the SLO would state that during critical periods the query response time on the data marts used by the application should not be greater than 1 s.
With DSSs and data warehouses, one important area where service levels are crucial is the utilization periods, in direct link with the answer to the question: what is used, by whom and when? Data warehouse high-load periods are specified with SLAs. When building a new DSS and putting in place the data warehouse architecture, the utilization periods are specified for the various applications. To paraphrase a study from Oracle, the key to enterprise performance management is to identify an organization's value drivers, focus on them, and align the organization to drive results (Oracle, 2008).
4. The BI-CMDB
In this section we present, in an exhaustively manner, our view and proposition for building
a CMDB adapted to decisional systems, which we consequently call the BI-CMDB. In order
to understand how this is built, we present a schema of the data flow and the organization
of five modules that are part of our approach.
At the bottom level are the data sources that provide the monitored data. As
mentioned, these are distributed, non-centralized and in different formats. Next, a data
collector module gathers and archives the data sources; this is the first level of
centralization, which also greatly reduces the size of the data through archiving. Then, an
Operational CMDB (O-CMDB) is built with the information centralized by the data collector. It
provides the first level of filtering (parsing and interpretation) of the monitored data,
keeping only the data of interest for further analysis.
Climbing up, the data from the O-CMDB is analysed and loaded into the Analytical CMDB
(A-CMDB), which organizes it into a data warehouse architecture made of purpose-specific
data marts. We note that the elements of the O-CMDB and of the A-CMDB follow ITIL’s
guidelines, with additional elements that correspond to the specifics of DSS. Once the
A-CMDB has been constructed, two more levels of information access (by the user) are added:
a module for publishing the reports built on the A-CMDB data warehouse and a Web
portal for consulting these reports and for dynamically exploring the data from the CMDBs.
These last two are not of interest for this chapter, although they pose their own challenges
in terms of human-computer interaction and GUI design.
A detailed description of the first three modules of the BI-CMDB follows. Examples are
provided for each of them, highlighting the differences from the traditional CMDB and the
characteristics of DSS.
Example. We show below an example of data collection for a connector implementation for
the Oracle Hyperion Essbase server, with the various connector data fluxes.
The Essbase server log files reside under the /log directory of the Essbase installation. The
connector specifies the machine and port (for connection to the Essbase server) and the
location of the Essbase installation (e.g. C:/Hyperion/logs/essbase). This directory contains a
suite of .log and .xcp files that are organized by application (here the term application
denotes a grouping of several data marts which serve the same business objective).
The .log file contains various information about the application and its data marts, which is
of interest for the log, perf and err data fluxes. An extract of the file is shown below, for a
data retrieval operation on a data mart of the application:
[Mon Jan 31 17:57:51 2011]Local/budgetApplication///Info(1013210)
User [admin] set active on database [budDM]
[Mon Jan 31 17:57:51 2011]Local/budgetApplication/budDM/admin/Info(1013164)
Received Command [Report] from user [admin]
[Mon Jan 31 17:58:21 2011]Local/budgetApplication/budDM/admin/Info(1001065)
Regular Extractor Elapsed Time : [29.64] seconds
These lines identify an Essbase report request (data retrieval) on the budDM data mart, part
of the budget application, issued on 31 January at 17:57:51 by the admin user and lasting
29.64 seconds. These elements represent the interesting monitored information and are
recovered from the last four lines; the first two lines are useless. This filtering process is
done at the construction of the O-CMDB.
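To make this filtering concrete, a small sketch of the kind of parsing the O-CMDB construction could apply to such log lines is given below; the regular expressions and field names are our assumptions, not the actual connector implementation.

import re

# Illustrative patterns for the two interesting Essbase log lines shown above.
CMD_RE = re.compile(r"\[(?P<ts>.+?)\]Local/(?P<app>[^/]+)/(?P<db>[^/]+)/(?P<user>[^/]+)/Info\(1013164\)")
TIME_RE = re.compile(r"Regular Extractor Elapsed Time : \[(?P<secs>[\d.]+)\] seconds")

def parse_report_request(lines):
    # Returns a dict with timestamp, application, data mart, user and elapsed seconds, or None.
    record = {}
    for line in lines:
        m = CMD_RE.search(line)
        if m:
            record.update(m.groupdict())
        m = TIME_RE.search(line)
        if m:
            record["secs"] = float(m.group("secs"))
    return record if {"ts", "app", "db", "user", "secs"} <= record.keys() else None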
The .xcp file contains the information needed for the xcp data flux, logging the fatal crashes
of the Essbase server. An extract of this file could be:
Current Date & Time: Fri Sep 19 11:29:01 2008
Application Name: budgetApplication
Exception Log File: /app/hyp/Hyperion/AnalyticServices/app/budgetApplication/log00001.xcp
Current Thread Id: 2
Signal Number: 0x15=Termination By Kill
Last, in order to get the configuration parameters of the application data marts, the esscmd
command line console is used (e.g. GETDBINFO, GETAPPINFO etc.). The results of these
commands are written into customized text files, which provide the perf data flux.
The role of this example is to show that the monitored information is very heterogeneous:
data sources and formats vary even within the same implementation. The data collector
provides a first centralized level of information, with no data analysis intelligence.
The only intelligence in the module enables incremental data collection, so that data is
not duplicated (e.g. if data is collected once a day, each collection takes only the log
information corresponding to the previous day). The next levels of information
centralization are presented below, with the construction of the O-CMDB and the A-CMDB.
The O-CMDB performs the first filtering of the monitored information made
available, holding only the useful part (through interpretation of the log data) and
constructing a series of purpose-specific data stores with this filtered data. The
organization of the data stores follows the main ITIL groups for the CMDB and continues
along the data collector connector data fluxes. Technically, our implementation of these
data stores uses relational databases, for both the O-CMDB and the A-CMDB; we will use
the term database in the following for better understanding.
The following O-CMDB databases are constructed.
The CI (Configuration Items) tables hold information about the architectural organization
of the target decision system. These tables differ by construction: they are not built
from the data recovered by the data collector, but from specifically defined files
or databases through which the user ‘completes’ the description of his DSS architecture in
our system. In specific cases, some of the CIs can be built automatically from the log data.
This operation of DSS entity specification must be performed before the construction of the
O-CMDB and is compulsory, as the rest of the tables are based on the DSS entities.
In our conception, we identify six levels of vertical hierarchy: (i) the entire DSS, (ii) the
physical servers, (iii) the logical servers, (iv) the applications, (v) the bases and (vi) the
programs. In order to keep this as generic as possible, we note that, depending on the specific
BI software implementation, some of these levels disappear or are remapped (e.g. an Essbase
application is well defined, with its own parameters, while for a BO solution it becomes a
report folder, i.e. a logical organization of BO reports).
The PARAM tables contain information about the configuration indicators. This is raw data,
organized by indicator type and by CI type. A reference measure table (configuration and
performance) lists the monitored indicators of interest. The data collector archives are
analyzed for patterns that allow the identification of these indicators, which are then
extracted and written into the O-CMDB PARAM table. An example is shown in Table 1. As
mentioned, the data contained here is raw data, meaning that no treatment is applied to it;
this is what we earlier called basic monitored structured information. An exception to this
rule is made for combined monitored configuration indicators (i.e. those deduced from
existing indicators). The example given earlier concerned the size of an Essbase data mart:
two basic configuration indicators (data file size and index size) are extracted directly from
the data collector archives and stored in the O-CMDB, yet an interesting measure is the
total size of the data mart, which is expressed as the sum of the two. In this case, a second
pass is made when constructing the O-CMDB to compute these combined indicators.
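A minimal sketch of this second pass over the raw indicators (the indicator and function names are illustrative):

# Second pass over the O-CMDB: derive combined configuration indicators from basic ones.
def add_combined_indicators(params):
    # params maps (data_mart, indicator_name) -> value, as extracted from the archives.
    for mart in {mart for mart, _ in params}:
        data_size = params.get((mart, "data_file_size"))
        index_size = params.get((mart, "index_size"))
        if data_size is not None and index_size is not None:
            params[(mart, "total_size")] = data_size + index_size
    return params

print(add_combined_indicators({("budDM", "data_file_size"): 512, ("budDM", "index_size"): 128}))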
The performance tables hold the monitored performance indicators, organized for
further analytical purposes and in concordance with ITIL guidelines. Still, a series of
conceptual differences are identified, mainly related to the timeline intervals. Configuration
indicators are recorded in relation to the date at which they last changed, an operation that
is rare. On the other hand, performance indicators appear with high frequency and
depend on several factors (from network lag to machine processor load). In busy
periods we have encountered up to several tens of thousands of performance entries in the
log files, which is a totally different scale from the configuration case.
In addition to these three ‘main’ tables, a series of additional tables are built, such as the
XCP tables with data concerning fatal crashes, the ERR tables with data from (non-fatal)
warnings and errors, or the MSG tables with the various operational messages.
One of the biggest benefits of constructing the O-CMDB is the reduction of the size of the
monitored information. By comparing the size of the initial raw monitored data sources
with the size of the O-CMDB database after integration of the data from those sources, we
have obtained a ratio of 1 to 99 (i.e. for every 99 MB of raw data we obtain an equivalent of
1 MB of integrated, interesting data). In addition, this reduction makes it possible to keep
track of the data over short and medium time periods (even long ones in some cases), which
permits prediction and trend assessment.
Usage purposes are similar to the utilization periods and are formalized in the same way.
Using the hour time axis, day and night intervals are defined (e.g. from 6:00 to
22:00 it is day, from 22:01 to 5:59 it is night). Integrating this axis lets the user
decide whether a certain performance analysis is pertinent (e.g. following the
response times during night periods makes little sense).
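As a trivial illustration (assuming the interval bounds given above), the usage period of a monitored event could be derived as follows:

from datetime import time

def usage_period(t: time) -> str:
    # Day runs from 06:00 to 22:00 inclusive; everything else is night.
    return "day" if time(6, 0) <= t <= time(22, 0) else "night"

print(usage_period(time(17, 57)))  # day
print(usage_period(time(23, 30)))  # night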
The user type axis represents the main focus axis when assessing user satisfaction. A
response time that is acceptable for the developer who is testing the data warehouse can be
completely unacceptable for a DSS user who wants his report as soon as possible. In the
O-CMDB, the tables have columns that specify the user who triggered the actions (e.g. the
user who asked for a report, the user who modified the configuration parameters etc.). When
integrated into the A-CMDB, users are organized logically by areas of interest, such as: (i)
role (data warehouse administrator, developer or user); (ii) area of responsibility (entire
system, department, application); (iii) type of responsibility (technical, business objective).
By intersecting these axes, the A-CMDB can rapidly answer any question concerning user
activity, thus providing complete ‘surveillance’ of who is using what and when.
Lastly, the SLA/SLO axis integrates all the elements related to business and technical
objectives: for instance, the time intervals for the utilization and usage periods are defined
as business objectives, and the performance thresholds for each of these periods as
technical objectives. With these values we can afterwards compute the QoS indicators
for any intersection of the analysis axes, as the A-CMDB offers all the data
required to do so. The SLA and SLO data are not obtained from the O-CMDB and are not
part of the data recovered by the data collector. They are usually integrated when building
the A-CMDB, either by being specified manually or by using ETLs to load them from the
source where they are defined. Even if it may seem paradoxical, in many enterprises today
SLAs and SLOs are specified in semi-formatted documents (e.g. Word documents) and can
only be integrated manually. This is a big drawback for automatic integration, and by
respecting the A-CMDB specifications enterprises are ‘forced’ to render these elements into
formats that allow automatic manipulation (such as spreadsheets, XML or even databases).
Even if this alignment requires an initial effort, the return on investment makes it
worthwhile.
We provide an example with an extract of some of the columns from the AGG_PERF table,
which aggregates the O-CMDB performance tables.
The extract shows an average response time of 5.3 seconds at the logical server level,
computed for the admin user, during a high utilization period and during day usage.
Finally, there is an SLO objective of 3 seconds for the admin user under the same conditions,
which leads to a quality of service of 0.56 (= 3 / 5.3), meaning that the admin user is only
56% satisfied with the logical server response times. If we consider that below a satisfaction
level of 80% more details are required, we automatically need to drill into the data to find
out exactly which applications, and further which bases and programs, generate this
dissatisfaction. This behaviour is characteristic of any data warehouse (following the DSS
user motto).
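To give the flavour of such a drill-down, the sketch below queries a relational A-CMDB one level lower; the AGG_PERF table layout used here (columns user_name, usage_period, utilization, application, response_time_s) is an assumption made for the example, not the actual schema.

import sqlite3

# Illustrative drill-down: from the logical server level down to its applications.
# For a response-time SLO, the QoS is expressed as objective / monitored (3 s objective).
QUERY = """
SELECT application,
       AVG(response_time_s)       AS avg_response_s,
       3.0 / AVG(response_time_s) AS qos
FROM AGG_PERF
WHERE user_name = 'admin' AND usage_period = 'day' AND utilization = 'high'
GROUP BY application
ORDER BY qos ASC;
"""

def drill_down(db_path="a_cmdb.db"):
    with sqlite3.connect(db_path) as conn:
        return conn.execute(QUERY).fetchall()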
Therefore, the A-CMDB contains all the information needed to analyse the functioning of
the DSS. It is a question of envisioning and developing ITIL’s CMDB as a decisional
component, and of applying the specific DSS characteristics to its construction, so that the
relevant measures and indicators are well defined and DSS supervisors have a very clear
view of the performance of their system from the perspective of the users and their
satisfaction levels when using the DSS.
Ontologies have been an increasingly important part of our work over the last years. Using
ontologies to formalize the DSS management aspects (such as best practices, subjective
quality measurement, business rules etc.) goes hand in hand with the BI-CMDB elaboration.
Moreover, by using ontologies it is possible to specify horizontal relations between the CIs,
in addition to the hierarchical vertical relations of the DSS architecture, thus allowing a full
description of the elements and their connections.
Last but not least, with a complete integrated view of the data and information describing
the DSS, automatic behaviours can be adopted so that the human DSS expert is helped in
managing his system. To this end, solutions such as Autonomic Computing (IBM, 2006)
have been developed, with the declared purpose of relieving IT professionals of low-level,
repetitive tasks so they can focus on high-level business objectives. Inspired by the
functioning of the human body, this autonomy model proposes four principles to ensure
that a system can function by ‘itself’: self-configuration, self-healing, self-optimization and
self-protection. An intelligent control loop, implemented by several Autonomic Computing
Managers, assures these functions. It has four phases, which correspond to a self-sufficient
process: monitor, analyze, plan and execute, all around a central knowledge base.
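A compact sketch of such a monitor-analyze-plan-execute loop is shown below; the phase logic and the knowledge-base layout are placeholders we introduce for illustration.

# Minimal monitor-analyze-plan-execute loop around a shared knowledge base (illustrative).
def autonomic_loop(sensors, effectors, knowledge, steps=10):
    for _ in range(steps):
        symptoms = sensors()                                               # monitor
        problems = [s for s in symptoms
                    if s["value"] > knowledge["thresholds"][s["metric"]]]  # analyze
        plan = [{"action": "tune", "target": p["metric"]} for p in problems]  # plan
        for change in plan:                                                # execute
            effectors(change)
        knowledge["history"].append(plan)                                  # update shared knowledge

kb = {"thresholds": {"response_time_s": 3.0}, "history": []}
autonomic_loop(lambda: [{"metric": "response_time_s", "value": 5.3}],
               lambda change: print("executing", change), kb, steps=1)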
The problem with autonomic computing is that it was elaborated with real-time operational
systems as targets. In order to apply the autonomic model to DSS, the characteristics of DSS
must be taken into consideration, similar to the passage from the CMDB to the BI-CMDB.
The work of Nicolicin-Georgescu et al. (2009) showed an example of optimizing shared
resource allocation between data warehouses by focusing on the importance and priority of
the data warehouse to the user rather than on raw technical indicators. Moreover, to comply
with the DSS functioning, changes in the traditional loop were proposed, via heuristics
targeted at the SLAs and the improvement of the QoS.
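For flavour only, a toy allocation rule in this spirit is sketched below: the shared resource is split in proportion to the user-facing priority of each data warehouse. This simple proportional rule is our illustration, not the heuristics described in the cited work.

# Illustrative priority-based allocation of a shared resource between data warehouses.
def allocate(total_memory_mb, warehouses):
    # warehouses: mapping name -> user-facing priority (higher = more important).
    total_priority = sum(warehouses.values())
    return {name: total_memory_mb * prio / total_priority
            for name, prio in warehouses.items()}

print(allocate(8192, {"budget": 3, "sales": 2, "test": 1}))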
The frequency of data collection is configurable, based on the specific needs (e.g. once a
week, once every three days etc.), and the interval required by users is usually longer than
the time needed to integrate the data into the BI-CMDB.
Next, one shortcoming may be the unpredictability of the system’s usage, which makes
scaling effects harder to measure. We have here a ‘decisional of the decisional’ solution,
which metaphorically doubles the DSS expert motto. A user of the BI-CMDB will constantly
require new reports and find new analysis axes to intersect. This means that, alongside the
main analysis axes that are implemented, a series of new analysis pivots are described
depending on the users’ needs and objectives. As these needs vary from case to case, it is
hard to assess the usage pattern of the BI-CMDB, and its performance depends strongly on
the chosen client implementation. Moreover, through the Web portal interface the user is
free to define custom queries towards the BI-CMDB, a freedom which can overload the
system (similar to the data warehouse user freedom problem).
Also, as mentioned, the combination of the BI-CMDB with ontologies is regarded as a
future evolution. For now, ensuring model consistency and completeness (from the point of
view of the contained data) is done ‘semi-manually’, after report generation (therefore at the
final stage). We aim to move this verification to the database construction level, so as to
avoid unnecessary report generation (or, worse, delivering the wrong report to the wrong
recipient). By implementing an ontology model and benefiting from the capacities of
inference engines, such verifications can be done at the construction stage, before any use of
the BI-CMDB data. A classic example is the mixing of CIs across the hierarchy: the CI
specification is built from the user’s manual specification of his system and is thus prone to
human error. A physical server which accidentally becomes an application would lead to
reports completely different from what the user was expecting.
Finally, we note that semantic and autonomic evolutions of the BI-CMDB raise a series of
questions, from knowledge integration to the supervision of autonomic behaviors and rule
consistency.
6. Conclusion
We have detailed in this chapter an implementation of the Business Intelligence CMDB,
with the purpose of improving the efficiency of a DSS. Starting from the fact that, despite
technological advancements, user satisfaction with DSS performance keeps decreasing, we
have isolated some of the elements responsible for this paradox, among which the
Quality of Service, the Total Cost of Ownership and the integration of Service Level
Agreement policies when computing DSS performance. We argue that user perception is a
vital factor that cannot be ignored in favour of raw technical performance indicators.
In conclusion, implementing the BI-CMDB offers a series of elements we consider essential
to an efficient DSS: a CMDB for DSS, a complete TCO, an integration of the SLAs/SLOs, and
an adoption of ITIL guidelines and best practices, all with lower costs and improved user
satisfaction levels. The BI-CMDB proposition for the ‘decisional of the decisional’, integrated
with cutting-edge semantic and autonomic technologies, opens the door to a whole new
area of research towards the future of efficient DSS.
Nevertheless, developing the BI-CMDB is a first successful step towards the future of DSS.
With the development of semantic and autonomic technologies, we strongly believe that
future steps will include these aspects, towards a complete BI-CMDB
integrated with all the pieces of knowledge required for managing a DSS. The ultimate
autonomic, semantics-based BI-CMDB for DSS is a planet which seems “light years” away,
but the current speed of technology evolution is rapidly bringing it closer.
7. Acknowledgment
We would like to thank Mr. Eng. Adrien Letournel and Mr. Eng. Guillaume St-Criq, both
employees of SP2 Solutions, for their support with providing the data and the elements for
the examples and for their feedback and support with this chapter.
8. References
Agarwala, S., Chen, Y., Milojicic, D. & Schwan, K. (2006), QMON: QoS- and utility-aware
monitoring in enterprise systems, in ‘IEEE International Conference on Autonomic
Computing’, pp. 124–133.
Codd, E. F., Codd, S. & Salley, C. (1993), ‘Providing OLAP to user-analysts: an IT mandate’.
Dumont, C. (2007), ITIL pour un service informatique optimal 2e édition, Eyrolles.
Ganek, A. G. & Corbi, T. A. (2003), ‘The dawning of the autonomic computing era’, IBM
Systems Journal 43(1), 5–18.
Gruber, T. (1992), What is an ontology?, Academic Press Pub.
IBM (2006), An architectural blueprint for autonomic computing, 4th Edition, IBM Corporation.
IBM (2011), ‘IBM Cognos 8 Financial Performance Management’. Last accessed January 2011.
http://publib.boulder.ibm.com/infocenter/c8fpm/v8r4m0/index.jsp?topic=/com.
ibm.swg.im.cognos.ug_cra.8.4.0.doc/ug_cra_id30642LogFiles.html
Inmon, W. H. (2010), ‘Data warehousing: Kimball vs. Inmon (by Inmon)’. Last accessed July
2010. http://www.b-eye-network.com/view/14115/
Inmon, W. H. (2005), Building the Data Warehouse, fourth edition, Wiley Publishing.
Inmon, W. H., Rudin, K., Buss, C. K. & Sousa, R. (1999), Data Warehouse Performance, John
Wiley & Sons, Inc.
Nicolicin-Georgescu, V., Benatier, V., Lehn, R. & Briand, H. (2009), An ontology-based
autonomic system for improving data warehouse performances, in ‘Knowledge-
Based and Intelligent Information and Engineering Systems, 13th International
Conference, KES2009’, pp. 261–268.
Nicolicin-Georgescu, V., Benatier, V., Lehn, R. & Briand, H. (2010), Ontology-based
autonomic computing for resource sharing between data warehouses in decision
support systems, in ‘12th International Conference on Enterprise Information
Systems, ICEIS 2010’, pp. 199–206.
Business Objects (2008), Business Objects Enterprise XI white paper, Technical report, SAP Company.
Oracle (2008), Management excellence: The metrics reloaded, Technical report, Oracle.
Oracle (2010), ‘Oracle Hyperion Essbase’. Last accessed July 2010.
http://www.oracle.com/technology/products/bi/essbase/index.html
Pendse, N. (2007), ‘Performance matters - the OLAP surveys hold some performance
surprises’. www.olapreport.com
SAP (2010), Data, data everywhere: A special report on managing information, Technical
report, SAP America, Inc.
SP2Solutions (2010), ‘SP2 Solutions homepage’. Last accessed December 2010, www.sp2.fr
Vassiliadis, P., Bouzeghoub, M. & Quix, C. (1999), Towards quality-oriented data warehouse
usage and evolution, in ‘Proceedings of the 11th International Conference on
Advanced Information Systems Engineering, CAISE 99’, pp. 164–179.
5
Decision Support Systems Application to Business Processes at Enterprises in Russia
1. Introduction
This chapter focuses on one possible approach to the decision support problem, based on
multi-agent simulation modelling. Most decision support cases generally consist of defining
a set of available alternatives, estimating them and selecting the best one. When choosing a
solution, one needs to consider a large number of conflicting objectives and thus to evaluate
possible solutions against multiple criteria.
System analysis and synthesis constitute a management aid targeting a more effective
system organization, given the system's limitations and objectives. Within the decision
support process, analysis and synthesis methods are used to forecast and estimate the
consequences of a decision. Available scenarios of situation development are designed and
analysed, scenario synthesis is performed, and the results are evaluated in order to develop
the policy leading to the most desired outcomes. Computer aid for analysis and synthesis
problems supports the choice of the most promising and rational system development
strategy. Formalized solving of analysis and synthesis problems is also interesting from the
point of view of workflow automation for analysts and decision makers.
The following analysis and synthesis problems of business systems are discussed further:
virtual enterprise establishment; business process re-engineering; and business process
benchmarking (improvement and enhancement). In Russia, major research relates to
business systems analysis; this concept includes business processes, management,
production and manufacturing, logistics, technological processes and decision making. The
task of virtual enterprise establishment (Andreichikov & Andreichikova, 2004) belongs to a
class of structural synthesis tasks for complex systems. The main objective of virtual
enterprise establishment is the cooperation of legally independent companies and
individuals manufacturing a certain product or providing services within a common
business process. The main goal of business process re-engineering is the reorganization of
material, financial and informational flows, targeting organizational structure
simplification; redistribution and minimization of resource use; reduction of the time
needed to satisfy client needs; and improvement of the quality of client service.
Business process re-engineering offers solutions for the following tasks: analysis of the
existing business processes and enterprise activity and decomposition into sub-processes;
analysis of bottlenecks in the business process structure, e.g. related to resource underload
or overload or to
queuing; business process re-organization (synthesis) to eliminate problems and reach the
set effectiveness criteria; and design and development of an information system for business
process support. The problem of analysing bottlenecks in the business process structure can
be solved with the aid of static or dynamic modeling methods. Static modeling implies the
definition and analysis of the business process structure, as well as cost analysis of process
functions and identification of the most demanding and unprofitable functions, or those
with a low coefficient of resource use. Dynamic simulation modeling allows multiple
business process operations to be executed over a continuous period of time, offering
statistics gathering on process operation and identification of bottlenecks in their structure.
Business process re-organization relying on static modeling is based on a heuristic approach
and demands high qualification and experience from the analyst. Simulation modeling in
business process re-engineering allows re-engineering rules and their application to be
automated.
Business process benchmarking is closely related to re-engineering. The main goal of
benchmarking is business process re-organization in accordance with a master model. The
master model is based on a combination of the best business processes of various
enterprises, identified through comparative analysis.
The main idea of the chapter is the integration of situational, multi-agent, simulation and
expert modeling methods and tools in order to increase the effectiveness of decision support
in the situational control of resource conversion.
Practical application of business process re-engineering approaches as management tools
has its limitations: the need for a highly qualified analyst with a deep understanding of the
problem domain, and the impossibility of synthesizing re-engineering decisions and
checking the computed solutions on the real management object. Simulation and AI
methods applied to the automation of business process re-engineering make it possible to:
1. Reduce the demand for analyst experience in system analysis and synthesis by using
formalized expert knowledge and applying mathematical modelling methods; 2. Evaluate
the computed re-engineering solutions and select the most effective one on a model of the
management object.
The use of a multi-agent approach at the business process model formalization stage is
motivated by the presence of decision makers in the system: their behaviour is motivated,
they cooperate with each other, and they accumulate knowledge of the problem domain
and of management task solution scenarios. The intelligent agent model allows a decision
maker model to be designed, which is the basis for implementing information technology
for model analysis and decision making.
Bugaichenko’s model (Bugaichenko, 2007) defines intelligent agents with a mental (Belief-
Desire-Intention, BDI) architecture. Multi-agent system properties are defined with the aid
of a purpose-built formal logic, MASL, a method for the logical specification of multi-agent
systems with time constraints; such systems are considered capable of experience
accumulation and analysis.
The multi-agent system model is represented by the tuple
ag = (S, A, env, see, Ib, bel, brf, ID, des, drf, filter, plan, prf), where
S – collection of states of the external environment;
A – nonempty finite collection of agent activities;
env – external environment behavior function, correlating a collection of possible next states
of the external environment to the current state of the external environment and the selected
action of an agent;
see – correct perception of external environment states by an agent, defining a collection P of
equivalence classes on S;
Ib – collection of agent beliefs, dependent on the agent's perception of the external
environment and his own activity;
bel – collection of current agent beliefs;
brf – beliefs update function;
ID – collection of agent desires, dependent on the goals (criteria functions);
des – collection of current agent desires;
drf – agent desires update function;
filter – agent desires filtration function;
plan – current agent plan, represented by a finite state machine with input alphabet P, output
alphabet A, state set Ipln, transition relation σpln and initial state ipln,0;
prf – plan update function.
Bugaichenko’s agent has a mental (BDI) architecture featuring three components: beliefs,
desires and intentions (Bugaichenko, 2005).
Beliefs contain information about the regularities and current state of the agent's external
environment. This information may be erroneous or incomplete and is therefore considered
belief rather than reliable knowledge. Note that the prognosis function is also considered an
agent belief.
Agent desires make up a collection of all agent goals. It is unlikely that an agent, limited in
resources, will be able to achieve all of his goals.
Intentions make up a collection of goals that the agent has decided to achieve. Satisfiability
of intentions (in the case of a planning agent) may be defined as the possession of a plan
leading to goal achievement.
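A schematic transcription of this deliberation cycle is sketched below; it is a strong simplification, and the placeholder update functions are ours, not Bugaichenko's formal definitions.

# Schematic BDI deliberation cycle (illustrative skeleton, not the formal model).
class BDIAgent:
    def __init__(self, beliefs, desires, plan):
        self.beliefs = beliefs      # bel: current beliefs about the environment
        self.desires = desires      # des: current goals
        self.plan = plan            # current plan: here a function percept -> action

    def step(self, percept):
        self.beliefs = self.brf(self.beliefs, percept)                    # belief revision (brf)
        self.desires = self.filter(self.drf(self.beliefs), self.desires)  # desire update (drf) + filtering
        self.plan = self.prf(self.beliefs, self.desires, self.plan)       # plan revision (prf)
        return self.plan(percept)

    # Placeholder update functions; concrete definitions depend on the agent.
    def brf(self, beliefs, percept): return {**beliefs, **percept}
    def drf(self, beliefs): return set(self.desires)
    def filter(self, new_desires, old_desires): return new_desires or old_desires
    def prf(self, beliefs, desires, plan): return plan

agent = BDIAgent(beliefs={}, desires={"deliver_report"}, plan=lambda percept: "wait")
print(agent.step({"queue_length": 3}))  # -> "wait"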
One of the most complex stages of the decision support process is the selection (planning) of
activities for goal achievement. Bugaichenko proposes representing agent plans as a
network of interacting finite state machines (Bugaichenko, 2007).
The problems of experience accumulation by the system and of agent self-prognosis of its
own activity are also solved within Bugaichenko’s model, using a symbolic data
representation in the form of resolving diagrams.
As noted above, multi-agent system properties are described with the aid of the MASL
formal logic, a method for the logical specification of multi-agent systems with time
constraints; such systems are considered capable of experience accumulation and analysis.
MASL combines the capabilities of the following temporal logics: Propositional Dynamic
Logic (PDL), Real-Time Computation Tree Logic (RTCTL) and Alternating-Time Temporal
Logic (ATL) (Bugaichenko, 2006). Adapting these logics to the specification of the properties
of the multi-agent system mathematical model allowed Bugaichenko to formalize the
definition of cooperation and agent competition, to define nondeterministic behaviour of the
external environment, and to extend the expressive power of the specification language.
The key concept of agent interaction modeling is the coalition. According to Bugaichenko, a
coalition C is a certain subset of system agents that act together in order to achieve personal
and common goals. Agents of a coalition trust each other, exchange information and
coordinate their activities. At the same time, they do not trust agents from outside the
coalition and do not affect their behavior. So, formally, a coalition may be considered a single
intelligent agent whose perception of the external environment and mental behavior are
defined as a combination of the perceptions and mental states of the coalition agents.
Masloboev’s multi-agent model (Masloboev, 2008) was developed for information support
of innovative activity in a region, together with the support and estimation of potentially
effective innovative structures.
Subjects of innovative activity are represented as software agents operating and interacting
with each other in a common information environment (the virtual business environment,
VBE) for the benefit of their owners, forming an open multi-agent system with a
decentralized architecture.
According to Masloboev (Masloboev, 2008), VBE model has the form of
The model of the local structure of a dialog has a finite set of dialog states (positions Pi) and
a finite set of transfers tj, each of which has a corresponding expression (transfer condition)
in the interaction language L. The model makes use of the KIF and KQML languages for
agent interaction. The main concept here is speech act theory, which is the basis of the
KQML language, together with a large number of problem domains defined on the basis of
the KIF language.
The graphical notation of the local structure model of an agent dialog is presented in Fig. 3.
Each dialog position has a structure of the form
P = <agent_state, exec_reactions, color_position, actions_collection>,
with execute_action and end_of_dialog among the possible agent states. Each dialog
transfer has a structure of the form
t = <(P_i-1, P_i), transfer_condition(painting_P_i-1), actions_with_colors>.
The sequence of agents' communicative actions is determined on the Petri net by analysing
the output track, determining the positions passed, and including in the final sequence those
actions that are defined in the action list of the current position.
The general model of intelligent agent interaction is implemented in SMIAI, developed with
the aid of Gensym Corp. G2 and Microsoft Visual Studio. The system has been tested in the
problem domains of online billing, investment project management, state control of
chemically dangerous objects, and others (Rybina & Paronjanov, 2009).
The Resource-Activity-Operation (RAO) model is used to define complex discrete
systems (CDS) and the activities within these systems, in order to study the static and
dynamic features of events and activities (Emelyanov & Yasinovskiy, 1998). The discreteness
of the system is defined by two properties:
the CDS content may be defined by a countable set of resources, each of which belongs to a
certain type;
CDS state changes occur at countable moments of time (events) and C+(ei) = C−(ei+1) (the
system state after event ei is identical to the system state before event ei+1).
Conceptually, a CDS may be represented by a set of resources that have certain parameters.
The parameter structure is inherited from the resource type. A resource state is defined by
the vector of all its parameter values; the system state is defined by all its resource
parameter values.
There are two types of resources: permanent (existing throughout the model simulation) and
temporary (dynamically created and destroyed during the simulation). By analogy with the
GPSS language, RAO resources may be called transacts: they are dynamically generated in a
certain model object, pass through other objects and are finally destroyed. Note that the
resource database in RAO is implemented on the basis of the queueing system apparatus,
with automatic gathering of system operation statistics: the average value of the examined
indicator, its minimum and maximum values, and the standard deviation.
Resources perform specific activities by interacting with each other. Activities are delimited
by activity start and activity end events; an event in general is considered a signal
transferring data (control) about a certain CDS state for a certain activity. All events are
divided into regular and irregular ones. Regular events reflect the logic of resource
interaction (the interaction sequence). Irregular events define changes of the system that are
unpredictable in the production model. Thus, CDS operation may be considered a temporal
sequence of activities and interrupting irregular events.
Activities are defined by operations, which are essentially modified production rules that
take temporal relations into account. The simulator scans the knowledge base for all
operations and checks their pre-conditions to see whether an operation can start; where
they hold, start events are raised for the corresponding activities. The trace system writes
detailed information on events to a dedicated file, which is further processed for detailed
analysis of the process and for presenting the information in a convenient form.
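A toy sketch of this scheduling idea is given below (our simplification in Python; RAO itself uses its own modelling language): operation pre-conditions are scanned, start events schedule end-of-activity events, and each end event modifies the system state.

# Toy discrete-event scheduler in the spirit of RAO operations (illustrative only).
import heapq

def simulate(operations, state, horizon=100.0):
    events, clock = [], 0.0
    while clock <= horizon:
        # Scan all operations and raise start events for those whose pre-conditions hold.
        for op in operations:
            if op["precondition"](state):
                heapq.heappush(events, (clock + op["duration"], op["name"], op["effect"]))
                op["precondition"] = lambda s: False   # fire each operation once, for simplicity
        if not events:
            break
        clock, name, effect = heapq.heappop(events)    # next end-of-activity event
        effect(state)
        print(f"{clock:6.1f}  end of {name}, state = {state}")

state = {"parts": 2, "done": 0}
simulate([{"name": "convert", "duration": 5.0,
           "precondition": lambda s: s["parts"] > 0,
           "effect": lambda s: s.update(parts=s["parts"] - 1, done=s["done"] + 1)}], state)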
The dynamic model of multi-agent resource conversion processes (MRCP) (Aksyonov &
Goncharova, 2006) was developed on the basis of the resource conversion process (RCP)
model (Aksyonov, 2003) and targets the modeling of business processes and decision
support for management and control processes.
The multi-agent resource conversion process model was developed by integrating several
approaches: simulation and situational modeling, expert and multi-agent systems, and the
object-oriented approach.
The key concept of the RCP model is the resource convertor, which has an input, a launch
condition, a conversion, a control and an output. The launch condition defines the moment
in time when a convertor starts its activity, on the basis of such factors as the state of the
conversion process, the state of the input/output resources, control commands, the tools for
the conversion process and other events in the external environment. The conversion
duration is defined immediately before conversion, based on control command parameters
and active resource limitations.
The MRCP model may be considered an extension of the base RCP model, adding the
functionality of intelligent agents.
The main objects of a discrete multi-agent RCP are: operations (Op), resources (Res), control
commands (U), conversion devices (Mech), processes (PR), sources (Sender) and resource
receivers (Receiver), junctions (Junction), parameters (P), agents (Agent) and coalitions (C).
Process parameters are set by the object characteristics function. Relations between resources
and conversion devices are set by a link object (Relation). The existence of agents and
coalitions presumes the availability of situations (Situation) and decisions (action plans)
(Decision). The MRCP model has a hierarchical structure, defined with system graphs of
high-level integration.
Agents control the RCP objects; every agent embodies a model of a decision maker. An
agent (a software or hardware entity) is defined as an autonomous artificial object
demonstrating active, motivated behavior and capable of interacting with other objects in a
dynamic virtual environment. At every point of system time a modeled agent performs the
following operations (Aksyonov & Goncharova, 2006): analysis of the environment (current
system state); state diagnosis; knowledge base access (interaction with the knowledge base
(KB) and data base (DB)); and decision making. Thus the functions of analysis, situation
structuring and abstraction, as well as the generation of resource conversion process control
commands, are performed by agents.
A coalition is formed when several agents unite. An agent coalition has the following
structure:
С = <Name, {A1,…,Am}, GС, KBС, M_In, M_Out, SPC, Control_O>, where
Name – coalition name;
{A1,…, Am} – a collection of agents, forming a coalition;
GС – coalition goal;
KBС – coalition knowledge base;
M_In – a collection of incoming messages;
M_Out – a collection of outgoing messages;
SPC – a collection of behaviour scenarios acceptable within the coalition;
Control_O – a collection of controlled objects of resource conversion process.
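A direct, illustrative transcription of this structure into code could look as follows (the concrete types are our choice):

from dataclasses import dataclass, field

# Illustrative transcription of the coalition tuple C = <Name, {A1..Am}, Gc, KBc, M_In, M_Out, SPc, Control_O>.
@dataclass
class Coalition:
    name: str                                               # Name
    agents: list = field(default_factory=list)              # {A1, ..., Am}
    goal: str = ""                                           # Gc - coalition goal
    knowledge_base: dict = field(default_factory=dict)      # KBc
    incoming: list = field(default_factory=list)            # M_In
    outgoing: list = field(default_factory=list)            # M_Out
    scenarios: list = field(default_factory=list)           # SPc - acceptable behaviour scenarios
    controlled_objects: list = field(default_factory=list)  # Control_O - controlled RCP objects

c1 = Coalition(name="C1", agents=["A2", "A3"], goal="serve Receiver1")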
Fig. 5 shows an example of coalition C1 formed by the union of agents A2 and A3. Here
coalition C1 controls agents A2 and A3, while agent A1 acts independently.
Fig. 5. Example of coalition C1 formed from agents A2 and A3 (message exchange between
the coalition and its member agents; agent A1 acts independently on its RCP objects)
Each step of the simulation includes: agents' actions processing (state diagnosis, control
command generation); conversion rule queue generation; and conversion rule execution
with modification of the operational memory state (i.e. resource and mechanism values).
The simulator makes use of an expert system unit for situation diagnosis and control
command generation (Aksyonov et al., 2008a).
Each agent possesses its own knowledge base, a set of goals needed to configure its
behavior, and a priority that defines the agent's order in the queue for gaining control.
Generally, in any situation corresponding to the agent's activity, an agent tries to find a
decision (action scenario) in the knowledge base or to work it out itself; makes a decision;
controls goal achievement; delegates goals to its own or another agent's RCP objects; and
exchanges messages with other agents.
A multi-agent resource conversion process agent may have a hybrid nature and contain two
components (Fig. 6):
an intelligent component (production rules and/or access to a frame-based expert system);
a reactive component (agent activity defined by a UML activity diagram).
Two main agent architecture classes are distinguished:
1. Deliberative agent architecture (Wooldridge, 2005), based on artificial intelligence
principles and methods, i.e. knowledge-based systems;
2. Reactive architecture, based on the system's reaction to external environment events.
Decisions of the intelligent component are located with the aid of a decision search diagram
(Fig. 7), based on the UML sequence diagram. Each decision represents an agent activity
plan; each plan consists of a set of rules from the reactive component knowledge base. Based
on the located decision, the current agent plan is updated. Examination of all available
options contained in the knowledge base generates the agent plan library.
Table 1. Multi-agent models comparison. The table compares Bugaichenko's, Masloboev's,
Rybina's, RAO, Gaia and MRCP models against the following criteria: the resource
convertor model (input/output/launch conditions, conversion duration, hierarchical
convertor model, timed interruption of operations); queue system support; the reactive
agent model and its knowledge representation form (MASL temporal logic, painted Petri
nets or productions); the intelligent agent model (agent goals, action forecast, action
analysis and planning, agent self-study, the implementation technology for planning,
forecasting and self-study – resolving diagrams, system dynamics simulation, expert
systems or frame-based expert systems – the message exchange language (TPA or signals)
and the agent cooperation model); and the maturity of the software implementation
(prototype, beta or RTM).
| Comparison criteria | T | G | L | A | S | M | R |
| Problem domain conceptual model design | ○ | ○ | ○ | ○ | ○ | ○ | ● |
| RCP description language | ● | ● | ● | ● | ● | ● | ● |
| Systems goals definition: graphical | ● | ● | ○ | ○ | ○ | ○ | ○ |
| Systems goals definition: Balanced ScoreCard based | ● | ○ | ○ | ○ | ○ | ○ | ○ |
| Hierarchical process model | ● | ● | ● | ● | ● | ○ | ○ |
| Commands description language | ○ | ● | ○ | ○ | ○ | ○ | ○ |
| Use of natural language for model definition | ○ | ● | ○ | ○ | ● | ○ | ○ |
| Multi-agent modeling: “Agent” element | ○ | ○ | ● | ○ | ● | ● | ○ |
| Multi-agent modeling: agent behavior models | ○ | ○ | ● | ○ | ● | ● | ○ |
| Multi-agent modeling: agent knowledge base support | ○ | ○ | ○ | ○ | ● | ● | ○ |
| Multi-agent modeling: message exchange language | ○ | ○ | ○ | ○ | ● | ○ | ○ |
| Discrete event simulation modelling | ● | ● | ● | ● | ● | ○ | ● |
| Expert modeling | ○ | ● | ○ | ○ | ● | ○ | ○ |
| Situational modeling | ○ | ● | ○ | ○ | ● | ○ | ○ |
| Object-oriented approach: use of UML language | ● | ○ | ○ | ○ | ○ | ○ | ○ |
| Object-oriented approach: object-oriented programming | ○ | ● | ● | ○ | ● | ○ | ● |
| Object-oriented approach: object-oriented simulation | ○ | ● | ● | ○ | ● | ○ | ● |
| Problem domain conceptual model and object-oriented simulation integration | ○ | ○ | ○ | ○ | ○ | ○ | ● |
| Retail price, thousand $ | 50 | 70 | 8 | 4 | 0 | 0 | 0 |
Table 2. Modeling tools comparison
As we can see, all current systems lack support for some features that might be useful for
effective simulation. For example, support for problem domain conceptual model design
and for the agent-based approach is limited, except in the RAO system, which makes use of
an internal programming language. Another disadvantage of the two most powerful
systems, ARIS ToolSet and G2, is their very high retail price, which might deter a potential
customer. Also, systems such as AnyLogic, G2, SMIAI and RAO require programming skills
from their users. So, from a non-programming user's point of view, no system offers
convenient multi-agent resource
conversion process definition aids. On the other hand, AnyLogic and G2 make use of a
high-level programming language, which makes these products highly functional.
Simulation, situational modeling and expert systems are used in the modeling, analysis and
synthesis of organizational-technical systems and business processes. The theory of
multi-agent resource conversion processes (Aksyonov & Goncharova, 2006) may be used to
define organizational-technical systems from the decision support point of view, combining
a dynamic component of business processes, expert systems, situational modeling and
multi-agent systems.
The next section presents the development principles and technical decisions of the
designed object-oriented decision support system based on multi-agent resource conversion
processes, relying on the above-stated multi-agent resource conversion process model and
multi-agent architecture.
The figure illustrates the work of an expert system for a real estate agency: it shows the
search for available houses/apartments in the database on the basis of user-set criteria. The
search is run in the form of a decision search diagram.
The object-oriented decision support system BPsim.DSS implements the following features:
1. Problem domain conceptual model definition;
2. Multi-agent resource conversion process dynamic model design;
3. Dynamic simulation;
4. Experiment results analysis;
5. Reporting on models and experiment results;
6. Data export to MS Excel and MS Project.
The decision support system's visual output mechanism builder, based on decision search
diagrams (Fig. 7), also represents the agent knowledge base on the basis of frame-concepts.
Thus, the agent knowledge base may be defined in two ways: production-based and
frame-concept based.
Below is an example of a wizard implementation, which focuses on the analysis and
re-engineering of business processes.
System analysis in the area of business process management implies the design of various
system models (or views) by the analyst, each of which reflects certain aspects of system
behavior. Re-engineering of these models targets the search for alternative ways of
developing the system's processes by consolidating certain model sub-elements into a single
process. There are three ways of achieving this, as sketched below. One is parametric
synthesis, where the initial model is converted into a series of new models by modifying
individual parameters. Another is structural synthesis, forming new models by modifying
the model structure according to certain rules. Mixed synthesis combines features of both. A
topical task is the development of an algorithm for model analysis and mixed synthesis.
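The simplest of the three, parametric synthesis, can be illustrated by copying the initial model and varying one parameter over a grid; the model fields and the parameter grid below are invented for the example.

from copy import deepcopy

# Parametric synthesis sketch: vary one parameter of the initial model (illustrative).
initial_model = {"nodes": 14, "parameters": {"dean_office_clerks": 2}}

def parametric_synthesis(model, parameter, values):
    variants = []
    for v in values:
        candidate = deepcopy(model)
        candidate["parameters"][parameter] = v
        variants.append(candidate)
    return variants

for variant in parametric_synthesis(initial_model, "dean_office_clerks", [1, 2, 3, 4]):
    print(variant["parameters"])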
Fig. 9. Decision search graph for the application of synthesis operators, implemented in an
intelligent agent
Several examples demonstrating the application of the BPsim.DSS system are presented in
the next section.
The first example concerns a set of decision support models, including the main model
“CIS implementation options selection”. The model knowledge base contains information
on networking, hardware and software, information systems, IT projects and teams of IT
specialists.
The expert system module is used to develop the knowledge base of project alternatives
and of algorithms for searching for effective alternatives. The simulation model is used for
monitoring separate project stages; for detecting errors and conflicts that occurred at the
initial planning stage; and for resolving vis major cases (i.e. searching for a decision in a force
majeure situation that occurs under irresistible force or compulsion and may drastically
change certain system parameters) arising during development project control and CIS
deployment. The simulation model is based on the spiral model of the software lifecycle and
is designed in BPsim.DSS.
BPsim line products were also used for business process analysis and for developing the
requirements specification of the Common Information System (CIS) of Ural Federal
University. The University has a complex organizational structure (faculties, departments,
subsidiaries and representative offices), which means that the success of optimizing
University business processes depends on the quality of the survey and the
comprehensiveness of the enterprise model. The survey revealed non-optimal
implementation of certain processes at the University, e.g. the movement of personnel
(Fig. 10).
An “as-is” simulation model of the process was designed in the dynamic situations
modeling system BPsim.MAS. The model data was obtained by questioning employees of
the personnel office, the IT department and four selected dean offices. Model nodes
represent document processing stages: nodes 1-10 correspond to document processing
within a faculty, and nodes 11-14 to the processing of faculties' weekly documents in the
personnel office. The use of an intelligent agent for business process analysis and
re-engineering allowed certain problems to be identified.
1. The intelligent agent discovered two identical transacts within the model. The analyst
concluded that dean office employees prepared two similar documents at different stages –
the application and the first statement. Recommendation: unification of these documents.
2. The intelligent agent discovered a feedback loop, which is a negative indicator for a
document flow business process. Analysis of the highly loaded employees involved in
personnel affairs revealed that dean offices allowed long document queues to build up.
Also, the assistant rector was required to approve three similar documents at different
stages. Recommendation: process automation and the use of electronic documents.
3. The analyst discovered that IT personnel assistance was required for modifying database
data. Recommendation: process automation; data changes should be carried out by the
personnel responsible for the information, and IT department employees should not
perform functions uncommon for them.
The following process updates were offered after analysis and re-engineering:
1. When an employee signs an application, the dean office representative creates its
electronic copy in the common database;
2. The first statement is no longer necessary: all information needed for the order is stored
in the common database, so approvals need to be gathered only once;
3. All process participants work directly with the electronic application, without having to
send documents to the dean office;
4. Files can be stored electronically;
5. The assistant rector approves only the application and the order;
6. Document routes can be traced;
7. Employees of the IT department no longer perform the uncommon function of updating
the database;
8. In case of an urgent order, the corresponding information objects are generated by the
dean office employee.
As a result, the “as-to-be” model no longer has nodes 7, 9, 10 and 15 (Fig. 10). The new
business process definition became the basis for the requirements specification of the CIS
module “Movement of personnel”, which also used diagrams designed in the BPsim.SD
CASE tool. The deployment effectiveness was estimated after deployment and half a year of
use of the module, based on the “as-is” (old) and “as-to-be” (new) simulation models. The
results are presented in Table 4.
Thus, due to the improvement and automation of the “Movement of personnel” process, the
work efficiency of dean's office employees was raised by 27% and that of student desk
employees by 229%. The economic effect of the deployment is estimated at about 25
thousand euro per calendar year. The economic effect is achieved by shortening and
automating unnecessary document processing stages, preventing double input of
information and decreasing employee load.
| Indicators | “AS-IS” model | “AS-TO-BE” model |
| Documents processed, per month | 390 | 1264 |
| Documents lost, per month | 12 | 0 |
| Dean office employees performance, documents per hour | 0.4 | 0.5 |
| Personnel office employees performance, documents per hour | 2.4 | 7.9 |
Table 4. “Movement of personnel” deployment effect estimation
The following mathematical methods are used in MSN and business process modelling,
analysis and synthesis tasks: teletraffic theory may be used at all MSN levels except the
services level, while simulation, situational and expert modelling methods are used for
business process analysis and synthesis tasks. Expert and situational modelling methods,
neural networks, and multi-agent and evolutionary modelling methods can be used in RCP
formalization.
The theory of multi-agent resource conversion processes is applied to define the MSN from
the decision support point of view.
The approach based on frame-concepts and conceptual graphs, proposed by A. N. Shvetsov
and implemented in the form of the «Frame systems constructor» expert system shell (FSC),
is used as a means of knowledge formalization (Shvetsov, 2004). A frame-based semantic
network representing the feasible relations between frame-concepts is defined at the system
analysis stage in the form of an extended UML class diagram.
The UML sequence diagram is used to implement the visual FSC output mechanism
builder. This approach allows the problem solution flow to be defined visually (as a
flowchart), where the solution becomes a sequence of procedure (method/daemon) calls
from one frame to another. Hereby, this approach allowed the implementation of a visual
object-oriented ontology and knowledge-based output mechanism constructor in the form
of decision search diagrams.
BPsim.DSS was applied to the technical and economic engineering of an MSN in the Ural
region, covering the metropolis of Ekaterinburg, Russia, and its satellite towns. The designed
model is shown in Fig. 11.
6. Conclusion
In this chapter we have presented the following key results.
Some popular dynamic situation modelling systems, including AnyLogic, ARIS, G2, Arena
and RAO, as well as models and prototypes of other systems, were compared. This
comparison revealed the necessity of developing a new system focused on multi-agent
resource conversion processes. Among the disadvantages of the named systems are an
incomplete set of features for a dynamic situation modelling system; no support for problem
domain conceptual model engineering or for designing multi-agent models containing
intelligent agents; incomplete orientation towards multi-agent resource conversion process
problems; orientation towards programming users; and high retail price.
Requirements for a situational mathematical model of multi-agent resource conversion
processes were formulated. The model must provide the following functions: dynamic
modelling of resource conversion processes; definition of intelligent agent communities
controlling the resource conversion process; and application of the situational approach.
System development required the definition of the multi-agent resource conversion process
model. The following features of the model were designed:
7. Acknowledgment
This work was supported by the State Contract No. 02.740.11.0512.
8. References
Aksyonov K.A., Sholina I.I. & Sufrygina E.M. (2009a). Multi-agent resource conversion
process object-oriented simulation and decision support system development and
application, Scientific and technical bulletin, Vol. 3 (80), Informatics.
Telecommunication. Control, St.Petersburg, pp.87-96.
Aksyonov K.A., Bykov E. A., Smoliy E. F. & Khrenov A. A. (2008a). Industrial enterprises
business processes simulation with BPsim.MAS, Proceedings of the 2008 Winter
Simulation Conference, pp. 1669-1677.
Aksyonov K.A., Spitsina I.A. & Goncharova N.V. (2008b). Enterprise information systems
engineering method based on semantic models of multi-agent resource conversion
processes and software, Proceedings of 2008 IADIS International Conference on
Intelligent Systems and Agents, part of Multi Conference on Computer Science and
Information Systems (MCCSIS), July 22-27, 2008, Amsterdam, Netherlands, pp. 225-
227.
Aksyonov, K.A. & Goncharova N.V. (2006). Multi-agent resource conversion processes dynamic
simulation, Ekaterinburg, USTU
Aksyonov, K.A. (2003). Research and development of tools for discrete resource conversion
processes simulation, DPhil research paper. Ural State Technical University,
Ekaterinburg, Russia
Andreichikov, A.V. & O.N.Andreichikova (2004). Intelligent information systems, Moscow:
Finance and statistics.
Bugaichenko, D.Y. (2007). Development and implementation of methods for formally logical
specification of self-tuning multi-agent systems with timely limitations. DPhil research
paper. St. Petersburg, Russia.
Bugaichenko, D.Y. (2006). Mathematical model and specification of intelligent agent systems
/ Bugaichenko. System Programming. No 2.
Bugaichenko, D.Y. (2005). Abstract architecture of intelligent agent and methods of its
implementation / Bugaichenko, Soloviev. System Programming. No 1.
Emelyanov V.V., Yasinovskiy S.I. (1998). Introduction to intelligent simulation of complex
discrete systems and processes. RAO language. Moscow, Russia
Masloboev A.V. (2009). Hybrid architecture of intelligent agent with simulation apparatus /
Masloboev, MGTU bulletin. Vol. 12, No 1. Murmansk State Technical University,
Murmansk, Russia
Masloboev A.V. (2008). Method of superposed generation and effectiveness estimation for
regional innovation structures / Masloboev, MGTU bulletin. Vol. 11, No 2.
Murmansk State Technical University, Murmansk, Russia
Muller J.P. & M.Pischel (1993). The Agent Architecture InteRRap: Concept and Application,
German Research Center for Artificial Intelligence (DFKI)
Rybina G.V., Paronjanov S.S. (2009). SMIAI system and its application experience for
building multi-agent systems / Rybina, Paronjanov. Software products and systems.
No 4.
Rybina G.V., Paronjanov S.S. (2008a). General model of intelligent agents interaction and
methods of its implementation / XI National Conference on Artificial Intelligence.
http://www.raai.org
Rybina G.V., Paronjanov S.S. (2008b). Modeling intelligent agents interaction in multi-agent
systems / Rybina, Paronjanov. Artificial intelligence and decision making. No 3.
Shvetsov, A.N. (2004). Corporate intellectual decision support systems design models and methods,
DPhil research paper, St.Petersburg, Russia
Wooldridge M. (1995). Intelligent Agents: Theory and Practice, Knowledge Engineering Review,
Vol. 10 (2).
Wooldridge M., Jennings N., Kinny D. (2000). The Gaia Methodology for Agent-Oriented
Analysis and Design. Journal of Autonomous Agents and Multi-Agent Systems, 3. p.
285-312.
6

Intelligence Decision Support System
in E-commerce
1. Introduction
The present state of the world economy urges managers to look for new methods which can
help to restart economic growth. To achieve this goal, managers use standard as well as
new procedures and tools. The development of the information society and of the so-called new
economy has created a whole new business environment that is more extensive and rapidly changing.
The use of modern information and communication technologies (ICT), which enable faster and
cheaper communication and an increase in the number of business activities, has become a standard.
Electronic business (e-business) now plays a major role in the world economy.
The basic support for e-business transactions are the so-called e-commerce systems. E-commerce
systems have become the standard interface between sellers (or suppliers and manufacturers) and
customers. E-commerce systems can be considered systems with large geographical
distribution; this follows from their nature, since they are based on the Internet. Statistics
show an increasing interest in cross-border online shopping. This also depends on the
efforts of manufacturers and retailers to establish themselves in foreign markets. E-commerce
systems allow them to do so quickly and at relatively low financial cost.
One of the basic features of efficient e-commerce is the correct definition and description of all
internal and external processes. All management activities and decision making have to
be targeted at customers' needs and requirements. The optimal and most exact way to
find the best design of an e-commerce system and its procedural structure is
modelling and simulation. Organizations developing e-business and e-commerce solutions
consider business modelling a central part of their projects. The latest theoretical and
practical experience shows what a great strategic advantage the creation and use of new
e-business and e-commerce models can bring to organizations doing business. The
importance and necessity of creating and using new models is described in the most
recent publications, for example in (Laudon & Traver, 2009) or (Laudon & Traver, 2010), as
well as in older ones, for example (Barnes & Hunt, 2000). Next to the business model, considerable
attention must be paid to models of e-commerce systems from a technical point of view
(Rajput, 2000).
The expanded scope and the creation and use of new models place increased demands on
decision-making. The social and business environment is changing very rapidly, and this has an
impact on managerial approaches. ICT has become the basic decision-making support.
The technological bases of e-commerce systems are ERP and CRM systems, which constitute the
core of enterprise informatics. Almost all of them now contain decision-making support
modules and tools, which fall into the decision support system (DSS) or business
intelligence (BI) category. In this context it is appropriate to ask what the difference
between DSS and BI is, how these systems can be used in e-commerce, which of these systems
is more appropriate for decision support, and under what conditions these systems
can be classified as so-called intelligent decision support systems.
On the basis of these facts, the main goal of this chapter is to present new approaches to the
creation of e-commerce system models and new approaches to managing e-commerce
systems using modern software tools for decision support.
2. E-commerce system
E-commerce systems are the fundamental aids of online shopping. According to (Velmurugan
& Narayanasamy, 2008), e-commerce is defined as an attempt to increase transactional
efficiency and effectiveness in all aspects of the design, production, marketing and sales of
products or services for existing and developing marketplaces through the utilization of
current and emerging electronic technologies. In the globalization era, understanding the
adoption of information and communication technology, including e-commerce, by developing
countries is becoming important to improve adoption success. This, in turn, enables
developed countries to trade with developing countries more efficiently.
Generally and simply, an e-commerce system can be defined as a web server linked to a
company's information system. A detailed definition of an e-commerce system derives from the
definition of an information system, whose basic components are information and
communication technologies and users. Information systems are developed mainly for
management support. Managers are, in a way, a special group of users. For them, the information
system is on the one hand a tool to support management activities, and on the other hand the
source of current information describing the current state of the managed objects. This is
the principle of feedback, which is one of the fundamental principles of management.
Some authors (in some publications) consider as an e-commerce system only a web server that
contains all the necessary functionality (for example (Garcia et al., 2002)). The main model of an e-
commerce system based on a process-oriented approach is shown, for example, in (Rajput,
2000). This model can be extended, and then we can define the main components of e-commerce
systems as follows (Fig. 1):
customers;
internet;
web server (web interface);
CRM (Customer Relationship Management);
ERP (Enterprise Resource Planning);
LAN (Local Area Network);
payment system;
delivery of goods;
after-delivery (after-sales) services;
information systems of cooperating suppliers and customers.
E-commerce systems are developed to support business activities. Customers (buyers) have
their own requirements, and corporate managers have to find all the ways, methods and
resources to meet their needs and requirements. Great emphasis must be placed on all
management control systems and on systems to support the decision-making processes.
In 2003, the ANEC Policy Statement on Design for All called upon
standard-makers to take the following generic consumer requirements into account when
designing, selecting, commissioning, modifying and standardizing ICT systems.
Requirements for IS/IT were summarized as accessibility/design for all, adaptability, child
safety issues, comprehensible standards, consistent user interface, cost transparency, easily
adaptable access and content control, ease of use, environmental issues, error tolerance and
system stability, exportability, functionality of solution, health and safety issues,
information supply for first-time user set-up procedure, interoperability and compatibility,
multi-cultural and multi-lingual aspects, provision of system status information, privacy
and security of information, quality of service, system reliability and durability, rating and
grading systems, reliability of information, terminology. (ANEC, 2005)
The second group of customer requirements is closely associated with business transactions
(Suchánek, 2010a). Customers want to find what they want easily and quickly, to get
sufficient information, to place an order easily, to have a secure and failsafe payment system,
to receive goods with a good quality of service and in a short time, to have the goods guaranteed by
the sellers (producers), and to get benefits depending on the number of purchases. To be
reliable in an uncertain and changing environment, companies must be able to respond to
changes quickly. To achieve this, management needs up-to-date information. The most
important condition of customer satisfaction is feedback. Suppliers and producers have to
monitor the market environment, and everything has to be targeted at the customers. All customer
requirements have to be monitored continuously, and the company information system, together with
all company processes, has to be formed to ensure quality and rapid processing of all
customer feedback information, needs and requirements. Feedback information can be
obtained through the communication channels, which are usually integrated in CRM (Customer
Relationship Management). Feedback is the most important condition of obtaining
information. If managers want to satisfy all customer requirements, they should:
get precise information;
get information in time;
get information in the required form;
get information in visual form;
know what information they want.
Besides information, managers need:
to develop the ability to apply information technology in complex and sustained
situations and to understand the consequences of doing so;
to learn the foundations on which information technology and applications are built;
and current or contemporary skills.
Several categories of DSS are commonly distinguished: Communications Driven DSS, Data Driven
DSS, Document Driven DSS, Knowledge Driven DSS, Model Driven DSS, Spreadsheet based DSS
and Web-based DSS. A Communications
Driven DSS is a type of DSS that enhances decision-making by enabling communication and
sharing of information between groups of people. Data Driven DSS are a form of support
system that focuses on the provision of internal (and sometimes external) data to aid
decision making. Document Driven DSS are support systems designed to convert
documents into valuable business data. Knowledge Driven DSS are systems designed to
recommend actions to users. Model Driven DSS support systems incorporate the ability to
manipulate data to generate statistical and financial reports, as well as simulation models, to
aid decision-makers. Spreadsheet based DSS offer decision-makers easy to understand
representations of large amounts of data. A Web-based DSS is operated through the
interface of a web browser, even if the data used for decision support remains confined to a
legacy system such as a data warehouse. (Velmurugan & Narayanasamy, 2008)
Decision making processes require a combination of skills, creativity, recognition of the
problems, lucidity of judgment, determination, and effective implementation in an
operational plan. Generally, the decision making process has five stages (Harrison, 1998):
problem determination (definition of objectives);
collection of information (identification of alternatives);
choosing of optimal decision;
implementation of a decision;
evaluation of decision.
To make the right decision, managers have to get correct information at the right time. In
connection with e-commerce, the set of source data systems is extended. With a view to
minimizing failures during domestic and especially cross-border online selling, it is
necessary to allow for many factors. Besides the typical economic indicators, the source
information of management systems has to include, for example, legislation, culture, conventions,
etc. (Fig. 2)
Decision making processes also take place on the customers' side. The customers'
decision-making process is the process they go through when they decide to purchase
something (Olsen, 2003). Research suggests that customers go through a five-stage decision-
making process in any purchase:
need recognition and problem awareness;
information search;
evaluation of alternatives;
purchase decision;
post-purchase evaluation.
Managers' decisions should make the customers' decision-making process easier. All
decision making processes have to be targeted at the customers and their needs and
requirements. Customers' needs and requirements usually differ from country to country.
This fact is often a cause of unsuccessful cross-border online selling
transactions. The only way to reduce the number of unsuccessful cross-border
online selling transactions is an optimal management system making use of all the necessary
source information. To obtain efficient decision-making, mathematical
models of allocation processes are used (Bucki, 2008). More about a mathematical model of an e-
commerce simulation example is written at the end of this chapter. The supranational character
of e-commerce systems evokes the need to process an extensive set of information and urges
the managers to look for new methods leading to the maintenance and improvement of their
position in domestic and especially foreign markets. This is possible only with the aid of
modern information technologies. The current trend is oriented to the development and usage
of systems with business intelligence tools (Suchánek et al., 2010b).
A typical management control loop incorporates the measurement of the controlled
system outputs ("what has happened" - facts), which are evaluated and compared with the
company objectives (targets). The controller – the manager – then takes corrective action ("what
should or must be done") in order to reach the specified targets. In the case of decision support
by simulation, the outputs of the controlled subsystem and the data concerning the company
environment have to be included in the model structure. The results of the simulation are
presented to the decision maker – the manager – for evaluating possible decision alternatives. While
conceptual modelling forms the base of new system design, simulation modelling can be
seen as operations support. This is why both ways of modelling are important for achieving
flexibility of data processing in particular, and company management effectiveness in
general.
To work properly, e-commerce systems should support and cover all of a company's core
processes. Correct identification, mapping and implementation of processes into the e-commerce
system is the main condition of successful business. E-commerce dramatically and
strategically changes traditional business models. Companies are now pursuing more
intensive and interactive relationships with their business partners: suppliers and
customers. Competitive conditions and pressures on global market are forcing companies to
search for strategies of streamlining the entire value chain. To compete effectively,
companies must structurally transform their internal and external processes. These goals could
be reached by simultaneous renovation of business processes and implementation of
electronic business solutions. Business Process Reengineering (BPR) is an organizational
method demanding radical redesign of business processes in order to achieve more
efficiency, better quality and more competitive production. BPR has become one of the most
popular topics in organisational management creating new ways of doing business. Many
leading organisations have conducted BPR in order to improve productivity and gain
competitive advantage. However, regardless of the number of companies involved in re-
engineering, the success rate of re-engineering projects is less than 50%. Some of the
frequently mentioned problems related to BPR include the inability to accurately predict the
outcome of a radical change, difficulty in capturing existing processes in a structured way,
shortage of creativity in process redesign, the level of costs incurred by implementing the
new process, or inability to recognize the dynamic nature of the processes. An e-commerce
model generally means the adaptation of a company's current business model to the Internet
economy. The main purpose of developing and analysing business models is to find revenue
and value generators inside the reversible value chain or the business model's value network. BPR
in the 1990s focused on internal benefits such as cost reduction, downsizing of the
company and operational efficiency, which is a tactical rather than strategic focus. Nowadays,
e-business renovation strategies put their focus on the processes between business partners
and the applications supporting these processes. These strategies are designed to address
different types of processes with the emphasis on different aspects: customer relationship
management (CRM), supply chain management (SCM), selling-chain management and
enterprise resource planning (ERP).
It is well known that e-commerce might bring several advantages to the company. However,
existing practical business applications have not always been able to deliver the benefits
they promise in theory. Prior to adopting e-business, companies need to assess the costs
needed for setting up and maintaining the necessary infrastructure and applications and
compare it with the expected benefits. Although the evaluation of alternative solutions
might be difficult, it is essential in order to reduce some of the risks associated with BPR
projects.
Before implementing the e-commerce model, we should know whether the designed model works
properly. Simulation is often used for this verification. Simulation has an
important role in modelling and analysing the activities involved in introducing BPR, since it enables
quantitative estimation of the influence of the redesigned process on system performance.
Simulation of business processes represents one of the most widely used applications of
operational research as it allows understanding the essence of business systems, identifying
opportunities for change, and evaluating the impact of proposed changes on key
performance indicators. The design of business simulation models that will incorporate the
costs and effects of e-commerce implementation and will allow for experimentation and
analysis of alternative investments is proposed as a suitable tool for BPR projects. Some of
the benefits can be directly evaluated and predicted, but the others are difficult to measure
(intangible benefits).
[Figure (schematic): the e-commerce ordering process - the customer browses the product/services catalog, selects products, places them in the cart (basket), logs in or registers, chooses the method of delivery and the method of payment, has the data checked (with a return to the catalog or ordering guide if the check fails), provides other additional information, and the order is passed to the payment system, store inventory (removal from storage), CRM, ERP and after-sales services.]
Multi-agent systems can be used to solve problems that are difficult or impossible for an individual
agent or a monolithic system to solve. Examples of problems which are appropriate to multi-
agent systems research include online trading (Rogers, 2007), disaster response (Schurr,
2005), and modelling social structures (Ron & Naveh, 2004). The agents in a multi-agent
system have several important characteristics (Wooldridge, 2004):
autonomy - the agents are at least partially autonomous;
local views - no agent has a full global view of the system, or the system is too complex
for an agent to make practical use of such knowledge;
decentralization - there is no designated controlling agent (or the system is effectively
reduced to a monolithic system). (Panait & Luke, 2005)
Typically, multi-agent systems research refers to software agents. However, the agents in
a multi-agent system could equally well be robots (Kaminka, 2004), humans or human
teams. A multi-agent system may contain combined human-agent teams. Multi-agent
systems can manifest self-organization and complex behaviors even when the individual
strategies of all their agents are simple. A model based on the multi-agent-oriented approach can
be used to define the methods and possibilities of e-commerce system simulation.
As noted above, in the area of e-commerce a multi-agent system (MAS) consists of a number
of software components, called agents. An agent is an entity that is designed to undertake
autonomous action on behalf of a user in pursuit of his desired goal. This implies that agents
are intelligent and autonomous objects that are capable of behaving like a human being.
Therefore, agent technology is a suitable means for expanding business activities and saving
cost. Agents possess some form of basic intelligence to allow decision-making, or have
a structured response to a given set of circumstances. These behaviour patterns are acquired
from or given to the agent by the user; this enables the agent to be capable of flexible actions,
allowing it to exhibit goal-oriented and opportunistic behaviour to meet its objective.
Agents exist in an environment and can also respond to changes therein. Complex tasks can
be broken down into smaller components that can each be the responsibility of an agent.
The agents concentrate on their individual components, find optimal solutions and combine their
efforts by inter-agent communication (Folorunso et al., 2006).
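To make this idea concrete, the following minimal Python sketch (our own illustration, not taken from the cited agent frameworks; names such as StockAgent and DeliveryAgent are hypothetical) shows two agents that each solve one sub-problem of an order and combine their answers through simple message passing.

```python
# Minimal illustrative sketch: two cooperating agents split an e-commerce task
# (stock checking and carrier selection) and merge their partial results.
from dataclasses import dataclass


@dataclass
class Message:
    sender: str
    content: dict


class StockAgent:
    def __init__(self, stock):
        self.stock = stock                      # local view: only inventory data
    def handle(self, order):
        ok = self.stock.get(order["item"], 0) >= order["qty"]
        return Message("stock", {"available": ok})


class DeliveryAgent:
    def __init__(self, carriers):
        self.carriers = carriers                # local view: only carrier prices
    def handle(self, order):
        carrier = min(self.carriers, key=self.carriers.get)
        return Message("delivery", {"carrier": carrier, "cost": self.carriers[carrier]})


def process_order(order, agents):
    """Each agent solves its own sub-problem; the replies are combined."""
    replies = [agent.handle(order) for agent in agents]
    return {m.sender: m.content for m in replies}


if __name__ == "__main__":
    agents = [StockAgent({"book": 12}), DeliveryAgent({"DHL": 7.5, "PPL": 6.9})]
    print(process_order({"item": "book", "qty": 2}, agents))
```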
measurement of success – the measurement of success should answer the question of
how the e-commerce system helps to meet company objectives. The outcomes of this
measurement are used primarily for optimization and control.
The source data can be divided into the following groups:
operational characteristics – on the web interface these are, for example, the number of
displayed web pages and the number of website visitors; in connection with the information
system these can be, for example, the number of successfully completed transactions and the
number of failed transactions (which may be associated with system disturbances), etc.;
customer data – data relating primarily to the demographic characteristics of visitors and
their preferences (city listed in the order, city listed in the query or demand, gender of
the customer, etc.);
transactional data – data directly related to the sale of goods or services; these data are
the basis for financial analysis (e.g. average order value);
data from other sources – other data relating to consumer behaviour on the website.
Users perform various activities, each of which constitutes an intervention in the system. In this context, there are
indicators that can be included in the administrative field, and partly in the areas of security and
personnel. This group of indicators can include: the number of correct user
interventions, the number of incorrect user interventions (which may be related, for example, to a poorly
secured system, an inappropriate user interface, or incompetent users), the number of system
administrator interventions (system corrections) and the number of users involved in processing
one commercial transaction.
Another important area is logistics. Logistics is a channel of the supply chain which
adds the value of time and place utility. The basic indicators tracked in this area are: product
availability, the number of successfully delivered products, the number of unsuccessfully delivered
products (for example, the customer entered an incorrect address or an error occurred on the side of the
vendor or carrier), the average length of warranty service, storage costs, the cost to deliver goods,
the average delivery time of goods, and the number of cooperating suppliers offering the same goods.
The Internet has allowed the development of new marketing methods that provide
very important information useful for management support and planning. In this
area, the important key indicators are: the number of unique customers, average visit frequency,
the number of repeat customers, margin per customer, the number of first-time buyers, average sales per
repeat customer, average order value per repeat customer, market share, the percentage share
of new vs. returning visitors, average conversion rate, average time spent on the website,
the average number of pages displayed per visit, the percentage share of returns (one-page visits),
and the average number of clicks on adverts. One of the key methods of e-marketing is the campaign.
For example, in campaigns we can trace the percentage share of visits according to the type
of campaign and the percentage share of the conversion rate according to the type of campaign.
These values are the basis for determining an index of campaign quality.
E-commerce systems are implemented with the goal of expanding the sales channels and
thus increasing profits. In this context, the important general indicator is the return on
investment (ROI). To calculate ROI in web projects we can use, inter alia, an online
calculator on a website. In this case, the source data can include: the number of days (time
period), regular fixed costs (the cost of webhosting, salaries, etc.), the cost per customer
acquisition, the current daily site visits, the current conversion ratio of orders, the percentage
increase of the conversion ratio due to investments, the percentage increase of the average order value
due to investments, the estimated average order value after the increase, the current average
margins, the percentage increase of the average margins due to the investment, and the estimated
average margin value after the increase.
Using these data, we can calculate: the number of visitors, the number of orders, yields, margins,
fixed costs, the direct cost of acquiring visitors, profit, profit per day, and the return on the initial
investment.
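A minimal sketch of such an ROI calculation is given below. The formulas are a plausible reading of the inputs and outputs listed above, and the function name, parameters and example figures are illustrative; they are not taken from any particular online calculator.

```python
# Hedged sketch: ROI of a web project from the source data listed above.
def web_project_roi(days, daily_visits, conversion_rate, avg_order_value,
                    avg_margin, fixed_cost_per_day, cost_per_visitor, investment,
                    conv_uplift=0.0, order_value_uplift=0.0, margin_uplift=0.0):
    visits = days * daily_visits
    orders = visits * conversion_rate * (1 + conv_uplift)
    revenue = orders * avg_order_value * (1 + order_value_uplift)
    margin = revenue * avg_margin * (1 + margin_uplift)
    fixed_costs = days * fixed_cost_per_day
    acquisition = visits * cost_per_visitor
    profit = margin - fixed_costs - acquisition
    return {
        "visitors": visits, "orders": orders, "yields": revenue,
        "margins": margin, "fixed_costs": fixed_costs,
        "acquisition_cost": acquisition, "profit": profit,
        "profit_per_day": profit / days,
        "roi": profit / investment if investment else float("inf"),
    }

# Example: a 90-day period evaluated against a hypothetical 50 000 investment.
print(web_project_roi(days=90, daily_visits=400, conversion_rate=0.02,
                      avg_order_value=1200, avg_margin=0.25,
                      fixed_cost_per_day=300, cost_per_visitor=2.0,
                      investment=50_000, conv_uplift=0.10))
```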
Data can be processed using ICT, and valuable information for decision-making can then be
obtained from them. We can find new customer segments, analyze trends or
uncover new business opportunities. Business intelligence solutions are used for the
purposes of advanced data analysis and the search for hidden dependencies. Business
intelligence aims to support better business decision-making and can also be helpful in the
process of retaining existing customers and acquiring new ones. More about BI is given in the
next part of this chapter.
7. Business intelligence
It's common knowledge that Decision Support Systems are a specific class of computerized
information system that supports business and organizational decision-making activities.
A properly designed DSS is an interactive software-based system intended to help decision
makers compile useful information from raw data, documents, personal knowledge, and/or
business models to identify and solve problems and make decisions. But what about the so-
called intelligent decision support system? What requirements should this type of system
meet? From our perspective, an intelligent decision support system is a system
containing BI tools. Business Intelligence is a term that refers to the sum total, or effect, of
gathering and processing data, building rich and relevant information, and feeding it back
into daily operations so that managers can make timely, effective decisions and better plans
for the future. Generally, business intelligence brings managers quite a number of
advantages. The advantages enjoyed by market leaders and made possible by business
intelligence include the high responsiveness of the company to the needs of its customers,
recognition of customer needs, ability to act on market changes, optimization of operations,
cost-effectiveness, quality analysis as the basis for future projections, the best possible
utilization of resources etc.
Business intelligence is oriented towards management needs and decision making support.
The optimal setting of control processes is a prerequisite for the planned and expected aims.
Business processes are collections of activities designed to produce a specific output for
a particular customer or market. This implies a strong emphasis on how the work is done
within an organization, in contrast to a product's focus on what is done. To make the right decisions,
managers need information. Data that is relevant to a business decision may come from
anywhere. The most important sources of data include (Suchánek, 2010a):
Master data - this is data collected (usually once and once only) to define the entities in
an e-business system (customer file, product file, account codes, pricing codes, etc.).
We often encounter the term Master Data Management (MDM); MDM
comprises a set of processes and tools that consistently define and manage the non-
transactional data entities of an organization (which may include reference data).
Configuration data - as the term implies this is data defining the nature of the system
itself. The system is configured to reflect the nature and needs of the business.
Operations data (OLTP - Online Transaction Processing) - also known as activity data. This
data is generated by daily business activities such as sales orders, purchase orders,
invoices, accounting entries, and so on. OLTP refers to a class of system that facilitates
and manages transaction-oriented applications, typically for data entry and retrieval
transaction processing.
Information systems (OLAP - Online Analytical Processing) - these are sophisticated
applications collecting information from various internal and external sources to analyze
data and distill meaningful information. OLAP software is used for the real-time analysis of
data stored in a database. The OLAP server is normally a separate component that contains
specialized algorithms and indexing tools to efficiently process data mining tasks with
minimal impact on database performance.
Almost all the requisite data for decision making support in e-commerce comes from CRM
and ERP systems. Business intelligence is closely related to data warehousing. Data has to
be processed (data selection, data analysis, data cleansing, etc.) and sent at the right time and in the
required form to the competent person, usually acting in the management system (Fig. 4).
The obtained data are the basis for decision making support at all levels and kinds of management.
[Fig. 4 (schematic): data from the business environment flows through communication channels (www, phone, branch offices, dealers, promotional flyers, etc.) into operational and analytical CRM; ERP applications (accounting, inventory control, production control, order management, receivable management, human resources, property administration, planning, project management, claim control) supply orders, contracts and payments; ETL loads these data into a staging database (the primary layer of the DW) and the data warehouse; OLAP, reporting and ad-hoc reporting tools (SharePoint, MS Excel, BIRT Project, The Pentaho BI Project, Agata Report and others) then support the management of SCM (Supply Chain Management), CRM (Customer Relationship Management), FRM (Financial Resource Management), HRM (Human Resource Management), MRP (Manufacturing Resource Planning) and CPM (Composite Product Mapping).]
The results are typically presented in the form of tables and bar charts, pie charts, line graphs, profile charts and more. At present, when world
economic conditions are not favourable, companies look for all information which would help
them to restart economic growth. In this respect, e-commerce companies have recently started
to capture data on the social interaction between consumers on their websites, with the
potential objective of understanding and leveraging social influence in customers' purchase
decision making to improve customer relationship management and increase sales (Young
& Srivastava, 2007).
Business intelligence systems are able to provide managers with quite a number of statistics
dealing with customers and their environment. Important customer statistics include, for example:
matching sales revenues with site visitor activity, by week and month, in total and by product line;
matching weekly and monthly sales with site visitor activity over time (trend analysis),
in total and by product line; matching sales revenues with site visitor activity, by day and hour,
in total and by product line (to measure the effectiveness of advertising campaigns); and
matching sales revenues with site visitor activity from the main referrers, by week and month,
in total and by product line. Where the referrer is a search engine, the search query can also be
matched with sales revenues. These statistics answer managers' questions such as:
Who bought?
How much did they buy?
When did they buy?
What did they buy?
From where did customers arrive at the site?
In which region are customers located?
How did they arrive at the site (e.g. by what search engine query)?
From which page did customers enter the site?
What is their path through the site?
From which page did customers leave the site?
These questions can be answered on a weekly and monthly basis, including trends over time.
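As an illustration of one of these statistics, the following pandas sketch matches weekly sales revenue with site visitor activity by product line, the kind of trend analysis mentioned above. The column names and figures are hypothetical and the code is only our illustration of the idea.

```python
# Hedged sketch: weekly sales revenue vs. site visitor activity, by product line.
import pandas as pd

orders = pd.DataFrame({
    "date": pd.to_datetime(["2011-03-01", "2011-03-02", "2011-03-09"]),
    "product_line": ["books", "music", "books"],
    "revenue": [1200.0, 450.0, 900.0],
})
visits = pd.DataFrame({
    "date": pd.to_datetime(["2011-03-01", "2011-03-02", "2011-03-09"]),
    "product_line": ["books", "music", "books"],
    "visitors": [300, 120, 210],
})

def weekly_sales_vs_visits(orders, visits):
    """Aggregate both sources to ISO week x product line and join them."""
    def weekly(df, value):
        return (df.assign(week=df["date"].dt.isocalendar().week)
                  .groupby(["week", "product_line"])[value].sum())
    joined = pd.concat([weekly(orders, "revenue"), weekly(visits, "visitors")], axis=1)
    joined["revenue_per_visitor"] = joined["revenue"] / joined["visitors"]
    return joined.reset_index()

print(weekly_sales_vs_visits(orders, visits))
```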
Advantages and benefits of business intelligence in the sphere of e-commerce can be
summarized as:
Business intelligence gives any firm the specific view of corporate data that is required
for progress (quickly access sales, product and organizational data in any database).
In sales and marketing, business intelligence offers new tools for understanding
customers' needs and responding to market opportunities.
By providing financial planners with immediate access to real-time data, Business
Intelligence builds new value into all financial operations including budgeting &
forecasting.
Business intelligence supports decision-making with automatic alerts and automatically
refreshed data.
Business intelligence provides performance monitoring for accelerated action and
decision making.
Business intelligence makes it possible for companies to receive and process data from cross-
border business activities (above all from cross-border online shopping).
Business intelligence can bring companies a competitive advantage.
Besides data from business intelligence systems, companies can use the services of
consulting companies. There are many consulting companies providing information and
performing analyses. Correct source information is necessary, but usually has to be paid for.
There are many sources providing statistical data on the Internet, but these are usually not
adequate. Managers can acquire correct information from many sources (Molnár, 2009).
The following sources supply verified information usable for managers' decision making in
the global markets. Foreign stock market information can be found, for example, at:
http://www.bloomberg.com;
http://www.dowjones.com;
http://www.nyse.com;
http://www.nasdaq.com;
http://www.reuters.com;
http://money.cnn.com;
http://www.imrworld.org/;
http://finance.yahoo.com/;
http://www.marketwatch.com/;
http://www.indiainfoline.com/.
Marketing information can be found, for example, at:
http://www.formacompany.ie;
http://reports.mintel.com/;
http://www.eiu.com;
http://www.frost.com;
http://www.adlittle.com;
http://www.datamonitor.com;
http://www.euromonitor.com/.
Law and legislative information can be found, for example, at:
http://www.iblc.com/home/;
http://www.iblc.com/;
http://www.ibls.com/;
http://www.enterweb.org/law.htm;
http://euro.ecom.cmu.edu/resources/elibrary/ecllinks.shtml.
BI tools properly implemented into the DSS, in conjunction with the multi-agent-oriented
approach, enable the implementation of simulation methods. Simulation can be described as
the technologically and logically highest level of decision-making support. A DSS that allows
simulation can fit neatly into the category of intelligent DSS.
The reasons for the introduction of simulation modelling into process modelling can be
summarized as follows (Bosilj-Vukšic et al., 2010):
simulation enables modelling of process dynamics;
influence of random variables on process development can be investigated;
anticipation of reengineering effects can be specified in a quantitative way;
process visualization and animation are provided;
simulation models facilitate communication between clients and an analyst.
In this context, in the following section, we will present an example of simulations that can
be carried out within the e-commerce system.
Fig. 6. Generic e-commerce model (the figure shows a management control loop with a control deviation, an actuating signal, disturbances and a measuring element; management is performed by the economics department, the IS/IT department and other competent persons, and the measured indicators include the number of properly closed business transactions, the number of unsuccessful business transactions, return on investments, the time of incidental system breakdowns, the number of conversions on e-shop web sites, the fruitfulness of SEO or PPC, the number of customers' feedbacks, the number of people failures, the profit figure and quite a few others)
9. Conclusion
E-commerce systems are the fundamental aids of online shopping. These systems are focused
on customer needs and requirements. E-commerce systems are large systems and produce
huge data collections, particularly data related to the behavior of customers. These data
have potentially great value for management and decision making support. The data must be
accepted, processed and appropriately presented using appropriate tools. Only modern
software tools can provide data quickly and with the required quality. The modern software
tools for processing large data collections are BI and DSS systems. The development of e-
commerce systems places increasing emphasis on the need to create models of these
systems. The modelling of e-commerce systems can be done effectively by using a process-
oriented approach, a value-oriented approach or an approach based on multi-agent systems.
The main difference between process modelling and value chain modelling is that process
modelling specifies "How" a process is realized and implemented, while value chain
modelling specifies "Why" the process occurs in terms of the added value to the process
participants. The multi-agent approach is primarily used to support simulation.
Simulations increasingly help managers implement viable plans and adequate
management activities. A simulation system can be described as an intelligent decision support
system. It is important that the simulation system be integrated into the management
system as an automatic or semiautomatic system.
Fig. 7. The model links management layers, real activities and simulations
10. Acknowledgment
The chapter's content is based on the outputs of the project SGS/24/2010 - Use of BI and BPM to
support effective management (the project is carried out at the Silesian University in Opava,
School of Business Administration in Karviná, Czech Republic).
11. References
Agrawal, P. (2006). E-Business Measurements and Analytics: (Measuralytics). iUniverse, Inc., 91
p., ISBN 978-0595398386, Bloomington, USA
ANEC. (2005). Consumer Requirements in Standardisation relating to the Information Society.
05.05.2009, Available from http://www.anec.org/attachments/it008-03rev1.pdf
Barnes, S., Hunt. B. (2000). E-Commerce and V-Business: Business Models for Global Success,
Butterworth-Heinemann, ISBN 978-0750645324, Oxford, United Kingdom
Bosilj-Vukšič, V., Stemberger, M., I., Jaklič, J. (2010). Simulation Modelling to Assess the Added
Value of Electronic Commerce. Available from http://bib.irb.hr/datoteka/
89704.MIPRO-Vesna-Bosilj.pdf
Bucki, R. (2008). Mathematical Modelling of Allocation Processes as an Effective Tool to Support
Decision Making. Information and Telecommunication Systems, Polish Information
Processing Society, The Beskidy Group, Academy of Computer Science and
Management, No. 17, Bielsko-Biała, Poland
Dunn, C., L., Cherrington, J., O., Hollander A., S. (2005). Enterprise Information Systems. A
pattern based approach. McGraw-Hill, 397 p., ISBN 0-07-240429-9, New York, USA
ETL-Tools.Info. (2009). Definition and concepts of the ETL processes. 15.11.2009, Available from
http://etl-tools.info/en/bi/etl_process.htm
Folorunso, O., Sharma, S., K., Longe, H., O., D., Lasaki, K. (2006). An Agent-based Model for
Agriculture E-commerce System. Information Technology Journal 5(2). 2006, ISSN:
1812-5638, pp. 230-234, Available from http://www.docsdrive.com/
pdfs/ansinet/itj/2006/230-234.pdf?sess=jJghHkjfd76K8JKHgh76JG7FHGD
redhgJgh7GkjH7Gkjg57KJhT
Garcia, F.J., Paterno, F., Gil, A.B. (2002). An Adaptive e-Commerce System Definition. Springer,
Heidelberg, 2002, ISBN 978-3-540-43737-6, Berlin, Germany
Harrison, E. F. (1998). The Managerial Decision-Making Process. Cincinnati: South-Western
College Pub. 576 p., ISBN 978-0395908211, Boston, Massachusetts, United States
Hruby, P. (2006). Model Driven Design Using Business Patterns. Heidelberg: Springer
Verlag, ISBN 978-3-540-30154-7, Berlin, Germany
Kaminka, G. A. (2004). Robots are Agents, Too! AgentLink News, pp. 16–17, ISBN: 978-81-
904262-7-5, New York, USA
Laudon, K., Traver, C.G. (2009). E-Commerce 2010 (6th edition), Prentice Hall, ISBN 978-
0136100577, New Jersey, USA
Laudon, K., Traver, C.G. (2010). E-Commerce 2011 (7th edition), Prentice Hall, ISBN 978-
0136100577, New Jersey, USA
Molnár, Z. (2009). Internetové informační zdroje pro strategické plánování. International
Conference Internet, Competitivenes and Organisational Security in Knowledge Society.
Tomas Bata University in Zlin, ISBN 978-80-7318-828-3, Zlín, Czech Republic
Murillo, L. (2001). Supply chain management and the international dissemination of e-
commerce. Industrial Management & Data Systems. Volume 101, Issue 7, Page 370 –
377, ISSN 0263-5577, San Francisco, California, USA
Olsen, H. (2003). Supporting customers' decision-making process. [on-line] Retrieved 09.12.2009.
URL: <http://www.guuui.com/issues/02_03.php >
Panait, L., Luke, S. (2005). Cooperative Multi-Agent Learning: The State of the Art.
Autonomous Agents and Multi-Agent Systems, pp 387-434, [on-line] Retrieved
January 3, 2011, <http://cs.gmu.edu/~eclab/papers/panait05cooperative.pdf>
Porter, M. (1980). Competitive Strategy: Techniques for Analyzing Industries and
Competitors. Free Press, 398 p., ISBN 978-0684841489, New York, USA
Rajput, W. (2000). E-Commerce Systems Architecture and Applications, Artech House
Publishers, ISBN 978-1580530859, London, United Kingdom
Řepa, V. (2006). Podnikové procesy: Procesní řízení a modelování. Grada Publishing, 288 p., ISBN
80-247-1281-4, Prague, Czech Republic
Rogers, A., David, E., Schiff, J. and Jennings, N. R. (2007). The Effects of Proxy Bidding and
Minimum Bid Increments within eBay Auctions. ACM Transactions on the Web, 1
(2). article 9-(28 pages).
Simon, A., Shaffer, S. (2001). Data Warehousing and Business Intelligence for E-commerce.
Academic press, 320 p., ISBN 1-55860-713-7, London, England
Quick Response in
a Continuous-Replenishment-Programme
Based Manufacturer-Retailer Supply Chain
Shu-Lu Hsu1 and Chih-Ming Lee2
1Department of Management Information Systems,
National Chiayi University
2Department of Business Administration,
Soochow University
Taiwan
1. Introduction
With the widespread concept of partnership in supply chain management, the traditional
replenishment process is fast giving way to the Quick Response (QR) and Continuous
Replenishment Programme (CRP). QR is a movement in industries to shorten the
replenishment lead time which is critical to reduce inventory level and improve the levels of
customer service. Wal-Mart, Seven-Eleven Japan, and many other retailers apply
tremendous pressure on their suppliers to reduce the replenishment lead time (Chopra and
Mendil 2007). CRP is an efficient replenishment initiative which focuses on removing excess
inventory throughout the pipeline and synchronizes demand with production (EAN
International 2000). In CRP, the inventory of retailer is planned, monitored, and replenished
by the supplier on behalf of the consumers. To enable CRP, sales data and inventory level of
the retailer must be provided to the supplier via Electronic Data Interchange (EDI) or other
electronic means. Thus, to successfully implement CRP requires the supplier and the retailer
to work in a cooperative manner based on mutual trust and joint gains.
The lead time is needed for several operations among trading parties, such as ordering,
manufacturing, delivering, and handling. In practice, lead time can be shortened with extra
crashing costs. Many researchers have studied the inventory decision incorporating lead
time reduction under various assumptions. For example, Liao and Shyu (1991) first
developed an inventory model in which lead time was the unique decision variable. Ben-
Daya and Raouf (1994) extended the previous model by including both lead time and order
quantity as decision variables. Pan and Yang (2002) and Pan and Hsiao (2005) considered
lead time crashing cost as a function of both the order quantity and the reduced lead time.
Since the reduction of lead time may require both the vendor and the buyer to improve the
related operations, a dyadic vendor-buyer viewpoint for shortening lead time is often
suggested. For instance, Iyer and Bergen (1997) compare the profits of a single vendor and a
single buyer supply chain before and after QR based on a newsboy inventory model. Ben-
Daya and Hariga (2004), Ouyang et al. (2004), Chang et al. (2006), and Ouyang et al. (2007)
stock, where safety stock $= z_v\sqrt{KT\sum_{i=1}^{n}\sigma_i^2}$.
6. For each order, a common ordering cost C is incurred to and shared by all retailers and
an individual ordering cost Ci is incurred to the retailer i.
7. The manufacturer incurs a setup cost A for each production run.
8. The service level measures, i.e., the fraction of demand met per cycle, for the retailer i and
for the manufacturer are defined as $1-\dfrac{E(x_i-S_i)^{+}}{D_i(T+l)}$ and $1-\dfrac{E(y-S_v)^{+}}{KT\sum_{i=1}^{n}D_i}$, respectively.
9. The expenditure for implementing QR is modeled as the lead-time crashing cost per
order, Cr(l), and is assumed to be a non-increasing stairstep function of l, such as
$$C_r(l)=\begin{cases}0, & l=l_0,\\ r_1, & l_1\le l<l_0,\\ \;\vdots & \\ r_b, & l_b\le l<l_{b-1},\end{cases}$$
where l0 and lb represent the existing and the minimal length of lead times, respectively.
This cost could be expenditure on equipment improvement, order expediting, or special
shipping and handling.
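A small Python sketch of this stairstep cost function may help to clarify assumption 9; the breakpoints and cost figures in the example are hypothetical.

```python
# Sketch of the non-increasing stairstep lead-time crashing cost Cr(l):
# Cr(l) = 0 at the existing lead time l0, and r_j once l is crashed below l_{j-1}.
def crashing_cost(l, breakpoints, costs):
    """breakpoints = [l0, l1, ..., lb] (decreasing), costs = [r1, ..., rb]."""
    if l >= breakpoints[0]:
        return 0.0
    for j in range(1, len(breakpoints)):
        if l >= breakpoints[j]:          # l_j <= l < l_{j-1}
            return costs[j - 1]
    raise ValueError("lead time below the minimal achievable value lb")

# Example: existing lead time 0.02 years, crashable to 0.01 or 0.005 years.
print(crashing_cost(0.005, [0.02, 0.01, 0.005], [150.0, 400.0]))  # -> 400.0
```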
The target level of replenishment for retailer i is
$$S_i = D_i(T+l)+z_i\sigma_i\sqrt{T+l}. \qquad (1)$$
The expected annual carrying cost for retailer i is then
$$h_i\left(S_i-D_il-\frac{D_iT}{2}\right)=h_i\left(\frac{D_iT}{2}+z_i\sigma_i\sqrt{T+l}\right).$$
The annual individual ordering cost incurred to retailer i is Ci/T. Then, the expected annual
inventory cost for all retailers, consisting of the common ordering cost, the individual ordering costs,
and the carrying costs, is given by
$$\sum_{i=1}^{n}EC_i=\frac{1}{T}\left(C+\sum_{i=1}^{n}C_i\right)+\sum_{i=1}^{n}h_i\left(\frac{D_iT}{2}+z_i\sigma_i\sqrt{T+l}\right).$$
Fig. 1. Inventory levels for a manufacturer and multiple retailers under a coordinated
common shipment cycle time
Based on the fifth assumption, the target level of replenishment, Sv, is set as
$$S_v=KT\sum_{i=1}^{n}D_i+z_v\sqrt{KT\sum_{i=1}^{n}\sigma_i^2}. \qquad (2)$$
According to Banerjee and Banerjee (1994), with Eq. (2) the manufacturer's average
inventory can be derived as
$$\frac{\sum_{i=1}^{n}D_iT}{2}\left[K\left(1-\frac{\sum_{i=1}^{n}D_i}{P}\right)-1+\frac{2\sum_{i=1}^{n}D_i}{P}\right]+S_v-KT\sum_{i=1}^{n}D_i
=\frac{\sum_{i=1}^{n}D_iT}{2}\left[K\left(1-\frac{\sum_{i=1}^{n}D_i}{P}\right)-1+\frac{2\sum_{i=1}^{n}D_i}{P}\right]+z_v\sqrt{KT\sum_{i=1}^{n}\sigma_i^2}.$$
The expected annual cost of the manufacturer is then
$$EC_v=\frac{A}{KT}+h_v\left\{\frac{\sum_{i=1}^{n}D_iT}{2}\left[K\left(1-\frac{\sum_{i=1}^{n}D_i}{P}\right)-1+\frac{2\sum_{i=1}^{n}D_i}{P}\right]+z_v\sqrt{KT\sum_{i=1}^{n}\sigma_i^2}\right\},$$
where the first term represents the average annual setup cost and the second term is the
average carrying cost.
Based on the service level measure defined in Ouyang and Chuang (2000), the following
service level constraints are specified:
service level constraint on retailer i: $1-\dfrac{\int_{S_i}^{\infty}(x_i-S_i)f_{i(T+l)}(x_i)\,dx_i}{D_i(T+l)}\ge\alpha_i$, i = 1, 2, ..., n, (3)
service level constraint on the manufacturer: $1-\dfrac{\int_{S_v}^{\infty}(y-S_v)g(y)\,dy}{KT\sum_{i=1}^{n}D_i}\ge\alpha_v$. (4)
As the demand is normally distributed, (3) and (4) can be rewritten as (see the Appendix for
proof):
service level constraint on retailer i: $\phi(z_i)-z_i\Psi(z_i)-\dfrac{(1-\alpha_i)D_i\sqrt{T+l}}{\sigma_i}\le 0$, i = 1, 2, ..., n, (5)
service level constraint on the manufacturer: $\phi(z_v)-z_v\Psi(z_v)-\dfrac{(1-\alpha_v)KT\sum_{i=1}^{n}D_i}{\sqrt{KT\sum_{i=1}^{n}\sigma_i^2}}\le 0$, (6)
where $\phi(z)$ and $\Psi(z)$ represent the standard normal distribution and the complementary
cumulative normal distribution function, respectively. The expected annual total cost for the
system, comprising the expected annual inventory costs for all parties and the lead time
crashing cost, is given by
$$ETC(K,T,l,\mathbf{z},z_v)=\sum_{i=1}^{n}EC_i+EC_v+\frac{C_r(l)}{T}$$
$$=\frac{C+\sum_{i=1}^{n}C_i+C_r(l)+\frac{A}{K}}{T}+\frac{T}{2}\left\{h_v\sum_{i=1}^{n}D_i\left[K\left(1-\frac{\sum_{i=1}^{n}D_i}{P}\right)-1+\frac{2\sum_{i=1}^{n}D_i}{P}\right]+\sum_{i=1}^{n}D_ih_i\right\}$$
$$+h_vz_v\sqrt{KT\sum_{i=1}^{n}\sigma_i^2}+\sum_{i=1}^{n}h_iz_i\sigma_i\sqrt{T+l}, \qquad (7)$$
where the vector z ={z1, z2,…, zn}. Then, the decision is to minimize ETC(K, T, l, z, zv) under
the constraints (5) and (6).
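For illustration, the objective (7) can be evaluated directly. The Python sketch below follows the reconstructed formula; the parameter values in the example are purely hypothetical, and the code is only our illustration of the formula, not the authors' implementation (their decision support system is built in Visual Basic, as described in Section 4).

```python
# Hedged sketch: expected annual total cost ETC(K, T, l, z, zv) from Eq. (7).
from math import sqrt

def etc(K, T, l, z, zv, D, sigma, h, Ci, C, A, hv, P, crashing_cost):
    sum_D = sum(D)
    fixed = (C + sum(Ci) + crashing_cost(l) + A / K) / T
    carrying = (T / 2.0) * (hv * sum_D * (K * (1 - sum_D / P) - 1 + 2 * sum_D / P)
                            + sum(Di * hi for Di, hi in zip(D, h)))
    safety = (hv * zv * sqrt(K * T * sum(s**2 for s in sigma))
              + sum(hi * zi * si * sqrt(T + l) for hi, zi, si in zip(h, z, sigma)))
    return fixed + carrying + safety

# Example with three retailers and hypothetical parameter values.
print(etc(K=2, T=0.07, l=0.005, z=[1.8, 1.8, 1.8], zv=1.6,
          D=[9000, 10000, 16000], sigma=[400, 500, 600], h=[6, 6, 6],
          Ci=[25, 25, 25], C=50, A=400, hv=4, P=60000,
          crashing_cost=lambda l: 0.0 if l >= 0.02 else 300.0))
```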
In order to solve the nonlinear programming problem, the following propositions are
needed.
Proposition 1. For any given K, T, z, and zv, the minimal ETC will occur at the end points of
the interval [lj+1, lj] when Cr(l) is a stairstep function.
Proof:
As Cr(l) is a stairstep function, $\frac{\partial^2C_r(l)}{\partial l^2}=0$ for any l in a continuous interval [lj+1, lj]. The
second derivative of (7) with respect to l is equal to
$$\frac{\partial^2 ETC}{\partial l^2}=\frac{1}{T}\frac{\partial^2C_r(l)}{\partial l^2}-\frac{\sum_{i=1}^{n}h_iz_i\sigma_i}{4(T+l)^{3/2}}\le 0$$
for any l in a continuous interval [lj+1, lj]. Therefore, the proposition holds.
Proposition 2. For any fixed K, l, z, and zv, there is a unique optimal T to ETC in (7).
Proof:
For any given K, l, z, and zv in a continuous interval [lj+1, lj], a necessary condition for a
solution of T to be optimal for (7) is
$$\frac{\partial ETC}{\partial T}=-\frac{C+\sum_{i=1}^{n}C_i+C_r(l)+\frac{A}{K}}{T^2}+\frac{h_v\sum_{i=1}^{n}D_i\left[K\left(1-\frac{\sum_{i=1}^{n}D_i}{P}\right)-1+\frac{2\sum_{i=1}^{n}D_i}{P}\right]+\sum_{i=1}^{n}D_ih_i}{2}+\frac{h_vz_v\sqrt{K\sum_{i=1}^{n}\sigma_i^2}}{2\sqrt{T}}+\frac{\sum_{i=1}^{n}h_iz_i\sigma_i}{2\sqrt{T+l}}=0.$$
That is,
$$\frac{C+\sum_{i=1}^{n}C_i+C_r(l)+\frac{A}{K}}{T^2}=\frac{h_v\sum_{i=1}^{n}D_i\left[K\left(1-\frac{\sum_{i=1}^{n}D_i}{P}\right)-1+\frac{2\sum_{i=1}^{n}D_i}{P}\right]+\sum_{i=1}^{n}D_ih_i}{2}+\frac{h_vz_v\sqrt{K\sum_{i=1}^{n}\sigma_i^2}}{2\sqrt{T}}+\frac{\sum_{i=1}^{n}h_iz_i\sigma_i}{2\sqrt{T+l}}. \qquad (8)$$
Let Tf be the solution to (8). Then, the second derivative of (7) at Tf satisfies
$$\frac{\partial^2 ETC}{\partial T^2}(T_f)=\frac{2\left(C+\sum_{i=1}^{n}C_i+C_r(l)+\frac{A}{K}\right)}{T_f^{3}}-\frac{h_vz_v\sqrt{K\sum_{i=1}^{n}\sigma_i^2}}{4T_f^{3/2}}-\frac{\sum_{i=1}^{n}h_iz_i\sigma_i}{4(T_f+l)^{3/2}}$$
$$=\frac{2}{T_f}\left\{\frac{h_v\sum_{i=1}^{n}D_i\left[K\left(1-\frac{\sum_{i=1}^{n}D_i}{P}\right)-1+\frac{2\sum_{i=1}^{n}D_i}{P}\right]+\sum_{i=1}^{n}D_ih_i}{2}+\frac{h_vz_v\sqrt{K\sum_{i=1}^{n}\sigma_i^2}}{2\sqrt{T_f}}+\frac{\sum_{i=1}^{n}h_iz_i\sigma_i}{2\sqrt{T_f+l}}\right\}-\frac{h_vz_v\sqrt{K\sum_{i=1}^{n}\sigma_i^2}}{4T_f^{3/2}}-\frac{\sum_{i=1}^{n}h_iz_i\sigma_i}{4(T_f+l)^{3/2}}$$
$$=\frac{h_v\sum_{i=1}^{n}D_i\left[K\left(1-\frac{\sum_{i=1}^{n}D_i}{P}\right)-1+\frac{2\sum_{i=1}^{n}D_i}{P}\right]+\sum_{i=1}^{n}D_ih_i}{T_f}+\frac{3h_vz_v\sqrt{K\sum_{i=1}^{n}\sigma_i^2}}{4T_f^{3/2}}+\frac{(3T_f+4l)\sum_{i=1}^{n}h_iz_i\sigma_i}{4T_f(T_f+l)^{3/2}}>0.$$
That is, the second derivative of ETC at each Tf is positive. The result implies that, for given
K, l, z, and zv, a unique solution for T, corresponding to the minimum of ETC, can be derived
from (8). This completes the proof.
Proposition 3. For any given K, T, and l in a continuous interval [lj+1, lj], the boundary
solution for z and zv derived from (5) and (6) is the optimal solution to this nonlinear
programming problem.
Proof:
By using the method of Lagrange multipliers, the problem can be transformed into
$$ETC(K,T,l,\mathbf{z},z_v,\boldsymbol{\lambda},\mathbf{s})=\frac{C+\sum_{i=1}^{n}C_i+C_r(l)+\frac{A}{K}}{T}+\frac{T}{2}\left\{h_v\sum_{i=1}^{n}D_i\left[K\left(1-\frac{\sum_{i=1}^{n}D_i}{P}\right)-1+\frac{2\sum_{i=1}^{n}D_i}{P}\right]+\sum_{i=1}^{n}D_ih_i\right\}$$
$$+h_vz_v\sqrt{KT\sum_{i=1}^{n}\sigma_i^2}+\sum_{i=1}^{n}h_iz_i\sigma_i\sqrt{T+l}+\sum_{i=1}^{n}\lambda_i\left[\phi(z_i)-z_i\Psi(z_i)-\frac{(1-\alpha_i)D_i\sqrt{T+l}}{\sigma_i}+s_i^2\right]$$
$$+\lambda_v\left[\phi(z_v)-z_v\Psi(z_v)-\frac{(1-\alpha_v)KT\sum_{i=1}^{n}D_i}{\sqrt{KT\sum_{i=1}^{n}\sigma_i^2}}+s_v^2\right], \qquad (9)$$
where λ is a vector of Lagrange multipliers with λ = {λ1, λ2, …, λn, λv} ≥ 0, and s is a vector of
slack variables with s = {s1², s2², …, sn², sv²}. According to the Kuhn-Tucker theorem (Taha, 2002),
for any given l in a continuous interval [lj+1, lj], it can be shown that s = {0, 0, …, 0} is a
necessary condition for a solution of (9) to be optimal. Then, for any given l in a continuous
interval [lj+1, lj], the necessary conditions for (9) to be minimized are
$$\frac{\partial ETC}{\partial z_i}=h_i\sigma_i\sqrt{T+l}-\lambda_i\Psi(z_i)=0, \quad i=1,2,\ldots,n, \qquad (10)$$
$$\frac{\partial ETC}{\partial z_v}=h_v\sqrt{KT\sum_{i=1}^{n}\sigma_i^2}-\lambda_v\Psi(z_v)=0, \qquad (11)$$
$$\frac{\partial ETC}{\partial\lambda_i}=\phi(z_i)-z_i\Psi(z_i)-\frac{(1-\alpha_i)D_i\sqrt{T+l}}{\sigma_i}=0, \quad i=1,2,\ldots,n, \qquad (12)$$
$$\frac{\partial ETC}{\partial\lambda_v}=\phi(z_v)-z_v\Psi(z_v)-\frac{(1-\alpha_v)KT\sum_{i=1}^{n}D_i}{\sqrt{KT\sum_{i=1}^{n}\sigma_i^2}}=0. \qquad (13)$$
From (10) and (11),
$$\lambda_i^*=\frac{h_i\sigma_i\sqrt{T+l}}{\Psi(z_i)}>0, \quad i=1,2,\ldots,n, \quad \text{and} \quad \lambda_v^*=\frac{h_v\sqrt{KT\sum_{i=1}^{n}\sigma_i^2}}{\Psi(z_v)}>0.$$
Therefore, constraints (5) and (6) are binding. In addition, $\frac{\partial^2 ETC}{\partial z_i^2}=\lambda_i\phi(z_i)>0$ (i = 1, 2, …, n, v);
hence the z and zv derived from (12) and (13) (i.e., the boundary solutions to (5) and (6))
are optimal to (9) for given K, T, and l.
Based on the above propositions, the optimal solution for K, T, l, S, and Sv to this nonlinear
programming problem can be obtained by the following algorithm:
Algorithm
1. Set K = 1.
2. Find ETC*(K, lj | j = 0, 1, …, b):
2.1 Let j = 0.
2.2 Set l = lj, zi = 0 (i = 1, 2, …, n), and zv = 0.
2.3 Compute T* by substituting K, lj, z and zv into (8). Set Tkj = T*.
2.4 Compute z* by substituting lj and Tkj into (12), and compute zv* by substituting K
and Tkj into (13).
2.5 Compute T* by substituting K, lj, z*, and zv* into (8).
2.6 If T* = Tkj, go to step 2.7; otherwise, set Tkj = T* and go to step 2.4.
2.7 Set T*(K, lj) = Tkj, z*(K, lj) = z*, and zv*(K, lj) = zv*. Compute ETC*(K, lj) by substituting these values into (7).
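A rough Python sketch of steps 2.2-2.7 for a fixed K and crashing step lj is given below. The fixed-point update used to solve (8) for T, the bracketing interval of the root search for (12)-(13), and the SciPy routines are implementation choices of ours, not part of the authors' algorithm description, and all parameter values would be supplied by the user.

```python
# Sketch of steps 2.2-2.7: alternate between solving (8) for T and solving the
# binding service-level constraints (12)-(13) for z_i and z_v until T converges.
from math import sqrt
from scipy.stats import norm
from scipy.optimize import brentq

def solve_z(rhs):
    """Solve phi(z) - z*Psi(z) = rhs for z, with Psi the complementary normal CDF."""
    g = lambda z: norm.pdf(z) - z * norm.sf(z) - rhs
    return brentq(g, -8.0, 10.0)

def optimal_T(K, l, z, zv, D, sigma, h, Ci, C, A, hv, P, Cr, tol=1e-9):
    sum_D, num = sum(D), C + sum(Ci) + Cr + A / K
    den_const = (hv * sum_D * (K * (1 - sum_D / P) - 1 + 2 * sum_D / P)
                 + sum(Di * hi for Di, hi in zip(D, h))) / 2.0
    T = sqrt(num / den_const)                       # EOQ-like starting point
    for _ in range(200):                            # fixed-point iteration on (8)
        rhs = (den_const
               + hv * zv * sqrt(K * sum(s**2 for s in sigma)) / (2 * sqrt(T))
               + sum(hi * zi * si for hi, zi, si in zip(h, z, sigma)) / (2 * sqrt(T + l)))
        T_new = sqrt(num / rhs)
        if abs(T_new - T) < tol:
            return T_new
        T = T_new
    return T

def step2(K, l, D, sigma, alpha, alpha_v, h, Ci, C, A, hv, P, Cr):
    z, zv, T_prev = [0.0] * len(D), 0.0, None       # step 2.2
    for _ in range(100):
        T = optimal_T(K, l, z, zv, D, sigma, h, Ci, C, A, hv, P, Cr)   # steps 2.3/2.5
        if T_prev is not None and abs(T - T_prev) < 1e-9:              # step 2.6
            break
        z = [solve_z((1 - a) * Di * sqrt(T + l) / si)                  # from (12)
             for a, Di, si in zip(alpha, D, sigma)]
        zv = solve_z((1 - alpha_v) * K * T * sum(D)                    # from (13)
                     / sqrt(K * T * sum(s**2 for s in sigma)))
        T_prev = T
    return T, z, zv                                  # step 2.7 values
```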
4. Computer implementation
The algorithm proposed in Section 3 has been implemented as a decision support system on
a personal computer with an Intel Pentium D 2.8 GHz CPU and 1024 MB of RAM.
Visual Basic 2005 is utilized as the software platform to develop the decision support
system. Figure 2 shows the window used to input the number of retailers, e.g., n = 3 in this case,
and the number of steps of the lead-time crashing cost function, e.g., three steps in this case, involved in
the replenishment and lead time reduction decisions. Once the decision maker inputs the
numbers, a window to specify the relevant parameters is displayed, as shown in Figure
2. If all of the relevant parameters are input into the system, the decision maker then clicks
the label "Find Solution" to select the lead time policy and to set the output options.
Figure 3 illustrates the result: the system derives the solution in which K* = 2, l* = 0.005
years, T* = 0.0709 years, S1* = 708, S2* = 760, S3* = 1276, Sv* = 3574, and ETC* = $19455.5 for the
case of implementing QR (represented by the controllable lead time option) after the decision maker clicks "Perform".
Fig. 3. The replenishment and lead time reduction decisions under CRP provided by the
decision support system
Fig. 4. The coordinated replenishment decision under CRP provided by the decision support
system
From the value of "Saving", the benefit derived from
implementing QR can be learned. As shown in Figure 4, the system also provides the
inventory decision before QR (represented by the constant lead time option). One advantage
of the system is that the decision maker can study the sensitivity of the replenishment and lead
time decisions and of the expected annual total cost of the chain by specifying a parameter
and its variation range in the "What-if Analysis" window. Figure 5 illustrates the
output of the what-if analysis of K*, l*, T*, S*, Sv*, ETC*, and the cost saving after QR against
the variation of the standard deviation of demand.
Parameter varied | K* | T* | l* | S1* | S2* | S3* | Sv* | ETC* | K0 (a) | T0 (a) | ΔETC(%) (b)
Base example | 2 | 0.0709 | 0.005 | 708 | 760 | 1276 | 3574 | 19455.3 | 2 | 0.0711 | 0.73
σi (i=1,2,3) +100% | 2 | 0.0568 | 0.002 | 895 | 1086 | 1384 | 3789 | 27806.2 | 2 | 0.0577 | 2.59
P +100% | 1 | 0.0877 | 0.005 | 827 | 874 | 1320 | 2357 | 18822.3 | 1 | 0.0880 | 0.71
C +100% | 2 | 0.0779 | 0.005 | 758 | 808 | 1208 | 3885 | 20799.7 | 2 | 0.0781 | 0.67
Ci (i=1,2,3) +100% | 1 | 0.1024 | 0.005 | 930 | 970 | 1488 | 2691 | 23410.0 | 1 | 0.1027 | 0.54
A +100% | 3 | 0.0705 | 0.005 | 705 | 757 | 1291 | 5108 | 20696.8 | 2 | 0.0781 | 1.16
hi (i=1,2,3) +100% | 3 | 0.0507 | 0.005 | 560 | 616 | 886 | 3803 | 26567.3 | 3 | 0.0513 | 1.88
hv +100% | 1 | 0.0704 | 0.005 | 704 | 756 | 1120 | 1958 | 23078.6 | 1 | 0.0705 | 0.62
1-αi (i=1,2,3) +100% | 2 | 0.0733 | 0.01 | 705 | 741 | 1128 | 3681 | 18548.2 | 2 | 0.0735 | 0.32
1-αv +100% | 2 | 0.0727 | 0.005 | 721 | 772 | 1147 | 3467 | 18897.2 | 2 | 0.0728 | 0.75
a: K0 and T0 represent the optimal K and T, respectively, before QR is implemented.
b: ΔETC(%) = (1 − ETC*/ETC0) × 100%, where ETC0 represents the optimal ETC before QR is implemented.
The target levels of replenishments for the manufacturer and the retailers may become lower after bringing
QR into practice.
5. The number of shipments per production cycle, K*, after implementing QR is always no
less than that before implementing QR. When the values of K for the two situations are
equal, the optimal shipment cycle time, T*, as well as the protection period, T*+l*, after
QR is always no larger than before QR. The result implies that, under a fixed
number of shipments during a production cycle, the protection period for the retailer
will be reduced after implementing QR.
6. The increase of production rate will result in a longer shipment cycle time and higher
target levels of replenishments for the retailers. In contrast, as the value of P increases,
the number of shipments per production cycle and the target level of production for the
manufacturer will become smaller.
7. When the ordering cost increases, the shipment cycle time and the target levels of
replenishments for each party will increase.
8. The value of αi specifies the minimal fraction of demand met per cycle for the retailer
and directly relates with the length of protection period (T+l). It can be found that the
amount of ΔETC(%) decreases as the retailer’s maximal fraction of demand unfilled per
cycle increases. The result implies that the benefit from implementing QR is
significantly related to the retailer’s service level threshold and the benefit is substantial
for a supply chain requesting a high customer service level.
9. The numerical example shows that as 1-αv increases from 1% to 2%, the length of
common shipment cycle time increases by 2.54% (from 0.0709 to 0.0727) but the lead
time is unaffected. The result implies that the effect of manufacturer’s service level
threshold is more significant on the common shipment cycle time than on the reduction
of lead time.
5. Conclusions
Speed, service, and supply chain management have been core capabilities for business
competition. The reduction of overall system response time while maintaining a satisfactory service level has
received a great deal of attention from researchers and practitioners. In this study, we
investigated the effect of investing in QR on a CRP based supply chain where a
manufacturer produces and delivers items to multiple retailers at a coordinated common
cycle time, minimizing the expected total system cost while satisfying the service level constraints.
Extending the work of Banerjee and Banerjee (1994) by involving ordering costs and the
reducible replenishment lead time, a model and an algorithm are proposed to
simultaneously determine the optimal shipment cycle time, target levels of replenishments,
lead time, and number of shipments per production cycle under service level constraints for
the supply chain.
A numerical experiment along with sensitivity analysis was performed and the results
explain the effect of QR on the replenishment decisions and the total system cost. The results
provide the following findings about our model:
1. The system can reduce the stocks throughout the pipeline and maintain the retailers' service levels by investing in the QR initiative.
2. The benefit from implementing QR is especially significant for a supply chain with high
uncertainty in demand or the retailers requesting high service levels or incurring high
carrying costs.
3. The shipment cycle time will decrease after QR is implemented. Additionally, the shipment
cycle time is especially sensitive to the variation of manufacturer’s production rate, the
individual ordering cost, the variance of demand, and the retailer’s carrying cost.
4. The decision of adopting QR is mainly influenced by the variance of demand and the
retailer’s service level threshold. The higher the demand uncertainty or the higher the
retailer’s service level threshold, the more beneficial to implement QR in supply chains.
Appendix

Let $G=\int_{S_i}^{\infty}(x_i-S_i)f(x_i)\,dx_i$ and $z_i=\frac{x_i-\mu_i}{\sigma_i}$, where $f(x_i)$ represents a normal distribution with mean $\mu_i$ and standard deviation $\sigma_i$. By replacing $x_i$ with $\mu_i+z_i\sigma_i$, we have

$G=\int_{(S_i-\mu_i)/\sigma_i}^{\infty}(\mu_i+z_i\sigma_i-S_i)\,\phi(z_i)\,dz_i=\sigma_i\int_{(S_i-\mu_i)/\sigma_i}^{\infty}z_i\,\phi(z_i)\,dz_i-(S_i-\mu_i)\int_{(S_i-\mu_i)/\sigma_i}^{\infty}\phi(z_i)\,dz_i,$   (A.1)

where $\phi(z_i)$ is the standard normal density and $\bar{\Phi}(z_i)$ is the complementary cumulative normal distribution function defined as

$\bar{\Phi}(z_i)=\int_{z_i}^{\infty}\phi(w)\,dw=\int_{z_i}^{\infty}\frac{1}{\sqrt{2\pi}}e^{-w^2/2}\,dw.$

Let $v=z_i^2/2$; then $dv=z_i\,dz_i$ and

$\sigma_i\int_{(S_i-\mu_i)/\sigma_i}^{\infty}z_i\,\phi(z_i)\,dz_i=\sigma_i\int_{(S_i-\mu_i)^2/(2\sigma_i^2)}^{\infty}\frac{1}{\sqrt{2\pi}}e^{-v}\,dv=\frac{\sigma_i}{\sqrt{2\pi}}\,e^{-(S_i-\mu_i)^2/(2\sigma_i^2)}=\sigma_i\,\phi\!\left(\frac{S_i-\mu_i}{\sigma_i}\right).$   (A.2)

Next,

$(S_i-\mu_i)\int_{(S_i-\mu_i)/\sigma_i}^{\infty}\phi(z_i)\,dz_i=(S_i-\mu_i)\,\bar{\Phi}\!\left(\frac{S_i-\mu_i}{\sigma_i}\right).$   (A.3)

By substituting (A.2) and (A.3) into (A.1) and writing $z_i=(S_i-\mu_i)/\sigma_i$, we have

$G=\int_{S_i}^{\infty}(x_i-S_i)f(x_i)\,dx_i=\sigma_i\,\phi(z_i)-(S_i-\mu_i)\,\bar{\Phi}(z_i).$   (A.4)

Let $\mu_i=D_i(T+l)$ and let $\sigma_i\sqrt{T+l}$ be the standard deviation of demand over the protection period; applying (A.4) to (3), (3) can be rewritten as

$\phi(z_i)-z_i\,\bar{\Phi}(z_i)-\frac{(1-\alpha_i)\,D_i\sqrt{T+l}}{\sigma_i}\le 0,\qquad i=1,2,3.$

This completes the proof for (5). Accordingly, (6) can be proved.
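Equation (A.4) can be checked numerically by comparing the closed form against direct integration. The following sketch is only a verification aid, not part of the chapter's algorithm; the numbers in the example are arbitrary.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def loss_closed_form(S, mu, sigma):
    """(A.4): integral over [S, inf) of (x - S) f(x) dx for X ~ Normal(mu, sigma),
    i.e. sigma * phi(z) - (S - mu) * Phi_bar(z) with z = (S - mu) / sigma."""
    z = (S - mu) / sigma
    return sigma * norm.pdf(z) - (S - mu) * norm.sf(z)

def loss_numeric(S, mu, sigma):
    """Direct numerical integration of the same partial expectation."""
    value, _ = quad(lambda x: (x - S) * norm.pdf(x, mu, sigma), S, np.inf)
    return value

# Arbitrary illustrative values for one retailer's protection-period demand.
mu, sigma, S = 76.0, 41.0, 90.0
print(loss_closed_form(S, mu, sigma), loss_numeric(S, mu, sigma))  # the two values agree
```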
6. Acknowledgment
This research was partially supported by the National Science Council, Taiwan ROC (Plan
No. NSC 98-2410-H-415-007).
7. References
Banerjee A (1986) A joint economic lot size model for purchaser and vendor. Decis Sci 17(3):292-311.
Banerjee A, Banerjee S (1992) Coordinated, orderless inventory replenishment for a single
supplier and multiple buyers through electronic data interchange. Int J Technol
Manage 79(4-5):328-336.
Banerjee A, Banerjee S (1994) A coordinated order-up-to inventory control policy for a single
supplier and multiple buyers using electronic data interchange. Int J Prod Econ
35(1-3):85-91.
Ben-Daya M, Hariga M (2004) Integrated single vendor single buyer model with stochastic
demand and variable lead time. Int J Prod Econ 92(1):75-80.
Ben-Daya M, Raouf A (1994) Inventory models involving lead time as a decision variable. J
Oper Res Soc 45(5):579-582.
Chang HC, Ouyang LY, Wu KS, Ho CH (2006) Integrated vendor-buyer cooperative
inventory models with controllable lead time and ordering cost reduction. Eur J
Oper Res 170(2):481-495.
Chen FY, Krass D (2001) Inventory models with minimal service level constraints. Eur J
Oper Res 134(1):120-140.
Chopra S, Meindl P (2007) Supply chain management. Pearson Education Inc., New Jersey.
EAN International (2000) Continuous replenishment: how to use the EAN.UCC standard.
http://www.gs1.org/docs/EDI0002.pdf.
Goyal SK (1976) An integrated inventory model for a single-supplier-single- customer
problem. Int J Prod Res 15:107-111.
Goyal SK (1988) A joint economic-lot-size model for purchaser and vendor: a comment.
Decis Sci 19(1):236-241.
Goyal SK, Srinivasan G (1992) The individually responsible and rational decision approach to economic lot sizes for one vendor and many purchasers: a comment. Decis Sci 23(3):777-784.
Liao CJ, Shyu CH (1991) An analytical determination of lead time with normal demand. Int J
Oper Prod Manage 7(4):115-124.
Iyer AV, Bergen ME (1997) Quick response in manufacturer-retailer channels. Manage Sci
43(4):559-570.
Ouyang LY, Chuang BR (2000) Stochastic inventory model involving variable lead time with
a service level. Yugosl J Oper Res 10(1):81-98.
Ouyang LY, Wu KS, Ho CH (2004) Integrated vendor-buyer cooperative models with
stochastic demand in controllable lead time. Int J Prod Econ 92(3):255-266.
Ouyang LY, Wu KS, Ho CH (2007) An integrated vendor-buyer inventory model with
quality improvement and lead time reduction. Int J Prod Econ 108(1-2):349-358.
Pan JC, Hsiao YC (2005) Integrated inventory models with controllable lead time and
backorder discount consolidations. Int J Prod Econ 93-94:387-397.
Pan JC, Yang JS (2002) A study of an integrated inventory with controllable lead time. Int J
Prod Res 40(5):1263-1273.
Taha HA (2002) Operations research: an introduction. Prentice Hall, Upper Saddle River,
New Jersey.
8
Collaboration in Decision Making: A Semi-Automated Support for Managing the Evolution of Virtual Enterprises
1. Introduction
Collaborative Networked Organizations (CNOs) have become one of the most prominent strategic paradigms that companies have adopted as a means to face the challenges imposed by globalization (Camarinha-Matos et al., 2005). There are several types of CNOs, such as supply chains, virtual labs, virtual organization breeding environments (VBEs), extended enterprises, virtual organizations and virtual enterprises. The common rationale behind such alliances is that they rely on collaboration with other companies to be more competitive. This work focuses on virtual enterprises.
A Virtual Enterprise (VE) can be generally defined as a temporary alliance of autonomous and heterogeneous enterprises that dynamically join together to cope with a given business opportunity, acting as one single enterprise. A VE dissolves itself after accomplishing its goal (Rabelo et al., 2004).
Managing the VE life cycle efficiently is crucial for business realization. This involves the creation, operation, evolution and dissolution of a VE. This work focuses on the VE evolution phase. In general, the VE evolution phase comprises activities related to managing changes and adaptations in the VE's plan (i.e. in the VE operation phase) in order to guarantee the achievement of its goals and duties. This can range from simple modifications in some technical specification, through changes in and/or negotiations on the VE's schedule, to, more drastically, the replacement of some of its members.
VEs have, however, some intrinsic and particular characteristics which impose a number of requirements on decision making. The most important one is that decisions should be made in a collaborative, decentralized, distributed and transparent way, considering that VE members are autonomous, independent and geographically dispersed. Besides that, the fact that each VE is by definition completely different from any other (in terms of number of partners, their skills, culture, local regulations, specificities determined by the given client, etc.) makes the solution of some problems not necessarily deterministic and the reuse of previous decisions for equivalent problems not necessarily useful. As such, managing the VE evolution requires additional approaches in order to be properly handled (Drissen-Silva & Rabelo, 2009b).
Figure 1 presents a general view of the aspects related to the management approach and the decision-making process in centralized and decentralized settings, exposing the requirements to be met by the new decentralized and collaborative decision-making model for Virtual Enterprise evolution proposed in this work.
giving transparency to the whole process, as well as to regulate partners’ involvement and
information access as decisions are taken. A review of the literature identified three works that offer elements for this desired environment.
HERMES (Karacapilidis & Papadias, 2001) is a support system used for collaborative
decision-making via argumentation. It helps in the solution of non-structured problems,
coordinating a joint discussion among decision makers. It offers an online discussion about
one or more specific subjects, where each participant can suggest alternatives to the problem
or simply point out their pros and cons in relation to current alternatives. There is an
association of weights that considers the positioning in favor of or against the suggestions,
hence providing a global vision of the opinions.
DELPHI is a classical method (Dalkey & Helmer, 1963) created with the purpose of finding a
consensus about a given topic of discussion but without confrontation. Essentially, a
summary of the opinions is elaborated along diverse rounds and it is sent back to the
participants keeping the names anonymous. The process continues until the consensus /
final decision or opinion is reached.
Woelfel et al. (described in Rabelo et al., 2008) developed an integrated suite of web-based groupware services that considers CNO requirements. This suite includes instant messaging, mailing, discussion forum, calendar, wiki, content management system, and news & announcement services. One interesting feature of the instant messaging service is the possibility of private discussion rooms, allowing several parallel discussions involving all partners as well as rooms available only to authorized partners.
The Agile Project Management (APM) model sees the need for change as an adaptation, exploring alternatives that can fit new scenarios. APM was essentially created for projects which demand more agility and dynamism (Leite, 2004), presenting a deeper set of actions for handling changes. There are other management models that handle changes in a project, namely ECM - Engineering Change Management (Tavčar & Duhovnik, 2005), CC - Configuration Control (Military Handbook, 2001) and CM - Change Management (Weerd, 2007). In general, they organize change management in four macro phases: i) need-of-change identification, where the causes of the problem and the affected members are identified in order to prepare a change solicitation; ii) change proposal, where the members that are going to participate in the change analysis are defined; iii) change planning, where the different possible scenarios to solve the problem are evaluated via general evaluations; and iv) implementation, where the most suitable alternative for the problem is settled and the new project's parameters are reconfigured.
All these reference models are very general and can be instantiated to any type of VE topology. As such, any managerial style and model, different support techniques, management tools and performance evaluation methods can be applied in each case (Karvonen et al., 2005). Given their generality, however, the models are not ready to be used in VE evolution scenarios. Therefore, in spite of the extreme importance of their foundations, they must be adapted for that purpose.
Another perspective is that all these works - and some other more recent ones (e.g. Hodík & Stach, 2008; Negretto et al., 2008; Muller et al., 2008) - are disconnected from the companies' global operation environment. This means that the decision-making process is carried out separately from the other processes. In practice, this obliges managers to switch from one environment to another and to cope with different sources of information (this is a problem as SMEs usually have several basic problems of systems integration). A sound alternative for that is the BPM (Business Process Management) approach (Grefen et al., 2009) and the SOA (Service Oriented Architecture) paradigm (Ordanini & Pasini, 2008). BPM provides foundations for a loosely coupled, modular, composite and integrated definition of business processes. From the process execution point of view, BPM tools generate BPEL (Business Process Execution Language) files as output, allowing a direct integration between the business level (BPM) and the execution level (SOA / web services). There are both commercial and academic supporting tools for that (e.g. Oracle BPEL Designer, IBM WebSphere). This combination can provide the notion of flexible and modular decision protocols.
VBE's repositories, all this supported by a set of ICT (technological) tools and infrastructures
(Drissen-Silva & Rabelo, 2009a). That discussion is framed by a decision protocol (conceived
using project management foundations) and is carried out within a distributed and
collaborative decision support environment. The decision protocol is the mechanism which
“links” the four pillars according to the particular problem to be solved within the VE
evolution phase. Figure 2 shows the framework.
Planning, and Implementation, as well as some of their sub-steps). Besides using ECM and
adapting it to the VE evolution context, this work has also used some ideas proposed in
O’Neill (1995) when determining the most significant events to handle in more strategic
decisions. In summary, this proposed decision protocol represents the mentioned framework's methodology and is modeled via BPM and SOA-based tools.
reasons and to check whether it can be solved by the partner itself, without impacting the other VE members. This reflects the strategy of involving the other partners only if the problem cannot be solved at a "local" level. For this, the VE coordinator and the partner that has generated the conflict (illustrated as Partner 1) initially discuss together (e.g. via chat and file transfer). After discussions and evaluations, if the problem is considered solved without needing the other partners, the protocol's flow goes directly through the other phases: the Change Proposal, Planning and Implementation phases. In case the problem could not be solved, it is necessary to evaluate which partners were affected and should therefore be involved in the collaborative discussion and decision-making. In the Change Proposal phase, the discussion is supported by services that combine the ideas of the HERMES and Delphi methods (see section 2.1). The part inspired by HERMES aims to organize partners' arguments in a concise structure, using appropriate semantics, communicating their suggestions in a compiled way and including an association of weights with the most important arguments. This aims at finding a better (and faster) consensus about the problem. The part inspired by the Delphi method aims at avoiding direct confrontations among participants, which could generate counterproductive discussions. In this sense, all the arguments are gathered by the VE coordinator who, in a first moment, acts as the moderator, selecting, deleting, changing or suggesting changes in the arguments received before they are published to all participants. The aim is not to restrain partners' conversation and information exchange, but rather to guarantee a faster discussion and, mainly, that some sensitive information (e.g. the precise capacity level of a given partner) does not have to be disclosed to everybody. In this way, the VE coordinator has the option to just tell the others that the given partner has "enough" capacity. This discussion round, with the compiled opinions, is illustrated as gray frames in figure 6, at each member's side. The white frames illustrate the argumentation console where partners express their opinions and where the VE coordinator receives them. He moderates the discussion via this console. After the arguments have been sent out to the other participants, they can reevaluate their considerations and make other suggestions. This process continues until a consensus is reached (within the Change Planning phase).
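The flow just described can be summarized as a small phase machine. The sketch below is a simplified illustration under stated assumptions: the objects and method names (`solved_locally`, `is_affected_by`, `open_discussion`, `consensus_reached`, `moderate`, `best_alternative`) are hypothetical placeholders and this is not the BPM/BPEL implementation actually used by the framework.

```python
from enum import Enum, auto

class Phase(Enum):
    NEED_IDENTIFICATION = auto()
    CHANGE_PROPOSAL = auto()
    CHANGE_PLANNING = auto()
    IMPLEMENTATION = auto()

def run_protocol(problem, coordinator, partners):
    """Illustrative macro-phase flow of the decision protocol (all objects hypothetical)."""
    trace = [Phase.NEED_IDENTIFICATION]
    # The VE coordinator and the originating partner first try to solve the problem locally.
    if coordinator.solved_locally(problem):
        affected = []                                   # no other partner needs to be involved
    else:
        affected = [p for p in partners if p.is_affected_by(problem)]
    trace.append(Phase.CHANGE_PROPOSAL)
    discussion = coordinator.open_discussion(problem, affected)   # HERMES/Delphi-style round
    trace.append(Phase.CHANGE_PLANNING)
    while not discussion.consensus_reached():           # moderated argumentation rounds
        discussion.collect_arguments()
        coordinator.moderate(discussion)
    trace.append(Phase.IMPLEMENTATION)                  # new VE parameters come from the chosen alternative
    return discussion.best_alternative(), trace
```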
The protocol is not fixed in its inner actions. Given VE uniqueness and topology, and the natural heterogeneity of partners, the protocol can be different for each situation. There are many possible scenarios that could influence the decision to be taken in order to solve the current problem. In this way, the protocol acts as a reminder of some of the more important questions so that partners recall they should check them. For example, if an item is delayed and the final customer is very important or the fine is too high, partners can agree on subcontracting part of the production in order to keep the delivery date. If the client has a very rigorous quality control and manages the suppliers' certification level quite tightly, perhaps it is not possible to subcontract just any company, but only an equivalent one, and so forth. Any other particular issue should be handled by the partners, managed by the VE coordinator. Once the problem is solved, the new VE parameters are set up (Implementation phase) and the control flow goes back to the VE operation phase.
This hypothetical argumentation scenario would be based on the results achieved with the help of a pool of tools for performance evaluation modeling, monitoring and task rescheduling,
which can also involve the invited expert’s opinion (Change Planning). Some participants
could use their own tools or the common toolbox (including the access to the VBE database)
available to all participants to help in the discussions.
management methods. This however does not cover the existing local tools used by each
member at their companies. The Toolbox’s tools themselves can congregate both the set of
tools previously agreed (or existing) in the VBE and tools that can be accessed on demand
from other providers.
4. Prototype implementation
This section presents the results of the implementation of the DDSS-VE framework, which is concentrated in three different functionalities: the Decision Protocol, the Partners' Discussion Environment and a tool for prior evaluation of scenarios. Once started, the decision protocol helps the manager take the right actions at the right moment in the decision-making process. An adapted VBE database was used in order to access the competences of all partners in the usage scenario. The Partners' Discussion Environment is implemented considering ideas from the HERMES system and the Delphi method, supporting a collaborative discussion with voting and comparison of suggestions, all under the supervision of the moderator. The Toolbox is populated with a tool for capacity planning using the performance evaluation method applied in advanced dashboards. Within a controlled testing environment, the problems detected in the VE operation phase are manually introduced and the discussions are simulated in a distributed scenario using a number of PCs.
As already said, the Collaborative Discussion Environment has the goal of combining the HERMES system and the Delphi method, and of adapting them to the desired decision philosophy. In other words, it aims at meeting the partners' autonomy and transparency requirements as well as the need for a more structured way of deciding. The main adaptations include:
The creation of a moderator (role), who is responsible for evaluating and making available the arguments sent by members. Depending on the case, the moderator can be the VE coordinator itself;
The comparison of two different arguments using different connectors (better than; worse than; equal to; as bad as; as good as). Each comparison assigns negative and/or positive points to each argument, depending on the connector;
Voting: partners can vote for or against each argument;
During the discussion, partners are guided by the Decision Protocol;
It is possible to use a prior-evaluation decision tool, in order to evaluate the impact of a new scenario on the VE operation.
ends when the best alternative has been chosen in the "Implementation" phase, where the new scenario is put into practice. The sequence described below briefly explains figure 7.
1. Starting the discussion (to be conducted via the DDSS-VE):
The protocol asks some questions to delineate the best course of action for each case (e.g. whether a rigorous client constraint prevents choosing another supplier);
Each participant can use some tools to preview which different scenarios could be acceptable to reschedule the activities that have to be done, choosing the best one and publishing it as a suggestion for the problem resolution:
a. Mr. Rui posts the first suggestion: ‘Buy from another supplier’ (Figure 7a);
b. Each partner can vote pro or against it (bottom Figure 7a);
c. Each suggestion can be compared with other suggestions using ‘COMPARE’ button
(Figure 7a). Figure 7b presents the list of suggestions and the possible logical
connectors. For example, a comparison using ‘is better than’ as the connector assigns
+1 point to the best suggestion and -1 to the worst;
d. Figure 7c shows a tree (associated to the detected problem: helmet strip allotment) with
the three posted suggestions (plus authors) and four comparisons among them. One of
them is not yet evaluated as it is ‘awaiting approval’;
The moderator (Mr. Ricardo) evaluates the different suggestions and the comparisons,
mainly to see if there is some confrontation among the participants:
a. Figure 7d shows the Moderator’s view. He can modify and/or simply approve Mr.
Rui‘s opinion (“RE: buy from another supplier is as good as …”) and send them to the
group;
b. Figure 7e represents the vision seen by the other two members before Mr. Rui‘s
opinion approval. Thus, they only see ‘message awaiting approval‘;
As far as the final voting result is concerned:
a. It is possible to see the number of votes for each suggestion, which is +3 for Mr. Rui's one (Figure 7a), also meaning that the three consulted members (including the VE coordinator) have agreed on it;
b. Figure 7c shows a signed number beside each suggestion expressing the final sum of the voting and the weights of the comparisons. In this case, 'Buy from another supplier' has +3 points from direct voting, to which 2 more points from two positive comparisons are added, resulting in 5 points in favor (a minimal scoring sketch is given after this list);
2. Once agreed, the most suitable solution is settled on the VE plan and partners (re)start to work
based on it. This means that VE evolution is ended and the VE management goes back to the
operation phase.
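The scoring rule illustrated by Figure 7 can be sketched in a few lines of code. The data structures below are hypothetical (the names of the two competing suggestions are invented for the example) and only the 'is better than' connector is modelled: direct votes are summed and each approved comparison adds +1 to the preferred suggestion and -1 to the other.

```python
from collections import defaultdict

def score_suggestions(votes, comparisons):
    """votes: list of (suggestion, +1 or -1) direct votes.
    comparisons: list of (better, worse) pairs approved by the moderator;
    each 'is better than' comparison adds +1 to `better` and -1 to `worse`.
    Returns the final signed score shown beside each suggestion."""
    score = defaultdict(int)
    for suggestion, vote in votes:
        score[suggestion] += vote
    for better, worse in comparisons:
        score[better] += 1
        score[worse] -= 1
    return dict(score)

# Mirroring the chapter's scenario: three members vote in favour of
# "Buy from another supplier" and two approved comparisons rank it above
# two (hypothetically named) alternatives.
votes = [("Buy from another supplier", +1)] * 3
comparisons = [("Buy from another supplier", "Reschedule the deliveries"),
               ("Buy from another supplier", "Negotiate the delivery date")]
print(score_suggestions(votes, comparisons))
# {'Buy from another supplier': 5, 'Reschedule the deliveries': -1, 'Negotiate the delivery date': -1}
```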
number of resources looking for the integrations of the scheduling in order to calculate
another scenario for solving the problem in the discussion on DDSS-VE. Figure 8 shows the
developed dashboard.
5. General evaluation
The developed prototype passed through a sequence of exhaustive tests for the verification and validation of the conceptual model for the collaborative discussion around a problem that emerged in the Virtual Enterprise operation phase, forcing it to go into the evolution phase. The conceptual model and the prototype were evaluated by experts in the main areas studied in the model development process. On average, all experts agreed on the contribution, relevance and adequacy of the work with respect to the scientific problem to be solved: 'to find a more transparent and collaborative environment that puts autonomous partners in a discussion around the partnership conflict using a set of computable tools and a decision protocol to support the decision'. The methodology used to evaluate this work followed three main steps: i) prototype evaluation in a sequence of stress tests; ii) explanation of the conceptual model to the experts by means of a scientific article, with a number of questions answered in their evaluation; iii) explanation of the prototype functionalities in an example execution, with another set of questions answered with their opinions.
Fig. 8. Previous Evaluation Scenarios Tool using Dashboards for Tasks Rescheduling.
5.1 Contributions
The main scientific contribution of this work is centred on combining different, already accepted techniques, tools and methods into an adequate semi-automated system that helps managers in the decision-making process around a problem in the VE operation phase. The integration of those different methods can offer a distributed and collaborative discussion with transparency, controlled by moderation and using prior analysis of the decision's impact. The central element is the human, who has the ability to judge and to decide what the best scenario is according to his knowledge. The framework only supports his decision, offering flexibility, calculation tools and communication among partners.
Compared with the state of the art in the area, this work covers different aspects, which are shown in Table 1.
Considering the flexibility offered by the decision protocol, this framework could be adapted to other strategic alliance models and also to the management of the virtual organization operation phase, requiring only the necessary modifications to some phases and processes of the base protocol in order to meet the needs of different cases.
5.2 Limitations
The main limitation of this work is related to the CNO concept, which assumes that each partner is autonomous and has to participate in a collaborative way, trying to help other partners in difficulty. Some aspects of the VE concept are difficult to achieve in reality because trust among partners has to be strong, and a well-developed ICT infrastructure is necessary to put this environment to work. On the other hand, there are a number of VBEs in operation in the world that feed the expectation of a strong dissemination of the VE concepts to these kinds of collaborative enterprise environments.
Aspect | Traditional Management Model | CNOs / VEs (current approaches) | CNOs / VEs (proposed approach)
Decision | Centralized | Centralized | Decentralized
Information sharing between partners | No or eventual | Yes | Yes
Transparency in the decision | No or partial | Partial | Yes
Decision quality evaluation | No | Low and eventual | Yes
Decision scope | Intra-organizational | Inter-organizational | Inter-organizational
Decision process rigidity | Inflexible / "data flow" | Inflexible / "data flow" | Flexible / systemized / adaptable
Information integration between partners | Low / Medium | Medium / High | High / Very high
Trust between partners | Implicit | Explicit | Explicit / Reinforced
Decision objective | Best global results | Good global results | Good global results with previous analysis
Mutual help between partners | Cooperation | Punctual collaboration | Full-fledged collaboration along decision making
Methodological aid / Assisted decision | No or partial | Low efficiency and without assistance | Yes
Source: Adapted from Drissen-Silva & Rabelo, 2008.
Table 1. Comparison between the traditional management model and the current and proposed CNOs/VEs approaches.
Regarding the prototype, it was developed with only one tool for supporting prior impact analysis of the decision, but the conceptual model can accommodate a larger number of available tools, which could be placed in a collaborative access environment for all partners. Further planned improvements include:
Adapting the collaborative discussion environment, which uses ideas from the HERMES system and the Delphi method, to the Moodle system;
Creating an ontology that formally describes the relations, hierarchies and concepts associated with the explored domain of decision making in Collaborative Networked Organizations (CNO).
6. Conclusion
This chapter has presented a framework to support a collaborative discussion among VE
members for solving problems during the VE evolution phase. It is essentially composed of
a decision protocol, a distributed and collaborative decision support system, and of ICT
supporting tools and a communication infrastructure. It was designed to cope with the VE requirements, mainly as far as members' autonomy and decision transparency are concerned.
Developed based on project management methodologies, discussions are guided and
assisted by the system but preserving and counting on the members’ experience and
knowledge in order to reach a suitable/feasible solution for the given problem.
The proposed framework groups such requirements and organizes them into four pillars:
Human, Organizational, Knowledge and Technological. The essential rationale of these four
pillars is to enable humans to discuss and to decide about a problem related to a given
organizational process, applying a set of organizational procedures and methods, using
information and knowledge available in the VBE’s repositories, supported by a sort of ICT
tools. A crucial aspect in the proposed approach is the human intervention, i.e. the problem
is so complex that is unfeasible to try to automate the decisions. Instead, the approach is to
put the managers in the centre of the process, surrounding them with adequate tools and
methods.
All the framework's elements are operated in a methodological way by the human element, on a democratic, transparent, decentralized, systematized and moderated basis, considering the members' geographical distribution.
In order to improve the quality of the suggestions made by each partner during a discussion around a problem resolution, different tools, techniques and methods for performance evaluation are offered to provide a vision of future capacity planning and to evaluate different scenarios for solving the problem under discussion. In this way, the participants are able to evaluate in advance the impact of the decision to be taken. This evaluation can be made in isolation by each participant during the conflict resolution process.
A software prototype has been implemented to evaluate the framework, and it was tested in an open but controlled environment. The implementation copes with the required flexibility and adaptability of the decision protocol to different VEs, applying BPM (Business Process Management) and SOA (Service Oriented Architecture) technologies as a support. The developed framework fundamentally assumes that VE partners are all members of a kind of cluster of companies. This presupposes the presence of a reasonable degree of trust among members, of an adequate computing infrastructure, of a common organizational vision (in terms of collaboration and enterprise networking) and of operational procedures to be followed when problems take place, and that VE managers are trained for that.
The implementation results have shown that the proposed mechanisms for supporting partners' autonomy, Internet-based decentralized decision-making, voting and transparency work in a controlled environment. During the discussions, selected partners can have access to the problem, can freely exchange opinions about how to solve it, and can express their preferences via voting. This guarantees that the solution emerges from the collaboration and trust among partners. The decision protocol helps participants to take the right action at the right moment. The scenario evaluation tool is capable of offering a pre-evaluation of the decision impact.
This work's evaluation was composed of a set of procedures that provides grounds to affirm the final general research conclusion: "A semi-automated decision protocol, flexible and adaptable, integrated with scenario analysis tools and a collaborative discussion environment improves the quality of, and the trust in, the decision around a problem in a VE".
7. Acknowledgment
This work is partially supported by CNPq – The Brazilian Council for Research and
Scientific Development (www.cnpq.br). The authors would like to thank Ms. Cindy Dalfovo, Mr. Leonardo G. Bilck and Mr. André C. Brunelli for the software implementation.
8. References
Afsarmanesh, H. & Camarinha-Matos, L. M. (2005). A Framework for Management of Virtual
Organization Breeding Environments. Proceedings 6th IFIP Working Conf. on Virtual
Enterprises, Kluwer Acad. Publishers, pp. 35-48.
Afsarmanesh, H.; Msanjila, S.; Ermilova, E.; Wiesner, S.; Woelfel, W. & Seifert, M. (2008).
VBE Management System, in Methods and Tools for Collaborative Networked
Organizations, Eds. L.M. Camarinha-Matos, H. Afsarmanesh and M. Ollus,
Springer, pp. 119-154.
Baffo, I.; Confessore, G. & Liotta, G. (2008). A Reference Model for Distributed Decision Making
through a Multiagent Approach, in Pervasive Collab. Networks, Eds. L.M.
Camarinha-Matos and W. Picard., Springer, pp. 285-292.
Baldo, F.; Rabelo. R. J. & Vallejos, R. V. (2008). Modeling Performance Indicators’ Selection
Process for VO Partners’ Suggestions, in Proceedings BASYS’2008 – 8th IFIP Int. Conf.
on Information Technology for Balanced Automation Systems, Springer, pp. 67-76.
Bernhard, R. (1992). CIM system planning toolbox, CIM-PLATO Project Survey and
Demonstrator, in Proceed. CIM-PLATO Workshop, Karlsruhe, Germany, pp. 94-107.
Camarinha-Matos, L. M.; Afsarmanesh, H. & Ollus, M. (2005). ECOLEAD: A Holistic
Approach to Creation and Management of Dynamic Virtual Organizations. In:
Collaborative Networks and Their Breeding Environments. Eds. L. M. Camarinha-
Matos, H. Afsarmanesh and A. Ortiz. Springer, pp. 3-16.
Bostrom, R.; Anson, R. & Clawson, V. (2003). Group facilitation and group support systems.
Group Support Systems: New Perspectives, Ed. Macmillan.
CMMI (2006). CMMI for Development Version 1.2. Tech. Report DEV, V1.2. Pittsburgh:
Carnegie Mellon – Software Engineering Institute.
Dalkey, N. C. & Helmer, O. (1963). An experimental application of the Delphi method to the case
of experts, Management Science; 9, pp. 458-467.
Drissen-Silva, M. V., Rabelo, R. J., (2008). A Model for Dynamic Generation of Collaborative
Decision Protocols for Managing the Evolution of Virtual Enterprises, in Proceedings
BASYS’2008 – 8th IFIP International Conference on Information Technology for
Balanced Automation Systems, Springer, pp. 105-114.
Muller, E.; Horbach, S. & Ackermann, J. (2008). Decentralized Decision Making in Non-
Hierarchical Networks, in Pervasive Collaborative Networks, Eds. L.M. Camarinha-
Matos and W. Picard., Springer, pp. 277-284.
Negretto, H.; Hodík, J.; Mulder, W.; Ollus, M.; Pondrelli, P. & Westphal, I. (2008). VO
Management Solutions: VO management e-services, in Methods and Tools for
Collaborative Networked Organizations, Eds. L.M. Camarinha-Matos, H.
Afsarmanesh and M. Ollus, Springer, pp. 257-274.
Ollus, M.; Jansson, K.; Karvonen, I.; Uoti, M. & Riikonen, H. (2009). On Services for
Collaborative Project Management. In: Leveraging Knowledge for Innovation in
Collaborative Networks, Eds. Luis M. Camarinha-Matos, Iraklis Paraskakis and
Hamideh Afsarmanesh, Springer, pp. 451-462.
O´Neill, H. (1995). Decision Support in the Extended Enterprise, Ph.D. Thesis, Cranfield
University, The CIM Institute.
Ordanini, A. & Pasini, P. (2008). Service co-production and value co-creation: The case for a
service-oriented architecture (SOA), European Management Journal; 26 (5), pp. 289-
297.
PMBOK (2004). A Guide to the Project Management Body of Knowledge. PMI Standards
Committee.
Pěchouček, M. & Hodík, J. (2007). Virtual Organisation Management eServices version 1. Tech.
Report D34.5. ECOLEAD Project, www.ecolead.org.
Phillips-Wren, G. E. & Forgionne, G. A. (2001). Aided Search Strategy Enabled by Decision
Support, Information Processing and Management, Vol. 42, No. 2, pp. 503-518.
Picard, W. (2007). Support for power adaptation of social protocols for professional virtual
communities, in Establishing the Foundation of Collaborative Networks, Eds. L.M.
Camarinha-Matos, H. Afsarmanesh and P. Novaes, Springer, pp. 363-370.
Rabelo, R. J.; Pereira-Klen, A. A.; Spinosa, L. M. & Ferreira, A. C. (1998). Integrated Logistics
Management Support System: An Advanced Coordination Functionality for the Virtual
Environment, in Proceedings IMS'98 - 5th IFAC Workshop on Intelligent
Manufacturing Systems, pp. 195-202.
Rabelo, R. J.; Pereira-Klen, A. A. & Ferreira, A. C. (2000). For a Smart Coordination of
Distributed Business Processes, in Proceedings 4th IEEE/IFIP Int. Conf. on Balanced
Automation Systems, Berlin, Germany; pp. 378-385.
Rabelo, R. J. & Pereira-Klen, A. A. (2002). A Multi-agent System for Smart Co-ordination of
Dynamic Supply Chains, in Proceedings PRO-VE´2002; pp. 312-319.
Rabelo, R. J.; Pereira-Klen, A. A. & Klen, E. R. (2004). Effective management of dynamic supply
chains, Int. J. Networking and Virtual Organisations; Vol. 2, No. 3, pp. 193–208.
Rabelo, R. J.; Castro, M. R.; Conconi, A. & Sesana, M. (2008). The ECOLEAD Plug & Play
Collaborative Business Infrastructure, in Methods and Tools for Collaborative
Networked Organizations, Eds. L.M. Camarinha-Matos, H. Afsarmanesh and M.
Ollus, Springer, pp. 371-394.
Rozenfeld, H.; Forcellini, F. A.; Amaral, D. C.; Toledo, J. C.; Silva, S. L.; Alliprandini, D. H. &
Scalice, R. K. (2006). Products Development Management – A Reference for Process
Improvement. [in Portuguese] – 1st edition - São Paulo: Saraiva.
Sowa, G. & Sniezynski, T. (2007). Configurable multi-level security architecture for CNOs.
Technical Report Deliverable D64.1b, in www.ecolead.org.
9
A Lean Balanced Scorecard Using the Delphi Process: Enhancements for Decision Making
1. Introduction
Kaplan & Norton’s Balanced Scorecard (BSC) first appeared in the Harvard Business Review
in 1992 (Kaplan and Norton, 1992). It described in general terms a method by which
management may improve the organization’s competitive advantage by broadening the
scope of evaluation from the usual Financial dimension to include: the organization’s
Customer base, the constitution and functioning of the firm’s Internal Business processes, and
the necessity of Innovation and Learning as a condition for growth. Over the years, these four
constituent elements of the BSC have remained largely unchanged, with the exception of a
modification in 1996 when Innovation and Learning was changed to Learning and Growth
(Kaplan and Norton, 1996).
As the BSC is now in its second decade of use, there have been a number of articles
suggesting that the BSC is in need of refocusing, for example Lusk, Halperin & Zhang (2006) and Van der Woerd & Van den Brink (2004). This refocusing suggests two changes in the Financial
Dimension of the BSC. First, the Financial Dimension of the BSC needs to be broadened from
measures that only address internal financial performance to those that are more market-
oriented. For example, according to the Hackett Group (2004: 67), the majority of Balanced
Scorecards are “out of balance because they are overweight with internal financial
measures”. Second, there seems to be a tendency to add evaluation variables with little
regard for their relationship to existing variables. This “piecemeal approach” results in a
lack of coherence and often causes Information Overload. This tendency is underscored by
Robert Paladino (2005), former vice president and global leader of the Telecommunications
and Utility Practice for Norton’s company, the Balanced Scorecard Collaborative,
http://www.bscol.com/, who suggests that a major failure of the BSC is that many
organizations adopt a piecemeal approach that generates an information set consisting of
too many highly associated variables without independent associational linkages to the
firm’s evaluation system. The observation is consistent with one of the classic issues first
addressed by Shannon (1948) often labelled as the Shannon-Weaver Communication Theory
which in turn led to the concept of Information Overload.
Information Overload is particularly germane, because from the beginning of the Internet
Era, management has succumbed to the irresistible temptation to collect too many financial
performance variables due to the plethora of data sources offering simple downloads of
hundreds of financial performance variables. For example, Standard & Poors™ and
Bloomberg™, to mention a few, have data on thousands of organisations for hundreds of
variables.
In this study, we answer the call for refocusing the Financial Dimension of the BSC. The
purpose of this study is twofold. First, as suggested by Lusk, Halperin & Zhang (2006) we
add two market-oriented variables - the standard CAPM Market Beta and Tobin’s q to
broaden the financial dimension of the BSC. And second, we suggest, in detail, a simple
modelling procedure to avoid the “Information Overload and Variable Redundancy” so
often found in the Financial Dimension of the BSC. Although this study is focused only on
the Financial Dimension of the BSC, the modelling process that we suggest in this study
may be also applied to the other dimensions of the BSC.
This study is organized as follows:
1. First, we suggest a simple procedure for generating a “lean” i.e.,—parsimonious,
variable characterisation of the firm’s financial profile and then use that lean or
consolidated variable set as an input to a standard Delphi process.
2. Second, we present an empirical study to illustrate the lean BSC modelling system.
3. Lastly, we offer some concluding remarks as to use of our refined information set.
2.1 Three variable contexts of factor analysis and the Delphi Process: our lean
modelling system
In the process of using factor analysis to develop lean variable set characterizations, we
suggest performing factor analyses in three variable contexts: (1) the Industry, (2) a
Benchmarked Comparison Organisation and (3) the Particular Firm. These three variable
contexts1 taken together help management refocus on variables that may be productively used in planning and executing the navigation of the firm. Specifically, the feedback from these three variable contexts will be organized as a Delphi Process that serves as the information-processing logic of the BSC. Consider now the details of the information-processing logic for the BSC as organized through the Delphi process.
1 The above lean modelling framework and benchmarking procedure was used by one of the authors as a member of the Busch Center of the Wharton School of the University of Pennsylvania, the consulting arm of the Department of Social Systems Science, as a way to focus decision-makers' attention on differences between their organisation and (1) the related industry as well as (2) a selected firm in the industry used as a positive benchmark, that is, a best-practices case that the study firm would like to emulate. Sometimes, a negative benchmark was also used, in this case an organisation from which the study firm wished to distance itself.
2.2.1 Stage 1: unfreezing stage – familiarize the decision makers with “factors”
The goal of the Unfreezing stage is to have the DM feel comfortable with the logic of factor
analysis as a statistical technique that groups BSC Variables into Factors. To effect the
Unfreezing stage we recommend using an intuitive example dealing with the simple
question: What is a Computer? It is based upon an example, the data of which is presented
below in Table 1, used by Paul Green and Donald Tull (1975) in their classic text on
marketing research; they used it to illustrate the logic of Factor Analysis.
Computers (n=15) | Basic Processing | Advanced Processing | Minimum Storage | Maximum Non-swap Storage | Add-Ons Non-buffer | Cycle Time
1 | -0.28 | -0.36 | -0.49 | -0.52 | -0.48 | -0.27
2 | 3.51 | 3.61 | -0.55 | -0.60 | -0.87 | 3.74
3 | -0.39 | -0.34 | -0.55 | -0.53 | -0.59 | -0.27
4 | -0.06 | -0.28 | -0.55 | -1.07 | -0.83 | -0.26
5 | 0.38 | -0.27 | -0.46 | -0.50 | -0.88 | -0.27
6 | -0.43 | -0.38 | -0.55 | -0.52 | -0.48 | -0.27
7 | -0.26 | 0.37 | -0.55 | -0.52 | -0.59 | -0.27
8 | 0.70 | 0.68 | -0.60 | -0.61 | -0.92 | -0.27
9 | -0.47 | -0.39 | -0.37 | -0.52 | -0.48 | -0.27
10 | -0.28 | -0.23 | -0.02 | -0.14 | -0.77 | -0.27
11 | -0.49 | -0.39 | -0.13 | 0.16 | 1.71 | -0.27
12 | -0.50 | -0.39 | 1.32 | 2.47 | 1.08 | -0.27
13 | -0.51 | -0.39 | 0.36 | 0.16 | 1.70 | -0.27
14 | -0.51 | -0.39 | 3.26 | 2.47 | 1.08 | -0.27
15 | -0.52 | -0.12 | -0.13 | -0.23 | 1.33 | -0.27
Table 1. Fifteen computers measured on six performance variables.
To start the Unfreezing process, we suggest presenting to the DM the following information
processing task:
Assume that you do not know anything about Computers, and you wish to better understand the
essential functioning of Computers. Unfortunately, as you are not technically gifted, it is not
possible for you to reverse-engineer a computer and tinker with it so as to understand its essential
features. So, you are left with an “empirical” or analytic option. Therefore, with the aid of your tech-
friends you collect six performance Variables on 15 different models of computers: (1) Basic
Processing Speed, (2) Advanced Processing Speed, (3) Minimum Storage, (4) Maximum Non-
Swap Storage, (5) Add-Ons Non-buffer Capacity, and (6) Cycle Time. So now you have a
dataset with 15 computers measured on six performance variables—i.e., a matrix of size 15 rows and
6 columns as presented in Table 1.
As we continue with the Unfreezing stage, we distribute to all the DM a copy of the dataset
in Table 1 for anchoring their processing of the variable reduction from this dataset. We, the
group conducting the Unfreezing, enter the dataset into the Standard Harmon Factor Model
[SHFM] statistical program; the results produced are presented in Table 2. We recommend
that Table 2 be displayed either: (1) on the computers of the DM, (2) on a projection screen
for all to see or (3) distributed to the DM as printed copies. They need to be able to see and
discuss the information in Table 1 and the factor results as presented in Table 2.
2 We do not discuss with the DM the 0.5 loading rule because this concept requires a relatively
sophisticated understanding of the mathematical statistics of factor models. Rather we use the simple
loading rule that if a Variable had a weight of 0.71 or greater then that variable is a meaningful
descriptor of that Factor. In our experience, this level of detail is sufficient for the DM to use the results
of the Factor Model. This does assume, of course, that there will be someone in the firm with sufficient
training in statistical modeling to actually use the factor software that is to be used in the Delphi
process. This is usually the case.
A Lean Balanced Scorecard Using the Delphi Process: Enhancements for Decision Making 175
understand, via the comparison of the data in Table 1 and the results in Table 2, that there
were not really six variables but rather two variables: Speed and Storage with each of them
measured in three different ways.
In summary, to wind down the Unfreezing stage, we emphasize that the initial set of six variables was by definition “over-weight” or redundant, and so the six variables were consolidated by the SHFM to form only two dimensions: Speed and Storage in the lean-version characterization of the dataset.
This simple computer example is critical to the understanding of the use of factor analysis to reduce variable redundancy and so deal with “over-weight” variable characterizations. We find that this simple and intuitive example unfreezes the DM in the sense that they knew that computers were fast storage computation devices and this is exactly what the factor analysis shows; this reinforces their belief that the Factor Analysis can be “trusted” to replicate the reality of the associated variable space with fewer variables.
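For readers who wish to reproduce this kind of result, the sketch below shows one common way of obtaining rotated loadings (principal-component extraction from the correlation matrix followed by a varimax rotation) using NumPy. It is a generic stand-in for the SHFM program mentioned above, not the exact procedure of the chapter; `data` is assumed to hold the 15 x 6 matrix of Table 1.

```python
import numpy as np

def varimax(loadings, iterations=100, tol=1e-6):
    """Varimax rotation of a (variables x factors) loading matrix."""
    p, k = loadings.shape
    rotation = np.eye(k)
    variance = 0.0
    for _ in range(iterations):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(
            loadings.T @ (rotated ** 3 - rotated @ np.diag((rotated ** 2).sum(axis=0)) / p))
        rotation = u @ vt
        new_variance = s.sum()
        if new_variance < variance * (1 + tol):
            break
        variance = new_variance
    return loadings @ rotation

def rotated_loadings(data, n_factors=2):
    """Principal-component loadings from the correlation matrix, then rotated."""
    corr = np.corrcoef(data, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)
    order = np.argsort(eigvals)[::-1][:n_factors]
    loadings = eigvecs[:, order] * np.sqrt(eigvals[order])
    return varimax(loadings)

# Usage: `data` would hold the 15 computers x 6 variables of Table 1.
# loadings = rotated_loadings(data, n_factors=2)
# Variables with |loading| >= 0.71 on the same factor group together (Speed vs. Storage).
```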
To enrich understanding of how these five steps will be used for a particular firm, we will
now present a detailed and comprehensive example of all the steps that are needed to create
the Lean BSC.
Internet services (See Table 3 above for a brief description of these firms and their URL-
links). Our Industry Benchmark includes all 7372 SIC Firms that had data reported in
COMPUSTAT from 2003 up to and including 2006. We selected this time period as it was after
Sarbanes-Oxley: 2002, and before the lead-up to the sub-prime debacle in 2008, which we
view as event partitions of reported firm data. This accrual yielded 411 firms and netted a
variable-panel of 1,161 observations.
than √0.5 were used in the description of the factors. For ease of reading, the variable
loadings are presented only to the second decimal, and those loadings greater than √0.5 are
bolded. For the industry factor analysis, we excluded A.D.A.M. Inc. and Amdocs as they are
the study firm and the benchmark firm respectively.
Tobin's q = (A25 × A199 + A130 + A9) / A6
Current Ratio = A4 / A5
Quick Ratio = (A1 + A2) / A5
ROA = (A172 - A19) / A6
EPS Growth = (A57_t - A57_{t-1}) / |A57_{t-1}|
Gross Margin = A12 - A41
Accounts Receivable Turnover = A12 / ((A2_t + A2_{t-1}) / 2)
Where:
A1- Cash and short term investments
A2- Receivables
A4- Current Assets
A5- Current liabilities
A6- Total assets
A9- Total long-term liabilities
A12- Net sales
A19- Preferred Dividends
A25- Common shares outstanding
A41- Cost of goods sold
A57- Diluted earnings per share excluding extraordinary items
A130- Preferred stock-carrying value
A172- Net income (loss)
A199- Common Stock Price: Fiscal year end
Table 5. Computation of Ratios Based on COMPUSTAT Data
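The definitions in Table 5 translate directly into code. The sketch below simply restates the Table 5 formulas; it assumes, for illustration, that the COMPUSTAT data items are available as plain dictionaries keyed by the item codes listed above, one for fiscal year t and one for year t-1.

```python
def table5_ratios(item, prev):
    """item / prev: dictionaries of COMPUSTAT data items (e.g. item['A12'] = net sales)
    for the current fiscal year t and the previous fiscal year t-1."""
    return {
        "Tobin's q": (item["A25"] * item["A199"] + item["A130"] + item["A9"]) / item["A6"],
        "Current Ratio": item["A4"] / item["A5"],
        "Quick Ratio": (item["A1"] + item["A2"]) / item["A5"],
        "ROA": (item["A172"] - item["A19"]) / item["A6"],
        "EPS Growth": (item["A57"] - prev["A57"]) / abs(prev["A57"]),
        "Gross Margin": item["A12"] - item["A41"],
        "Accounts Receivable Turnover": item["A12"] / ((item["A2"] + prev["A2"]) / 2),
    }
```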
3.3.1 The Delphi Process judgmental interpretation of the representative variables for
the various factors
We, in the role of DM for A.D.A.M. Inc., were guided by the factor loading results and also by those variables that did not load across the final factors. These latter variables are interesting in that they are independent, in the strong sense, as they did not achieve an association in the rotated space greater than √0.5. We note these non-associative variables in Tables 6, 7 and 8 using bold italics. Therefore, we have two groups of variables that exhaust the Factor/Variable Space: those variables that have loaded on a factor such that the rotated loading is greater than √0.5, and those that did not exhibit such an association. Both groups have guided our interpretation of the Delphi BSC.
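Separating the two groups is mechanical once a rotated loading matrix is available. A minimal sketch, reusing the √0.5 cut-off and a loading matrix such as the one produced by the earlier `rotated_loadings` example, is:

```python
import numpy as np

def split_by_association(variable_names, loadings, cutoff=np.sqrt(0.5)):
    """Return (loaded, non_associative): variables whose largest absolute rotated
    loading exceeds the cut-off versus those that load on no factor."""
    max_abs = np.max(np.abs(loadings), axis=1)
    loaded = [v for v, m in zip(variable_names, max_abs) if m > cutoff]
    non_associative = [v for v, m in zip(variable_names, max_abs) if m <= cutoff]
    return loaded, non_associative
```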
3.3.2 The industry factor profile and its relation to the BSC
The industry Factor analysis is presented in Table 6. We remark that both Beta and Tobin’s q
do not align in association with any of the COMPUSTAT™ financial performance profile
variables. This suggests that relative market volatility and stockholder preference are
independent measures for the Pre-Packaged Software industry—in and of itself an interesting
result. One strong implication of Table 6 for us as DM for A.D.A.M. is that insofar as the BSC
analysis is concerned the industry is a mixed portfolio with both disparate hedge and market
sub-groupings. See two recent articles that treat these topical relationships in the hedge fund
context: Vincent (2008) and Grene (2008). Confirmatory information is also provided by the fact that EPS Growth and Market Value are independent variables with respect to the other factor-defined variables (see Table 6).
Variable | Factor 1 | Factor 2 | Factor 3 | Factor 4 | Factor 5 | Factor 6 | Factor 7 | Factor 8 | Factor 9
Beta | 0.02 | 0 | 0.01 | 0 | -0.1 | 0 | 0.03 | 0.98 | 0
Tobin's Q | 0.02 | 0.11 | 0.1 | 0.03 | 0 | 0.04 | 0.99 | 0.03 | 0.03
Current Ratio | 0 | 0.99 | 0.01 | 0 | 0.03 | 0.07 | 0.06 | 0 | 0.01
Quick Ratio | 0 | 0.99 | 0 | 0 | 0.04 | 0.06 | 0.06 | 0 | 0
ROA | 0.11 | 0.08 | 0.23 | 0.2 | 0.92 | 0.04 | 0 | -0.1 | 0.15
Gross Margin | 0.99 | 0 | 0.08 | 0 | 0.04 | 0.02 | 0.03 | 0 | 0.04
A/R Turnover | 0.02 | 0.12 | 0.05 | -0.1 | 0.03 | 0.99 | 0.04 | 0 | 0
EPS Growth | 0.01 | 0 | 0.07 | 0.97 | 0.17 | -0.1 | 0.03 | 0 | 0.12
Cash | 0.94 | 0 | 0.05 | 0 | 0.04 | 0 | 0.01 | 0.02 | 0.03
Cash & Short-term Investment | 0.89 | 0.05 | 0.17 | 0.03 | 0.02 | 0.07 | 0 | 0.04 | 0.03
Cash from Operations | 0.98 | 0 | 0.05 | 0 | 0.04 | 0.02 | 0.05 | 0 | 0.06
Receivables Total | 0.96 | 0 | 0 | 0.01 | 0.04 | -0.1 | 0 | 0.01 | 0.04
Current Assets | 0.96 | 0.02 | 0.14 | 0.02 | 0.03 | 0.04 | 0 | 0.02 | 0.03
Current Liabilities | 0.98 | -0.1 | 0 | 0 | 0.03 | 0 | 0 | 0.01 | 0.02
Total Assets | 0.98 | 0 | 0.06 | 0.01 | 0.03 | 0 | 0 | 0.03 | 0
PPE Net | 0.94 | 0 | 0.1 | 0.01 | 0.03 | 0 | 0 | 0.01 | -0.1
Net Sales | 0.98 | 0 | 0.09 | 0 | 0.04 | 0.04 | 0.01 | 0 | 0.04
Depreciation & Amortization | 0.90 | -0.1 | 0.2 | 0.07 | 0.03 | 0.02 | -0.1 | 0.02 | -0.2
Common Shares Outstanding | 0.94 | 0 | -0.2 | 0 | 0.02 | 0 | 0.04 | 0.01 | 0.06
COGS | 0.82 | 0 | 0.09 | 0 | 0.03 | 0.08 | 0 | 0.01 | 0.05
Net Income (Loss) | 0.92 | 0 | 0.01 | 0.01 | 0.05 | 0.03 | 0.08 | 0 | 0.18
Diluted EPS | 0.10 | 0 | 0.48 | 0.29 | 0.34 | 0 | 0.06 | 0 | 0.73
Market Value - Fiscal Year End | 0.22 | 0.01 | 0.92 | 0.06 | 0.2 | 0.06 | 0.11 | 0.02 | 0.16
Table 6. Industry Factor Analysis: Mid-Range Year Randomly Selected 2004
For this reason, we note: it will be important to understand that A.D.A.M. Inc., our firm, does not have to compete on a market-relative basis, i.e.,
against the industry as a portfolio. One possible reaction to this, that the decision-makers may
decide, is that it would be useful in a strategic planning context to partition the industry into
various profile groupings and then re-start the Delphi BSC using these industry sub-
groupings as additional benchmarks. We have opted for the other approach that is to use
the Amdocs benchmark and continue with the Delphi BSC.
3.3.3 The study firm—A.D.A.M., Inc. and the selected benchmark: Amdocs
We will now concentrate on the relative analysis of the study firm and Amdocs, our positive
benchmark. One of the underlying assumptions of this comparative analysis is the stability
of the factors in the panel. As we have an auto-correlated panel for these two firms, there is
no statistical test for stability for the particular factor arrangement that was produced. If one
seeks to have a demonstration of this stability then a simple boot-strapping test will give the
required information. See Tamhane and Dunlop (2000, p. 600). For our data, we conducted a relational test for this data set and found it to be stable over the following two partitions: years 2005-2006 compared to years 2003-2006, which argues for factor stability.
Receivables. This suggests that more resources are associated with lower market volatility.
Further, we observe a positive association between Gross Margin (i.e., Net Sales less COGS)
and certain balance sheet variables such as Total Assets and Liabilities, but we do not
observe any relationship between Reported Net Income [RNI] and these balance sheet
variables, implying that although our resource configuration effort is influencing the gross
margin it is not aligned with our bottom line RNI improvement. It is also interesting to note
that Factor 1 and Beta are not associated with Factor 2 which seems to be best characterized
as the growth dimension of A.D.A.M. Inc. For the second factor, our RNI movement is
consistent with our cash management (Cash & Short-term Investments) and assets
management (ROA) as well as our potential to grow as measured by Tobin’s q.
Variable | Factor 1 | Factor 2 | Factor 3
Beta | -0.94 | -0.22 | 0.25
Tobin's Q | 0.99 | -0.07 | 0.16
Current Ratio | -0.05 | -0.99 | -0.09
Quick Ratio | -0.20 | -0.98 | -0.10
ROA | 0.84 | -0.50 | -0.24
Gross Margin | 0.98 | 0.05 | 0.17
A/R Turnover | 0.30 | -0.95 | 0.09
EPS Growth | -0.73 | 0.67 | -0.09
Cash | -0.48 | 0.69 | -0.54
Cash & Short-term Investment | -0.96 | -0.02 | -0.28
Cash from Operations | 0.71 | 0.70 | 0.07
Receivables Total | 0.97 | 0.08 | 0.24
Current Assets | 0.34 | 0.88 | -0.32
Current Liabilities | 0.03 | 0.99 | 0.09
Total Assets | 0.91 | 0.34 | 0.23
PPE Net | 0.37 | 0.81 | 0.45
Net Sales | 0.99 | 0.01 | 0.16
Depreciation & Amortization | 0.62 | 0.30 | 0.73
Common Shares Outstanding | -0.48 | 0.86 | 0.17
COGS | 0.99 | -0.01 | 0.16
Net Income (Loss) | 0.98 | -0.20 | -0.01
Diluted EPS | 0.97 | -0.23 | -0.01
Market Value - Fiscal Year End | 0.96 | 0.19 | 0.21
Table 8. Factor Analysis: Amdocs, Inc. [DOX]
Observation II: Relative to Amdocs, Inc.
In comparison, Table 8 presents a very different profile of Amdocs, Inc., our positive benchmark. The first factor of Table 8 has many of the same resource configuration aspects, except that both Beta and Tobin's q are featured in Factor 1, as is RNI. This strongly indicates that the resource configuration is aligned linearly with the income-producing potential. In this way, Amdocs, our positive benchmark, seems to have worked out the asset employment relation to RNI that is not found for our firm, A.D.A.M. Inc. Further, Beta is
inversely associated with the “income machine” as reflected by Tobin's q, RNI and Net Sales
for Amdocs meaning that the more assets the more they are converted linearly to Reported
182 Efficient Decision Support Systems – Practice and Challenges in Multidisciplinary Domains
Net Income and this has a dampening effect on Market volatility. This certainly is a
positive/desirable profile and is probably why Tobin's q is positively associated with this
profile; we do not see this for A.D.A.M. Inc. as Tobin’s q is only associated with growth and
is invariant to Beta and the resources employed as discussed above.
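For readers who wish to reproduce a loading table of the kind shown in Table 8, the following is a minimal Python sketch of a three-factor extraction with varimax rotation; the input file name and variable layout are hypothetical, and scikit-learn is used here only as a stand-in for the statistical package actually employed in the study (JMP is cited in the references).

```python
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

# Hypothetical quarterly panel of the financial variables listed in Table 8.
data = pd.read_csv("amdocs_panel.csv")            # assumed file layout
X = StandardScaler().fit_transform(data.values)   # standardize each variable

# Extract three factors with a varimax rotation (scikit-learn >= 0.24).
fa = FactorAnalysis(n_components=3, rotation="varimax", random_state=0)
fa.fit(X)

# Loadings: rows are variables, columns are Factors 1-3, as in Table 8.
loadings = pd.DataFrame(fa.components_.T,
                        index=data.columns,
                        columns=["Factor 1", "Factor 2", "Factor 3"])
print(loadings.round(2))
```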
To continue with the simple two-project example: if two projects are the same in terms of profitability (e.g., ROA), we should favor the project that offers the richer strategic opportunities. Also, if the two projects permit various asset-contracting possibilities, such as operating leases, capital leases, or outright purchase, and there are relatively wide ranges for useful economic life, resale market valuation, and the method of depreciating the asset, then we should favor the project that has the more favorable impact on our recorded book value, given the same profitability.
4. Conclusion
To “close-the-loop” we wish to note that the Financial Dimension information developed
from the Delphi BSC or Lean Modeling approach will be included in the BSC along with
information on the other three dimensions of the BSC: Customers, Internal Processes, and
Learning and Growth. These BSC dimensional encodings are the input to the firm DSS needed
to develop priority information for the various projects that the firm may be considering—
i.e., project prioritization is the fundamental reason that firms use the BSC or budgeting
models. This is to say that the BSC information is “intermediate” information in that this
BSC information will be used to characterize the projects that are the planned future of the
firm. The final task in the Delphi BSC process then is selecting from the BSC-encoded
projects the actual projects to be funded by the firm.
In this regard, to conclude our research report, we wish to note a simple way that one may
“prioritize” the navigation information from all four dimensions of the BSC as they are
encoded in the projects. For example, in our simple illustrative example where we have
focused on the Financial Dimension, we have proposed that there were two projects and the
iPhone Project dominated on both of the criteria variables: ROA and Growth Profile
Management. It is more likely the case that there will be multiple criteria that result from
the Delphi BSC and that the preference weights considering all four of the BSC dimensions
will be distributed over the various projects so that there would be no clear dominance. To
determine the actual project-action plan from the output of the Lean-version of the Delphi
BSC is indeed a daunting task even with a relatively small number of performance criteria.
In this regard, it is necessary for the firm to select a method to determine the final set of
projects in which the firm will invest so as to satisfy the Overall BSC navigational
imperatives for the firm.
There are many such preference or priority processing methods. Based upon our consulting work, we prefer the Expert Choice Model™ [http://www.expertchoice.com/] developed by Saaty (1980, 1990). We find this alternative/project ranking methodology to be the simplest, the easiest to communicate to the DM, and the most researched ranking model according to the average number of annual citations in the literature. As a disclosure note: we have no financial interest in Expert Choice Inc., the firm that developed the application software for the Expert Choice Model™.
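As an illustration of the Analytic Hierarchy Process that underlies the Expert Choice software, the following Python sketch derives priority weights from a pairwise comparison matrix using the principal-eigenvector method described by Saaty; the three example criteria and the comparison values are hypothetical.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three BSC-derived criteria,
# e.g. ROA, Growth Profile Management, and one Customer-dimension measure.
# Entry (i, j) is how strongly criterion i is preferred over criterion j
# on Saaty's 1-9 scale; the matrix is reciprocal by construction.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Principal eigenvector gives the priority weights (Saaty, 1980).
eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency ratio CR = CI / RI, with RI = 0.58 for a 3x3 matrix; CR < 0.10 is acceptable.
ci = (eigvals.real[k] - len(A)) / (len(A) - 1)
print(weights.round(3), round(ci / 0.58, 3))
```

The same weighting step would be applied across all four BSC dimensions, and the resulting priorities distributed over the BSC-encoded projects to produce the final funding ranking.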
5. Acknowledgements
We wish to thank the following individuals for their detailed and constructive comments on
previous versions of the paper: The participants in Technical Session 9 of the 2010 Global
Accounting and Organizational Change conference at Babson College, Babson Park, MA
USA, in particular Professor Ruzita Jusoh, University of Malaya, Malaysia; the participants
at the 2009 National Meeting of the Decision Science Institute, New Orleans, LA, USA, in particular Professor Sorensen, University of Denver; and additionally Professor Neuhauser of the Department of Finance, Lamar University, Beaumont, Texas, USA, and Professor Razvan Pascalau of the Department of Finance and Economics, SUNY Plattsburgh, Plattsburgh, NY, USA.
6. References
Certified Information Security Manager. (2009). CISM Review Manual, ISACA, Rolling
Meadows, IL
Green, P. E. & Tull, D.S. (1975). Research for Marketing Decisions, Upper Saddlewood, NJ,
Prentice Hall
Grene, S. (December 2008). Clarity to Top Agenda. London: Financial Times, 01.12.2010
Hackett Company Report. (2004). Balanced Scorecards: Are their 15 Minutes of Fame Over?
The Hackett Group Press, Report AGF/213
Harmon, H. H. (1960). Modern Factor Analysis, University of Chicago Press, Chicago, IL
Jung-Erceg, P.; Pandza, K.; Armbruster, A. & Dreher, C. (2007). Absorptive Capacity in
European Manufacturing: A Delphi study. Industrial Management + Data Systems,
Vol. 107, No. 1, pp. 37-51
Jones, J. A. (2006). Introduction to Factor Analysis of Information Risk. Risk Management
Insight [online], Available from
http://www.riskmanagementinsight.com/media/documents/FAIR_Introduction.
pdf
Kaplan, R & Norton, D. (1992). The Balanced Scorecard—Measures that Drive Performance.
Harvard Business Review, Vol. 83, No. 7/8, pp. 71-79
Kaplan, R. & Norton, D. (1996). Linking the Balanced Scorecard to Strategy. California
Management Review, Vol. 39, No. 1, pp. 53-79
Lusk, E.J.; Halperin, M. & Zhang, B-D. (2006). The Balanced Scorecard: Suggestions for
Rebalancing. Problems and Perspectives in Management, Vol. 4, No. 3, pp. 100-114
Paladino, R. (2005). Balanced Forecasts Drive Value. Strategic Finance, Vol. 86, No. 7, pp. 36-
42
Saaty, T. L. (1980). The Analytic Hierarchical Process, McGraw-Hill, New York, NY
Saaty, T. L. (1990). How to Make a Decision: The Analytic Hierarchy Process. European
Journal. of Operational Research, Vol. 48, No. 1, pp. 9-26
Sall, J.; Lehman, A. & Creighton, L. (2005). JMP™ Start Statistics, Version 5, 2nd Ed, Duxbury
Press, Belmont, CA
Shannon, C. (1948). A Mathematical Theory of Communication. Bell System Technical Journal,
No. 27 (July and October), pp. 379-423, 623-656, Available from
http://plan9.bell-labs.com/cm/ms/what/shannonday/shannon1948.pdf
Tamhane, A. C. & Dunlop, D. D. (2000). Statistics and Data Analysis, Prentice Hall, Upper
Saddle River, NJ
Van der Woerd, F. & Van den Brink, T. (2004). Feasibility of a Responsive Scorecard-A Pilot
Study. Journal of Business Ethics, Vol. 55, No. 2, pp. 173-187
Vincent, M. (2008). Ivy League Asset Allocation Excites Wall Street. London: Financial Times,
8 April: 9.
10
Linking a Developed Decision Support System with Advanced Methodologies for Optimized Agricultural Water Delivery
1. Introduction
Water is the lifeblood of the American West and the foundation of its economy, yet it remains its scarcest resource. Explosive population growth in the Western United States, the emerging additional need for water for environmental uses, and the national importance of domestic food production are driving major conflicts among these competing water uses. Irrigated agriculture in particular is by far the largest user of diverted water (80% countrywide and 90% in the Western U.S.), and since it is perceived to be a comparatively inefficient user, it is frequently asked to decrease its water consumption. The case of the Middle Rio Grande illustrates the problem very well. The Rio Grande is the ecological backbone of the Chihuahuan Desert region in the western United States and supports its dynamic and diverse ecology, including fish and wildlife habitat. The Rio Grande silvery minnow is federally listed as an endangered species, and irrigated agriculture in the Middle Rio Grande has come under increasing pressure to reduce its water consumption while maintaining the desired level of service to its water users.
Irrigated agriculture in the Western United States has traditionally been the backbone of the rural economy. The climate in the American West, with low annual rainfall of 10 to 14 inches, is not conducive to dryland farming. Topography in the West is characterized by the Rocky Mountains, which accumulate significant snowfall, and the peaks of the snowmelt hydrograph are stored in reservoirs, allowing for irrigation throughout the summer crop growing season. Of the total available surface water, irrigated agriculture uses roughly 80 to 90% (Oad and Kullman, 2006). The combined demands of the agricultural, urban, and industrial sectors have in the past left little water for fish and wildlife. Since irrigated agriculture uses roughly 80 to 90% of surface water in the West, it is often targeted to decrease diversions. Due to wildlife concerns and demands from an ever-growing urban population, the pressure for flow reductions on irrigated agriculture increases every year. In order to sustain itself and deal with external pressure for reduced river diversions, irrigated agriculture has to become more efficient in its water consumption. This chapter focuses on research regarding improving water delivery operations in the Middle Rio Grande Conservancy District system through the use of a decision support system linked with advanced infrastructure, methodologies, and technology.
The Middle Rio Grande (MRG) Valley runs north to south through central New Mexico
from Cochiti Reservoir to the headwaters of Elephant Butte Reservoir, a distance of
approximately 175 miles. The MRG Valley is displayed in Figure 1.
The valley provides habitat for two federally listed endangered species, the Rio Grande silvery minnow (Hybognathus amarus) and the southwestern willow flycatcher (Empidonax traillii extimus) (USFWS, 2003). Figure 2 displays the Rio Grande silvery minnow and the southwestern willow flycatcher.
Fig. 2. Federally listed endangered species: the silvery minnow (Hybognathus amarus) and the southwestern willow flycatcher (Empidonax traillii extimus).
This chapter will focus on research for improving water delivery operations in irrigation
systems through the innovative use of water delivery decision support systems (DSS) linked
with SCADA technology and infrastructure modernization. The chapter will present
decision support modeling of irrigation systems in a broad sense and present the model
development and structure of a decision support system developed specifically for the
Middle Rio Grande Conservancy District to provide for more efficient and effective
management of water supplies, and more timely delivery of irrigation water to agricultural
users. The developed DSS will be presented in detail and all three modules that function
together to create water delivery schedules will be explained and examined. The chapter
will address the development of the DSS and will also present the utility of linking the
developed DSS to the SCADA (Supervisory Control And Data Acquisition) system and
automated structure network that the Middle Rio Grande Conservancy District utilizes to
manage water delivery. Linking the DSS and SCADA allows water managers to implement DSS water delivery schedules at the touch of a button by remotely controlling water delivery structures, providing a simple and effective medium for managers to implement the recommended schedules. Additionally, the combination of the two programs allowed for real-time management that brought river diversions into line with the water required by the DSS. As the demand for water increases globally over the coming decades, water managers will face significant challenges. The use of decision support systems linked with SCADA and automated structures presents an advanced, efficient, and innovative approach for meeting these water management challenges in the future.
Irrigated agriculture in the MRG Valley reached its greatest extent in the 1880s, but
thereafter underwent a significant decline partially caused by an overabundance of water.
By the early 1920s inadequate drainage and periodic flooding resulted in water logging
throughout the MRG Valley. Swamps, seeps, and salinization of agricultural lands were the
result. In 1925, the State of New Mexico passed the Conservancy Act, which allowed for the
creation of the MRGCD, by combining 79 independent acequia associations into a single
entity (Gensler et al. 2009; Shah, 2001). Over the next twenty years the MRGCD provided
benefits of irrigation, drainage, and flood control; however, by the late 1940s, the MRGCD was financially unstable and further rehabilitation of structures was required. In 1950, the MRGCD entered into a 50-year contract, termed the Middle Rio Grande Project, with the United States Bureau of Reclamation (USBR) to provide financial assistance, system rehabilitation, and system improvement.
System improvements and oversight from the USBR continued until 1975 when the MRGCD
resumed operation and maintenance of the system. The loan from the USBR to the MRGCD
for improvements and operational expenses was repaid in 1999 (Shah, 2001). Currently the
MRGCD operates and maintains nearly 1,200 miles of canals and drains throughout the
valley in addition to nearly 200 miles of levees for flood protection.
Water use in the MRG Valley has not been adjudicated but the MRGCD holds various water
rights and permits for irrigation (Oad and Kullman, 2006). Some users in the MRGCD hold
vested water rights that are surface rights claimed by land owners who irrigated prior to
1907 (SSPA, 2002). Most water users in the MRGCD receive water through state permits
held by the MRGCD. In 1930, the MRGCD filed two permits (#0620 and #1690) with the
Office of the State Engineer that allow for storage of water in El Vado reservoir (180,000 acre
feet capacity), release of the water to meet irrigation demand, and diversion rights from the
Rio Grande to irrigate lands served by the MRGCD. The permits allow the MRGCD to
irrigate 123,000 acres although only about 70,000 acres are currently served (MRGCD, 2007).
This acreage includes roughly 10,000 acres irrigated by pueblo farmers. The MRGCD
charges water users an annual service charge per acre to operate and maintain the irrigation
system. In 2000 the MRGCD charged $28 per acre per year for the right to irrigate land
within the district (Barta, 2003). The MRGCD services irrigators from Cochiti Reservoir to
the Bosque del Apache National Wildlife Refuge. An overview map of the MRGCD is
displayed in Figure 3.
Irrigation structures managed by the MRGCD divert water from the Rio Grande to service
agricultural lands that include both small urban landscapes and large scale production of
alfalfa, corn, vegetable crops such as chile, orchards, and grass pasture. The majority of the
planted acreage, approximately 85%, consists of alfalfa, grass hay, and corn. In the period
from 1991 to 1998, USBR crop production and water utilization data indicate that the
average irrigated acreage in the MRGCD, excluding pueblo lands, was 53,400 acres (21,600
ha) (SSPA, 2002). Analysis from 2003 through 2009 indicates that roughly 50,000 acres
(20,200 ha) are irrigated as non-pueblo or privately owned lands and 10,000 acres (4,000 ha)
are irrigated within the six Indian Pueblos (Cochiti, San Felipe, Santo Domingo, Santa Ana,
Sandia, and Isleta). Agriculture in the MRGCD is a $142 million a year industry (MRGCD,
2007). Water users in the MRGCD include large farmers, community ditch associations, six
Native American pueblos, independent acequia communities and urban landscape
irrigators. The MRGCD supplies water to its four divisions -- Cochiti, Albuquerque, Belen
and Socorro -- through Cochiti Dam and Angostura, Isleta and San Acacia diversion weirs,
respectively (Oad et al. 2009; Oad et al. 2006; Oad and Kinzli, 2006). In addition to
diversions, all divisions except Cochiti receive return flow from upstream divisions.
acres. The Belen Division is the largest in terms of overall service area with a total irrigated
acreage of 31,400 acres. Irrigation in the Belen Division is comprised of large farms, pueblo
irrigators, and urban water users. In Belen the MRGCD employs one water master and 11
ditch-riders. The Socorro Division consists of mostly large parcel irrigators with a total
irrigated acreage of 13,600 acres. Water distribution in Socorro is straightforward when
compared to the Albuquerque and Belen Division, and is managed by one water master and
four ditch-riders. Water availability in Socorro can become problematic since the division
depends on return flows from upstream users.
Water in the MRGCD is delivered in hierarchical fashion; first, it is diverted from the river
into a main canal, then to a secondary canal or lateral, and eventually to an acequia or small
ditch. Figure 4 displays the organization of water delivery in the MRGCD. Conveyance
canals in the MRGCD are primarily earthen canals but concrete lined canals exist in areas
where bank stability and seepage are of special concern. After water is conveyed through
laterals or acequias, it is delivered to the farm turnouts, with the aid of check structures when necessary. Once water passes the farm turnout, it is the responsibility of individual farmers to apply it, and it is generally applied to fields using basin or furrow irrigation techniques.
Ditch-riders oversee water delivery in their service areas and are in constant contact with water users via cellular phones. Ditch-riders are on call 24
hours a day to deal with emergencies and water disputes, in addition to daily operations.
Water delivery in the MRGCD is not metered at individual farm turnouts; to determine water delivery, the ditch-riders estimate the time required for irrigation. The historic practice in the MRGCD was to operate main canals and laterals as full as possible throughout the entire irrigation season. This practice, known as on-demand water delivery, provided flexible and reliable water delivery with minimal managerial and financial ramifications. On-demand or continuous water delivery, however, resulted in large diversions from the Rio Grande. During the past decade, the MRGCD has voluntarily reduced river diversions by switching to scheduled water delivery. The drawback to this approach is the increased managerial involvement and the overall cost of water delivery. To address the operational and managerial challenges posed by scheduled water delivery, the MRGCD has developed and implemented a Decision Support System (DSS) to facilitate scheduled water delivery. Additionally, the MRGCD has begun to replace aging water delivery infrastructure with automated control gates that allow for precise control of canal flow rates, a requirement for scheduled water delivery.
3. Decision support system for the Middle Rio Grande Conservancy District
The New Mexico Interstate Stream Commission and the MRGCD, in cooperation with Colorado State University, sponsored a research project from 2003 to 2010 to develop a decision support system (DSS) to model and assist the implementation of scheduled water delivery for efficiency improvements in the MRGCD's service area. A DSS is a logical arrangement of information, including engineering models, field data, GIS and graphical user interfaces, that is used by managers to make informed decisions. In irrigation systems, a DSS can organize information about water demand in the service area and then schedule available water supplies to efficiently fulfill the demand.
The conceptual problem addressed by a DSS for an irrigation system, then, is: how best to
route water supply in a main canal to its laterals so that the required river water diversion is
minimized. The desirable solution to this problem should be “demand-driven”, in the sense
that it should be based on a realistic estimation of water demand. The water demand in a
lateral canal service area, or for an irrigated parcel, can be predicted throughout the season
through analysis of information on the irrigated area, crop type and soil characteristics. The
important demand concepts are: when water supply is needed to meet crop demand (irrigation timing), how long the water supply is needed during an irrigation event (irrigation duration), and how often irrigation events must occur for a given service area (frequency of irrigation). Decision support systems have been implemented throughout the American West and are mostly used to regulate river flow. Decision support systems at the river level are linked to gauging stations and are used to administer water rights at diversion points. Although decision support systems have proved their worth in river
management, few have been implemented for modeling irrigation canals and laterals and
improving water delivery (NMISC, 2006). The following section will focus on developing a
decision support system capable of modeling flow on a canal and lateral level, with the
overall goal of efficient irrigation water delivery.
The DSS has been formulated to model and manage water delivery in the MRGCD. The DSS
was designed to optimize water scheduling and delivery to meet crop water demand, and,
specifically, to aid in the implementation of scheduled water delivery (Oad et al. 2009;
NMISC, 2006). The DSS consists of three elements, or modules:
- A water demand module that calculates crop consumptive use and soil moisture storage, aggregated by lateral service area;
- A water supply network module that represents the layout of the conveyance system, main canal inflow, conveyance system physical properties, and the relative location of diversions for each lateral service area; and
- A scheduling module that routes water through the supply network to meet irrigation demand using a mass-balance approach and based on a ranking system that depends on the existing water deficit in the root-zone.
A Graphical User Interface (GUI) was designed to link the three modules of the DSS
together allowing users to access data and output for the system. A schematic of the three
modules and the way that they relate within the DSS framework is shown in Figure 5.
Figure 6 displays a simplified view of the DSS. GIS information and data obtained from the
MRGCD were used to develop input for both the water demand and the supply network
modules. Some of the input is directly linked through the GUI and some is handled
externally (NMISC, 2006).
zone depth. The calculation of TAW is displayed in Equation 1 (Oad et al. 2009; NMISC, 2006).
output. In the present model version, the module displays results in tabular form. Mass balance calculations used to schedule irrigation timing and duration for lateral canal service areas are based on the consideration that the farm soil root-zone is a reservoir for water storage, for which irrigation applications are inflows and CIR is an outflow (NMISC, 2006). The mass balance approach is displayed in Equation 3.
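To make the root-zone reservoir idea concrete, the following is a minimal Python sketch of a daily soil-moisture accounting step of the kind described above; the variable names follow the text (TAW, RAM, CIR), but the specific update rule, units, and trigger threshold are standard soil-water-balance assumptions rather than the chapter's exact Equations 1-3.

```python
# Minimal daily root-zone water balance for one lateral service area.
# Assumed units: all depths in inches of water over the service area.

def daily_soil_moisture_step(ram, taw, cir, irrigation, rainfall=0.0):
    """Update remaining available moisture (RAM) in the root-zone reservoir.

    ram        : available soil moisture at the start of the day
    taw        : total available water the root zone can hold
    cir        : crop irrigation requirement (consumptive use) for the day
    irrigation : depth of irrigation water delivered to the area
    """
    ram = ram + irrigation + rainfall - cir    # inflows minus the crop outflow
    return max(0.0, min(ram, taw))             # bounded by an empty/full reservoir

# Example: trigger an irrigation when RAM would fall below a management threshold.
taw, ram = 6.0, 3.5            # hypothetical values
threshold = 0.5 * taw          # assumed allowable depletion
for day, cir in enumerate([0.25, 0.30, 0.28, 0.32, 0.30], start=1):
    irrigation = (taw - ram) if ram - cir < threshold else 0.0
    ram = daily_soil_moisture_step(ram, taw, cir, irrigation)
    print(f"day {day}: irrigation={irrigation:.2f} in, RAM={ram:.2f} in")
```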
laterals that represent crop water demand. The problem is similar to a transportation
problem, where the service areas are demand nodes and the inflows are supply nodes
(NMISC, 2006). Links are created between nodes where water can be routed. In a
transportation problem, the supply needs to equal the demand; in this case, however, both
under-supply (excess demand) and excess supply are possible. Therefore, to ensure that the
system balances, a “dummy” source node is added that computes the water shortage in the
event the system is water-short. Note that in a water-rich scenario, the dummy node is not
used because it calculates only water shortage (NMISC, 2006).
Minimize Z = MP_D-0 · X_D-0 + MP_D-1 · X_D-1 + MP_D-2 · X_D-2 + MP_D-3 · X_D-3     (4)
In this equation Z is the sum of a modified priority (MP) multiplied by the amount of supply (X) from the dummy supply to each demand node. The subscripts refer to the nodal points between which flow occurs; i.e., X_D-1 refers to flow from the dummy supply to Check 1, and MP_D-1 refers to the modified priority of demand to be satisfied at Check 1 from the dummy supply node. The MP value reflects the need-based ranking system where demand
nodes with lower available soil moisture are favored for irrigation (NMISC, 2006). The
objective function is solved in conjunction with a system of mass balance equations
representing the actual water (and dummy water) delivered to demand nodes, along with
other physically-based constraints.
The variables in the objective function represent the links in the network between the
dummy supply and the demand nodes. The coefficient of each variable represents the flow
“cost” of that link. In other words, delivery of water to a node without a need for water
results in a higher “cost”. As further discussed below, the ranking system has been assigned
such that minimization of this objective function will result in minimization of water
delivery to demand nodes that already have sufficient RAM (NMISC, 2006).
Constraints on the objective function solution reflect the mass balance relationships
throughout the link-node network and the capacity limits on flow (NMISC, 2006). A mass-
balance constraint is created for each node (including the dummy) that establishes the
inflow and outflow to that node. The coefficients of the variables for each constraint (each
row) are represented as a matrix, with a column for every variable in the objective function
and a row for every node (NMISC, 2006). Inflows are represented as negative values and
outflows as positive values. Outflow coefficients are always one, and inflow coefficients
equal the conveyance loss of the connection.
The objective function is subject to the following constraints:
X_I-0 <= I
-X_I-0 + X_0-1 - X_D-0 = R_0
-L_1·X_0-1 + X_1-2 - X_D-1 = R_1
-L_2·X_1-2 + X_2-3 - X_D-2 = R_2
-L_3·X_2-3 - X_D-3 = R_3
X_D-1 + X_D-2 + X_D-3 < ∞
where
X_0-1 <= C_0-1,  X_1-2 <= C_1-2,  X_2-3 <= C_2-3,  and all X_i-j >= 0
The variables used are:
I is the total available inflow
X_i-j is the flow in a canal reach between points i and j
C_i-j is the maximum capacity of the canal reach between points i and j
D refers to a dummy supply node that is used to force the demands and supplies to balance. The subscript 0 refers to the inflow node, and subscripts 1, 2, 3, ... refer to nodal points, typically located at check structures
L_i is the conveyance loss in the canal reach
R_i is the demand (water requirement) at the nodal point indicated by the subscript (which can be zero if not associated with a lateral diversion point)
For example, the third row refers to activity at check 1. There is an inflow from the headgate (-L_1·X_0-1), and it is given a negative sign since by convention all inflows are negative. The conveyance loss is represented by the coefficient L_1. There is an outflow to check 2 (+X_1-2), which has a positive sign since by convention all outflows are positive. To ensure that the system balances, there is also an inflow from the dummy source (-X_D-1). Because this node represents a demand, the solution for this row is constrained to be exactly the demand (R_1).
If a node represented a source, then the solution for the row would be constrained to fall
between zero and the inflow. That allows the use of less than the total amount of water
available if the demands are less than the supplies, or if at some point in the network the
capacity is insufficient to route the inflow (NMISC, 2006). The first row in the constraint
equations represents this type of node.
The conveyance loss factor specified in the supply network module is a fractional value of flow per mile. The conveyance loss (L) to be applied in the mass balance equation is calculated by subtracting the fractional value from one and raising the result to the power of the number of miles of the canal segment between nodes. For example, a 3-mile reach with a 0.015 conveyance loss factor would have a loss of [1 − (1 − 0.015)^3], or 0.0443, of the in-stream flow to this reach.
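As a quick check of this arithmetic, a one-line helper can compute the lost fraction for a reach; the function name and argument units are illustrative.

```python
def reach_loss(loss_factor_per_mile: float, miles: float) -> float:
    """Fraction of in-stream flow lost over a reach, e.g. reach_loss(0.015, 3) ~= 0.0443."""
    delivered = (1.0 - loss_factor_per_mile) ** miles
    return 1.0 - delivered

print(round(reach_loss(0.015, 3), 4))  # 0.0443
```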
The ranking system used to derive the modified priority (MP) values for the objective
function is a two-step process, involving assignment of a priority (P) based on the irrigation
need at demand nodes, and then a modified priority that effectively reverses the ranking so
that nodes with the least need are the preferred recipients for dummy water (NMISC, 2006).
This results in the actual available water being delivered to the demand nodes with highest
irrigation need. First, a priority (P) is assigned to each of the demand nodes, with smaller
values indicating higher needs for irrigation. The priority is based on the number of days
until the service area runs out of RAM (NMISC, 2006). If the service area is not being
irrigated, 100 is added to the priority, which forces the system to favor areas being irrigated
until the RAM is full again. Subsystems were added to give priority to remaining canals
within a group on the assumption that if one canal service area in a subsystem is being
irrigated then it is desirable that the remaining canal services areas in the same group be
irrigated as well. If a service area is not being irrigated, but is in a subsystem that is being
irrigated, 50, rather than 100, is added to the priority. This makes it a higher priority than
service areas that are not being irrigated but are not in the subsystem. Normally a service
area is irrigated only once during a schedule. However, when excess water is available,
service areas in need of water are added back into the scheduling algorithm with a higher
priority. The ranking system is implemented by modifying the priorities with respect to the
dummy connections, effectively reversing the priorities (NMISC, 2006). Currently the
modified priority (MP) for the “dummy -> node x” connection is 100,000/Px. For example, if
the node has a priority of 105, then the priority assigned to the connection is 100,000/105 or
952.38. This will force dummy water to be delivered first to the lower priority nodes, leaving
real water for the higher priority nodes. The modified priority (MP) values are represented
by the MP variables in the objective function. The linear programming software utilized in
the DSS is a package called GLPK (GNU Linear Programming Kit).
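The chapter states that GLPK solves the resulting linear program; as an illustration only, the following Python sketch sets up a small instance of the same headgate/check/dummy structure with scipy.optimize.linprog rather than GLPK, and the inflow, demand, capacity, loss, and priority values are made up.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 3-check lateral: decision variables are the reach flows
# X_0-1, X_1-2, X_2-3 and the dummy deliveries X_D-1, X_D-2, X_D-3
# (water "delivered" only on paper when the system is short).
I = 20.0                              # available headgate inflow (cfs)
cap = 25.0                            # capacity of each reach (cfs)
L = (1 - 0.015) ** 3                  # delivered fraction of a 3-mile reach
R = [8.0, 6.0, 5.0]                   # demands at checks 1-3 (cfs)
P = [5.0, 105.0, 55.0]                # priorities (smaller = higher need)
MP = [100000.0 / p for p in P]        # modified priorities for the dummy links

# Objective: real water is free, dummy water costs MP per unit, so the solver
# sends dummy water to the lowest-need (lowest-MP) checks first.
c = [0.0, 0.0, 0.0] + MP

# Mass balance at each check: delivered real water + dummy water = demand + outflow.
A_eq = np.array([
    [L, -1.0, 0.0, 1.0, 0.0, 0.0],    # check 1
    [0.0, L, -1.0, 0.0, 1.0, 0.0],    # check 2
    [0.0, 0.0, L, 0.0, 0.0, 1.0],     # check 3
])
b_eq = R

bounds = [(0, min(I, cap)), (0, cap), (0, cap), (0, None), (0, None), (0, None)]
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print(res.x.round(2), round(res.fun, 1))  # reach flows, dummy deliveries, objective
```

The DSS itself expresses this structure in GLPK and extends it to the full canal network, the subsystem groupings, and the re-irrigation rules described above; the sketch only illustrates the role of the dummy links and modified priorities.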
and the demand throughout the system. In 1996, crisis struck the MRGCD in the form of
drought, endangered species flow requirements, and development of municipal water
supplies. At the time, the MRGCD was operating only 15 gauges on 1200 miles of canals.
The following year, MRGCD officially embarked upon its modernization program. The
construction of new flow gauges was the first step in this program. New gauges were
constructed at key points in the canal system, notably at diversion structures and at return
flow points. Along with the increase in numbers of gauging stations, efforts were made to
improve the quality of measurement. Open channel gauging sites with no control structures
gave way to site specific measuring structures. A variety of flow measurement structures
were built in the MRGCD and include sharp crested weirs, broad crested weirs, adjustable
weirs and Parshall flumes. Currently, the MRGCD is operating over 100 gauges. Figure 10 displays a broad crested weir with radio telemetry utilized for flow measurement.
Fig. 10. Broad Crested Weir Gauging Station with Radio Telemetry
4.3 Instrumentation
Flow measurement and automated control must include some level of instrumentation. In the 1930s, a float in a stilling well driving a pen across a revolving strip of paper was adequate. In fact, at the beginning of modernization efforts, the MRGCD was still using 15 Stevens A-71 stage recorders. Diversions into the canal system were only known after the
strip charts were collected and reduced at the end of the irrigation season. Modernization
meant a device was needed to generate an electrical or electronic output that could be
digitally stored or transmitted. Initially, shaft encoders were used for this purpose,
providing input for electronic data loggers. Experimentation with submersible pressure
sensors soon followed, and these have been adopted, although a number of shaft encoders
are still in use. Sonar sensors have been used satisfactorily at a number of sites and recently
compressed air bubblers have been utilized. Different situations call for specific sensor types
and sensors are selected for applications where they are most appropriate.
4.4 Telemetry
Data from electronic data-loggers was initially downloaded manually and proved to be
only a minimal improvement over strip chart recording, though processing was much
faster. To address data downloading concerns telemetry was adopted to bring the
recorded data back to MRGCD headquarters at regular intervals (Figure 12). MRGCD's
initial exposure to telemetry was through the addition of GOES satellite transmitters to
existing electronic data loggers. This method worked, but presented limitations. Data
could only be transmitted periodically, and at regularly scheduled intervals. Of greater
consequence was that the GOES system, at least as used by MRGCD, was a one-way link.
Data could be received from gauging stations, but not sent back to them. As experiments
with automation progressed, it was clear that reliable 2-way communication would be a
necessity. To address the rising cost of phone service, experiments with FM radio
telemetry were conducted. These began as a way to bring multiple stream gage sites to a
central data logger, which would then be relayed via GOES to MRGCD. First attempts
with FM radio were not encouraging; however a successful system was eventually
developed. As this use of FM radio telemetry (licensed 450 MHz) expanded, and
knowledge of radio telemetry grew, it was soon realized that data could be directly
transmitted to MRGCD headquarters without using the GOES system. Full conversion to
FM radio based telemetry also resulted in very low recurring costs.
The shift to FM radio produced one of the more distinctive features of the MRGCD SCADA system. The data link proved so reliable that there was no longer a need to store data on site, and the use of data loggers was mostly discontinued, the exception being weather stations. In effect, a single desktop computer at the MRGCD headquarters has become the data-logger for the entire stream gauge and gate system, being connected to sensors in the field through the FM radio link. Three repeater sites are used to relay data up and down the length of the valley, with transmission distances of up to 75 miles. This arrangement also has the benefit of being a 2-way link, so various setup and control parameters can be transmitted to devices along the canals. This 2-way link allowed the system to be linked to the DSS in order to improve water delivery operations. The MRGCD telemetry network consists exclusively of Control Design RTUs. Several different types of these units are used, depending on the application. The simplest units contain only a modem and radio, and transmit collected and processed weather station data from Campbell Scientific CR10X dataloggers.
The majority of the RTUs contain a modem, radio, and an input/output (I/O) board packaged into a single unit. Sensors can be connected directly to these and read remotely over the radio link. A variety of analog (4-20 mA, 0-20 mA, 0-5 V) and digital (SDI-12, RS-485) output devices can be accommodated this way. Another type includes a programmable (RP-52 BASIC) controller in the modem/radio/I/O unit. This style is used for all automatic control sites and for places where unusual processing of sensor outputs, such as averaging values, combining values, or timed functions, is required. At the present time, the MRGCD telemetry network gathers data from over 100 stream flow gages and 18 ag-met stations, and controls 60 automated gates (Figure 12).
4.5 Software
Measurement, automation, and telemetry components were developed simultaneously, but largely independently of one another. While each component functioned as expected, the components did not yet form a harmonious whole, or what could truly be called a SCADA system. The missing component was software to tie all the processes together. There are a variety of commercially available software packages for such use, and the MRGCD experimented with several. Ultimately, the MRGCD chose to purchase the commercial software package Vsystem and to employ the vendor, Vista Controls, to develop new features specific to the control of a canal network. Installation and setup were done by the MRGCD.
This system, known as the Supervisory Hydro-data Acquisition and Handling System (SHAHS), gathers data from RTUs on a regular basis. With the capability to define both timed and event-driven poll routines, and to specify a virtually unlimited number of RTUs and MODBUS registers to collect, virtually any piece of information can be gathered at any desired time. The Vsystem software can process data through a myriad of mathematical functions and combine outputs from multiple stations. Vsystem also incorporates the ability to permanently store data in its own internal database or MS SQL databases, or to export data in other formats. Data can be displayed in a user-created graphical user interface (GUI), which MRGCD water operations personnel use to monitor water movement. The screens can also execute scripts to generate data, control parameters, control gate set points, and monitor alarm conditions for automated control structures. Finally, the GUIs can be used to control automated structures by transmitting new parameters and setpoints.
the MRGCD SCADA Vsystem software converts the output from the DSS into the SHEF.A format. This subroutine allows for the creation of separate data streams from the DSS schedule for each lateral canal service area. These data streams are the same data streams that are used throughout the MRGCD SCADA network.
The third step in linking the DSS was to create a node for the DSS recommended flowrate
for each lateral service area in the SCADA GUI that also contains actual flowrate data. This
consisted of creating nodes in the SCADA GUI that display the actual flow passing into a
lateral on the left side of the node and the recommended flowrate from the DSS in
parentheses. The DSS recommended flow was linked to the correct data stream from the
DSS output in SHEF.A format. For the 2009 irrigation season actual flow and the DSS
recommended flow were displayed side by side for each lateral on the Peralta Main Canal in
the SCADA GUI. Figure 13 displays the revised SCADA GUI with the DSS
recommendations.
Fig. 13. MRGCD SCADA Screen with Actual Deliveries and DSS Recommendations
Whenever the DSS flow-rate recommendations are updated, the DSS is run and the scheduler output is saved as a CSV file in Excel. This file is then converted using the Vsystem subroutine and made available as a SHEF.A data stream for each individual lateral. Linking the DSS water distribution recommendations with the MRGCD SCADA provides a simple and effective medium for managers to implement and monitor DSS water delivery schedules. Additionally, the combination of both programs allows for real-time management and the convergence of river diversions and the water required based on crop demand.
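As a rough illustration of this hand-off, the following Python sketch splits a scheduler CSV into one time-ordered stream per lateral; the column names and file layout are assumptions, and the actual SHEF.A encoding is left to the Vsystem subroutine described above.

```python
import csv
from collections import defaultdict

# Assumed scheduler output columns: lateral, date, recommended_cfs.
streams = defaultdict(list)
with open("dss_schedule.csv", newline="") as f:
    for row in csv.DictReader(f):
        streams[row["lateral"]].append((row["date"], float(row["recommended_cfs"])))

# One plain-text stream per lateral service area, ready for conversion to
# SHEF.A by the SCADA-side subroutine.
for lateral, series in streams.items():
    with open(f"{lateral}_recommended_flow.txt", "w") as out:
        for date, cfs in sorted(series):
            out.write(f"{date} {cfs:.2f}\n")
```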
6. Results
The DSS linked with SCADA has multiple benefits for the MRGCD. These benefits include providing a method for predicting anticipated water use, sustaining efficient agriculture in the valley, easing the difficulty of meeting flow requirements for environmental concerns, and improving reservoir operations. The DSS in concert with SCADA also provides the MRGCD with a powerful tool to manage water during periods of drought.
One of the main benefits that the DSS has for the MRGCD is that the DSS provides a tool to
determine anticipated water use. The need for this arises out of the fact that water delivery
to farms in the MRGCD is not metered and scheduling is inconsistent. This makes it
difficult, if not impossible, to place the appropriate amount of water in a canal at the proper
time. The MRGCD dealt with this in the past by typically operating all canals at maximum
capacity throughout the irrigation season. This provided a high level of convenience for
water users, and made the lack of scheduling unimportant. This practice, however, has had significant negative consequences, not least of which was the public perception of irrigated agriculture: operating canals at maximum capacity resulted in large diversion rates from the Rio Grande. Through the creation in the DSS of a thorough and systematic database of cropping patterns and irrigated acreage, automated processing of climate data into ET values, and incorporation of a flow-routing component that reflects the physical characteristics of the MRGCD canals, it became possible to predict how much water users will need. The DSS, with the myriad of calculations it performs, provides the MRGCD water operations manager with a method to determine water requirements in advance and to manage water using SCADA. Problems of timing and crop stress are addressed, and the MRGCD can operate at reduced diversion levels while serving its water users.
Use of the DSS linked with SCADA also has the benefit that it can aid the MRGCD in
sustaining irrigated agriculture in the Middle Rio Grande Valley. The agricultural tradition
in the Middle Rio Grande Valley dates back over 500 years and one of the main goals of
MRGCD is to sustain the agricultural culture, lifestyle and heritage of irrigation. The
problem facing the MRGCD is how to sustain irrigated agriculture amidst drought and
increased demands for water from the urban and environmental sectors. These demands for
water from other sectors will increase as the population in the Middle Rio Grande Valley
grows and expands. Additionally, the MRGCD will be faced with dealing with periodic
drought and climate change with the possibility of reduced snow melt runoff in the future.
The concept of scheduled water delivery, implemented through the use of the DSS linked
with SCADA provides the MRGCD with the ability to sustain irrigated agriculture with
reduced river diversions. Scheduled water delivery that is based on crop demand calculated
using the DSS and delivered through a highly efficient modernized system will allow the
MRGCD to continue supplying farmers with adequate water for irrigation, even though the
available water supply may be reduced due to natural or societal constraints.
Scheduled water delivery implemented through the use of the DSS and SCADA will also
benefit the MRGCD by easing competition with environmental water needs. Due to the
Endangered Species Act, water operations and river maintenance procedures have been
developed in the Middle Rio Grande Valley to ensure the survival and recovery of the Rio
Grande silvery minnow. These procedures include timing of flow requirements to initiate
spawning and continuous flow throughout the year to provide suitable habitat.
Additionally, the entire Rio Grande in the MRGCD has been designated as critical habitat
for the RGSM. The use of the DSS, linked with SCADA provides the MRGCD with the
ability to reduce river diversions at certain times during the irrigation season. Reduced river
diversions from the MRGCD main canals may at times leave more water in the Rio Grande
for the benefit of the RGSM with credit toward the MRGCD for providing the flow
requirements for the recovery of the species. The DSS may also be useful in providing
deliveries of water specifically for the RGSM. Since the listing of the RGSM, the MRGCD
canal system has been used to deliver a specific volume of water to points along the Rio
Grande to meet flow requirements for the species. At certain times of the year, this is the
only efficient way to maintain RGSM flow targets. While not presently incorporated in the
DSS, it would be straightforward to specify delivery volumes at specific points in the
MRGCD system. These delivery volumes for the RGSM would be scheduled and routed in a
similar fashion to agricultural deliveries. Depending on the outcome of the current process
of developing RGSM management strategies, this could someday become a very important
component and benefit of the DSS.
One of the significant benefits of using the DSS with SCADA in the MRGCD is improved
reservoir operations. The storage reservoirs in the MRGCD are located in the high
mountains and it takes up to seven days for released water to reach irrigators. Prior to
scheduled water delivery utilizing the DSS, the MRGCD water manager had to guess at
what the demand for a main canal would be in advance and then release stored water to
meet the assumed demand. Through the use of the DSS and SCADA the water operations
manager can utilize historical climate data to predict what the agricultural demand will be
in the future. This allows for an accurate calculation of the required release from reservoir
storage and minimizes superfluous releases. These reduced reservoir releases have
significant benefits for the MRGCD. Since less water is released from storage reservoirs
during the irrigation season it allows for increased storage throughout the season. This
allows the MRGCD to stretch the limited storage further and minimize the impacts of
drought. Decreases in reservoir releases also have the added benefit of providing more
carryover storage for the following irrigation season, providing greater certainty for water
users in subsequent years. Larger carryover storage also translates to less empty space to fill
during spring runoff. This leads to three benefits for the MRGCD. The first is that reservoirs
can still be filled, even in a year when runoff is below average. The second benefit is that in
above average runoff years the reservoirs will fill quickly and much of the runoff will go
downstream, mimicking the hydrograph before the construction of upstream storage
reservoirs. This is a subtle but significant environmental benefit to the Middle Rio Grande
and RGSM as peaks in the hydrograph induce spawning, provide river channel equilibrium
and affect various other ecosystem processes. The third benefit is that increased downstream
movement of water during the spring will aid the state of New Mexico in meeting Rio
Grande compact obligations to Texas. Overall, scheduled water delivery in the MRGCD
provides significant benefits to reservoir operations and will allow the MRGCD to reduce
reservoir releases, provide more reliable deliveries, increase certainty of full deliveries, and
sustain irrigated agriculture.
In times of surplus water the MRGCD may not utilize the DSS for scheduled water delivery because of the associated increase in management intensity and higher operational costs. In drought years the available water supply will be lower, resulting in tighter and more intensive management of water delivery.
7. Conclusions
Water delivery through the use of a DSS linked with SCADA has applications and benefits
that can be realized throughout the arid regions of the world. The main benefit of scheduled
water delivery utilizing a DSS and SCADA is that diversions can more closely match crop water requirements, eliminating spills and drainage problems. In the American West, urban growth and environmental concerns have forced irrigated agriculture to reduce diversions.
In many irrigated systems in the United States that traditionally used surface application,
agriculture has opted to improve water use efficiency by changing water application
methods to sprinkler and drip irrigation systems. Irrigated agriculture throughout most of
the world still relies on surface application and cannot afford to upgrade to systems such as
sprinkler or drip irrigation. Therefore, water delivery utilizing a DSS with SCADA has the
potential to reduce diversions and sustain agriculture.
The DSS, SCADA, and scheduled water delivery also have the potential to meet future agricultural demands in developing regions throughout the world. As the world population continues to grow, there will be an increased demand for food production, and in many cases
water resources available for agriculture are already fully utilized. Utilizing a DSS linked to
SCADA would allow water users in developing countries with surface application systems
to conserve water from their current practices and apply the saved water to increased food
production.
The DSS could also be used to refine water delivery scheduling. Many arid regions have
been dealing with water shortages for decades and have already implemented scheduled
water delivery. In most cases, water delivery schedules are based on a set interval of time
and do not coincide with crop demand. In areas where this type of scheduling is practiced
the DSS could be used to refine scheduling protocols to include crop demand. Scheduling water deliveries based on crop demand would provide additional savings in areas where scheduled water delivery is already implemented. The developed DSS could be utilized in any irrigation system worldwide that practices surface irrigation techniques. Through scheduling based on crop demand, overall diversions could be significantly reduced. Reduced diversions could help irrigators deal with drought and climate change by allowing for increased utilization of stored water. Additionally, reduced diversions could be utilized
to grow supplementary crops to supply the needs of a growing population in the future.
8. Acknowledgements
The authors would like to thank Subhas Shah, the MRGCD Board of Directors and staff, the New Mexico Interstate Stream Commission, the New Mexico Office of the State Engineer, the Middle Rio Grande Endangered Species Act Collaborative Program, the United States Army Corps of Engineers, and the United States Bureau of Reclamation for the assistance and the financial support to undertake this project. Also, the exceptional support of Jim Conley at Control Design, Gerald Robinson and Lee Allen at Aqua Systems 2000, and Cathy Laughlin and Peter Clout of Vista Control Systems is gratefully acknowledged. The authors would also like to acknowledge the support of the National Science Foundation.
9. References
Barta, R. 2003. Improving Irrigation System Performance in the Middle Rio Grande
Conservancy District. Master’s Thesis. Department of Civil Engineering, Colorado
State University. Fort Collins, Colorado
Gensler, D. Oad, R. and Kinzli, K-D. 2009. Irrigation System Modernization: A Case Study
of the Middle Rio Grande Valley. ASCE Journal of Irrigation and Drainage. 135(2):
169-176.
King, J.P., A.S. Bawazir, and T.W. Sammis. 2000. Evapotranspiration Crop Coefficients as a
Function of Heat Units for Some Agricultural Crops in New Mexico. New Mexico
Water Resources Research Institute. Technical Completion Report. Project No.
01-4-23955.
MRGCD. 2009. http://www.mrgcd.com
MRGCD. 2007. Middle Rio Grande Conservancy District – Keeping the Valley
Green. Flyer of the Middle Rio Grande Conservancy District. Middle Rio Grande
Conservancy District. Albuquerque, New Mexico.
New Mexico Interstate Stream Commission (NMISC); 2006 (authors Ramchand Oad, Luis
Garcia, Dave Patterson and Kristoph-Dietrich Kinzli of Colorado State University,
and Deborah Hathaway, Dagmar Llewellyn and Rick Young of S. S. Papadopulos
& Assoc.); Water Management Decision-Support System for Irrigation in the Middle Rio
Grande Conservancy District; March, 2006.
Oad, R. Garcia, L. Kinzli, K-D, Patterson, D and N. Shafike. 2009. Decision Support Systems
for Efficient Irrigation in the Middle Rio Grande Valley. ASCE Journal of Irrigation
and Drainage. 135(2): 177-185.
Oad, Ramchand and K. Kinzli. 2006. SCADA Employed in Middle Rio Grande Valley to
Help Deliver Water Efficiently. Colorado Water – Newsletter of the Water Center
of Colorado State University. April 2006.
Oad, R., Garcia, L., Kinzli, K-D. and Patterson, D. 2006. Decision Support Systems for Efficient
Irrigated Agriculture. Sustainable Irrigation Management, Technologies and
Policies. WIT Transactions on Ecology and Environment. Wessex Institute of
Technology, Southampton, UK.
Oad, Ramchand and R. Kullman (2006). Managing Irrigated Agriculture for Better River
Ecosystems—A Case study of the Middle Rio Grande. Journal of Irrigation and
Drainage Engineering, Vol. 132, No. 6: 579-586. American Society of Civil
Engineers, Reston, VA.
Shah, S.K. 2001. The Middle Rio Grande Conservancy District. Water, Watersheds, and Land
use in New Mexico: Impacts of Population Growth on Natural Resources, New
Mexico Bureau of Mines and Mineral Resources. Socorro, New Mexico. 123-125.
S.S. Papadopulos and Associates, Inc. (SSPA). 2000. Middle Rio Grande Water Supply
Study. Boulder, Colorado.
S.S. Papadopulos and Associates, Inc. (SSPA). 2002. Evaluation of the Middle Rio Grande
Conservancy District Irrigation System and Measurement Program. Boulder,
Colorado.
United States Fish and Wildlife Service (USFWS). 2003. Federal Register 50 CFR Part 17.
Endangered and Threatened Wildlife and Plants; Designation of Critical Habitat
for the Rio Grande Silvery Minnow; Final Rule. Vol. 68, No. 33. U.S. Department of
Interior.
11
1. Introduction
Non-point source pollution (NPP) is caused by the movement of water over and through the
ground, generally after each rainfall. The runoff picks up and carries away natural and man-
made pollutants, eventually depositing them in water bodies like lakes, rivers and coastal
waters. Thus the pollutants left on the surface from various sources accumulate in receiving
water bodies. For example, crop cultivation requires more use of chemicals and nutrients
than natural vegetative cover like forest and grasslands. Tillage operations affect the soil
structure and the level of chemicals in the soil. Such activities make the nutrient rich topsoil
fragile and cause it to lose more chemicals and soil particles during rainfall. Lands in
residential and development uses, such as lawns and gardens, are managed more intensively, which encourages the generation of even more pollutants. In addition, urban areas have higher proportions of impervious relative to porous surfaces, which result in lower percolation and higher runoff. During precipitation, the runoff carries more nutrients and sediments from agricultural and residential lands, resulting in higher chemical levels and turbidity in the water. Thus increasing urbanization, coupled with increasing use of nutrients and chemicals on agricultural lands, creates significant challenges for maintaining water quality.
Recent water quality studies have focused on developing and successfully applying various biophysical simulation methods to model levels of NPP and to identify the critical locations from which these pollutants originate (Bhuyan et al., 2001; Mankin et al., 1999; Marzen et al., 2000). These models collect and use various geospatial data, facilitating the spatial analysis of the sources and effects of point and non-point pollutants with reference to their origin and geographical locations. The findings of such models help environmental policy planners to understand both the short-term and long-term effects of alternative land management scenarios and to simulate how pollution can be reduced effectively through the institutionalization of best management practices. This study uses the BASINS-SWAT modeling framework, one of the latest biophysical modeling tools, to estimate the effects of land use change on water quality. Simulations are done to estimate nitrogen, phosphorus and sediment loads using land use maps from two time periods. This study demonstrates the use of geospatial technologies to gather and organize reliable and current data as inputs to the BASINS-SWAT model runs.
3. Study area
This study is based on the Mulberry Creek Watershed, a 10-digit coded hydrological unit
(HUC #0313000212) in Harris County, Georgia, nested within the Middle Chattahoochee
watershed (HUC #03130002). The area is situated to the north of Columbus, GA and covers
approximately 50% of the county surface area. The map of the study area is given in Figure 1.
Fig. 1. Map of the study area: the Mulberry Creek sub-watershed (HUC #0313000212) within the Middle Chattahoochee watershed (HUC #03130002). Legend: stream network (NHD), Reach File V1, study area sub-watershed, main watershed, county boundary; scale bar in miles.
The study area experienced substantial growth in its population. As the demographic structure and other socioeconomic conditions in the
area changed between the 1990 and 2000 census years, the land use distribution also changed.
This provided a very good experiment site to estimate the water quality impact of land use
change. Figure 2 compares the land use scenarios within the Mulberry Creek watershed
using the National Land Cover Datasets 1992 and 2001 (NLCD-1992 and NLCD-2001).
Fig. 2. Land use distribution in the Mulberry Creek watershed based on NLCD-1992 and NLCD-2001. Legend: developed land, agricultural land, forest land, other land, sub-watershed boundary.
Because of listings of impaired water, 13 total maximum daily loads (TMDLs) had been approved in the
Middle Chattahoochee Watershed between 1996 and 2004 (EPA, Surf Your Watershed).
4. Methodology
4.1 Data collection and processing
The study uses secondary data obtained from various sources. The sources and processing
of the individual data components is described below.
Core BASINS data: Once the BASINS project was opened, basic datasets were extracted from
various sources using the data download extension within the BASINS program. These files
included various boundary shape files at watershed, county and state levels, soil and land
use grids, locations of water quality stations, bacteria stations, USGS gage stations, permit
compliance systems and National Hydrography Data (NHD). The BASINS view was
projected to Albers Equal-Area Conic projection system. All subsequent data were projected
in the same system using grid projector tool in BASINS.
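For readers who wish to reproduce this kind of pre-processing outside BASINS, the following minimal sketch shows how a single land use grid could be re-projected to an Albers Equal-Area Conic system with the open-source rasterio library. The file names and the EPSG:5070 code are illustrative assumptions; the chapter's own processing used the grid projector tool within BASINS.

```python
# Sketch only: re-project one land use grid to an Albers Equal-Area Conic CRS.
# EPSG:5070 and the file names are assumptions for illustration; the original
# workflow used the BASINS grid projector rather than this library.
import rasterio
from rasterio.warp import calculate_default_transform, reproject, Resampling

dst_crs = "EPSG:5070"  # an Albers Equal-Area Conic definition for the conterminous US

with rasterio.open("nlcd1992_landuse.tif") as src:
    transform, width, height = calculate_default_transform(
        src.crs, dst_crs, src.width, src.height, *src.bounds)
    meta = src.meta.copy()
    meta.update(crs=dst_crs, transform=transform, width=width, height=height)
    with rasterio.open("nlcd1992_landuse_albers.tif", "w", **meta) as dst:
        for band in range(1, src.count + 1):
            reproject(
                source=rasterio.band(src, band),
                destination=rasterio.band(dst, band),
                src_transform=src.transform, src_crs=src.crs,
                dst_transform=transform, dst_crs=dst_crs,
                resampling=Resampling.nearest)  # nearest neighbour keeps class codes intact
```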
NEM data: The 1:24,000 scale, 30x30 m resolution national elevation model (NEM) data for the
entire area were downloaded from the Seamless Data Distribution System of the USGS Web server.
The dataset was converted to grid files using the ArcView Spatial Analyst tool.
Land use data: Two sets of spatial land use data were obtained from Seamless Data
Distribution System of USGS Web Server. The first land use map was the National Land
Cover Database 1992 (NLCD-1992), which was prepared based on satellite images taken
circa 1990. The second land use map was the National Land Cover Database 2001 (NLCD-
2001). This map was prepared from satellite images taken circa 2000. Both of the land cover
maps had 30x30m resolution at the scale of 1:24,000 (USGS MRLC metadata information).
Land use grids were re-projected to match with the NEM grids projection.
Climate data: The SWAT model uses climate information including precipitation, temperature,
wind speed, solar radiation and relative humidity data. However, when no such data are
available, the model has the capability to simulate data based on historical weather observations
from the nearest of more than 800 climate stations (SWAT Users' Manual). Observed daily
precipitation and minimum/maximum temperature data were obtained from the National
Climate Data Center (NCDC) database for ten nearby climate stations between January 1945
and December 2004. Raw precipitation and temperature data were converted into SWAT-readable
individual database files for each station.
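As an illustration of this conversion step (a minimal sketch, not the exact fixed-width format that SWAT expects), a raw NCDC export could be split into one file per station as follows; the column names and the output layout are assumptions.

```python
# Sketch only: split a raw NCDC daily export into one file per climate station
# as a precursor to building SWAT weather inputs. Column names ("STATION",
# "DATE", "PRCP", "TMIN", "TMAX") and the output layout are assumptions; the
# exact SWAT precipitation/temperature file format is not reproduced here.
import pandas as pd

raw = pd.read_csv("ncdc_daily_1945_2004.csv", parse_dates=["DATE"])

for station_id, grp in raw.groupby("STATION"):
    out = grp.sort_values("DATE")[["DATE", "PRCP", "TMIN", "TMAX"]].copy()
    out = out.fillna(-99.0)  # conventional sentinel for missing daily values in SWAT inputs
    out.to_csv(f"station_{station_id}.csv", index=False)
```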
uniform size distribution of sub-basins. Sub-basin parameters were calculated within the
model, which makes use of the DEM and the stream network (Figures 4-5).
Management operations for each type of land use were set to common practices reported in various published extension
service papers. Sedimentation and nutrient deposition were evaluated for different land use
scenarios using NLCD-1992 and NLCD-2001. Since data on wind, solar radiation and
humidity were not available, built-in data from the SWAT model were used. The curve number
and Priestley-Taylor methods were used for modeling rainfall-runoff and evapotranspiration,
respectively.
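To make the runoff component concrete, the sketch below shows the textbook form of the SCS curve number relationship that SWAT applies on a daily basis; the curve number and rainfall depth are arbitrary example values, not parameters from this study.

```python
# Textbook SCS curve number relationship (depths in mm), shown only to
# illustrate the runoff method named above; CN = 78 and 50 mm of rain are
# arbitrary example values.
def scs_runoff_mm(rain_mm: float, curve_number: float) -> float:
    """Daily surface runoff depth estimated with the SCS curve number method."""
    s = 25.4 * (1000.0 / curve_number - 10.0)  # potential maximum retention (mm)
    ia = 0.2 * s                               # initial abstraction
    if rain_mm <= ia:
        return 0.0
    return (rain_mm - ia) ** 2 / (rain_mm + 0.8 * s)

print(round(scs_runoff_mm(rain_mm=50.0, curve_number=78), 1))  # about 12 mm of runoff
```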
5. Results
The watershed was divided into 331 sub-basins. The mean elevation of all sub-basins was
216 meters above sea level ranging from 128 to 368 meters. The mean and median sizes of
sub-basins were 176.2 hectares and 174.4 hectares, respectively. The sub-basin areas ranged
from 14.3 hectares to 438.9 hectares.
HRUs are determined by the combination of land use and soil parameters.
Therefore, the number of HRUs was different when the two land use maps were used. A total of
2089 HRUs were identified by overlaying the soil and land use maps for the 1992 land use
condition. On average each sub-basin contained 6.3 HRUs, with a mean HRU area of 27.9
hectares. When NLCD-2001 land use map was overlaid on the same sub-basins with same
threshold level, a total of 2158 HRUs were identified. On average each sub-basin contained
6.52 HRUs with the mean area of 27.0 hectares.
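As a quick consistency check on these figures (a back-of-the-envelope calculation, not part of the original analysis): the total watershed area is about 331 x 176.2 ha = 58,322 ha, so 58,322 / 2089 ≈ 27.9 ha per HRU for NLCD-1992 and 58,322 / 2158 ≈ 27.0 ha per HRU for NLCD-2001, while 2158 / 331 ≈ 6.5 HRUs per sub-basin.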
Table 2 contains the results from the SWAT runs with the two sets of land cover data, NLCD-1992
and NLCD-2001. Simulations were run for 30 years, from 1975 to 2004. SWAT results for the
initial years are generally treated as warm-up results. Results from the first 10 years were
used for calibration, and the later twenty years of results (1985-2004) were used for
comparative analysis.
Table 2. Mean and standard deviation of annual loadings under NLCD-1992 and NLCD-2001, with the percentage change and p-value for each variable. The p-values are based on a pairwise t-test for the comparison of means and are significant at the 5% level of significance.
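As a sketch of the statistical comparison (with placeholder numbers rather than the actual SWAT output series), the paired t-test could be reproduced with SciPy as follows:

```python
# Sketch only: the pairwise (paired) t-test used to compare mean annual loads
# under the two land use scenarios. The arrays are synthetic placeholders; the
# actual 1985-2004 annual SWAT outputs are not reproduced here.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_1992 = rng.normal(3.0, 0.6, 20)                    # placeholder annual N loads (kg/ha)
n_2001 = n_1992 * 1.05 + rng.normal(0.0, 0.1, 20)    # roughly 5% higher on average

t_stat, p_value = stats.ttest_rel(n_2001, n_1992)
pct_change = 100 * (n_2001.mean() - n_1992.mean()) / n_1992.mean()
print(f"change = {pct_change:.2f}%, p = {p_value:.4f}")  # significant if p < 0.05
```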
Graphical comparisons of sediment loadings and organic nitrogen and phosphorus runoff
under the two land use scenarios are given in Figures 7-9.
Fig. 7. Annual organic nitrogen runoff (kg/ha), 1985-2003, under the NLCD-1992 and NLCD-2001 land use scenarios.
Fig. 8. Annual organic phosphorus runoff (kg/ha), 1985-2003, under the NLCD-1992 and NLCD-2001 land use scenarios.
Fig. 9. Annual sediment loading (t/ha), 1985-2003, under the NLCD-1992 and NLCD-2001 land use scenarios.
6. Discussion
This study provides quantitative findings that demonstrate how changes in land cover can
affect water quality in the catchment area. These results are basic to understanding the
water quality impacts of land use change, and they ultimately help regional planners and
watershed management policy makers by providing estimates of changes in water quality
when land use in an area changes over the long run. King and Balogh (2001) applied the
methodology to simulate water quality impacts associated with converting farmland and
forests to turfgrass. They found SWAT to be a useful tool in evaluating risk assessments
associated with land use conversions.
The application of the model can be extended to include more detailed land use and soil
distributions, facilitating the short-run assessment of point and non-point source pollution.
Using recent satellite images to create maps with details of cropping patterns will help to
understand the impact of alternative best management practices such as minimum or no-
tillage practices and reduced use of fertilizers and pesticides (e.g. Santhi et al., 2001). The
fact that the watershed-level results can be disaggregated to individual hydrological
response units helps to achieve precise estimates of water quality impacts at a much smaller
scale. While these biophysical models are extremely valuable in assessing the physical
impacts of BMPs on the quantity and quality of water bodies, the results can also be combined
into more complex bioeconomic modeling to estimate the impacts of land use changes with
respect to both economic profitability and water quality. Bhattarai et al. (2008) demonstrated
that a reduction in cropland improved water quality with an overall loss in agricultural
returns. Results from these biophysical models, combined with economic analysis, help in
setting up a policy that maintains the balance between water quality and economic
incentives among the stakeholders.
8. Conclusion
Change in land use distribution had a significant impact on the levels of organic nitrogen
and phosphorus runoff and sediment loadings coming out of the watershed. The forest land
decreased by 16.6%, and mostly developed uses and agricultural cultivation took up that
share, resulting in an overall increase in average annual nutrient runoff of 4.76% for nitrogen and
4.55% for phosphorus. Average annual sediment loadings at the basin level increased by
more than 15.04%. With the same rainfall and climate conditions, the water flowing
to the main channel from the catchment area was 2.85% higher. This result confirms the
hypothesis that with less vegetative cover, more impervious surfaces in urban areas and
increasingly fragile agricultural land, there would be less percolation and higher runoff. This
resulted in higher organic nitrogen and phosphorus concentration in the discharged water.
This study also indicates SWAT’s relative strength in quantifying the effect of changes in
land use distribution and land management scenarios on water quality.
9. Acknowledgement
Partial funding for this research was provided by the Center for Forest Sustainability
through the Peaks of Excellence program at Auburn University. Additional funds were also
provided by Auburn University's Environmental Institute. The views expressed in this
article are those of the authors and do not necessarily represent the views of the funding
institutions or the institutions the authors represent.
10. References
Bhattarai, G., P. Srivastava, L. Marzen, D. Hite, and U. Hatch. 2008. Assessment of Economic
and Water Quality Impacts of Land Use Change Using a Simple Bioeconomic
Model. Environmental Management 42(1): 122-131.
Bhuyan, S.J., L.J. Marzen, J.K. Koelliker, J.A. Harrington Jr., and P.L. Barnes. 2001.
“Assessment of Runoff and Sediment Yield Using Remote Sensing, GIS and
AGNPS.” Journal of Soil and Water Conservation 57(5):351-364.
Borah, D.K., and M. Bera. 2003. SWAT Model Background and Application Reviews. Paper
presented at the American Society for Engineering in Agricultural, Food, and
Biological Systems Annual International Meeting, Las Vegas, Nevada, USA. July
27-30, 2003.
DiLuzio, M., R. Srinivasan, J.G. Arnold, and S.L. Neitsch. 2002. ArcView Interface for
SWAT2000: User’s Guide. TWRI Report TR-193, Texas Water Resources Institute,
Texas. 2002.
Intarapapong, W., D. Hite, and L. Reinschmiedt. 2002. “Water Quality Impacts of
Conservation Agricultural Practices in the Mississippi Delta.” Journal of the
American Water Resources Association, 38(2):507-515, April 2002.
King, K. W. and J. C. Balogh. 2001. Water Quality Impacts Associated with Converting
Farmland and Forests to Turfgrass. Transactions of the ASAE 44(3): 569-576.
Liao, H., and U.S. Tim. 1997. “An Interactive Modeling Environment for Non-point Source
Pollution Control." Journal of the American Water Resources Association 33(3):1-13.
Maidment, D. R. (ed.). 2002. Arc Hydro: GIS for Water Resources. ESRI Press, Redlands,
California (pp. 73).
Mankin, K. R., J. K. Koelliker, and P. K. Kalita. 1999. “Watershed and Lake Water Quality
Assessment: An Integrated Modeling Approach.” Journal of the American Water
Resources Association 35(5):1069-1080.
Marzen, L.J., S.J.Bhuyan, J.A. Harrington, J.K. Koelliker, L.D. Frees, and C.G. Volkman. 2000.
“Water Quality Modeling in the Red Rock Creek Watershed, Kansas.” Proceedings of
the Applied Geography Conference 23: 175-182.
Neitsch, S.L., J.G. Arnold, J.R. Kiniry, J.R. Williams, and K.W. King. 2002. Soil and Water
Assessment Tool: Theoretical Documentation, Version 2000. Texas Water Resources
Institute, College Station, Texas, TWRI Report TR-191.
Neitsch, S.L., J.G. Arnold, J.R. Kiniry, R. Srinivasan, and J.R. Williams. 2002. Soil and Water
Assessment Tool: User’s Guide, Version 2000. Texas Water Resources Institute,
College Station, Texas, TWRI Report TR-192.
Saleh, A., J.G. Arnold, P.W. Gassman, L.W. Hauck, W.D. Rosenthal, R.R. Williams, and
A.M.S. McFarland. 2000. Application of SWAT for the upper north Bosque
Watershed. Transactions of the ASAE 43(5):1077-1087.
Santhi, C., J. G. Arnold, J. R. Williams, L. M. Hauck, and W. A. Dugas. 2001. Application of
a Watershed Model to Evaluate Management Effects on Point and Nonpoint
Source Pollution. Transactions of the ASAE 44(6): 1559-1570.
U.S. Department of Agriculture Natural Resources Conservation Service (USDA-NRCS).
2002. Major Land Resource Area Regional #15. Auburn, AL. Available online at
http://www.mo15.nrcs.usda.gov/alsoilnt/almlramp.html.
United States Census Bureau, 1990. States and County Quick Facts. Online at:
http://quickfacts.census.gov/qfd/index.html
United States Census Bureau, 2000. States and County Quick Facts. Online at:
http://quickfacts.census.gov/qfd/index.html
United States Environmental Protection Agency (USEPA). Total Maximum Daily Loads
Factsheet. Available online at
http://www.epa.gov/owow/tmdl/ (accessed 11/18/2003)
United States Environmental Protection Agency (USEPA). 1998. “National Water Quality
Inventory: 1996 Report to Congress.” Washington, DC.
12

Flood Progression Modelling and Impact Analysis
1. Introduction
People living in the lower valley of the St. John River, New Brunswick, Canada, frequently
experience flooding when the river overflows its banks during spring ice melt and rain.
Media reports reveal devastating effects of the latest floods that hit New Brunswick, Canada
during the summer of 2008 (CTV, 2008). The rising water levels forced the closure of the
New Brunswick Legislature, and also resulted in the temporary closures of the international
bridge that links the Province to the United States of America. It also closed the operation of
the Gagetown ferry. Fredericton experienced its worst floods in 1973, when the St John River
reached the 8.6 m mark (ENB-MAL, 1979).
The 2008 flood was recorded as one of the major floods experienced in Fredericton after the
1973 flood (see Figures 1 and 2). On April 24th 2008, due to rapid melting of snow set by an
unusually severe winter and combined with intense rainfall, the water level of St. John River
reached 7.2 m (TC, 2008). The water level in Fredericton rose by a meter overnight to 8.33
m on May 1st. The rising St. John River peaked at 8.36 m on May 2nd, almost reaching the
previous record of 8.61 m set in 1973 (CIWD, 1974).
The closure of roads and government buildings, followed by the evacuation of people and their
possessions, is necessary during floods to avoid the loss of life and property. Rising river
water levels could affect electrical, water and telecommunication facilities. They could also
affect the sewage system; in such a situation, the washrooms in public buildings cannot be
used. The question that government officials face is: "When can an office be
declared risky to occupy?" Decisions to close public utilities require strong reasons,
and such decisions have social, economic and political impacts on the community.
City managers will require reliable support for the decision to close down government
infrastructure. To facilitate this process, we propose the use of 3D flood modelling.
2. Previous research
Since ancient times, humans have developed means to monitor flood levels and to some
extent, predict the rate of flood rise. People in medieval times marked animals and pushed
them down the river to see how deep the river was. According to the director of EMO,
Ernest McGillivray, his grandfather had a self-calibrated stick that he used to insert in the
river to compare and forecast river flood levels.
Today, more innovative technologies have been developed to study flooding (Jones, 2004;
Marks & Bates 2000). These include satellite remote sensing, aerial photogrammetry and
LiDAR. Such technologies are combined with computer terrain modelling tools to create
scenarios for analysis, as in the case of Geographic Information Systems (GIS).
Previously, the New Brunswick Emergency Measures Organization (EMO) in collaboration
with the University of New Brunswick, Canada developed a flood model (available from
http://www.gnb.ca/public/Riverwatch/index-e.asp) for the area of lower St. John River,
New Brunswick, Canada (EMO, 2008).
The research done so far on early mapping and flood monitoring in the lower St. John
produced a web-based system for flood warning (Mioc et al., 2008; Mioc et al., 2010),
publicly available to the population living in this area (see Figures 3 and 4). However, the
accuracy of existing elevation data was a major problem in this project. The maps we were
able to produce were not accurate enough to effectively warn the population and to plan the
evacuation. Another problem we faced in the 2008 flood was that even though the buildings
were not affected by the flood, the electrical and water facilities were not functioning, so the
buildings and houses were uninhabitable. The elevated groundwater levels would affect the
power cables, causing power outages, and the old plumbing installations would cause sewer
back-propagation due to the pressure of the rising water from the St. John River.
Current visualization methods cannot adequately represent the different perspectives of the
affected infrastructure. It can be seen that the building models are represented by polygons
only and the detailed 3D views of the buildings and terrain affected by the flooding do not
exist. As a result, the public may not have adequate technical or analytical know-how to
analyze the polygons. Therefore, they may not find the River Watch website very useful (see
Figure 4). To overcome these problems we had to develop a new digital terrain model and
new techniques for flood modelling. In order to improve the existing digital elevation model
we needed to use new technologies for data acquisition (Moore et al., 2005).
cross-sections over a period of four days (May 1st to May 4th, 2008) are shown in Table 1.
Tidal gauge readings were extended to a profile across the river in order to facilitate spatial
analysis.
together with the Digital Terrain Model, the data originally available in geographic
coordinates on the ellipsoid defined by the WGS84 geodetic datum are finally
transformed to UTM and the CGVD28 vertical datum. To reduce the dataset and improve
the processing speed, orthometric heights above 20 m were filtered out, since we did not
expect flood levels to go above this height. The transformed coordinates are used as the
input to the GIS in order to build a Digital Terrain Model (DTM). Layers containing
flood profiles and polygons are created for each day of the extreme flood (1st to 4th of May)
in 2008. In addition, the extreme flood from 1973 was modelled as well and used for
comparison. Existing utilities and buildings in downtown Fredericton were modelled in 3D
and geo-referenced to the WGS84 geodetic datum and the CGVD28 vertical datum. All the
developed models and DTMs are then combined under the same coordinate system.
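A minimal sketch of the horizontal re-projection and the 20 m filter is given below, assuming pyproj and UTM zone 19N (EPSG:32619) for the Fredericton area; the geoid-based conversion of ellipsoidal heights to CGVD28 orthometric heights is assumed to have been carried out beforehand and is not shown.

```python
# Sketch only: re-project points from WGS84 geographic coordinates to UTM and
# discard points whose orthometric height exceeds 20 m. EPSG:32619 (UTM 19N)
# is an assumption for this area; the CGVD28 heights are assumed to have been
# derived already via a geoid model, which is not shown here.
import numpy as np
from pyproj import Transformer

lon = np.array([-66.66, -66.64, -66.63])   # placeholder longitudes (degrees)
lat = np.array([45.96, 45.95, 45.97])      # placeholder latitudes (degrees)
h = np.array([5.2, 18.7, 24.1])            # placeholder orthometric heights (m)

to_utm = Transformer.from_crs("EPSG:4326", "EPSG:32619", always_xy=True)
easting, northing = to_utm.transform(lon, lat)

keep = h <= 20.0                            # drop terrain the flood cannot reach
points = np.column_stack([easting[keep], northing[keep], h[keep]])
print(points.shape)                         # (2, 3): one point was filtered out
```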
For an efficient decision support system, different flood scenarios were modelled for
different flood levels (Sanders et al., 2005; Dal Cin et al., 2005). The technologies for 3D
modelling are available in commercial and public domain software. One such tool that was
used in this project was Google SketchUp (Chopra, 2007). Although limited in its interaction
with other spatial features, Google SketchUp provided the tools needed to model the main
features of the buildings (building geometry and the texture of the façade and the roof)
and surrounding utilities for the test area. As part of future work, advanced tools such as
CityGML (Kolbe et al., 2008) could be used for more interactive results.
Fig. 10. Extracted building polygons (thin brown lines) compared with Cadastral buildings
blueprints (thick red lines)
Fig. 11. The workflow of detailed methods and techniques applied for modelling of flooded
buildings and infrastructure.
Fig. 13. The 3D model of buildings overlaid over the water and electrical utilities
areas and risk probabilities. However, water levels obtained by hydraulic modelling do not
tell much about the severity and extent of a flood. This motivates the modelling and
visualization of predicted flood areas using GIS. The spatial delineation of flood zones using
GIS has become a new research area following the advancement of technologies for data
collection (Noman et al, 2001, 2003).
This research uses different spatial analysis tools to create floodplains from LiDAR data in
Fredericton area and water gauges for Saint John River. For hydrological modelling we
used DWOPER (Fread, 1992, 1993 ; Fread & Lewis, 1998) (as described in Mioc et al., 2010).
The results of hydrological modelling were then used within a GIS for 3-dimensional
modelling of flood extents (Mioc et al., 2010).
Following our processing workflow (see Figure 11), there were two main objectives in this
part of the research:
1. Compare the DTM resulting from LiDAR data with the DTMs resulting from water gauges
to find the flood extents for the Fredericton downtown area.
2. Create a single TIN from both the LiDAR data and the water gauge data to calculate the
volume difference, which represents the floodplain (a simplified gridded sketch of this comparison is given after this list).
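The sketch below illustrates the second objective in its simplest gridded form, assuming the water surface and the LiDAR terrain have already been resampled to a common grid; the arrays are synthetic placeholders and a regular grid stands in for the TINs used in the project.

```python
# Minimal sketch of flood delineation on a common grid: keep the cells where
# the interpolated water surface lies above the LiDAR terrain, and sum the
# depth differences to get a flood volume. Synthetic arrays stand in for the
# project's TIN surfaces.
import numpy as np

cell = 1.0                                                     # grid cell size (m)
dtm = np.random.default_rng(1).uniform(2.0, 12.0, (200, 200))  # placeholder terrain (m)
water_surface = np.full_like(dtm, 8.36)                        # e.g. the May 2nd peak level (m)

depth = water_surface - dtm
flooded = depth > 0.0                                          # floodplain mask
area_m2 = flooded.sum() * cell * cell
volume_m3 = depth[flooded].sum() * cell * cell                 # volume between the two surfaces

print(f"flooded area: {area_m2:.0f} m^2, flood volume: {volume_m3:.0f} m^3")
```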
The flood progression from May 1st to May 4th, 2008 is analyzed (see Figure 12). Using the
results of spatial analysis, a flood prediction model was developed for emergency planning
during future floods. In this phase of our research, we were able to obtain the delineation of
flood zones using LiDAR and Tidal height information (available from water gauges). In
addition, we were able to integrate a number of processes that make flood forecast possible:
the acquisition and processing of the elevation data; the use of hydrological software to
simulate models of flow across floodplains; the use of spatial analysis software (GIS) that
turns the modelling results into maps and overlays them on other layers (thematic maps or
aerial photographs); and software that makes these models and predictions available on the
Internet in a flexible and user-friendly way.
From the computed floodplain, displayed as superimposed polygon layers, it can be seen
that the major increase in flood extent occurred from May 1st to May 2nd, with the flood peak on
May 2nd. The flood then subsided from May 3rd to May 4th. The system we
developed allows the computation of floodplain for predicted flood peak (shown in red on
Figure 12) that is critical for emergency managers. Furthermore, the flood modelling results
are used to develop a three dimensional model of flooded buildings combined with some
city infrastructure, roads, water and electrical utilities (see Figures 14, 15 and 16).
The resulting 3D view not only displays a clear-to-nature scenario, but also provides a more realistic outlook of the buildings
and infrastructure during floods. Finally, a flood scene was produced for each of the
forecasted flood levels for visualization via a Web interface.
Using different extrusion heights to represent different flood scenes, it is possible to
simulate different flood progression events. The pictorial scene, representation of the
building, floods and its effects can be clearly visualized and analyzed. Figure 15 shows a 3D
view of the flooding in May 2008. The 3D buildings and utilities can be seen consecutively
as a result of applying transparency to the thematic layers. In Figure 16, the utility lines are
embedded in the 3D models. Figures 17 and 18 present a 3D visual model of the submerged
newly built Public Washroom at the Military Guard Square, in Fredericton, Canada in an
extreme flood scenario. At this level of water rise, the electrical boxes would be flooded. It is
visible from the model that during flooding, surrounding areas including the Public Library,
the Armory and the Military depots will be out of use and certainly inaccessible.
Fig. 14. The 3D model of buildings overlaid over the water and electrical utilities that will be
affected by the flood
The DTMs in Figures 15 and 16 show the natural critical point at which water will begin to
flow upwards. When ground water rises up to this level, waste matter from the sewage
system will flow upward, under pressure from rising water from the river. The washroom
facility may not be used under these circumstances. Computing the floodplain for the 20-year
statistical flood showed that many parts of Fredericton may have been built on a
floodplain. The new analysis of DTMs combined with the groundwater levels shows that, if
groundwater levels rise across the city, many homes and governmental buildings will be
flooded. The electrical utilities and sewage system that are laid underground will be
affected as well, resulting in sewer back-propagation and electric power outages.
The situation is worst in the downtown area, which has the lowest elevations. Priority
emergency decisions can be made in this situation to close the downtown offices and
infrastructure first, at the start of rising water levels. Based on the results of overlaying
the floodplains with the existing utilities and infrastructure, it can be decided when it is risky to
occupy or use the buildings or the infrastructure.
Fig. 15. 3D models of selected buildings in Fredericton integrated with DTM and flood
model
Daily automatic generation of flood polygons from the data provided by the existing online
River Watch application (see http://www.gnb.ca/public/Riverwatch/index-e.asp) can
produce an animated 3D video, which can be uploaded on the Riverwatch website to
provide updated 3D models to residents.
It can be seen clearly from the comparison of the model and the photograph captured during
the 2008 flood (shown in Figure 19) that the levels of flooding are the same: the water just
touches the back gates of the Fredericton Public Library in both. The accuracy of the
flood model depends on the vertical accuracy of the LiDAR datasets and the accuracy of the
hydrological modelling.
The 3D GIS application provides a better platform for visualizing flood situations than
previously done in 2D maps. All exterior parts of a building could be visualized in detail
during a flood event. This provides a better tool for analyzing and preparing for emergency
measures. It also presents a near-to-reality situation that can easily be understood. Provincial
Ministers and decision makers who may not be familiar with GIS analytical tools and Query
Languages can now understand technical discussions on flood analysis through the use of
3D flood models, which are close to reality. It is also possible to simulate floodplain
polygons for different river water levels in order to produce different flood scenarios.
Simulation can also be used to trace and analyze underground utilities by making thematic
layers transparent. Flood scene animations can be published to a website for public access.
Fig. 17. 3D washroom, with power box, in the extreme 100-year simulated flood
Fig. 18. 3D model of a washroom, with electrical power box, in 1973 simulated flood
Fig. 19. 3D Model compared with Photograph taken during the flood in 2008
5. Conclusions
The new flood prediction model, which accurately computes floodplain polygons directly from
the results of hydrological modelling, allows emergency managers to assess the impact of the
flood before it occurs and to better prepare for evacuation of the population and flood rescue.
The method of simulating and predicting floods and their effects on utilities provides a
powerful visual representation for deciding when buildings in the flood zone
may be safe for people to occupy. Traditional paper maps and digital maps do not offer
the possibility of 3D visualization of the detailed effect of a flood on utilities and
infrastructure.
This research explores the application of 3D modeling using LiDAR data to provide an
analysis of the risk of floods on government buildings and utilities. LiDAR data provides a
cheaper, faster and denser coverage of features for 3D mapping. LiDAR data was processed
to generate 3D maps. By employing accurate coordinate conversion and transformations
with respect to the geoid, a Digital Terrain Model (DTM) was created. Floodplain
delineation was computed by intersecting the Digital Terrain Model with the simulated
water levels. Furthermore, to enhance visual perception of the upcoming flood, 3D
buildings, infrastructure and utilities were modelled for the city downtown area. The DTM
and the 3D models of the government buildings, infrastructure and utilities were overlaid
and presented as a 3D animation. The resulting 3D view does not only register a clear-to-
nature scenario, but also provides a more discerning outlook of the buildings and
infrastructure during floods. Finally, in this research we have clearly shown that GIS and
LiDAR technologies combined with hydrological modelling can significantly improve the
decision making and visualization of flood impact needed for early emergency planning
and flood rescue.
6. Acknowledgment
The authors would like to acknowledge the generous contribution of time, materials and
resources to this project by the New Brunswick Department of Transportation.
This project was financially supported in part by the N.B. Emergency Measures
Organization and the Canadian Department of Natural Resources Geoconnections program
as well as the University of New Brunswick and the New Brunswick Innovation Foundation
(NBIF). The authors would also like to acknowledge the financial contribution of EMO to
the acquisition of new LiDAR datasets (available for this project) after the flood in 2008.
7. References
Alharthy, A. & Bethel, J. (2002). Heuristic filtering and 3D feature extraction from LiDAR
data. International Archives Of Photogrammetry, Remote Sensing and Spatial Information
Sciences, Vol. 34, No.3/A, pp. 29-34.
Canada Inland Waters Directorate, Atlantic Region, (CIWD), (1974), New Brunswick Flood,
April-May, 1973, Ottawa: Inland Waters Directorate, Atlantic Region, 1974, 114
pages.
Chopra, A., (2007). Google SketchUp for Dummies, John Wiley & Sons Ltd., Hoboken, US, 398
pages.
Canadian Television (CTV), (2008). News Headlines, Accessed on May 1, 2008, from
http://www.ctv.ca.
Dal Cin, C., Moens, L., Dierickx, PH., Bastin, G., and Zech, Y. (2005). “An Integrated
Approach for Realtime Floodmap Forecasting on the Belgian Meuse River”. Natural
Hazards 36 (1-2), 237-256, Springer Netherlands. [Online] June 10, 2009.URL:
http://springerlink.com/content/tq643818443361k8/fulltext.pdf
Elaksher, A. F. & Bethel J. S. (2002a). Building extraction using LiDAR data. Proceedings of
ASPRS-ACSM annual conference and FIG XXII congress, 22 pages.
Elaksher, A.F. & Bethel J.S. (2002b). Reconstructing 3D buildings from LiDAR data.
International Archives Of Photogrammetry Remote Sensing and Spatial Information
Sciences Vol. 34, No. 3/A, pp. 102-107.
EMO Fredericton, (EMO), (2008). Flood Warning website, Available from:
http://www.gnb.ca/public/Riverwatch/index-e.asp.
Environment New Brunswick & MacLaren Atlantic Limited, New Brunswick, (ENB-MAL),
(1979). Canada-New Brunswick Flood Damage Reduction Program:
Hydrotechnical Studies of the Saint John River from McKinley Ferry to Lower
Jemseg, Fredericton, 116 pages.
Fread, D.L. (1992). Flow Routing, Chapter 10, Handbook of Hydrology. Editor E.R.
Maidment, pp. 10.1-10.36.
Fread, D.L. (1993). NWS FLDWAV Model: The Replacement of DAMBRK for Dam-Break
Flood Prediction, Dam Safety’93. Proceedings of the 10th Annual ASDSO Conference,
Kansas City, Missouri, pp. 177-184.
Fread, D.L. & Lewis, J.M. (1998). NWS FLDWAV MODEL: Theoretical description and User
documentation, Hydrologic Research Laboratory, Office of Hydrology, National
Weather Service (NWS), Silver Spring, Maryland USA, November, 335 pages.
Hodgson, M.E. & Bresnahan, P. (2004). Accuracy of airborne LiDAR-derived elevation:
Empirical assessment and error budget. Photogrammetric Engineering & Remote
Sensing, Vol. 70, No. 3, pp. 331-40.
Jones, J. L. (2004). "Mapping a Flood...Before It Happens". U.S. Geological Survey Fact Sheet
2004-3060. [Online] June 5, 2009., Available from
http://pubs.usgs.gov/fs/2004/3060/pdf/fs20043060.pdf
Kraus, K. & Pfeifer, N. (2001). Advanced DTM generation from LiDAR data. International
Archives Of Photogrammetry Remote Sensing And Spatial Information Sciences
34(3/W4):23-30.
Kolbe, T. H., Gröger, G., & Plümer, L. (2008). CityGML - 3D City Models and their Potential
for Emergency Response. In: Zlatanova, S. and Li, J. (Eds.) Geo-Information
technology for emergency response. ISPRS book series, Taylor&Francis, pp. 257-
274.
Marks, K. & Bates, P. (2000). Integration of high-resolution topographic data with floodplain
flow models. Hydrological Processes, Vol. 14, No. 11, pp. 2109-2122.
Mioc, D; Nickerson B.; McGillivray, E.; Morton A.; Anton, F.; Fraser, D.; Tang, P. & Liang, G.
(2008). Early warning and mapping for flood disasters, Proceedings of 21st ISPRS
Conference, China, Bejing, 2008, 6 pages.
Mioc, D., Nickerson, B., Anton, F., MacGillivray, E., Morton, A., Fraser, D., Tang, P. &
Kam, A. (2010). Early Warning And On-Line Mapping For Flood Events, Geoscience
13

Providing Efficient Decision Support for Green Operations Management: An Integrated Perspective
1. Introduction
Green operations management (GOM) has emerged to address the environmental and social
issues in operations management, so that the Triple Bottom Line (3BL) sustainability can be
achieved simultaneously (Rao & Holt, 2005; Zhu, Sarkis & Geng, 2005). The concept of GOM
was originally formed in the 1990s. Early research on GOM mainly directed towards
segmented areas of operations management, such as quality management. Although GOM has
attracted significant research interest from academia over the past decades, and although the
needs for and benefits of GOM for sustainable development cannot be overemphasised
(Svensson, 2007), many issues remain under-addressed, which has hindered the effectiveness
of GOM practice (Zhao & Gu, 2009; Yang et al, 2010). One of the main reasons for
GOM lagging behind quality management advances has been identified as lack of true
integration of environmental and social objectives into business operations, i.e.
environmental management and social values were viewed as narrow corporate legal
functions, primarily concerned with reacting to environmental legislation and social codes
of practice. Subsequently research and managerial actions focused on buffering the
operations function from external forces in order to improve efficiency, reduce cost, and
increase quality (Carter & Rogers, 2008; White & Lee, 2009). Research further reveals that the
root cause behind a company's isolated approach to 3BL sustainability is not that
managers fail to appreciate the importance and urgency of addressing it, but the lack of
efficient support for managing the complexity of sustainable decisions, especially
the provision of a powerful analysis approach to support effective decision evaluation (Hill,
2001; Taylor & Taylor, 2009; Zhao & Gu, 2009).
This paper proposes an integrated sustainability analysis approach to provide holistic
decision evaluation and support for GOM decision making. There are two key objectives to
explore the integrated approach: (a) to understand the GOM decision support requirements
from a whole life cycle perspective; (b) to address the GOM decision support issue using
multiple decision criteria. Based on a case study in production operations area, the paper
concludes that the integrated sustainability analysis can provide more efficient and effective
support to decision making in GOM.
This chapter is organised as follows. The next section reviews related work on methods and
tools that have been developed to address GOM decision issues, and identifies the gap in
literature. Section 3 proposes an integrated approach for systematic analysis of sustainability
in GOM. Application of the integrated approach to real operations situation is discussed in
Section 4. Then Section 5 reflects on the strengths and limitations of the proposed approach,
and draws conclusions.
2. Related work
Sustainability or sustainable development was first defined by the World Commission on
Environment and Development as the “development that meets the needs of present
generations while not compromising the ability of future generations to meet their needs”
(WCED, 1987). It has then been recognised as one of the greatest challenges facing the world
(Ulhoi, 1995; Wilkinson et al, 2001; Bateman, 2005; Espinosa et al, 2008). For development to
be sustainable, it is essential to integrate environmental, social and economic considerations
into the action of greening operations (i.e. the transformation processes which produce
usable goods and services) (Kelly, 1998; Gauthier, 2005; Lee & Klassen, 2008), as operations
have the greatest environmental and social impacts among all business functions of a
manufacturer (Rao, 2004; Nunes & Bennett, 2010). In the context of sustainable
development, operations have to be understood from a network’s perspective, that is,
operations include not only manufacturing, but also design and supply chain management
activities across products, processes and systems (Liu and Young, 2004; Geldermann et al,
2007). Without proper consideration of inter-relationships and coherent integration between
different operations activities, sustainability objectives cannot be achieved (Sarkis, 2003; Zhu
et al, 2005; Matthews et al, 2006). There has been a series of overlapping endeavours and
research efforts in literature aiming to address the environmental and social issues in
operations management, but earlier efforts have been mostly segmented and uncoordinated.
More recently, GOM has been investigated from a more integrative perspective instead of a
constraint perspective, where environmental management and corporate social
responsibility are viewed as integral components of an enterprise’s operations system (Yang
et al, 2010). It means that the research foci have shifted to the exploration of the coherent
integration of environmental and corporate social responsibility with operations
management through business managers’ proactive decision making (Ding et al, 2009; Liu et
al, 2010). Figure 1 illustrates the key ideas of achieving sustainability objectives through
holistic decision making in a sustainable operations system. Compared with traditional
open operations model, i.e. the input-transformation-output model (Slack et al, 2010), there
are four key features in the sustainable operations system:
1. A sustainable operations system is a closed rather than an open system, i.e. the material
(including waste and used products), energy and information produced by the
operations system should be treated and fed back as inputs to keep the system self-
sustainable;
2. The transformation process not only includes the functions and activities that produce
products and services, but also those that deliver environmental and social improvements;
3. Wider stakeholders’ requirements need to be addressed. Apart from customers, other
important stakeholders include environment, community, employee, public etc.
Therefore, the requirements such as discharge from operations process, information
and social benefits have to be properly addressed;
4. The role of suppliers is changing. Apart from the provision of materials, energy,
information and human resources, suppliers are also responsible for recovering used
products and materials.
2003). Unfortunately, such situations are rare, while benefits from sustainability efforts have
been elusive. Practitioners continue to grapple with how sustainability analysis should be
undertaken, due to the complexities and uncertainties of environmental systems involved
and imperfections of human reasoning (Hertwich et al, 2000; Allenby, 2000). According to
Hall & Vredenburg (2005), innovating for sustainable development is usually ambiguous:
it is often not possible to identify key parameters, or conflicting pressures are
difficult to reconcile, and such ambiguities make traditional risk assessment techniques
unsuitable for GOM. Researchers further argue that sustainability analysis frequently
involves a wide range of stakeholders, many of which are not directly involved with the
company’s production operations activities. Decision makers are thus likely to have
significant difficulties in reaching the right decisions if efficient support is not available.
Powerful systematic analysis methodologies have great potential in guiding the decision
makers to navigate through the complexities and ambiguities (Matos & Hall, 2007). This
section reviews the most influential analysis methodologies that could facilitate efficient and
effective GOM decision making: life cycle assessment and multi-criteria decision analysis.
these analytical tools can generate important insights into environmentally conscious
practices in organisations, and there are close interdependencies between PLC and OLC. For
example, in the PLC introduction phase, procurement is more influential than production for
sustainable practices, whilst in the maturity and decline stages of the PLC, efficient end-of-life
and reverse logistics are more influential than distribution operations. It is also not difficult
to understand that distribution decisions such as facility locations and modes of
transportation will not only influence the forward but also the reverse logistics networks
(Bayazit & Karpak, 2007; Chan et al, 2010). However, it is widely acknowledged that
environmental methods (LCA in general, PLC and OLC analysis in specific) should be
“connected” with social and economic dimensions to help address the 3BL, and that this is
only meaningful if they are applied to support decision making process and are not just a
“disintegrated” aggregation of facts (Matos & Hall, 2007). It is advantageous that PLC and
OLC analysis are conducted to obtain a more holistic picture of the economic and ecological
impacts of production operations (Neto, et al, 2010).
Sarkis, 2010) and allows both interaction and feedback within clusters of elements (inner
dependence) and between clusters (outer dependence). Such interaction and feedback best
captures the complex effects of interplay in sustainable production operations decision
making (Gencer & Gurpinar, 2007). Both ANP and AHP derive ratio scale priorities for
elements and clusters of elements by making paired comparisons of elements on a common
property or criterion. ANP disadvantages may arise when the number of decision factors
and respective inter-relationships increase, requiring increasing effort by decision makers.
Saaty and Vargas (2006) suggested the usage of AHP to solve problems of independence
between decision alternatives or criteria, and the usage of ANP to solve problems of
dependence among alternatives or criteria.
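To make the priority derivation concrete, the following minimal sketch computes a priority vector and Saaty's consistency ratio from a small pairwise comparison matrix; the 3x3 judgement matrix is an arbitrary illustration rather than a judgement set from this chapter.

```python
# Sketch only: ratio-scale priorities from a reciprocal pairwise comparison
# matrix via the principal eigenvector, plus the consistency ratio. The 3x3
# judgements below are invented for illustration.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])            # judgements on Saaty's 1-9 scale

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()                            # normalised priority vector

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)       # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]        # Saaty's random index for n = 3..5
print("priorities:", np.round(w, 3), " CR =", round(ci / ri, 3))  # CR < 0.1 is acceptable
```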
Both AHP and ANP share the same drawbacks: (a) with numerous pairwise comparisons,
perfect consistency is difficult to achieve; in fact, some degree of inconsistency can be
expected in almost any set of pairwise comparisons. (b) They can only deal with
definite scales, i.e. they assume that decision makers are able to give fixed value judgements on the
relative importance of the pairwise attributes. In reality, decision makers are usually more
confident giving interval judgements than fixed value judgements (Kahraman et al,
2010). Furthermore, on some occasions, decision makers may not be able to compare two
attributes at all due to a lack of adequate information. In these cases, a typical AHP/ANP
method becomes unsuitable because of the existence of fuzzy or incomplete
comparisons. It is believed that if the uncertainty (or fuzziness) of human decision making is not
taken into account, the results can be misleading.
To deal quantitatively with such imprecision or uncertainty, fuzzy set theory is appropriate
(Huang et al, 2009; Kahraman et al, 2010). Fuzzy set theory was designed specifically to
mathematically represent uncertainty and vagueness, and to provide formalised tools for
dealing with the imprecision intrinsic to multi-criteria decision problems (Beskese et al,
2004; Mehrabad & Anvari, 2010). The main benefit of extending crisp analysis methods to
fuzzy technique is in its strength that it can solve real-world problems, which have
imprecision in the variables and parameters measured and processed for the application
(Lee, 2009).
To solve decision problems with uncertainty and vague information where decision makers
cannot give fixed value judgements, whilst also taking advantage of the systematic
weighting system presented by AHP/ANP, many researchers have explored the integration
of AHP/ANP and fuzzy set theory to perform more robust decision analysis. The result is
the emergence of the advanced analytical method - fuzzy AHP/ANP (Huang et al, 2009).
Fuzzy AHP/ANP is considered as an important extension of the conventional AHP/ANP
(Kahraman et al, 2010). A key advantage of the fuzzy AHP/ANP is that it allows decision
makers to flexibly use a large evaluation pool including linguistic terms, fuzzy numbers,
precise numerical values and ranges of numerical values. Hence, it provides the capability
of taking care of more comprehensive evaluations to provide more effective decision
support (Bozbura et al, 2007). Details of the key features, strengths and weaknesses of
different MCDA methods are compared in Table 1.
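One simple way of folding such interval judgements into the crisp AHP/ANP machinery, sketched below with invented values, is to express each judgement as a triangular fuzzy number and collapse it by centroid defuzzification before the eigenvector step; this is only one of several routes taken in the fuzzy AHP literature and is not the specific procedure of the works cited above.

```python
# Sketch only: interval-style judgements as triangular fuzzy numbers (l, m, u),
# collapsed to crisp values by centroid defuzzification before the usual AHP
# eigenvector step. One simple route among several in the fuzzy AHP literature;
# the judgements are invented for illustration.
import numpy as np

def centroid(l, m, u):
    """Centroid of a triangular fuzzy number."""
    return (l + m + u) / 3.0

# fuzzy judgements for the upper triangle of a 3-criterion comparison matrix
fuzzy = {(0, 1): (2, 3, 4),   # criterion 1 vs 2: "roughly 3 times as important"
         (0, 2): (4, 5, 6),
         (1, 2): (1, 2, 3)}

A = np.eye(3)
for (i, j), (l, m, u) in fuzzy.items():
    A[i, j] = centroid(l, m, u)
    A[j, i] = 1.0 / A[i, j]               # keep the matrix reciprocal

eigvals, eigvecs = np.linalg.eig(A)
w = np.abs(eigvecs[:, np.argmax(eigvals.real)].real)
print("defuzzified priorities:", np.round(w / w.sum(), 3))
```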
perspective. By breaking down the environmental problems into specific issues at different
life cycle stages that can be articulated by operations managers, it helps decision makers to
explicitly capture, code and implement corresponding environmental objectives in their
decision making process. MCDA’s main merit is in its competence in handling complex
decision situations by incorporating multiple decision criteria to resolve conflicting interests
and preferences.
AHP. Key elements: multi-criteria and multi-attribute hierarchy; pairwise comparison; graphical representation. Strengths: can handle situations in which the decision maker's subjective judgements constitute a key part of the decision making process. Weaknesses: relationships between decision factors are not considered; inconsistency of the pairwise judgements; cannot deal with uncertainty and vagueness. Selected references: Saaty, 2005; Anderson et al, 2009.

ANP. Key elements: control network with sub-networks of influence. Strengths: allows interaction and feedback between different decision factors. Weaknesses: inconsistency of the pairwise judgements; cannot handle situations where decision makers can only give interval value judgements or cannot give values at all. Selected references: Saaty & Vargas, 2006; Dou & Sarkis, 2010.

Fuzzy set theory. Key elements: mathematical representation; handles uncertainty, vagueness and imprecision; groups data with loosely defined boundaries. Strengths: can solve real-world decision problems with imprecision in the variables. Weaknesses: lack of a systematic weighting system. Selected references: Beskese et al, 2004; Mehrabad & Anvari, 2010.

Fuzzy AHP/ANP. Key elements: fuzzy membership functions together with priority weights of attributes. Strengths: combined strengths of fuzzy set theory and AHP/ANP. Weaknesses: time consuming; complexity. Selected references: Kahraman et al, 2010.

Table 1. Comparison between different multi-criteria decision analysis methods
GOM decisions need to address the 3BL, which undoubtedly require MCDA methods. In
the meantime, it is critical that environmental and social concerns be addressed right from
the early stage of product and operational life cycles, so that the adverse impact can be
minimised or mitigated. Therefore, GOM decision making requires MCDA and LCA to be
explored in an integrated rather than isolated manner. Considering both LCA and
MCDA technologies together could provide decision makers with the vital analysis
tools that enable systematic evaluation for improved decision making capabilities. It
therefore could allow operations managers to take concerted decisions (and actions as the
implementation of the decisions), not only to limit, but also to reverse any long term
environmental damage, and thus ensuring that operations activities are undertaken in a
sustainable manner.
Despite the urgent requirement from GOM for powerful analysis support, there are few
reports in the literature discussing the successful integration of both LCA and MCDA
technologies in support of GOM decision making. The next section of this paper proposes an
integrated approach to fill this gap.
3.1 Performing OLC analysis to understand decision problems from life cycle
perspective
During the OLC process (procurement, production, distribution, use, end-of-life treatment
and the reverse chain), different green issues need to be addressed at different stages. Therefore,
environmental objectives may be defined in different forms for GOM decision making:
for example, greener material selection at the procurement stage, cutting greenhouse gas
emissions at the production stage, reducing energy consumption at the use and distribution
stages, safe waste management for end-of-life treatment, and product recovery through
reverse logistics. Figure 3 illustrates more comprehensive environmental objectives used at
different OLC stages for green operations decision making.
[Figure: the GOM decision network used to build the super-matrix, with four clusters: 1. conventional operations performance objectives (cost, flexibility, dependability); 2. environmentally friendly operations (reduce, reuse, recycle); 3. socially responsible operations (community benefits, customer safety); 4. alternatives (Alternative 1, Alternative 2).]

The super-matrix across the four clusters has the general form

        1    2    3    4
  1   w11  w12  w13  w14
  2   w21  w22  w23  w24
  3   w31  w32  w33  w34
  4   w41  w42  w43   0
In the above equation, subscript number 1 shows the criteria cluster belonging to
conventional operations performance objectives; subscript number 2 shows the criteria
cluster belonging to environmental friendly operations; subscript number 3 shows the
criteria cluster belonging to socially responsible operations; subscript number 4 shows the
criteria cluster belonging to alternative operations. In the super-matrix, w11, w12, w13, and so
on represent the sub-matrices. The cluster which has no interactions is shown in the super-
matrix with a value of zero. Those non-zero values mean that there are dependencies
between the clusters. For example, w12 means that cluster 1 depends on cluster 2. Similar to
that in AHP, the 1 – 9 scale system developed by Saaty (Saaty & Vargas, 2006) is used in this
research, and pairwise comparisons are made to create the super-matrix.
Step 5. Yield the weighted super-matrix. The un-weighted super-matrix from Step 4 must
be stochastic to obtain meaningful limiting results. This step transforms the un-weighted
super-matrix into a weighted super-matrix. To do this, firstly the influence of the
clusters on each other is determined, which generates an eigenvector of the
influences. Then the un-weighted super-matrix is multiplied by the priority
weights of the clusters, which yields the weighted super-matrix.
Step 6. Stabilise the super-matrix. This step involves multiplying the weighted super-matrix
by itself until the row values converge to the same value for each column
of the matrix.
By the end of step 6, the limiting priorities of all the alternatives should be computed and
shown in the matrix. The alternative with the highest priority should become transparent
and will become the optimal choice to decision makers.
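Steps 4 to 6 can be summarised in a few lines of code. The sketch below raises a small column-stochastic weighted super-matrix to successive powers until its columns stop changing; the 4x4 matrix is an invented illustration, not the matrix of the case study.

```python
# Sketch only: stabilising a weighted (column-stochastic) super-matrix by
# repeated multiplication until the columns converge; the stabilised columns
# give the limiting (global) priorities. The matrix is invented for
# illustration and is not the case study's super-matrix.
import numpy as np

W = np.array([[0.00, 0.30, 0.20, 0.40],
              [0.30, 0.00, 0.30, 0.30],
              [0.30, 0.30, 0.00, 0.30],
              [0.40, 0.40, 0.50, 0.00]])   # each column sums to 1

limit = W.copy()
for _ in range(200):                        # multiply by itself until convergence
    nxt = limit @ W
    if np.allclose(nxt, limit, atol=1e-9):
        break
    limit = nxt

print("limiting priorities:", np.round(limit[:, 0], 4))  # every column is the same
```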
and ANP model development: one is through master data management, and the other is
through meta-data management.
In a GOM decision support system, master data is a very important concept which supports
data integrity and consistency. Master data are persistent, long-lived data which need to be
stored centrally. They can be used by all business functional units and at all levels of the
organization (Monk & Wagner, 2009). Examples of master data in an operations system are
material master, vendor/supplier master, and customer master records. Additionally,
master data also includes hierarchies of how individual products, customers and accounts
aggregate and form the dimensions which can be analyzed. Master data management is
carried out to ensure that material master, vendor/ supplier master and customer master for
example are consistently stored and maintained, so that all information users, both people
(including decision makers) and computer systems, can access the right information at all
times.
As the demand for business intelligence grows, so do the available data sources, data
warehouses, and business reports that decision makers depend on for business decisions.
While master data management can be used to effectively integrate business data across
business functions, metadata management provides an extra layer of reliability when GOM
decision support systems use multiple data sources. Separate departmental deployments of
business solutions (resulting from being in charge of different OLC stages) have inevitably
created information silos and islands, which makes it significantly more difficult to manage
the information needed to support holistic decision making. This is especially the case
where the data sources change, which will have significant impact on the GOM decisions.
Decision makers will also tend to place varying levels of trust in information from different
origins. Metadata management can provide powerful support for data traceability and give
decision makers essential assurance of the integrity of information on which their decisions
are based. A generic definition of metadata is “data about data” (Babin & Cheung, 2008). In
green operations management, typical use of metadata has been identified as helping to
provide quick answers to the questions such as: What data do I have? What does it mean?
Where is it? How did it get there? How do I get it? And so on. The answers to these
questions will have a profound impact on the decisions to be made.
much (Chow, 2010). Therefore, Chinese manufacturing industry can provide perfect cases
for researchers to study the GOM issues. This paper looks at a case from a Chinese Plastics
Manufacturing company.
One of the most influential products from plastic manufacturers is the plastic bag. Highly
convenient, strong and inexpensive, plastic bags were appealing to businesses and
consumers as a reliable way to deliver goods from the store to the home. However, many issues
associated with the production, use and disposal of plastic bags may not be initially
apparent to most users, but they are now recognised as extremely important and need to be
addressed urgently.
addressed urgently. By exploring the integrated OLC and ANP approach with the case
study, this paper aims to help decision makers achieve better understanding of the full
ecological footprint of the products, and to provide efficient decision support in dealing
with the associated negative impacts on environment and social equity.
[Figure: environmental and social impacts across the plastic bag life cycle, showing inputs (gas, fuel, electricity, oil), pollution to air and water (sulphur, CO2, toxic chemicals), pollution to land and water, threats to wildlife, damage to human health, loss of livestock and impact on tourism.]
social issues resulting from plastic bags, there are many crucial decisions that the plastic manufacturer
needs to make during the whole life cycle. Four decision alternatives can be derived from
the OLC analysis:
1. To make recyclable plastic bags. This alternative seems to be taken at a rather late stage of the life cycle, but corresponding considerations are required at early stages, such as in the material selection stage, so that recyclable plastic bags can be sorted into proper categories and processed later.
2. To make reusable plastic bags. This alternative requires appropriate actions to be taken at early stages of the life cycle. For example, at the material selection and manufacturing stages, appropriate considerations should be taken so that the reusable bags have the strength to be reused a certain number of times.
3. To make degradable plastic bags, such as those which degrade under micro-organisms,
heat, ultraviolet light, mechanical stress or water.
4. To replace plastic bags with paper bags. For some time, manufacturers were forced to make a key decision – “plastic or paper”. Research clearly showed that paper shopping bags have a much larger carbon footprint from production through recycling. For example, a paper bag requires four times more energy to produce than a plastic bag. In the manufacturing process, paper bags generate 70 percent more air pollutants and 50 times more water pollutants than plastic bags (FMI, 2008).
4.3.1 Development of the network control hierarchy for the Plastic Bags case
The generic ANP models discussed in Section 3 include comprehensive factors of GOM decision situations; this section applies the GOM analytical models to the customised situation of dealing with plastic bags. Therefore, a simplified version of the generic ANP control hierarchy has been developed for the case, as shown in Figure 7. The analytical models discussed in the case study were developed using Super Decisions® software. As can be seen from Figure 7, the GOM network consists of four clusters. Each cluster has one or more elements that represent the key attributes of the cluster. Elements for Clusters 2, 3 and 4 have been derived from the case OLC process (Section 4.2). Connections between the clusters indicate influence and dependency. A reflexive relationship on a cluster in the model (such as the one for Cluster 2) means that there is inter-dependency between the elements in the same cluster.
Fig. 7. GOM network control hierarchy for the Plastic Bags case
aspects, i.e. economic, environmental and social. For example, Table 3 presents the comparison among the four clusters from the point of view of socially responsible operations. The priority vectors are calculated and shown in the last column of the table.
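A common way to obtain such a priority vector is the principal-eigenvector method used in AHP/ANP. The sketch below is illustrative only: the comparison values are hypothetical (not the judgements collected in the case study), and priority_vector is a helper written for this example rather than part of Super Decisions®.

```python
import numpy as np

def priority_vector(pairwise: np.ndarray) -> np.ndarray:
    """Derive a priority vector from a reciprocal pairwise comparison matrix
    via the principal eigenvector (Saaty's method)."""
    eigenvalues, eigenvectors = np.linalg.eig(pairwise)
    principal = np.argmax(eigenvalues.real)
    vector = np.abs(eigenvectors[:, principal].real)
    return vector / vector.sum()          # normalise so the weights sum to 1

# Hypothetical comparison of four clusters on Saaty's 1-9 scale
# (illustrative values, not the case-study data).
A = np.array([[1,   3,   5,   2],
              [1/3, 1,   2,   1/2],
              [1/5, 1/2, 1,   1/3],
              [1/2, 2,   3,   1]])
print(priority_vector(A).round(3))
```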
4.3.3 Super-matrix formation and global priorities for the Plastic Bags case
The results of all pairwise comparisons are then input for computation to formulate a super-matrix. In the Plastic Bags case, three different super-matrices have been generated: an Un-weighted, a Weighted and a Limit super-matrix. Super-matrices are arranged with the clusters in alphabetical order across the top and down the left side, and with the elements within each cluster in alphabetical order across the top and down the left side. An Un-weighted super-matrix contains the local priorities derived from the pair-wise
comparisons throughout the network. Figure 9 shows part of the Un-weighted super-matrix for the Plastic Bags case (because of space limitations, part of the super-matrix is hidden; in the real software environment, the whole super-matrix can be seen by scrolling the bars on the interface).
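Conceptually, the Limit super-matrix is obtained by raising the (column-stochastic) Weighted super-matrix to successive powers until the columns stabilise; the stabilised columns give the global priorities. A minimal sketch of that idea is shown below, assuming a primitive (non-cyclic) matrix so that plain power iteration converges; the 3x3 matrix is illustrative, not the case-study super-matrix.

```python
import numpy as np

def limit_supermatrix(weighted: np.ndarray, tol: float = 1e-9, max_iter: int = 10_000) -> np.ndarray:
    """Raise a column-stochastic weighted supermatrix to successive powers
    until it converges; the stable columns give the global priorities."""
    current = weighted.copy()
    for _ in range(max_iter):
        nxt = current @ weighted
        if np.max(np.abs(nxt - current)) < tol:
            return nxt
        current = nxt
    return current   # last iterate if convergence was not reached

# Tiny illustrative (column-stochastic) weighted supermatrix, not case data.
W = np.array([[0.2, 0.5, 0.3],
              [0.5, 0.3, 0.3],
              [0.3, 0.2, 0.4]])
print(limit_supermatrix(W).round(4))
```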
2. Plastic bags recycling is not a preferred choice in terms of achieving the economic (cost) objective.
3. Making degradable bags is a relatively ideal choice (with an overall value of 0.98 in Figure 12).
4. Making reusable bags is the preferred choice because it has the highest overall score based on the data collected from the company.
Fig. 10. The Weighted Super-matrix for the Plastic Bags case
Fig. 11. The Limit Super-matrix for the Plastic Bags case
Fig. 12. Visualised representations of the global priorities of the four alternatives
The evaluation of the integrated sustainability analysis approach has been illustrated
through a decision case from the plastic manufacturing industry. The case study shows that
the approach has great potential in providing scientific evidence to support GOM decision
making under complex situations and with multiple decision criteria.
Limitations of the approach include:
- It is developed for and evaluated in production operations case. Its applicability to
service operations needs further exploration.
- At this stage, the research has not considered feedback of clusters to the decision
support system yet. Further work needs to explore a mechanism and tool to manage the
feedback.
6. Acknowledgement
The work reported in this paper has been undertaken at the University of Plymouth, UK, and is funded by the International Research Networking and Collaboration scheme under grant number UOP/IRNC/GH102036-102/1 for the project “Green Operations Management – integration challenges and decision support” (also known as the GOM project).
7. References
Allenby, B.R., 2000. Implementing industrial ecology: the AT&T matrix system. Interfaces,
30(3), 42-54.
Anderson, D.R., Sweeney, D.J., Williams, T.A. & Wisniewski, M., 2009. An Introduction to
Management Science: Quantitative Approaches to Decision Making. Cengage Learning
EMEA, London.
Babin, G. & Cheung, W.M., 2008. A meta-database supported shell for distributed processing and system integration. Knowledge-Based Systems, 21, 672-680.
Bateman, N., 2005. Sustainability: the elusive element of process improvement. Journal of Operations Management, 25(3), 261-276.
Bayazit, O. & Karpak, B., 2007. An analytical network process-based framework for
successful total quality management: an assessment of Turkish manufacturing
industry readiness. International Journal of Production Economics, 105, 79-96.
Bell, S. & Morse, S., 1999. Sustainability indicators: measuring the immeasurable. Earthscan,
London.
Beskese, A., Kahraman, C. & Irani, Z., 2004. Quantification of flexibility in advanced
manufacturing systems using fuzzy concept. International Journal of Production
Economics, 89(1), 45-56.
Bevilacqua, M., Ciarapica, F.E. & Giacchetta, G., 2007. Development of a sustainable product
lifecycle in manufacturing firms: a case study. International Journal of Production
Research, 45(18-19), 4073-4098.
Bottero, M., Mondini, G. & Valle, M., 2007. The use of analytic network process for the
sustainability assessment of an urban transformation project. In proceedings of
International Conference on Whole Life Urban Sustainability and its Assessment.
Glasgow, 2007.
Bozbura, F.T., Beskese, A. & Kahraman, C., 2007. Prioritisation of human capital measurement indicators using fuzzy AHP. Expert Systems with Applications, 32(4), 1100-1112.
Carter, C.R. & Rogers, D.S., 2008. A framework of sustainable supply chain management:
moving toward new theory. International Journal of Physical Distribution and Logistics
Management 38(5): 360-387.
Chan, H.K., Yin, S. & Chan, F.T.S., 2010. Implementing just-in-time philosophy to reverse
logistics systems: a review. International Journal of Production Research, 48 (21), 6293–
6313.
Chituc, C.M., Azevedo, A. & Toscano, C., 2009. Collaborative business frameworks
comparison, analysis and selection: an analytic perspective. International Journal of
Production Research, 47 (17), 4855–4885.
Choudhari, S.C., Adil, G.K. & Ananthakumar, U., 2010. Congruence of manufacturing
decision areas in a production system: a research framework. International Journal of
Production Research, 48 (20), 5963–5989.
Chow, G.C., 2010. China’s Environmental Policy: a Critical Survey. CEPS Working Paper
No. 206. April 2010.
Ding, L., Davis, D. & McMahon, C.A., 2009. The integration of lightweight representation
and annotation for collaborative design representation. Research in Engineering
Design, 19(4), 223-234.
Dou, Y. & Sarkis, J., 2010. A joint location and outsourcing sustainability analysis for a strategic off-shoring decision. International Journal of Production Research, 48(15), 567-592.
Espinosa, A., Harnden, R. & Walker, J., 2008. A complexity approach to sustainability –
Stafford Beer revisited. European Journal of Operational Research, 187, 636-651.
FMI, 2008. Plastic Grocery Bags – Challenges and Opportunities. Food Marketing Institute.
September 2008.
Fuller, D.A. & Ottman, J.A., 2004. Moderating unintended pollution: the role of sustainable
product design. Journal of Business Research, 57(11), 1231-1238.
Galasso, F., Merce, C. & Grabot, B., 2009. Decision support framework for supply chain
planning with flexible demand. International Journal of Production Research, 47(2),
455-478.
Gauthier, C., 2005. Measuring corporate social and environmental performance: the
extended life-cycle assessment. Journal of Business Ethics, 59(1-2), 199-206.
Geldermann, J., Treitz, M. & Rentz, O., 2007. Towards sustainable production networks. International Journal of Production Research, 45(18-19), 4207-4224.
Gencer, C. & Gurpinar, D., 2007. Analytical network process in supplier selection: a case
study in an electronic firm. Applied Mathematical Modelling, 31, 2475-2486.
Gunendran, A.G. & Young, R.I.M., 2010. Methods for the capture of manufacture best
practice in product lifecycle management. International Journal of Production
Research, 48 (20), 5885–5904.
Hall, J., Vredenburg, H., 2005. Managing the dynamics of stakeholder ambiguity. MIT Sloan
Management Review, 47(1), 11-13.
Hertwich, E., Hammitt, J. & Pease, W., 2000. A theoretical foundation for life-cycle
assessment. Journal of Industrial Ecology 4(1), 13-28.
Hill, M., 2001. Sustainability, greenhouse gas emissions and international operations
management. International Journal of Operations and Production Management 21(12):
1503-1520.
Mihelcic, J.R., Crittenden, J.C., Small, M.J., Shonnard, D.R., Hokanson, D.R., Zhang, Q.,
Chen, H., Sorby, S.A., James, V.U., Sutherland, J.E. and Schnoor, J.L., 2003.
Sustainability science and engineering: the emergence of a new meta-discipline.
Environmental Science and Technology, 37, 5314-5324.
Miltenburg, J., 2005. Manufacturing strategy – how to formulate and implement a winning plan.
Portland: Productivity Press.
Monk, E. & Wagner, B., 2009. Concepts in Enterprise Resource Planning. Course Technology
Cengage Learning. Boston, USA.
Neto, J.Q.F., Walther, G., Bloemhof, J., van Nunen, J.A.E.E. & Spengler, T., 2010. From
closed-loop to sustainable supply chains: the WEEE case. International Journal of
Production Research, 48(15), 4463-4481.
Noran, O., 2010. A decision support framework for collaborative networks. International
Journal of Production Research, 47(17), 4813-4832.
Nunes, B. & Bennett, D. (2010). Green operations initiatives in the automotive industry: an environmental reports analysis and benchmarking study. Benchmarking: An International Journal, 17(3), 396-418.
OECD, 1991. Environmental Indicators. OECD, Paris.
Rao, P., 2004. Greening production: a South-East Asian experience. International Journal of
Operations and Production Management, 24(3): 289-320.
Rao, P. & Holt, D., 2005. Do green supply chains lead to competitiveness and economic
performance? International Journal of Operations and Production Management 25(9):
898-916.
Saaty, T.L., 2005. Theory and Applications of the Analytic Network Process: Decision Making With
Benefits, Opportunities, Costs, and Risks. 3 edition. Rws Pubns. ISBN-13: 978-
1888603064.
Saaty, T.L. & Vargas, L.G., 2006. Decision Making with the Analytic Network Process. Springer,
New York.
Sarkis, J., 2003. A strategic decision framework for green supply chain management. Journal of Cleaner Production, 11, 397-409.
Slack, N. Chambers, S. & Johnston, R., 2010. Operations Management (6th Edition), FT Prentice
Hall, London.
Svensson, G., 2007. Aspects of sustainable SCM: conceptual framework and empirical
example, Supply Chain Management – An International Journal 12(4): 262-266.
Taylor, A. & Taylor, M., 2009. Operations management research: contemporary themes,
trends and potential future directions. International Journal of Operations and
Production Management 29(12): 1316-1340.
Ulhoi, J.P., 1995. Corporate environmental and resource management: in search of a new managerial paradigm. European Journal of Operational Research, 80, 2-15.
Verghese, K. & Lewis, H., 2007. Environmental innovation in industrial packaging: a supply
chain approach. International Journal of Production Research, 45(18-19), 4381-4401.
WCED, 1987. Our Common Future. World Commission on Environment and Development.
White, L. & Lee, G.J., 2009. Operational research and sustainable development: tackling the
social dimension. European Journal of Operational Research 193: 683-692.
Wilkinson, A., Hill, M. & Gollan, P., 2001. The sustainability debate. International Journal of
Operations and Production Management, 21(12), 1492-1502.
Part 3
Applications in Agriculture
14
1. Introduction
In agricultural domain applications, it is becoming increasingly important to preserve planting material behavior when it interacts with an environment that is not under its control. However, uncertainty is always inherent and leads to inaccurate decisions when incomplete information is present in the sampling data set (Chao et al., 2005; Latkowski, 2002; Michelini et al., 1995). As a result, a proper decision may need to adapt to changes in the environment by adjusting its own behavior. Many different methods for dealing with uncertainty have been developed. This research work proposes handling incomplete information with a fuzzy representation in the objective function for decision modeling. Firstly, we integrate expert knowledge and planting material data to provide meaningful training data sets. Secondly, fuzzy representation is used to partition the data, taking full advantage of the observed information to achieve better performance. Finally, we generalize decision tree algorithms to provide simpler and more understandable models. The output of this intelligent decision system can be highly beneficial to users in designing effective policies and in decision making.
2. Preliminary study
The major problem in decision modeling is missing information about the ecological system, such as weather, fertilizer, land degradation, soil erosion and climate variability, during planting material selection in physiological analysis. This underlying obstacle will return poor results when all the planting material databases are aggregated. If we try to develop a decision model that brings knowledge to the user, then we have to integrate historical records of planting material behavior, expert perception and decision algorithm learning. From an information or knowledge provider's perspective, the challenge is to gain an expert user's attention in order to assure that the incomplete information is valued (Rouse, 2002). One of the more widely applied approaches to representing uncertainty is fuzzy representation (Michelini et al., 1995; Mendonca et al., 2007). This representation requires mathematical support for the further treatment and interpretation of missing values as information that is compatible with the observed data. We analyze the complexity of an ecological system with missing values caused by uncertainty to the user, then represent the observed data as plausible values and discuss the outcome of an empirical study of missing information in induction learning. We have examined the
most commonly used algorithms for decision modeling of missing values (Tokumaru et al., 2009), which resulted in better understanding and more comprehensible rules in planting material selection.
the concept of different levels of suitability for learner biases, the fact that no algorithm bias can be suitable for every target concept, and the idea that there is no universally better algorithm are fast maturing in the machine learning community. It may be better to map different algorithms to different groups of problems of practical importance.
Cases with incomplete descriptions, in which one or more of the values are missing, are often encountered. This may occur, for example, because these values could not be measured or because they were believed to be irrelevant during the data collection stage. Considerable attention has been devoted, and significant progress made, in the last decade toward acquiring “classification knowledge”, and various methods for automatically inducing classifiers from data are available. Quinlan (1996) focused on several approaches to measuring high levels of continuous values with small numbers of cases on a collection of data sets. A variety of strategies are used in such domains and return poor results when some imprecise data is ignored.
The construction of optimal decision trees has been proven to be NP-complete, under several aspects of optimality and even for simple concepts (Murthy et al., 1993). Current inductive learning algorithms use variants of impurity functions, such as information gain, gain ratio, the Gini index and the distance measure, to guide the search. Fayyad (1994) discussed several deficiencies of impurity measures. He pointed out that impurity measures are insensitive to inter-class separation and intra-class fragmentation, as well as to permutations of the class probability distribution. Furthermore, several authors have provided evidence that the presence of irrelevant attributes can mislead the impurity functions towards producing bigger, less comprehensible, more error-prone classifiers. There is an active debate on whether less greedy heuristics can improve the quality of the produced trees. Some authors showed that greedy algorithms can be made to perform arbitrarily worse than the optimal. On the other hand, Murthy and Salzberg (1993) found that one-level look-ahead yields larger, less accurate trees on many tasks. Quinlan et al. (1995) reported similar findings and hypothesized that look-ahead can fit the training data but have poor predictive accuracy. The problem of searching for the optimal sub-tree can be formulated as an integer program whose coefficient matrix of defining constraints satisfies the totally unimodular property (Zhang et al., 2005); the authors provided a new optimality proof of this efficient procedure in the building and pruning phases. The cost is imposed on the number
of nodes that a tree has. By increasing the unit node cost, a sequence of sub-trees with the
minimal total cost of misclassification cost and node cost can be generated. A separate test
set of data is then used to select from these candidates the best sub-tree with the minimal
misclassification error out of the original overly grown tree. Besides cost-complexity
pruning, a number of other pruning algorithms have been invented during the past decade
such as reduced error pruning, pessimistic error pruning, minimum error pruning, critical
value pruning, etc. It is clear that the aim of the pruning is to find the best sub-tree of the
initially grown tree with the minimum error for the test set. However, the number of sub-
trees of a tree is exponential in the number of its nodes and it is impractical computationally
to search all the sub-trees. The main idea of cost-complexity pruning is to limit the search space by introducing a selection criterion on the number of nodes. While it is true that the tree size decides the bias-variance trade-off of the problem, it is questionable to apply an identical punishment weight to all the nodes. As the importance of different nodes may not
be identical, it is foreseeable that the optimal sub-tree may not be included in the cost-
complexity sub-tree candidates and hence it will not be selected. An interesting question to
ask is whether there is an alternative good algorithm to identify the true optimal sub-tree.
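To make the cost-complexity trade-off concrete, the sketch below scores hypothetical candidate sub-trees with the criterion described above (misclassification cost plus a unit cost per node); as the unit node cost increases, progressively smaller sub-trees become optimal. The candidate numbers are illustrative only.

```python
def total_cost(misclassified: int, n_nodes: int, alpha: float) -> float:
    """Cost-complexity criterion: misclassification cost plus a penalty of
    `alpha` per node; increasing alpha favours smaller sub-trees."""
    return misclassified + alpha * n_nodes

# Candidate sub-trees described by (misclassified examples, number of nodes);
# illustrative numbers only.
candidates = [(12, 31), (15, 17), (21, 9), (30, 3)]
for alpha in (0.0, 0.5, 2.0):
    best = min(candidates, key=lambda t: total_cost(*t, alpha))
    print(alpha, best)
```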
Most data mining applications reduce to a search for intervals after the discretisation phase. All values that lie within an interval are then mapped to the same value. It is necessary to ensure that the rules induced are not too specific. Several experiments have
shown that the quality of the decision tree is heavily affected by the maximal number of
discretisation intervals chosen. A recent development in the estimation of uncertainty using fuzzy set theory is introduced. The fuzzy practicable interval is used as the estimation parameter for the uncertainty of measured values. When the distribution of measured values is unknown, results that are very near to the true values can be obtained with this method. In
particular, the boundaries among classes are not always clearly defined; there are usually
uncertainties in diagnoses based on data. Such uncertainties make prediction more difficult than with noise-free data. To avoid such problems, the idea of fuzzy classification is proposed. A new model of classification trees which integrates fuzzy classifiers with decision trees was introduced (Chiang et al., 2002). The algorithm can work well in
classifying the data with noise. Instead of determining a single class for any given instance,
fuzzy classification predicts the degree of possibility for every class. Classes are considered
vague classes if there is more than one value for the decision attribute. The vague nature of
human perception, which allows the same object to be classified into different classes with
different degrees, is utilized for building fuzzy decision trees (Yuan et al., 1995). In an extension of Quinlan's (1996) earlier work on the C4.5 decision tree, each path from root to leaf of a fuzzy decision tree is converted into a rule with a single conclusion. A new method of fuzzy decision trees, called soft decision trees (Olaru et al., 2003), is presented. This method
combines tree growing and pruning, to determine the structure of the soft decision tree,
with refitting and back-fitting to improve its generalization capabilities. A comparative
study shows that the soft decision trees produced by this method are significantly more
accurate than standard decision trees. Moreover, a global model variance study shows a
much lower variance for soft decision trees than for standard trees as a direct cause of the
improved accuracy. Fuzzy reasoning process allows two or more rules or multi branches
with various certainty degrees to be simultaneously validated with gradual certainty and
the end result will be the outcome of combining several results. Yuan (1995) proposed a novel criterion based on the measurement of cognitive uncertainty and a criterion based on fuzzy mutual entropy in the possibility domain. In these approaches, the continuous attributes need to be partitioned into several fuzzy sets prior to the tree induction, heuristically
based on expert experiences and the data characteristics. The effectiveness of the proposed
soft discretization method has been verified in an industrial application and results showed
that, compared to the classical decision tree, higher classification accuracy was obtained in testing. Soft discretization is based on fuzzy set theory; one inherent disadvantage of classical methods is that the use of sharp (crisp) cut points makes the induced decision trees sensitive to noise. As opposed to a classical decision tree, the soft discretization based decision tree
associates a set of possibilities to several or all classes for an unknown object. As a result,
even if uncertainties existed in the object, the decision tree would not give a completely
wrong result, but a set of possibility values. Experimental results showed that, by using soft
discretization, better classification accuracy has been obtained in both training and testing
than with the classical decision tree, which suggests that the robustness of decision trees could be improved by means of soft discretization.
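As a rough illustration of soft discretization, the sketch below replaces a crisp cut point with a narrow transition band in which a value keeps partial membership in both intervals; the cut point, band width and sample value are hypothetical, not taken from the chapter's data.

```python
def soft_cut(value: float, cut: float, width: float) -> tuple[float, float]:
    """Soft discretization around a cut point: return the membership of
    `value` in the 'low' and 'high' intervals.  Inside the transition band
    (cut +/- width/2) the value belongs partially to both intervals; outside
    the band the split behaves like a crisp cut."""
    lower, upper = cut - width / 2, cut + width / 2
    if value <= lower:
        return 1.0, 0.0
    if value >= upper:
        return 0.0, 1.0
    high = (value - lower) / (upper - lower)   # linear transition
    return 1.0 - high, high

# A value just above the cut point still keeps some membership in 'low'.
print(soft_cut(5.1, cut=5.0, width=1.0))   # -> (0.4, 0.6)
```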
Basically, the true value is unknown and it can be estimated by an approximation. Sometimes we find that some of the attributes may form similar classes in the
presence of missing values. Rough sets are efficient and useful tools in the field of knowledge discovery to generate discriminant and characteristic rules. This method provides a relative reduct that contains enough information to discern objects in one class from all the other classes. From the relative reduct produced by the quick algorithm, rules are formed. If a new object is introduced into the data set with the decision value missing, one could attempt to determine this value by using the previously generated rules. Although rough sets have been used for classification and concept learning tasks (De Jong et al., 1991; Janikow, 1993; Congdon, 1995), there is rather little work on their utility as a tool to evolve decision trees. Jenhani et al. (2005) proposed an algorithm to generate a decision tree under uncertainty within the belief function framework.
[Figure: bar chart of the percentage of significance of each attribute, comparing the Overall, Active and Missing data series; y-axis “% of Significance” (0–3), x-axis “Attributes”]
Σ_{j=1}^{n} freq(C_i, T) · μ(C_j)    (1)
Σ_{j=1}^{p} μ_j(x_j) = 1    (2)
Some data sets cannot be separated with clearly defined boundaries because the databases contain incomplete information, but it is possible to sum the weights of the data items belonging to the nearest predicted class.
l = Σ_{k=1}^{p} Σ_{i=1}^{n} (μ_k(x_i))^m ‖x_i − c_k‖²    (3)
It can be seen that the characteristic functions mediate between ambiguity and certainty for the closed subsets xi of the boundary with respect to the set interval (Heiko et al., 2004). We use a more robust method of viewing the compatibility measurement, in which the query xi takes the weighted average of the predicate truth values. The following equation shows how the weighted average compatibility index is computed; the average membership approach computes the mean of the entire set of membership values.
x_i = ( Σ_{i=1}^{n} μ_i(p_i) · w_i ) / Σ_{i=1}^{n} w_i    (4)
The degrees of membership to which an item set belongs to the different clusters are computed from the distances of the data point to the cluster centres. By combining expert knowledge with the most relevant ecological information, the membership degree can be calculated to determine how close a plausible data point lies to the centre of the nearest cluster. An iterative algorithm is used to solve the classification problem in objective-function-based clustering: since the objective function cannot be minimized directly, the cluster centres and the membership degrees are alternately optimized. In fuzzy clustering analysis, the calculation of the cluster centre value is given as follows, and each observed data point is assigned to a centre using the Euclidean distance.
Σ_{j=1}^{n} ( freq(C_i, T_j) / |T_j| ) · log₂ ( freq(C_i, T_j) / |T_j| )    (6)
For the measured value xi, the possible subset is uniquely determined by the characteristic function. The measurement of the dispersal range of the value xi is relative to the true value xo.
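The alternating optimisation described above (update memberships from distances to the centres, then recompute the centres from the memberships) can be sketched as a basic fuzzy c-means loop. This is a generic illustration with synthetic one-dimensional data, not the planting material data set used in this chapter.

```python
import numpy as np

def fuzzy_c_means(X: np.ndarray, c: int = 2, m: float = 2.0,
                  n_iter: int = 100, seed: int = 0) -> tuple[np.ndarray, np.ndarray]:
    """Alternating optimisation of the fuzzy clustering objective:
    memberships and cluster centres are updated in turn, since the
    objective cannot be minimised directly."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)              # memberships sum to 1 per point
    for _ in range(n_iter):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]          # weighted means
        dist = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        inv = dist ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)    # standard FCM membership update
    return centres, U

# Two obvious groups of 1-D observations (illustrative data only).
X = np.array([[1.0], [1.2], [0.9], [8.0], [8.3], [7.9]])
centres, U = fuzzy_c_means(X)
print(centres.round(2))
print(U.round(2))
```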
Fig. 2(b). Decision Tree Construction with expert knowledge and additional information
To evaluate the qualitative attribute values, a set of evaluation grades may first be supplied from existing possible values. This provides a complete set of distinct standards for assessing qualitative attributes. In accomplishing this objective, an important aspect to analyze is the level of discrimination among different numbers of evaluation grades, in other words, the cardinality of the set used to express the information. The cardinality of the set must be small enough so as not to impose useless precision on the users and must be rich enough to allow discrimination of the assessments in a limited number of degrees. We are able
to make a decision when we know the exact values of the maximized quantity. However,
the method has difficulties for knowledge discovery at the level of a set of possible values,
although it is suitable for finding knowledge. This is because the number of possible tables increases exponentially as the number of imprecise attribute values increases. We explore the
relation between optimal feature subset selection and relevance. The motivation for
compound operators is that the feature subsets can be partitioned into strongly relevant,
weakly relevant and irrelevant features (John et al., 1994). The wrapper method (Kohavi et al., 1994) searches for an optimal feature subset tailored to a particular algorithm and domain. In this approach, feature subset selection is done using the induction algorithm as a black box, so that no knowledge of the algorithm is needed beyond its interface. We compare the wrapper approach to induction without feature subset selection
and to Relief (Kira et al., 1992), a filter approach to feature subset selection, and FOCUS
(Dietterich et al., 1996). An improvement in accuracy is also achieved for some data sets with the Naive-Bayes algorithm. The Maximum Acceptable Error (MAE) provides an improved
estimate of the confidence bounds of concentration estimates. This method accommodates
even strongly nonlinear curve models to obtain the confidence bounds. The method
describes how to define and calculate the minimum and maximum acceptable
concentrations of dose-response curves by locating the concentrations where the size of the
error, defined in terms of the size of the concentration confidence interval, exceeds the
threshold of acceptability determined for the application.
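A minimal sketch of the wrapper idea follows: the induction algorithm is treated as a black box and candidate feature subsets are scored by cross-validated accuracy in a greedy forward search. It uses scikit-learn's decision tree as the black-box learner purely for illustration; the search strategy and the synthetic data are assumptions of this example, not the exact procedure of Kohavi et al.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

def wrapper_forward_selection(X: np.ndarray, y: np.ndarray, cv: int = 5) -> list[int]:
    """Greedy forward search in which the induction algorithm is used as a
    black box: a feature is kept only if it improves cross-validated accuracy."""
    selected: list[int] = []
    best_score = 0.0
    improved = True
    while improved:
        improved = False
        for f in range(X.shape[1]):
            if f in selected:
                continue
            candidate = selected + [f]
            score = cross_val_score(DecisionTreeClassifier(random_state=0),
                                    X[:, candidate], y, cv=cv).mean()
            if score > best_score:
                best_score, best_candidate = score, candidate
                improved = True
        if improved:
            selected = best_candidate
    return selected

# Illustrative use with a tiny synthetic data set (two informative features).
rng = np.random.default_rng(0)
X = rng.random((60, 5))
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)
print(wrapper_forward_selection(X, y))
```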
G_A(X) = 1 if x_i ∈ A; 0 if x_i ∉ A
We build the functions G_A(X), which mediate between ambiguity and certainty for the closed subsets X_i of the boundary with respect to set inclusion. The function builds the join and meet operators to provide the least upper and lower approximations of the set boundary, respectively. From a set theory viewpoint, the concepts represent the complete maximal range of subsets described by the elements. The task may be reduced to the discovery of the closed sets for the test rules. The algorithm used to discover the concept set comes from set theory. It generates possible subsets in an iterative manner, starting from the most specific test value. At each step, new subsets are generated as members are coupled, where a new subset is the intersection of the intents of the already existing subsets. We define the border of a partially constructed subset through an element-wise insertion at the root node. The overall decision tree construction is supported by a structure that, once the algorithm has finished its work, contains the entire three-valued decision tree. Firstly, the set in the root node is a dynamic structure that changes while all elements remain possible to be tested. When an attribute depends on its entropy computation among the rest of the attributes, it can be simply formulated that the new border always includes the new element, whereas all elements of the old border that are greater than the new element are dropped out.
G_A(X) = 1 if x_p ∈ U_p ∪ N_p ∪ {v}; unknown if x ∈ (U_p ∪ {v}) ∩ D_p; 0 if v ∉ x_p
To consider the universal set X and its power set P(X), let K be an arbitrary index set; it can be proved that a function Nec is a necessity function if and only if it satisfies the following relationship:
While the function Pos is a possibility function if and only if it satisfies the following
relationship:
S(x_u) = (1/m) Σ_{i=1}^{m} S_i(x_u)
x_u R x_v ⇔ S(x_u) ≥ S(x_v)
4. We calculate the overall contributions to the agreement for all the decision makers: w_1, ..., w_m.
a. If w_i ≥ 0 for every i ∈ {1, ..., m}, then we obtain the new collective scores by:
S^w(x_u) = (1/m) Σ_{i=1}^{m} w_i S_i(x_u)
x_u R^w x_v ⇔ S^w(x_u) ≥ S^w(x_v)
Fig. 3. Framework to determine rough set and induce desirable values to validate a classifier [diagram: a decision table is split into training and testing data tables; a fuzzy representation is applied to generate rules, the rules classify the test data, and the result is checked (OK?) before the cut is accepted]
and collective scoring vectors. We measure the agreement in each subset of possible values, and a weight is assigned to each decision maker according to the overall contribution to the agreement. Those decision makers whose overall contribution to the agreement is not positive are expelled, and we re-initiate the process with the opinions of the decision makers which positively contribute to the agreement. The sequential process is repeated until it determines a final subset of decision makers, all of whom positively contribute to the agreement.
Then, we apply a weighted procedure where the scores each decision maker indirectly assigns to the alternatives are multiplied by the weight of the corresponding decision maker, and we obtain the final ranking of the alternatives. As a result, the mutually exclusive attribute value can be easily extracted from a decision tree in the form of ambiguity reduction. Consequently, we assign the examples with missing values of the test attribute to the ‘yes’, ‘no’ and ‘yes-or-no’ outgoing branches of a tree node.
depth is reached, i.e. 3 when T3 is used or 2 when T2 is used. In the second case, building stops at that node only if all the records remaining there are classified to the same class in a minimum proportion of 70 to 100 percent. We used eight different versions of T3 in our experiments, with different depths and lengths and with MAE set to 0.1–0.3.
We associate a score with each value and aggregate the individual values by means of the average of the individual scores, providing a collective weak order on the set of alternatives. Then we assign an index to each value which measures its overall contribution to the alternatives. Taking into account these indices, we weight individual scores and obtain a new collective ranking of alternatives. Once the exact value is chosen, a branch relative to each value of the selected attribute will be created. The data are allocated to a node according to the value of the selected attribute. This node is declared a leaf when the gain ratio values of the remaining attributes do not justify further splitting. After excluding the opinions of those decision makers whose overall contributions to the agreement are not positive, the new collective ranking of alternatives provides the final decision. Since the overall contribution to agreement indices are usually irrational numbers, it is unlikely that the weighted procedure produces ties among alternatives. The proposed decision procedure penalizes those individuals who are far from consensus positions; this fact gives decision makers an incentive to moderate their opinions. Otherwise, they can be excluded or their opinions can be underestimated. However, it is worth emphasizing that our proposal only requires a single judgment from each individual about the alternatives. We can generalize our group decision procedure by considering different aggregation operators for obtaining the collective scores. Another generalization consists in measuring distances between individual and collective scoring vectors by means of different metrics.
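A simplified sketch of this sequential procedure is given below. The contribution to agreement is approximated here by the correlation between each decision maker's scores and the collective scores (the chapter works with distances between scoring vectors instead); decision makers with non-positive contributions are expelled, and the remaining ones are reweighted to produce the final collective scores. The score matrix is illustrative, not experimental data.

```python
import numpy as np

def consensus_ranking(S: np.ndarray, max_rounds: int = 10) -> tuple[np.ndarray, np.ndarray]:
    """S[i, u] is the score decision maker i gives to alternative u.
    1. collective scores = mean of the individual scores;
    2. each decision maker's contribution to agreement is measured (here) as
       the correlation between their scores and the collective scores;
    3. decision makers with non-positive contributions are expelled and the
       process restarts with the remaining opinions;
    4. finally the collective scores are recomputed, weighting each remaining
       decision maker by their contribution."""
    active = np.arange(S.shape[0])
    for _ in range(max_rounds):
        collective = S[active].mean(axis=0)
        contrib = np.array([np.corrcoef(S[i], collective)[0, 1] for i in active])
        keep = contrib > 0
        if keep.all():
            weights = contrib / contrib.sum()
            return weights @ S[active], active
        active = active[keep]
    raise RuntimeError("no consensus subset found")

# Illustrative scores of 4 decision makers over 3 alternatives (not real data).
S = np.array([[0.8, 0.5, 0.2],
              [0.7, 0.6, 0.1],
              [0.9, 0.4, 0.3],
              [0.1, 0.2, 0.9]])   # this opinion disagrees and will be expelled
scores, kept = consensus_ranking(S)
print(scores.round(3), kept)
```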
4. Experimental result
The presence of incomplete information in the ecological system impacts the classification evaluation of planting material behavior in physiological analysis, since the semantics are no longer obvious and uncertainty is introduced. In Figure 3, the graph shows that the observed data are more likely to be satisfied when additional information is included. The experiment combines planting material with expert knowledge and ecological information to generate an ROC curve. It can be seen that, in most of the data sets, the number of data belonging to the various categories does not exactly match the results of the physiological analysis. This is because some of the planting material that should be classified as type A has been misclassified into type B and vice versa.
In this experiment, we examine gradually the selected records of complete, ranked features and missing records of real physiological traits. The result in Table 1 shows how the features with missing values compare to the other complete records. By adding additional features of ecological information to the physiological traits, it shows less correlation between each feature with missing values and the others. The possible clusters rearrange their structure, and the rules generated mostly come from the combination of planting material and ecological information. As a result, the selected proper fuzzy values in decision tree construction provide fewer and simpler rules. The approach removes some irrelevant features during the selection of the subset of training data.
Figure 4 shows some of the rule productions obtained from the experiment. From it, the program obtained two decision tables for each clinician, one consisting of those with an ‘induce’ response and the other with a ‘Don’t induce’ response (these tables are not shown). Rather than attempt to create a reduct from indistinguishable rules at this point we
6. Conclusion
A missing value is often described as an uncertainty in the development of a decision
system. It is especially difficult when there is a genuine disagreement between experts in the
field, and when there are complex and unstated relationships between the variables that are used in the decision. The field of planting materials is particularly difficult to study because of the wide variation in environmental factors and the differing population groups that the genotypes serve.
This decision support system allows the breeders or experts to make decisions in a way that
is similar to their normal practice, rather than having to declare their knowledge in a
knowledge engineering sense. Feedback from the experts in their domain should also be
collected to refine the system especially in the evaluation of the decision trees themselves.
In this research work, we point out the extension of the splitting criterion to handle missing values. In addition, the splitting criterion also discards the irrelevant predictor attributes for each interior node. From the experiments reported, the classification error using our method was still low compared to the original complete data sets when additional attributes were included. It showed that the discovered rules have strong predictive power for missing values; these rules were not captured by the C4.5 algorithm. On the other hand, the proposed method with its splitting criteria allows normally undeclared rules to be discovered. An added advantage here is that every expert has the opportunity to study the rules and make decisions on planting material selection. The rough sets technique, in turn, produces a coarse splitting criterion and is often used for knowledge discovery from databases. It is also useful in any situation where a decision table can be constructed, and it has the advantage of producing a set of comprehensible rules. Because this technique produces a lower and upper approximation of the true value, it allows a degree of uncertainty to be represented. The rules that are generated by the decision tree could be applied to a database of ‘standard’ procedures, or to one that reflects the planting materials, to obtain standard rules or to draw up guidelines for the development of an expert system in the oil palm industry.
We have focused on the measurement of uncertainty in decision modeling. In this study, a special treatment of uncertainty is presented using fuzzy representation and a clustering analysis approach in constructing the decision model. The uncertainty is not only due to the lack of precision in measured features, but is often present in the model itself, since the available features may not be sufficient to provide a complete model of the system. The results of the study show that uncertainty is reduced and that several plausible attributes should be considered during the classification process. This formalization allows better understanding and flexibility in selecting planting material and in accepting the classification process.
7. Acknowledgement
The author would like to thank Universiti Tun Hussein Onn Malaysia for funding the
research grant.
8. References
Chao, S., Xing Z.,Ling X. & Li, S., (2005). Missing is Useful: Missing Values in Cost-Sensitive
Decision Trees. IEEE transactions on knowledge and data engineering, vol. 17, no. 12.
Chiang, I-J. & Hsu, J.Y.J. (2002). Fuzzy classification trees for data analysis. Fuzzy Sets and Systems, 130(1), 87-99.
Congdon, C.B., (1995). A comparison of genetic algorithms and other machine learning
systems on a complex classification task from common disease research, The
University of Michigan.
De Jong, K. & Spears, W.M. (1991). Learning concept classification rules using genetic algorithms. Proceedings of the Twelfth International Joint Conference on Artificial Intelligence.
Dietterich, T., Kearns, M., & Mansour, Y. (1996). Applying the weak learning framework to
understand and improve C4.5. 13th International Conference on Machine Learning, pp.
96–104.
Fayyad, U. M. & Irani, K. B. (1993). Multi-interval discretization of continuous valued
attributes for classification learning. 13th International Joint Conference on Artificial
Intelligence, pp. 1022–1027.
Fayyad, U. M. (1994). Branching on attribute values in decision tree generation. 12th National
Conference on Artificial Intelligence, pp. 601–606.
Heiko, T., Christian, D. & Rudolf, K. (2004). Different Approaches to Fuzzy Clustering of
Incomplete Datasets. International Journal of Approximate Reasoning 35 239-249.
Jenhani, I., Elouedi, Z., Ben Amor, N. & Mellouli, K. (2005). Qualitative inference in possibilistic option decision trees. In Proceedings of ECSQARU'2005, pp. 944-955.
Janikow, C.Z. (1993). Fuzzy processing in decision trees. In Proceedings of the International Symposium on Artificial Intelligence.
Janikow C.Z, (1996). Exemplar learning in fuzzy decision trees, in Proceedings of FUZZ-IEEE.
Koller, D. and Sahami, M. (1996). Toward Optimal Feature Selection, 13th International
Conference on Machine Learning, pages 284-292.
Latkowski, R. (2002). Incomplete Data Decomposition for Classification. 3rd International
Conference, Malvern, PA, USA.
Mendonca, F., Vieira, M., & Sousa,.M. (2007). Decision Tree search Methods in Fuzzy
Modeling and Classification, International Journal of Approximate Reasoning.
Michelini, R. & Rossi, G. (1995). Measurement uncertainty: a probabilistic theory for intensive entities. Measurement, 15, 143-157.
Murthy, S., Kasif, S., Salzberg, S., and Beigel, R. (1993). OC1: Randomized induction of
oblique decision trees. 11th National Conference on Artificial Intelligence, pp. 322–327.
Olaru, C. & Wehenkel, L. (2003). A complete fuzzy decision tree technique. Fuzzy sets and
systems 138, 221 – 254, Elsevier.
Passam, C., Tocatlidou, A., Mahaman, B.D. & Sideridis, A.B. (2003). Methods for decision making with insufficient knowledge in agriculture. EFITA Conference, 5-9 July, Debrecen, Hungary.
Quinlan, J. (1993). Introduction to Decision Tree. Morgan Kaufman Publisher.
Quinlan, J. (1996). Improved use of continuous attributes in c4.5. Journal of Artificial
Intelligence Research, 4:77-90.
Rouse, B. (2002). Need to know - Information, Knowledge and Decision Maker. IEEE Transaction
On Systems, Man and Cybernatics-Part C: Applications and Reviews, Vol 32, No 4.
Schafer, J., (1997). Analysis of Incomplete Multivariate Data. Chapman and Hall, London.UK.
Schafer, J., and Graham, J., (2002). Missing data: Our view of the state of the art.
Physchological Methods 7, 147-177.
Tokumaru, M. & Muranaka, N. (2009). Product-Impression Analysis Using Fuzzy C4.5
Decision Tree. Journal of Advanced Computational Intelligence and Intelligent
Informatics, Vol.13 No 6.
Wang, X. & Borgelt, C. (2004). Information measures in fuzzy decision trees. In Proceedings of the IEEE International Conference on Fuzzy Systems.
Yuan, Y. & Shaw, M.J. (1995). Induction of fuzzy decision trees. Fuzzy Sets and Systems, 69, 125-139.
Zhang, Y. & Huang, C. (2005). Decision Tree Pruning via Integer Programming.
15
1. Introduction
Surface irrigation systems have the largest share in irrigated agriculture all over the world.
The performance of surface irrigation systems highly depends upon the design process,
which is related to the appropriateness and precision of land leveling, field shape and
dimensions, and inflow discharge. Moreover, the irrigation performance also depends on
farmer operative decisions, mainly in relation to land leveling maintenance, timeliness and
time duration of every irrigation event, and water supply uncertainties (Pereira, 1999;
Pereira et al., 2002).
The design procedures for farm surface irrigation changed drastically in recent years. The classical ones were based upon empirical rules (Criddle et al., 1956; Wilke & Smerdon, 1965). A quasi-rational methodology taking into consideration the main design factors was developed by the Soil Conservation Service and based upon intensive field observations (SCS, 1974, 1979). This methodology was widely applied and adopted with optimization procedures (Reddy & Clyma, 1981a, 1981b). It assumes the classification of soils into infiltration families related to soil texture and obtained from infiltrometer observations (Hart et al., 1980). For furrows design, an empirical advance curve was applied relating inflow discharge, slope, length, and infiltration. Other classical methods refer to the volume-balance models using the continuity equation and empirical information (Walker & Skogerboe, 1987; Yu & Sing, 1989, 1990; Clemmens, 2007). These types of models also apply to irrigation management (Latimer & Reddel, 1990; Camacho et al., 1997; Mailhol et al., 2005).
Numerous mathematical computer models for surface irrigation simulation have been developed. They originated a new age of design methods, with increased quality of procedures, because they allow the quantification of the integrated effect of the main irrigation factors (length, discharge, slope, soil roughness, shape, and infiltration) on performance, thus enlarging the solution set with higher precision and effectiveness than the traditional empirical methods. Strelkoff & Katopodes (1977) first presented an application of zero-
inertia modeling for border irrigation. Further developments were applied to borders,
basins and furrows (Fangemeier & Strelkoff, 1978; Clemmens, 1979; Elliott et al., 1982), and
were followed by furrow surge flow modeling (Oweis & Walker, 1990). The kinematic wave and the hydrodynamic models for furrows were later adopted (Walker & Humpherys,
1983; Strelkoff & Souza, 1984). Computer models for design of basin irrigation include
BASCAD (Boonstra & Jurriens, 1978) and BASIN (Clemmens et al., 1993), and for border
irrigation include the BORDER model (Strelkoff et al., 1996). The models SRFR (Strelkoff,
1993) and SIRMOD (Walker, 1998) apply to furrows, basin and border irrigation and adopt
various approaches for solving the continuity and momentum equations. Reviews were
recently produced by Pereira et al. (2006) and Strelkoff & Clemmens (2007).
In addition to hydraulic simulation models, surface irrigation design requires the application of other types of models, such as those for irrigation scheduling, land leveling, distribution systems, and cost and environmental analysis. In practice, it is usually difficult to manage data for an interactive application of these models in design when they are not integrated with a common database. The decision support system (DSS) methodology provides the framework to explore the synergy between mathematical simulation models, data and user knowledge through their integration, aimed at helping the decision-maker to solve complex problems. The DSS methodology makes handling data of various types easier and more effective, and favors the integration of simulation models and their interactive application. It provides for a decision-maker learning process and it supports a decision process and
related choices through multicriteria analysis. DSS models are often applied to irrigation
planning and policy analysis (Bazzani, 2005; Riesgo & Gómez-Limón, 2006), as well as to
performance assessment and water demand and delivery simulation (Raju & Duckstein,
2002; Rao et al., 2004; Oad et al., 2006; Raju et al., 2006). However, few applications are
developed for irrigation design (McClymont, 1999; Hornbuckle et al., 2005; Gonçalves et al.,
2009; Pedras et al., 2009).
The variety of aspects influencing irrigation performance (Burt et al., 1997; Pereira and
Trout, 1999; Pereira et al., 2002) makes the design process quite complex, and a multicriteria
analysis approach is then advantageous. Alternative design solutions may be ranked
following various objectives and criteria, such as improving the irrigation performance,
achieving water saving, attaining high water productivity, or maximizing farm incomes.
Using a DSS and multicriteria analysis is helpful to produce appropriate comparisons
among alternative design solutions and to perform a trade-off analysis (Roy & Bouyssou,
1993; Pomerol & Romero, 2000). In this line, the DSS SADREG for surface irrigation design
was developed, tested and applied to different agricultural, economical and environmental
conditions, considering several irrigation methods, equipments and practices. Applications
include the Lower Mondego Valley, Portugal, improving basin irrigation for water savings
and salinization control in the Upper Yellow River Basin, China, improving furrow
irrigation in Fergana Valley, Aral Sea Basin, Uzbekistan, and modernizing furrows and
border irrigation in Euphrates Basin, Syria. The objective of this chapter is to present the
DSS model and its Web application, describing procedures to design and users support.
The development of the DSS for the Web allows better application flexibility, improving user support for database access and enlarging the number of users, particularly in areas of the world where water scarcity demands a better use of irrigation water. The application version presented here comprises a Web module and a simulation engine module, which includes the models integrated in SADREG. Web access to the DSS allows an easier transfer of knowledge and tools to improve the procedures to evaluate and design field irrigation systems. The DSS location on a server allows data sharing and comparison of results by different users. Some examples of Web applications in the irrigation domain show their usefulness (Thysen and Detlefsen, 2006; Car et al., 2007).
The organization of this chapter considers, first, the description of the DSS model, with details about the process of design and selection of surface irrigation alternatives; then, in sub-chapter 3, the explanation of Web methodologies in a DSS context, referring to the
programming languages and the architecture of Web client and server; and, in sub-chapter
4, the description of the DSS software use and Web interface.
2. DSS model
SADREG is a DSS designed to assist designers and managers in the process of design and
planning improvements in farm surface irrigation systems – furrow, basin and border
irrigation. It includes a database, simulation models, user-friendly interfaces and
multicriteria analysis models.
[Figure: SADREG structure — a design component (database, design models and designer interface) creates the alternatives, which are passed to a selection component (multicriteria models and manager interface) that outputs the ranked alternatives]
3. Data created through simulations performed with the models SIRMOD for surface
irrigation simulation (Walker, 1998) and ISAREG for irrigation scheduling simulation
(Pereira et al., 2003);
4. Projects data, referring to data characterizing the design alternatives; and
5. Selection data, referring to the value functions and decision priorities.
[Diagram: input data for design (soils and environment, crops, equipment, economic data on costs and incomes), SIRMOD output simulation data and results, projects and design alternatives, and value functions, feeding the multicriteria analysis]
Fig. 2. Data base components in relation with respective data sources and uses in the design process
SADREG includes various computational models and tools (Fig. 3). The simulation of the
surface irrigation systems – furrows, borders, and basins – is performed by the surface
irrigation simulation model SIRMOD (Walker, 1998), which is integrated within SADREG.
The water balance simulation to define the application depths and timings is performed by
the ISAREG model (Pereira et al., 2003), which is linked (loose integration) with SADREG
and explored interactively. Calculations relative to land leveling and farm water distribution
systems are performed through specific built-in tools. These computational tools provide for
the characterization of each design alternative, including a complete set of performance
indicators. The resulting data are later handled by an impact analysis tool, so creating all
data required for multicriteria analysis. The impact analysis tool performs calculations
relative to crop yields and related incomes (benefits), costs, and environmental impacts as
described later. Ranking and selection of alternatives are performed with composite
programming and the ELECTRE II models (Roy & Bouyssou, 1993; Pomerol & Romero,
2000).
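To give a flavour of how a composite programming style of aggregation can rank alternatives, the sketch below combines criteria that have already been mapped to 0-1 value functions into a weighted L_p distance to the ideal point; the criteria, weights and alternative names are hypothetical, and this is not the actual SADREG implementation (ELECTRE II is not sketched).

```python
import numpy as np

def composite_distance(values: np.ndarray, weights: np.ndarray, p: float = 2.0) -> float:
    """Weighted L_p distance of a (0-1 normalised) criterion vector to the
    ideal point 1.0; a smaller distance means a better alternative."""
    return float((weights @ np.abs(1.0 - values) ** p) ** (1.0 / p))

# Criteria already mapped to 0-1 value functions (1 = best); illustrative only:
# columns = [water saving, farm income, environmental impact]
alternatives = {
    "graded furrows": np.array([0.6, 0.8, 0.5]),
    "level basin":    np.array([0.9, 0.6, 0.7]),
    "graded border":  np.array([0.7, 0.7, 0.6]),
}
weights = np.array([0.4, 0.4, 0.2])          # decision priorities, sum to 1
ranking = sorted(alternatives, key=lambda a: composite_distance(alternatives[a], weights))
print(ranking)
```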
The SADREG applications scope comprises: (a) a single field analysis relative to alternative
design options for furrow, basin, or border irrigation, considering several decision variables
such as field slopes, farm water distribution systems and runoff reuse; and (b) an irrigation
sector analysis, when a spatially distributed database relative to the farm systems is
available. In this case, the alternatives are assessed jointly with modernization options
relative to the conveyance and distribution network. It applies to farm land parcels or fields of rectangular shape, with a well-known geographical location, supplied by a hydrant, a gate, or another facility at its upstream end.
[Figure 3: DESIGN and SELECTION components of SADREG — on the design side, the integrated models (SIRMOD surface irrigation simulation, ISAREG water balance) and the built-in tools (land levelling design, distribution systems); on the selection side, the impact analysis tool and the multicriteria analysis models (composite programming and/or ELECTRE II), ranking alternatives inside groups and/or among groups of alternatives]
Fig. 4. Scheme of the creation of field design alternatives using a multilevel approach for design and application of multicriteria ranking and selection
The design alternatives are clustered into groups included in a project and relative to:
1. The upstream distribution system, which depends upon the selected irrigation method
and equipment available; and
2. The tail end management system, which also depends upon the irrigation method and
the equipment available.
The alternatives constitute complete design solutions. Within a group, they are differentiated
by the operative parameters: the inflow rate per unit width of land being irrigated or per
furrow, and the number of subunits.
Table 1. Irrigation methods
- Level basin: soil surface flat or furrowed; field slopes zero in all directions; point inflow at one or various locations, or to individual furrows; tail end diked.
- Graded basin and borders: soil surface flat or furrowed; longitudinal slope ≠ 0 and cross slope = 0; point inflow at one or various locations, or to individual furrows; tail end diked for basins and open for borders.
- Graded furrows: soil surface furrowed; longitudinal and cross slopes ≠ 0; inflow to individual furrows; tail end open or diked.
[Flowchart: the land levelling computation iterates until the cut/fill ratio satisfies 1 < cut/fill < 1.2, after which the results are produced]
Z = K τ^a + f0 τ    (2)
where Z = cumulative infiltration (m3 m-1); τ = infiltration time (min); K (m3 m-1 min-a) and a (dimensionless) are empirically adjusted parameters; and f0 = basic infiltration rate (m3 min-1 m-1).
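A minimal sketch of Eq. (2) as a function is shown below; the parameter values in the example call are illustrative and are not taken from a specific infiltration family.

```python
def cumulative_infiltration(tau: float, K: float, a: float, f0: float) -> float:
    """Kostiakov-Lewis cumulative infiltration, Eq. (2): Z = K*tau**a + f0*tau.
    tau  - infiltration (opportunity) time (min)
    K, a - empirically adjusted parameters
    f0   - basic infiltration rate (m3 min-1 m-1)
    Returns Z (m3 per metre)."""
    return K * tau ** a + f0 * tau

# Illustrative parameter values only.
print(cumulative_infiltration(tau=120.0, K=0.0088, a=0.42, f0=0.0001))
```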
For surge-flow, the procedure developed by Walker & Humpherys (1983) is adopted:
- For infiltration on dry soil (first wetting), Eq. (2) is applied;
- For infiltration on wetted soil (third and successive wettings), the parameters a, K and f0 in Eq. (2) are modified into Ks, as, and f0s, thus producing the surge infiltration equation; and
- For infiltration during the second wetting, a transition curve is applied.
This transition equation balances the effects represented by the equations for dry and wetted soil (Walker, 1998; Horst et al., 2007):
Z = [K_s + (K - K_s)\,FP]\;\tau^{\,a_s + (a - a_s)\,FP} + [f_{0s} + (f_0 - f_{0s})\,FP]\;\tau    (3)
where FP (dimensionless) = distance-based factor computed from the advance distances x_{i-2}
and x_{i-1} relative to the surge cycles i-2 and i-1.
To characterize each field, SADREG includes a set of infiltration data concerning families of
infiltration curves for continuous and surge flow, typical of seasonal irrigation events (first,
second, and later irrigations) under flat soil infiltration conditions. Field observations of
infiltration can be added to this set of infiltration curves and be used for the respective
design case study. When no field infiltration data are available, the user selects the curves to
be used considering the available soil data. To adjust the parameters for furrow irrigation,
the procedure proposed by SCS (1979) and Walker (1989) is applied. It is based on the
average wetted perimeter WP (m) and the adjusting coefficient (Cadj). WP is given by
WP = 0.265 \left( \dfrac{q\,n}{\sqrt{S_0}} \right)^{0.425} + 0.227    (4)
where q = furrow inflow discharge (l s-1); n = Manning’s roughness coefficient (s m-1/3); and
So = furrow longitudinal slope (m m-1). The adjusting coefficient Cadj is estimated as
C_{adj} = 0.5 \ \text{if}\ \dfrac{WP}{FS} \le 0.5, \quad \text{else}\ C_{adj} = \dfrac{WP}{FS}    (5)
where FS (m) = furrow spacing. The parameters K and f0 [Eq. (2)] and the surge-flow
infiltration parameters are adjusted as
The SIRMOD model is applied for several input conditions that cover all situations relative to
the user options to create alternatives. The continuous input variables consist of: field length
Decision variables:
- Field layout and land levelling: upstream distribution side (OX or OY); field length (FL) and field width (FW); cross slope (SoC); longitudinal slope (So); number of outlets (No); number of units (Nu).
- Water supply conditions: total field supply discharge (QF); outlet discharge (Qo) and hydraulic head (Ho); time duration of field delivery (tF).
Application efficiency: Ea = 100 Zreq / D (%) if Zlq ≥ Zreq, and Ea = 100 Zlq / D (%) if Zlq < Zreq
Distribution uniformity: DU = 100 Zlq / Zavg (%)
Infiltration efficiency: IE = 100 Zavg / D (%)
Tail water runoff ratio: TWR = 100 Zrun / D (%)
Deep percolation ratio: DPR = 100 Zdp / D (%)
Zreq - the average water depth (mm) required to refill the root zone in the lower quarter of
the field
D - the average water depth applied to the irrigated area (mm)
Zlq - the average depth of water infiltrated in the lower quarter of the field (mm)
Zavg - the average depth of water infiltrated in the whole irrigated area (mm)
Zavg (root) - the average depth of infiltrated water stored in the root zone (mm)
Zrun - the depth of water that runs off at the tail end of the field (mm)
Zdp - the depth of water that percolates below the root zone (mm)
Results are then saved in the database to be further applied to other alternatives. Results for
intermediate values of each continuous input variable are calculated by interpolation
between those stored in the database, instead of running the model several times.
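As an illustration of this interpolation step, the sketch below uses simple linear interpolation over one continuous input variable (field length) and one stored indicator (application efficiency); the variable names and stored values are hypothetical, and SADREG's actual interpolation scheme may differ.

```python
import numpy as np

# Field lengths (m) for which SIRMOD runs are stored, and the application
# efficiencies Ea (%) read back from the database (hypothetical values).
stored_lengths = np.array([100.0, 150.0, 200.0, 300.0, 400.0])
stored_ea = np.array([82.0, 78.5, 74.0, 68.0, 61.5])

def interpolated_ea(field_length):
    """Estimate Ea for an intermediate field length by linear interpolation
    between stored simulation results, avoiding a new SIRMOD run."""
    return float(np.interp(field_length, stored_lengths, stored_ea))

print(interpolated_ea(250.0))  # lies between the 200 m and 300 m results
```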
Symbols:
A = field area (ha)
a0 = fixed cost of the land leveling machines (€/operation)
a1 = hourly cost of the land leveling machines (€/h)
a2 = unitary maintenance cost of the land leveling machines
Ncomp = number of equipment components
Nirrig = annual number of irrigation events
NLT = equipment component life time (years)
ns = number of subunits per unit
Nsub = number of component replacements during NAP
where W = actual water available for crop use during the irrigation season (mm);
Wopt = seasonal water required for achieving maximum crop yield (mm); Y and Yopt = crop
yields (kg ha-1) corresponding to W and Wopt, respectively; and ki (i = 1, ..., 4) are empirical
coefficients typical for the crop, and environmental and agronomical conditions under
consideration. The decreasing branch of this function is related to soil drainage conditions
and to the impact of excess water on yields. To consider these effects, the user should adjust
the descending branch of the quadratic function to the local conditions, as indicated in Fig. 6,
where three types of descending branches are presented.
The environmental attributes considered for selection of the design alternatives are:
1. The total irrigation water use during the crop season;
2. The nonreused runoff volume, which represents a nonconsumed and nonbeneficial
fraction of water use and is an indicator of potential degradation of surface waters;
3. The deep percolation volume, which also represents a nonconsumed and nonbeneficial
fraction of water use and is an indicator of potential degradation of ground waters;
4. The potential land leveling impacts on the soil quality; and
5. The potential for soil erosion.
The total water use, runoff and deep percolation volumes are calculated with SIRMOD for
every single event of the irrigation season. The seasonal values are obtained by summing up
these results. The attribute relative to land leveling impacts is expressed by the average cut
depth, because the smaller the cut depths, the lower the impacts on soil quality.
Fig. 6. Yield function versus relative water application (W/Wopt), showing three types of
descending branches
q·So (l s-1 %) | <0.30 | 0.30–0.50 | 0.50–0.62 | 0.62–0.75 | 0.75–0.87 | 0.87–1.00 | 1.00–1.25 | 1.25–1.50 | >1.50
EI | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
Table 5. Potential erosion index EI for furrows in a silt-loam soil
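A lookup of Table 5 can be coded directly; the following sketch maps the product q·So to the erosion index EI (function and variable names are illustrative only).

```python
import bisect

# Upper bounds of the q*So classes (l s-1 %) in Table 5; values above the
# last bound fall into the highest class, EI = 9.
_CLASS_BOUNDS = [0.30, 0.50, 0.62, 0.75, 0.87, 1.00, 1.25, 1.50]

def erosion_index(q, so):
    """Potential erosion index EI for furrows in a silt-loam soil (Table 5).

    q  : furrow inflow discharge (l s-1)
    so : furrow longitudinal slope (%)
    """
    return bisect.bisect_right(_CLASS_BOUNDS, q * so) + 1

print(erosion_index(q=1.2, so=0.6))  # q*So = 0.72 falls in the 0.62-0.75 class, EI = 4
```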
The following linear utility function is applied for the economic criteria:
U_j(x_j) = \alpha\, x_j + \beta    (9)
where x_j = attribute value; \alpha = graph slope, negative for costs and positive for benefits; and
\beta = utility value U_j(x_j) for a null value of the attribute. A logistic utility function
(Fig. 8) is adopted for the environmental criteria:
U_j(x_j) = \dfrac{e^{K_{bj}(x_{Mj} - x_j)}}{e^{K_{aj}} + e^{K_{bj}(x_{Mj} - x_j)}}    (10)
where Kaj and Kbj = function parameters; and xMj = maximum attribute value corresponding
to a null utility for the j criterion.
Fig. 8. Logistic utility function relative to deep percolation where xa = 40 mm and xb= 110
mm, and U(xa) = 0.95 and U(xb) = 0.05
To adjust this function (10) to user preferences, it is necessary to select the attribute values xa
and xb that correspond to a very low and a very significant impact, e.g., such that
U(xa) = 0.95 and U(xb) = 0.05, as shown in Fig. 8. Based on these values, and for a specific
criterion, the parameters Ka and Kb are then calculated as follows (Janssen, 1992):
K_a = \ln\!\left[ \dfrac{\left(1 - U(x_a)\right) e^{K_b (x_M - x_a)}}{U(x_a)} \right]    (11)

K_b = \dfrac{ \ln\!\left[ \dfrac{U(x_a)\left(1 - U(x_b)\right)}{\left(1 - U(x_a)\right) U(x_b)} \right] }{x_b - x_a}    (12)
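The calibration of Eqs. (10)–(12) can be sketched as follows; the function names are illustrative, and defaulting xM to xb is an assumption made only for this example (the resulting utility values do not actually depend on that choice).

```python
import math

def logistic_utility(xa, xb, ua=0.95, ub=0.05, xm=None):
    """Build the utility function of Eq. (10), with Ka and Kb calibrated from
    the anchor points (xa, ua) and (xb, ub) through Eqs. (11) and (12).

    xm is the attribute value of null utility; defaulting it to xb is an
    assumption made for this sketch only (U(x) is independent of the choice).
    """
    if xm is None:
        xm = xb
    kb = math.log(ua * (1.0 - ub) / ((1.0 - ua) * ub)) / (xb - xa)   # Eq. (12)
    ka = math.log((1.0 - ua) * math.exp(kb * (xm - xa)) / ua)        # Eq. (11)

    def utility(x):
        # Eq. (10): U(x) = e^{Kb(xM - x)} / (e^{Ka} + e^{Kb(xM - x)})
        num = math.exp(kb * (xm - x))
        return num / (math.exp(ka) + num)

    return utility

# Deep percolation criterion of Fig. 8: xa = 40 mm, xb = 110 mm.
u = logistic_utility(xa=40.0, xb=110.0)
print(round(u(40.0), 2), round(u(110.0), 2), round(u(75.0), 2))  # ~0.95, ~0.05, 0.5
```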
The dominance preanalysis is a procedure to select the nondominated alternatives, i.e., those
for which no other feasible alternative exists that could improve the performance relative to
one criterion without decreasing the performance relative to another criterion. The
multicriteria selection applies to those nondominated alternatives. The satisfaction
preanalysis screens the alternatives set by selecting the user-acceptable ones, i.e., those that
for every criterion perform better than a minimum level required by the decision maker.
To apply the multicriteria methods, the user needs to assign priorities by selecting the
weights λj that represent the relative importance of each criterion j as viewed by the decision
maker. These can be directly defined by the decision maker or calculated by the Analytic
Hierarchy Process (AHP) method (Saaty, 1990).
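For illustration, criterion weights can be derived from a Saaty pairwise comparison matrix as its normalized principal eigenvector; the matrix values below are hypothetical, and this sketch omits the consistency check that a full AHP application would include.

```python
import numpy as np

def ahp_weights(pairwise):
    """Criterion weights from a Saaty pairwise comparison matrix, taken as the
    normalized principal eigenvector (consistency ratio check omitted)."""
    values, vectors = np.linalg.eig(np.asarray(pairwise, dtype=float))
    principal = np.real(vectors[:, np.argmax(np.real(values))])
    weights = np.abs(principal)
    return weights / weights.sum()

# Hypothetical comparison of three criteria (e.g. economic, water use, environment):
comparison = [[1.0, 3.0, 5.0],
              [1 / 3, 1.0, 2.0],
              [1 / 5, 1 / 2, 1.0]]
print(ahp_weights(comparison).round(3))
```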
Two multicriteria methods may be applied: composite programming (Bogardi &
Bardossy, 1983) and ELECTRE II (Roy & Bouyssou, 1993; Roy, 1996). Composite
programming is an aggregative multicriteria method that leads to a unique global criterion.
It is a distance-based technique designed to identify the alternative closest to an ideal
solution using a quasi-distance measure. This method allows the analysis of a
multidimensional selection problem through a partial representation on a two-dimensional
trade-off surface.
The distance to the ideal point (Lj = 1-Uj) relative to each alternative ak, is a performance
measure of ak according to the criterion j. The ideal point represents the point on the trade-
off surface where an irrigation design would be placed if the criteria under consideration
were at their best possible level. If this distance is short, this performance is near the
optimum. The composite distance L is computed for each set of N criteria as
L = \left( \sum_{j=1}^{N} \lambda_j\, L_j^{\,p} \right)^{1/p}    (13)
where Lj = distance to the ideal point relative to criterion j; λj = weight of criterion j; and
p = balancing factor between criteria. Each composite distance corresponds to a distance-based average, arithmetic or
geometric, respectively when p = 1 or p = 2. The balancing factor p indicates the importance
of the maximal deviations of the criteria and limits the ability of one criterion to be
substituted by another. A high balancing factor gives more importance to large negative
impacts (a larger distance to the ideal point) relative to any criterion, rather than allowing
these impacts to be obscured by the trade-off process.
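A minimal sketch of Eq. (13) and of the resulting ranking is given below; the alternatives, utilities, and weights are illustrative, and Lj = 1 − Uj as defined above.

```python
def composite_distance(utilities, weights, p=2.0):
    """Composite distance to the ideal point, Eq. (13), with L_j = 1 - U_j."""
    return sum(w * (1.0 - u) ** p for u, w in zip(utilities, weights)) ** (1.0 / p)

# Hypothetical criterion utilities for three alternatives and criterion weights.
weights = [0.5, 0.3, 0.2]
alternatives = {"A1": [0.80, 0.60, 0.90],
                "A2": [0.70, 0.80, 0.70],
                "A3": [0.90, 0.50, 0.60]}
ranking = sorted(alternatives, key=lambda name: composite_distance(alternatives[name], weights))
print(ranking)  # alternatives ordered from closest to farthest from the ideal point
```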
The ELECTRE II is an outranking method that aims to rank alternatives. It is based on the
dominance relationship for each pair of alternatives, which is calculated from the
concordance and discordance indices. The concordance represents the degree to which an
alternative k is better than another alternative m. A concordance index is then defined as the
sum of the weights of the criteria included in the concordance set, i.e., the criteria for
which alternative k is at least as attractive as alternative m. The discordance
reflects the degree to which alternative k is worse than alternative m. For each criterion
in the discordance set, which includes the criteria for which alternative k is worse than m, the
differences between the scores of k and m are calculated. The discordance index, defined as
the largest of these differences, reflects the idea that, beyond a certain level, bad
performances on one criterion cannot be compensated by a good performance relative to
another criterion. The decision maker indicates the thresholds that are used to establish a
weak and a strong outranking relationship between each pair of alternatives.
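The concordance and discordance indices can be sketched as follows for a single pair of alternatives, assuming criterion scores already normalized to [0, 1]; this is a simplified illustration, not the full ELECTRE II procedure with weak and strong outranking thresholds.

```python
def concordance_index(scores_k, scores_m, weights):
    """Share of the total weight carried by the criteria on which alternative k
    is at least as attractive as alternative m (the concordance set)."""
    favourable = sum(w for sk, sm, w in zip(scores_k, scores_m, weights) if sk >= sm)
    return favourable / sum(weights)

def discordance_index(scores_k, scores_m):
    """Largest score difference in favour of m, over the criteria on which k is
    worse than m (scores assumed already normalized to the 0-1 range)."""
    return max(max(sm - sk, 0.0) for sk, sm in zip(scores_k, scores_m))

weights = [0.5, 0.3, 0.2]
k_scores, m_scores = [0.80, 0.60, 0.90], [0.70, 0.80, 0.70]
print(concordance_index(k_scores, m_scores, weights), discordance_index(k_scores, m_scores))
# k outranks m when the concordance exceeds, and the discordance stays below,
# the thresholds chosen by the decision maker.
```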
For every project of a given field (Fig. 4), SADREG produces a large set of alternatives as a
result of numerous combinations of design and operation variables. As mentioned before,
these alternatives are clustered by groups relative to different water distribution equipment
and tail end management options. The multicriteria analysis allows the alternative selection
for each group and the ranking and comparison of the groups of a given project. This
analysis plays an important role in the automatic management of large amounts of data,
screening the alternatives, removing those that are unsatisfactory or dominated, and ranking
and selecting the most adequate ones according to the user priorities.
applications based on this service. The user does not interact directly with the simulation
engine, in order to shorten the time required to complete the tasks. When the user requests a
simulation, the order remains in a waiting list until the simulation engine is available to
process it. Then the produced results are recorded in the simulation database, the user is
notified, and the output becomes available in the Web application.
[Figure: class diagram of the SADREG server classes (CListen, CListenAndOperate,
CExecuteModels, CFacadeDataLayer, CAbstractFactoryPersistence, CDBConnection,
CSingletonMain and related persistence classes) and of the Web application MVC components
(MVC_Model, MVC_View, MVC_Controler, and the Login and Field models and views)]
share data and experiences by groups of users. The access control to user data is guaranteed
at the server level. In the server, data abstraction is provided by the AbstractFactory pattern
(Fig. 12), so that the platform can be changed without impact on the models' source code.
Communications with the server are made with TCP/IP sockets, using a protocol developed
specifically for this purpose. Standards such as SOAP and Web Services, together with an
SSL protection layer, will be included in a higher layer to guarantee security; stateless and
atomic operations will be published to permit the creation of other applications based on this
service.
[Figure: SADREG server components — Listener, CommandParser, Facade, Abstract Factory,
and the models, including SIRMOD]
[Figure: user workflow — select the irrigation method and field slopes, land leveling
simulation, crop selection, irrigation scheduling (ISAREG model), surface irrigation
simulation (SIRMOD model), creation of design alternatives, simulation of alternatives,
multicriteria analysis, and ranking of design alternatives]
4.2 Projects
Once the user has completed the field data register, he/she builds projects to create
alternatives. Each project refers to a particular irrigation method and a land leveling
solution. This solution is obtained by simulating the land leveling, using the field
topography of the parcel and the slopes selected by the user. The user has access to the
simulation results, which include the volumes handled, the spatial distribution of the cut
and fill depths, and the estimated execution time and cost of the work. Interactively, the
user can run several simulations to find a solution that he/she considers appropriate for
the project.
In the next step, the user chooses the crop to be considered in the project from a list of
options. The determination of the irrigation scheduling, i.e., the appropriate application
depths and the irrigation dates along the crop period, is obtained with the ISAREG model.
This calculation requires the application of climatic data from the region where the field is
located, and information about the soil type and crop characteristics. This process is also
interactive with the user, because there are several possible scheduling solutions. The
user then chooses the one most appropriate for the project.
Fig. 14. Example of access to general options data on the Web interface (example of soil
type)
Fig. 17. Example of an alternative view on the Web interface, showing the main result items
5. Conclusion
Appropriate design of surface irrigation systems requires a complex manipulation of data,
models, and decisions. The multicriteria approach integrated in a DSS enables us to solve
that complexity while creating and ranking a large number of design alternatives. The DSS
SADREG has proved to be an appropriate tool for (1) generating design alternatives
associated with attributes of technical, economic, and environmental nature; (2) handling
and evaluating a large number of input and output data; (3) evaluating and ranking design
alternatives using multicriteria analysis where criteria are weighted according to the
priorities and perception of the designer and users; and (4) providing an appropriate
dialogue between the designer and the user.
The Web-based DSS provides a simple way to support surface irrigation designers in
improving project procedures, particularly in water-scarce areas of the world where surface
irrigation requires improvements and expert technical support is still incipient. With a
simple, user-friendly Web interface, several optional languages, and online help, this tool
can contribute effectively to enlarging the knowledge of surface irrigation, its design
procedures, and field practices.
In conjunction with the SADREG Web application, tools are available for the resolution of
specific problems such as land leveling, pipe sizing, and economic calculations, as well as
documents on good equipment and irrigation practices. Application maintenance and user
support are the responsibility of the multidisciplinary team of computer engineers and
agronomists that developed the software, the objective being to dynamically adjust the
system to answer the most frequent difficulties and new challenges. The software is
available on the http://sadreg.safe-net.eu Web site.
6. Acknowledgment
The support of CEER - Biosystems Engineering (POCTI/U0245/2003) is acknowledged.
Studies referred to herein were developed in the framework of the research project
PTDC/AGR-AAM/105432/2008, funded by the Portuguese Foundation for Science and Technology.
7. References
Bazzani, G. M. (2005). “An integrated decision support system for irrigation and water
policy design: DSIRR.” Environmental Modelling & Software, 20(2), 153-163.
Bogardi, I. & Bardossy, A. (1983). “Application of MCDM to geological exploration,” In: P.
Hansen, ed., Essays and Surveys on Multiple Criteria Decision Making. Springer
Verlag, New York.
Boonstra, J. & Jurriens, M. (1978). BASCAD - A Mathematical Model for Level Basin
Irrigation. ILRI Publication 43, Wageningen.
Burt, C. M.; Clemmens, A. J., Strelkoff, T. S., Solomon, K. H.; Bliesner, R. D.; Hardy, L. A.;
Howell, T. A. & Eisenhauer, D. E. (1997). “Irrigation performance measures:
efficiency and uniformity.” J. Irrig. Drain. Eng., 123, 423-442.
Camacho, E.; Pérez, C.; Roldán, J. & Alcalde, M. (1997). “IPE: Model for management and
control of furrow irrigation in real time.” J. Irrig. Drain. Eng., 123(IR4), 264-269.
Car, N.J.; Christen, E.W.; Hornbuckle, J.W. & Moore, G. 2007. Towards a New Generation of
Irrigation Decision Support Systems – Irrigation Informatics? In: Oxley, L. &
Mjeld, J.W.; Lacewell, R.D.; Talpaz, H. & Taylor, C.R. (1990). Economics of irrigation
systems. In: Hoffman, G.F.; Howell , T. A. & Solomon, K. H., eds., Management of
Farm Irrigation Systems, ASAE, St. Joseph, MI, 461-493.
Oad, R.; Garcia, L.; Kinzli, K. D. & Patterson, D. (2006). “Decision support systems for
efficient irrigated agriculture.” In: Lorenzini, G. & Brebbia, C. A., eds., WIT
Transactions on Ecology and the Environment, Vol 96, WIT Press, Wessex, 247-256.
Oweis, T. Y. & Walker, W. R. (1990). “Zero-inertia model for surge flow furrow irrigation.”
Irrig. Sci., 11(3), 131-136.
Pedras, C. M. G.; Pereira, L. S. & Gonçalves, J. M. (2009). “MIRRIG: A decision support system for
design and evaluation of microirrigation systems.” Agric. Water Manage., 96, 691-701.
Pereira, L. S. (1999). “Higher performances through combined improvements in irrigation
methods and scheduling: A discussion.” Agric. Water Manage. 40(2-3), 153-169.
Pereira, L. S., and Trout, T. J. (1999). “Irrigation methods.” In: van Lier HN, Pereira LS,
Steiner FR, eds., CIGR Handbook of Agricultural Engineering, Land and Water
Engineering, vol. I. ASAE, St. Joseph, MI, 297–379.
Pereira, L. S.; Oweis, T. & Zairi, A. (2002). “Irrigation management under water scarcity.”
Agric. Water Manage., 57, 175-206.
Pereira, L. S.; Teodoro, P. R.; Rodrigues, P. N. & J. L. Teixeira. (2003). “Irrigation scheduling
simulation: the model ISAREG.” In: Rossi, G.; Cancelliere, A.; Pereira, L. S.; Oweis,
T.; Shatanawi, M. & Zairi, A., eds., Tools for Drought Mitigation in Mediterranean
Regions. Kluwer, Dordrecht, 161-180.
Pereira, L. S.; Zairi, A. & Mailhol, J. C. (2006). “Irrigation de Surface.” In: Tiercelin, J. R. &
Vidal, A., eds., Traité d’Irrigation, 2ème edition, Lavoisier, Technique &
Documentation, Paris, 513-549.
Pereira, L. S.; Cordery, I. & Iacovides, I. (2009). Coping with Water Scarcity. Addressing the
Challenges. Springer, Dordrecht, 382 p.
Pomerol, J. C. & Romero, S. B. (2000). Multicriterion Decision in Management: Principles and
Practice. Dordrecht, The Netherlands: Kluwer Academic Publishers.
Raju, K. S. & Duckstein, L. (2002). “Multicriterion analysis for ranking an irrigation system:
an Indian case study.” J. Decision Systems, 11, 499–511.
Raju, K. S.; Kumar, D. N. & Duckstein, L. (2006). “Artificial neural networks and
multicriterion analysis for sustainable irrigation planning,” Computers & Operations
Research, 33(4), 1138–1153.
Rao, N. H.; Brownee, S. M. & Sarma, P. B. (2004). “GIS-based decision support system for
real time water demand estimation in canal irrigation systems.” Current Science, 87,
5-10.
Reddy, J. M. & Clyma, W. (1981a). “Optimal design of furrow irrigation systems.” Trans.
ASAE, 24(3), 617-623.
Reddy, J. M. & Clyma, W. (1981b). “Optimal design of border irrigation system.” J Irrig.
Drain. Eng., 17(IR3), 289-306.
Riesgo, L. & Gómez-Limón, J. A. (2006). “Multi-criteria policy scenario analysis for public
regulation of irrigated agriculture.” Agric. Systems 91, 1–28.
Roy, B. (1996). Multicriteria Methodology for Decision Aiding. Kluwer Academic Publishers,
Dordrecht, The Netherlands.
Roy, B. & Bouyssou, D. (1993). Aide Multicritère: Méthodes et Cas. Economica, Paris.
Saaty, T. L. (1990). ”How to make a decision: the AHP.” Europ. J. Operat. Res., 48(1), 9-26.
SCS (1974). Soil Conservation Service National Engineering Handbook. Section 15. Irrigation,
Chapter 4, Border Irrigation, USDA Soil Conservation Service, Washington.
SCS (1979). Soil Conservation Service National Engineering Handbook, Section 15, Irrigation,
Chap.5, Furrow Irrigation, USDA Soil Conservation Service, Washington.
Solomon, K. H.; El-Gindy, A. M. & Ibatullin, S. R. (2007). “Planning and system selection.”
In: Hoffman, G. J.; Evans, R. G.; Jensen, M. E.; Martin, D. L. & Elliot, R. L., eds.,
Design and Operation of Farm Irrigation Systems (2nd Edition), ASABE, St. Joseph, MI,
57-75.
Solomon, K.H. (1984). “Yield related interpretations of irrigation uniformity and efficiency
measures” Irrig. Sci., 5(3), 161-172.
Strelkoff, T. (1993). SRFR. A Computer Program for Simulating Flow in Surface Irrigation.
Furrows, Basins, Borders. USDA, ARS, USWCL, Phoenix.
Strelkoff, T. & Clemmens, A. J. (2007). “Hydraulics of surface systems.” In: Hoffman, G. J.;
Evans, R. G.; Jensen, M. E.; Martin, D. L. & Elliot, R. L., eds., Design and Operation of
Farm Irrigation Systems (2nd Edition), ASABE, St. Joseph, MI, 436-498.
Strelkoff, T. & Katapodes, N. D. (1977). “Border irrigation with zero inertia.” J. Irrig. Drain.
Div., 103(IR3), 325-342.
Strelkoff, T.; Clemmens, A. J.; Schmidt, B. V. & Slosky, E. J. (1996). BORDER - A Design and
Management Aid for Sloping Border Irrigation Systems. Version 1.0. WCL Report #21.
USDA, ARS, USWCL, Phoenix.
Strelkoff, T. & Souza, F. (1984). “Modelling the effect of depth on furrow infiltration.” J. Irrig.
Drain. Eng., 110 (4), 375-387.
Thysen, I. & Detlefsen, N. K. (2006). “Online decision support for irrigation for farmers.” Agric.
Water Manage., 86, 269-276.
Walker, W. R. (1989). Guidelines for Designing and Evaluating Surface Irrigation Systems.
Irrigation and Drainage Paper No. 45, FAO, Rome.
Walker, W. R. (1998). SIRMOD – Surface Irrigation Modeling Software. Utah State University,
Logan.
Walker, W. R. & Humpherys, A. S. (1983). “Kinematic-wave furrow irrigation model.” J.
Irrig. Drain. Div., 109(IR4), 377-392.
Walker, W. R. & Skogerboe, G. V. (1987). Surface Irrigation: Theory and Practice. Prentice-Hall,
Inc., Englewood Cliffs, New Jersey.
Wilke, O. & Smerdon, E. T. (1965). “A solution of the irrigation advance problem.” J. Irrig.
Drain. Div., 91 (IR3), 23-24.
Yu, F. X. & Singh, V. P. (1989). “Analytical model for border irrigation.” J. Irrig. Drain. Eng.,
115(6), 982-999.
Yu, F. X. & Singh, V. P. (1990). “Analytical model for furrow irrigation.” J. Irrig. Drain. Eng.,
116(2), 154-171.
Part 4
1. Introduction
Taxi systems and their municipal organization are generally a problem area in the
metropolitan cities of developing countries. The locations of taxicab stands can be market-
oriented and can cause chaos. A decision support system is needed for the related agencies
to solve such problems. In this chapter, a decision support system for taxicab stands
that can be used in any metropolitan area or municipality is presented. The study attempts
to create a scientific basis for decision makers to evaluate the location choices of taxicab
stands in major cities with the help of GIS and fuzzy logic.
The taxi, which has been used worldwide since the 19th century, is an indispensable component
of urban transport. Compared to other modes of transport, the taxi has a relative advantage
in the comfort and convenience it provides 24 hours a day to its users. However, it is
criticized because of its low occupancy rates and the traffic burden it places on urban streets.
Taxicab stands offer a viable service by providing an identifiable, orderly, efficient, and
quick means to secure a taxi that benefits both drivers and passengers (Giuliani et al., 2001).
Stands are normally located at high-traffic locations such as airports, hotel driveways,
railway stations, subway stations, bus depots, shopping centers, and major street
intersections, where large numbers of passengers are likely to be found. The choice of
location for taxicab stands depends only on legal permissions. From the legal authorities’
side, there is no evidence that scientific criteria are taken into account when granting the
permission. The entrepreneurs willing to manage a taxicab stand have limitless
opportunities to select any point on urban land. These free market conditions cause debates
on the location choices of taxicab stands.
In some cities, taxi companies operate independently and in some other cities, the activity of
taxi fleets is monitored and controlled by a central office, which provides dispatching,
accounting, and human resources services to one or more taxi companies. In both systems,
the optimum organization of taxi companies on urban space is a problem. The taxi company
in the system should provide the best service to the customer, which includes reliability and
minimum waiting time.
As private entrepreneurs, taxi companies tend to locate in some certain parts of the city,
where they believe they can find more passengers. The decision makers, on the other hand,
should find an optimum between taxi companies’ demands and the city’s real needs. They
would rather improve the effectiveness of taxicab stands as a tool to reduce congestion,
while improving convenience for passengers and taxi drivers.
There are two major obstacles confronting the decision makers, who try to evaluate location
choices of taxi companies: the first is about the scale of the analyzed area. Taxicab stands
are a problem of major cities, and managing data at this scale requires specific methods. In
this study, GIS based service area analyses helped to define the area, which can be accessed
from taxicab stands within a psychologically accepted time limit (and critical areas that are
out of service range).
The second problem of the decision makers is the lack of a certain measure for this specific
decision. An alternative way of thinking, which allows modeling complex systems using a
higher level of abstraction originating from our knowledge and experience, is necessary.
Fuzzy logic is an organized and mathematical method of handling inherently imprecise
concepts. It has proven to be an excellent choice for many control system applications since
it mimics human control logic. It uses an imprecise but very descriptive language to deal
with input data more like a human operator.
This study appeared as a response to the demands of the municipalities and the taxi
associations in Turkey. The research is focused on both issues: how well the existing taxicab
stands are located and where the most appropriate places for the incoming demand should
be. The pioneering steps of the study have already been presented in a scientific paper
(Ocalir et al., 2010). An integrated model of GIS and fuzzy logic for taxicab stand location
decisions is built. GIS was used to generate some of the major inputs for the fuzzy logic
model. The location decisions of taxicab stands in big cities, where organizational problems
of taxicab stands hinder a better quality of service, have been examined by this integrated
model, and their appropriateness has also been evaluated according to the selected
parameters. With the fuzzy logic application, the existing taxicab stands are evaluated and
decisions on new taxicab stands are made. The equations to predict the number of taxicab
stands in each traffic zone are obtained by an artificial neural network (ANN) approach.
In the next section, background information on the tools used (the integration of GIS and
fuzzy logic) is provided. It is followed by the introduction of a decision support system, the
Fuzzy Logic Model for Location Decisions of Taxicab Stands (FMOTS), with a case study in
Konya (Turkey). The applicability of the FMOTS model to big cities will be examined for the
selected case study, Konya (Turkey), a city with a population of approximately 2,000,000.
The taxicab stand allocation problem came onto the agenda as taxi use gained importance in
recent years, in parallel with the development of the university and its facilities. Having
sprawled over a large area in a plain, Konya accommodates different sustainable
transportation systems. FMOTS should be applied to this city to understand the supply and
demand of taxicab stands. Advanced GIS techniques on networks, fuzzy logic 3D
illustrations, and artificial neural network equations for new demands will be given for
taxicab stands in Konya, Turkey. The chapter closes with conclusions.
distance and travel time were used as acceptable limits for transit users. Another study with
GIS was performed (Walsh et al., 1997) for exploring a variety of healthcare scenarios, where
changes in the supply, demand, and impedance parameters were examined within a spatial
context. They used network analysis for modeling location/allocation to optimize travel
time and integrated measures of supply, demand, and impedance.
Fuzzy systems describe the relationship between the inputs and the output of a system
using a set of fuzzy IF-THEN rules based on fuzzy set theory (Zadeh, 1965). The classical
theory of crisp sets can describe only the membership or non-membership of an item in a
set. Fuzzy logic, on the other hand, is based on the theory of fuzzy sets, which relates to
classes of objects with unsharp boundaries, in which membership is a matter of degree. In
this approach, the classical notion of binary membership in a set has been modified to
include partial membership ranging between 0 and 1.
Fuzzy logic will be increasingly important for GIS. Fuzzy logic accounts for uncertainty and
imprecision in data, as well as vagueness in decisions and classifications, much better than
Boolean algorithms do, and many implementations have proved to produce better output
data. In recent years, fuzzy logic has been implemented successfully in various GIS
processes (Steiner, 2001). The most important contributions are in the fields of classification,
measurement integrated with remote sensing (Rashed, 2008), raster GIS analysis and
experimental scenarios of development (Liu and Phinn, 2003), risk mapping with GIS
(Nobre et al., 2007; Ghayoumian et al., 2007; Galderisi et al., 2008), ecological modeling
(Malins and Metternicht, 2006; Strobl et al., 2007), qualitative spatial query (Yao and Thill,
2006), data matching (Meng and Meng, 2007), site selection (Alesheikh et al., 2008; Puente et
al., 2007), and finally road network applications (Petrik et al., 2003).
In the scientific literature, there are many studies that implement fuzzy logic for transport
problems. Some of these studies focus on transport networks. Choi and Chung (2002), for
instance, developed an information fusion algorithm based on a voting technique, fuzzy
regression, and a Bayesian pooling technique for estimating dynamic link travel time in
congested urban road networks. In another study, Chen et al. (2008) used fuzzy logic
techniques for determining the satisfaction degrees of routes on road networks. Ghatee and
Hashemi (2009) studied a traffic assignment model in which the travel costs of links depend
on their congestion. Some other studies are on signalized junctions, which are closely related
to queuing theory: Murat and Baskan (2006) developed a vehicle delay estimation model,
especially for the cases of over-saturation or non-uniform conditions at signalized junctions.
Murat and Gedizlioglu (2005) developed a model for isolated signalized intersections that
arranges phase green times (durations) and the phase sequences using traffic volumes. In
another study, Murat and Gedizlioglu (2007) examined the vehicle time headways of
arriving traffic flows at signalized intersections under low traffic conditions, with data from
signalized intersections.
The model brings some innovation by applying fuzzy logic to a transport problem for
an area-based analysis. Point and linear analyses could not bring solutions to the mentioned
problem. An area-based analysis on traffic zones is required to give transportation planning
decisions about an urban area with respect to some land use data such as employment and
population. An analytic process and fuzzy logic functions have been considered for the
evaluation of model stages. Fuzzy logic permits a more gradual assessment of factors.
The implementation of the whole model requires a large amount of information, so the model
validation has been done only at the traffic zone level.
Increasing awareness about the need to design and apply new development and
site selection models has made it necessary to implement many more factors and
parameters than those present in traditional location models, which results in major
complexity for the decision-making processes. That is the reason why in this research a GIS
platform has been used to spatially analyze the service area of taxicab stands, bearing in
mind the hierarchical structure of location factors and considering the fuzzy logic attributes.
The new proposed methodology gathers the necessary tools: GIS software, to organize the
datasets and to apply geo-processing functions on a clear interface; Fuzzy Logic software, to
define and execute the evaluation methodology and to carry out the evaluation process. The
creation of an expert system based on GIS software allows the user to query the system
using different groups of criteria. This makes the planning process for the decision makers
easier.
The fuzzy logic gives the system a type of evaluation closer to the complex reality of urban
planning. The FMOTS is a helpful tool based on a multi-criteria evaluation methodology.
Location attributes are an important source of information for empirical studies in spatial
sciences. Spatial data consist of one or few cross-sections of observations, for which the
absolute location and/or relative positioning is explicitly taken into account on the basis of
different measuring units. Using fuzzy logic operators for modeling spatial potentials brings
more sophisticated results because the information about membership values gives a more
differentiated spatial pattern. Many of the events analyzed using spatial analysis
techniques are not crisp in nature. To analyze such events, it is common to convert fuzzy
data into crisp data, which implies a loss of information (Wanek, 2003).
GIS operations for data interpretation can be viewed as a library of maps with many layers.
Each layer is partitioned into zones, where the zones are sets of locations with a common
attribute value. Data interpretation operations available in GIS characterize locations within
zones. The data to be processed by these operations include the zonal values associated with
each location in many layers (Stefanakis et al., 1996). A new class of data with measurement
operations in this field that correspond to the area and length characterizing the traffic zones
is computed. The transformation of these values is accomplished through fuzzification. This
study gives a methodology to quantify taxicab stand complexity using fuzzy logic. In essence,
fuzzy logic opens the door to computers that understand and react to the language and the
behavior of human beings rather than machines (Narayanan et al., 2003).
3. System design
In this study, an expert system based on fuzzy logic, with related modelling software and
GIS software, is developed. The model permits us to generate suitable locations digitally and
can also be used to evaluate existing taxicab stands from a sustainability point of view. It is
indeed a decision support system, built as an integrated model of GIS and fuzzy logic
applications. The design of the proposed model is given in Fig. 1. After the definition of the
problem, the GIS application begins. The tabular data, together with geo-referenced data,
are processed by GIS applications. However, some outputs of this phase need further
processing with a fuzzy logic application. In this fuzzy logic application, in addition to some
former tabular data, the amounts of roads that have (and do not have) adequate taxi service,
calculated by the network analysis tool of GIS, are used as basic inputs.
The location analysis of the taxicab stands in Konya begins with GIS. The first level of a GIS
study is the database creation process, which is necessary for network analysis. Some points
should be noticed before dealing with the networking database, such as the data structure,
data source and the consistency of the database.
boundaries and urban social facilities. These data were used as the criteria for the site
selection of taxicab stands. The results were matched with the spatial data and overlaid with
districts.
The accessibility and service area analyses were produced by the network analysis tool of GIS.
The road network of Konya, including all updated directions, district borders and the
locations of taxicab stands, are overlaid as thematic maps. A psychological threshold of 3
minutes is accepted for taxi users to have access to any taxi. Service area analysis of the 53
taxicab stands was performed for each stand with a 3-minute drive time according to the
road network map. With the help of an additional network analysis tool, travel time was
specified in a problem definition dialog. The defined service area and the network
corresponding to the given travel time (3 minutes) were displayed as cosmetic layers in the
view. These cosmetic layers were added one by one and then converted into a thematic layer
representing the 3-minute network from the locations of taxicab stands (Fig. 2). A
geodatabase was built showing the lengths of roads within and outside the service area. The
obtained data were used as inputs to the fuzzy logic application.
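The chapter performs this service area analysis with the network analysis tool of a GIS; purely as an illustration of the underlying idea, the sketch below uses the networkx library on a small hypothetical road graph, with travel time (minutes) and length (km) as edge attributes, to separate the road length within and outside the 3-minute threshold of a stand.

```python
import networkx as nx

# Hypothetical road graph: each edge carries a travel time (min) and a length (km).
roads = nx.Graph()
roads.add_edge("n1", "n2", time=1.0, length=0.6)
roads.add_edge("n2", "n3", time=1.5, length=0.9)
roads.add_edge("n3", "n4", time=2.0, length=1.2)
roads.add_edge("n2", "n5", time=0.8, length=0.5)

def served_road_length(graph, stand_node, threshold_min=3.0):
    """Total road length (km) reachable from a taxicab stand within the
    3-minute psychological travel-time threshold used in the chapter."""
    area = nx.ego_graph(graph, stand_node, radius=threshold_min, distance="time")
    return sum(data["length"] for _, _, data in area.edges(data=True))

within = served_road_length(roads, "n1")
total = sum(data["length"] for _, _, data in roads.edges(data=True))
print(within, total - within)  # RWSA-like and ROSA-like lengths for this stand
```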
Fig. 2. The road network and three minutes psychological threshold (in red) for the
accessibility of existing taxicab stands
With the fuzzy logic application, the existing taxicab stands are evaluated and decisions on
new taxicab stands are made. The equations to predict the number of taxicab stands in each
traffic zone are obtained by an artificial neural network (ANN) approach.
the company Basarsoft for the year 2009. Although there are 213 districts in Konya, 99 of
them, which are close to the central areas, are included in the FMOTS model.
The city centre is composed of two adjacent parts, the traditional and the modern centres.
The development is in industrial areas located in the northern and north-eastern parts of the
city. The dense settlements are in the north-western parts of Konya. The population in these
districts is expected to increase in the following years. In the northern part of the city, an
important university campus is located. The southern, south-eastern and eastern parts of
the city are lower-density settlements.
Bicycle is an important mode of travel in Konya. The flat structure of the city supports
bicycle networks. However, developments in recent years have brought a more intensive
use of motorised vehicles.
Fig. 3. The selected urban districts (esp. city center) which are included in the study and the
locations of existing taxicab stands
There are no restrictive conditions on the site selection of taxicab stands in Konya. A fuzzy
model is developed to build a decision support system for decision makers.
(Emp/km2) and population density (Pop/km2), are related to the urban morphology and
describe the patterns of land use. The third parameter is the car ownership level (PCP),
which is accepted as an indicator of the income level. The fourth and fifth parameters, the
amounts of roads outside (ROSA) and within (RWSA) the taxicab stands’ catchment area,
which is limited to 3 minutes as a psychological threshold, represent the potential and the
supply of taxi service, respectively. The last two parameters have been determined by GIS.
The proposed model is suggested to be a useful support tool for decision makers for defining
the optimum number of taxicab stands, especially in newly developed regions and zones. It
is assumed that each taxicab stand is able to supply the necessary number of taxis when
needed.
Fuzzification converts the crisp input values into linguistic (fuzzy) variables, which are
processed by the fuzzy inference engine to produce a linguistic (fuzzy) output. A triangular
membership function with breakpoints x1, x2 and x3 is used:

\mu(x) = \begin{cases} 0, & x \le x_1 \\ \dfrac{x - x_1}{x_2 - x_1}, & x_1 \le x \le x_2 \\ \dfrac{x_3 - x}{x_3 - x_2}, & x_2 \le x \le x_3 \\ 0, & x \ge x_3 \end{cases}
Fig. 6. Membership functions of the input parameters (a–e) and of the output parameter (f):
(a) employment density (Emp/km2), with Low/Medium/High terms and breakpoints at 0, 2500
and 4910 people/km2; (b) population density (Pop/km2), with Low/Medium/High terms and
breakpoints at 0, 15000 and 36000 people/km2; (c) private car ownership (PCP), with
Low/Medium/High terms and breakpoints at 0, 50 and 95 cars/1000 people; (d) amount of
roads outside the service area (ROSA), with Low/Medium/High terms and breakpoints at 0, 25
and 50 km; (e) amount of roads within the service area (RWSA), with Low/Medium/High terms
and breakpoints at 0, 3, 5, 8 and 11 km; (f) number of taxicab stands, with Very Low/Low/
Medium/High/Very High terms and breakpoints at 0, 1.5, 2.5, 3.5 and 5
Rule | If: Emp/km2 | Pop/km2 | Private car per 1000 persons (PCP) | Roads outside service area, km (ROSA) | Roads within service area, km (RWSA) | Then: FMOTS
1 | low | low | low | low | medium | very_low
2 | low | low | high | low | medium | medium
3 | medium | medium | medium | low | medium | medium
4 | high | high | medium | large | low | very_high
5 | medium | medium | high | high | low | medium
Table 1. A part of the rule base of the FMOTS
For developing FMOTS, fuzzy logic expert system software has been used. In the model,
the CoM defuzzification mechanism has been applied. No weightings were applied, which
means that no rule was emphasized as more important than the others. In this phase, a crisp
value is obtained by defuzzification of the fuzzy value, which is defined with respect to the
membership function. In other words, FMOTS defines the necessary number of taxicab
stands for each zone with respect to the input parameters.
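A minimal sketch of the inference chain (triangular fuzzification, a tiny subset of rules in the spirit of Table 1, and a centroid defuzzification used here as a simple stand-in for the CoM mechanism) is given below; the input membership breakpoints follow Fig. 6, while the two rules and the test values are illustrative only.

```python
import numpy as np

def tri(x, x1, x2, x3):
    """Triangular membership with breakpoints x1 <= x2 <= x3 (shoulders allowed)."""
    if x < x1 or x > x3:
        return 0.0
    if x < x2:
        return (x - x1) / (x2 - x1)
    if x > x2:
        return (x3 - x) / (x3 - x2)
    return 1.0

# Output terms for the number of taxicab stands, following the breakpoints of
# Fig. 6f; only three of the five terms are used in this shortened sketch.
OUT_TERMS = {"very_low": (0.0, 0.0, 1.5), "medium": (1.5, 2.5, 3.5), "very_high": (3.5, 5.0, 5.0)}

def infer_stands(pop_density, rosa):
    """Two illustrative rules (AND = min, aggregation = max) followed by a
    centroid defuzzification over the output universe [0, 5]."""
    pop_low, pop_high = tri(pop_density, 0, 0, 15000), tri(pop_density, 15000, 36000, 36000)
    rosa_low, rosa_high = tri(rosa, 0, 0, 25), tri(rosa, 25, 50, 50)

    firing = {"very_low": min(pop_low, rosa_low),      # low demand, well served
              "very_high": min(pop_high, rosa_high)}   # high demand, poorly served

    xs = np.linspace(0.0, 5.0, 501)
    aggregated = np.zeros_like(xs)
    for term, strength in firing.items():
        clipped = np.array([min(strength, tri(x, *OUT_TERMS[term])) for x in xs])
        aggregated = np.maximum(aggregated, clipped)
    if aggregated.sum() == 0.0:
        return 0.0
    return float((xs * aggregated).sum() / aggregated.sum())

print(round(infer_stands(pop_density=30000, rosa=40), 2))
```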
Fig. 7. Three dimensional representation of some selected variables on FMOTS: (a) RWSA–
ROSA relationship; (b) Emp/km2-Pop/km2 relationship; (c) RWSA-Pop/km2 relationship;
(d) ROSA-Emp/km2 relationship.
In Fig. 7-a, an increase in the amount of roads within the accessed area and an increase in the
amount of roads outside the accessed area both increase the necessary number of taxicab
stands. In the researched area, in the worst condition (ROSA = 50 and RWSA = 0) the
maximum number of taxicab stands is 4.
In Fig. 7-b, the population and employment potentials have a strong influence on the number
of taxicab stands. An increase in both of these inputs also increases the number of necessary
taxicab stands.
In Fig. 7-c, an increase in population density together with a decrease in roads within the
service area decreases the number of necessary taxicab stands in that zone. A low population
density together with a high level of roads within the accessed area also decreases the
number of necessary taxicab stands in that zone.
In Fig. 7-d, the relationship between the employment density and the amount of roads
outside the service area is demonstrated. In the model, if the amount of roads outside the
service area exceeds 25 km, the need for taxicab stands increases in that area. For a
population density of 3000 people per km2 and for the maximum amount of roads outside
the accessed area, 4 taxicab stands are needed.
6. Conclusions
In this study, an integrated model of GIS and fuzzy logic, FMOTS, for taxicab stand location
decision is used. The model is based on a fuzzy logic approach, which uses GIS outputs as
inputs. The study brings some innovation by applying fuzzy logic to a transport problem for
an area-based analysis, different from the other studies that focus on networks and queuing
theory.
The model for location decisions of taxicab stands brings an alternative and flexible way of
thinking in the problem of a complex set of parameters by integrating a GIS study and fuzzy
logic.
FMOTS gives results for evaluating the necessary number of taxicab stands at the traffic zone
level, which are consistent with the observations of the planners.
The scale of the analyzed area requires specific methods to compute the data, such as the
areas which do not have adequate taxi service. GIS based service area analyses helped to
define the area, which can be accessed from taxicab stands within a psychologically
accepted time limit (and critical areas that are out of service range). The obtained data were
useful in fuzzy operations.
FMOTS addresses many of the inherent weaknesses of current systems by implementing: a)
fuzzy set membership as a method for representing the performance of decision alternatives
on evaluation criteria, b) fuzzy methods for both parameter weighting and capturing
geographic preferences, and c) a fuzzy object oriented spatial database for feature storage.
These make it possible to both store and represent query results more precisely. The end
result of all of these enhancements is to provide spatial decision makers with more
information so that their decisions will be more accurate.
The location decisions of taxicab stands in Konya have been examined by this integrated
model and their appropriateness has been evaluated according to the selected parameters.
Despite the lack of useful data in some newly developed areas, consistent results could be
achieved in determining the necessary number of taxicab stands for city cells. The
consistency of the model can be increased if the infrastructure of the districts is improved
and some other necessary data are collected. In addition, the definition of some other input
parameters would help the development of the FMOTS model.
The proposed fuzzy logic model is indeed a decision support system for decision makers, who
wish to alleviate taxicab stand complexity in the short to medium term. Taxis and their control
systems can make use of the vague information derived from the natural environment, which
in turn can be fed into expert systems to provide accurate recommendations to taxi drivers,
customers, motoring organizations, and local authorities.
Broad future research can be suggested based on this study. The model can be developed
into a decision support system for determining the necessary number of taxicabs assigned to
each of the taxicab stands in an urban area.
7. Acknowledgement
The authors are grateful to the GIS software company Basarsoft (Turkey) and Konya
Municipality for supply of the necessary data.
8. References
Alesheikh, A. A.,Soltani, M. J.,Nouri, N.,Khalilzadeh, M., Fal 2008. Land assessment for
flood spreading site selection using geospatial information system. International
Journal of Environmental Science and Technology (5) : 4, 455-462.
Chen, M.H., Ishii, H., Wu, C.X., May 2008. Transportation problems on a fuzzy network.
International Journal of Innovative Computing Information and Control (4): 5,
1105-1109.
Choi, K., Chung, Y.S., Jul-Dec 2002. A data fusion algorithm for estimating link travel time,
ITS Journal (7): 3-4, 235-260.
Galderisi, A., Ceudech, A., Pistucci, M., August 2008. A method for na-tech risk assessment
as supporting tool for land use planning mitigation strategies. Natural Hazards
(46): 2 ,221-241.
Ghatee, M., Hashemi, S.M., Apr 2009. Traffic assignment model with fuzzy level of travel
demand: An efficient algorithm based on quasi-Logit formulas. European Journal
of Operational Research (194):2, 432–451.
Ghayoumian, J, Saravi, M.M., Feiznia, S, Nouri, B., Malekian, A, April 2007. Application of
GIS techniques to determine areas most suitable for artificial groundwater recharge
in a coastal aquifer in southern Iran. Journal of Asian Earth Sciences (30):2,364-374.
Giuliani, R.W., Rose, J.B., Weinshall, I., June 2001. Taxi Stands in Times Square and the
Theater District A Technical Memorandum for the Midtown Manhattan Pedestrian
Network Development Project. Department of City Planning & Department of
Transportation, City of New York.
Horner, M.W., Murray, A.T., Sep 2004. Spatial Representation and Scale Impacts In Transit
Service Assessment. Environment and Planning B-Planning & Design 31 (5): 785-797.
Liu, Y.,Phinn, S. R., Nov 2003. Modelling urban development with cellular automata
incorporating fuzzy-set approaches. Computers, Environment and Urban Systems
(27): 6, 637-658.
Malins, D, Metternicht, G., March 2006. Assessing the spatial extent of dryland salinity
through fuzzy modelling. Ecological Modelling (193): 3-4,387-411.
Meng, Z., Meng, L., Sep 2007. An iterative road-matching approach for the integration of
postal data. Computers, Environment and Urban Systems (31):5, 597-615.
Murat Y.S., Baskan,O., Jul 2006. Modeling vehicle delays at signalized junctions: Artificial
neural networks approach. Journal of Scientific & Industrial Research (65): 7, 558-564.
Murat, Y.S., Gedizlioglu, E., Feb 2005. A fuzzy logic multi-phased signal control model for
isolated junctions.Transportation Research Part C-Emerging Technologies (13):1,
19-36.
Murat, Y.S., Gedizlioglu, E., May 2007. Investigation of vehicle time headways in Turkey.
Proceedings of The Institution of Civil Engineers-Transport (160): 2, 73-78.
Narayanan, R., Kumar, K., Udayakumar, R., Subbaraj, P., 2003. Quantification of congestion
using Fuzzy Logic and Network Analysis using GIS. Map India Conference 2003,
New Delhi.
Nobre, R.C.M., Rotunno, O.C., Mansur, W.J., Nobre, M. M. M.,Cosenza, C. A. N., Dec 2007.
Groundwater vulnerability and risk mapping using GIS, modeling and a fuzzy
logic tool. Journal of Contaminant Hydrology (94): 3-4, 277-292.
Ocalir, E.V., Ercoskun, O.Y., Tur, R., 2010. An Integrated Model of GIS and Fuzzy Logic
(FMOTS) for Location Decisions of Taxicab Stands, Expert Systems with
Applications (37):7, 4892-4901.
O’Neill, W.A., 1995. A Comparison of Accuracy and Cost of Attribute Data In Analysis of
Transit Service Areas Using GIS. Journal of Advanced Transportation 29 (3): 299-320.
Petrík, S., Madarász, L., Ádám, N., Vokorokos, L., 2003. Application of Shortest Path
Algorithm to GIS Using Fuzzy Logic. 4th International Symposium of Hungarian
Researchers on Computational Intelligence, November 13-14, Budapest, Hungary.
Puente, M.C.R., Diego, I.F., Maria, J.JO.S., Hernando, M.A.P., Hernaez, P.F.D.A.,2007. The
Development of a New Methodology Based on GIS and Fuzzy Logic to Locate
Sustainable Industrial Areas. 10th AGILE International Conference on Geographic
Information Science, Aalborg University, Denmark.
Rashed, T., Sep 2008. Remote sensing of within-class change in urban neighborhood
structures. Computers Environment and Urban Systems (32): 5, 343-354.
Stefanakis, E., Vazirgiannis, M., Sellis, T., 1996. Incorporating Fuzzy Logic Methodologies
into GIS Operations, Int. Conf. on Geographical Information Systems in Urban,
Regional and Environmental Planning, Samos, Greece, pp. 61—68.
Steiner, R., 2001. Fuzzy Logic in GIS,
http://www.geog.ubc.ca/courses/geog570/talks_2001/fuzzy.htm.
Strobl, R.O., Robillard, P.D., Debels, P., June 2007. Critical sampling points methodology:
Case studies of geographically diverse watersheds. Environmental Monitoring and
Assessment (129): 1-3,115-131.
TUIK (2008) Address Based Population Census Results, Ankara.
TUIK (2009) Haber Bülteni, Motorlu Kara Taşıtları İstatistikleri, Sayı 67, Şubat 2009, Ankara.
Upchurch, C., Kuby, M., Zoldak, M., Barranda, A., March 2004.Using GIS to generate
mutually exclusive service areas linking travel on and off a network. Journal of
Transport Geography 12(1) :23-33.
Walsh S.J., Page P.H., Gesler W.M., June 1997. Normative Models and Healthcare Planning:
Network-Based Simulations Within A Geographic Information System
Environment. Health Services Research 32 (2): 243-260.
Wanek, D., 2003. Fuzzy Spatial Analysis Techniques in a Business GIS Environment. ERSA
2003 Congress, University of Jyväskylä – Finland.
Yao, X. Thill, J.C., July 2006. Spatial queries with qualitative locations in spatial information
systems. Computers, Environment and Urban Systems (30):4, 485-502.
Zadeh, L.A., 1965. Fuzzy sets. Information and Control, Vol. 8, pp. 338-352.
17
1. Introduction
Natural hazards and man-made disasters are victimizing large numbers of people and
causing significant social and economic losses. By developing efficient Spatial Decision
Support Systems (SDSS), managers can be efficiently assisted in identifying and managing
impending hazards. This goal cannot be reached without addressing significant
challenges, including data collection, management, discovery, translation, integration,
visualization, and communication. As an emergent technology, sensor networks have
proven efficiency in providing geoinformation for any decision support system particularly
those aiming to manage hazardous events. Thanks to their spatially distributed nature, these
networks could be largely deployed to collect, analyze, and communicate valuable in-situ
spatial information in a timely fashion. Since some decisions are expected to be taken on-
the-fly, the right data must be collected by the right set of sensors at the right time. In
addition to saving the limited resources of sensor networks, this will speed up the usability
of data especially if this data is provided in the right format. In order to boost the decision
support process, a thorough understanding and use of the semantics of available and
collected heterogeneous data will obviously help to determine what data to use and how
confident one can ultimately be in the results. An appropriate representation of the geo-
information should enhance this process.
Data collected by sensors is often associated with spatial information, which makes it
voluminous and difficult for human beings to assimilate. In critical situations, the hazard
manager has to work under pressure. Coping with such collected data is a demanding task
and may increase the risk of human error. In this context, Geosimulation emerges as an
efficient tool. Indeed, mapping the collected data into a simulated environment which takes
into account the spatial dimension may dramatically help the hazard manager to easily
visualize the correlation between data collected by sensors and the geospatial constraints.
In this chapter, we first present fundamental concepts of SDSS and the most important challenges related to their development. Second, we outline sensor network technology as an emergent tool for leveraging SDSS. Third, we present the geosimulation approach as another keystone for enhancing SDSS; in this part, we summarize the current opportunities, research challenges, and potential benefits of this technique in SDSS. Finally, for better efficiency, we propose an encoding that emphasizes the semantics of available data and tracks event/effect propagation. Based upon conceptual graphs, this encoding will be used to increase the benefits drawn from sensor networks and geosimulation in SDSS.
While the exact set of components mentioned in the SDSS literature differs (see (Sugumaran and Degroote, 2010) for more details), a SDSS should at least integrate the following capabilities (Densham, 1991; Densham and Goodchild, 1989; Keenan, 2003), illustrated by the minimal sketch after the list:
1. Spatial and non-spatial data management capabilities (SDBMS component): include
tools supporting the collection and storage of data, the translation between different
data models and data structures, and retrieving data from storage.
2. Analytical and spatial modeling capabilities (Modeling and analysis component):
provide decision-makers with access to a variety of spatial models in order to help them
in the decision-making process. Examples of models include statistical, mathematical,
multi-criteria evaluation, cellular automata and agent-based models (Sugumaran and
Degroote, 2010).
3. Spatial display and report generation capabilities (User interface component): support
the visualization of the data sets and the output of models using several formats such as
maps, graphics and tables.
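As a purely illustrative sketch of this separation of concerns (our own, not taken from the cited works; all class and method names are hypothetical, and real SDSS rely on GIS and DBMS back-ends), the three capabilities can be wired together as follows:

# Minimal sketch of the three SDSS capabilities described above.
class SpatialDBMS:
    """Spatial and non-spatial data management: storage and retrieval."""
    def __init__(self):
        self._layers = {}
    def store(self, layer_name, features):
        self._layers[layer_name] = list(features)
    def retrieve(self, layer_name):
        return self._layers.get(layer_name, [])

class ModelingComponent:
    """Analytical and spatial modeling (here reduced to a trivial scoring model)."""
    def evaluate(self, features):
        # Placeholder for statistical, multi-criteria, or agent-based models.
        return [(f, f.get("risk", 0.0)) for f in features]

class UserInterface:
    """Spatial display and report generation (here, a plain-text report)."""
    def report(self, scored_features):
        for feature, score in scored_features:
            print(f"{feature.get('id', '?')}: risk score = {score:.2f}")

class SDSS:
    """Glue object combining the three components."""
    def __init__(self):
        self.db, self.models, self.ui = SpatialDBMS(), ModelingComponent(), UserInterface()
    def run(self, layer_name):
        self.ui.report(self.models.evaluate(self.db.retrieve(layer_name)))

if __name__ == "__main__":
    sdss = SDSS()
    sdss.db.store("flood_zones", [{"id": "zone-1", "risk": 0.7}, {"id": "zone-2", "risk": 0.3}])
    sdss.run("flood_zones")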
Domain specific
Like DSS, SDSS are often domain- and problem-specific, although some SDSS are designed to be generic (Sugumaran and Degroote, 2010). SDSS have been applied to several domains, such as urban planning, environment and natural resource management, transportation and business, which has led to the use of several spatial modeling techniques and technologies. Reusing modeling techniques is not always straightforward, and the development of SDSS is often complex, expensive, and requires the acquisition of specific domain or problem knowledge.
1. The decision-maker needs the right amount of relevant information, and any lack or overload could potentially limit her ability to make well-informed decisions. Providing relevant and meaningful information is an important challenge. The most commonly used techniques are visual (maps, 3D visualisations, etc.) or quantitative (tables, graphs, etc.). There is currently a lack of qualitative techniques that allow the decision-maker to explore the consequences of her alternative solutions with a simple and meaningful language.
2. Developing SDSS for environmental monitoring and crisis situations requires not only the capability of modeling dynamic spatial phenomena, but also the ability to update those models with real-time data. Moreover, the system must provide output that is relevant and easy to understand for the non-technical decision-maker, because any misleading or ambiguous output can have catastrophic consequences. However, GIS traditionally do not handle dynamic spatial models, and in order to remedy this limitation, they have been coupled with modeling techniques from several research fields, especially cellular automata and agent-based simulations. Although this has led to the emergence of Dynamic GIS (Albrecht, 2007), dynamic spatial knowledge representation and modeling is still an active research field (Haddad and Moulin, 2010a), and the lack of standards and efficient representation formalisms makes the development of SDSS for real-time or near-real-time spatial problems challenging.
3. Decision-makers often need to examine various situations simultaneously at different levels of detail (macro-, meso- and micro-scales of representation). This is an important issue since the modeled phenomena and observed patterns may differ from one level of detail to another, and since interferences may arise between phenomena developing at different levels of detail. The level of detail is a very important and complex problem, because it not only requires modeling the problem's dimensions at different spatial and temporal levels of detail but also linking these different levels of detail.
4. Most SDSS were developed independently of one another, and with the development of the Internet and web-based SDSS (Sugumaran and Sugumaran, 2005), there is a growing need for sharing and accessing spatial data from several distributed and heterogeneous sources. Interoperability is therefore an important challenge to be addressed. The Open Geospatial Consortium (OGC) has been developing different interoperability standards for web applications, including Web Map Service (WMS) and Web Feature Service (WFS), and the issue of developing SDSS using these standards has recently been raised (Zhang, 2010). In addition, semantic interoperability is an active research field in geographic ontologies and the semantic web (Wang et al., 2007), and the lack of standards makes the development and deployment of ontology-driven SDSS difficult.
In the remainder of this chapter, we focus on SDSS for dynamic environmental monitoring and crisis response, and we present an approach based on sensor networks and geosimulation techniques to address their related challenges.
Sensors are small devices deployed in an environment for the purpose of collecting data for specific needs, such as monitoring temperature, pressure, moisture, motion, vibration, or gas, or tracking objects. Since these energy- and processing-constrained sensors are frequently prone to failure, theft, and destruction, they are unable to achieve their tasks individually. For this reason, they are deployed within an extended infrastructure called a sensor network. In this infrastructure, the spatially distributed sensors can implement complex behaviors and reach their common goals through a collaborative effort. Thanks to this collaboration, they are able to collect important spatio-temporal data for a variety of applications. This quality motivates the link between SDSS and sensor networks, especially when data needs to be collected remotely and in real time with tools that can operate in harsh environments.
In the literature, several works have proposed sensor network-based SDSS. In the oil and gas industry, Supervisory Control and Data Acquisition (SCADA) systems use sensors to monitor processes within a total operating environment. The sensors are connected throughout an oil refinery or pipeline network, each providing a continuous flow of information and data about the operating processes. Chien and his colleagues (Chien et al., 2010) have deployed an integrated space and in-situ sensor network for monitoring volcanic activity. In addition to ground sensors deployed on the Mount Saint Helens volcano, this network includes a spacecraft for earth observation. The information collected with sensors is integrated with data collected by an intelligent network of "spider" instruments deployed on the surface of the Mount Saint Helens volcano. The resulting data are used by
hybrid manual reporting systems such as the Volcanic Ash Advisory Centers (VAAC) and
Air Force Weather Advisory System (AFWA). Within the same context, Song and his
colleagues (Song et al., 2008) have proposed OASIS. OASIS is a prototype system that aims
at providing scientists and decision-makers with a tool composed of a “smart” ground
sensor network integrated with “smart” space-borne remote sensing assets to enable prompt
assessments of rapidly evolving geophysical events in a volcanic environment. The system
constantly acquires and analyzes both geophysical and system operational data and makes
autonomous decisions and actions to optimize data collection based on scientific priorities
and network capabilities. The data collected by OASIS is also made available to a team of
scientists for interactive analysis in real-time.
O’Brien and his colleagues (O’Brien et al., 2009) have presented a distributed software
architecture enabling decision support in dynamic ad hoc sensor networks, which enables
rapid and robust implementation of an intelligent jobsite. The architecture includes three
layers: a layer for expressive approachable decision support application development, a
layer for expressive data processing, and a layer for efficient sensor communication.
According to the authors, the implemented prototype can speed up application
development by reducing the need for domain developers to have detailed knowledge
about device specifications. It can also increase the reusability of software and protocols
developed at all levels and make the architecture applicable to other industries than
construction.
Wicklet and Potter (Wicklet and Potter, 2009) have presented three steps for information-
gathering, from sensor data to decision support. These steps are: data validation, data
aggregation and abstraction, and information interpretation. Ling and his colleagues (Ling
et al., 2007) have proposed a sparse undersea sensor network decision support system based
on spatial and temporal random field. The system has been presented as suitable for
multiple targets detection and tracking. In this system, an optimization based random field
estimation method has been developed to characterize spatially distributed sensor reports
without making any assumptions on their underlying statistical distributions.
In (James et al., 2008), the authors have proposed an approach that aims at performing
information-centric analysis within a GIS-based decision support environment using expert
knowledge. By shifting towards a more information-centric approach to collect and use
sensor measurements, more advanced analysis techniques can be applied. These techniques
can make use of stored data about not only sensor measurements and what they represent,
but also about how they were collected and the spatial context related to them. In a related
work, Rozic (Rozic, 2006) has proposed REASON, which is a Spatial Decision Support
Framework that uses an ontology-based approach in order to interpret many different types
of data provided by sensor measurements. In such systems, binding and transforming the
sensor data into timely information which is relevant to the problem is a challenging task. In
order to solve this issue, we have proposed in a related work (Jabeur and Haddad, 2009) to allow sensors to autonomously cooperate and coordinate their efforts while emphasising data semantics. By encoding causality relationships about natural phenomena and their effects in time and space with the formalism of conceptual graphs, we have proposed an approach that implements a progressive management of hazardous events. This approach would provide decision-makers with timely, valuable information on hazardous events of interest, such as floods and volcanic eruptions.
In (Tamayo et al., 2010), the authors have designed and implemented a decision-support system for monitoring crops using wireless sensor networks. The implemented prototype includes tools that provide real-time information about crop status, the surrounding environment, and potential risks such as pests and diseases.
In (Filippoupolitis and Gelenbe, 2009), the authors have proposed a distributed system that
computes the best evacuation routes in real-time, while a hazard is spreading inside a
building. The system includes a network of decision nodes and sensor nodes, positioned in
specific locations inside the building. The recommendations of the decision nodes are computed in a distributed manner and then communicated in order to evacuate or rescue people located in the vicinity.
In order to support rain-fed agriculture in India, Jacques and his colleagues (Jacques et al., 2007) have proposed the COMMON-Sense Net system, which aims at providing farmers with environmental data. This system can provide farmers with valuable data, allowing them to improve production in semiarid regions in a cluster of villages in Southern Karnataka.
Mastering a ship in heavy seas is always a challenge, especially at night. The officer of
the watch has to “sense” the weather and waves in order to preserve the cargo and navigate
the ship without undue motions, stress or strains. SeaSense (Nielsen, 2004) is designed to
support the natural human senses in this respect by providing an accurate estimation of the
actual significant wave height, the wave period, and the wave direction. Furthermore, and
based on the measurements and estimations of sea state, SeaSense provides actual
information on the sea-keeping performance of the ship and offers decision support on how
to operate the ship within acceptable limits.
In hazard monitoring applications, data often needs to be collected remotely. There are several existing techniques that can help in achieving this task, such as radars, satellites, and sensors. Depending on the availability of resources and the application context, one or more technologies can be deployed.
Sensor networks afford SDSSs opportunities along five axes (Fig. 1; an illustrative sketch follows the figure):
1. For which goals data are collected: The sensor network should collect the requested
data depending on the goals of the application. For example, in a flood scenario, sensors
should collect data on the level and movement of water, the conditions of soil slope,
and water pollution. Moreover, sensors can deliver to the SDSS data in several formats
including those that are compatible with widely used standards.
2. When data will be collected: sensors should be activated at the right time to collect and
deliver the requested data to decision-makers through the SDSS.
3. Where data will be collected: sensors should collect data from the right location and
thus decision-makers could have updated information on locations of interest.
4. Who will collect data: sensors should collect the requested data depending on their
current locations. If they are not explicitly appointed by the decision-makers, they
should self-organize in order to identify the subset of sensors that will be in charge of
collecting the requested data.
5. How data will be collected: sensors generally follow sleep/wakeup cycles. They are
programmed to collect data according to predefined schedules. Depending on the
current situation, this schedule has to be adapted in order to collect data at appropriate
times. Moreover, data can be collected through a collaborative effort from several
sensors.
Fig. 1. The data-collection questions surrounding the SDSS: how will data be collected, when is data needed, where is data to be collected, and who will collect it?
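As a purely illustrative sketch of these five axes (our own; all field names, values, and the trivial dispatch rule are hypothetical), a data-collection request handed by the SDSS to the sensor network could look as follows:

# Illustrative sketch: a data-collection request covering the five axes
# (goal, when, where, who, how). All names and values are hypothetical.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class CollectionRequest:
    goal: str                          # for which goals data are collected
    when: str                          # when data will be collected (schedule)
    where: Tuple[float, float, float]  # where: latitude, longitude, radius (km)
    who: Optional[List[str]] = None    # explicit sensor ids, or None for self-organization
    how: str = "collaborative"         # how data will be collected

def dispatch(request: CollectionRequest, available_sensors: List[str]) -> List[str]:
    """Pick the sensors in charge of the request (trivial placeholder logic)."""
    if request.who:                    # decision-makers appointed the sensors explicitly
        return [s for s in request.who if s in available_sensors]
    return available_sensors[:3]       # stand-in for a self-organization protocol

if __name__ == "__main__":
    req = CollectionRequest(goal="flood: water level, soil slope, pollution",
                            when="every 10 minutes",
                            where=(46.05, 14.51, 5.0))
    print(dispatch(req, ["s1", "s2", "s3", "s4"]))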
4.1 Review
SDSSs have been successful in addressing the spatial component of natural and technological hazards (Fedra, 1997). As a result, several specialized SDSS have been developed to assist decision-makers with hazards. For instance, regarding wildfires, a multidisciplinary system has been developed by Keramitsoglou and his colleagues (Keramitsoglou et al., 2009) to provide rational and quantitative information based on the site-specific circumstances and the possible consequences. The system's architecture consists of several distinct supplementary modules for near real-time satellite monitoring and fire forecasting, using an integrated framework of satellite Remote Sensing, GIS, and RDBMS technologies equipped with interactive communication capabilities. The system can handle multiple fire ignitions and support decisions regarding the dispatching of utilities, equipment, and personnel to appropriately attack the fire front.
Geosimulation is particularly important for emergency planning. Indeed, when based on realistic assumptions, simulation offers an efficient way of modeling crisis response tasks.
In the domain of forest firefighting, for example, we developed a geosimulation-based SDSS which relies on a four-layer architecture (Sahli and Moulin, 2009; Sahli and Jabeur, 2011). An adaptation of this architecture is discussed later (see Subsection 4.3). In the air navigation domain, Ozan and Kauffmann (2002) developed a practical tool for tactical and strategic decision-making in aircraft-based meteorological data collection called TAMDAR (Tropospheric Airborne Meteorological Data Reporting). This onboard system gathers meteorological data (using airborne sensors) as aircraft fly in the troposphere and transmits this data to the National Weather Service. The collected data help decision-makers process different operational alternatives, by conducting various what-if analyses using a GIS-based user interface, and determine the best strategy for system operation. The SDSS is composed of a customized simulation-optimization engine, a data utility estimator, a GIS-based analysis layer, and a user interface.
In the domain of civilian evacuation, as evacuees move, a specialized SDSS can continuously track them using their current spatial location coordinates. In this context, Nisha de Silva designed a prototype of a geosimulation-based SDSS named CEMPS (Configurable Evacuation Management and Planning Simulator) (Nisha de Silva, 2000). The aim was to provide an interactive planning tool that produces simple scenarios wherein emergency planners are able to watch a simulation proceed and provide limited interaction
to obtain information on the progress of the evacuation. CEMPS integrates a dynamic and
interactive evacuation simulation model with a GIS which defines the terrain, road network,
and related geographical elements such as the hazard source and shelters, as well as the
population to be evacuated.
3. Choosing the level of granularity: the geosimulation may follow a micro, meso, or macro modeling approach. The choice among the three levels of granularity depends on the trade-offs that must be made in order to maintain realistic computing power when processing large amounts of data.
4. Validation of simulated models: A simulator is intended to mimic a real-world system
using actual data associated with that system. It uses assumptions that generally
simplify the complex behavior of the real world. When the SDSS simulates a highly
dynamic environment as during crisis response, it is a big challenge to validate the
simulation models in use.
5. Integrating data reported by citizens: Many governments are implementing large-scale distributed SDSS to incorporate data from a variety of emergency agencies in order to produce real-time "common operational pictures" of crisis situations. This goal remains a very hard task. In this context, social networks have the potential to transform emergency response by providing SDSS with data collected by citizens. Individuals are often in the best position to produce immediate, empirical, and real-time observations of events simply because they are there. Pictures or videos taken by citizens with a cell phone can provide invaluable information. Integrating these data and media generated by citizens and other non-state actors with the main SDSS can enhance governmental response to crises. This will not only imply reviewing the command and control strategies and the communication infrastructure, but also opening new research avenues on how to collect, filter, and integrate these new data into the SDSS.
6. Including a full Emergency Management Cycle: The emergency management community has long recognized that society's response to disaster events evolves over time. The succession of emergency response stages after a disaster event, known as the Emergency Management Cycle, includes a phase of response/rescue operations, a later phase of recovery/reconstruction, and a stage focused on mitigation and monitoring for future events (Cutter, 1993). Most existing SDSSs focus on only one stage. It would be a considerable improvement if SDSS could address all stages involved in emergency response.
7. Web-based SDSS: As with other applications deployed via the web, an Internet-based SDSS provides advantages over traditional desktop applications. First, the application is centrally located, simplifying distribution and maintenance. In addition, the Internet-based approach increases the user base by reducing access costs for users. Internet standards still have some limitations for use in spatial applications, but new
software and plugins continue to be developed. Current applications offer map display,
but frequently fall short of providing comprehensive GIS functionality. Future
developments offer the possibility of a distributed SDSS that could connect with
datasets held at distant locations on the Internet.
In simulation-based planning (SBP), candidate plans are simulated and assessed, and the most appropriate plan is kept. However, when applied to real-world problems, the current SBP approach does not support an efficient interaction between the real world and the simulated environment. In addition, monolithic planning turned out to be ineffective (Desjardins et al., 1999). Agent-based planning then appeared as a good alternative. Combining both techniques (simulation-based planning and agent-based planning) may therefore help to solve such problems. The SDSS that supports this idea should be able to build a synchronous parallel between the real world and the simulated environment (which is mainly the geo-referenced map) of the SDSS.
The sensor relocation application can be thought of as a layered architecture as illustrated in
Fig. 2. This design philosophy is inspired by the layered simulation model we proposed in
(Sahli and Moulin, 2009). The real sensors are of course located in the real world. Each of
these sensors has a representative in the simulated environment. This representative is a
software agent. Sensor relocation is thus planned by these agents in the SDSS and then
communicated to the real sensors. The SDSS should include four layers, as shown in Fig. 2. A full description of these layers can be found in (Sahli and Jabeur, 2011).
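To illustrate the idea that each real sensor has a software-agent representative which plans a relocation in the simulated environment and then communicates it back, here is a deliberately simplified sketch (our own; the names and the one-step relocation rule are hypothetical and much simpler than the four-layer architecture described in (Sahli and Jabeur, 2011)):

# Sketch: each real sensor has an agent representative in the simulated
# environment; agents plan the relocation, then send the plan to the sensors.
class RealSensor:
    def __init__(self, sensor_id, position):
        self.sensor_id = sensor_id
        self.position = position            # (x, y) in the geo-referenced map

    def move_to(self, new_position):
        print(f"{self.sensor_id}: relocating {self.position} -> {new_position}")
        self.position = new_position

class SensorAgent:
    """Software representative of one real sensor in the simulated environment."""
    def __init__(self, sensor):
        self.sensor = sensor

    def plan_relocation(self, hazard_front):
        # Hypothetical rule: step one unit away from the hazard front on each axis.
        x, y = self.sensor.position
        hx, hy = hazard_front
        return (x + (1 if x >= hx else -1), y + (1 if y >= hy else -1))

    def commit(self, hazard_front):
        # Plan in the simulated environment, then communicate to the real sensor.
        self.sensor.move_to(self.plan_relocation(hazard_front))

if __name__ == "__main__":
    agents = [SensorAgent(RealSensor("s1", (2, 3))),
              SensorAgent(RealSensor("s2", (5, 1)))]
    for agent in agents:
        agent.commit(hazard_front=(4, 2))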
(Minsky, 1974) are examples of these approaches. The use of the XML and GML (Cox, 2006) languages provides standard and common ways of cataloguing and exchanging sensor network information. The Open Geospatial Consortium (OGC) proposes the Observations
and Measurements (O&M) standard (Cox, 2006) that defines a conceptual schema and XML
encoding for observations and for features involved in sampling when making observations.
OGC also proposes the Sensor Observation Service (SOS) (Na and Priest, 2006) standard that
defines a web service interface for the discovery and retrieval of real time or archived data
produced by all kinds of sensors. By using GML, O&M documents can be generated in SOS
software. In this case, measurement records are stored in a database with a schema based on
the O&M specification. When this database is queried, the SOS server pulls the relevant
records from the database and automatically generates the O&M document in XML format
based on a template that is stored on the server (McCarthy, 2007).
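As an illustration of how a client could retrieve such O&M documents, the following sketch issues a key-value-pair GetObservation request against a hypothetical SOS endpoint using the Python requests library; the endpoint URL, offering identifier, and observed property are placeholders, and a real deployment may instead require the XML/POST binding defined by the SOS specification:

# Sketch: retrieving O&M observations from a (hypothetical) SOS endpoint
# via the OGC key-value-pair binding. URL and parameter values are placeholders.
import requests

SOS_ENDPOINT = "https://example.org/sos"   # hypothetical service URL

params = {
    "service": "SOS",
    "version": "2.0.0",
    "request": "GetObservation",
    "offering": "water_level_offering",                       # placeholder
    "observedProperty": "urn:ogc:def:property:water_level",   # placeholder
}

response = requests.get(SOS_ENDPOINT, params=params, timeout=30)
response.raise_for_status()
# The server answers with an O&M document (XML) generated from its database.
print(response.text[:500])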
6. Conclusion
When considering any DSS, it is very important to know the degree of involvement of human experts in the decision-making process. This issue is even more crucial during crisis response, especially when decisions have to be taken in real time. When the spatial component comes into play, the trust of decision-makers in SDSS deserves a longer discussion. In this chapter, we briefly discussed the problem from the cognitive point of view; a longer investigation supported by a real example can be found in (Sahli and Moulin, 2006). The cognitive aspect helps in understanding the limits of SDSS and in pinpointing the issues where human decision-makers cannot be overlooked.
When dealing with geographic reasoning, which is typically based on incomplete information, a human planner is able to draw quite accurate conclusions by cleverly completing the information or applying certain default rules based on common sense. Despite these cognitive capabilities, the planner has limitations when it comes to simulating complex events, particularly in a real large-scale geographic space. This task can be achieved properly by an SDSS. While the SDSS "knows" the spatial environment better, since it uses GIS data, it does not have the refined sense of anticipation and judgment that a human expert has. Even if a plan generated by the SDSS seems well-grounded, it may not be feasible in reality or may go against certain doctrines. Human experts can propose to adjust or change the SDSS recommendations according to their own experience and sense of anticipation. For these reasons, we draw the conclusion that geosimulation-based SDSS complement human planning skills when addressing complex problems such as crisis situations. It is worthwhile to investigate this degree of complementarity and come up with some general guidelines that could be customized depending on the application domain.
7. Acknowledgement
The research leading to these results has received Research Project Grant Funding from The Research Council of the Sultanate of Oman under Research Grant Agreement No. ORG DU ICT 10 003.
8. References
Agatsiva, J., & Oroda, A. (2002). Remote sensing and GIS in the development of a decision
support system for sustainable management of the drylands of eastern Africa: A
case of the Kenyan drylands, International Archives of the Photogrammetry, Remote
Sensing and Spatial Information Sciences, 34:42–49.
Albrecht, J. (2007). Dynamic GIS, in The Handbook of Geographic Information Science, ed. J.
Wilson and S. Fotheringham, London: Blackwell, pp. 436–446.
Ambros-Ingerson, J.A. & Steel, S. (1988). Integrating planning, execution and monitoring. In Proc. of the 7th (US) National Conference on Artificial Intelligence (AAAI 88), St Paul, MN, USA. American Association for Artificial Intelligence, Morgan Kaufmann, pp. 83-88
Botts, M., Percivall, G., Reed, C., Davidson, J. (2006). OGC Sensor Web Enablement:
Overview and High Level Architecture. Open Geospatial Consortium Whitepaper.
Open Geospatial Consortium.
Cárdenas Tamayo, R.A; Lugo Ibarra, M.G. and Garcia Macias, J.A. (2010): Better crop
management with decision support systems based on wireless sensor networks,
Electrical Engineering Computing Science and Automatic Control (CCE), 2010 7th
International Conference on, 2010, 412 – 417, Tuxtla Gutierrez
Chien, S.; Tran, D.; Doubleday, J.; Davies, A.; Kedar, S.; Webb, F.; Rabideau, G.; Mandl, D.;
Frye, S.; Song, W.; Shirazi, B. and Lahusen, R. (2010). A Multi-agent Space, In-situ
Volcano Sensorweb, International Symposium on Space Artificial Intelligence,
Robotics, and Automation for Space (i-SAIRAS 2010). Sapporo, Japan. August 2010
Cox, S. (2006). Observations and measurements. OpenGIS Discussion Paper. Open
Geospatial Consortium (2006)
Craik, K.J.W. (1943). The Nature of Explanation. London: Cambridge. University Press, 1943.
Cutter, S. (1993). Living with Risk: The Geography of Technological Hazards, London: Edward Arnold.
Densham, P. J., Goodchild, M. F. (1989). Spatial Decision Support System: A Research
Agenda. Proceedings of GIS/LIS ’89, Florida, pp. 707-716.
Densham, P.J. (1991). Spatial decision support systems. In: D.J. Maguire, M.F. Goodchild,
D.W. Rhind (eds.) Geographical information systems: principles and applications,
pp. 403-412. London: Longman.
Desjardins, M.; Durfee, E.; Ortiz, C. and Wolverton, M. (1999). A survey of research in
distributed continual planning. AI Mag. vol. 4, 1999, pp. 13-22.
Evers, M. (2007). Requirements for Decision Support in Integrated River Basin Management,
Journal for Management of Environmental Quality, Vol. 19(1), 2007
Fedra, K. & Reitsma, R.F. (1990). Decision Support and Geographical Information Systems.
In Geographical Information Systems for Urban and Regional Planning, Research Report:
RR-90-009. Laxenburg, Austria: ACA, International Institute for Applied Systems
Analysis (IIASA).
Fedra, K. (1997). Integrated risk assessment and management: Overview and state-of-the-
art, Journal of Hazardous Materials, 61:5–22.
Filippoupolitis, A. & Gelenbe, E. (2009). A distributed decision support system for building
evacuation. In Proceedings of the 2nd conference on Human System Interactions
(HSI'09). IEEE Press, Piscataway, NJ, USA, 320-327
Forbus, K.D. (1981). A Study of Qualitative and Geometric Knowledge in Reasoning about
Motion, MIT Artificial Intelligence Laboratory, Research Report AI-TR-615, 1981.
Fulcher, C.; Prato, T.; Vance, S.; Zhou, Y. and Barnett, C. (1995). Flood impact decision
support system for St. Charles, Missouri, USA, 1995 ESRI International User
Conference, 22–26 May, Palm Springs, California, pp. 11.
Gao, S., D. Sundaram, and J. Paynter. (2004). Flexible support for spatial decision making.
Paper presented at the 37th Annual Hawaii International Conference on System
Sciences, Honolulu, Hawaii.
Garbolino, V.; Sacile, R.; Olampi, S.; Bersani, C.; Alexandre, N.; Trasforini, E.; Benza, M. and
Giglio, D. (2007). A Spatial Decision Support System prototype for assessing road
HAZMAT accident impacts on the population in a dense urban area: a case study
of the city of Nice, French Riviera, Proceedings of the eight International
Conference on Chemical & Process Engineering, Ischia, 24-27 June 2007.
Goel, R. K. 1999. Suggested framework (along with prototype) for realizing spatial decision
support systems (SDSS). Paper presented at Map India 1999 Natural Resources
Information System Conference, New Delhi, India.
Haddad, H. & Moulin, B. (2007). Using Cognitive Archetypes and Conceptual Graphs to
Model Dynamic Phenomena in Spatial Environments, In: ICCS 2007, LNAI 4604, U.
Priss; S. Polovina & R. Hill (Eds.), pp. 69–82, Springer-Verlag, Berlin Heidelberg
Haddad, H. & Moulin, B. (2010a). A Framework to Support Qualitative Reasoning about
COAs in a Dynamic Spatial Environment. International Journal of Experimental &
Theoretical Artificial Intelligence , Vol. 22(4), pp. 341 – 380, Taylor & Francis
Haddad, H. & Moulin, B. (2010b). Multi-Agent Geo-Simulation in Support to Qualitative
Spatio-Temporal Reasoning, Modeling Simulation and Optimization - Focus on
Applications, Shkelzen Cakaj (Ed.), pp. 159-184, ISBN: 978-953-307-055-1, INTECH
Hoc J. M. (1987). Cognitive Psychology of Planning. Academic press, 1987.
Jaber, A.; Guarnieri, F. and Wybo, J.L. (2001). Intelligent software agents for forest fire
prevention and fighting, Safety Science, 39, pp. 3–7.
Jabeur, N. & Haddad, H. (2009). Using Causality Relationships for a Progressive
Management of Hazardous Phenomena with Sensor Networks. In proceedings of
the International Conference on Computational Science and Applications (ICCSA
2009), June 29-July 2, 2009, Yongin, Korea, pp: 1-16
Jabeur, N.; McCarthy, J.D. and Graniero, P. (2008). Improving Wireless Sensor Network
Efficiency and Adaptability through an SOS Server Agent. In proceeding of the 1st
IEEE International workshop on the Applications of Digital Information and Web
Technologies (ICADIWT 2008). 4-6 August, Ostrava, Czech Republic, pp: 409-414.
Jensen, J.R.; Hodgson, M.E.; M. G-Q Quijano, J. Im.(2009). A Remote Sensing and
GIS-assisted Spatial Decision Support System for Hazardous Waste
Ozan, Erol & Kauffmann, Paul (2002). A Simulation-based Spatial Decision Support System
for a New Airborne Weather Data Acquisition System, 2002 Industrial Engineering
Research Conference, May 19-22, 2002, Orlando, Florida
Radke, J. (1995). A spatial decision support system for urban/wildland interface fire
hazards, 1995 ESRI International User Conference, 22–26 May, Palm Springs,
California, pp. 14.
Rozic, S. M. (2006). Representing spatial and domain knowledge within a spatial decision
support framework. M.Sc. Thesis, University of Windsor (2006)
Russell, S. & Norvig, P. (1995). Artificial Intelligence, a Modern Approach, Prentice Hall, 1995.
Sahli, N. & Jabeur, N. (2011). Agent-Based Approach to Plan Sensors Relocation in a Virtual Geographic Environment, in Proceedings of NTMS 2011, Paris, France, 7-10 February 2011.
Sahli, N. & Moulin, B. (2006). Agent-based geo-simulation to support human planning and
spatial cognition (extended version). J.S. Sichman and L. Antunes (Eds.): MABS
2005, Springer-Verlag Berlin Heidelberg LNAI 3891, pp. 115 – 132, 2006.
Sahli, N. & Moulin, B. (2009). EKEMAS, an agent-based geo-simulation framework to support continual planning in the real-world. Applied Intelligence, Vol. 31, No. 2 (1 October 2009), pp. 188-209.
Sahli, N.; Mekni, M. and Moulin, B. (2008). A Multi-Geosimulation Approach for the
Identification of Risky Areas for Trains, Workshop Agents in Traffic and
Transportation, part of the International Conference AAMAS 2008, Portugal, 2008.
Sanders, R., & Tabuchi, S. (2000). Decision support system for flood risk analysis for the
River Thames, United Kingdom, Photogrammetric Engineering & Remote Sensing,
66(11), pp. 1185–1193.
Segrera, S.; Ponce-Hernandez, R. and Arcia, J. (2003). Evolution of decision support system
architectures: Applications for land planning and management in Cuba. Journal of
Computer Science & Technology 3(1), pp. 40–46.
Sowa, J.F. (1984). Conceptual Structures: Information Processing in Mind and Machine.
Addison-Wesley, Massachusetts (1984)
Sprague, R. H. and H. J. Watson (1996). Decision support for management. Upper Saddle River,
N.J.: Prentice Hall
Sugumaran, R. & Degroote, J. (2010). Spatial Decision Support Systems: Principles and
Practices, CRC Press; 1 edition
Sugumaran, V., and Sugumaran, R. 2005. Web-based Spatial Decision Support Systems
(WebSDSS): Evolution, Architecture, and Challenges, Third Annual SIGDSS Pre-
ICIS Workshop, Designing Complex Decision Support: Discovery and Presentation of
Information and Knowledge, Las Vegas, Nevada (USA).
Uran, O. & Janssen, R. (2003). Why are spatial decision support systems not used? Some
experiences from the Netherlands. Computers, Environment and Urban Systems 27(5),
pp. 511–526
Wang, Y, Gong, J. and Wu, X. (2007). Geospatial semantic interoperability based on
ontology, GEO-SPATIAL INFORMATION SCIENCE, Vol 10(3), pp. 204-207
Wicklet, G. & Potter, S. (2009). Information-gathering: From sensor data to decision support
in three simple steps, Intelligent Decision Technologies, Vol. 3(1), 2009, pp. 3-17
Zhang, C. (2010). Develop a Spatial Decision Support System based on Service-Oriented
Architecture, Decision Support Systems, Chiang S. Jao (Ed.), ISBN 978-953-7619-
64-0, pp. 406, January 2010, INTECH, Croatia
Zhang, M. & Grant, P.W. (1993). Pages 107-114 in Proceedings of the Thirteenth Annual ESRI User
Conference, May 1993. Redlands, California: Environmental Systems Research Institute.
Part 5
18
Emerging Applications of Decision Support Systems (DSS) in Crisis Management
1. Introduction
The recent financial crisis and the growing frequency of extreme risk events motivate a new analysis of the mechanisms and processes that imply high risk and high uncertainty. The new dynamics and severe impact of extreme events (natural hazards, terrorism, technological accidents, and economic/financial crises), together with the complexity of interventions, have motivated the scientific community to find new, efficient solutions for the decision-making process dedicated to crisis management, especially for addressing the following aspects: time urgency, the complexity of the event, the volatility and rapid change of event and decision conditions, the chaotic surrounding environment, human behaviour in critical situations (emotional stress), the consequences of decision failure, poor data, and frequent interruptions during the decision-making process.
Extreme risk events are no longer only high-impact, low-probability events. Indeed, the literature demonstrates that these types of events are becoming more frequent and that their impact is also more critical. Governments are responsible not only for the post-crisis management of extreme events; in this case, better knowledge could offer opportunities to reduce the impact.
Emerging aspects regarding the integration of the concept of hybrid DSS in the treatment of extreme risk management will be presented. The efficiency of using intelligent ingredients in decision support systems (DSS) is linked to human limits but also to the dynamics of huge volumes of data in a context of imprecision and uncertainty. Different applications of hybrid DSS in flood and drought risk management, asymmetric risk, and crisis management exercises will also be presented.
Many extreme risk events are associated with the modern human environment, and in this case the spectrum of risks changes, with a difference between the perceived possibility and reality (Renn, 1992). Extreme risk is expressed by the potential for the realization of unwanted, adverse consequences to human life, health, property, or the environment. Kaplan and Garrick (1981) proposed the risk triplet (S - scenario, P - likelihood, D - possible consequence), a framework that addresses what disaster events can occur, how likely a particular event is, and what its consequences are.
Decision makers in extreme risk environments should respond in a new, different manner, because modern crisis management requires urgent developments toward better, more elaborate, and appropriate means for handling extreme risks. Extreme risk management needs to ensure better interoperability of the different emergency services (police, fire brigade, health sector, civil protection) so as to provide the appropriate information (of course, after data fusion and data filtering) at the right place in the critical moments. The new global environment is very complex, and the dynamics of change are difficult to understand. Decision making in critical or special situations is very complex because the systems are complex, their dynamics are difficult to understand, and adaptability is essential. Even though the technologies to cope with crises and high-risk events have developed considerably, some underlying problems complicate high-risk prevention and multiple-crisis response: inadequate communication between different actors and different levels; relatively inadequate data fusion, selection, filtering, and standardization, which affect the information database; the difficulty of updating information about the development of the extreme risk (victims, damages, and rescue team technologies in the case of natural/man-made hazards, or specific information in the case of financial crashes and crises); and the relatively slow access to existing databases and action plans.
The interest is to develop an integrated framework capable of supporting emergency decisions via a better understanding of the dynamics of extreme events and a better detection and management of new risk situations. In this case, the focus is on the genericity and adaptability of the framework, in order to build a flexible, adaptive, robust, and efficient system that does not depend on a particular case and is easy to extend in a creative manner.
In an uncertain and highly dynamic environment, this type of application could offer robust yet adaptive support for all decision-making actors. Based on this type of application, it is possible to build a generalized framework capable of supporting different types of decisions, not only in economics and finance but also in military and law enforcement applications, in critical periods, or during high-risk events.
Soft computing techniques could be used together with DSS/IDSS not only as a mathematical ingredient; their efficiency comes from a better capability of selection, a higher speed of analysis, and the added advantages of adaptability, flexibility, and modularity.
DSS can support crisis management for a better coordination of activities. DSS are very efficient instruments in complex situations and complicated environments, where decision makers need robust support to analyse multiple sources of knowledge (Martinsons, Davison, 2007). DSS have been used with success in management, and their evolution has always been linked to the dynamics of informatics systems, databases, and expert systems. New IT applications have a decisive influence on DSS, with applications spreading across all domains of activity. Modern DSS are able to support operational capability and strategic decision making in complex systems. A decision-making framework for extreme risk should consider a structure based on quantifying decision variables decoupled in their own control systems. These variables are combined through the practical conditions offered by the knowledge-based infrastructure in order to cover all decision scenarios. Because each decision scenario affects the system in a different way, these effects can also be used to rank decisions and, in this way, to offer better adaptability of the global framework. The main objectives of DSS are related to a better adaptability of decision making and to building a preliminary study for decision making in technical cases where efficient planning of such activities is not possible.
The characteristic elements of a dedicated DSS are the following: it addresses unusual problems and assists in improving the quality of the decisions that resolve them; it is a productivity-enhancement tool for the decision-making activity of the expert who holds active control of the system; its construction and development are evolutionary, with decision makers and system developers influencing each other in a process that does not end at a specified time; it has a data integrator and is adapted to the particularities of the individual application and user, without being limited to just one method or information technology; it may have several stages of completion, from the core system to application systems; and it can be addressed according to its stage of development and its users (decision makers, analysts, and developers of tools).
DSS is a technology capable of improving the ability of managers to make decisions, to better understand the risks and the dynamics of extreme events within cognitive, time, and budget/liquidity limits. In principle, DSS design is based on the following three capabilities: a sophisticated database management tool with real-time access to internal and external data, information, and knowledge; powerful modelling capabilities accessed through the portfolio of models used by the management; and friendly but powerful interface designs that enable interactive queries, reporting, and graphical user interaction.
Since a crisis begins with events, the first step in the decision-making process is to categorize and rank the events and propose a first selection of the proper models. Consider a knowledge base with N different cases corresponding to N different events, each having its own model, and a situation vector filled with environmental and cost parameters. Based on the situation vector, some cases are selected as possible candidate events and are associated with the portfolio of models; the models get the necessary information from the corresponding database (online data entered into the infrastructure through human users or automated systems, and offline data collected for corresponding similar situations/events). Models are equipped with controllers that compute the performances of the decision variables. This results in scenarios for solving the problem with different performances (time, costs). The interest is to find an optimal solution that offers a small number of ranked possible scenarios, each with its effect degree, from which the decision maker can select one. It is useful to introduce a system capable of capturing the decision maker's opinion about the scenarios, refining the decision for future similar cases, and allocating higher degrees to them.
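A minimal sketch of this selection-and-ranking step (ours; the similarity measure, case vectors, and model names are placeholders for the knowledge base and controllers of a real DSS) could look as follows:

# Sketch: select candidate cases from a knowledge base by comparing the
# current situation vector with stored case vectors, then rank them.
import math

knowledge_base = {
    "flood":        {"vector": [0.9, 0.2, 0.4], "model": "hydrological"},
    "drought":      {"vector": [0.1, 0.8, 0.3], "model": "water-balance"},
    "market_crash": {"vector": [0.2, 0.1, 0.9], "model": "financial"},
}

def similarity(v1, v2):
    """Cosine similarity between the situation vector and a stored case vector."""
    dot = sum(a * b for a, b in zip(v1, v2))
    norm = math.sqrt(sum(a * a for a in v1)) * math.sqrt(sum(b * b for b in v2))
    return dot / norm if norm else 0.0

def rank_candidates(situation_vector, top_k=2):
    """Return the top_k candidate events, each with its model and score."""
    scored = [(name, similarity(situation_vector, case["vector"]), case["model"])
              for name, case in knowledge_base.items()]
    return sorted(scored, key=lambda item: item[1], reverse=True)[:top_k]

if __name__ == "__main__":
    # A few ranked scenarios are offered to the decision maker.
    for name, score, model in rank_candidates([0.85, 0.25, 0.35]):
        print(f"candidate event: {name:12s} model: {model:14s} score: {score:.2f}")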
The purpose of a DSS (Fig. 1) is to offer better support for decision makers, allowing better intelligence, design, or choice via a better understanding of the whole problem and its dynamics, and better knowledge management via efficient solutions for non-structured decisions.
The main characteristics of a special purpose DSS are the following:
- a structured knowledge that describes specific aspects of the decision maker's environment and how to accomplish different tasks;
- it incorporates the ability to acquire and maintain a “complete” knowledge and its
dynamics;
- it incorporates the ability to select any desired subset of stored knowledge and to
present the global knowledge in different customized ways/reports;
- it offers a direct interaction with the user (decision maker) with adaptability and
creative flexibility to the situation and users.
Fig. 1. The main building blocks of a special-purpose DSS: other computer-based systems, web and communication, databases, intelligent knowledge-based ingredients, user interfaces, and decision evaluation/performance.
indirect effects (creation of new skills and new jobs, greater efficiency, better competitiveness, better adaptability of the structure to critical situations).
Data management and knowledge management (facts and inference rules used for reasoning) are very important. The decision maker needs a robust framework capable of capturing imprecision and uncertainty, learning from the data/information, and continuously optimizing the solution by providing interpretable decision rules (Sousa, 2007).
(Figure: monitoring and survey, measurements, studying the knowledge base and database, events.)
Fuzzy inference systems (FIS) are based on the concepts of fuzzy set theory, fuzzy if-then rules, and fuzzy reasoning. The architecture of a FIS contains a rule base, which holds a selection of fuzzy rules; a database, which defines the membership functions used in the fuzzy rules; and a reasoning mechanism, which performs the inference procedure. FIS employ different inference models: Mamdani-Assilian, with the rule consequent defined by fuzzy sets, and Takagi-Sugeno, in which the conclusion of a fuzzy rule is a weighted linear combination of the crisp inputs.
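For a first-order Takagi-Sugeno model, the weighted linear combination mentioned above is commonly written as follows (a standard textbook formulation, added here for clarity rather than taken from this chapter):

\[
  y \;=\; \frac{\sum_{i=1}^{R} w_i \left( a_i^{\top} x + b_i \right)}{\sum_{i=1}^{R} w_i},
\]

where x is the vector of crisp inputs, R is the number of rules, w_i is the firing strength of rule i, and (a_i, b_i) are the parameters of its linear consequent.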
Evolutionary algorithms (EAs) are population-based adaptive methods for optimization, based on the principles of natural selection and survival of the fittest. The dynamics of the population can be expressed by:
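The equation itself is missing from the extracted text; a standard formulation of evolutionary population dynamics (used, for instance, throughout the evolutionary computation literature) is:

\[
  x[t+1] \;=\; s\bigl(v(x[t])\bigr),
\]

where x[t] denotes the population at generation t, v(.) is the variation operator (recombination and mutation), and s(.) is the selection operator.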
The analysis of the performances of a DSS is very important because it offers the possibility to compare, select, update, and improve the knowledge (models, database), and also to assess the response time and the capability to adapt to rapidly changing conditions. The main performance parameters for an integrated DSS-ER/CM are focused on the following aspects:
- employ a naturalistic decision model (allow the decision maker to size up event information as a starting point; support the user in recognizing a similar problem; support the user in analyzing the recognized case; support the user in customizing the selected analogous problem to better fit the real-time conditions; support the user in implementing the final decision as an efficient CoA);
- employ case-based reasoning to take advantage of expertise and historical database experience (implement an intelligent capability; implement a knowledge base of observations; allow flexible/adaptable observation knowledge base modules; allow revision of observations);
- trigger recognition even when the characterization of an event is partial or limited (allow fuzzy input; recognize an event with limited information; allow a mixture of known/unknown information; provide fusion capabilities);
- use system parameters that the end user is likely to have available (operate on a standard laptop/PDA; use an Internet connection);
- establish a central network-based capability for training and experimenting (establish a web-based, game-oriented version for multiple applications; implement a training mode; implement a capability to improve the performance in predicting decisions from each web-based training session; capture web-based gaming statistics; implement a capability to improve the performance in predicting decisions from each web-based gaming session).
Fig. 3 presents a simplified view of the performances of a dedicated DSS for crisis management.
For better efficiency, the DSS-ER/CM should operate on a standard IT infrastructure (Windows-based PDA or laptop, Internet connection) and employ a user-friendly graphical user interface (GUI). Once the application is downloaded (with known expertise conditions), direct human interaction with the system will be limited to interaction during performance. Output data are provided in typical reports (both for training and experimentation) designed according to the user's preferences. Maintenance will be provided through automatic updates accessible to different types of end users. Real-time diagnostics should be performed by experts to ensure proper operation, but no support equipment will be required (updates ensure that performance remains optimal via permanent adaptation).
(Figure: economic data and economic model, human and social data and model, environment data, and resilience outcomes.)
The advantage of DSS in modelling flood risk management is given by its interactive nature, its flexibility in approach, and also by its capability to provide graphic interfaces. It also allows critical training/testing of the model by simulating different input data. The use of DSS is efficient in the estimation of the critical level for flood warning. A flood warning system (FWS) based on DSS can immediately inform people living downstream so that precautions and evacuation/rescue can take place before the event. The FWS developed by Khetchaturat and Kanbei is based on the following steps: the selection of areas where floods frequently occur; prototype system design and data modelling of the flood phenomenon; the development of an early warning network between the web server and the users in the area via the Internet; and the development of a DSS for flooding. In this framework, the authors obtained good results.
A DSS-ANN for flood crisis management is represented by a parallel, distributed processing structure composed of three layers: an input layer for data collection, an output layer used to produce an adequate response to the input, and intermediate layers acting as a collection of feature detectors. The network topology consists of a set of neurons connected by links and organized in layers. Each node in a layer receives and processes weighted input from the previous layer and sends its output to nodes in the next layer through links. Each link is assigned a weight that represents an estimate of the connection strength. The weighted summation of the inputs is converted to an output according to the transfer function.
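In formula form (a standard expression added here for clarity, not reproduced from the chapter), the output of node j is:

\[
  y_j \;=\; f\Bigl(\sum_{i} w_{ij}\, x_i + b_j\Bigr),
\]

where x_i are the outputs of the previous layer, w_{ij} are the link weights, b_j is a bias term, and f is the transfer function.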
A DSS-ANN for flood warning is efficient in the monitoring of a flood risk zone, meaning that an orderly evacuation can occur before the actual onset of flooding, and it supports long-term sustainability. Integrating human knowledge with modelling tools leads to the development of DSS-ANN to assist decision makers during the different phases of flood management. The DSS-ANN is developed as a planning tool that addresses specialists as well as all those who hold different positions in flood risk management; it also contributes to the selection of options to reduce damages (using an expert system approach), to forecasting floods (using ANN), and to modelling operations according to the impact of flooding.
the database knowledge permits the construction of a drought index. If the database is limited, the data layer can retrieve data from distributed relational databases, and standard queries on the data layer are used to respond to higher-level requests.
A web-based DSS is another interesting instrument for drought risk management. It provides information to farmers, experts, and end users, and it can improve efficiency by allowing resources to be better allocated to manage this type of risk.
6. Conclusion
There is an emerging interest in different applications of DSS in critical decision making for extreme risk and crisis management. The nature of DSS tools has changed significantly: they are now equipped with a variety of intelligent tools, such as graphics, visual interactive modeling, artificial intelligence techniques, fuzzy sets, and genetic algorithms, that add new capabilities well adapted to extreme risk and crisis management. The focus is to demonstrate the efficiency of using DSS in an extended list of applications, including all the phases in the management of natural hazards, terrorism, and technological accidents, but also financial crises and multiple crises. All these problems should be treated in a multidisciplinary, modular, and scalable framework that is flexible, adaptive, and robust. This framework should also be designed in a friendly manner, so that users can input new data/tasks easily. The DSS-ER/CM is flexible, so that new risks and impact functions can be easily incorporated by users, and it should also incorporate financial-economic modules. In this
7. Acknowledgment
This work is supported partially by the National Centre for Programme Management,
Romania.
8. References
Armstrong, J, Collopy, F.K. (1993). Causal forces: structuring knowledge for time series
extrapolation. Journal of Forecasting, No.12, pp. 103-115
Abraham, A. (2001). Neuro-Fuzzy Systems: State-of-the-Art Modeling Techniques, In:
Connectionist Models of Neurons, Learning Processes, and Artificial Intelligence, J. Mira
and A. Prieto (Eds.), 269-276, Springer-Verlag Germany, Granada, Spain
Abraham, A. (2002). Optimization of Evolutionary Neural Networks Using Hybrid Learning
Algorithms, Proceedings of the IEEE International Joint Conference on Neural Networks
(IJCNN'02), 2002 IEEE World Congress on Computational Intelligence, vol.3, pp. 2797-
2802, Hawaii, IEEE Press
Alexander, D. (1993). Natural Disaster, UCL Press
Anheier, H. (ed.) (1999). When Things go Wrong: Failures, Bankruptcies, and Breakdowns in
Organizations, Thousand Oaks, CA: Sage
Comfort, L.K. (ed.) (1988). Managing Disaster: Strategies and Policy Perspectives, London: Duke
Press
Coombs, W.T. (1998). An analytic framework for crisis situations: Better responses from a
better understanding of the situation, Journal of Public Relations Research, Vol. 10, pp.
179–193
Curtin, T.; Hayman, D. & Husein, N. (2005). Managing Crisis: A Practical Guide, Basingstoke:
Palgrave Macmillan
Dahr, V. & Stein, R. (1997). Intelligent Decision Support Methods – The Science of Knowledge
Work, Prentice Hall
Druzdel, M. J. & Flynn, R. R. (2002). Decision Support Systems. (A. Kent, Ed.) Encyclopedia of
Library and Information Science
Endsley, M.R. (2004). Situation awareness: Progress and directions. In: A cognitive approach to
situation awareness: Theory, measurement and application, S. Banbury & S. Tremblay
(Eds.), 317–341, Aldershot, UK: Ashgate Publishing
Fink, E. (2002). Changes of Problem Representation, Springer Verlag, Berlin, New York
Gadomski, A. M.; Bologna, S.; Di Constanzo, G.; Perini, A., & Schaerf, M. (2001). Towards
intelligent decision support systems for emergency managers: the IDA approach,
International Journal Risk Assessment and Management, vol.1, no.3-4
Gadomski, A. M.; Balducelli, C.; Bologna, S. & DiCostanzo, G. (1998). Integrated Parallel
Bottom-up and Top-down Approach to the Development of Agent-based
Intelligent DSSs for Emergency Management. Proceedings of the International
Emergency Management Society Conf. TIEMS'98: Disaster and Emergency Management
Gadomski, A. M.; Bologna, S.; Di Costanzo, G.; Perini, A. & Schaerf, M. (2001). Towards
intelligent decision support systems for emergency managers: the IDA approach.
International Journal Risk Assessment and Management, Vol.2, No. 3,4
Gauld, R. & Goldfinch, S. (2006). Dangerous Enthusiasms: E-government, Computer Failure and
Information System Development. Dunedin: Otago University Press
Guerlain, S.; Brown, D. & Mastrangelo, C. (2000). Intelligent decision support systems,
Proceedings of the IEEE Conference on Systems, Man, and Cybernetics, Nashville, USA
Holsaplle, C. W. & Whinston, A. B. (1996). Decision Support Systems: A Knowledge-based
Approach, West Publishing Company, Minneapolis/St Paul
Jaques, T. (2000). Developments in the use of process models for effective issue
management, Asia-Pacific Public Relations Journal, Vol.2, pp. 125–132
Janssen, T. L. (1989). Network expert diagnostic system for real-time control, Proceedings of
the second international conference on industrial and engineering applications of artificial
intelligence and expert systems, Vol. 1 (IEA/AUIE ‚89)
Kaplan, S. & Garrick, B. (1981). On the quantitative definition of risk, Risk Analysis, Vol.1,
no.1, pp.11 -28
Kebair, F., Serin, F., & Bertelle, C. (2007). Agent-Based Perception of an Environment in an
Emergency Situation, Proceedings of the International Conference of Computational
intelligence and Intelligent Systems (ICCIIS), World Congress of Engineering (WCE),
London, UK, pp. 49-54
Emerging Applications of Decision Support Systems (DSS) in Crisis Management 375
Klein, M.R. & Methlie, L.B. (1995). Knowledge-based decision support systems with Applications
in Business, 2nd Edition. England: John Wiley & Sons
Lam, J. & Litwin, M.J. (2002). Where's risk? EWRM knows! The RMA Journal, November 2002
Lee, K.M. & Kim, W. X. (1995). "Integration of human knowledge and machine knowledge by using
fuzzy post adjustment: its performance in stock market timing prediction". Expert Systems,
12 (4), 331-338
Mamdani E. H., & Assilian S. (1975). An experiment in Linguistic Synthesis with a Fuzzy
Logic Controller, International Journal of Man-Machine Studies, Vol. 7, No.1, pp. 1-13
Marques, M. S.; Ribeiro, R.A. & Marques, A.G. (2007). A fuzzy decision support system for
equipment repair under battle conditions. Fuzzy Sets and Systems, Journal of
Decision Systems, Vol. 16, No. 2
Martinsons, M. G. & Davison, R. M. (2007). Strategic decision making and support systems:
Comparing American. Japanese and Chinese Management Decision Support Systems,
Vol.43, pp. 284–300.
McNurlin, B. C., & Sprague, R. H. (2004). Information systems management in practice (6th ed.).
Englewood Cliffs, NJ: Prentice Hall.
Mendelsohn L.B. (1995). Artificial Intelligence in Capital Markets, Chapter 5: Global Trading
Utilizing Neural Networks: A Synergistic Approach, Virtual Trading, Probus Publishing
Company, Chicago, Illinois
Nudell, M. & Antokol, N. (1988). The Handbook for Effective Crisis Management, Lexington:
Lexington Books
Pan J.; Kitagawa H.; Faloutsos C. & Hamamoto M. (2004). AutoSplit: Fast and Scalable
Discovery of Hidden Variables in Stream and Multimedia Database,. Proceedings of
the Eighth Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD
2004), Australia, May 26-28
Pauchant, T.C. & Mitroff, I.I. (1992). Transforming the Crisisprone Organization. Preventing
Individual, Organizational and Environmental Tragedies, Jossey Bass Publishers, San
Francisco
Pearson, P. (2006). Data representation layer in a Multi-Agent DSS, IOS Press, Vol.2, No.2, pp.
223-235
Pearson, C. & Clair, J. (1998). Reframing Crisis Management, Academy of Management Review,
Vol. 23, No. 1, pp. 59–76
Prelipcean G. & Boscoianu M. (2008). Computational framework for assessing decisions in
energy investments based on a mix between real options analysis and artificial
neural networks, Proceedings of the 9th WSEAS International Conference on
Mathematics & Computers in Business and Economics (MCBE’80), pp. 179-184
Prelipcean G.; Popoviciu N. & Boscoianu M. (2008). The role of predictability of financial
series in emerging market applications, Proceedings of the 9th WSEAS International
Conference on Mathematics & Computers in Business and Economics (MCBE’80), pp.
203-208
Prelipcean, G. & Boscoianu, M. (2009). Patents, Academic Globalization and the Life Cycle of
Innovation, Balkan Region Conference on Engineering and Business Education &
International Conference on Engineering and Business Education,15-17 October
2009, Lucian Blaga University of Sibiu, Romania, pp. 149-152
Prelipcean, G. (2010). Emerging Microfinance Solutions for Response and Recovery in the Case of
Natural Hazards, Proceedings of the International Conference on Risk Management,
376 Efficient Decision Support Systems – Practice and Challenges in Multidisciplinary Domains
Assessment and Mitigation, RIMA10, Plenary Lecture, WSEAS Press, ISBN 978-960-
474-182-3, ISSN 1790-2769, pp.220-226
Prelipcean, G. & Boscoianu, M. (2010). Innovation, Technological Change and Labor saving in the
Aftermath of the Global Crisis, Balkan Region Conference on Engineering and
Business Education & International Conference on Engineering and Business
Education, 15-17 October 2009, Lucian Blaga University of Sibiu, Romania, pp.169-
173
Prelipcean, G.; Boscoianu, M. & Moisescu, F. (2010). New Ideas on the Artificial Intelligence
Support in Military Applications, Conference Information: 9th WSEAS International
Conference on Artificial Intelligence, Knowledge Engineering and Data Bases,
February 20-22, 2010, University of Cambridge, pp. 34-39
Regester, M. & Larkin, J. (2002). Risk issues and crisis management—A casebook of best practice
(2nd ed.), Kogan Page, London
Renn, O. (1992). Concept of risk: A classification, In: Social Theories of Risk, Praeger Publisher
Schwarz, H.; Wagner, R. & Mitschang, R.R. (2001). Improving the Processing of Decision
Support Queries: The Case for a DSS Optimizer, Proceedings of (IDEAS '01)
Sousa, P.; Pimentao, J. & Ribeiro, R.A. (2006). Intelligent decision support tool for
priorotizing equipement repairs in critical/disaster situations, Proceedings of the
International Conference on Creativity and Innovation in Decision Making and Decision
Support (CIDMDS 2006)
Sprague Jr., R.H. & Carlson, E.D. (1982). Building Effective Decision Support Systems, Prentice-
Hall, Englewood Cliffs New Jersey
Sugeno, M. (1985). Industrial Applications of Fuzzy Control, Elsevier Science PubCo.
Takagi, T. & Sugeno, M. (1983). Derivation of fuzzy control rules from human operator's
control actions, Proceedings of the IFAC Symposium on Fuzzy Information, Knowledge
representation and decision analysis, Marseilles, France, pp. 55-60
Toevs, A.; Zizka, R.; Callender, W. & Matsakh, E. (2003). Negotiating the Risk Mosaic: A
Study on Enterprise-Wide Risk Measurement and Management in Financial
Services. The RMA Journal, March 2003
Turban E. & Aronson J.E. (2004). Decision Support Systems and Intelligent Systems, Prentice
Hall, Upper Saddle River, NJ, 1998
Turban, E.; Aronson, J.E. & Liang, T.P. (2004). Decision Support Systems and Intelligent
Systems. Vol. 7, ed. 2004, Prentice-Hall
Wildavsky, A. (1988). Searching for Safety, New Brunswick, NJ: Transaction Press
Wisner, B.; Blaikie, P.; Cannon, T. & Davis, I. (2004). At Risk: Natural Hazards, People's
Vulnerability and Disasters (2nd edition), Routledge UK
Zadeh, L. (19650. Fuzzy sets, Information and Control, Vol.8, pp. 338–353
Zhang, C. (1992). Cooperation under uncertainty in distributed expert systems, Artificial
Intelligence, Vol.56, pp. 21-69
Zimmermann, H. (1987). Fuzzy Sets, Decision Making, and Expert Systems, Kluwer Academic
Publishers, Boston, Dordrecht, Lancaster
19

Risk Analysis and Crisis Scenario Evaluation in Critical Infrastructures Protection

Italy
1. Introduction
Critical Infrastructures (CI) are technological systems (encompassing telecommunication and electrical networks, gas and water pipelines, roads and railways) at the heart of citizens' lives. CI protection, intended to guarantee their physical integrity and the continuity of the services they deliver (at the highest possible Quality of Service), is one of the major concerns of public authorities and of private operators, whose economic results strictly depend on how well they are able to accomplish this task.
Critical Infrastructure Protection (CIP) is thus a major issue for nations, as CI malfunctions or even outages may have dramatic and costly consequences for humans and human activities (1; 2). The EU has recently issued a directive to member states to increase the level of protection of their CIs, which, on an EU-wide scale, should be considered unique, trans-national bodies: they do not end at national borders but constitute a single, large system covering the whole EU area (3).
Activities on CI protection attempt to encompass all possible causes of faults in complex networks: from those produced by deliberate human attacks, to those occurring in normal operating conditions, up to those resulting from dramatic events of geological or meteorological origin. Although much effort has been devoted to realizing new strategies to reduce the risk of events leading to the fault of CI elements, a further technological activity is the study of possible strategies for predicting and mitigating the effects produced by CI crisis scenarios. To this aim, it is evident that a detailed knowledge of what is going to happen might enormously help in preparing healing or mitigation strategies in due time, thus reducing the overall impact of crises, both in social and economic terms.
CIP issues are difficult to analyze because one must consider the interdependence effects among different CIs. A service reduction (or a complete outage) of the electrical system, for instance, has strong repercussions on other infrastructures which are (more or less) tightly related to it. During an electrical outage, even vehicular traffic is affected: petrol pumps need electrical power to deliver fuel, and pay tolls need electricity to process credit card transactions. As such, vehicular traffic on motorways might strongly perceive the effects (after a certain latency time) of an outage of the electrical system. This interdependence is subtler than that of CIs more directly tied to electrical power delivery, such as railway traffic; nevertheless, all these effects must be taken into account when healing and mitigation strategies are devised for the solution of a crisis event (4).
This work reports on a new strategy aimed at realizing tools for predicting the onset of crisis scenarios and for quickly predicting their consequences and impacts on a set of interdependent infrastructures, in terms of the reduction of the services dispatched by the infrastructures and the impact that service unavailability might have on the population. The goal is to provide a new generation of Decision Support Systems that support CI operators in performing preparedness actions and in optimizing mitigation effects.
The present chapter is composed of three sections. The first proposes the general layout of the system and describes each of its tasks; it also contains the general description of the risk analysis and of the tools used to make quantitative evaluations. The second section gives a general description of the meteo-climate simulation models which provide an accurate evaluation of the precipitation expected over the short and medium-to-long term. The last section is entirely devoted to the description of the impact evaluation of crisis scenarios.
2.1 Geo-database
A major problem that risk assessments and mitigation strategies must cope with is the lack of a centralized data repository allowing comprehensive risk analysis and risk prediction of the CIs in a given region. Without a means to consider all CIs and their mutual interdependencies on the same footing, no efficient way to predict and mitigate crisis scenarios can be realized. Impact mitigation strategies should, in fact, consider the different responses of the different CIs, their perturbation in relation to the crisis, the different latency times for perturbation spreading, and the different timings of the healing actions.
To address all these issues, the first necessary action is to constitute a control room where data on all CIs are made available. For this reason, the geo-referenced database plays a central role in the MIMESIS tool. It contains a large set of data of different kinds: (1) regional nodes of critical infrastructures, such as electrical stations, power generators, high-to-medium-to-low voltage electrical transformers, telecommunication switches and primary cabins, railways and roads with their specific access points, and gas and water pipelines with their specific active points; (2) geographic and elevation maps of the region with the highest possible accuracy; (3) position, flow rates, and hydrographic models of all water basins (rivers, natural or artificial lakes, hydroelectric reservoirs, etc.); (4) landslide propensity of the different areas according to historical repositories, where dates and landslide types are recorded; (5) geo-seismic data provided by the national geo-seismic agency, supplemented by real-time data from in-situ accelerometers (where available); (6) geo-seismic data on geological faults; (7) social (cities, population densities), administrative (counties, districts) and economic (industrial areas classified in terms of energy consumption, produced GDP, types of resources the area depends on, etc.) data of the region; (8) agricultural maps, fisheries, etc.; (9) traffic data on motorways and major urban roads (Origin-Destination matrices, if available); (10) railway data with passenger and goods traffic; (11) any other data related to other infrastructures (water and gas-oil pipelines, wherever present in the territory).
Such a huge database should be provided in a GIS format, allowing precise geo-referencing of each constitutive element of the networks.
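As a sketch of how such a geo-referenced repository might be assembled, the following example uses hypothetical layer names and file paths, and the GeoPandas library purely for illustration:

```python
import geopandas as gpd

# Hypothetical GIS layers of the geo-database, one file per data category.
layers = {
    "electrical_nodes": "data/electrical_stations.shp",
    "telecom_nodes": "data/telecom_switches.shp",
    "railway_access": "data/railway_access_points.shp",
    "water_basins": "data/water_basins.shp",
    "landslide_inventory": "data/landslide_inventory.shp",
    "population": "data/population_districts.shp",
}

# Load every layer and reproject to a common coordinate reference system
# so that proximity and overlay queries are geo-referenced consistently.
geodb = {name: gpd.read_file(path).to_crs(epsg=32632) for name, path in layers.items()}

# Example query: electrical nodes lying within 500 m of a water basin.
basins_buffer = geodb["water_basins"].buffer(500)
at_risk = geodb["electrical_nodes"][geodb["electrical_nodes"].intersects(basins_buffer.unary_union)]
```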
where Vi(r) is a suitable, normalized function estimating the weight of the specific agent i as a threat to the infrastructures (seismicity, presence of water basins, historical propensity of the terrain to landsliding, etc.), integrated over a suitable area surrounding the CI element; Pi(r) represents the sensitivity of the specific CI element to the threat i (for a given CI element located at the point r, the value Pi(r) might be larger if i is the seismic threat rather than the flood threat caused by the nearby presence of water basins); and I(r) (which is essentially independent of the threat i) is the sum of the impacts that the absence of the CI element in r produces, upon failure, on its own network and on the other CI networks functionally related to it.
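A plausible general form of Eq. (1), consistent with the definitions above but given here only as an illustrative assumption, is

$$R_i(\mathbf{r}) \;=\; P_i(\mathbf{r})\, I(\mathbf{r}) \int_{A(\mathbf{r})} V_i(\mathbf{r}')\, d\mathbf{r}',$$

where $A(\mathbf{r})$ denotes the area surrounding the CI element over which the threat weight $V_i$ is integrated.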
We have distinguished a number of agents which could produce risk for the CI elements. Among them:
• geo-seismic risk: each CI element is evaluated as a function of its position in the seismicity map. Our database contains the Italian seismic hazard map, which is periodically updated by the National Institute of Geophysics and Volcanology (INGV) (5).
• landslide risk: each CI element is evaluated as a function of its position in the Italian inventory of landslides (resulting from the ISPRA project IFFI (6)).
• water basin proximity risk: each CI element is evaluated as a function of its position with respect to the regional water basins. Integration in Eq. (1) is performed over a circle whose radius can be varied (from a few hundred meters up to a few kilometers in the proximity of rivers with large discharge).
The impact value can be estimated as the reduction of the Quality of Service (QoS) of the specific service provided by the network containing the faulted element. However, MIMESIS also attempts to evaluate the economic and social impact that the QoS loss implies.
landslides in prone grounds. The MIMESIS system produces a daily high-resolution weather forecast with precipitation maps at a resolution of 1 square kilometre. The numerical models involved in weather and climatological predictions will be described in Section 3. The static analysis is then updated by re-evaluating the risk values upon application of the precipitation data. In fact, the value Ri(r) of Eq. 1 can be recalculated by modifying the function V(r). Consider V(r) as the current flow of a river: upon consistent precipitation, the river flow could increase, and the integration of V(r) over a given area could produce a higher, over-threshold value for the risk Ri(r). For the landslide threat, the V(r) function could measure the strength of the correlation (captured from historical data) between precipitation abundance and landslide propensity: when precipitation is abundant, a large landslide probability could be triggered in a specific area (that comprised in the integration of V(r) in Eq. 1) and the risk function Ri(r) consequently increases.
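As an illustration of this dynamic recalculation step, the following sketch (hypothetical function and field names, not the MIMESIS implementation) integrates a precipitation-updated threat field V over a circular neighbourhood of a CI element and flags the element when its risk exceeds a threshold:

```python
import numpy as np

def risk_value(V, P, I, center, radius, cell_area=1.0):
    """Recompute R_i(r) for one CI element: integrate the threat field V over
    a circle of given radius around the element, then weight by the element's
    sensitivity P and impact I (illustrative form only)."""
    ny, nx = V.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    mask = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    integrated_threat = V[mask].sum() * cell_area
    return P * I * integrated_threat

# Hypothetical threat field updated with a 1-km forecast precipitation field.
V_static = np.random.default_rng(0).random((100, 100))      # baseline threat weight
precip = np.random.default_rng(1).random((100, 100)) * 2.0   # forecast precipitation
V_dynamic = V_static * (1.0 + precip)                        # illustrative update rule

R = risk_value(V_dynamic, P=0.8, I=5.0, center=(50, 50), radius=10)
alert = R > 100.0   # over-threshold elements trigger a crisis-scenario evaluation
```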
Both numerical weather forecast and climate prediction involve the use of mathematical
models of the atmosphere, represented by a set of discretized dynamical equations and by
local deterministic parameterizations of physical processes occurring at sub-grid scales. Such
processes are supposed to be in statistical equilibrium with the resolved-scale flow and are,
therefore, treated as to their mean impact on the resolved scales.
Climate, seasonal and weather models are basically identical in structure, independent of the
specific time scale for which they have been developed, and often share the dynamical core
and physical parameterizations. Differences only lie in the resolution at which models are run,
which may imply specific tuning of the sub-grid representation, although the mathematical
core of the two approaches is conceptually distinct.
Climate is, by definition, the statistical mean state of the Earth system with its associated
variability. Therefore, numerical simulation of climate, as performed by General Circulation
Models (GCMs), is a boundary condition problem, and changes in the system equilibrium
derive from slow changes in boundary forcing (such as the sea surface temperature, the
solar constant or the greenhouse gas concentration). On the other hand, Numerical Weather
Prediction models (NWPs) are used to predict the weather in the short (1-3 days) and medium
(4-10 days) range and depend crucially on the initial conditions. For instance, small errors in
the sea surface temperature or small imbalances in the radiative transfer have a small impact
on a NWP model but can dramatically impair GCM results.
To partly overcome this problem, coupled Atmosphere-Ocean models (AOGCMs) have been
developed. In order to allow an adequate description of the system phase space the GCM
simulation runs would last tens of years. The consequent computational cost limits the spatial resolution of climate simulations, so that local features and extreme events, which are crucial to good weather predictions, are by necessity embedded in sub-grid process parameterizations.
A similar restriction holds for global Weather Prediction Models (WPMs) that are currently
run at different meteorological centers around the world, whose prediction skill is enhanced
by performing several model forecasts starting from different perturbations of the initial
conditions (ensemble forecasting), thus severely increasing computational requirements.
Future high resolution projections of both climate and weather rely on three classes
of regionalization techniques from larger scale simulations: high-resolution "time-slice"
Atmosphere GCM (AGCM) experiments (8), nested regional numerical models (9), and
statistical downscaling (10), each presenting its own advantages and disadvantages.
At present, dynamical downscaling by nested limited area models is the most widely adopted
method for regional scenario production, its reliability being possibly limited by unavoidable
propagation of systematic errors in the driving global fields, neglecting of feedbacks from the
local to the global scales and numerical noise generation at the boundaries. This technique,
however, possesses an unquestionable inherent capacity to fully address the problem of
weather prediction and climate change at finer resolutions than those allowed by general
circulation models, as it allows local coupling among different components of the Earth
system at a reasonable computational cost.
future impacts of climate change and these will pose challenges to many economic sectors.
Climate change is expected to magnify regional differences in Europe’s natural resources
and assets. Negative impacts will include increased risk of inland flash floods, and more
frequent coastal flooding and increased erosion (due to storminess and sea-level rise). In
Southern Europe, climate change is projected to worsen conditions (high temperatures and
drought) in a region already vulnerable to climate variability, and to reduce water availability,
hydropower potential, summer tourism and, in general, crop productivity. It is also projected
to increase health risks due to heat waves and the frequency of wildfires (14).
Such dramatic changes are attributed to the anthropogenic warming arising from augmented
carbon dioxide emissions, which have a discernible influence on many physical and biological
systems, as documented in data since 1970 and projected by numerical models. Carbon
dioxide and the other greenhouse gases affect the atmospheric absorption properties of
longwave radiation, thus changing the radiation balance.
An immediate impact of this altered energy balance is the warming of the lower troposphere
(an increase of the global temperature of about 0.6◦ C has been observed over approximately
the last 50 years) that, in turn, affects the atmospheric hydrological cycle. Although extremely
relevant as to their effects on human activities, hydrological processes are still poorly
modeled, and projections are affected by severe uncertainties. Climate models can hardly
represent the occurrence probability and the duration of extreme rainfall or drought events,
even in today’s climate conditions (15), so that governmental authorities now explicitly
demand innovative science-based approaches to evaluate the complexity of environmental
phenomena.
impact studies, which follow up model projections, are definitely in need of complex systems
capable of crossing information from different disciplines and of managing huge amounts of
data.
The uncertainty involved in this type of impact assessment limits the value of the results and
great care should be taken in evaluating model skill in predicting the driving meteorological
variables. Precipitation, the main atmospheric driver of hydrological catchment response, is unfortunately still a critical output of model diagnostics (20). Although the complexity
of cloud parameterizations is always increasing, this is no guarantee of improved accuracy,
and better representation of clouds within NWP models has been the focus of recent research
(21),(22), (23), (24). Numerical models explicitly resolve cloud and precipitation processes
associated with fronts (large scale precipitation), while parametrizing small scale phenomena
by means of the large-scale variables given at the model’s grid points. The most important
parameters are humidity, temperature and vertical motion. The vertical velocity determines
the condensation rate and, therefore, the supply of liquid water content. Temperature also
controls the liquid water content, via the determination of the saturation threshold. Moreover,
the temperature distribution within a cloud is also important in determining the type of
precipitation - rain or snow. The complexity of the parameterization of cloud processes is
limited by the associated numerical integration time (25). Model spatial resolution is crucial
for a reliable treatment of condensation processes, as vertical motion of air masses is often
forced by orography, whose representation therefore needs to be as accurate as possible.
Again, regional models, due to their higher spatial resolution and reduced computational
costs, seem to be the most appropriate tool for downscaling precipitation fields, at the same
time preserving the complexity of convection parameterization. However, the reliability of
precipitation forecasts provided by state-of-the art meteorological models also depends on
their ability to reproduce the sub-grid rain rate fluctuations which are not explicitly resolved.
In particular, the assessment of precipitation effects on the hydrological scales requires an
accurate characterization of the scaling properties of precipitation, which is essential for
assessing the hydrological risk in small basins, where there is a need to forecast the streamflow of watersheds of a few hundred square kilometres or less, characterized by concentration times of a few hours or less.
At these smaller space time scales, and specifically in very small catchments and in urban
areas, rainfall intensity presents larger fluctuations, and therefore higher values, than at the
scale of meteorological models (26). In order to allow a finer representation of land surface
heterogeneity than that allowed by nominal grid resolution, mosaic-type schemes are being
investigated, which account for topographical corrections in the sub-grid temperature and
pressure fields and downscale precipitation via a simple statistical approach. Such schemes
allow simple implementation of space dependent probability distribution functions that may
result from ongoing research on statistical downscaling of precipitation (27). As already
mentioned, a stochastic approach has also been successful in improving precipitation forecast
reliability as to its large scale statistical properties. In the last decade, ensemble prediction
systems have substantially enhanced our ability to predict precipitation and its associated
uncertainty (28). It has been shown that such systems are superior to single deterministic
forecasts for a time range up to two weeks, as they account for errors in the initial conditions,
in the model parameterizations and in the equation discretization that might cause the
flow-dependent growth of uncertainty during a forecast (29), (30), (31). At the same time,
multi-model ensemble approaches have been indicated as a feasible way to account for model
errors in seasonal and long-term climate studies (see Figure 3).
Fig. 3. Climatological precipitation field for the period 1958-1998. The map was produced by analyzing the output of the 9 models involved in the EU ENSEMBLES project.
It has been proved that, under the assumption that simulation errors in different models are
independent, the ensemble mean outperforms individual ensemble members (29), (32), (33).
By sampling modeling uncertainties, ensembles of GCMs should provide an improved basis
for probabilistic projections compared to an ensemble of single model realizations, as the latter
only samples different initial conditions, i.e. a limited portion of a specific model phase space.
Ensemble predictions are therefore increasingly being used as the drivers of impact forecasting
systems (34), (35), (20), thus reinforcing the already pressing demand for complex numerical
systems that allow rapid inter-comparison between model realizations and multivariate data
analysis.
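The advantage of the ensemble mean can be made explicit with a standard argument (a sketch under the stated independence assumption, not taken from the cited works): if each of the $M$ members produces a forecast $f_m = t + e_m$ of the true value $t$, with independent errors satisfying $\mathrm{E}[e_m] = 0$ and $\mathrm{Var}(e_m) = \sigma^2$, then the ensemble mean $\bar{f} = \frac{1}{M}\sum_{m=1}^{M} f_m$ has error variance

$$\mathrm{Var}(\bar{f} - t) \;=\; \frac{1}{M^2}\sum_{m=1}^{M}\mathrm{Var}(e_m) \;=\; \frac{\sigma^2}{M},$$

i.e. its expected squared error is a factor $M$ smaller than that of a single member.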
infrastructure is still topologically connected and determines the new topological "critical" points of the network. Topological analysis of the network is carried out through the evaluation of the following quantities (7):
• node and link centrality indices (Betweenness centrality, Information centrality);
• the network's diameter, minimum paths, and minimum cuts;
• spectral analysis of the Adjacency and Laplacian matrices associated with the network.
Topological analysis is a first means to assess the integrity of the network. The presence of disconnected components of the graph can easily be detected by evaluating the eigenvalues of the Laplacian matrix associated with the network: if the graph G (associated with the network) has n vanishing eigenvalues, it has n disconnected components. Centrality measures identify the most relevant elements (nodes, arcs) of the network. The Information centrality of node i, for instance, measures the increase of the minimum paths among all the other nodes if node i is lost (when node i is lost, the minimum paths originally passing through i must be re-evaluated, and the new minimum paths will be longer than the original ones). The larger the Information centrality, the more important the node is in providing "good connections" among all the others.
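A minimal sketch of these topological checks, written here with the NetworkX library on a hypothetical perturbed graph (the chapter does not specify the actual implementation):

```python
import networkx as nx
import numpy as np

# Hypothetical perturbed infrastructure graph: nodes are CI elements,
# edges are physical or functional links; faulted elements have been removed.
G = nx.erdos_renyi_graph(n=30, p=0.08, seed=42)

# Connectivity check via the Laplacian spectrum: the number of (numerically)
# vanishing eigenvalues equals the number of disconnected components.
eigvals = np.linalg.eigvalsh(nx.laplacian_matrix(G).toarray().astype(float))
n_components = int(np.sum(np.abs(eigvals) < 1e-9))
assert n_components == nx.number_connected_components(G)

# Centrality indices to rank the most relevant surviving elements.
betweenness = nx.betweenness_centrality(G)
critical_nodes = sorted(betweenness, key=betweenness.get, reverse=True)[:5]

# Diameter, shortest paths and information centrality are computed on the
# largest connected component only.
largest = G.subgraph(max(nx.connected_components(G), key=len))
diameter = nx.diameter(largest)
info_centrality = nx.information_centrality(largest)
```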
After a first assessment of the perturbed network through topological analysis, the MIMESIS tool performs its most relevant action: the estimate of the reduction of the Quality of Service produced by the perturbation of the network due to faults in one or more of its elements. This task is accomplished by using "domain" or "federated" simulators of CIs. By domain simulators we mean commercial or open source simulators of specific infrastructures: electrical (such as Powerworld, E-Agora, Sincal, etc.), telecommunications (NS2), railways (OpenTrack), etc. Federating domain simulators is one of the major achievements of the strong collaboration within the European CIP scientific community (e.g. the projects IRRIIS (36) and DIESIS (37)). By federated simulators we mean a new class of simulators which "couple" two or more domain simulators through specific synchronization mechanisms and interdependency rules describing how, and to what extent, one CI determines the functioning of another. MIMESIS integrates the outcome of one of the most successful EU FP7 projects, the DIESIS project (37), which attempted to design a general model allowing several domain simulators to be integrated into a unique framework. The key role in the DIESIS model is played by the ontology model (KBS), which is able, at an abstract level, to describe a generic Critical Infrastructure and its links with other infrastructures. A generic element of a network in this abstract space can subsequently be "mapped" into the real space of a specific Critical Infrastructure (electrical, telecommunication or other) by adapting the generic elements to the specific case. The ontology model avoids the problem of directly connecting systems which have different structures, different constitutive elements, and different functioning laws: integration is first performed in a "meta-space" (the abstract space of Critical Infrastructures) (38) and then mapped into the spaces of the single infrastructures. A brief sketch of the KBS approach is outlined in the following section.
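As an illustration of how two domain simulators might be federated through such interdependency rules, the following toy sketch (entirely hypothetical classes and rules, not the DIESIS or MIMESIS implementation) steps an electrical and a telecommunication simulator in lockstep and propagates a feedsTelco-style dependency:

```python
class ElectricalSim:
    """Toy domain simulator: tracks which electrical nodes are energized."""
    def __init__(self, nodes):
        self.energized = {n: True for n in nodes}

    def step(self, faults=()):
        for n in faults:            # apply external fault events for this step
            self.energized[n] = False
        return self.energized


class TelecomSim:
    """Toy domain simulator: a telecom node keeps working only if powered."""
    def __init__(self, nodes):
        self.working = {n: True for n in nodes}

    def step(self, power_ok):
        for n in self.working:
            self.working[n] = self.working[n] and power_ok.get(n, True)
        return self.working


# Interdependency rule (feedsTelco-like): which electrical node powers which telecom node.
feeds_telco = {"T1": "E1", "T2": "E2", "T3": "E2"}

elec = ElectricalSim(["E1", "E2"])
telco = TelecomSim(["T1", "T2", "T3"])

# Federated loop: the two simulators are synchronized at every time step.
for t, faults in enumerate([(), ("E2",), ()]):
    energized = elec.step(faults)
    power_ok = {tn: energized[en] for tn, en in feeds_telco.items()}
    working = telco.step(power_ok)
    # 'working' now reflects the QoS reduction propagated across domains.
```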
simulation framework (Federated Simulation Phase). The main idea is to develop a Knowledge
Base System (KBS) based on ontologies and rules providing the semantic basis for the federated
simulation framework (40), (41).
sub-property of the WONT property isConnected. Analogously, all dependencies among the
considered CI domains are modeled through ad-hoc designed sub-properties of the WONT
property dependsON.
In the following, given a CI domain Xi , Ci indicates the set of all components of Xi and Pri
indicates the set of properties related to the components of Xi . Then, a generic IONT can be
represented as IONTi = {Ci , Pri }.
Once the IONT has been defined to model a particular domain, it is possible to create
individuals (instances of IONT classes) to represent actual network domains (for example
the electrical power network or the telecommunication network of a specific city district).
Similarly to the IONT definitions, we indicate with Ci∗ the set of all instantiated components belonging to the domain Xi and with Pri∗ the set of instantiated properties related to Xi. Then, the IONT instance IONTi∗ can be expressed as IONTi∗ = {Ci∗ , Pri∗ }.
The FONT includes all IONTs of the domains involved in the considered scenario (e.g. electrical, telecommunication, railway domains). The FONT properties (sub-properties of the WONT property dependsON) make it possible to model dependencies among components of different domains (e.g. the FONT property feedsTelco models the electrical supply of telecommunication nodes). The sets of the FONT properties and of the FONT instantiated properties are defined respectively as:
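By analogy with the IONT notation introduced above, a plausible form (an assumption, not necessarily the authors' exact definition) is

$$Pr_F \;=\; \bigcup_{i \neq j} Pr_{ij}, \qquad Pr_F^{\ast} \;=\; \bigcup_{i \neq j} Pr_{ij}^{\ast},$$

where $Pr_{ij}$ denotes the set of sub-properties of dependsON linking components of domain $X_i$ to components of domain $X_j$ (e.g. feedsTelco from the electrical to the telecommunication domain) and $Pr_{ij}^{\ast}$ the corresponding instantiated properties; the federation ontology and its instance can then be written as $FONT = \{IONT_1, \dots, IONT_n, Pr_F\}$ and $FONT^{\ast} = \{IONT_1^{\ast}, \dots, IONT_n^{\ast}, Pr_F^{\ast}\}$.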
5. Conclusions
CIP is a major concern of modern nations. The EU has issued a Directive (3) to increase awareness of this duty, as CIs have become transnational bodies whose care must be a shared concern. While market liberalization has produced, at least in principle, benefits for consumers, it has de facto imposed a deep revision of the governance strategies of the major national infrastructures. In many countries, there has been a (sometimes sudden) fragmentation of the ownership and management of relevant parts of the infrastructures (see, for instance, those for gas and oil distribution, telecommunications, electrical transmission and distribution, motorways and railways) which has strongly weakened the centralized governance model, replaced by a model of "diffused" governance of the infrastructures. Many different industrial players autonomously own and manage parts of the infrastructures, making their global control more complex. The lack of information sharing among operators of different parts of the infrastructures, due in some cases to industrial competition, reduces the technical control of the whole infrastructure and, even more, reduces the
Besides being a prediction tool for external events, MIMESIS could also be used to correctly design new branches of existing infrastructures, by allowing the ex ante evaluation of their impact on the Quality of Service of the whole system of interdependent infrastructures. We believe that the availability of data from CI owners, their integration with other types of data (geophysical, economic, administrative, etc.), the use of advanced numerical simulation tools for weather and climate predictions, and the use of CI simulators (both single-domain and "federated" simulators) could provide an invaluable basis for realizing a new generation of tools for increasing the protection and enhancing the security of CIs.
6. References
[1] UCTE, "Interim Report of the Investigation Committee of the 28 September 2003 Blackout in Italy", 17 October 2003.
[2] "Final Report on the August 14, 2003 Blackout in the United States and Canada: Causes
and Recommendations", U.S. - Canada Power System Outage Task Force, April
2004.
[3] EU Council Directive 2008/114/EC on the identification and designation of European
critical infrastructures and the assessment of the need to improve their protection
(December 8, 2008).
[4] Rosato V., Tiriticco F., Issacharoff L., De Porcellinis S., Setola R. (2008), Modelling interdependent infrastructures using interacting dynamical models, Int. J. Crit. Infrastr., Vol. 4, No. 1/2, pp. 63-79,
doi:10.1504/IJCIS.2008.016092
[5] http://esse1-gis.mi.ingv.it/s1_en.php
[6] IFFI stands for Inventory of Landsliding Phenomena in Italy (http://www.apat.gov.it/site/en-GB/Projects/IFFI_Project/default.html).
[7] Boccaletti, S. et al. (2006). Complex Networks: Structure and Dynamics, Phys. Reports, Vol. 424, No. 4-5, (February 2006), pp. 175-308,
doi:10.1016/j.physrep.2005.10.009
[8] Cubasch, U et al (1995). Regional climate changes as simulated in time-slice experiments.
Clim Change, 31: 273-304,
doi: 10.1007/BF01095150
[9] Giorgi, F et al (1999). Regional climate modeling revisited. An introduction to the special
issue J Geophys Res 104: 6335-6352.
doi:10.1029/98JD02072
[10] Wilby, RL et al (1998). Statistical downscaling of general circulation model output: a
comparison of methods. Water Resources Research 34: 2995-3008.
doi:10.1029/98WR02577
[11] Buontempo C. et al., Multiscale projections of weather and climate at UK Met Office, in
"Management of Weather and Climate Risk in the Energy Industry" Proceedings of
the NATO Advanced Research Workshop on Weather/Climate Risk Management
for the Energy Sector, Santa Maria di Leuca, Italy, 6-10 October 2008, Troccoli A.
ed., NATO Science for Peace and Security Series C: Environmental Security,
Springer Science+Business Media B.V. 2010, ISBN: 978-90-481-3690-2.
[12] Solomon, S., D. Qin, M. Manning, Z. Chen, M. Marquis, K.B. Averyt, M. Tignor and
H.L. Miller (eds.), (2007). Contribution of Working Group I to the Fourth
Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge
University Press, Cambridge, United Kingdom and New York, NY, USA, 996 pp.
[13] Giorgi, F (2006). Climate change hot-spots Geophys. Res. Lett. 33, L08707,
doi:10.1029/ 2006GL025734
[14] Adger, W.N. et al. (2007). Summary for policy-makers. In Parry, M.L. Canziani, O.F.,
Palutikof, J.P., Hanson, C.E., van der Linden P.J., (eds.) Climate Change 2007:
Impacts, Adaptation and Vulnerability. Contribution of Working Group II to the
Fourth Assessment Report of the Intergovernmental Panel on Climate Change. pp.
7-22. Cambridge University Press: Cambridge.
[15] A. Bronstert (2004). Rainfall-runoff modelling for assessing impacts of climate and land-
use change, Hydrol. Process. 18, 567–570
doi: 10.1002/hyp.5500
[16] T. J. Toy, G. R. Foster, K. G. Renard (2002). Soil erosion: processes, prediction,
measurement, and control, John Wiley & Sons Inc., New York
[17] Dracup, JA et al. (1980). On the definition of droughts Wat. Resour. Res., 16 (2), 297-302,
doi:10.1029/WR016i002p00297
[18] Bronstert, A et al (2002). Effects of climate and land-use change on storm runoff
generation: present knowledge and modelling capabilities. Hydrological Processes
16(2): 509–529,
doi: 10.1002/hyp.326
[19] Niehoff, D et al. (2002). Land-use impacts on storm-runoff generation: scenarios of land-
use change and simulation of hydrological response in a meso-scale catchment in
SW-Germany J. of Hydrology 267(1–2): 80–93,
doi: 10.1016/S0022-1694(02)00142-7
[20] Pappenberger, F. et al. (2005). Cascading model uncertainty from medium range
weather forecasts (10 days) through a rainfall-runoff model to flood inundation
predictions within the European Flood Forecasting System (EFFS). Hydrol. Earth
Syst. Sci., 9, 381–393,
doi:10.5194/hess-9-381-2005
[21] Wilson, D. et al. (1999). A microphysically based precipitation scheme for the UK
Meteorological Office Unified Model. Q. J. R. Meteorol. Soc., 125, 1607–1636.
[22] Tompkins, A. M. (2002). A prognostic parameterization for the subgrid-scale variability
of water vapour and clouds in large-scale models and its use to diagnose cloud
cover J. Atmos. Sci., 59, 1917–1942.
[23] Jakob, C. (2003). An improved strategy for the evaluation of cloud parameterizations in
GCMs Bull. Amer. Meteorol. Soc., 84, 1387–1401,
doi: 10.1175/BAMS-84-10-1387
[24] Illingworth, A et al. (2007). Cloudnet-Continuous evaluation of cloud profiles in seven
operational models using ground-based observations. Bull. Am. Met. Soc, 88, 883-
898 ,
doi:10.1175/BAMS-88-6-883
[25] Tiedtke, M. (1993). Representation of Clouds in Large-Scale Models, Mon. Wea. Rev., 121, 3040-3061,
doi: 10.1175/1520-0493(1993)121
[26] Deidda, R. et al. (2006). Space-time multifractality of remotely sensed rainfall fields ,J. of
Hydrology, 322, 2–13,
doi:10.1016/j.jhydrol.2005.02.036
[27] Palmer, TN et al. (2005). Representing model uncertainty in weather and climate
prediction Annu. Rev. Earth Planet. Sci., 33:163–93,
doi: 10.1146/annurev.earth.33.092203.122552
[28] Palmer, TN (2002). The economic value of ensemble forecasts as a tool for risk
assessment: from days to decades. Q. J. R. Meteorol. Soc. 128:747–774,
doi: 10.1256/0035900021643593
[29] Palmer, TN et al. (2004). Development of a European multimodel ensemble system for seasonal-to-interannual prediction (DEMETER), Bull. Am. Met. Soc., 85, 853-872,
doi: 10.1175/bams-85-6-853
[30] Palmer, TN (2000). Predicting uncertainty in forecasts of weather and climate Rep. Prog.
Phys., 63 (2), 71–116,
doi: 10.1088/0034-4885/63/2/201
[31] Zhu, YJ et al. (2002). The economic value of ensemble-based weather forecasts Bull.
Amer. Meteor. Soc., 83, 73–83.
[32] Hagedorn, R. et al. (2005). The Rationale Behind the Success of Multi-model Ensembles
in Seasonal Forecasting - I. Basic Concept Tellus, 57A, 219-233.
[33] Lambert, S. J. and G. J. Boer (2001). CMIP1 evaluation and intercomparison of coupled
climate models Climate Dynamics, 17, 83-106,
doi: 10.1007/PL00013736
[34] de Roo, A. et al. (2003). Development of a European Flood Forecasting System Int. J.
River Basin Manage, 1, 49–59.
[35] Gouweleeuw, B. et al. (2005). Flood forecasting using probabilistic weather predictions.
Hydrol. Earth Syst. Sci., 9, 365–380.
[36] http://www.irriis.org
[37] http://www.diesis-project.eu/
[38] Usov, A. Beyel, C., Rome E., Beyer U., Castorini E., Palazzari P., Tofani A., The DIESIS
approach to semantically interoperable federated critical infrastructure simulation
Second International Conference on Advances in System Simulation (SIMUL), 22-27
August 2010, Nice, France. Los Alamitos, Calif. [u.a.]: IEEE Computer Society, 2010,
pp. 121-128 ISBN 9780769541426.
[39] Tofani, A. Castorini, E. Palazzari, P. Usov, A. Beyel, C. Rome, E. Servillo, P. An
ontological approach to simulate critical infrastructures, Journal of Computational
Science, 1 (4): 221-228, 2010
doi:10.1016/j.jocs.2010.08.001
[40] T. Rathnam, C.J.J. Paredis Developing Federation Object Models using Ontologies,
Proceedings of the 36th Winter Simulation Conference, 2004.
doi: http://doi.ieeecomputersociety.org/10.1109/WSC.2004.1371429
[41] R.K. McNally, S.W. Lee, D. Yavagal, W.N. Xiang Learning the critical infrastructure
interdependencies through an ontology-based information system, Environment and
Planning B: Planning and Design, 34, pp. 1103-1124, 2007
doi:10.1.1.132.5215
Part 6
1. Introduction
The basic purpose of aviation maintenance is to preserve, at the lowest cost, the original levels of aircraft airworthiness, safety and reliability, which is of great significance to the operation of air transport enterprises. According to statistics, aviation maintenance accounts for approximately 20% to 30% of direct operating costs, not including indirect costs caused by maintenance, such as flight delays, material and spare parts, and damage to corporate image.
In recent years, with the rapid development of the civil aviation industry, a large number of new technologies, especially digital information technologies, have been applied in airplanes, which increases the complexity of civil aviation maintenance. To respond to this new situation, adapt to the future trend of digital maintenance, and improve the efficiency and quality of modern aircraft maintenance, establishing a modern aircraft maintenance decision support system becomes increasingly urgent. For this reason, this chapter analyzes in depth the problem of how to build an aviation Maintenance Decision Support System (MDSS).
other real-time aircraft state parameter acquisition and monitoring systems, which make it possible to monitor aircraft state parameters during the whole flight. Specifically, the collected aircraft state parameters are input to the onboard online fault diagnosis system, which diagnoses aircraft faults using the established online fault diagnosis model. The result is transmitted in real time to the ground system through the air-ground data link (such as the Aircraft Communications Addressing and Reporting System (ACARS)), so as to provide decision support for ground maintenance and achieve rapid repair of aircraft failures, thus improving aircraft utilization efficiency.
(Figure: framework of the Aviation Maintenance Decision Support System, with modules for aviation maintenance information acquisition and processing, maintenance engineering management, maintenance technique supporting, air material management and control, and maintenance event evaluation, built on a real-time data acquisition and processing layer.)
2. Aircraft maintenance records depict the overall evolution of the aircraft and its systems. Maintenance record data can be used to uncover the deep relationships between system failure symptoms and the causes of malfunctions; they become particularly useful for learning the relationships between a system and its environment and the interaction chains between systems. This makes it possible to conduct more comprehensive and in-depth fault diagnosis on top of online fault diagnosis, so as to provide firm decision support for ground maintenance.
3. Air material is the physical material of aviation maintenance. To achieve the complex capabilities and high performance of modern aircraft, there are many types of aircraft systems, and different types of air material have different prices, different loss regularities, and different storage conditions (required temperature, humidity, storage time). An improper air material control strategy may therefore lead to a deficiency or an excess of air material. On the other hand, by making good use of historical air material consumption data, we can understand air material loss and establish an appropriate procurement strategy to maintain a reasonable level of air material reserve, improving air material utilization and the efficiency of the enterprise.
into the interactive aviation maintenance technical documentation system, giving direct access to a viable maintenance program.
In addition, based on the historical fault data stored in the system, trend analysis, curve fitting, time series analysis and other methods can be used to predict the trend of different types of faults; the predicted results can then be assessed together with the fault diagnosis to support failure diagnosis.
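As a simple illustration of such a trend-based prediction (synthetic monthly fault counts; the chapter does not prescribe a specific method):

```python
import numpy as np

# Hypothetical monthly counts of a given fault type from maintenance records.
months = np.arange(24)
fault_counts = np.array([3, 4, 2, 5, 4, 6, 5, 7, 6, 8, 7, 9,
                         8, 9, 11, 10, 12, 11, 13, 14, 13, 15, 16, 15])

# Least-squares linear trend; the fitted line is extrapolated one year ahead.
slope, intercept = np.polyfit(months, fault_counts, deg=1)
future_months = np.arange(24, 36)
predicted = slope * future_months + intercept
```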
(Figure: flow of the online/offline fault diagnosis process. Results of the online fault diagnosis model are evaluated for reasonableness; fault codes are classified (serious vs. not serious, based on MEL/MSG-3), added to fault code lists, stored and transmitted to the ground by ACARS, and delivered to maintenance staff via Wi-Fi PDA for offline fault diagnosis, using the offline fault diagnosis model library, and maintenance practice evaluation.)
documents, which are important tools and resources to support maintenance, for example the Minimum Equipment List (MEL), Operations Specifications (OS), Technology Policy and Procedures Manual (TPPM), Check Manual (TM), and Outline Reliability Manual (RPM). Traditionally, many of these technical documents use paper as the storage medium, or have been only partly digitized; a large number of technical documents are therefore difficult to query, not easy to carry and preserve, and difficult to update.
The interactive aviation maintenance technical manual integrates the advantages of computer, multimedia, database and network technologies to organize and manage the complex content of operating manuals and maintenance information according to the relevant standards, and to present the information required by maintenance staff accurately and in an optimized way. Its purpose is to accelerate the progress of aviation maintenance and enhance the effectiveness of maintenance.
for the information acquisition and processing module to obtain air material data, the regularities of air material loss and purchase can be learned using statistical analysis and data mining techniques, such as the loss regularity of repairable and unrepairable air materials and the repair regularity of maintained air materials. Through a rational control strategy, the corresponding loss model is then established, the air material inventory is optimized, and the quality of maintenance is improved.
In addition, air material inventory control should consider sharing among regional aviation maintenance units, especially for high-value items, where a lack of critical flight equipment, such as aircraft engines, can cause serious problems. Through modeling and analysis of historical data, a reasonable model for regional air material sharing can be constructed, contributing to the efficiency of the industry of the whole region.
relative diagnosis accuracy is not high. (2) The applicability of the offline fault diagnosis model. The offline fault diagnosis system has sufficient resources and time to examine the online diagnosis results in detail; it is therefore not critical with respect to diagnosis time, but its diagnosis accuracy must be high, so that reasonable maintenance tasks can be determined, maintenance can be implemented effectively, and maintenance costs reduced. (3) A reasonable air material control model. Air material involves various aspects, such as mechanical components and electronic components, and includes repair materials and maintenance consumables. A reasonable air material configuration not only meets the maintenance requirements, but also reduces the costs of procurement, storage, transportation, etc.; otherwise there may be a very negative impact on the airline's operations.
(1) Definition of the model input and output variables. The model input variables are the logic states of the fault symptoms, defined as
$X = [x_1, x_2, x_3, x_4, x_5, x_6, x_7]$,
with the meanings shown in Table 1. The model output variables are the degrees of confidence of the fault causes, defined as
$Y = [y_1, y_2, y_3, y_4, y_5]$,
with the meanings shown in Table 2.
Table 1. Meaning of the fault symptom variables:
x1 - Oil in nozzle at the bottom of the turbine
x2 - Oil at compressor entrance, fore tank bottom skin
x3 - Oil at airplane tail cone
x4 - White smoking at aft pressure reducer
x5 - Oil beads at centrifugal ventilator's vent
x6 - Ejector tube emitting purple smoke
x7 - Heavy oil consumption during flight
(Table 2 lists the meanings of the fault cause variables y1 to y5.)
$D_0 = [d_{01}, d_{02}, d_{03}, d_{04}, d_{05}, d_{06}, d_{07}]$ and $D_1 = [d_{11}, d_{12}, d_{13}, d_{14}, d_{15}, d_{16}, d_{17}]$,
Table 3. The membership grade relationship statistics between fault symptoms and fault
causes in the repair factory
Table 4. Expert database of membership grade relations between fault symptoms and fault
causes
According to the above flow, the adaptive FPN fault diagnosis model is set up as shown in Figure 6. Using MATLAB, a simulation was carried out with 18 fault symptom state vectors; the simulation results and the actual causes confirmed in maintenance are shown in Table 5, and the diagnostic accuracy rate is 100%. This shows that the reliability of the model diagnosis is relatively high. On the other hand, for items 7 and 10 there is some deviation between the output of the model diagnosis and the result of actual maintenance: because the threshold membership grade method is used to eliminate fuzziness, the diagnostic efficiency decreases slightly, but at the same time the impact of random factors, which could cause deviations in the diagnostic results, is avoided.
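The adaptive FPN model itself is not reproduced here; the following minimal sketch instead shows a plain fuzzy max-min inference step of the same flavour, mapping a symptom vector to cause confidences through a membership-grade matrix and a threshold (all membership values are made up):

```python
import numpy as np

# Hypothetical membership grades mu[j, k]: degree to which symptom x_k supports
# fault cause y_j (in the chapter these come from the repair-factory statistics
# and the expert database; the values below are invented for illustration).
mu = np.array([
    [0.9, 0.2, 0.1, 0.8, 0.1, 0.1, 0.7],   # y1
    [0.7, 0.8, 0.2, 0.1, 0.1, 0.2, 0.1],   # y2
    [0.1, 0.1, 0.9, 0.1, 0.2, 0.8, 0.6],   # y3
    [0.2, 0.7, 0.1, 0.6, 0.1, 0.1, 0.1],   # y4
    [0.1, 0.1, 0.1, 0.7, 0.1, 0.1, 0.8],   # y5
])

def diagnose(x, threshold=0.6):
    """Max-min fuzzy composition from the symptom vector x (0/1 logic states)
    to cause confidences, followed by threshold de-fuzzification."""
    confidence = np.max(np.minimum(mu, x), axis=1)      # max-min composition
    causes = [f"y{j + 1}" for j, c in enumerate(confidence) if c >= threshold]
    return confidence, causes

# Example: a symptom vector with x2 and x4 active.
conf, causes = diagnose(np.array([0, 1, 0, 1, 0, 0, 0]))
```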
(Figure 6. Adaptive FPN fault diagnosis model: input places d1-d7 for the fault symptoms x1-x7, successive layers of transitions t11-t17, t21-t27, t31-t35, t41 and t51-t57 with their membership grades, intermediate places x31-x50, and output places for the fault causes y1-y5.)
Table 5. Simulation results of the adaptive FPN diagnosis model and the actual causes confirmed in maintenance.

No. | Fault symptom vector X | Diagnosed cause | Actual cause
1 | [0 1 0 1 0 0 0] | y4 | y4
2 | [1 0 0 0 1 0 0] | y4 | y4
3 | [0 1 1 0 0 0 1] | y2 | y2
4 | [1 0 0 1 0 0 0] | y1 | y1
5 | [1 0 1 0 0 0 0] | y3 | y3
6 | [1 0 0 0 0 0 1] | y1 | y1
7 | [1 1 0 0 0 0 0] | y1, y2 | y2
8 | [0 0 1 0 0 1 0] | y3 | y3
9 | [1 0 1 0 0 0 1] | y1, y3 | y1, y3
10 | [0 1 0 1 0 1 0] | y1, y4 | y4
11 | [1 0 0 0 0 1 0] | y1 | y1
12 | [0 0 1 0 1 0 1] | y3 | y3
13 | [0 0 1 1 0 1 0] | y3 | y3
14 | [1 0 0 0 1 1 0] | y1, y3 | y1, y3
15 | [0 0 0 1 0 0 1] | y5 | y5
16 | [0 0 0 1 0 1 0] | y4 | y4
17 | [0 1 0 1 1 0 0] | y4 | y4
18 | [0 0 1 0 1 0 1] | y3 | y3
Using the theory of adaptive FPN, this section analyzed the causes of heavy oil consumption in a turbojet engine, set up a fault symptom / fault cause diagnostic model for heavy oil consumption, and gave an intuitive expression of the complex relationship between fault causes and fault symptoms. This has practical significance for aviation maintenance factories in quickly locating fault causes, improving maintenance efficiency and reducing maintenance costs. It also provides a new idea for the further development of aviation maintenance decision support systems.
2. The correlation of tire loss data between different days is weak. Thus, from an engineering point of view, the numbers of tires lost on different dates can be treated as independent.
3. As the computing cycle increases, the random distribution tends to a normal distribution, so applying engineering control theory to solve the optimal inventory problem appears appropriate.
Excessive tire inventory will increase storage costs, administrative costs, regular test costs, renovation costs, etc. On the other hand, a lack of tire inventory will make timely maintenance difficult and cause flight delays, with economic and social losses; thus, the optimal inventory level needs to be determined.
Considering the long tire repair period, we treat the tire inventory as a normally distributed process. Table 8 lists the assumptions and parameters of the optimal inventory analysis.
$$G_2 \sum_{k=1}^{5} k\,\frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(k-N_0)^2}{2\sigma^2}} \qquad\qquad (2)$$

Then the total cost is:

$$F = N_0 G_1 + G_2 \sum_{k=1}^{5} \frac{k}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(k-N_0)^2}{2\sigma^2}} \qquad\qquad (3)$$
As can be seen from the equation, when the average inventory increases, the first term increases and the second term decreases; when the average inventory decreases, the first term decreases while the second term increases. When the Mean Square Error (MSE) of the stock is constant, the optimal value of the average stock can be calculated. Table 9 gives the optimal average inventory for different values of the stock MSE.
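A numerical search for the optimal average inventory, using the cost function in the form reconstructed as Eq. (3) above and purely illustrative parameter values, might look like this sketch:

```python
import numpy as np

def total_cost(N0, G1, G2, sigma):
    """Total cost as a function of the average inventory N0: a holding term that
    grows with N0 plus a shortage term that shrinks as N0 increases (form as
    reconstructed in Eq. (3); parameter values are illustrative assumptions)."""
    k = np.arange(1, 6)
    shortage = G2 * np.sum(k / (np.sqrt(2 * np.pi) * sigma)
                           * np.exp(-(k - N0) ** 2 / (2 * sigma ** 2)))
    return N0 * G1 + shortage

# Scan candidate average-inventory levels and keep the cheapest one.
G1, G2, sigma = 1.0, 50.0, 3.0
candidates = np.arange(0, 31)
costs = np.array([total_cost(n, G1, G2, sigma) for n in candidates])
optimal_N0 = candidates[np.argmin(costs)]
```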
(Table 9: optimal average inventory for stock MSE values σ = 1 to 9.)
analysis and experimental data analysis, it is shown that the tire loss process can, for engineering purposes, be regarded as an independent Poisson process. When the period considered (in fact, the tire repair period) is large, the process can be treated as an independent normal random process. In addition, from the calculation of the optimal average inventory for different values of the stock MSE, it can be seen that controlling inventory by reducing the stock MSE can reduce the total cost.
Iran
1. Introduction
The prediction of rock sawability is important in the cost estimation and the planning of the
stone plants. An accurate estimation of rock sawability helps to make the planning of the
rock sawing projects more efficient. Rock sawability depends on non-controlled parameters
related to rock characteristics and controlled parameters related to properties of cutting
tools and equipment. In the same working conditions, the sawing process and its results are
strongly affected by mineralogical and mechanical properties of rock.
Up to now, many studies have been done on the relations between sawability and rock
characteristics in stone processing. Norling (1971) correlated sawability with petrographic
properties and concluded that grain size was more relevant to sawability than the quartz
content. Burgess (1978) proposed a regression model for sawability, which was based on
mineralogical composition, hardness, grain size and abrasion resistance. Vaya and Vikram
(1983) found a fairly good correlation between the Brinell hardness test and diamond
sawing rates. However, the variables involved were so many that they believed no
mathematical solution would be possible. They also considered the Specific Energy (SE)
concept, in conjunction with mineralogy, to give a better understanding of the sawing
responses of various rock types. Ertingshausen (1985) investigated the power requirements
during cutting of Colombo Red granite in up-cutting and down-cutting modes. He found
out that the required power was less for the up-cutting mode when the cutting depth was
below 20–25 mm. For deeper cuts, however, the power consumption was less for the down-
cutting mode. Wright and Cassapi (1985) tried to correlate the petrographic analysis and
physical properties with sawing results. The research indicated cutting forces to have the
closest correlation. Birle et al. (1986) presented similar work in 1986, but again considered
only blade life as the criterion on which a ranking system should be based. Hausberger
(1989) concluded that an actual sawing test was the most reliable method for determining
the machinability of a rock type. He observed that the higher proportion of minerals with
well defined cleavage planes helps the cutting to be easier. Jennings and Wright (1989) gave
an overall assessment of the major factors which affect saw blade performance. They found
out that hard materials usually require a smaller size diamond than do softer stones because
the load per particle is not sufficiently high and greater clearance is required for swarf.
Conversely, if large diamond grits are used on hard materials, the penetration of the
diamond is limited, and normally either excessive grit pull-out will occur or large wear flats
will appear on the diamond particles. Unver (1996) developed empirical equations for the
estimation of specific wear and cutting force in the sawing of granites. He used mean quartz
grain size, NCB cone indenter hardness number, and mean plagioclase grain size in his
equations. Clausen et al. (1996) carried out a study on the acoustic emission during single
diamond scratching of granite and suggested that acoustic emission could be used in
sawability classification of natural stones. They also concluded that the cutting process is
affected by the properties and frequency of minerals, grain size and degree of interlocking.
Tonshoff and Asche (1997) discussed the macroscopic and microscopic methods of
investigating saw blade segment wear. Luo (1997) investigated the worn surfaces of
diamond segments in circular saws for the sawing of hard and relatively soft granites. He
found out that for the sawing of hard granite, the worn particles were mainly of the macro-
fractured crystal and/or pull-out hole type. Ceylanoglu and Gorgulu (1997) correlated
specific cutting energy and slab production with rock properties and found good
correlations between them. Webb and Jackson (1998) showed that a good correlation could
be obtained between saw blade wear performance and the ratio of normal to tangential
cutting forces during the cutting of granite. Xu (1999) investigated the friction characteristics
of the sawing process of granites with diamond segmented saw blade. The results of the
experimental studies indicated that most of the sawing energy is expended by friction of
sliding between diamonds and granites. Xipeng et al. (2001) found that about 30 percent of
the sawing energy might be due to the interaction of the swarf with the applied fluid and
bond matrix. Most of the energy for sawing and grinding is attributed to ductile ploughing.
Brook (2002) developed a new index test, called Brook hardness, which has been specifically
developed for sliding diamond indenters. The consumed energy is predictable from this
new index test. Konstanty (2002) presented a theoretical model of natural stone sawing by
means of diamond-impregnated tools. In the model, the chip formation and removal process
are quantified with the intention of assisting both the toolmaker and the stonemason in
optimising the tool composition and sawing process parameters, respectively.
Li et al. (2002) proposed a new machining method applicable to granite materials to achieve
improved cost effectiveness. They emphasized the importance of the tribological
interactions that occur at the interface between the diamond tool surface and the workpiece.
Accordingly, they proposed that the energy expended by friction and mechanical load on
the diamond crystal should be balanced to optimize the saw blade performance. Xu et al.
(2003) conducted an experimental study on the sawing of two kinds of granites with a
diamond segmented saw blade. The results of their study indicated that the wear of
diamond grits could also be related to the high temperatures generated at individual cutting
points, and the pop-out of diamonds from the matrix could be attributed to the heat
conducted to saw blade segments. Ilio and Togna (2003) proposed a theoretical model for
the interpretation of saw blade wear process. The model is based on the experimentally
determined matrix characteristics and grain characteristics. The model indicates that a
suitable matrix material must not only provide the necessary grain support in the segment,
but it should also wear at an appropriate rate in order to maintain constant efficiency in
cutting. Eyuboglu et al. (2003) investigated the relationship between blade wear and the
sawability of andesitic rocks. In their study, a multiple linear regression analysis was carried
out to derive a prediction equation of the blade wear rate. They showed that the wear rate of
andesite could be predicted from the statistical model by using a number of stone
properties. The model indicated the Shore scleroscope hardness as the most important rock
property affecting wear rate. Xu et al. (2003) carried out an experimental study to investigate
the characteristics of the force ratio in the sawing of granites with a diamond segmented
blade. In the experiments, in order to determine the tangential and the normal force
components, horizontal and vertical force components and the consumed power were
measured. It was found out that the force components and their ratios did not differ much
for different granites, in spite of the big differences in sawing difficulty. Gunaydin et al.
(2004) investigated the correlations between sawability and different brittleness using
regression analysis. They concluded that sawability of carbonate rocks can be predicted
from the rock brittleness, which is half the product of compressive strength and tensile
strength. Ersoy et al. (2004, 2005) experimentally studied the performance and wear
characteristics of circular diamond saws in cutting different types of rocks. They derived a
statistical predictive model for the saw blade wear where specific cutting energy, silica
content, bending strength, and Schmidt rebound hardness were the input parameters of the
model. An experimental study was carried out by Xipeng and Yiging (2005) to evaluate the
sawing performance of Ti–Cr coated diamonds. The sawing performances of the specimens
were evaluated in terms of their wear performances during the sawing of granite. It was
concluded that the wear performance of the specimens with coated diamonds were
improved, as compared with uncoated diamonds. Delgado et al. (2005) experimentally
studied the relationship between the sawability of granite and its micro-hardness. In their
study, sawing rate was chosen as the sawability criterion, and the micro-hardness of granite
was calculated from mineral Vickers micro-hardness. Experimental results indicated that the
use of Vickers hardness microindentor could provide more precise information in sawability
studies. Mikaeil et al. (2008a and 2008b) developed a new statistical model for predicting the
production rate of carbonate rocks based on uniaxial compressive strength and equivalent
quartz content. Additionally, they investigated the sawability of some important Iranian stones.
Yousefi et al. (2010) studied the factors affecting the sawability of ornamental stones.
Especially, among the previous studies some researchers have developed a number of
classification systems for ranking the sawability of rocks. Wei et al. (2003) evaluated and
classified the sawability of granites by means of the fuzzy ranking system. In their study,
wear performance of the blade and the cutting force were used as the sawability criteria.
They concluded that with the fuzzy ranking system, by using only the tested petrographic
and mechanical properties, a convenient selection of a suitable saw blade could be made for
a new granite type. Similarly, Tutmez et al. (2007) developed a new fuzzy classification of
carbonate rocks based on rock characteristics such as uniaxial compressive strength, tensile
strength, Schmidt hammer value, point load strength, impact strength, Los Angeles abrasion
loss and P-wave velocity. By this fuzzy approach, marbles used by factories were ranked into
three linguistic qualitative categories: excellent, good and poor. Kahraman et al. (2007)
developed a quality classification of building stones from P-wave velocity and its
application to stone cutting with gang saws. They concluded that the quality classification
and estimation of slab production efficiency of the building stones can be made by
ultrasonic measurements.
The performance of any stone factory is affected by the complex interaction of numerous
factors. These factors that affect the production cost can be classified as energy, labour,
water, diamond saw and polishing pads, filling material and packing. Among the above
factors, energy is one of the most important. The aim of this chapter is to develop a new
hierarchy model for evaluating and ranking the power consumption of carbonate rocks in the
sawing process. With this model, carbonate rocks are ranked with respect to their power
consumption. The model can be used as a decision-making index for cost analysis and project
planning. To make a sound decision on the power consumption of a carbonate rock, all known
criteria related to the problem should be analyzed. Although increasing the number of related
criteria makes the problem more complicated and a solution harder to reach, it may also
increase the correctness of the resulting decision. Because of this complexity in the decision
process, many conventional methods can only consider a limited number of criteria and are
therefore generally deficient. It is thus essential to assess all of the known criteria connected to
power consumption within a combined decision-making process.
The major aim of this chapter is to compare the many different factors in the power
consumption of the carbonate rock. The comparison has been performed with the
combination of the Analytic Hierarchy Process (AHP) and Fuzzy Delphi method and also
the use of TOPSIS method. The analysis is one of the multi-criteria techniques that provide
useful support in the choice among several alternatives with different objectives and criteria.
FDAHP method has been used in determining the weights of the criteria by decision makers
and then ranking the power consumption of the rocks has been determined by TOPSIS
method. The study was supported by results that were obtained from a questionnaire
carried out to know the opinions of the experts in this subject.
This chapter is organized as follows. The second section briefly reviews the concepts of fuzzy
sets and fuzzy numbers. The third section illustrates the FDAHP method and its methodology.
The fourth section surveys the TOPSIS method. In the fifth section, after an explanation of the
parameters affecting power consumption, the FDAHP method is applied to determine the
weights of the criteria given by the experts; the ranking of the power consumption of the
carbonate rocks is then carried out with the TOPSIS method. Finally, the sixth and seventh
sections review the results of the application, discuss them and conclude the chapter. To the
authors’ knowledge, ranking power consumption using FDAHP-TOPSIS has not been
attempted before.
Fig. 1. A triangular fuzzy number, M
Each TFN has linear representations on its left and right side such that its membership
function can be defined as
μM(x) = 0,                      x < l,
μM(x) = (x - l) / (m - l),      l ≤ x ≤ m,
μM(x) = (u - x) / (u - m),      m ≤ x ≤ u,
μM(x) = 0,                      x > u.        (1)

A fuzzy number can always be given by its corresponding left and right representation of
each degree of membership:

M = (M l(y), M r(y)) = (l + (m - l)y, u + (m - u)y),   y ∈ [0, 1],    (2)

where l(y) and r(y) denote the left side representation and the right side representation of
a fuzzy number, respectively. Many ranking methods for fuzzy numbers have been
developed in the literature. These methods may give different ranking results and most
methods are tedious in graphic manipulation requiring complex mathematical calculation.
The algebraic operations with fuzzy numbers have been explained by Kahraman (2001) and
Kahraman et al. (2002).
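As a minimal sketch of these definitions, the membership function of Eq. (1) and the left/right representation of Eq. (2) can be coded directly; the example TFN (2, 3, 5) used below is hypothetical.

```python
def tfn_membership(x, l, m, u):
    """Membership value of x for the triangular fuzzy number M = (l, m, u), Eq. (1)."""
    if x <= l or x >= u:
        return 0.0
    if x <= m:
        return (x - l) / (m - l)
    return (u - x) / (u - m)

def tfn_cut(y, l, m, u):
    """Left/right representation of Eq. (2) for a membership degree y in [0, 1]."""
    return l + (m - l) * y, u + (m - u) * y

# Example: the fuzzy number "about 3" represented as (2, 3, 5).
print(tfn_membership(2.5, 2, 3, 5))   # 0.5
print(tfn_cut(0.5, 2, 3, 5))          # (2.5, 4.0)
```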
ãij = (αij, δij, γij)    (3)

αij = min{βijk, k = 1, ..., n}    (4)

δij = (βij1 · βij2 · ... · βijn)^(1/n),   k = 1, ..., n    (5)

γij = max{βijk, k = 1, ..., n}    (6)

where αij ≤ δij ≤ γij and αij, δij, γij ∈ [1/9, 1] ∪ [1, 9]; αij indicates the lower bound and γij
indicates the upper bound; βijk indicates the relative intensity of importance given by expert k
between activities i and j; and n is the number of experts in the group.
(2) Following the procedure outlined above, we obtain a fuzzy positive reciprocal matrix

Ã = [ãij],   ãij ⊗ ãji ≈ 1,   i, j = 1, 2, ..., n

or, for three criteria,

      | (1, 1, 1)                 (α12, δ12, γ12)            (α13, δ13, γ13) |
Ã =   | (1/γ12, 1/δ12, 1/α12)     (1, 1, 1)                  (α23, δ23, γ23) |    (7)
      | (1/γ13, 1/δ13, 1/α13)     (1/γ23, 1/δ23, 1/α23)      (1, 1, 1)       |
4. TOPSIS method
TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) is one of the useful
MADM techniques to manage real-world problems (Yoon & Hwang, 1985). TOPSIS method
was firstly proposed by Hwang & Yoon (1981). According to this technique, the best
alternative would be the one that is nearest to the positive ideal solution and farthest from
the negative ideal solution (Benitez, et al., 2007). The positive ideal solution is a solution that
maximizes the benefit criteria and minimizes the cost criteria, whereas the negative ideal
solution maximizes the cost criteria and minimizes the benefit criteria (Wang & Elhag, 2006).
In short, the positive ideal solution is composed of all best values attainable of criteria,
whereas the negative ideal solution consists of all worst values attainable of criteria (Wang,
2008). In this paper TOPSIS method is used for determining the final ranking of the
sawability of rocks. TOPSIS method is performed in the following steps:
Step 1. The decision matrix is normalized via Eq. (9):

rij = wij / ( Σ_{j=1}^{J} wij² )^(1/2),   j = 1, 2, 3, ..., J,   i = 1, 2, 3, ..., n    (9)

Step 2. The weighted normalized decision matrix is built by multiplying each normalized
value by the weight of the corresponding criterion:

vij = wi · rij    (10)

Step 3. The positive ideal solution (PIS) and the negative ideal solution (NIS) are determined:

A* = {v1*, v2*, ..., vi*, ..., vn*}   (maximum values)    (11)

A− = {v1−, v2−, ..., vi−, ..., vn−}   (minimum values)    (12)

Step 4. The distance of each alternative from PIS and NIS is calculated:

dj* = ( Σ_{i=1}^{n} (vij - vi*)² )^(1/2)    (13)

dj− = ( Σ_{i=1}^{n} (vij - vi−)² )^(1/2)    (14)
The closeness coefficient of each alternative is then calculated:

CCj = dj− / (dj* + dj−)    (15)
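The steps above translate almost line by line into code. The sketch below assumes benefit-type criteria only (for cost criteria the roles of maximum and minimum in Eqs. (11) and (12) would be swapped) and uses a small hypothetical data set:

```python
import numpy as np

def topsis(decision_matrix, weights):
    """Rank alternatives with TOPSIS as outlined in Eqs. (9)-(15).

    decision_matrix: rows = alternatives, columns = criteria (all benefit-type here).
    weights: criterion weights summing to 1.
    """
    x = np.asarray(decision_matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    r = x / np.sqrt((x ** 2).sum(axis=0))            # Eq. (9): vector normalization
    v = r * w                                        # Eq. (10): weighted normalized matrix
    pis, nis = v.max(axis=0), v.min(axis=0)          # Eqs. (11)-(12): ideal solutions
    d_pos = np.sqrt(((v - pis) ** 2).sum(axis=1))    # Eq. (13)
    d_neg = np.sqrt(((v - nis) ** 2).sum(axis=1))    # Eq. (14)
    return d_neg / (d_pos + d_neg)                   # Eq. (15): closeness coefficient

# Hypothetical example: 3 alternatives evaluated on 2 criteria.
scores = topsis([[7, 9], [8, 7], [6, 8]], [0.6, 0.4])
print(scores, scores.argsort()[::-1])                # higher closeness = better rank
```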
(criteria list, continued: texture, weathering, hardness (Mohs hardness and Schmidt hammer), and Young’s modulus)
The weighting factors for each criterion were computed in the following steps:
1. Compute the triangular fuzzy numbers (TFNs) ãij = (αij, δij, γij) according to Eq. (4) – Eq. (6).
C1 C2 C3 C4 C5 C6
C1 (1, 1, 1) (0.7, 1, 1.4) (0.7, 1.2, 2.3) (0.6, 0.7, 1) (0.6, 0.7, 1) (0.6, 0.8, 1)
C2 (0.7, 1, 1.4) (1, 1, 1) (0.7, 1.2, 1.7) (0.7, 0.7, 0.8) (0.6, 0.7, 1) (0.7, 0.8, 1)
C3 (0.4, 0.8, 1.4) (0.6, 0.8, 1.4) (1, 1, 1) (0.4, 0.6, 1) (0.3, 0.6, 0.8) (0.4, 0.6, 1)
C4 (1, 1.4, 1.8) (1.3, 1.4, 1.4) (1, 1.7, 2.3) (1, 1, 1) (0.8, 1, 1.3) (1, 1.1, 1.3)
C5 (1, 1.4, 1.8) (1, 1.4, 1.8) (1.3, 1.8, 3) (0.8, 1.1, 1.3) (1, 1, 1) (0.8, 1.1, 1.3)
C6 (1, 1.3, 1.8) (1, 1.3, 1.4) (1, 1.6, 2.3) (0.8, 1, 1) (0.8, 0.9, 1.3) (1, 1, 1)
C7 (0.4, 0.7, 1.4) (0.6, 0.7, 1) (0.4, 0.8, 1) (0.4, 0.5, 0.8) (0.3, 0.5, 0.8) (0.4, 0.5, 1)
C8 (0.4, 0.7, 1) (0.6, 0.7, 1) (0.7, 0.9, 1) (0.4, 0.5, 0.7) (0.3, 0.5, 0.7) (0.4, 0.6, 0.7)
C9 (0.4, 0.8, 1.4) (0.4, 0.8, 1.4) (0.6, 1, 1.7) (0.3, 0.6, 0.8) (0.3, 0.6, 0.8) (0.3, 0.6, 1)
C10 (1.3, 1.5, 1.8) (1, 1.5, 1.8) (1, 1.8, 3) (0.8, 1.1, 1.3) (0.8, 1.1, 1.3) (1, 1.2, 1.3)
C11 (1, 1.1, 1.4) (0.7, 1.1, 1.4) (0.7, 1.4, 2.3) (0.6, 0.9, 1) (0.6, 0.8, 1) (0.7, 0.9, 1)
C12 (0.7, 1.1, 1.4) (0.7, 1.1, 1.4) (1, 1.3, 2.3) (0.6, 0.8, 1) (0.7, 0.8, 0.8) (0.6, 0.8, 1)
Z2 = (ã21 ⊗ ã22 ⊗ ... ⊗ ã2,12)^(1/12) = [0.7451, 0.968, 1.2989]

Z3 = (ã31 ⊗ ã32 ⊗ ... ⊗ ã3,12)^(1/12) = [0.5373, 0.7861, 1.2268]

Z4 = (ã41 ⊗ ã42 ⊗ ... ⊗ ã4,12)^(1/12) = [1.0284, 1.3098, 1.7181]

Z5 = (ã51 ⊗ ã52 ⊗ ... ⊗ ã5,12)^(1/12) = [1.0502, 1.3773, 1.8295]

Z6 = (ã61 ⊗ ã62 ⊗ ... ⊗ ã6,12)^(1/12) = [0.9, 1.1823, 1.5363]

Σ Zi = Z1 ⊕ Z2 ⊕ ... ⊕ Z12 = [8.9356, 12.327, 16.919]

W1 = Z1 ⊗ (Z1 ⊕ Z2 ⊕ ... ⊕ Z12)^(-1) = [0.0411, 0.0785, 0.1537]
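The row-wise fuzzy geometric means Zi and the normalised fuzzy weights Wi can be computed as sketched below. The component-wise fuzzy arithmetic, including the bound reversal when dividing by a TFN, follows the standard rules cited above, and the small 2x2 example matrix is purely illustrative:

```python
import numpy as np

def fuzzy_geometric_weights(tfn_matrix):
    """Row-wise fuzzy geometric means Z_i and normalised fuzzy weights W_i.

    tfn_matrix: array of shape (n, n, 3) holding the (lower, middle, upper)
    values of each pairwise comparison.
    """
    a = np.asarray(tfn_matrix, dtype=float)
    n = a.shape[0]
    z = np.prod(a, axis=1) ** (1.0 / n)        # component-wise geometric mean per row
    total = z.sum(axis=0)                      # fuzzy sum of all Z_i (component-wise)
    # Division by a TFN reverses lower/upper bounds: W_i = Z_i (x) (sum Z)^-1
    w = np.column_stack([z[:, 0] / total[2], z[:, 1] / total[1], z[:, 2] / total[0]])
    return z, w

# Tiny hypothetical 2x2 example (values are illustrative, not from the table above).
m = [[[1, 1, 1], [0.7, 1.0, 1.4]],
     [[1 / 1.4, 1.0, 1 / 0.7], [1, 1, 1]]]
print(fuzzy_geometric_weights(m)[1])
```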
of rocks are the constitutive minerals and their spatial positions, weathering or alteration
rate, micro-cracks and internal fractures, density and porosity (Hoseinie et al. 2009).
Therefore, the uniaxial compressive strength test can be considered representative of rock
strength, density, weathering, texture and matrix type. Thus, the summation of the weights
of five parameters (texture, weathering, density, matrix type and UCS) is taken as the
weight of UCS. In total, the weight of UCS is about 0.372.
F = (EQC · Gs · BTS) / 100    (16)
where F is Schimazek’s wear factor (N/mm), EQC is the equivalent quartz content
percentage, Gs is the median grain size (mm), and BTS is the indirect Brazilian tensile
strength. Regarding the rock parameters used in the questionnaires, the summation of the
weights of abrasiveness, grain size, tensile strength and equivalent quartz content is taken as
the weight of Schimazek’s F-abrasiveness factor. In total, the weight of this factor is 0.3687.
each rock type from the same piece. Each block sample was inspected for macroscopic
defects so that it would provide test specimens free from fractures, partings or alteration
zones. Then, test samples were prepared from these block samples and standard tests have
been completed to measure the above-mentioned parameters following the suggested
procedures by the ISRM standards (ISRM, 1981). The results of laboratory studies are listed
in Table 5 and used in the next stage.
The final weights of the major parameters are: UCS (w1) = 0.372, SF-a (w2) = 0.3687, MH (w3) = 0.1745 and YM (w4) = 0.0847 (the figure also reports individual criterion weights, e.g. W = 0.0568, A = 0.0958, UCS = 0.1136).
Fig. 4. The final weights of major parameters in power consumption ranking system
A* = {0.1547, 0.1863, 0.0393, 0.0748}

A− = {0.1101, 0.1130, 0.0229, 0.0602}
Then the distances of each rock from the PIS (positive ideal solution) and the NIS (negative
ideal solution) with respect to each criterion are calculated with the help of Eqs. (13) and (14).
The closeness coefficient of each rock is then calculated using Eq. (15), and the ranking of the
rocks is determined according to these values.
Rock   C1 (UCS)   C2 (SF-a)   C3 (YM)   C4 (MH)
1 71.5 0.135 32.5 3.5
2 74.5 0.109 33.6 3.2
3 53 0.122 20.7 2.9
4 61.5 0.124 21 2.9
5 63 0.127 23.5 2.95
6 73 0.105 31.6 3.1
7 74.5 0.173 35.5 3.6
Table 6. Decision matrix
Rock   C1 (UCS)   C2 (SF-a)   C3 (YM)   C4 (MH)
1 0.3991 0.3937 0.4243 0.4166
2 0.4158 0.3176 0.4387 0.3809
3 0.2958 0.3556 0.2703 0.3452
4 0.3432 0.3619 0.2742 0.3452
5 0.3516 0.3709 0.3068 0.3511
6 0.4074 0.3065 0.4126 0.3690
7 0.4158 0.5052 0.4635 0.4285
Table 7. Normalized decision matrix
Rock   C1 (UCS)   C2 (SF-a)   C3 (YM)   C4 (MH)
1 0.1485 0.1452 0.0360 0.0727
2 0.1547 0.1171 0.0372 0.0665
3 0.1101 0.1311 0.0229 0.0602
4 0.1277 0.1334 0.0232 0.0602
5 0.1308 0.1368 0.0260 0.0613
6 0.1516 0.1130 0.0350 0.0644
7 0.1547 0.1863 0.0393 0.0748
Table 8. Weighted normalized matrix
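As a quick cross-check of Tables 6-8 and of the ideal solutions A* and A− quoted above, the normalized and weighted normalized matrices can be recomputed from Table 6 and the FDAHP weights; the sketch below simply reproduces those values (rounding follows the tables):

```python
import numpy as np

# Decision matrix from Table 6 (rows: rocks 1-7; columns: UCS, SF-a, YM, MH).
X = np.array([[71.5, 0.135, 32.5, 3.50],
              [74.5, 0.109, 33.6, 3.20],
              [53.0, 0.122, 20.7, 2.90],
              [61.5, 0.124, 21.0, 2.90],
              [63.0, 0.127, 23.5, 2.95],
              [73.0, 0.105, 31.6, 3.10],
              [74.5, 0.173, 35.5, 3.60]])
w = np.array([0.372, 0.3687, 0.0847, 0.1745])   # FDAHP weights (UCS, SF-a, YM, MH)

R = X / np.sqrt((X ** 2).sum(axis=0))   # normalized matrix, cf. Table 7
V = R * w                               # weighted normalized matrix, cf. Table 8
print(np.round(V, 4))
print("PIS:", np.round(V.max(axis=0), 4), "NIS:", np.round(V.min(axis=0), 4))
```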
The power consumption ranking of the carbonate rocks is also shown in Table 9 in
descending order of priority.
(Figure: measured power consumption, 0 to 10,000 W, for the studied rocks MHAF, MHAR, MANA, MSAL, TDAR, THAJ and TGH.)
7. Conclusion
In this chapter, a decision support system was developed for ranking the power
consumption of carbonate rocks. The system was designed to eliminate the difficulty of
considering many decision criteria simultaneously in the rock sawing process and to guide
decision makers in ranking the power consumption of carbonate rocks. In this study, the
FDAHP and TOPSIS methods were used to determine the power consumption degree of the
carbonate rocks: FDAHP was utilized to determine the weights of the criteria, and the
TOPSIS method was used to determine the ranking of the power consumption of the rocks.
During this research, sawing tests were carried out on a fully-instrumented laboratory
sawing rig at different feed rates for two groups of carbonate rocks. The measured power
consumptions were used to verify the result of the applied approach for ranking the rocks by
sawability criteria. The experimental results confirm the new ranking results.
This new ranking method may be used for evaluating the power consumption of carbonate
rocks at any stone factory working with different carbonate rocks. Factors such as uniaxial
compressive strength, Schimazek’s F-abrasivity, Mohs hardness and Young's modulus must
be obtained for the best power consumption ranking.
8. Acknowledgment
The authors would like to thank the educational workshop centre of Sharif University of
Technology and the central laboratory of Shahrood University of Technology for providing
the facilities to perform this original research.
9. References
Ataei, M., (2005). Multi-criteria selection for alumina-cement plant location in East-
Azerbaijan province of Iran. The Journal of the South African Institute of Mining and
Metallurgy, 105(7), pp. 507–514.
Atici U, Ersoy A., (2009). Correlation of specific energy of cutting saws and drilling bits with
rock brittleness and destruction energy. journal of materials processing technology, 209,
pp. 2602–2612.
Ayag, Z., Ozdemir, R. G., (2006). A fuzzy AHP approach to evaluating machine tool
alternatives. Journal of Intelligent Manufacturing, 17, pp. 179–190.
Basligil, H., (2005). The fuzzy analytic hierarchy process for software selection problems.
Journal of Engineering and Natural Sciences, 2, pp. 24–33.
Bieniawski ZT (1989). Engineering rock mass classifications. New York: Wiley.
Benitez, J. M., Martin, J. C., Roman, C., (2007). Using fuzzy number for measuring quality of
service in the hotel industry. Tourism Management, 28(2), pp. 544–555.
Bentivegna, V., Mondini, G., Nati Poltri, F., Pii, R., (1994). Complex evaluation methods. An
operative synthesis on Multicriteria techniques. In: Proceedings of the IV
International Conference on Engineering Management, Melbourne.
Birle J. D., Ratterman E., (1986). An approximate ranking of the sawability of hard building
stones based on laboratory tests, Dimensional Stone Magazine, July-August, pp. 3-29.
Bojadziev, G., Bojadziev, M., (1998). Fuzzy sets fuzzy logic applications. Singapore: World
Scientific Publishing.
Bozdag, C. E., Kahraman, C., Ruan, D., (2003). Fuzzy group decision making for selection
among computer integrated manufacturing systems. Computer in Industry, 51, pp. 13–
29.
Brook, B. (2002). Principles of diamond tool technology for sawing rock. Int. J. Rock Mech.
Min. Sci. 39, pp 41-58.
Buckley, J. J., (1985). Fuzzy hierarchical analysis. Fuzzy Sets and Systems, 17, pp. 233–247.
Burgess R. B., (1978). Circular sawing granite with diamond saw blades. In: Proceedings of
the Fifth Industrial Diamond Seminar, pp. 3-10.
Buyuksagis I.S, (2007). Effect of cutting mode on the sawability of granites using segmented
circular diamond sawblade. Journal of Materials Processing Technology, 183, pp. 399–
406.
Ceylanoglu, A. Gorgulu, K. (1997). The performance measurement results of stone cutting
machines and their relations with some material properties. In: V. Strakos, V. Kebo,
Ertugrul, I., Tus, A., (2007). Interactive fuzzy linear programming and an application sample
at a textile firm. Fuzzy Optimization and Decision Making, 6, pp. 29-49.
Ertugrul, I., Karakasoglu, N., (2009). Performance evaluation of Turkish cement firms with fuzzy
analytic hierarchy process and TOPSIS methods, Expert Systems with Applications, 36 (1),
pp. 702–715.
Eyuboglu A.S, Ozcelik Y, Kulaksiz S, Engin I.C, (2003). Statistical and microscopic
investigation of disc segment wear related to sawing Ankara andesites, International
Journal of Rock Mechanics & Mining Sciences, 40, pp. 405–414.
Fener M, Kahraman S, & Ozder M.O; (2007). Performance Prediction of Circular Diamond
Saws from Mechanical Rock Properties in Cutting Carbonate Rocks, International
Journal of Rock Mechanics & Rock Engineering, 40 (5), pp. 505–517.
Gu, X., Zhu, Q., (2006). Fuzzy multi-attribute decision making method based on eigenvector
of fuzzy attribute evaluation space. Decision Support Systems, 41, pp. 400–410.
Gunaydin O., Kahraman S., Fener M., (2004). Sawability prediction of carbonate rocks from
brittleness indexes. The Journal of the South African Institute of Mining and Metallurgy,
104, pp. 239-244.
Haq, A. N., Kannan, G., (2006). Fuzzy analytical hierarchy process for evaluating and
selecting a vendor in a supply chain model. International Journal of Advanced
Manufacturing, 29, pp. 826–835.
Hartman H. L., (1992). SME Mining Engineering Handbook. Society for Mining, Metallurgy,
and Exploration, Inc. Littleton, Colorado, 2nd Edition, Vol. 1.
Hartman, H. L., Mutmansky, J. M., (2002). Introductory Mining Engineering. John Wiley &
Sons, New Jersey.
Hoseinie S.H., Aghababaei H., Pourrahimian Y. (2008). Development of a new classification
system for assessing of rock mass drillability index (RDi). International Journal of
Rock Mechanics & Mining Sciences, 45, pp. 1–10.
Hoseinie S.H., Ataei M., Osanloo M. (2009). A new classification system for evaluating rock
penetrability. International Journal of Rock Mechanics & Mining Sciences. 46, pp. 1329–
1340.
Hsieh, T. Y., Lu, S. T., Tzeng, G. H., (2004). Fuzzy MCDM approach for planning and design
tenders selection in public office buildings. International Journal of Project
Management, 22, pp. 573–584.
Hausberger P. (1989). Causes of the different behaviour of rocks when machined with
diamond tools. Industrial Diamond Review, 3, pp. 1-25.
Hwang, C. L., Yoon, K., (1981). Multiple attributes decision making methods and
applications. Berlin: Springer.
Hwang, C.L., Lin, M.L., (1987). Group Decision Making under Multiple Criteria, Springer-
Verlag, Berlin Heidelberg, New York.
Ilio, A.D. Togna, A. (2003). A theoretical wear model for diamond tools in stone cutting, Int.
J. Mach. Tools Manuf. 43, pp. 1171–1177.
International Society for Rock Mechanics, (1981). Rock characterisation, testing and
monitoring: ISRM suggested methods. Oxford: Pergamon.
Ishikawa, A., Amagasa, T., Tamizawa, G., Totsuta, R., Mieno, H., 1993. The Max-Min Delphi
Method and Fuzzy Delphi Method via Fuzzy Integration, Fuzzy Sets and Systems, Vol.
55.
Jennings M., Wright D., (1989). Guidelines for sawing stone, Industrial Diamond Review, 49, pp.
70–75.
Kahraman, C., (2001). Capital budgeting techniques using discounted fuzzy cash flows. In:
Ruan, D., Kacprzyk, J., Fedrizzi, M. (Eds.), Soft Computing for Risk Evaluation and
Management: Applications in Technology, Environment and Finance, Physica-Verlag,
Heidelberg, pp. 375–396.
Kahraman, C., Ruan, D., Tolga, E., (2002). Capital budgeting techniques using discounted
fuzzy versus probabilistic cash flows. Information Sciences, 42 (1–4), pp. 57–76.
Kahraman, C., Ruan, D., & Dogan, I., (2003). Fuzzy group decision making for facility
location selection. Information Sciences, 157, pp. 135–153.
Kahraman, C., Cebeci, U., Ulukan, Z., (2003). Multi-criteria supplier selection using fuzzy
AHP. Logistics Information Management, 16 (6), pp. 382–394.
Kahraman, C., Cebeci, U., Ruan, D., (2004). Multi-attribute comparison of catering service
companies using fuzzy AHP: The case of Turkey. International Journal of Production
Economics, 87, pp. 171–184.
Kahraman S, Fener M, Gunaydin O. (2004). Predicting the sawability of carbonate rocks
using multiple curvilinear regression analysis. International Journal of Rock Mechanics
& Mining Sciences, 41; pp. 1123–1131.
Kahraman S, Altun H, Tezekici B.S., Fener M, (2005). Sawability prediction of carbonate
rocks from shear strength parameters using artificial neural networks. International
Journal of Rock Mechanics & Mining Sciences, 43(1), pp 157–164.
Kahraman S., Ulker U., Delibalta S., (2007). A quality classification of building stones from P-
wave velocity and its application to stone cutting with gang saws. The Journal of the
South African Institute of Mining and Metallurgy, 107, pp 427-430.
Kaufmann, A., Gupta, M.M., (1988). Fuzzy Mathematical Models in Engineering and
Management Science. North-Holland, Amsterdam.
Konstanty, J. (2002). Theoretical analysis of stone sawing with diamonds. J. Mater. Process
Technol. 123, pp. 146-54.
Lai, Y. J., Hwang, C. L., (1996). Fuzzy multiple objective decision making. Berlin: Springer.
Leung, L., Cao, D., (2000). Theory and methodology in consistency and ranking of
alternatives in fuzzy AHP. European Journal of Operational Research, 124, pp. 102–113.
Li, Y. Huang, H. Shen, J.Y. Xu, X.P. Gao, Y.S. (2002). Cost-effective machining of granite by
reducing tribological interactions, J. Mater. Process. Technol. 129, pp. 389–394.
Liu YC, Chen CS (2007). A new approach for application of rock mass classification on rock
slope stability assessment. Engineering Geology.;89, pp.129–43.
Luo, S.Y. (1997). Investigation of the worn surfaces of diamond sawblades in sawing granite,
J. Mater. Process. Technol. 70, pp. 1–8.
Mikhailov, L., Tsvetinov, P., (2004). Evaluation of services using a fuzzy analytic hierarchy
process. Applied Soft Computing, 5, pp. 23–33.
Mikaeil R, Ataei M, Hoseinie S.H. (2008). Predicting the production rate of diamond wire
saws in carbonate rocks cutting, Industrial Diamond Review, (IDR). 3, pp. 28-34.
Mikaeil, R., Zare Naghadehi, M., Sereshki, F., (2009). Multifactorial Fuzzy Approach to the
Penetrability Classification of TBM in Hard Rock Conditions. Tunnelling and
Underground Space Technology, Vol. 24, pp. 500–505.
Mikaeil, R., Zare Naghadehi, M., Ataei, M., KhaloKakaie, R., (2009). A Decision Support
System Using Fuzzy Analytical Hierarchy Process (FAHP) and TOPSIS Approaches
Wei X, Wang C.Y, Zhou Z.H. (2003). Study on the fuzzy ranking of granite sawability.
Journal of Materials Processing Technology, 139, pp. 277–80.
Wright and Cassapi. (1985). Factors influencing stone sawability, Industrial Diamond Review, 2, pp. 84-
87.
Xipeng, X. Yiging, Y. (2005). Sawing performance of diamond with alloy coatings, Surf.
Coat. Technol. 198, pp. 459–463.
Xu, X. (1999). Friction studies on the process in circular sawing of granites, Tribol. Lett. 7, pp.
221–227.
Xu, X. Li, Y. Malkin, S. (2001). Forces and energy in circular sawing and grinding of granite,
J. Manuf. Sci. Eng. 123, pp. 13–22.
Xu, X.P. Li, Y. Zeng, W.Y. Li, L.B. (2002). Quantitative analysis of the loads acting on the
abrasive grits in the diamond sawing of granites, J. Mater. Process. Technol. 129, pp.
50–55.
Xu, X. Li, Y. Yu, Y. (2003). Force ratio in the circular sawing of granites with a diamond
segmented blade, J. Mater. Process. Technol. 139, pp. 281–285.
Yoon, K., Hwang, C. L. (1985). Manufacturing plant location analysis by multiple attribute
decision making: Part II. Multi-plant strategy and plant relocation. International
Journal of Production Research, 23(2), pp. 361–370.
Zadeh, L. A., (1965). Fuzzy sets. Information and Control, 8, pp. 338–353.
Zare Naghadehi, M., Mikaeil, R., Ataei, M. (2009). The Application of Fuzzy Analytic
Hierarchy Process (FAHP) Approach to Selection of Optimum Underground
Mining Method for Jajarm Bauxite Mine, Iran. Expert Systems with Applications, 36,
pp. 8218–8226.
Zhu, K., Jing, Y., Chang, D. (1999). A discussion on extent analysis method and applications
of fuzzy AHP. European Journal of Operational Research, 116, pp. 450–456.
22

A Supporting Decisions Platform for the Design and Optimization of a Storage Industrial System
1. Introduction
In a recent survey the consulting company AT Kearney (ELA/AT Kearney survey 2004)
states that there are more than 900,000 warehouse facilities worldwide from retail to service
parts distribution centers, including state-of-art, professionally managed warehouses, as
well as company stockrooms and self-store facilities. Warehouses frequently involve large
expenses such as investments for land and facility equipments (storage and handling
activities), costs connected to labour intensive activities and to information systems.
Lambert et al. (1998) identify the following missions:
Achieve transportation economies (e.g. combine shipment, full-container load).
Achieve production economies (e.g. make-to-stock production policy).
Take advantage of quantity purchase discounts and forward buys.
Maintain a source of supply.
Support the firm’s customer service policies.
Meet changing market conditions and uncertainties (e.g. seasonality, demand
fluctuations, competition).
Overcome the time and space differences that exist between producers and customers.
Accomplish least total cost logistics commensurate with a desired level of customer
service.
Support the just-in-time programs of suppliers and customers.
Provide customers with a mix of products instead of a single product on each order (i.e.
consolidation).
Provide temporary storage of material to be disposed or recycled (i.e. reverse logistics).
Provide a buffer location for trans-shipments (i.e. direct delivery, cross-docking).
Bartholdi and Hackman (2003) conversely recognize three main uses:
Better matching the supply with customer demands
Nowadays there is a move to smaller lot-sizes, point-of-use delivery, high level of order and
product customization, and cycle time reductions. In distribution logistics, in order to serve
customers, companies tend to accept late orders while providing rapid and timely delivery
within tight time windows. Consequently the time available for order picking becomes
shorter.
Consolidating products
The reason to consolidate products is to better fill the carrier to capacity and to amortize
fixed costs due to transportation. These costs are extremely high when the transportation
mode is ship, plane or train. As a consequence a distributor may consolidate shipments from
vendors into larger shipments for downstream customers by an intermediate warehouse.
Providing Value-added processing
Pricing, labelling and light assembly are simple examples of value-added processing. In
particular, light assembly arises when a manufacturing company adopts a postponement
policy. According to this policy, products are configured as close to the customers
as possible.
As a result warehousing systems are necessary and play a significant role in the companies’
logistics success.
Fig. 1. Framework for warehouse design and operation problems, (Gu et al., 2007).
loads while customers order small volumes (less than unit loads) of different products as
simply shown in Figure 3. Typically, hundreds of customer orders, each made of many
requests (orderlines), have to be processed in a distribution warehousing system per day.
(Figure excerpt, warehouse operation framework: receiving and shipping involve truck-dock assignment, order-truck assignment and truck dispatch scheduling.)
order, after which the residual stock quantity is stored again. This type of system is also
called unit-load OPS. The automated crane can work under different functional modes:
single, dual and multiple command cycles. The single-command cycle means that either a load
is moved from the I/O point to a rack location or from a rack location to the I/O point. In
the dual-command mode, first a load is moved from the I/O point to the rack location and
next another load is retrieved from the rack. In multiple command cycles, the S/R machines
have more than one shuttle and can pick up several loads at the I/O point in one cycle, or
retrieve several loads from rack locations.
simultaneously by the decision maker. As a consequence he/she has to accept local optima
and sub optimizations.
Main decisions deal with the determination of (1) the system type, e.g. automatic or manual
warehousing system, parts-to-picker or picker-to-parts, unit-load or less than unit-load,
forward-reserve or forward only, etc.; (2) the best storage capacity of the system in terms of
number of pallet locations for each sku; (3) the structure of the system, i.e. the layout and
configuration of the system in terms of racks, bins, aisles, etc.; (4) the allocation of product
volumes to the storage area in agreement with the whole capacity defined by (1) and in
presence/absence of a reserve area; (5) the assignment of products to the storage area; (6) the
evaluation of the performance of the adopted system configuration by the simulation of
vehicles’ routes.
A brief and not exhaustive classification of storage systems types has been introduced in
previously illustrated Figure 4 (as proposed by De Koster 2004). The generic form of the
proposed DSS is made of active tables for data entry, reports, graphs and tables of results,
etc. A "Quick report" section reports all necessary information for the user: for example it is
possible to show the sequence of picking in an order according to a given picking list and to
collect a set of performance indices. Next subsections illustrate main data entry forms and
decision steps for the design of a storage system.
This chapter frequently adopts the following terms: fast-pick, reserve, bulk, sku, etc. What
is the difference between the fast-pick area and the reserve one? The fast-pick area is a site inside the
warehouse where the most popular skus are stored in rather small amounts so that a large
part of daily picking operations can be carried out in a relatively small area with fast
searching process and short travelled routes. The items most frequently requested by
customers are grouped in this storage area, which is often located in an easily accessible area
so that the time of picking and handling is minimized. The location of the items in the fast
pick area is better than any other in the warehouse and related operations, e.g. stocking,
travelling, searching, picking, and restocking, are faster.
The so-called class based analysis of historical storage quantities generates a histogram of
frequency values of storage levels collected in the adopted historical period of time by the
preliminary definition of a number of histogram classes of values. The histogram of
cumulative values identifies the probability of "stockin", i.e. the probability of the
complementary event of the stockout (the probability that stockout would not occur).
The continuous frequency based analysis generates a similar set of graphs without the
preliminary definition of a number of classes of values (historical measures).
The so called DP approach identifies the best level of storage capacity by the analysis of
historical demand profiles. Given a period of time, e.g. one year, and a set of values of
demand quantities for each product within this period, DP quantifies the expected demand
during an assumed subperiod of time t, called time supply (e.g. 3 weeks). As a consequence
this approach assumes that an equal time supply of each sku is stored. This is a frequently
adopted strategy in industrial applications and is widely discussed by Bartholdi and Hackman
(2003): the equal time strategy (EQT). Under this strategy the storage system should supply the
expected demand orders for the period of time t without stockouts. Obviously this depends
on the adopted fulfilment system, which relates with inventory management decisions
significantly correlated to the storage/warehousing decisions object of this chapter.
The output of a storage capacity evaluation is the storage volume of products in the fast pick
area (adopting a forward-reserve configuration system) and the whole storage capacity
(including the bulk area when forward-reserve configuration is adopted). In presence of fast
picking, it usually refers to the lowest level of storage: the so-called 0 level.
This capacity is usually expressed in terms of cubic metres, litres, or the number of pallets,
cases, cartons, pieces of product, etc.
Figure 6 presents the form of the proposed DSS for data entry and evaluation of historical
storage levels given an admissible risk of stockout. This figure also shows a curve “risk of
stockout” as a result of the statistical analysis of historical observations: this is the stockout
probability plot. Obviously given a greater storage capacity this risk decreases.
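A minimal sketch of this capacity analysis follows: given a series of historical storage levels (here randomly generated and purely hypothetical), the empirical risk of stockout is the fraction of observations that exceed each candidate capacity, which reproduces the decreasing curve of Figure 6.

```python
import numpy as np

def stockout_risk(historical_levels, capacities):
    """For each candidate storage capacity, the fraction of historical observations
    that exceed it, i.e. the empirical risk of stockout (cf. Figure 6)."""
    levels = np.asarray(historical_levels, dtype=float)
    return np.array([(levels > c).mean() for c in capacities])

# Hypothetical daily storage levels (e.g. pallets) observed over one year.
rng = np.random.default_rng(0)
levels = rng.gamma(shape=20, scale=5, size=365)
capacities = np.arange(50, 200, 10)
for c, r in zip(capacities, stockout_risk(levels, capacities)):
    print(f"capacity {c:4d} -> stockout risk {r:.2%}")
```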
specific supply period of time e.g. a month or a year. Consequently, it is necessary to ensure
replenishment of picked goods from a bulk storage area, known as reserve area.
Therefore, consideration has to be given to an appropriate choice between the space
allocated for an item in the fast pick area and its restock frequency (Bartholdi and Hackman,
2003). They discuss three different allocation strategies for calculating the volume to be
assigned to each sku, assuming it to be an incompressible, continuously divisible fluid. The models
proposed for determining the sku level of stock are based on the following notation:
let fi be the rate of material flow through the warehouse for the skui;
let the physical volume of available storage be normalized to one. vi represents the
fraction of space allocated to skui so that:
Σi vi = 1    (1)
Three different levels of stock for skui are defined as follows:
i. Equal Space Strategy (EQS). This strategy identifies the same amount of space for each
sku. The fraction of storage volume to be dedicated to the skui under EQS is:
vi = 1 / n    (2)

where n is the number of skus.
ii. Equal Time Strategy (EQT). In this strategy each sku i is replenished an equal number of
times according to the demand quantities during the period of time considered. Let K
be the common number of restocks during a planning period so that:
fi / vi = K    (3)

K = Σj fj    (4)

The fraction of storage volume to be dedicated to the skui under EQT is:

vi = fi / K = fi / Σj fj    (5)
iii. Optimal Strategy (OPT). Bartholdi and Hackman (2003) demonstrate that this strategy
minimizes the number of restocks from the reserve area. The fraction of available space
devoted to skui is:
vi = √fi / Σj √fj    (6)
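Under the stated fluid assumption, the three strategies reduce to simple closed-form fractions; the sketch below (with hypothetical flow values) computes them side by side:

```python
import numpy as np

def allocation_fractions(flows, strategy):
    """Fraction of forward-area volume per sku under EQS, EQT or OPT (Eqs. 2, 5, 6).

    flows: rate of material flow f_i through the forward area for each sku.
    """
    f = np.asarray(flows, dtype=float)
    if strategy == "EQS":                  # same space for every sku
        return np.full_like(f, 1.0 / len(f))
    if strategy == "EQT":                  # space proportional to flow (equal restocks)
        return f / f.sum()
    if strategy == "OPT":                  # proportional to sqrt(flow): fewest restocks
        return np.sqrt(f) / np.sqrt(f).sum()
    raise ValueError(strategy)

flows = [120.0, 40.0, 10.0]                # hypothetical sku flows per period
for s in ("EQS", "EQT", "OPT"):
    print(s, np.round(allocation_fractions(flows, s), 3))
```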
A critical issue supported by the what-if multi-scenario analysis, which can be effectively
conducted by the proposed DSS, is the best determination of the fraction of storage volume
for the generic sku as the result of the minimization of pickers travelling time and distance
in a forward – reserve picker to part order picking system.
Equations (2), (5), and (6) give the fraction of fast-pick volume to be assigned to the generic item
i. As a consequence, the level of storage assigned to the fast pick area must be known beforehand
in order to properly define each dedicated storage size. A company
usually traces and knows the historical picking orders with a high level of detail (date of
order, pickers, picked skus and quantities, visited locations, pickers id, restocking
movements, etc.) thanks to traceability tools and devices (barcode, RFID, etc.), but it rarely
has "photographs" of the storage systems, i.e. historical values of storage levels. In presence
of forward-reserve systems and a few (or many) photographs of storage levels, the generic
inventory level, made available by the warehouse management system, does not distinguish
the contribution due to fast pick area and that stored in bulk area as discussed in section 3.1.
The proposed DSS supports the user to define the whole level of storage for fast picking, the
level of storage for bulk area and those to be assigned to each sku in both areas.
Item allocation affects system performance in all the main activities previously discussed:
pallet-loading (L), restocking/replenishment (R) and picking (P). This is one of the main
significant contributions of the proposed DSS to knowledge and warehousing system
optimization. The following questions are still open: Which is the effect of storage allocation to
travel time and distances due to L, R and P activities? Given a level of storage assigned to the fast
pick area, is the OPT the best allocation storage for travel distance and time minimization?
We know that the OPT rule, equation (6), supports the reduction of the number of restocks
(R), but the logistic cost of material handling is also due to the (L) and (P) activities. The multi-
scenarios what-if analysis supported by the proposed DSS helps the user to find the most
profitable problem setting which significantly varies for different applications.
Figure 10 presents exemplifying results of the item allocation for an industrial application.
This is a low level forward-reserve OPS: for each sku the number of products in fast pick
and reserve areas is determined.
A list of typical indices adopted to rank the skus for the assignment of storage locations
follows:
Popularity (P). This is defined as the number of times an item belongs to an order in a
given set of picking orders which refer to a period of time T:
Pi,T = Σj xij    (7)

where

xij = 1 if sku i belongs to order j, and 0 otherwise,

and (xij) is the product-order incidence matrix.
Cube per Order Index (COI) can be defined as the ratio of volume storage to inventory for
the generic sku to the average number of occurrences of sku in the order picking list for
a given period of time (Haskett, 1963). Given an sku i, COI is defined as the ratio of the
volume of the stocks to the value of its popularity in the period of interest T. Formally:
COIi,T = vi,T / Σj xij    (8)

where vi,T is the average storage level of sku i in time period T.
Order Closing Index (OC). Order Completion (OC) assignment is based on the OC
principle introduced by Bartholdi and Hackman (2003). Bindi (2010) introduced the
Order Completion rule based on an index called OC index that evaluates the probability
of a generic item being part of the completion of an order, composed of multiple
orderlines of different products (items). The OC index is the sum of the fractions of
orders the generic item performs. For a generic sku and a time period T, OC is defined
as follows:
OCi,T = Σ (over orders j in period T) fij,T    (9)

where

fij,T = xij / m(j)    (10)

and m(j) = Σz xzj is the number of orderlines in order j.
Turn Index (T). Given an sku i, it is defined as the ratio of the picked volume during a
specific period of time T to the average stock stored in T. The index can be written as:
Ti,T = Σ (over orders j in period T) pij / vi,T    (11)

where pij is the quantity of sku i picked in order j.
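These indices are straightforward to compute from historical order lines. The sketch below uses a tiny hypothetical data set and pandas, with m(j) taken as the number of distinct skus in order j and the average stock levels vi,T assumed:

```python
import pandas as pd

# Hypothetical order lines: one row per (order, sku, picked quantity).
orders = pd.DataFrame({
    "order_id": [1, 1, 2, 2, 3, 3, 3],
    "sku":      ["A", "B", "A", "C", "A", "B", "C"],
    "qty":      [5, 2, 3, 1, 4, 1, 2],
})
avg_stock = pd.Series({"A": 40.0, "B": 10.0, "C": 8.0})   # assumed average levels v_i,T

popularity = orders.groupby("sku")["order_id"].nunique()           # Eq. (7)
coi = avg_stock / popularity                                        # Eq. (8)
lines_per_order = orders.groupby("order_id")["sku"].transform("nunique")
oc = (1.0 / lines_per_order).groupby(orders["sku"]).sum()          # Eqs. (9)-(10)
turn = orders.groupby("sku")["qty"].sum() / avg_stock               # Eq. (11)

print(pd.DataFrame({"P": popularity, "COI": coi, "OC": oc, "T": turn}))
```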
travelling distance and time during the picking activity. In order to group
products, the statistical correlation between them should be known or at least be
predictable, as described by Frazelle and Sharp (1989), and by Brynzér and
Johansson (1996).
The proposed tool assigns the location to the generic sku by the Cartesian product, as the
direct product of two different sets. The first set (RANKsku)i is made of the list of skus
ordered in agreement with the application of a ranking criterion (see Figure 10), e.g. the
popularity measure based rule or a similarity based & clustering rule (Bindi et al. 2009). The
second set is made of available locations ordered in agreement with a priority criterion of
location assignment, e.g. the shortest time to visit the location from the I/O depot area. As
a consequence, the most critical skus are assigned to the nearest available locations. Obviously
the generic sku can be assigned to multiple locations in presence of more than one load
stored in the system.
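A simplified sketch of this ranking-and-matching step (one location per sku, with hypothetical ranking values and travel times) is:

```python
# Hypothetical ranking values and location travel times.
sku_rank = {"A": 37, "B": 12, "C": 5, "D": 20}                        # e.g. popularity per sku
location_time = {"L01": 4.0, "L02": 6.5, "L03": 3.2, "L04": 8.1}      # seconds from I/O depot

# Most critical skus (highest rank value) get the quickest-to-reach locations.
ranked_skus = sorted(sku_rank, key=sku_rank.get, reverse=True)
ranked_locations = sorted(location_time, key=location_time.get)
assignment = dict(zip(ranked_skus, ranked_locations))
print(assignment)   # {'A': 'L03', 'D': 'L01', 'B': 'L02', 'C': 'L04'}
```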
This assignment procedure refers to the product quantities subject to picking, e.g. those located
in the so-called fast picking area: this is the so-called low level OPS. In high level systems all
locations at different levels can be assigned in agreement with the adopted ranking
procedure. Both types of warehouses are supported by the proposed DSS. Figure 10 shows
the form for setting the assignment of products within the system.
Fig. 10. Storage assignment setting, ranking criterion and ranking index.
Correlated storage assignment rules are supported by the DSS in agreement with the
systematic procedure proposed and applied by Bindi et al. (2009 and 2010). Figure 11
exemplifies the result of the assignment of products within the fast pick area by the use of
different colours, one for each sku. Similarly the result of the assignment of products to the
higher levels can be shown as illustrated in Figure 12.
Figure 14 shows the form for the visual simulation of the picking orders in forward-reserve
OPS. This simulation also quantifies the costs due to restocking. Similarly it is possible to
simulate the behaviour of the system including pallet loading (L) activities, and/or in
presence of AS/RS (adopting Chebyshev metric), and/or in presence of correlated storage
assignment.
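For the AS/RS case, the Chebyshev metric mentioned above reflects the fact that the crane's horizontal and vertical movements overlap, so travel time is governed by the slower axis. A minimal sketch (the speeds and coordinates are assumed values):

```python
def asrs_travel_time(x1, y1, x2, y2, vx=2.0, vy=0.8):
    """Chebyshev travel time of an S/R crane that moves horizontally and vertically
    at the same time: the slower axis determines the time.
    vx, vy are hypothetical horizontal/vertical speeds in m/s."""
    return max(abs(x2 - x1) / vx, abs(y2 - y1) / vy)

# Single-command cycle from the I/O point (0, 0) to a rack location and back.
t = 2 * asrs_travel_time(0, 0, 18.0, 6.0)
print(f"single-command cycle time ~ {t:.1f} s")
```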
4. Case study
The proposed DSS has been applied to a low level picker to part OPS for spare parts of
heavy equipment and complex machinery in a popular manufacturing company operating
worldwide. The total number of items stored and handled is 185,000 but this is continuously
growing due to new business acquisitions and above all to engineering changes to address
new requirements for pollution control and reduction.
The subject of the analysis is the picking activities concerning medium-sized parts weighing
less than 50 pounds per piece. These parts are stored in light racks corresponding to about
89,000 square feet of stocking area. This area contains more than 3,000 different items.
The horizon time for the analysis embraces the order profile data during four historical
months. The number of order picking lines is 37,000 that correspond to 6,760 different
customer orders. The picking list presents an average of 86 orders fulfilled per day with the
average depth varying around 6 items per order.
The result of the design of the order picking system is a 58,400 square foot picking area (350
feet x 170 feet). Table 2 demonstrates that OPT strategy significantly reduces the number of
restocks for the historical period of analysis in agreement with Bartholdi and Hackman
(2010). The reduction is about 55% compared to EQS, and about 62% compared to EQT,
thus confirming the effectiveness of the OPT strategy.
Table 3 reports the values of traveled distances and aisles crossed in retrieving operations
i.e. excluding restocking for different assignment rules and allocation strategies. Table 3
demonstrates that COI and P assignment rules reduce picking activities and cost the most.
In particular, the best performance is obtained by adopting the COI assignment rule and the
EQS allocation strategy, quite different from the OPT strategy which minimizes the number
of restocks (see Table 2).
Assignment rules | EQS: Traveled distance, Aisles crossed | EQT: Traveled distance, Aisles crossed | OPT: Traveled distance, Aisles crossed
COI 6,314,459 33,579 6,025,585 33,659 6,706,537 34,482
OC 6,536,697 33,922 8,047,296 36,210 7,241,533 35,424
P 6,379,887 33,713 7,254,318 35,270 6,869,774 34,655
T 8,015,507 35,766 8,155,378 36,191 8,717,042 36,497
Table 3. What-if analysis results. Traveled distance [feet] and aisle crossed [visits] during a
picking period of 4 months
The figure shows where the most frequently visited skus are located in the fast pick area: the
size of each circle is proportional to the popularity value, according to the return and the
traversal strategies respectively.
Fig. 15. Storage assignment in return strategy with I/O located at (x,y)=(170,0)
6. Acknowledgment
The authors would like to thank Prof. John J. Bartholdi of the Georgia Institute of
Technology who gave us several useful suggestions to improve the research.
7. References
Bartholdi, J. and Hackman, S.T., 2003. Warehouse & distribution science,
<http://www2.isye.gatech.edu/people/faculty/John_Bartholdi/wh/book/editio
ns/history.html> (date of access: January 2010).
Bindi, F., Manzini, R., Pareschi, A., Regattieri, A., 2009, Similarity-based storage allocation
rules in an order picking system. An application to the food service industry.
International Journal of Logistics Research and Applications. Vol.12(4), pp.233-247.
Bindi, F., Ferrari, E., Manzini, R., Pareschi, A., 2010, Correlated Storage Assignment and
Isotime Mapping for Profiling Sku, XVI International Working Seminar on production
Economics, Innsbruck (Austria),Volume 4, pp 27-41. Edited by W.Grubbström and
Hans H. Hinterhuber.
458 Efficient Decision Support Systems – Practice and Challenges in Multidisciplinary Domains
Bindi, F., 2010. Advanced Models & Tools for Inbound & Outbound Logistics in Supply
Chain, PhD thesis in industrial engineering of University of Padova, 81-86.
Brynzér, H. and Johansson, M.I., 1996. Storage location assignment: using the product
structure to reduce order picking times. International Journal of Production Economics,
46, 595-603.
Cahn, A.S., 1948. The warehouse problem, Bulletin of the American Mathematical Society, 54,
1073.
De Koster, R. and T. Le-Duc, 2004. Single-command travel time estimation and optimal rack
design for a 3-dimensional compact AS/RS, Progress in Material Handling Research:
2004, 2005, 49-66
ELA/AT Kearney, 2004. Excellence in Logistics.
Frazelle, E. H., Sharp, G. P., 1989. Correlated assignment strategy can improve any order
picking operation. Industrial Engineering, 21, 33-37.
Frazelle, E.H., 2002. World-class Warehousing and Material Handling. McGraw Hill, New York.
Haskett, J.L., 1963, Cube per order index – a key to warehouse stock location, Transportation
and Distribution Management, 3, 27-31.
Gu, J., Goetschalckx, M., McGinnis, L.M., 2007. Research on warehouse operation: A
comprehensive review, European Journal of Operational Research, 177(1), 1-21.
Lambert, D.M., Stock, J.R. and Ellram, L.M., 1998. Fundamentals of logistics management.
Singapore: McGraw-Hill.
Manzini, R., 2006. Correlated storage assignment in an order picking system, International
Journal of Industrial Engineering: Theory Applications and Practice, v 13, n 4, 384-394.
Manzini, R., Gamberi, M., Persona, A., Regattieri, A., 2007. Design of a class based storage
picker to product order picking system, International Journal of Advanced
Manufacturing Technology, v 32, n 7-8, 811-821, April 2007.
Manzini, R., Regattieri, A., Pham, H., Ferrari, E., 2010, Maintenance for industrial systems.
Springer-Verlag, London.
Petersen, C.G., 1999. The impact of routing policies on warehousing efficiency, International
Journal of Operations and Production Management, 17, 1098-1111.
Tompkins, J.A., White, J.A., Bozer, Y.A., Frazelle, E.H. and Tanchoco, J.M.A., 2003. Facilities
Planning. NJ: John Wiley & Sons.
Van den Berg, J.P., Zijm, W.H.M., 1999, Models for warehouse management: classification
and examples, International Journal of Production Economics, 59, 519-528.
23
1. Introduction
The history of decision support systems in forestry is quite long, as is the list of systems created and of reviews summarizing their merits and flaws. It is generally recognized that a modern decision support system (DSS) should simultaneously address as many economic and ecological issues as possible without becoming overly complex, and still remain understandable for users (Reynolds et al., 2008). The ongoing global change, including climate change, sets new boundary conditions for decision makers in the forestry sector. Changing growth conditions (Albert & Schmidt, 2010) and the expected increasing number of weather extremes such as storms force forest owners to decide how to replace damaged stands and/or how to mitigate the damages. This decision-making process requires adequate information on the future climate as well as on the complex climate-forest interactions, which could be provided by an appropriate climate-driven decision support tool. Both damage factors and forest management (e.g. harvesting) change the structure of forest stands. These structural changes result in immediate changes of albedo and roughness of the land surface, as well as of the microclimatic conditions within the stand and on the soil surface. The consequences are manifold. The changed stand density and leaf area index trigger energy and water balance changes, which in turn increase or decrease the vulnerability of the remaining stand to abiotic and biotic damage factors such as droughts or insect attacks. A change of the microclimatic conditions might strengthen the forest against drought, but at the same time reduce its resistance to windthrow. The sign and extent of the vulnerability changes depend on complex interactions of the effective climatic agents, the above- and belowground forest structure, and the soil. Many DSSs are capable of assessing one or several risk factors; however, few are able to assess the additional increase or decrease of risks triggered by modifications of forest structure resulting from previous damage or forest management activities. Disregarding these effects will inevitably lead users to either under- or overestimate the potential damages. The question arises whether these additional risks are significant enough to be considered in a DSS.
In this chapter we present a new DSS developed according to the above-mentioned requirements and capable of providing decision support that takes economic and ecological considerations into account under changing climate conditions: the Decision Support
System – Forest and Climate Change. We then use the modules of that system to investigate the contribution of the additional forest damage risks caused by structure changes to the general vulnerability of forest stands under a changing climate.
2. DSS WuK
The Decision Support System – Forest and Climate Change (DSS-WuK) was developed to assist forest managers in Germany in selecting an appropriate tree species under a changing climate while considering future risks (Jansen et al., 2008; Thiele et al., 2009). It operates with a daily time step and combines traditional statistical approaches with process-based models.
2.1 Input
The system is driven by SRES climate scenarios and hindcast data modelled by the coupled ocean-atmosphere general circulation model ECHAM5-MPIOM developed by the Max Planck Institute for Meteorology (Roeckner et al., 2006). Additional input data are a digital elevation model (Jarvis et al., 2008) and its derivatives, aspect and slope, soil data from the national forest soil map (Wald-BÜK, Richter et al., 2007) at a scale of 1:1,000,000, and atmospheric nitrogen deposition estimated using a modified version of the extrapolation model MAKEDEP (Alveteg et al., 1998; Ahrends et al., 2010a) with data from Gauger et al. (2008). The user of the DSS provides additional input on the stand characteristics (tree species, age, yield class) of the forest of interest.
different tree species, since the first stage does not include stand growth. To avoid an underestimation of risks, the stands are chosen roughly at their rotation age, as these stands have the highest vulnerability to wind and insect damage.
If a user is interested in a detailed description of the changes in growth and yield data or in the various risks, she/he can obtain a more sophisticated analysis by starting the second stage of the system. In this part of the DSS-WuK, on-demand simulations for a user-defined stand with downscaled climate data (1 km resolution) are carried out. Because of the relatively long response time, the system sends a report as a pdf file via email. In the second stage, the stand no longer has a static structure but grows climate-dependently using an individual-tree growth model. The stand characteristics are updated every 10 years and submitted to the risk modules, which carry out their calculations with the new structure at a daily time step (Fig. 1). The effect of the growing stands on the risks is evaluated, meaning that the stand characteristics determine the effect of the damage factors, i.e. extreme weather events. The risks in the system are defined as the percentage of area loss of a fully stocked stand.
Both stages use the same core model (Fig. 1) and all calculations are carried out at daily resolution. All models receive an update of their input data after each 10-year period. The risk model for drought mortality is coupled to the soil water model. The soil water itself is influenced, inter alia, by the current stand characteristics. The soil water and climate conditions influence the output of the wind damage and insect attack risk models as well as of the stand growth model. In the next iteration, the soil water model receives the stand characteristics updated by the stand growth model. The results are finally aggregated to the 30-year climatological periods mentioned above.
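To make this coupling concrete, the following Python sketch outlines how such a second-stage loop could be organized: the stand characteristics are refreshed from the growth model every 10 years, while the soil water and risk modules run at a daily time step and their results are accumulated over the climatological period. This is a minimal illustration only; all function and variable names (grow_stand, soil_water_step, etc.) are hypothetical placeholders and not the actual DSS-WuK implementation.

# Hypothetical sketch of the DSS-WuK second-stage coupling (illustrative only).
def grow_stand(stand):
    """Placeholder 10-year growth update of the stand characteristics."""
    return {**stand, "age": stand["age"] + 10}

def soil_water_step(soil, stand, weather):
    """Placeholder daily soil water balance; returns the updated soil state."""
    return soil

def drought_risk(soil, stand):
    return 0.0  # % of stand area lost per day (placeholder)

def wind_risk(soil, stand, weather):
    return 0.0  # placeholder

def insect_risk(soil, stand, weather):
    return 0.0  # placeholder

def run_second_stage(stand, soil, daily_weather_by_year):
    """Aggregate the daily risks over one 30-year climatological period."""
    risks = {"drought": 0.0, "wind": 0.0, "insect": 0.0}
    for year, year_weather in enumerate(daily_weather_by_year):
        if year % 10 == 0:                   # 10-year update of the stand input
            stand = grow_stand(stand)
        for weather in year_weather:         # daily time step
            soil = soil_water_step(soil, stand, weather)
            risks["drought"] += drought_risk(soil, stand)
            risks["wind"] += wind_risk(soil, stand, weather)
            risks["insect"] += insect_risk(soil, stand, weather)
    return risks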
However, working at the level of the forest stand, the model is not yet able to consider the effect of the risks on stand growth, structure and microclimate (Fig. 1). The estimated damage risks are not taken into account in the calculations of stand growth; the two are executed (conceptually) in parallel. The system thus ignores the fact that a tree which was, for example, thrown by wind cannot be damaged by insects or drought later on. Overcoming this weakness is one of the challenges in the development of forest decision support systems. In the following, we demonstrate the importance of this point by performing numerical experiments with a reduced-complexity, directly interacting part of the simulation system DSS-WuK.
parameters in forest ecosystem models. The decrease of leaf area immediately reduces rain and light interception and therefore enhances the supply of light and water under the forest canopy. This in turn results in increased evaporation from the soil surface under the canopy, but in decreased transpiration due to the reduced LAI. Thus, the interplay of all these factors may lead to an increase or a decrease of soil moisture, depending on the particular site conditions: latitude, meteorological conditions, soil type, tree species, initial LAI, extent of damage/management, etc. These changed conditions may increase or decrease the forest vulnerability and the probability of various consequent damages. If, for example, the resulting effect of windthrow were an increase of soil moisture in a stand, it would, on the one hand, destabilise the stand against the next storm, but, at the same time, reduce the probability of drought stress (Ahrends et al., 2009).
Fig. 2. Impacts of changing forest structure on stand vulnerability resulting from triggered
interaction of ecosystem processes.
Therefore, to quantify the contribution of the additional damage caused by forest structure changes, the experiments at each of the chosen forest sites were carried out twice, as shown in Fig. 3 (a minimal code sketch of this set-up follows the list):
with static structure, i.e. the climate-caused structural damages were registered and summed up for the 30-year climatic periods, but the actual structure of the modelled stands remained unchanged;
with dynamic structure, i.e. a part of the stand was removed according to the damage degree and the calculations proceeded with the changed structure until the next damage event, and so on for the rest of the particular 30-year period.
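A minimal sketch of this paired experiment, assuming a generic daily damage model, is given below; names such as damage_event and apply_damage are illustrative placeholders rather than the actual model interfaces of DSS-WuK.

# Hypothetical sketch of the paired "static" vs. "dynamic" structure runs.
def damage_event(stand, weather):
    """Placeholder damage model (% of stand area lost on a given day)."""
    return 0.0

def apply_damage(stand, damage):
    """Reduce LAI and stand density proportionally to the damaged area."""
    frac = 1.0 - damage / 100.0
    return {**stand, "lai": stand["lai"] * frac, "density": stand["density"] * frac}

def simulate_period(stand, weather_series, dynamic):
    total = 0.0
    for weather in weather_series:           # daily steps over one 30-year period
        damage = damage_event(stand, weather)
        total += damage
        if dynamic and damage > 0.0:         # feedback run: update the structure
            stand = apply_damage(stand, damage)
    return total

# The feedback contribution is the difference between the two runs:
# extra = simulate_period(stand0, weather, True) - simulate_period(stand0, weather, False)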
300 m above sea level, was selected to study the possible effects of windthrow events on the forest vulnerability to subsequent windthrow damage, i.e. the windthrow feedback. The characteristics of the investigated spruce stand are given in Table 1.
Fig. 3. The scheme of calculations with damage-independent structure (upper panel) and with the forest structure updated (LAI and stand density reduced) after each damage event during a 30-year period (lower panel).
Depth        Stone [%]   Ψu [kPa]   θu [-]   θs [-]   b [-]   Ku [mm day-1]   fi [-]
+5-0 (FF)    0           -6.3       0.384    0.848    5.23    2.3             0.92
0-5          2           -6.3       0.615    0.621    6.4     4.98            0.92
5-15         3           -6.3       0.615    0.621    6.4     4.98            0.92
15-20        3           -6.3       0.508    0.517    6.12    4.98            0.92
20-40        3           -6.3       0.508    0.517    6.12    4.4             0.92
40-60        30          -6.3       0.472    0.482    8.7     4.4             0.92
60-70        30          -6.3       0.472    0.482    8.7     1.76            0.92
70-95        30          -6.3       0.371    0.377    26.5    1.76            0.97
95-200       30          -6.3       0.371    0.377    26.5    1.28            0.97
Table 2. Soil hydraulic properties at saturation (subscript ‘s’) and at the upper limit of available water (subscript ‘u’). Note that ψ is the matric potential, K is the hydraulic conductivity, θ is the volumetric water fraction, b is the pore-size distribution parameter, fi is the Clapp-Hornberger inflection point, Ku is the hydraulic conductivity at the upper limit of available water, and FF is the forest floor. The stone content was taken from Feisthauer (2010).
The climate conditions at the research site “Otterbach” in the Solling were characterized by means of long-term measurements at the meteorological station “Silberborn” (51.76°N, 9.56°E, 440 m a.s.l.) of the German Weather Service, located near the “Otterbach” site. The measurement time series covers the period from 1976 to 2008. To account for the 140 m altitude difference between the “Otterbach” site and the “Silberborn” station, the station temperature was corrected using a constant gradient of 0.0065°C m-1.
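Under this assumption the correction amounts to 0.0065°C m-1 × 140 m ≈ 0.9°C, added to the station temperature because the “Otterbach” site lies 140 m below the “Silberborn” station.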
temperature, rooting depth, and applied to a standard leaf area typical for a given tree species. Variations of these characteristics increase or decrease the critical wind load threshold.
The influence of the rooting depth (RD) is taken into account as a linear factor describing the ratio of the actual tree anchorage (RDact) to a “reference” value (RDref) for the given tree species, age and state: fRD = RDact/RDref. As the indicator of soil moisture provided by the soil water module, the Relative Extractable Water (REW) (Granier et al., 2007) was adopted. The effect of soil moisture on windthrow is expressed as a linear function of the REW deviation relative to its reference value REWref = 0.6 REWt,s for the given combination of tree species (subscript “t”) and soil type (subscript “s”). The critical wind load (CWL) for the actual soil moisture and forest structure is therefore derived from its reference value: CWLact = (CWLref/REWact) · fRD. The system distinguishes between wet, dry (REWact < 0.6 REWt,s) and frozen (Tsoil < 0°C) conditions. Under dry and frozen conditions the probability of overturning is assumed to be zero and only damage caused by stem breakage is possible.
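The following Python sketch illustrates how the actual critical wind load could be derived from these quantities; it is a minimal reading of the relations above, with illustrative names, and not the actual wind module of DSS-WuK.

# Illustrative computation of the actual critical wind load,
# CWLact = (CWLref / REWact) * fRD, with the wet/dry/frozen distinction.
def critical_wind_load(cwl_ref, rew_act, rew_ref, rd_act, rd_ref, t_soil):
    f_rd = rd_act / rd_ref                      # rooting-depth (anchorage) factor
    cwl_overturning = cwl_ref / rew_act * f_rd  # soil-moisture-modified CWL
    dry = rew_act < rew_ref                     # REWref = 0.6 * REW(t, s)
    frozen = t_soil < 0.0
    if dry or frozen:
        # Under dry or frozen soil, overturning is assumed impossible;
        # only stem breakage remains as a damage mechanism.
        cwl_overturning = float("inf")
    return cwl_overturning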
Thus, when the actual wind load calculated from the climate and wind modules exceeds the CWL (either for overturning or for stem breakage), modified by the output of the soil and LAI modules, the next wind damage event occurs. The stand density and LAI are reduced, which in turn leads to changes in the stand microclimate, soil water regime and tree anchorage as described above, and consequently to a further modification of the CWL. The new CWL can be either higher or lower than the one before the damage event, so the stand vulnerability is either increased or decreased. Simultaneously, the risk of drought stress changes as well.
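This event-driven feedback can be sketched as a simple daily loop, shown below under the same simplifying assumptions as the previous listings: hypothetical names, a placeholder damage function, a CWL assumed to scale with the relative stand density, and no explicit soil water response to the reduced LAI.

# Hypothetical sketch of the windthrow feedback loop (illustrative only).
def run_with_feedback(stand, days, cwl_ref):
    total_damage = 0.0                               # % of stand area lost
    density_0 = stand["density"]                     # initial (reference) density
    for day in days:                                 # daily weather/soil records
        f_rd = stand["rd"] / stand["rd_ref"]         # anchorage factor
        f_struct = stand["density"] / density_0      # assumption: CWL scales with
                                                     # the relative stand density
        cwl = cwl_ref * f_struct / day["rew"] * f_rd # structure- and moisture-
                                                     # modified critical wind load
        if day["wind_load"] > cwl:                   # wind damage event triggered
            damage = min(100.0, 10.0 * (day["wind_load"] / cwl - 1.0))  # placeholder
            total_damage += damage
            frac = 1.0 - damage / 100.0              # reduce LAI and stand density,
            stand = {**stand,                        # lowering the CWL for the
                     "lai": stand["lai"] * frac,     # following days
                     "density": stand["density"] * frac}
    return total_damage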
4. Results
4.1 Reference period, Otterbach
4.1.1 Climate and water budget components
As mentioned above, the system was first run for the real spruce stand at the research site “Otterbach” under the real climate measured during the period from 1976 to 2008.
Fig. 4 shows the time courses of air temperature and maximal wind speed at the research site. A slight trend towards higher temperatures can be observed in the 32-year series. The warming trend generally results in a destabilization of forest stands in terms of wind damage, as the number of days with frozen soil in winter decreases; that is, the anchorage is reduced during the period of winter storms (Peltola et al., 1999). The wind curve shows several years with very strong storm events: 1983, 1990, 1993, and 1994. Fig. 5 shows the course of the precipitation sums and of the major water balance components calculated by the soil water module of the system. Except for 1994, the storm years were not accompanied by unusually high annual precipitation sums.
4.1.2 Windthrow
The calculations of wind damage with unchanged structure, i.e. without feedback, clearly show that, despite the importance of the soil, relief and vegetation factors, the driving force defining the temporal pattern of damage is the wind (Fig. 6). The extensive windthrow events coincide with the strong windstorm years listed above, with the largest event in 1993, the year with the strongest storm. The effect of the other factors, however, is clearly visible in the non-linear relationship between storm strength and damage amount. For instance, the damage in 1990 is lower than in 1994 and 1995, although the storm was stronger. The
possible reason can be deduced from Fig. 5, which shows that 1990 was also drier than both 1994 and 1995. The drier soil provided better anchorage for the trees.
Fig. 4. Mean annual air temperature and maximal windspeed measured at the meteorological station of the German Weather Service near the research station “Otterbach” for the simulation period from 1976 to 2008.
Fig. 5. The main water budget components for the simulation period from 1976 to 2007.
This effect can be observed even better when the feedbacks of the windthrow damage are taken into account (Fig. 6, lower panel). The reduction of stand density and LAI resulting from the first strong storm in 1990 leads to a considerable destabilization of the stand and an increase of its vulnerability, so that the strongest storm in 1993 produced even higher damage. This, in turn, results in a damage increase of more than 5% during the following storms in 1994-1995. Thus, the decreasing resilience of the stands and the decreasing CWL produce a cascade of wind damage events at the end of the period. Though each event in itself is not very extensive, the constantly decreasing CWL adds the small damages up to significant annual sums. Such frequent small windthrows are also observed in reality at the Otterbach research site. This
cascade is not observed in the modelling results when the feedbacks are not considered (Fig. 6, upper panel).
[Figure 6 plots the daily windthrow risk [%] from 1975 to 2005 in two panels; the annotated events are the storm of 24 January 1993 (Vmax > 50 m s-1), several hurricanes from “Daria” to “Vivian” (Vmax > 40 m s-1) and, in the lower panel, “Anatol” on 3 December 1999 (Vmax > 30 m s-1).]
Fig. 6. Daily windthrow risk without (upper panel) and with feedback (lower panel) for the
simulation period.
[Figure 7 consists of three panels plotted against the days after the windthrow event (0-700), each with the daily windthrow risk [%] on the secondary axis; the middle panel shows transpiration [mm d-1] and the lower panel the relative soil water content [-].]
Fig. 7. Effect of static and dynamic (with feedback) forest structure on the water budget components after the strong wind storm of 25.01.1993.
However, towards the next winter (1994) the temperature-limited transpiration and the increased precipitation result in the highest soil water content both in the feedback (reduced LAI) and in the no-feedback (normal LAI) calculations. Therefore, the 5% higher damage produced by the next storm (1994) in the calculation with feedback is caused only to a very small degree by the reduced anchorage in the soil, but mainly by the destabilisation of the forest resulting from the reduced stand density. Another consequence of this destabilisation is the occurrence of several small windthrows in the feedback calculations, which are clearly the product of the reduced CWL and are therefore not observed in the calculation without feedback; the latter registers only the damage of the two main storms in 1994.
The strong events in 1994 caused a further decrease of LAI in the feedback version, which led to an even stronger difference in transpiration in summer 1995 (Fig. 7, middle panel, days 450-650) and, therefore, to significantly higher soil moisture (Fig. 7, lower panel, days 450-650). Combined with the reduced stand density, the total effect was such a strong reduction of the CWL that it led to the untypical windthrow event in late summer, around day 600 after the initial windthrow event of 1993. Under the presented soil and climate conditions, the damage risks with feedback are about twice as high as without feedback.
4.2 Climate projection SRES A1B and damage risks in Finland and Russia
4.2.1 Projected climate
The experiment was limited to two variants of the boreal climate: the more continental one in Russia and the maritime one in southern Finland. Norway spruce and Scots pine stands were chosen as typical for the boreal zone. According to the same criterion, a podzol was taken as the soil type for the experiment. The calculations were carried out for a period of 120 years (1981 - 2100) with a daily time step.
The analysis of the projected climate data shows a considerable increase of both summer and winter temperatures for the boreal climate zone (Fig. 8). The precipitation in Finland increases noticeably during the 21st century, with the increase due to both higher summer and higher winter precipitation. In Russia the projected increase of precipitation is very weak, and only the winter precipitation shows a consistent rise (Fig. 9).
Both in Finland and in Russia the maximal windspeed shows a similar pattern during the 21st century: a strong increase from present conditions to the period 2011-2040 and almost constant values (or even a slight decrease) towards 2100 (Fig. 10). For both sites the winter winds are stronger than the summer winds, the difference being more pronounced in Finland. The period 2041-2070 has the highest number of strong storms (not shown here), and the period 2071-2100 the second highest number.
Fig. 8. Changes of the mean annual air temperature (upper panels) and of the mean seasonal temperatures (lower panels) for the 30-year climatic periods of the 21st century in Russia and Finland according to SRES A1B (ECHAM5-MPIOM).
Fig. 9. Changes of the annual precipitation sum (upper panels) and of the mean seasonal precipitation sums (lower panels) for the 30-year climatic periods of the 21st century in Russia and Finland according to SRES A1B (ECHAM5-MPIOM).
Fig. 10. Changes of the seasonally averaged daily maximal windspeed (Vmax) for the 30-year climatic periods of the 21st century in Russia and Finland according to SRES A1B.
sites are not easily explained. In Russia the maximum of windthrow falls in the period 2011-2040, where the maximal increase of the mean windspeed is observed. The maximal increase of windspeed in Finland is also observed in 2011-2040; however, the maximal damage risk is projected for 2041-2070. The main reason is the higher number of extreme storm events during 2041-2070, although the mean Vmax is almost as high as in 2011-2040. Additionally, the joint effect of increased precipitation and temperature, which are considerably higher than in 2011-2040, contributes to the difference. The subsequent decrease of wind damage towards 2100 in Finland is caused by a reduction of windstorms compared to the previous period. Although the number of storms during the period 2071-2100 is similar to the reference period, the slightly higher mean Vmax and the considerably higher temperature and precipitation result in roughly twice the damage during 2071-2100 compared to P0. The analysed dynamics of wind damage demonstrate that, to capture such joint effects of several climatic parameters, the system has to use a coupled process-based approach.
[Figure 11 comprises four panels: windthrow [%] (upper panels) and feedback contribution [%] (lower panels) for spruce and pine stands in Finland and Russia over the periods P0 to P3.]
Fig. 11. Changes of the windthrow risks (upper panels) and of the feedback contribution to wind damage (lower panels) aggregated for the 30-year climatic periods of the 21st century in Russia and Finland according to SRES A1B. Periods P0: 1981-2010, P1: 2011-2040, P2: 2041-2070, P3: 2071-2100.
The lower panels of Fig. 11 show the contribution of the feedbacks to the initial damage, i.e. the additional damage produced in the model when the vegetation damaged by the initial windthrow event is “removed” from the modelled stand as described in the previous sections. Comparing the upper and lower panels, as well as the Finnish site to the Russian one, one can again see the non-linear dependence of the damage risks on the combinations of effective factors. Generally, the magnitude of the feedbacks is proportional to the magnitude of the initial damage. Therefore the feedback contribution in Finland is higher than in Russia. However, the complexity of the influencing agents might produce a response that is higher or lower than would be expected from a single factor. For instance, the damages in P0: 1981-2010 and P1: 2011-2040 (Finland) are almost of the same magnitude (for pine even lower in 2011-2040); however, due to the higher windspeed in 2011-2040, the same or lower initial damage produces more extensive consequences. The same holds for the Russian site when comparing the periods P2: 2041-2070 and P3: 2071-2100. Generally, the contribution of the feedbacks to the initial damage in the modelled stands is almost negligible compared to the real conditions presented in Section 4.1. The main reason is the coarse resolution of the global circulation model (ECHAM5-MPIOM), which is not really suitable for the evaluation of wind damage events at the stand level. Due to the wide-area averaging it considerably underestimates the small-scale variations of windspeed caused by local surface heterogeneities and, therefore, the wind gustiness and Vmax. In the present study it was used for purely demonstrative purposes. For any practical use of the DSS with climate projections, downscaled scenarios should be implemented.
5. Conclusions
The additional damage caused by forest structural changes resulting from previous damage or forest management activities might contribute considerably to the projected forest risks, and it is advisable to take it into account in a decision support process. It should be emphasized that the degree of the contribution, as well as the sign of the feedback, will in each particular case strongly depend on the local or regional combination of climatic and soil conditions with tree species, age and structure. The DSS user should be aware that one kind of damage might strengthen (windthrow -> windthrow) or weaken (windthrow -> drought) the contribution of other damage factors. Although the projected total amount of damaged forest might even remain the same, the losses would be redistributed between the damage factors, which is extremely important information for a forest owner looking for decision support. Therefore, an adequate evaluation of the projected risks in forestry requires coupled modelling capable of considering the combined effect of the risk factors.
The DSS user should also be aware that, although the system is quite advanced, it has certain limitations. The main limitations are the following. In its present form the system is applicable in Germany only, as the empirical functions were derived there. The system works with monospecific stands only. The user should also take into account that the climate projections are not forecasts and that the skill of the estimation of future risks is limited by the uncertainties of the climate scenarios.
6. Acknowledgment
The development of DSS-WuK was supported by the German Ministry for Education and Research (Program klimazwei); we gratefully acknowledge this support.
7. References
Ahrends, B.; Jansen, M. & Panferov, O. (2009). Effects of windthrow on drought in different
forest ecosystems under climate change conditions. Ber. Meteor. Insti. Univ. Freiburg,
Vol. 19, pp. 303-309. ISSN 1435-618X
Ahrends, B.; Meesenburg, H., Döring, C. & Jansen M. (2010a). A spatio-temporal modelling
approach for assessment of management effects in forest catchments. Status and
Perspectives of Hydrology in Small Basins, IAHS Publ. Vol. 336, pp 32-37. ISBN 978-
1-907161-08-7
Ahrends, B.; Penne, C. & Panferov, O. (2010b). Impact of target diameter harvesting on
spatial and temporal pattern of drought risk in forest ecosystems under climate
change conditions. The Open Geography Journal, Vol. 3, pp. 91-102. ISSN 1874-9232
Albert, M. & Schmidt, M. (2010). Climate-sensitive modelling of site-productivity
relationships for Norway spruce (Picea abies (L.) Karst.) and common beech (Fagus
sylvatica L.), Forest Ecology and Management, Vol. 259, No. 4, (February 2010), pp.
739-749, ISSN 0378-1127.
Alveteg, M.; Walse, C. & Warfvinge, P. (1998). Reconstructing historic atmospheric
deposition and nutrient uptake from present day values using MAKEDEP. Water,
Air, and Soil Pollution, Vol. 104, No. 3-4, (March 1998), pp. 269-283, ISSN 0049-6979.
Brooks, R. H. & Corey A. T. (1966). Properties of porous media affecting fluid flow. J.
Irrigation and Drainage Div., Proc. Am. Soc. Civil Eng. (IR2), Vol. 92, pp. 61-87. ISSN
0097-417X
Clapp, R. B. & Hornberger, G. M. (1978). Empirical equations for some soil hydraulic
properties. Water Resources Research, Vol. 14, No. 4, pp. 601-603. ISSN 0043-1397
Czajkowski, T.; Ahrends B. & Bolte A. (2009). Critical limits of soil water availability (CL-
SWA) in forest trees - an approach based on plant water status. vTI agriculture and
forest research, Vol. 59, No. 2. pp. 87-93. ISSN 0458-6859
FAO (1990). Soil Map of the world. Vol 1: Legend. World soil resources report, Rome. pp 119.
ISSN 92-5-103022-7
FAO (2003). http://www.fao.org/ag/agl/agll/wrb/soilres.stm
Federer, C. A. (1995). BROOK90: A simulation model for evaporation, soil water and stream
flow, Version 3.1. Computer Freeware and Documentation. USDA Forest Service, PO
Box 640, Durham NH 03825, USA.
Federer, C. A.; Vörösmarty C. & Feketa, B. (2003). Sensitivity of annual evaporation to soil
and root properties in two models of contrasting complexity. J. Hydrometeorology,
Vol. 4, pp. 1276-1290. ISSN 1525-755X
Feisthauer, S. (2010). Modellierte Auswirkungen von Kleinkahlschlägen und Zielstärken-
nutzung auf den Wasserhaushalt zweier Fichtenreinbestände mit LWF-BROOK90.
Georg-August University, Master's Thesis, Göttingen. pp. 118, unpublished
Fröhlich, D. (2009). Raumzeitliche Dynamik der Parameter des Energie-, Wasser- und
Spurengashaushalts nach Kleinkahlschlag. PhD, Berichte der Fakultät für Forstwissen-
schaften und Waldökologie, Georg-August Universität, Göttingen, pp. 116
Gauger, T.; Haenel, H.-D., Rösemann, C., Nagel, H.-D. , Becker, R., Kraft, P., Schlutow, A.,
Schütze, G. , Weigelt-Kirchner, R. & Anshelm, F. (2008). Nationale Umsetzung
UNECE-Luftreinhaltekonvention (Wirkung). Abschlussbericht zum UFOPLAN-
Vorhaben FKZ 204 63 252. Im Auftrag des Umweltbundesamtes, gefördert vom
Bundesministerium f. Umwelt, Naturschutz und Reaktorsicherheit. Dessau-
Rosslau.
Granier, A.; Reichstein, M., N. Breda, I. A. Janssens, E. Falge, P. Ciais, T. Grunwald, M.
Aubinet, P. Berbigier, C. Bernhofer, N. Buchmann, O. Facini, G. Grassi, B. Heinesch,
H. Ilvesniemi, P. Keronen, A. Knohl, B. Koster, F. Lagergren, A. Lindroth, B.
Longdoz, D. Loustau, J. Mateus, L. Montagnani, C. Nys, E. Moors, D. Papale, M.
Peiffer, K. Pilegaard, G. Pita, J. Pumpanen, S. Rambal, C. Rebmann, A. Rodrigues,
G. Seufert, J. Tenhunen, T. Vesala & Wang O. (2007). Evidence for soil water control
on carbon and water dynamics in European forests during the extremely dry year:
2003. Agr. and Forest Meteorology, Vol. 143, No 1-2. pp. 123-145. ISSN 0168-1923
Hammel, K. & Kennel, M. (2001). Charakterisierung und Analyse der Wasserverfügbarkeit
und des Wasserhaushalts von Waldstandorten in Bayern mit dem
Simulationsmodell BROOK90. Forstliche Forschungsberichte München, 185. Heinrich
Frank. München. pp 148. ISBN 0174-1810
IPCC (2000) - Nakicenovic N. and Swart R. (Eds.) Special Report Emission Scenarios,
Cambridge University Press, UK. pp 570, ISBN: 92-9169-113-5
Jansen, M.; Döring, C., Ahrends, B., Bolte, A., Czajkowski, T., Panferov, O., Albert, M.,
Spellmann, H., Nagel, J., Lemme, H, Habermann M, Staupendahl K, Möhring B,
Böcher M, Storch S, Krott M, Nuske R., Thiele, J. C., Nieschulze, J., Saborowski, J. &
Beese, F. (2008). Anpassungsstrategien für eine nachhaltige Waldbewirtschaftung
unter sich wandelnden Klimabedingungen – Entwicklung eines
Entscheidungsunterstützungssystems "Wald und Klimawandel" (DSS-WuK).
Forstarchiv Vol. 79, No. 4, (July/August 2008), pp. 131-142, ISSN 0300-4112
Jarvis, A.; Reuter, H.I., Nelson, A., Guevara, E. (2008). Hole-filled SRTM for the globe
Version 4, available from the CGIAR-CSI SRTM 90m Database
http://srtm.csi.cgiar.org.
Olchev, A.; Radler, K., Panferov, O., Sogachev A., & Gravenhorst G. (2009). Application of a
three-dimensional model for assessing effects of small clear-cuttings on radiation
and soil temperature, Ecological Modelling, Vol. 220, pp. 3046-3056. ISSN 0304-3800
Panferov, O. & Sogachev, A. (2008). Influence of gap size on wind damage variables in a
forest, Agric. For. Meteorol., Vol. 148, pp 1869–1881, ISSN: 0168-1923
Panferov, O.; Doering, C., Rauch, E., Sogachev, A., & Ahrends B. (2009). Feedbacks of
windthrow for Norway spruce and Scots pine stands under changing climate.
Environ. Res. Lett., Vol. 4, doi:10.1088/1748-9326/4/4/045019. ISSN 1748-9326
Panferov, O.; Sogachev, A. & Ahrends, B. (2010). Changes of forest stands vulnerability to
future wind damage resulting from different management methods. The Open
Geography Journal, Vol. 3, pp. 80-90. ISSN 1874-9232
Peltola, H.; Kellomäki, S. & Väisänen, H. (1999). Model computations on the impact of climatic change on the windthrow risk of trees. Climatic Change, Vol. 41, No. 1, pp. 17-36. ISSN 1573-1480
Reynolds, K.M.; Twery, M.; Lexer, M.J.; Vacik, H.; Ray, D.; Shao, G. & Borges, J.G. (2008).
Decision Support Systems in Forest Management. In: Handbook on Decision Support
Systems 2, Frada Burstein, F. & Holsapple, C.W., pp. 499-533, Springer, ISBN 978-3-
540-48715-9, Berlin Heidelberg.
Roeckner, E.; Brokopf, R.; Esch, M.; Giorgetta, M.; Hagemann, S.; Kornblueh, L.; Manzini,
E.; Schlese, U. & Schulzweida, U. (2006). Sensitivity of simulated climate to
horizontal and vertical resolution in the ECHAM5 atmosphere model, Journal of
Climate, Vol. 19, No. 16, (August 2006), pp. 3771-3791, ISSN 0894-8755.
Richter, A.; Adler, G.H., Fahrak, M., Eckelmann, W. (2007). Erläuterungen zur
nutzungsdifferenzierten Bodenübersichtskarte der Bundesrepublik Deutschland im
Maßstab 1:1000000 (BÜK 1000 N, Version 2.3). Hannover. ISBN 978-3-00-022328-0
Sogachev, A.; Menzhulin, G., Heimann, M., Lloyd, J. (2002). A simple three dimensional
canopy - planetary boundary layer simulation model for scalar concentrations and
fluxes. Tellus B, Vol. 54, pp. 784–819. ISSN 0280-6509
Sogachev, A. and Panferov, O. (2006). Modification of two-equation models to account for
plant drag. Boundary Layer Meteorology. Vol. 121. pp 229-266. ISSN 0006-8314
Sogachev, A., Panferov, O., Ahrends, B., Doering, C., and Jørgensen, H. E., (2010). Influence
of abrupt forest structure changes on CO2 fluxes and flux footprints, Agricultural
and Forest Meteorology, doi:10.1016/j.agrformet.2010.10.010, ISSN: 0168-1923