
International Journal of Industrial Engineering, 19(1), 1-13, 2012.

INTERNET USER RESEARCH IN PRODUCT DEVELOPMENT: RAPID AND LOW COST DATA COLLECTION
A. Shekar and J. McIntyre
School of Engineering and Advanced Technology
Massey University, Auckland
New Zealand
Small to Medium Enterprises (SMEs) face enormous financial risks when developing new products. A key element
of risk minimization is an early emphasis on gathering information about the end users of the product quickly. SMEs
are often overwhelmed by the prospect of expected research costs, lack of expertise, and financial pressures to rush to
market. Too often the more conventional path is chosen, whereby a solution is developed and tested in the market to
see if it sticks. Such methodologies are less effective and subject the SME to increased financial risk. This study
demonstrates how SMEs can make use of freely available internet resources to reproduce aspects of more
sophisticated customer research techniques. Internet resources such as YouTube and online forums enable SMEs to
research customers rapidly and in a cost-effective manner. This study examines New Zealand SMEs and presents
two case studies to support the use of modern web-based user research in new product development.
Keywords: product development, user information, web research, New Zealand
(Received 27 October 2010; Accepted in revised form 24 June 2011)

1. INTRODUCTION
Small and Medium Enterprises (SMEs) are a large and vital component of most developed nations' economies. The
prevalence of such firms is so large that in sectors such as manufacturing, their numbers often dominate the economic
landscape (Larsen and Lewis 2007). Their accrued success contributes substantially to employment, exports, and
Gross Domestic Product (GDP). The sheer quantity of firms and their individual contributions build flexibility and
robustness into a nation's economy. Governments generally recognize this fact (Massey 2002) and support
innovation in SMEs through funding research and incentive programs.
The ability to launch new products and services is a critical element of success for all companies, large and small.
Launching a new product or service is often the most significant financial risk a firm has faced since its
inception. New product launches are typically characterized by large expenditures associated with research,
production tooling, marketing and promotions. The successful recovery of expenditures and the prospect of
generating profits depend entirely upon the product's success in the consumer marketplace. The losses incurred from
a failed product can be devastating for the small organisation. In one study of SMEs based in the Netherlands, 40% of
firms were found not to survive their first 5 years in business (Vos, Keizer et al. 1998). Surveys of NZ SMEs indicate
that the risks are well understood; however, NPD is still identified as a weakness within their organisations (McGregor
and Gomes 1999).
1.1 SME Challenges and New Product Development
Innovation poses inherent risks, yet remains an essential activity of businesses both large and small (Boag and
Rinholm 1989). While SMEs are typically described as being more entrepreneurial risk-takers than their larger
counterparts, in reality their situation may be more precarious. Small businesses are often more sensitive to the risks
of new product development (NPD) activities due to limited financial resources. Indeed, an unsuccessful product
introduction can spell disaster for the small business.
While structured approaches have been successfully implemented in larger firms, smaller organisations are found to
be less enthusiastic about incorporating them and struggle to adopt and make use of them (Enright 2001). The
reasons for this are varied and not well understood. Many SMEs operate without the benefit of academic partnerships
and may simply not be aware of the information available. Others may recognize that structured NPD approaches
generally cater to the specific needs of larger firms and the results may impose unnecessary bureaucracy on the
smaller organisations.
It is generally recognized that smaller firms are distinct in both principle and practice from their larger counterparts.
Successful large firms deal efficiently with multiple project ideas, communications involving large numbers of
participants, and documentation to retain and share corporate knowledge. Smaller firms participating in the NPD
process face different challenges. SMEs typically address smaller numbers of projects, involving fewer participants,
and enjoy opportunities for more frequent face-to-face communications. Challenges to successful NPD efforts are the
results of operating constraints and the culture found within smaller organisations. A partial summary of the unique
issues faced by SMEs is presented in Table 1.

International Journal of Industrial Engineering, 19(1), 14-25, 2012.

AN INVESTIGATION OF INTERNAL LOGISTICS OF A LEAN BUS ASSEMBLY SYSTEM VIA SIMULATION: A CASE STUDY
Aric Johnson1, Patrick Balve2, and Nagen Nagarur1
1 Department of Systems Science and Industrial Engineering
Binghamton University
P.O. Box 6000
Binghamton, NY 13902-6000, USA
2 Production and Logistics Department
Heilbronn University
Max-Planck-Straße 39, 74081 Heilbronn, Germany

Corresponding author's email: Aric Johnson, [email protected]


This study involves the internal logistics of a chosen bus assembly plant that follows a lean assembly process dictated by
takt time production. The assembly system works according to a rigid sequential order of assembly of different models of
buses, called the String of Pearls. The logistics department is responsible for supplying kitted components to assembly
workstations for the right model at the right time. A simulation model was developed to study this assembly system, with
an objective of finding the minimum number of kit carts for multiple production rates and kitting methods. The
implementation of JIT kitting was the ultimate goal in this case. The research focused on a specific assembly plant and
therefore, the numerical results are applicable to the selected plant only. However, some of the trends in the output may be
generalized to any assembly plant of similar type.
Significance: This study illustrates the use of simulation to plan further lean transformation within a major bus assembly
plant. This assembly plant had recently transformed its assembly operations according to lean principles with much
success. The next step was to transform the logistical support to this system, and this was planned via simulation. This
paper makes an original contribution to this area of research, and to the best of the authors' knowledge such a work has not
been published so far.
Keywords: Bus assembly, kitting, takt time, simulation, internal logistics, JIT
(Received 21 March 2011; Accepted in revised form 12 March 2012)

1. INTRODUCTION
Automotive industries, including bus assemblies, have been forced to cut costs to remain competitive in a global
environment. For customers, price is often an important criterion, and so automotive plants strive to cut costs, while at the
same time struggle to improve their throughput. The industry has mostly adopted lean manufacturing methods as the means
of reducing costs and increasing throughput. Auto plants typically follow an assembly-line type of manufacturing, in which
all the operations are done in stations or cells connected sequentially with a set of operations assigned for each station. This
is because there are a large number of operations that need to be completed to produce a finished automobile; breaking the
operations into stations allows the system to operate more efficiently and at a much faster rate. Most plants also implement
a balanced assembly line of workstations that allows assemblies to flow through the system at a specific, predetermined
rate, termed takt time. This balanced, sequential workstation design promotes a smooth flow throughout the plant.
However, this type of system then inherits a new challenge of physically getting the required parts to the workstations on
time. This problem can be described as a problem of internal logistics between parts storage (warehouse) and the many
workstations. A well-coordinated logistics system is vital since a single workstation that does not receive its required
parts/components on time results in delaying the entire assembly line. An assembly plant operating at a takt time production
rate has little or no slack built into its schedule. Hence, getting the required parts/components to the right workstation at the
right time is critical in this setting.
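As a rough, back-of-envelope illustration of the cart-sizing question that the simulation study addresses (this is not the authors' model, and every figure below is hypothetical), one can bound the number of kit carts needed when each workstation must receive one kit per takt cycle:

```python
import math

# Back-of-envelope sketch (not the authors' simulation model): bound the
# number of kit carts needed when every workstation must receive one kit
# per takt cycle. All parameter values are hypothetical illustrations.
TAKT = 20.0          # takt time per cycle (minutes)
N_STATIONS = 12      # workstations, each consuming one kit per takt
ROUND_TRIP = 6.0     # cart round trip: warehouse -> station -> warehouse

def carts_needed(takt, n_stations, round_trip):
    # One cart completes takt / round_trip deliveries per cycle, so the
    # fleet must jointly cover n_stations deliveries per cycle.
    trips_per_cart = takt / round_trip
    return math.ceil(n_stations / trips_per_cart)

print(carts_needed(TAKT, N_STATIONS, ROUND_TRIP))  # -> 4 carts
```

The simulation study refines this kind of bound by capturing multiple production rates, different kitting methods, and the model mix imposed by the String of Pearls sequence, none of which a closed-form estimate can reflect.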
One internal supply strategy would be to stage required parts at the workstations and replenish them as necessary. This is
often not feasible in bus assembly. For one thing, the parts may be of large size and storage of such parts at a workstation
may be prohibitive. In addition, if the line is producing multiple models, storing all the combinations of parts makes it more
complex and tends to become more error prone. Hence, the standard practice under such situations is to let the product flow
through the stations and have the parts/components for an individual assembly be brought to the appropriate workstation at
the exact time they are needed. The majority of the parts/components are stored in a warehouse, and the required set of

International Journal of Industrial Engineering, 19(1), 26-32, 2012.

RESEARCH-BASED ENQUIRY IN PRODUCT DEVELOPMENT EDUCATION: LESSONS FROM SUPERVISING UNDERGRADUATE FINAL YEAR PROJECTS
A. Shekar
School of Engineering and Advanced Technology
Massey University, Auckland
New Zealand

This paper presents an interesting perspective on enquiry-based learning by engineering students through a project and
research-based course. It discusses the lessons learned from coordinating and supervising undergraduate research-based
project courses in Product Development engineering, at Massey University in New Zealand. Research is undertaken by
students at the start and throughout the development project in order to understand the background and trends in the
literature and incorporate them in their projects. Further research is undertaken into the product's technologies, the
problem and motivation behind the development, as well as the context and user environment.
The multi-disciplinary nature of product development requires students to research widely across
disciplinary borders, and then to integrate the results towards the goals of designing a new product and producing journal-style research
papers. The Product Development process is a research-based decision-making process and one that needs an enquiring
mind and an independent learning approach, as often the problems are open-ended and ill-defined. Both explicit and
tacit knowledge are gained through this action-research methodology of learning. Tacit knowledge is gained through
the hands-on project experience, experimentation, and learning by doing.
Several implications for educators are highlighted, including the need for a greater emphasis on self-learning through
research and hands-on practical experience, the importance of developing student research skills, and the value of
learning from peer interaction.
Keywords: Product development, research-based enquiry, project-based learning.
(Received 1 May 2009; Accepted in revised form 1 June 2010)

1. INTRODUCTION
Engineering design programs are increasingly aware that the project-based approach results in the development of
competencies that are expected by employers (DeVere, 2010). One of these competencies is independent research
skills and learning. Several new design-engineering programs have emerged and many see the need for engineers to
demonstrate design and management (business) thinking in addressing product design problems. Most of these
programs build the curriculum by combining courses from business, design and engineering faculties, leaving the
integration to the students. We have found that this integration does not take place well. Students often tend to
compartmentalise papers and do not appreciate the links between them; moreover, lecturers from other departments are
sometimes not aware of how engineers may use the material they cover, and hence may not provide relevant examples.
Project-based learning is an attempt to address this issue.
A broad definition of project-based learning (PBL) given by Prince and Felder is:
"Project-based learning begins with an assignment to carry out one or more tasks that lead to the production of a final
product: a design, a model, a device or a computer simulation. The culmination of the project is normally a written
and/or oral report summarizing the procedure used to produce the product and presenting the outcome."
In practice, many engineering education activities developed on the basis of inductive instructional methods (active
research, inquiry-led learning and problem-based learning) focus on a fixed deliverable and therefore fall within this
definition of PBL.
Massey University is currently reorganizing the curriculum towards overcoming the gap between theory and practice,
the lack of good integration of disciplines and taking on a more student-centred approach to learning. Students follow
courses in engineering sciences, physics, mathematics, statistics and the like; however, in tackling practical design
projects, they fail to apply this knowledge to the extent that their design would benefit. The new curriculum proposes to
have more project-based learning and less of the traditional "chalk and talk" teacher-centred approach in all of the
majors offered. This approach follows worldwide trends in engineering education, has already been practiced with
success within the current product development major, and hence is presented in this paper.


International Journal of Industrial Engineering, 19(1), 33-46, 2012.

A HYBRID BENDERS/GENETIC ALGORITHM FOR VEHICLE ROUTING AND SCHEDULING PROBLEM
Ming-Che Lai1, Han-Suk Sohn2, Tzu-Liang (Bill) Tseng3, and Dennis L. Bricker4
1 Department of Marketing and Logistics Management, Yu Da University, Miao-Li County 361, Taiwan
2 Dept. of Industrial Engineering, New Mexico State University, Las Cruces, NM 88003, USA
3 Dept. of Industrial Engineering, University of Texas, El Paso, TX 79968, USA
4 Dept. of Mechanical and Industrial Engineering, The University of Iowa, Iowa City, IA 52242, USA
Corresponding author: Han-Suk Sohn, [email protected]

This paper presents an optimization model and its application to a classical vehicle routing problem. The proposed model is
solved effectively by a hybrid Benders/genetic algorithm, which is based on the solution framework of the Benders
decomposition algorithm together with a genetic algorithm to reduce the computational difficulty. The
applicability of the hybrid algorithm is demonstrated in the case study of the Rockwell Collins fleet management plan.
The results demonstrate that the model is a practical and flexible tool in solving realistic fleet management planning
problems.
Keywords: Vehicle Routing, Hybrid Algorithm, Genetic Algorithm, Benders Decomposition, Lagrangian Relaxation,
Mixed-integer programming.
(Received 9 June 2011; Accepted in revised form 28 February 2012)

1. INTRODUCTION
The vehicle routing problem (VRP) involves a number of delivery customers to be serviced by a set of identical vehicles at
a single home depot. The objective of the problem is to find a set of delivery routes such that all customers are served
exactly once and the total distance traveled or time consumed by all vehicles is minimized, while at the same time the sum
of the demanded quantities in any routes does not exceed the capacity of the vehicle. The VRP is one of the most
challenging combinatorial optimization problems and it was first introduced by Dantzig and Ramser (1959). Since then, the
VRP has stimulated a large amount of research in the operations research and management science community (Miller,
1995). There are substantial numbers of heuristic solution algorithms proposed in the literature. Early heuristics for this
problem are those of Clarke and Wright (1964), Gillett and Miller (1974), Christofides et al. (1979), Nelson et al. (1985),
and Thompson and Psaraftis (1993). A number of more sophisticated heuristics have been developed by Osman (1993),
Thangiah (1993), Gendreau et al. (1994), Schmitt (1994), Rochat and Taillard (1995), Xu and Kelly (1996), Potvin et al.
(1996), Rego and Roucairol (1996), Golden et al. (1998), Kawamura et al. (1998), Bullnheimer et al. (1998 and 1999),
Barbarosoglu and Ozgur (1999), and Toth and Vigo (2003). As well, exact solution methods have been studied by many
authors. These include branch-and-bound procedures, typically with the basic combinatorial relaxations (Laporte et al.,
1986; Laporte and Nobert, 1987; Desrosiers et al., 1995; Hadjiconstantinou et al., 1995) or Lagrangian relaxation (Fisher,
1994; Miller, 1995; Toth and Vigo, 1997), branch-and-price procedures (Desrochers et al., 1992), and branch-and-cut
procedures (Augerat et al., 1995; Ralphs, 1995; Kopman, 1999; Blasum and Hochstättler, 2000).
Unlike in many other mixed-integer linear programming applications, however, the Benders decomposition algorithm has not
been successful in this problem domain because of the difficulty of solving the master system. In mixed-integer linear
programming problems, where Benders algorithm is most often applied, the master problem selects values for the integer
variables (the more difficult decisions) and the subproblem is a linear programming problem which selects values for the
continuous variables (the easier decisions). For the VRP problem, the master problem of Benders decomposition is more
amenable to solution by a genetic algorithm (GA) which searches the solution space in parallel fashion. The fitness
function of the GA is, in this case, evaluated quickly and simply by evaluating a set of linear functions. In this short paper,
therefore, a hybrid algorithm is presented in order to overcome the difficulty in implementing the Benders decomposition
for the VRP problem. It is based on the solution framework of Benders decomposition algorithm, together with the use of
GA to effectively reduce the computational difficulty. The rest of this paper is organized as follows. In section 2 the
classical vehicle routing problem is presented. The application of the hybrid algorithm is described in section 3. In Section
4, a case study on the fleet management planning of Rockwell Collins, Inc. is presented. Some concluding remarks are
presented in Section 5. Finally, Section 6 lists references used in this paper.
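The hybrid idea can be sketched as follows (a hedged illustration, not the paper's exact formulation: the costs, cut coefficients, and GA operators below are invented stand-ins). A GA searches the binary master-problem vectors, and each candidate's fitness is the Benders master objective, obtained by evaluating the accumulated linear (Benders) cuts:

```python
import random

# Hedged sketch of the hybrid scheme, not the paper's exact formulation:
# a GA searches binary master-problem vectors y, and each candidate's
# fitness is the Benders master objective, i.e. the fixed cost of y plus
# the maximum over the accumulated linear (Benders) cuts. The costs and
# cut coefficients below are made-up illustration data.
random.seed(0)
N = 8                                   # binary routing/assignment decisions
fixed_cost = [random.uniform(1, 5) for _ in range(N)]
# Each cut (d_k, e_k) bounds the subproblem value: theta >= d_k + e_k . y
cuts = [(random.uniform(0, 3), [random.uniform(-1, 1) for _ in range(N)])
        for _ in range(5)]

def fitness(y):
    master = sum(c * yi for c, yi in zip(fixed_cost, y))
    theta = max(d + sum(e * yi for e, yi in zip(ek, y)) for d, ek in cuts)
    return master + theta               # value to minimize

def mutate(y, rate=0.1):
    return [1 - yi if random.random() < rate else yi for yi in y]

pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(20)]
for _ in range(50):                     # simple (mu + lambda) evolution
    pop += [mutate(random.choice(pop)) for _ in range(20)]
    pop = sorted(pop, key=fitness)[:20] # keep the fittest candidates

print(min(map(fitness, pop)))           # best master-problem value found
```

Because each fitness evaluation reduces to a handful of inner products, the GA can screen many candidate master solutions per generation, which is exactly the property the paragraph above attributes to the hybrid approach.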


International Journal of Industrial Engineering, 19(1), 47-56, 2012.

A NEW TREND BASED APPROACH FOR FORECASTING OF ELECTRICITY DEMAND IN KOREA
Byoung Chul Lee, Jinsoo Park, Yun Bae Kim
Department of Systems Management Engineering, Sungkyunkwan University, Suwon, Republic of Korea
Corresponding author: Yun Bae Kim, [email protected]
Many forecasting methods for electric power demand have been developed. In Korea, however, these kinds of methods do
not work well. A peculiar seasonality in Korea increases the forecasting error produced by previous methods. Two big
festivals, Chuseok and Seol, also produce forecasting errors. Therefore, a new demand forecasting model is required. In this
paper, we introduce a new model for electric power demand forecasting which is appropriate to Korea. We start the
research using the concept of weekday average. The final goal is to forecast hourly demand for both the long and short term.
We finally obtain results with an accuracy of over 95%.
Keywords: Demand forecasting, electric power, moving average
(Received 7 April 2010; Accepted in revised form 24 June 2011)

1. INTRODUCTION
There have been many studies related to forecasting electric demand. These studies have contributed to achieving greater
accuracy. Shahidehpour et al. (2002) introduced market operation in electric power systems. Price modeling for electricity
markets was described by Bunn (2004). Kawauchi et al. (2004) developed a forecasting method based on conventional
chaos theory for short term forecasting. Gonzalez-Romera et al. (2007) used neural network theory, Oda et al. (2005)
forecasted demand with regression analysis, and Pezzulli et al. (2006) focused on seasonal forecasting with a Bayesian
hierarchical model.
These attempts, while valuable, are inappropriate for Korea, which has four distinct seasons, each with its own
features, such as a cycle of three cold days and four warm days in winter. In addition, Korean demand follows a weekly
cycle. There are thus two sources of seasonality, a seasonal factor and a weekly factor, and the previous
methods are not suitable for this double seasonality.
To examine double seasonality, we analyzed past data to determine properties of Korean electric demand. Using these
properties, we defined a new concept, the weekday average (WA), and developed models for forecasting hourly demand of
electric power in Korea.
The organization of this paper is as follows. In Section 2, the concept of the WA is defined for each of the 24 hours as the first step in
forecasting hourly demand. In Section 3, we deal with the methods of forecasting WA and non-weekday demand, including
holidays and festivals. We apply our model to the actual demand data and show the results in Section 4. We conclude the
research and suggest further studies in Section 5.

2. CONCEPT OF WEEKDAY AVERAGE


We found two special properties related to the hourly demand of electric power in Korea; one is the character of weekdays,
and the other is a connection between weekdays and non-weekdays (a weekday means the days from Tuesday to Friday).
Holidays and festival seasons are regarded as non-weekdays even though they are in a weekday period. The demands
during each weekday are very similar to one another at the same hour; this is the first property. However, the demands of
Monday and weekends are less than those of weekdays by an invariable ratio; this is the second property. Therefore, our
research starts by developing a method for forecasting the hourly demand of weekdays. We then find the relation between
weekdays and non-weekdays.
Let us define the hourly demand:

$D_n^i(h)$ : demand from $(h-1)$:00 to $h$:00, with $h = 1, \ldots, 24$ and $i = 1, \ldots, 7$,    ... (1)

where $n$ is the number of weeks from the base week; for example, if the base week is the first week in 2007, then Dec.
31st, 2007 has the value of $n = 52$. The index $i$ is the day of the week (1 = Monday, ..., 7 = Sunday).
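To make the definition concrete, the following sketch (with made-up demand figures, not actual Korean data) computes the weekday average at one hour and the Monday ratio that relates weekdays to a non-weekday:

```python
# Worked illustration of the WA concept with made-up figures (not actual
# Korean demand data): average the Tuesday-Friday demand at a given hour,
# then use a fixed ratio to relate weekdays to a non-weekday.
hourly_demand = {                 # D_n^i(9): demand for hour h = 9, one week
    "Tue": 61.0, "Wed": 62.5, "Thu": 60.8, "Fri": 61.7,   # weekdays
    "Mon": 55.2, "Sat": 43.1, "Sun": 40.4,                # non-weekdays
}

wa_9 = sum(hourly_demand[d] for d in ("Tue", "Wed", "Thu", "Fri")) / 4
ratio_mon = hourly_demand["Mon"] / wa_9   # assumed invariable week to week

print(f"WA(9:00) = {wa_9:.1f}, Monday ratio = {ratio_mon:.3f}")
# A forecast of next Monday's 9:00 demand is then forecast-WA(9:00) * ratio_mon.
```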

International Journal of Industrial Engineering, 19(2), 57-67, 2012.

RELIABILITY EVALUATION OF A MULTISTATE NETWORK UNDER ROUTING POLICY
Yi-Kuei Lin
Department of Industrial Management
National Taiwan University of Science and Technology
Taipei, Taiwan 106, R.O.C.
Tel: +886-2-27303277, Fax: +886-2-27376344
Corresponding author: Lin, [email protected]
A multistate network is a stochastic network composed of multistate arcs, in which each arc has several possible
capacities that may vary due to failure, maintenance, etc. Different from the deterministic case, the minimum transmission
time in a multistate network is not a fixed number. We evaluate the probability that a given amount of data/commodity can
be sent from a source port to a sink port through a pair of minimal paths (MPs) simultaneously under a time constraint.
Such a probability is named the system reliability. An efficient solution procedure is first proposed to calculate it. In order
to enhance the system reliability, the network administrator decides the routing policy in advance to indicate the first and
the second priority pairs of MPs. Subsequently, we can evaluate the system reliability under the routing policy. An easy
criterion is then proposed to derive an ideal routing policy with higher system reliability. We can treat the system reliability
as a performance index to measure the transmission ability of a multistate network such as computer, logistics, urban traffic,
telecommunication systems, etc.
Keywords: Multistate network; commodity transmission; system reliability; transmission time; routing policy
(Received 1 March 2010; Accepted in revised form 27 February 2012)

1. INTRODUCTION
For a deterministic network in which each arc has a fixed length attribute, the shortest path problem is to find a path with
minimum total length. When commodities are transmitted from a source to a sink through a flow network, it is desirable to
adopt the shortest path, least cost path, largest capacity path, shortest delay path, or some combination of multiple criteria
(Ahuja, 1998; Bodin et al., 1982; Fredman and Tarjan, 1987; Golden and Magnanti, 1977), which are all variants of the
shortest path problem. From the point of view of quality management and decision making, it is an important task to reduce
the transmission time through a flow network. Hence a version of the shortest path problem, called the quickest path
problem, was proposed by Chen and Chin (1990). This problem finds a quickest path with minimum transmission time to
send a given amount of data/commodity through the network. In this problem, each arc has both a capacity and a lead
time attribute (Chen and Chin, 1990; Hung and Chen, 1992; Martins and Santos, 1997; Park et al., 2004). More specifically,
the capacity and the lead time are both assumed to be deterministic. Several variants of the quickest path problem were
thereafter proposed: the constrained quickest path problem (Chen and Hung, 1994; Chen and Tang, 1998), the first k quickest
paths problem (Chen, 1993; Chen, 1994; Clímaco et al., 2007; Pascoal et al., 2005), and the all-pairs quickest path problem
(Chen and Hung, 1993; Lee and Papadopoulou, 1993).
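For the deterministic case, the quickest path objective can be illustrated with a small sketch (the network is invented, and the continuous-flow form of the transmission time, lead time plus demand over bottleneck capacity, is assumed):

```python
# Small illustration of the quickest path objective (deterministic case,
# continuous flow): sending d units along a path costs the sum of the arc
# lead times plus d divided by the path's bottleneck capacity. The network
# below is invented for the example.
arcs = {("s", "a"): (1, 4), ("a", "t"): (2, 3),    # (lead time, capacity)
        ("s", "b"): (2, 6), ("b", "t"): (1, 5)}

def transmission_time(path, d):
    lead = sum(arcs[(u, v)][0] for u, v in zip(path, path[1:]))
    cap = min(arcs[(u, v)][1] for u, v in zip(path, path[1:]))
    return lead + d / cap

paths = [("s", "a", "t"), ("s", "b", "t")]
d = 12
best = min(paths, key=lambda p: transmission_time(p, d))
print(best, transmission_time(best, d))   # ('s', 'b', 't') 5.4, vs. 7.0
```

In the multistate setting studied in this paper, the arc capacities in such a computation become random variables, which is why the transmission time, and hence the ability to meet a time constraint T, can only be characterized probabilistically.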
However, due to failure, partial failure, maintenance, etc., each arc should be considered as multistate in many real-life
flow networks such as computer, logistics, urban traffic, telecommunication systems, etc. That is, each arc has multiple
possible capacities or states (Jane et al., 1993; Lin et al., 1995; Lin, 2003, 2004, 2007a,b, 2009; Yeh, 2007, 2008). Then the
transmission time through a network is not a fixed number if each arc has a time attribute. Such a network is named a
multistate network throughout this paper. For instance, a logistics system with each node representing the shipping port and
each arc representing the shipping itinerary between two ports is a typical multistate network. The capacity of each arc is
counted in terms of the number of containers, and is stochastic because the containers or transport means (e.g., cargo
airplanes, cargo ships, etc.) on each arc may be under maintenance, reserved by other suppliers, or in other conditions.
The purpose of this paper is to design a performance index to measure the transmission ability for a multistate network.
In order to reduce the transmission time, the data/commodity can be transmitted through several minimal paths (MPs)
simultaneously, where an MP is a sequence of arcs without loops. For convenience, we first concentrate on commodity
transmission through two MPs. We mainly evaluate the probability that the multistate network can send d units of
commodity from a source port to a sink port through a pair of MPs under the time constraint T. Such a probability is named
the system reliability, which can be treated as a performance index. Under the same time constraint and demand
requirement, the system has a better transmission ability if it attains a higher system reliability. In order to boost the
transmission ability, the network administrator decides the routing policy in advance to indicate the first and the second

International Journal of Industrial Engineering, 19(2), 68-79 2012.

DETERMINING THE CONSTANTS OF RANGE CHART FOR SKEWED POPULATIONS
Shih-Chou Kao
Graduate School of Operation and Management, Kao Yuan University,
No.1821, Jhongshan Rd., Lujhu Dist., Kaohsiung City 821, Taiwan (R.O.C.).
Corresponding author email: [email protected]
The probability of a false alarm (type I risk) in Shewhart control charts based on a normal distribution will increase
as the skewness of a process increases. Moreover, the distribution of the range is itself positively skewed, so
monitoring range values using three-sigma control limits derived from a normality assumption is unstable. Most
studies employ simulation to compute the type I risks of the range control chart for non-normal processes. To provide
an alternative, this study utilizes the probability density function of the distribution of the range to construct
appropriate control limits of a range control chart for a skewed process. The control limits of the range chart were
determined by setting the type I risk equal to 0.0027 for the standardized Weibull, lognormal and Burr distributions.
Compared with range charts based on the weighted variance (WV) and skewness correction (SC) methods and with the
traditional Shewhart chart, the proposed range chart is superior in terms of both type I and type II risks for a skewed
process. An example of the yield strength of deformed bar in coil is presented to illustrate these findings. The
computed constants of the R control chart are listed in a table that can be consulted by practitioners.
Keywords: Range chart, skewed distribution, normality, type I risk.
(Received 22 March 2010; Accepted in revised form 24 June 2011)

1. INTRODUCTION
The development of control charts became rapid and diverse after W. A. Shewhart proposed a traditional control chart.
Control charts have the superior ability for monitoring a process in manufacturing, and they have been applied
successfully in other areas, such as finance, health care and information.
The Shewhart range (R) control chart is one of the most frequently used control charts since it is easily operated and
interpreted by practitioners. In general, traditional variable control charts, such as an average and a R control charts, are
based on the normality assumption. However, many processes in industry violate this assumption. These skewed
processes involve chemical processes, cutting tool wear processes and lifetime in an accelerated life test (Bai and Choi,
1995). Moreover, the range distribution is a positively skewed one (Montgomery, 2005). If the traditional control
charts are used to monitor a non-normal process, the probability of a type I error (α) in the control charts increases as
the skewness of the process increases (Bai and Choi, 1995; Chang and Bai, 2001).
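A quick Monte Carlo check makes this inflation visible (an illustration only; the paper instead works with the exact density of the range). Applying the normal-theory R-chart constants to exponential subgroups raises the false-alarm rate by roughly an order of magnitude:

```python
import random, statistics

# Monte Carlo sketch of this claim (illustrative; the paper instead works
# with the exact density of the range): normal-theory R-chart limits are
# applied to normal and to exponential subgroups, and the type I (false
# alarm) rate is estimated. Run lengths are arbitrary choices.
random.seed(1)
N, D3, D4 = 5, 0.0, 2.114           # standard R-chart constants for n = 5

def subgroup_range(draw):
    vals = [draw() for _ in range(N)]
    return max(vals) - min(vals)

def alarm_rate(draw, subgroups=100_000):
    # Phase I: estimate R-bar from in-control data of the same process.
    rbar = statistics.mean(subgroup_range(draw) for _ in range(5_000))
    ucl, lcl = D4 * rbar, D3 * rbar
    alarms = sum(not (lcl <= subgroup_range(draw) <= ucl)
                 for _ in range(subgroups))
    return alarms / subgroups

print(alarm_rate(lambda: random.gauss(0, 1)))       # ~0.005 under normality
print(alarm_rate(lambda: random.expovariate(1.0)))  # ~0.05 under skewness
```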
Bai and Choi (1995), Chang and Bai (2001) and Montgomery (2005) considered four methods for improving the
capabilities of control charts for monitoring a skewed process. The first method increased the sample sizes on the basis
of the central limit theorem. When the samples are larger, the skewed distribution will become a normal or
approximately normal distribution. However, the method is often expensive due to sampling. The second method is to
assume that the distribution of a process is known and then to derive a suitable control chart from this known
distribution. Ferrell (1958) designed geometric midrange and range control charts for a lognormally distributed process.
Nelson (1979) proposed median, range, scale and location control charts for a Weibull distribution.
The third method is to construct the traditional control chart using approximately normal data that result from
transforming skewed data. Various criteria were proposed to transform exponential data, such as maximum likelihood
and Bayesian methods (Box and Cox, 1964), Kullback-Leibler (KL) information numbers (Hernandez and Johnson,
1980; Yang and Xie, 2000), measure of symmetry (zero skewness; Nelson, 1994), ease of use (Kittlitz, 1999) and
minimizing the sum of the absolute differences (Kao et al., 2006), to assess transformation efficiency. The shortcoming
of this method, as with the second, is that it is difficult to identify the exact distribution of a process.
The last method is to construct control charts using heuristic methods with no assumption on the form of the
distribution. Choobineh and Ballard (1987) proposed the WV method to determine the constants of average and R
charts based on the semivariance estimation of Choobineh and Branting (1986). Bai and Choi (1995) considered the
three skewed distributions (Weibull, lognormal and Burr) and determined the constants of average and R charts using
the weighted variance (WV) method by splitting a skewed distribution in two parts at the mean. Chang and Bai (2001)
determined the constants of the average control chart by replacing the variance of the WV method with a standard deviation. Chan
and Cui (2003) proposed the skewness correction (SC) method based on the Cornish-Fisher expansion (Johnson et al.,

International Journal of Industrial Engineering, 19(2), 80-89 , 2012.

PERFORMANCE MODELING AND AVAILABILITY ANALYSIS OF SOLE LASTING UNIT IN SHOE MAKING INDUSTRY: A CASE STUDY
Vikas Modgil1, S.K. Sharma2, Jagtar Singh3
1 Dept. of Mechanical Engineering, D.C.R.U.S.T., Murthal, Sonepat, Haryana, India
2 Dept. of Mechanical Engineering, N.I.T. Kurukshetra, Haryana, India
3 Dept. of Mechanical Engineering, S.L.I.E.T. Longowal, Sangrur, Punjab, India
Corresponding author: Vikas Modgil, [email protected]

In the present work, performance modelling of the sole lasting unit, a part of the shoe making industry, has been done
on the basis of a Markov birth-death process, using a probabilistic approach, in order to compute and improve the time
dependent system availability (TDSA). The Kolmogorov differential equations, based on the mnemonic rule, are
formulated from the performance model and are solved to estimate the availability of the system as a function of time,
month by month for the whole year, using a sensitive and advanced numerical technique known as the adaptive
step-size control Runge-Kutta method. The inputs for the computation of the TDSA are the existing failure and repair
rates, taken from plant maintenance history sheets. New repair rates are also devised for the purpose of maximum
improvement in availability. The findings help plant management to adopt the best possible maintenance strategies.
Knowledge of the TDSA minimizes the chance of sudden failure, assures the maximum availability of the system with
the existing equipment and machines, and exposes the critical subsystems which need more attention and due
consideration as far as maintenance is concerned. The improvement in the availability of the system is from 2% to 5%
in most months; however, it rises to 9% in the month of April. The assured increase in availability increases
productivity as well as the balance between demand and supply, so that the manufacturer delivers its product to the
market on time, which in turn increases the profit and the reputation of the industry in the market.
Keywords: Performance modelling; Time dependent system availability (TDSA); Runge-Kutta; Sole lasting; Kolmogorov differential equations; Shoe making.
(Received 25 September 2011; Accepted in revised form 27 February 2012)

1. INTRODUCTION
With increasing advancement and automation, industrial systems are getting more complex, and maintaining their
failure-free operation is thus not only costly but also difficult. Maximum availability levels are therefore desirable to
reduce the cost of production and to keep systems in working order for a long duration. Industrial operating conditions
and the repair facility also play an important role in this regard.
Several attempts have been made by various researchers and authors to find the availability of practical industrial
system using different techniques. Dhillon and Natesan (1983) examined the availability of power system in fluctuating
environment. Singh I.P. (1989) studied the reliability analysis of a complex system having four types of components
with pre-emptive priority repairs. Singh and Dayal (1992) studied the reliability analysis of a repairable system in a
fluctuating environment. Gupta et al. (2005) evaluated the reliability parameters of a butter manufacturing system in a
dairy plant, considering exponentially distributed failure rates of various components. Solanki et al. (2006) evaluated the
reliability of thermal-hydraulic passive systems using the thermal-hydraulic code RELAP 5/MOD 3.2 (which operates in
two-phase natural circulation). Rajpal et al. (2006) employed an artificial neural network for modelling the reliability,
availability and maintainability of a repairable helicopter transport facility. Kumar et al. (2007) developed a simulated
availability model for the CO2 cooling system of a fertilizer plant. Goyal et al. (2009) discussed the steady state availability
analysis of a part of a rubber tube production system under pre-emptive priority repair using the Laplace transform
technique. Garg et al. (2010) computed the availability of crank-case manufacturing in a 2-wheeler automobile industry and
of a block board system under a pre-emptive priority discipline.
In this paper, a sub-system of a practical plant, Liberty Shoes Limited, which is a continuous production system, is
taken, and the time dependent system availability of the system is estimated using an advanced and sensitive
numerical technique known as the adaptive step-size Runge-Kutta method. The earlier work carried out by most
research groups does not address this time dependent aspect of availability; it provides only the long-run or steady
state availability of the system by letting time go to infinity. The Liberty shoe making plant situated in Karnal, Haryana,
India is chosen for study. Numerical results based upon the true data collected from industry are presented to illustrate the

International Journal of Industrial Engineering, 19(2), 90-100, 2012.

SIMULATION MODELING OF OUTBOUND LOGISTICS OF SUPPLY CHAIN: A CASE STUDY OF TELECOM COMPANY
Arvind Jayant1, S. Wadhwa2, P. Gupta3, S.K. Garg4
1,3 Department of Mechanical Engineering, Sant Longowal Institute of Engg. & Technology, Longowal, Sangrur, Punjab 148106 (INDIA)
2 Department of Mechanical Engineering, Indian Institute of Technology, Delhi (INDIA)
4 Department of Mechanical Engineering, Delhi Technological University, Delhi-110042
Corresponding author: Arvind Jayant, [email protected]
The present work has been done for a telecom company, with a focus on cost and on the flexibility to deal effectively with
a changing scenario. In this paper, the major problems faced by the company at the upper end of the supply chain and at
the sales outlets are analyzed, and a complete inventory analysis of one of the company's products is performed by
developing an inventory model for the company's bound store/distribution center; an optimal inventory policy is suggested
for the outbound logistics on the basis of simulation analysis. The model is flexible enough to respond to market
fluctuations efficiently and effectively. The model is developed in Microsoft Excel.
Significance: Increasing competitive pressures and market globalization are forcing firms to develop supply chains that
can respond quickly to customer needs. The inventory model for the company's bound store/outbound
logistics has been developed and simulated to reduce operating cost and stock-outs and to make the supply chain agile.
Keywords: Supply Chain, Outbound Logistics, Information Technology, Simulation, Operating Cost, Inventory.
(Received 4 August 2010; Accepted in revised form 28 February 2012)

1. INTRODUCTION
The basis of global competition has changed. No longer are companies competing against other companies, but rather
supply chains are competing against supply chains. Indeed, the success of a business is now invariably measured neither by
the sophistication of its product nor by the size of its market share. It is usually seen in the light of the ability to sometimes
forcefully and deliberately harness its supply chain to deliver responsively to the customers as and when they demand it.
A flexible supplier-manufacturer relationship is the key enabler in supply chain management; without flexibility on
the vendor side, the supply chain cannot respond fast. Therefore, the relationship with the supplier should be flexible enough
to meet the changing market needs [2].
In this paper, several experiments were carried out on the model to visualize the impact of the various decision
variables on the total cost and then to fix the values of s and S. Graphs showing the impact of these parameters
on the performance of the individual members and of the system were plotted. Based on the system's performance under
different sets of operating decisions, we analyze the effect of the different parameters and the manner in which these
decisions affect the performance of others across the chain. The parameters whose impact was studied are the stock level
(S) and the reorder level (s); this paper deals with the impact of increases in the stock level and reorder level of the
warehouse on overall system performance [6].
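The following sketch mirrors the kind of (s, S) experiment described above in Python rather than Excel, with made-up demand, cost and lead-time figures standing in for the company's data:

```python
import random

# Sketch of the kind of (s, S) experiment described above, in Python
# rather than Excel; demand, cost and lead-time figures are made up and
# stand in for the company's data.
random.seed(7)

def avg_daily_cost(s, S, days=5000, hold=1.0, order=50.0, stockout=20.0):
    on_hand, pipeline, cost = S, [], 0.0   # pipeline: (days to arrive, qty)
    for _ in range(days):
        pipeline = [(eta - 1, q) for eta, q in pipeline]
        on_hand += sum(q for eta, q in pipeline if eta <= 0)
        pipeline = [(eta, q) for eta, q in pipeline if eta > 0]
        demand = random.randint(0, 20)
        short = max(0, demand - on_hand)
        on_hand = max(0, on_hand - demand)
        cost += hold * on_hand + stockout * short
        position = on_hand + sum(q for _, q in pipeline)
        if position <= s:                       # reorder up to S
            pipeline.append((3, S - position))  # 3-day replenishment lead
            cost += order
    return cost / days

for s, S in [(20, 60), (30, 80), (40, 100)]:    # candidate policies
    print(s, S, round(avg_daily_cost(s, S), 1))
```

Sweeping (s, S) pairs in this way and plotting the resulting average cost reproduces, in miniature, the experimental procedure the paper carries out on its Excel model.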

2. ABOUT THE PRODUCT


Bharti Teletech is a giant in the manufacture of all kinds of telephone sets for the Department of Telecommunication,
the open market and exports. The company's share in this segment is the highest in India, at 35% of the telephone
segment. The company produces seven models of telephone under the brand name Beetal.
The company is currently facing the problem of delivering the CORAL & MILL I models of phones on the scheduled
date. Though the number of shortages is small, any delivery made beyond schedule is considered a lost sales
opportunity.
The CORAL is a general model for the open market and its demand is highly uncertain; therefore frequent stock-outs
occur at the bound store and warehouse.
The forecasts generated using a 6-month average were not giving appropriate results.
The warehouse was not using any inventory policy, and its reorder level was set intuitively.

International Journal of Industrial Engineering, 19(2), 101-115, 2012.

SIMULATION-BASED OPTIMIZATION FOR RESOURCE ALLOCATION AT THIRD-PARTY LOGISTICS SYSTEMS
Yanchun Pan1, Ming Zhou2, Zhimin Chen3
1,3 College of Management, Shenzhen University, P.R. China
2 Center for Systems Modeling and Simulation, Indiana State University

Corresponding author: Ming Zhou, [email protected]

Allocating resources at third-party logistics systems differs significantly from traditional private logistics systems. The
resources are considered commodities sold to customers of different types. Total yield suffers when too much is allocated
to lower-rate or price-sensitive customers, but the resource spoils when too much is reserved for full-rate or
time-sensitive customers who do not arrive as expected. Uncertain order characteristics make the optimization of such
decisions very hard, if not impossible. In this paper we propose a simulation-based optimization approach to address these issues.
A genetic-algorithm-based optimization module is developed to generate and search for good solutions, and a discrete-event
simulation model is created to evaluate the solutions generated. The two modules are integrated to work in evolutionary
cycles to achieve the optimization. The study also compared the GA/simulation model with a more traditional approach,
response surface methodology, via designed experiments. The models were validated through experimental analysis.
Keywords: resource allocation; simulation; genetic algorithm; optimization; third-party logistics
(Received 2 September 2010; Accepted in revised form 1 March 2012)

1. INTRODUCTION
Studies on third-party logistics (TPL) systems have been thriving over the last two decades, as TPL systems gain popularity in
many parts of the world through the flexibility and convenience they provide to improve the quality and efficiency of
logistics services and customer satisfaction (Lambert et al, 1998; Bowersox et al, 2002). Resource or capacity allocation
(e.g. allocation of warehouse space for temporary storage of customer goods) at TPL systems differs significantly from
traditional private logistics system. Unlike private systems, TPL companies use public warehouses that are usually more
efficient than private ones through better productivity, shared resources, economy of scale, and transportation (delivery)
consolidation (Ackerman, 1994); and consider the resources to be allocated as commodities sold directly to different
customers repeatedly via services generated based on the resources, such as storing, handling, or transporting goods. Also
such resources are considered perishable when they are not sold at or during a period of time, i.e. they cause the loss of
possible revenue that could have been otherwise generated if they were sold (Phillips, 2005).
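The GA-generates/simulation-evaluates cycle described in the abstract can be sketched as follows (the "simulation" here is a toy stochastic revenue model, not the authors' discrete-event TPL model; the capacity, rates, and demand ranges are invented):

```python
import random

# Structural sketch of the GA-generates / simulation-evaluates cycle the
# abstract describes. The "simulation" here is a toy stochastic revenue
# model, not the authors' discrete-event TPL model; capacity, rates and
# demand ranges are invented. x is the fraction of warehouse capacity
# reserved for full-rate (time-sensitive) customers.
random.seed(3)
CAP, FULL_RATE, DISC_RATE = 100, 10.0, 6.0

def simulate(x, reps=300):
    reserved = int(CAP * x)
    total = 0.0
    for _ in range(reps):
        disc = random.randint(40, 120)          # discounted (advance) demand
        full = random.randint(10, 60)           # full-rate (late) demand
        sold_disc = min(disc, CAP - reserved)   # spill risk if x too low
        sold_full = min(full, CAP - sold_disc)  # spoilage risk if x too high
        total += DISC_RATE * sold_disc + FULL_RATE * sold_full
    return total / reps                         # expected revenue estimate

pop = [random.random() for _ in range(12)]      # initial population of x
for _ in range(30):                             # evolutionary cycles
    pop += [min(1.0, max(0.0, p + random.gauss(0, 0.1))) for p in pop]
    pop = sorted(pop, key=simulate, reverse=True)[:12]
print(round(pop[0], 2), round(simulate(pop[0]), 1))  # best x, its revenue
```

The design choice mirrors the paper's architecture: because the objective is only observable through noisy simulation runs, a population-based search such as a GA is a natural fit, whereas gradient-based optimization would have nothing exact to differentiate.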
As in airline or hospitality industries, there are mainly two types of customer demands, and accordingly two different
approaches for allocating resources to customer orders. First, many customers prefer to place their orders a period of
time in advance, expecting a discounted rate of service. Once allocated, the chunk of resource is locked in and subtracted
(from available stock) for the usage period of the order, which is a time period during which the allocated resource is
consumed to generate service for the order. This type of customer is price-sensitive. The risk of over-allocating resource to
this kind of order is that we may lose opportunities to serve more profitable full-rate customers (or customers willing to
pay higher rates). This is known as the risk of "spill" (Humphreys, 1994; Phillips, 2005). On the other hand, there are
customers who are less price-sensitive, but more time-sensitive, i.e. they place orders often at a time very close to (or at) the

International Journal of Industrial Engineering, 19(3), 117-127, 2012.

TRACKING AND TRACING OF LOGISTICS NETWORKS: PERSPECTIVE OF REAL-TIME BUSINESS ENVIRONMENT
AHM Shamsuzzoha and Petri T Helo
Department of Production
University of Vaasa, PO BOX 700, FI-65101, Finland
Today's business environments are full of complexities in terms of managing the value-adding supply chain and logistics
networks. In recent years, the development of locating and identifying technologies has contributed to fulfilling the growing
demand for tracking and tracing the logistics and/or transportation chain. The importance of tracking and tracing of
shipments is considered quite high for manufacturing firms with respect to managing logistics networks efficiently and
satisfying high customer demand. This paper presents a theoretical overview of the sophisticated technology-based
methodologies and approaches required for solving the complex tracking and tracing problem in logistics and supply chain
networks. A real-life case example is presented with the view to demonstrate the tracking technology in terms
of identifying the location and related conditions of the case shipment. The paper concludes with the overall outcomes of
this research and directions for future work.
Significance: This work reviews the existing tracking and tracing technologies available in the areas of
logistics and supply chain management. It also demonstrates a methodology for implementing such technologies in real-life business cases and provides insight into tracking and tracing technology with respect to identifying the location, position and
condition of the shipped items.
Keywords: Logistics tracking and tracing, IT-based solution, Transportation and Distribution network, Real-time
information flow, Business competition.
(Received 3 June 2011; Accepted in revised form 31 July 2011)

1. INTRODUCTION
Identifying the location and knowing the condition of transported items in a real-time business environment are of
growing concern in today's business. This is very much expected of manufacturing firms in terms of their
business growth and keeping their customers happy. The importance of tracking and tracing of shipments is considered quite
high for manufacturing firms in terms of customer service and is essential for managing logistics networks efficiently. Global
industries face problems with tracking and tracing in their logistics networks, which creates huge coordination
problems across product development sites. These problems cause loss of track among production, delivery and
distribution in the complete logistics chain from source to destination, which in turn incurs opportunity cost through
customer dissatisfaction. A tracking system helps to identify the position of the shipment and informs the customer well
in advance. Without a tracking system it is almost impossible to locate delivered items, which are then often considered
lost or stolen, causing business loss. Such a system can fulfill the needs of a project manager to map the production
process from transportation to material management (Helo et al., 2005, Helo, 2006).
Recently evolved technologies support the fundamental needs of tracking and tracing the logistics network. The
tracking technology ensures a real-time status update of the target shipment and provides detailed information
on the location and condition of the shipment (vibration, damage, missing items, etc.). In practice, there are several
tracking systems available based on GPS, GTIN (EAN Int., 2001), RFID (ISO/IEC, 2000; Chang, 2011), barcodes, etc.;
however, not all of these systems are fully compatible with industry. Most of the available tracking and tracing systems utilize
proprietary tracking numbers defined by the individual companies' operating systems and are based on an information
architecture in which the tracking information is centralized at the provider of the tracking service. Existing tracking systems
are not able to identify the contents within a box, for example whether the box is open or the contents are lost or stolen.
In order to tackle such misalignments in the logistics channel, state-of-the-art technologies or tools need to be
developed for a sustainable production process. These tools need to be cost effective and at the same time offer the possibility
of reuse or recycling under any circumstances. Before proceeding towards real-time tracking technology, it is crucial to
analyze its possible causes and effects. Optimal performance measures for the technologies could ensure project success for
any industry.
Tracking technologies in logistics networks have so far seen fairly little implementation in the global technology industry.
Mostly high-volume global industries have implemented this technology, with limited capabilities. In the basic methods of all these
tracking systems, the customer's access to the tracking information is usually confined to tracing the
shipments through manual queries such as using a www-site, a telephone call, e-mail, fax, or to engage in developing

International Journal of Industrial Engineering, 19(3), 128-136, 2012.

A MATHEMATICAL PROGRAMMING FOR AN EMPLOYEES' CREATIVITY MATRIX CUBIC SPACE CLUSTERING IN ORGANIZATIONS
Hamed Fazlollahtabar*1, Iraj Mahdavi2, Saber Shiripour2, Mohammad Hassan Yahyanejad3
1 Faculty of Industrial Engineering, Iran University of Science and Technology, Tehran, Iran
2 Department of Industrial Engineering, Mazandaran University of Science and Technology, Babol, Iran
3 Mazandaran Gas Company, Sari, Iran
*Corresponding author's email: [email protected]
We investigate different structural aspects of teams' network organization and their creativity within a knowledge
development program (KDP). Initially, a pilot group of employees in an organization is selected. This group is evaluated
on creativity parameters using a questionnaire. Considering the questionnaire data, a creativity matrix is configured
by a binary scoring. Applying the creativity matrix, clustering is performed via mathematical programming. The pilot group
is divided into some research teams. The research subjects are submitted to the teams. Finally, an allocated problem is
solved and some new research subjects are evolved to be assigned to the next configured teams. This procedure is repeated
dynamically for different time periods.
Keywords: Creativity matrix; Intelligent clustering; Cubic space clustering
(Received 28 September 2011; Accepted in revised form 20 December 2011)

1. INTRODUCTION
In today's knowledge-intensive environment, Knowledge Development Programs (KDPs) are increasingly employed for
executing innovative efforts (Oxley and Sampson, 2004; Smith and Blanck, 2002). Researchers and practitioners mainly
agree that effective management plays a critical role in the success of such KDPs (Pinto and Prescott, 1988). Unfortunately,
the knowledge and experience base of most managers refers to smaller-scale projects consisting of only a few project teams.
This may be responsible for what Flyvbjerg et al. (2003) call a "performance paradox": "At the same time as many more
and much larger infrastructure projects are being proposed and built around the world, it is becoming clear that many such
projects have strikingly poor performance records...."
KDPs follow a project-management-like approach with the team as the organizational nucleus (e.g., van Engelen
et al., 2001). The information network of these teams defines the opportunities available to them to create new knowledge
(e.g., Uzzi, 1996). As many scholars have argued, networks of organizational linkages are critical to a host of
organizational processes and outcomes (e.g., Baum and Ingram, 1998; Darr et al., 1995; Hansen, 1999; Reagans and
McEvily, 2003; Szulanski, 1996). New knowledge is the result of creative achievements. Creativity, therefore, lays the
foundation for a poor or high degree of performance. The extent to which teams in KDPs produce creative ideas depends not
only on their internal processes and achievements, but also on the work environment in which they operate (e.g., Amabile et
al., 2004; Perry-Smith and Shalley, 2003; Reiter-Palmon and Illies, 2004). Since new knowledge is mainly created when
existing bases of information are disseminated through interaction between interacting teams with varying areas of
expertise, creativity is couched in interaction networks (e.g., Leenders et al., 2003; Hansen, 1999; Ingram and Robert, 2000;
Reagans and Zuckerman, 2001; Tsai, 2001; Uzzi, 1996).
Any organization needs team work among employees for productivity purposes in problem solving. Organizations face
various problems in their determined missions. A useful approach to address these problems is to configure teams
consisting of expert employees. Due to their knowledge and experience of the organization, these teams understand the
organization's problems better than external research groups and thus may solve the problems more effectively.
Hence, the significant decision to be made is the configuration of the teams. Creative teams would be able to propose more
practical and beneficial solutions for organization's problems. Since creativity is a qualitative concept, analyzing and
decision making require knowledge management algorithms and methodologies. These methodologies are employed in the
different steps of configuring teams, task assignment to teams, teams' progress assessment and executive solution proposals
for problems.
In the present work, we propose a creativity matrix analyzing creativity parameters of a pilot group in an organization.
Then, using an intelligent clustering technique, research teams are configured and research subjects are allocated to them.
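As a rough illustration of the clustering step (not the paper's mathematical-programming formulation), the sketch below builds a small hypothetical binary creativity matrix and partitions a pilot group into research teams with scikit-learn's k-means:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical binary creativity matrix: rows are pilot-group employees,
# columns are creativity parameters scored 1 (present) / 0 (absent)
# from the questionnaire.
creativity_matrix = np.array([
    [1, 0, 1, 1, 0],
    [1, 1, 1, 0, 0],
    [0, 0, 1, 1, 1],
    [0, 1, 0, 1, 1],
    [1, 1, 0, 0, 1],
    [0, 0, 1, 0, 1],
])

# Partition the pilot group into research teams; k-means stands in here
# for the paper's mathematical-programming clustering step.
n_teams = 2
model = KMeans(n_clusters=n_teams, n_init=10, random_state=0)
team_of_employee = model.fit_predict(creativity_matrix)

for team in range(n_teams):
    members = np.where(team_of_employee == team)[0]
    print(f"Research team {team}: employees {members.tolist()}")
```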

International Journal of Industrial Engineering, 19(3), 137-148, 2012.

ACCEPTANCE OF E-REVERSE AUCTION USE: A TEST OF COMPETING MODELS
Fethi Calisir and Cigdem Altin Gumussoy
Department of Industrial Engineering
Istanbul Technical University
This study aims to understand the factors affecting e-reverse auction usage in companies by comparing three models: the
Technology Acceptance Model (TAM), the Theory of Planned Behavior (TPB), and the integrated model (an integration of TAM
and TPB). The comparison of the models answers two important questions: first, whether the explanation rates of behavioral
intention to use and actual use increase when the models are integrated; and second, whether TAM, which was developed
specifically to explain the use of information technologies (IT), is the most powerful model for explaining e-reverse
auction usage. Using LISREL 8.54, data collected from 156 employees working in the procurement departments of
companies in 40 different countries were used to test the models. Results indicated that TPB may be more appropriate than
TAM and the integrated model for explaining behavioral intention to use e-reverse auction. Further, the explanation rates
of both behavioral intention to use and actual use are not increased by integrating the models. The results also
suggest that behavioral intention to use is explained only by attitude towards use in TAM, and by subjective norms, perceived
behavioral control, and attitude towards use in both TPB and the integrated model. Actual use of e-reverse auction is
directly predicted by behavioral intention to use in all three models. This study concludes with a discussion of the
findings, implications for practitioners, and recommendations for possible future research.
Significance: This paper aims to identify significant factors affecting e-reverse auction usage among buyers working in
the procurement departments of companies by comparing three models: TAM, TPB, and the integrated
model. The comparisons explore whether the explanation rates of behavioral intention to use and
actual use increase with the integration of the models, and whether TAM is the most powerful model
for explaining the usage behavior of e-reverse auction users.

Keywords: E-reverse auction, TAM, TPB, Integrated model, Actual use, Model comparison
(Received 7 June 2011; Accepted in revised form 18 September 2011)

1. INTRODUCTION
E-reverse auction is an online, real-time auction between a buying company and two or more suppliers (Carter et al.,
2004). The e-reverse auction tool was first offered by FreeMarkets in 1999 and has since been adopted progressively
more intensively by firms. Several Fortune Global 2000 companies use e-reverse auction as a purchasing tool
(Giampietro and Emiliani, 2007). For example, General Electric spends $50-60 billion per year, and people in positions of
responsibility believe that 50-66% of this amount can be auctioned (Hannon, 2001).
Using e-reverse auction offers many advantages to buyers as well as suppliers. Price reduction is undoubtedly the most
important one. Suppliers may have to make higher price reductions to win the auction (Giunipero and Eltantawy, 2004). In
addition to the price advantage, increased buyer productivity, reduced cycle time, access to many suppliers at the same
time, a more competitive environment, standardization, and transparency in the purchasing process are the other
advantages of e-reverse auction. All these advantages create more opportunities for companies by reducing cost and
time, enabling them to offer higher-quality products (Carter et al., 2004; Bartezzaghi and Ronchi, 2003). In
2000, General Electric saved $480 million by using e-reverse auction on its $6.4 billion expenditure (Hannon, 2001).
E-reverse auction has benefits not only for buyers but also for suppliers. Suppliers gain access to growing markets of system users
all over the world, can compare their own competitiveness in the market, and can follow up auctions by potential
customers on the Internet. Besides, they can estimate their customers' needs and market trends by checking the
specifications and conditions of e-reverse auctions for products and services. Thus, suppliers can see not only areas for
improvement but also their own needs for improvement (Emiliani, 2000; Mullane et al., 2001). Therefore, it is
important to explain and understand the factors that affect the use of e-reverse auctions, as these factors aim at improving
the performance of the company and its employees in a complementary way. To our knowledge, the only study that compares
models in the context of e-auctions is Bosnjak et al. (2006). Their study aims to explain English auction use, which is
generally found in business-to-customer and customer-to-customer markets, whereas the current study concerns
e-reverse auction technology, used for the procurement of products or services in business-to-business markets.
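As a simplified illustration of the comparison question (the study itself uses structural equation modeling in LISREL, not regression), the sketch below contrasts the explanation rate (R-squared) of behavioral intention under a TAM-style predictor set and a TPB-style predictor set; all construct scores and coefficients are synthetic and hypothetical:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 156  # sample size matching the study; the data here are synthetic

# Hypothetical construct scores (in the study these come from survey items).
df = pd.DataFrame({
    "attitude":  rng.normal(size=n),
    "subj_norm": rng.normal(size=n),
    "pbc":       rng.normal(size=n),  # perceived behavioral control
})
df["intention"] = (0.5 * df["attitude"] + 0.3 * df["subj_norm"]
                   + 0.2 * df["pbc"] + rng.normal(scale=0.5, size=n))

def r_squared(predictors):
    """Fit an OLS regression and return its R-squared."""
    X = sm.add_constant(df[predictors])
    return sm.OLS(df["intention"], X).fit().rsquared

# Compare how much of behavioral intention each predictor set explains.
print("TAM-style (attitude only):     R^2 =", round(r_squared(["attitude"]), 3))
print("TPB-style (attitude, SN, PBC): R^2 =",
      round(r_squared(["attitude", "subj_norm", "pbc"]), 3))
```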

International Journal of Industrial Engineering, 19(3), 149-160, 2012.

A METHODOLOGY FOR PERFORMANCE MEASUREMENT IN MANUFACTURING COLLABORATION
Jae-Yoon Jung1, JinSung Lee1, Ji-Hwan Jung2, Sang-Kuk Kim1, and Dongmin Shin3
1 Department of Industrial and Management Systems Engineering, Kyung Hee University, Korea
2 Business Innovation Center, LG Display, Korea
3 Department of Industrial and Management Engineering, Hanyang University, Korea
Corresponding author: Dongmin Shin, [email protected]

Effective performance measures must be developed in order to maintain successful collaboration. This
paper presents a methodology of collaborative performance measurement to evaluate the overall performance of a
collaboration process between multiple manufacturing partners. The partners first define collaborative key performance
indicators (cKPIs); they then measure the cKPIs and calculate the synthetic performance from the cKPI values to
evaluate the outcome of the collaboration case. To measure cKPIs of different scales, we develop a two-folded desirability
function based on logistic sigmoid functions. The proposed methodology provides a quantitative way to measure
collaborative performance in order to effectively manage collaboration among partners and continuously improve
collaboration performance.
Keywords: Manufacturing collaboration, performance measurement, collaborative key performance indicators, two-folded desirability function, sigmoid function.
(Received 17 May 2011; Accepted in revised form 18 September 2011)

1. INTRODUCTION
One important change in the manufacturing industry is that competition between individual companies has been
extended to competition between the manufacturing networks surrounding the companies (NISA, 2001). This is
because the competitive advantages of modern manufacturing companies are derived from manufacturing collaboration
in virtual enterprise networks such as supply chains (Mun et al., 2009). Most existing performance measures, however,
have been developed to evaluate the performance of internal or outsourcing projects from the perspective of a single
company (Ghalayini et al., 1997; Khadem et al., 2008; Koc, 2011). Moreover, some performance indicators such as
trading costs are oriented to a single company, and cannot be directly applied to measuring the collaboration
performance since such indicators conflict between two partners. As a result, new collaborative performance measures
are needed so that collaboration partners can make arrangements and compromises with each other, reflecting their
common interests.
In this paper, we first introduce the concept of collaborative key performance indicators (cKPIs), which are defined
to measure the collaboration performance of multiple manufacturing partners. cKPIs are calculated by using several
key performance indicators (KPIs) which individual partners can measure. For this research, we referred to the Supply
Chain Operations Reference (SCOR) model (SCC, 2006) to define cKPI for manufacturing collaboration. Since the
SCOR model provides corresponding performance metrics as well as several levels of supply chain process models, it
can be a good reference for defining collaborative performance indicators (Barratt, 2004).
In addition, we developed a two-folded desirability function to reflect the characteristics of performance indicators in
manufacturing collaboration. The desirability function, which is based on the sigmoid function, can reflect multiple
cKPI criteria in service level agreements (SLA). Further, unlike existing desirability functions, the sigmoid based
desirability function can transform different scales of cKPIs into values between 0 and 1 without requiring maximum or
minimum values (Lee and Yum, 2003). The weighted values of the two-folded desirability functions for all cKPIs are
summed to determine the synthetic performance of a collaboration, which can be compared with prior performance or
partners' performance.
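As a rough sketch of the idea (the exact two-folded form is developed later in the paper; all targets, slopes, and weights below are assumed for illustration), a logistic-sigmoid desirability maps each cKPI onto (0, 1) without requiring maximum or minimum values, and the weighted sum gives the synthetic performance:

```python
import math

def desirability(x, target, slope=1.0, larger_is_better=True):
    """Logistic-sigmoid desirability: maps a cKPI value into (0, 1)
    around a target level, with no max/min bounds required."""
    sign = 1.0 if larger_is_better else -1.0
    return 1.0 / (1.0 + math.exp(-sign * slope * (x - target)))

# Hypothetical cKPIs: (name, measured value, SLA target, slope,
#                      larger-is-better flag, weight)
ckpis = [
    ("on-time delivery rate (%)",  96.0,  95.0, 0.80, True,  0.5),
    ("defect rate (ppm)",         120.0, 100.0, 0.05, False, 0.3),
    ("order lead time (days)",      4.0,   5.0, 1.20, False, 0.2),
]

# Synthetic collaboration performance = weighted sum of desirabilities.
synthetic = sum(w * desirability(x, t, s, up)
                for _, x, t, s, up, w in ckpis)
print(f"Synthetic performance: {synthetic:.3f}")  # lies in (0, 1)
```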
This paper is organized as follows. We first introduce the background of our research in Section 2. The framework
of collaborative performance management is presented, along with the concept of cKPI, in Section 3. Subsequently,
how to design the collaborative performance indicators and how to measure the performance indicators of
manufacturing collaboration are described in Section 4 and Section 5, respectively. Finally, Section 6 concludes this
paper.

2. BACKGROUND
2.1 Collaboration in Manufacturing Processes
The manufacturing sector is a critical backbone of a nation's economy, even as other industries such as the information and
service sectors are rapidly emerging as drivers of economic growth in developed countries. In order for manufacturing

International Journal of Industrial Engineering, 19(3), 161-170, 2012.

A FRAMEWORK FOR THE ADOPTION OF RAPID PROTOTYPING FOR SMEs: FROM STRATEGIC TO OPERATIONAL
Ayyaz Ahmad*, Muhammad Ilyas Mazhar and Ian Howard
Department of Mechanical Engineering,
Curtin University of Technology, WA 6102, Australia
*Corresponding author: Ayyaz Ahmad, [email protected]
Rapidly changing global markets, an unprecedented increase in product flexibility requirements, and shorter product life cycles
require more efficient technologies that can help reduce the time to market, which is considered a crucial factor for
survival in today's highly volatile market conditions. Rapid prototyping technology (RPT) has the potential to achieve
remarkable reductions in product development time. However, its fast development pace, combined with increasing
complexity and variety, has made the task of RPT selection difficult as well as challenging, resulting in low diffusion,
particularly at the SME level. This paper systematically presents (i) low RP adoption issues and challenges, (ii) the importance of
SMEs and the challenges they face, to highlight the magnitude of the problem, and (iii) previous work in the area of
technology selection and adoption, and finally offers an adoption framework that is exclusive to the adoption of RP
technology, considering the manufacturing, operational, technology, and cost drivers for a perfect technology fit into the
business.
Significance: Rapid Prototyping (RP) exhibits unique characteristics and can have a potential impact on all business
functions, which demands a methodological approach for the evaluation and adoption of the technology.
The main focus of this study is to propose a framework that facilitates RP adoption from the strategic to the
operational level, to ensure complete and effective implementation and obtain the desired objectives, with a
special emphasis on SMEs.

Keywords: Rapid prototyping, Technology adoption, SMEs, Technology selection, Competitiveness
(Received 3 June 2011; Accepted in revised form 18 September 2011)

1. INTRODUCTION
The changes in the global economic scenario have posed considerable threats to many companies, especially SMEs, as they
strive to stay competitive in world markets. This change in paradigms demands more flexibility in product designs. These
challenges, combined with increased variety and very short lead times, have a great impact on the business of small to
medium companies in securing a significant proportion of the markets in which they operate. Conventional approaches and
technologies are struggling to meet business needs. Consequently, manufacturers are searching for more efficient
technologies, such as rapid prototyping, that can help them embrace these challenges. A critical activity for small companies is
decision-making on the selection and adoption of these advanced technologies. The SMEs' task becomes more difficult
because of the absence of any formal procedures (Ordoobadi et al., 2001). An advanced technology can be a great
opportunity for a business, but it can also be a threat to a company. A wrong alternative, or too much investment in the right
one, can reduce the competitive advantage of a company (Torkkeli and Tuominen, 2002). The changing picture of
competition requires synchronization between business and new trends, which demands unique and effective solutions.
These solutions should be designed to support SMEs by keeping their specific nature in view, and they ought to be simple,
comprehensive, and very practical so that SMEs remain an effective part of the global value chain.
To meet these global challenges, the design and manufacturing community is adopting the RP technology to remain
efficient as well as competitive. The RP technology has enormous potential to shrink the product design and development
timeline. Despite these great advantages, the adoption of RP at SMEs level is significantly low. A survey of 262 UK
companies showed that 85% do not use RP. Lack of awareness of what the RP technology offers and how it can be
successfully linked into the business functions are the key factors holding back this sector from the RP technology
adoption. The majority of the groups who indicate that RP is irrelevant are unaware of what impact it can have on their
business (Grenada, 2002). The condition is even worse in developing countries. Laar highlights the sensitivity of the issue
by arguing that many engineers and R&D people are still unaware of the future implications of this technology. This is a
major concern in view of the fact that technical departments are ignoring RP/RM when it has already entered world-leading
markets and has the potential to completely change the way we do business (Laar, 2007). Kidd argues that RP

International Journal of Industrial Engineering, 19(3), 171-180, 2012.

AN INTEGRATED UTILISATION, SCHEDULING AND LOT-SIZING ALGORITHM FOR PULL PRODUCTION
Olufemi A.B. Adetunji, Venkata S.S. Yadavalli
Department of Industrial and Systems Engineering,
University of Pretoria, Hatfield, Pretoria 0002, South Africa
We present an algorithm that continuously reduces the batch sizes of products on a non-constraining resource in a production
network by utilizing the idle time on that resource. This leads to a reduction in holding cost and an increase in
the frequency of batch release in the production system. It also reduces the customer-facing supply lead
time. Such a technique can be valuable in typical pull production systems such as lean manufacturing, theory of constraints, or
Constant-Work-in-Process (CONWIP) processes. An example is used to demonstrate a real-life application of the algorithm,
and it was found to work better for system cost minimization than a previous algorithm that uses the production run length
as the criterion for batch reduction.
Keywords: Lot-sizing, Utilization, Setup, Pull production, Scheduling algorithm
(Received 23 May 2011; Accepted in revised form 28 May 2012)

1. INTRODUCTION
Traditionally, a lot size is taken to be the quantity of products contained in a production or purchase batch. This definition
is congruent with the classical economic order batching model, which basically assumes that the decision of what quantity
to produce is made independently of job scheduling; this assumption is now being relaxed and the concept redefined.
Potts and Wassenhove (1992), for instance, defined batching as deciding whether or not to schedule similar
jobs contiguously, and lot sizing as deciding when and how to split the production of identical items into sub-lots.
They noted that these decisions were traditionally taken as if lot sizing were independent of job scheduling. This is
evident from the majority of the literature, which treats the two subjects separately and gives the impression
that scheduling decisions are taken only after the lot sizes of the various products have been decided. This assumption of
independence is usually untrue, as the decisions are closely intertwined. Potts and Wassenhove also
proposed a general model for integrated batching, lot sizing, and scheduling. Drexl and Kimms (1997) noted that lot-sizing
and scheduling are two short-term decisions of production planning that must be tied together with the medium-term plan,
which is the Master Production Schedule of the system. Many models have since been published addressing integrated
batching, lot sizing, and scheduling. Potts and Kovalyov (2000) and Webster and Baker (1995), together with Potts and
Wassenhove (1992) and Drexl and Kimms (1997), are good references.
There is also a close relationship between system utilization and other system parameters such as the work-in-process
inventory (WIP), and consequently the system holding cost and profitability. Variability in resource processing times and/or
input arrival patterns has a degrading influence on the WIP level, especially as the system approaches full utilization; the
relationship between WIP, throughput, and cycle time is succinctly summarized in Little's law. This effect of resource
utilization on the production plan and the level of WIP appears not to have been well studied. The few known models
incorporating resource utilization into production scheduling include Rappold and Yoho (2008) and a model proposed in
Hopp (2008). The procedure proposed by Hopp is simple and straightforward to use, and that is what has been extended,
and hopefully improved, in this paper.
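For intuition, the short sketch below combines Little's law with the M/M/1 cycle-time formula (a textbook stand-in, not the model of this paper) to show how WIP grows as a resource approaches full utilization; the service rate is assumed:

```python
# Little's law: WIP = throughput (TH) x cycle time (CT).
# Using the M/M/1 queue as a simple stand-in, CT = 1 / (mu - lam),
# so WIP grows without bound as utilization u = lam / mu approaches 1.
mu = 10.0  # service rate of the resource (jobs per hour, assumed)

for u in (0.5, 0.8, 0.9, 0.95, 0.99):
    lam = u * mu              # arrival rate that yields utilization u
    ct = 1.0 / (mu - lam)     # mean cycle time (hours) in M/M/1
    wip = lam * ct            # Little's law
    print(f"utilization {u:4.2f}: cycle time {ct:6.2f} h, WIP {wip:7.2f} jobs")
```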
Next is a brief review of some current work on integrated lot-sizing. We then briefly review some principles of constraint
management pertinent to our model: in particular, the emphasis on balancing flow rather than capacities, which creates
pockets of spare capacity (labor and machine), and the breakdown of the total cycle time of manufacturing resources and
jobs, which identifies the locations and quantities of idle capacity in the system. This idle capacity can then be used to
improve job scheduling through reduced customer-facing lead times and decreased lot sizes. The insight derived is useful in
other pull production environments as well, since all pull techniques (including lean and CONWIP) prefer to concentrate on
flow and to buffer input and process variability with spare capacity rather than excess inventory.

2. INTEGRATED SCHEDULING AND LOT SIZING MODELS


Solving integrated batching, lot sizing, and scheduling problems has received more research attention recently. This may
also have been buoyed by the development of many heuristics and techniques for solving difficult combinatorial problems.
Recently published work in this area includes Toledo et al. (2010), which evaluated different parallel algorithms

International Journal of Industrial Engineering, 19(4), 181-192, 2012.

THE OPTIMAL ORGANIZATION STRUCTURE DESIGN PROBLEM IN MAKE-TO-ORDER ENTERPRISES
Jesús A. Mena
Department of Industrial Engineering,
Monterrey Institute of Technology Campus, Chihuahua, Mexico
This paper addresses the organization structure design problem in a make-to-order (MTO) operation environment. A
mathematical model is presented to help an operations manager in an MTO environment select a set of potential
managerial layers so as to minimize operation and supervision costs. Given a Work Breakdown Structure (WBS) for a
specific project, solving this model leads to an optimal organization structure design. The proposed model considers the
allocation of tasks to workers, taking into account the complexity and compatibility of each task with respect to the workers,
as well as the management requirements for planning, execution, training, and control in a hierarchical organization. The
model addresses the span-of-control problem, provides a quantitative approach to the organization design problem, and is
intended for application as a design tool in make-to-order industries.

Keywords: Span of control, Organizational design, Hierarchical organization, Assignment problem, Make-to-order
(Received 20 Sept 2011; Accepted in revised form 2 Jan 2012)

1. INTRODUCTION
The span of management is perhaps the most discussed single concept in classical, neo-classical, and modern management
theory. Throughout its evolution it has been referred to by various titles, such as span of management, span of control, span
of supervision, and span of authority (Van Fleet & Bedeian, 1977). Existing research focuses principally on
qualitative methods for analyzing this concept, i.e., heuristic rules based on experience and/or intuition. This research
develops an analytical model to determine the number of managerial layers; it is motivated by the need for an
evaluation tool for function-based companies and a design tool for project-based companies.
The challenge of mass customization brings great value to both the customer and the company. For example, building
cars to customer order eliminates the need for companies to hold billions of dollars worth of finished stock. Any company
able to free this capital would improve their competitive position, and be able to reinvest in future product development.
The question for many company executives is how efficient the organizational structure could be. The need for frequent
adjustment to an organizational structure can be found in this type of make-to-order or project-based companies, where
work contents and its organizational structure could vary dramatically over a short period of time.
This paper presents an analytical model for analyzing hierarchical organizations. It considers various factors that affect
the requirement for supervision and formulates them into an analytical model that aims at optimizing the organizational
design. The decision includes the allocation of tasks to workers, considering the complexity and compatibility of each task
with respect to the workers, and the management requirements for planning, execution, training, and control in a hierarchical
organization. The model is formulated as a 0-1 mixed integer program. The objective of the model is to minimize
operational cost, which is the sum of the supervision costs at each level of the hierarchy and the costs of the workers
assigned to tasks. The model addresses the span-of-control problem, provides a quantitative approach to the organization
design problem, and is intended for application as a design tool in make-to-order industries. A project-based
company may have to readjust its organizational structure frequently as its capability and capacity shift over time. The model
could also be applied to function-based companies as an evaluation tool to assess the optimality of their current
organization structure.
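As a toy illustration of this kind of formulation (not the paper's actual model), the following sketch uses the PuLP library to assign tasks to compatible workers and to enforce a single span-of-control layer; all costs, tasks, and compatibilities are hypothetical:

```python
from pulp import (LpProblem, LpMinimize, LpVariable, lpSum,
                  LpBinary, LpInteger, value)

# Toy instance (hypothetical data, not the paper's case study).
tasks = ["t1", "t2", "t3", "t4"]
workers = ["w1", "w2", "w3"]
compatible = {  # which worker can perform which task
    "w1": {"t1", "t2"}, "w2": {"t2", "t3", "t4"}, "w3": {"t1", "t3", "t4"},
}
worker_cost, manager_cost, span = 100, 250, 2  # span = max workers per manager

prob = LpProblem("org_design", LpMinimize)
x = {(t, w): LpVariable(f"x_{t}_{w}", cat=LpBinary)
     for t in tasks for w in workers if t in compatible[w]}
y = {w: LpVariable(f"y_{w}", cat=LpBinary) for w in workers}   # worker hired
m = LpVariable("managers", lowBound=0, cat=LpInteger)          # first-line managers

# Objective: worker costs plus supervision cost of one managerial layer.
prob += worker_cost * lpSum(y.values()) + manager_cost * m
for t in tasks:  # every task assigned to exactly one compatible worker
    prob += lpSum(x[t, w] for w in workers if (t, w) in x) == 1
for (t, w), var in x.items():  # a worker must be hired to receive tasks
    prob += var <= y[w]
prob += span * m >= lpSum(y.values())  # span-of-control constraint

prob.solve()
print("cost:", value(prob.objective), "managers:", int(value(m)))
```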
Meier and Bohte (2003) have recently reinvigorated the debate on span of control and the optimal
manager-subordinate relationship. They offer a theory concerning the impacts and determinants of span of control and test
it using data from educational organizations. The findings of Theobald and Nicholson-Crotty (2005)
suggest that manager-subordinate ratios, along with other structural influences on production, deserve considerably more
attention than they have received in modern research on administration.


International Journal of Industrial Engineering, 19(4), 193-203, 2012.

A NON-TRADITIONAL CAPITAL INVESTMENT CRITERIA-BASED METHOD TO OPTIMIZE A PORTFOLIO OF INVESTMENTS
Joana Siqueira de Souza1, Francisco José Kliemann Neto2, Michel José Anzanello3, Tiago Pascoal Filomena4
1 Assistant Professor, Engineering School - Pontifícia Universidade Católica of Rio Grande do Sul, Av. Ipiranga, 6681 - Partenon - 90619-900, Porto Alegre, RS, Brazil.
2 Associate Professor, Department of Industrial and Transportation Engineering, Federal University of Rio Grande do Sul - PPGEP/UFRGS. Av. Osvaldo Aranha, 99, 90035-190, Porto Alegre, RS, Brazil.
3 Assistant Professor, Department of Industrial and Transportation Engineering, Federal University of Rio Grande do Sul - PPGEP/UFRGS. Av. Osvaldo Aranha, 99, 90035-190, Porto Alegre, RS, Brazil.
4 Assistant Professor, Business School, Federal University of Rio Grande do Sul - Rua Washington Luiz, 855. Centro, 90010-460. Porto Alegre, RS, Brazil.

During capital budgeting, companies need to define a set of projects that bring profitability and perpetuity and that are
directly linked to their strategic objectives. This paper presents a practical model for defining a portfolio of industrial
investments during capital budgeting by making use of traditional methods of investment analysis, such as Net Present
Value (NPV), and by incorporating qualitative attributes into the analysis through the multicriteria analysis method called
Non-Traditional Capital Investment Criteria (Boucher and MacStravic, 1991). Optimization techniques are then used to
integrate economic and qualitative attributes subject to budget restrictions. The proposed model was validated in an
automotive company.
Keywords: project portfolio, capital budgeting, net present value, multicriteria analysis, linear programming, decision-making.
(Received 31 Aug 2010; Accepted in revised form 1 Feb 2012)

1. INTRODUCTION
The definition of a portfolio of projects in capital budgeting appears as an important issue in investment decisions and
industrial planning (Chou et al. 2001). Decisions are seldom made for an isolated project; in most situations, the decision
maker needs to consider several alternative projects relying on particular variables (Borgonovo and Peccati, 2006)
associated not only to financial resources, but also to internal and external factors to the company (Kooros and Mcmanis,
1998; Mortensen et al. 2008).
Although a large number of robust approaches related to investment decisions have been suggested in the literature,
simplistic methods for evaluating investments are still widely used, and little structured decision making is applied in
portfolio definition. Many assessment methods use discounted cash flow techniques such as the Internal Rate of Return
(IRR), Net Present Value (NPV) and the Profitability Index (PI) (Cooper et al. 1997).
More sophisticated methods can increase the likelihood of solid investments due to a stronger connection to company's
strategy, leading to a more consistent analysis of opportunities (Verbeeten, 2006). Although many of these methods are
appropriate for investment evaluation, Jansen et al. (2004) state they only enable tactical allocation of capital, and seldom
take qualitative aspects into consideration (e.g. strategic aspects). That is corroborated by Arnold and Hatzopoulos (2000)
who found that many firms invest their capital in non-economic projects (i.e. projects that do not necessarily bring
economic benefits to the company), such as projects driven to workers health and safety.
One way to incorporate qualitative aspects into the decision-making process for capital investment is the adoption of
multicriteria techniques, also known as Multiple Criteria Decision Making (MCDM) methods. A widespread method is
MAUT (Multiattribute Utility Theory), which relies on a simple and easy procedure for ranking the alternatives; see Min
(1994). Another popular method is the Analytic Hierarchy Process (AHP), which hierarchically accommodates both
quantitative and qualitative attributes of complex decisions (Saaty, 1980; Vaidya and Kumar, 2006). Successful
applications of AHP can be found in Fogliatto and Guimarães (2004), Rabbani et al. (2005), Vaidya and Kumar (2006), and
Mendoza et al. (2008).
A drawback of AHP is that it accommodates economic and qualitative aspects in different matrices and also requires the
comparison of all alternatives over the same criteria. That is undesirable when working with investment projects, since
not all projects impact the same criteria. For example, a project to renew a truck fleet may have an impact on workers'
ergonomic conditions, while a training project might not impact that criterion. This led Boucher and MacStravic (1991)
to develop an AHP-based multicriteria method for investment decisions: the Non-Traditional Capital Investment Criteria
(NCIC).
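To make the integration of economic and qualitative attributes under a budget restriction concrete, here is a minimal 0-1 selection sketch with the PuLP library; the projects, scores, weighting scheme, and budget are hypothetical, and the scoring is a simplification for illustration, not the actual NCIC procedure:

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary, value

# Hypothetical projects: (name, NPV in k$, qualitative score, cost in k$).
projects = [
    ("fleet_renewal",  120, 0.30, 300),
    ("training",        15, 0.25, 100),
    ("new_press",      200, 0.20, 450),
    ("safety_upgrade",   5, 0.25, 150),
]
budget = 700
alpha = 0.5  # assumed weight between economic and qualitative attributes

# Normalize NPV to [0, 1] so it can be combined with the qualitative score.
max_npv = max(npv for _, npv, _, _ in projects)

prob = LpProblem("portfolio", LpMaximize)
x = {name: LpVariable(f"x_{name}", cat=LpBinary) for name, _, _, _ in projects}
prob += lpSum((alpha * npv / max_npv + (1 - alpha) * score) * x[name]
              for name, npv, score, _ in projects)
prob += lpSum(cost * x[name] for name, _, _, cost in projects) <= budget

prob.solve()
chosen = [name for name in x if value(x[name]) > 0.5]
print("selected portfolio:", chosen)
```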

International Journal of Industrial Engineering, 19(4), 204-212, 2012.

AN ANALYTICAL APPROACH OF SENSITIVITY ANALYSIS FOR EOQ


Hui-Ming Teng1,2, Yufang Chiu1, Ping-Hui Hsu1,3, Hui Ming Wee1*
1 Department of Industrial and Systems Engineering, Chung Yuan Christian University, Chungli, Taiwan
2 Department of Business Administration, Chihlee Institute of Technology, Panchiao, Taipei, Taiwan
3 Department of Business Administration, De Lin Institute of Technology, Tu-Cheng, Taipei, Taiwan
Corresponding author: *E-mail: [email protected]

This study develops an analytical sensitivity analysis approach for the traditional economic order quantity (EOQ) model. The
parameters are treated as variables, and a direction for deriving the optimal solution is developed using the gradient
approach. A graph of the optimal solution is provided to demonstrate the sensitivity analysis, and a numerical example is
provided to illustrate the theory.

Keywords: Economic order quantity (EOQ); Sensitivity analysis; Gradient; Sub-gradient.

(Received 28 Apr 2010; Accepted in revised form 27 Feb 2012)

1. INTRODUCTION
Research on inventory problems is usually summarized by sensitivity analysis (Koh et al., 2002; Weng and McClurg,
2003; Sarker and Kindi, 2006; Ji et al., 2008; Savsar and Abdulmalek, 2008; Patel et al., 2009; Hsu et al., 2010). The
traditional methodology for investigating parameter sensitivity evaluates the target value for varying parameter values.
Although the performance of the traditional methodology is adequate, its graphical precision is limited, mainly because it
cannot fully express the discrete property. Ray and Sahu (1992) provided details of sensitivity analysis factors in
productivity measurement for multi-product manufacturing firms. Borgonovo and Peccati (2007) applied Sobol's function
and the variance decomposition method to determine the most influential parameters on the model output. Borgonovo
(2010) introduced a new way to define sensitivity measures that does not need differential equations for sensitivity analysis.
Lee and Olson presented a nonlinear goal programming algorithm based on the gradient method, utilizing an optimal
step length for chance-constrained goal programming models. Arsham (2007) developed a full gradient method consisting
of three phases: an initialization phase, a push phase, and a final iteration phase. The initialization phase provides an initial
tableau, which may not have a full set of basis variables. The push phase uses the full gradient vector of the objective
function to obtain a feasible vertex. The final iteration phase uses a series of pivotal steps based on sub-gradients, which
leads to an optimal solution. In each iteration, the sub-gradient provides the desired direction of motion within the feasible
region.
In this study, sensitivity analysis based on the traditional economic order quantity (EOQ) model is discussed. A
numerical example is provided to illustrate the theory.
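As a minimal illustration of treating the parameters as variables (the paper's full approach also uses sub-gradients; this sketch only takes the gradient of the classical EOQ formula with SymPy, with assumed parameter values):

```python
import sympy as sp

D, K, h = sp.symbols("D K h", positive=True)  # demand, order cost, holding cost
Q = sp.sqrt(2 * D * K / h)                    # classical EOQ formula

# Treat the parameters as variables and take the gradient of Q*
# with respect to each of them.
grad = [sp.simplify(sp.diff(Q, p)) for p in (D, K, h)]
print("dQ*/dD =", grad[0])   # sqrt(K/(2*D*h)), which equals Q*/(2D)
print("dQ*/dK =", grad[1])
print("dQ*/dh =", grad[2])

# Numerical illustration (assumed values).
vals = {D: 1200, K: 50, h: 2}
print("Q* =", float(Q.subs(vals)))
print("sensitivities:", [float(g.subs(vals)) for g in grad])
```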


International Journal of Industrial Engineering, 19(5), 213-220, 2012.

PRODUCTION LEAD TIME VARIABILITY SIMULATION - INSIGHTS FROM A CASE STUDY
Gandolf R. Finke1, Mahender Singh2, Prof. Dr. Paul Schönsleben1
1 BWI Center for Industrial Management, ETH Zurich, Kreuzplatz 5, 8032 Zurich, Switzerland
2 Malaysia Institute for Supply Chain Innovation, No. 2A, Persiaran Tebar Layar, Seksyen U8, Bukit Jelutong, Shah Alam, 40150 Selangor, Malaysia

We study the impact of disruptions to operations that can cause deviations in the individual processing time of
a task, resulting in longer than planned production lead time. Quality, availability of capacity and required
material as well as variability in process times are regarded as drivers of disruption. The focus is to study the
impact of variability in the lead time on the overall performance of the production system, instead of the
average lead time. Structural and numerical application of the approach are provided in a case study.
Additionally, the different dimensions of practical implications of this research are accentuated. Accordingly,
discrete event simulation is used to study the interactions and draw insights based on a case study. Measures to
mitigate lead time variability are discussed and their impact is analyzed quantitatively.
Keywords: Operations management, Production planning, Simulation, Lead time, Variability, Reliability


(Received 15 Nov 2011; Accepted in revised form 1 Feb 2012)

1. INTRODUCTION
1.1 Motivation
Production lead time is a critical driver of process design in a manufacturing company. The concept of time-based
competition stresses the importance of lead times as a competitive advantage and strategic instrument. Shorter lead times
are not only advisable in terms of meeting customer demand and the ability to adapt but also to minimize cost by reducing
inventories and work in progress. As a result, cycle time reduction efforts have garnered a lot of attention in the literature
and industry initiatives.
Although a lower lead time is a worthwhile endeavor, how it is reduced is the all-important decision. Traditionally, these
decisions involve weighing the benefits of the reduction in the average cycle time with the investment required to achieve
the targeted improvement. Little or no attention is paid to the variability in cycle times, however. We will use the terms
variability and reliability to address the same issue in this paper. Through this research we intend to highlight the need for a
formal consideration of the cycle time reliability when implementing measures for lead time reduction.
Although seemingly simple, understanding the system-level impact of individual task variability is not straightforward.
Whereas averages are additive and thus simple to study, variability is not. We take a simple example from the
reliability domain to illustrate this point. Consider a system that has 20 components, each performing at a high
level of 98% reliability individually. Collectively, assuming independence, the reliability of this system is only about 67%!
It deteriorates further to about 55% if we add 10 more components! The key point here is that we need to assess reliability in a
holistic manner, as individual task processing time variability tends to amplify as it travels through an interconnected
production sequence. In short, a high level of local reliability does not necessarily imply a high level of global reliability.
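A two-line check of this serial-reliability arithmetic, assuming independent components:

```python
# Serial system reliability under independence: R_sys = r ** n.
r = 0.98  # individual component reliability

for n in (20, 30):
    print(f"{n} components at {r:.0%} each -> system reliability {r ** n:.1%}")
# 20 components -> about 66.8%; 30 components -> about 54.5%
```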
A deeper understanding of the true system level reliability will motivate the need for redundancy at strategic locations
throughout the system to improve overall performance. In certain situations it may be more beneficial to have reliable
delivery with a longer average lead time, that is, minimal or no lead time deviation, than to enforce a shorter average lead
time that is less reliable. This type of analysis will enhance the selection criterion when multiple investment options to
reduce lead time are possible, since reliability has direct and indirect cost consequences.
1.2 Classification of disruptions and scope
We classify potential disruptions encountered by a typical manufacturing company into two categories. The first category,
which we call systemic disruptions, covers all factors that affect large portions of a company or the supply chain
simultaneously, for example earthquakes, floods, wars or strikes.
The second category is described as operational disruptions. These include drivers that influence a company's
performance at the micro scale, i.e., individual steps in the production sequence: for instance, failed quality tests,
variability in the completion time of single production steps, and production resource breakdowns.

International Journal of Industrial Engineering, 19(5), 221-231, 2012.


A COMPUTER SIMULATION MODEL TO REDUCE PATIENT LENGTH OF STAY AND TO IMPROVE RESOURCE UTILIZATION RATE IN AN EMERGENCY DEPARTMENT SERVICE SYSTEM
Muhammet Gul1, Ali Fuat Guneri2
1 Industrial Engineering Department, Faculty of Engineering, Tunceli University, 62000, Tunceli
2 Industrial Engineering Department, Mechanical Faculty, Yıldız Technical University, Yıldız, Beşiktaş, İstanbul
[email protected]
Corresponding author's e-mail: {Muhammet Gul, [email protected]}
This paper presents a case study of a discrete-event simulation (DES) model of an emergency department (ED) unit in a
regional university hospital in Turkey. The emergency department operations of the hospital were modeled,
analyzed, and improved. The goal of the study is to reduce patients' average length of stay (LOS) and to improve patient
throughput and the utilization of locations and human resources (doctors, nurses, receptionists). Alternative scenarios were
evaluated in an attempt to determine the optimal staff level. These alternatives illustrate that substantial improvements
in LOS and throughput can be obtained by minor changes in shift hours and the number of resources. Considering future
changes in patient demand, a scenario that reduces LOS and improves throughput is also presented in the paper.
Significance: The key performance indicators (KPIs) used to determine and improve system performance in healthcare
emergency departments are patients' average length of stay (LOS), patient throughput, and resource utilization rates.
Alternative scenarios and optimal staff levels are developed within the scope of this study.
Keywords: Emergency departments, healthcare modeling, discrete event simulation, length of stay, Servicemodel
(Received 8 Mar 2012; Accepted in revised form 31 Mar 2012)

1. INTRODUCTION
Emergency departments (EDs), which people consult with many kinds of complaints demanding a first medical response, are
of vital importance in healthcare systems. Improvements in healthcare have led to an increase in the number of available
tools and methods. In recent years, the utilization of emergency department units in Turkey has increased heavily because of
fast and cheap treatment opportunities. Statistics on consultations at healthcare institutions show that the number of arrivals
at emergency departments has increased recently (Arslanhan, 2010).
Reducing waiting times is a key objective for improving the performance of operations in the healthcare sector. McGuire
(1994) evaluated alternatives to reduce the waiting times of ED patients using MedModel and managed to reduce LOS from
157 minutes to 107 minutes. Kirtland et al. (1995) achieved an improvement of 38 minutes by combining optimal
solutions. Performance measures pursued in ED simulation studies include reducing patient length of stay (LOS),
improving patient throughput, increasing resource utilization rates, and controlling costs. Evans et al. (1996) described an
Arena simulation model for the emergency department of a particular hospital in Kentucky, in which the flows of 13
different types of patients were simulated and different feasible schedules for doctors, nurses, and technicians were
evaluated. The main performance measure used in the process was the average patient length of stay in the emergency
department; the model was run for 50 replications, and patient LOS was found to be 142 minutes. Patvivatsiri et al. (2003)
achieved a 45% reduction in patients' average waiting times with an effective nurse schedule.
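As a minimal discrete-event sketch of this kind of ED model (the studies above use MedModel and Arena; this illustration uses the simpy library with assumed exponential arrival and treatment times), comparing two staffing scenarios by mean LOS:

```python
import random
import simpy

random.seed(42)
MEAN_ARRIVAL, MEAN_TREATMENT = 10.0, 18.0  # minutes (assumed rates)

def patient(env, doctors, los_log):
    arrival = env.now
    with doctors.request() as req:     # queue for a free doctor
        yield req
        yield env.timeout(random.expovariate(1.0 / MEAN_TREATMENT))
    los_log.append(env.now - arrival)  # length of stay = wait + treatment

def arrivals(env, doctors, los_log):
    while True:
        yield env.timeout(random.expovariate(1.0 / MEAN_ARRIVAL))
        env.process(patient(env, doctors, los_log))

for n_doctors in (2, 3):  # compare staffing scenarios
    env = simpy.Environment()
    doctors = simpy.Resource(env, capacity=n_doctors)
    los = []
    env.process(arrivals(env, doctors, los))
    env.run(until=8 * 60)  # simulate one 8-hour shift
    print(f"{n_doctors} doctors: mean LOS = {sum(los) / len(los):.1f} min "
          f"({len(los)} patients)")
```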
Simulation shows how system performance changes as a function of several factors (Tekkanat, 2007). In EDs, operation times,
arrival rates of entities, costs, and resource utilization are examples of such factors. Discrete Event Simulation
(DES) techniques have been used extensively for modeling the operations of an emergency department and for the analysis of
patient flows and throughput time (Samaha et al., 2003; Mahapatra et al., 2003; Takakuwa and Shiozaki, 2004). Samaha et
al. (2003) evaluated alternatives to decrease patient length of stay using round-the-clock, week-long data obtained
from an ED with Arena simulation software. Mahapatra et al. (2003) aimed to develop a reliable decision support system
(DSS) using the Emergency Severity Index (ESI) triage method, which optimizes the resource utilization rate. According to three

International Journal of Industrial Engineering, 19(5), 232-240, 2012.

AN EPQ MODEL WITH VARIABLE HOLDING COST


Hesham K. Alfares
Systems Engineering Department, King Fahd University of Petroleum & Minerals,
Dhahran 31261, Saudi Arabia. Email: [email protected]
Instantaneous order replenishment and constant holding cost are two fundamental assumptions of the economic order
quantity (EOQ) model. This paper presents modifications to both of these basic assumptions. First, non-instantaneous order
replenishment is assumed, i.e. a finite production rate of the economic production quantity (EPQ) model is considered.
Second, the holding cost per unit per time period is assumed to vary according to the length of the storage duration. Two
types of holding cost variability with longer storage times are considered: retroactive increase and incremental increase. For
both cases, models are formulated, solution algorithms are developed, and examples are solved.
Keywords: Economic production quantity (EPQ), Variable holding cost, Production-inventory models.
(Received 13 Apr 2011; Accepted in revised form 28 Oct 2011)

1. INTRODUCTION
In the classical economic order quantity (EOQ) model, the replenishment of the order is assumed to be instantaneous, i.e.
the production rate is implicitly assumed infinite. In practice, many orders are manufactured gradually, at a finite rate of
production. Even if the orders are purchased, the procurement and receipt of these orders is seldom instantaneous.
Therefore, economic production/manufacturing quantity (EPQ/EMQ) models are more representative of real life.
Moreover, the assumption of a constant holding cost for the entire duration of storage may not be always realistic. In many
practical situations, such as in the storage of perishable items, longer storage periods require additional specialized
equipment and facilities, resulting in higher holding costs.
This paper presents an EPQ inventory model with a finite production rate and a variable holding cost. In this model, the
holding cost is assumed to be an increasing step function of the storage duration. Two types of time-dependent holding cost
functions are considered: retroactive increase, and incremental increase. Retroactive holding cost increase means that the
holding cost of the last storage period applies to all previous storage periods. Incremental holding cost increase means that
increasingly higher holding costs apply only to later storage periods. For each of these two types, optimal solutions
algorithms are developed to minimize the total cost per unit time.
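Before the formal development, a rough numeric sketch of the retroactive case may help: the holding rate steps up when a cycle's storage horizon exceeds a threshold, so the cost-minimizing lot size can sit at the step boundary. All parameter values below are assumed, and the coarse search stands in for the paper's exact algorithms:

```python
# Illustrative EPQ with a retroactively increasing step holding cost.
d, p, K = 400.0, 1000.0, 150.0   # demand rate, production rate, setup cost
tau, h1, h2 = 0.5, 2.0, 3.5      # rate h1 up to tau years of storage, h2 after

def cost_per_unit_time(Q):
    T = Q / d                            # cycle length
    avg_inv = 0.5 * Q * (1.0 - d / p)    # average inventory (finite production)
    # Retroactive rule: if the cycle's storage horizon exceeds tau,
    # the higher rate applies to the entire cycle.
    h = h1 if T <= tau else h2
    return K / T + h * avg_inv           # setup cost rate + holding cost rate

# Coarse numeric search over candidate lot sizes.
best_Q = min((Q for Q in range(10, 2001, 5)), key=cost_per_unit_time)
print(f"best Q ~ {best_Q}, cost ~ {cost_per_unit_time(best_Q):.2f}")
```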
Several EOQ and EPQ models with variable holding costs proposed in the literature consider the holding cost to be a function
of the amount or value of inventory. Only a few EOQ-type models assume the holding cost to vary in relation to the
inventory level. Muhlemann and Valtis-Spanopoulos (1980) revise the classical EOQ formula, assuming the holding cost to
be an increasing function of the average inventory value. Their justification is that the greater the value of inventory, the
higher the cost of financing it. Mao and Xiao (2009) construct an EOQ model for deteriorating items with complete
backlogging, considering the holding cost as a function of the on-hand inventory. A solution procedure is developed, and
the conditions are specified for the existence and uniqueness of the optimal solution when the total holding cost function is
convex. Moon et al. (2008) develop mixed integer programming models and genetic algorithm heuristic solutions to
minimize the maximum EOQ storage space requirement for both finite and infinite time horizons.
Some inventory models have built-in flexibility, allowing the holding cost to be a function of either the inventory level or
storage time. Goh (1994) considers an EOQ-type single-item inventory system with a stock-dependent demand rate and
variable holding cost. Giri and Chaudhuri (1998) construct an EOQ-type inventory model for a perishable product with
stock-dependent demand and variable holding cost. Considering two types of variation of the holding cost per unit, both
Goh (1994) and Giri and Chaudhuri (1998) treat the holding cost either as (i) a non-linear continuous function of the time in
storage, or (ii) a non-linear continuous function of the amount of inventory.
In several EOQ-type models, the holding cost is assumed to be a continuous function of storage time. For a non-linearly
deteriorating item, Weiss (1982) considers the holding cost per unit as a non-linear function of the length of storage
duration. Optimal order quantities are derived for deterministic and stochastic demands, and for both finite and infinite time
horizons. Giri et al. (1996) develop a generalized EOQ model for deteriorating items with shortages, in which both the
demand rate and the holding cost are continuous functions of time. The optimal inventory policy is derived assuming a
finite planning horizon and constant replenishment cycles. Ferguson et al. (2007) apply Weiss's (1982) formulas to
approximate optimal order quantities for grocery store perishable goods, using regression to estimate the holding cost curve
parameters.
Alfares (2007) introduces the notion of holding cost variability as a discontinuous step function of storage time, with two
types of holding cost increase. As the storage time extends to the next time period, the new (higher) holding cost can be

International Journal of Industrial Engineering, 19(6), 241-251, 2012.

A MULTI-HIERARCHY GREY RELATIONAL ANALYSIS MODEL FOR NATURAL GAS PIPELINE OPERATION SCHEMES COMPREHENSIVE EVALUATION
Chang Jun Li1, Wen Long Jia2, En Bin Liu2, Xia Wu2
1 Oil and Gas Storage and Transportation Engineering Institute of Southwest Petroleum University
2 Southwest Petroleum University

Under the condition of satisfying process requirements, determining the optimum operation scheme of a natural gas pipeline
network is essential for improving the overall efficiency of network operation. Based on the operation parameters of a
natural gas network, a multi-hierarchy comprehensive evaluation index system is illustrated, and the weights of each
index are determined with an improved Analytic Hierarchy Process (AHP). This paper presents a multi-hierarchy grey
relational analysis (GRA) method, suitable for evaluating the multi-hierarchy index system, that combines
AHP and grey relational analysis. Ultimately, an industrial application shows that multi-hierarchy grey relational analysis
is effective for evaluating natural gas pipeline network operation schemes.
Significance: This paper presents a multi-hierarchy grey relational analysis model for the comprehensive evaluation of
natural gas operation schemes, combining AHP and traditional GRA. The method was applied successfully
to the Sebei-Ningxia-Lanzhou gas transmission pipeline.
Keywords: Natural gas pipeline network; Operation schemes; Analytic Hierarchy Process; Grey relational analysis; Comprehensive evaluation
(Received 27 Jul 2011; Accepted in revised form 2 Jan 2012)

1. INTRODUCTION


Gas transmission and distribution pipelines play an important role in the development and utilization of natural gas.
Network operators can formulate many different schemes that satisfy process requirements. However, the
overall goal of operators is quality, quantity, and timely supply of gas, together with the best economic and social benefit.
Thus, selecting the optimum scheme from many reasonable options to improve the economic returns and social benefits of
pipeline operation is a problem deserving study.
The operation scheme of a natural gas pipeline network is closely related to the flow rate, temperature, and pressure at
each node in the network. As it involves so many parameters, it is almost impossible to list all the relevant parameters and
determine the relationships among them. Traditional probability theory and mathematical methods are used to solve
uncertainty problems characterized by large sample sizes and abundant data; consequently, they are not suitable for
evaluating network operation schemes. Grey relational analysis, in contrast, was proposed to solve uncertainty problems
with little available data and experience, small sample sizes, and incomplete information. Its main principle is contained in
the grey relational analysis model. This analysis method establishes an overall comparative mechanism, overcomes the
limitations of pair-wise comparison, and avoids conflicting conditions between serialized and qualitative results (Tong and
Wang, 2003; Chi and Hsu, 2005). The method has been widely used in the area of oil and gas pipeline optimum design and
comprehensive evaluation (Liang and Zhen, 2004; Zhao, 2007) since Professor Wang (Wang, 1993) introduced it into the
optimum design of natural gas pipelines in 1993. However, the index systems of the objects evaluated have so far been
limited to a single layer.
This paper first builds the multi-hierarchy comprehensive evaluation index system for a natural gas network and calculates
the index weights. The multi-hierarchy grey relational analysis method is then presented by combining the AHP weight
calculation with the traditional grey relational analysis method. Finally, this paper evaluates seven different
operation schemes of a natural gas network using the presented method.
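As a compact illustration of the single-layer GRA core that the multi-hierarchy method builds on (the data, weights, and distinguishing coefficient below are hypothetical):

```python
import numpy as np

# Rows = candidate operation schemes, columns = evaluation indexes
# (hypothetical data, normalized so that larger is better).
X = np.array([
    [0.85, 0.70, 0.90],
    [0.95, 0.60, 0.80],
    [0.75, 0.90, 0.85],
])
w = np.array([0.5, 0.3, 0.2])  # index weights, e.g. obtained from AHP
rho = 0.5                      # distinguishing coefficient

x0 = X.max(axis=0)             # reference (ideal) sequence
delta = np.abs(X - x0)         # absolute differences from the reference
# Grey relational coefficients (Deng's formulation).
xi = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
grades = xi @ w                # weighted grey relational grades

best = int(np.argmax(grades))
print("relational grades:", np.round(grades, 3), "-> best scheme:", best + 1)
```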

International Journal of Industrial Engineering, 19(6), 252-263, 2012.

OPTIMAL FLEET SIZE, DELIVERY ROUTES, AND WORKFORCE ASSIGNMENTS FOR THE VEHICLE ROUTING PROBLEM WITH MANUAL MATERIALS HANDLING
Prachya Boonprasurt and Suebsak Nanthavanij
Engineering Management Program
Sirindhorn International Institute of Technology, Thammasat University
Pathumthani 12121, Thailand
Corresponding author's e-mail: {Suebsak Nanthavanij, [email protected]}
The vehicle routing problem with manual materials handling (VRPMMH) is introduced. At customer locations, delivery
workers must manually unload goods from the vehicle and take them to the stockroom. The delivery activities require
workers to expend certain amounts of physical energy. In this paper, two models of VRPMMH are developed, namely
VRPMMH models with fixed workforce assignments (FXW) and with flexible workforce assignments (FLW). The
objective of both VRPMMH models is to determine optimal fleet size and delivery routes such that the total cost is
minimized. Additionally, the second model is intended to assign delivery workers to vehicles to minimize the differences
in physical workload.
Significance: The results obtained from the vehicle routing problem with manual materials handling (VRPMMH) can
help goods suppliers obtain a delivery solution that is not only economical but also safe for delivery
workers. By taking the workload constraint into consideration, the solution prevents delivery
workers from performing daily physical work beyond the recommended limit.

Keywords: Vehicle routing problem, workforce assignment, manual materials handling, optimization, ergonomics
(Received 9 May 2010; Accepted in revised form 1 Feb 2012)

1. INTRODUCTION
Dantzig and Ramser (1959) first introduced the capacitated vehicle routing problem (VRP) several decades ago. Since
then, the VRP has been studied extensively by researchers. In the classical capacitated VRP, goods are delivered from a depot
to a set of customers using a set of identical delivery vehicles. Each customer demands a certain quantity of goods, and the
delivery vehicles have a limited capacity. Typically, the problem objective is to find delivery routes, starting and ending at
the depot, that minimize the total travel distance without violating the capacity constraint of the delivery vehicles. In some
problems, the objective might be to determine the minimum number of delivery vehicles needed to serve all customers.
There are many variants of the VRP, such as the vehicle routing problem with backhauls (VRPB), the vehicle routing
problem with time windows (VRPTW), the mixed vehicle routing problem with backhauls (MVRPB), the multiple depot
mixed vehicle routing problem with backhauls (MDMVRPB), the vehicle routing problem with backhauls and time
windows (VRPBTW), the mixed vehicle routing problem with backhauls and time windows (MVRPBTW), and the vehicle
routing problem with simultaneous deliveries and pickups (VRPSDP) (Ropke and Pisinger, 2006). The classical VRP
and its variants are combinatorial optimization problems, and both exact and heuristic methods have been developed to
obtain solutions. For example, consider the vehicle routing problem with simultaneous delivery and pickup
(VRPSDP) (Min, 1989): Halse (1992) presented exact and heuristic methods for the problem, and Dethloff (2001, 2002)
considered heuristic algorithms. Additionally, simulation and meta-heuristic approaches have been employed to
investigate the VRP. Park and Hong (2003) evaluated the system performance of the vehicle routing problem in a
stochastic environment using four heuristics; they considered the VRP with time window constraints where traveling time
and service quantity vary. Ting and Huang (2005) used a genetic algorithm with an elitism strategy (GAE) to solve the VRP
with time windows and reported that their GAE is superior, in terms of total traveling distance, to the other three GAs tested
in their study.
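For intuition about how capacity and an ergonomic energy limit jointly determine fleet size, here is a toy nearest-neighbor construction sketch; the paper formulates VRPMMH as optimization models, and all coordinates, demands, and energy values below are hypothetical:

```python
import math

# Toy nearest-neighbor route construction with a vehicle capacity and a
# daily physical-energy limit per delivery worker (all data hypothetical).
depot = (0.0, 0.0)
customers = {  # id: (x, y, demand, unloading energy in kcal)
    1: (2, 4, 30, 350), 2: (5, 1, 25, 250), 3: (6, 5, 40, 500),
    4: (1, 7, 20, 300), 5: (8, 3, 35, 400),
}
CAPACITY, ENERGY_LIMIT = 80, 900

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

unserved, routes = set(customers), []
while unserved:
    pos, load, energy, route = depot, 0, 0, []
    while True:  # greedily extend the route while both constraints allow
        feasible = [c for c in unserved
                    if load + customers[c][2] <= CAPACITY
                    and energy + customers[c][3] <= ENERGY_LIMIT]
        if not feasible:
            break
        nxt = min(feasible, key=lambda c: dist(pos, customers[c][:2]))
        route.append(nxt)
        load += customers[nxt][2]
        energy += customers[nxt][3]
        pos = customers[nxt][:2]
        unserved.remove(nxt)
    routes.append(route)

print("vehicles needed:", len(routes), "routes:", routes)
```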
Virtually no VRP formulations consider ergonomics. Yet consider the situation in which goods must
be manually moved from the vehicle to an assigned location at the customer site. This situation is not unusual, especially
for short-distance deliveries within a city area using small delivery vehicles. An example is the delivery of goods from a
distribution center to convenience stores which are scattered around the city. The delivered supplies are manually unloaded
from the vehicle and then moved to the stockroom. These convenience stores do not usually keep large inventories. In
fact, they rely on receiving supplies from the distribution center on a daily basis. Another example is the delivery of

International Journal of Industrial Engineering, 19(6), 264-277, 2012.


A QUANTITATIVE PERFORMANCE EVALUATION MODEL BASED ON A JOB SATISFACTION-PERFORMANCE MATRIX AND APPLICATION IN A MANUFACTURING COMPANY
Adnan Aktepe, Suleyman Ersoz
Department of Industrial Engineering, Kirikkale University, Turkey
In this study, we propose a performance management model based on employee performance evaluations. Employees
are clustered into 4 different groups according to a job satisfaction-performance model, and strategic plans are derived
for each group for effective performance management. The sustainability of this business process improvement
model is managed with a Plan-Do-Check-Act (PDCA) control cycle as a continuous improvement
methodology. The grouping model is developed with a data mining clustering algorithm. First, 4 different
performance groups are determined with a two-step k-means clustering approach. The clustering model is then
verified with an Artificial Neural Network (ANN) model. The data for this study were collected with a
questionnaire composed of 25 questions, the first 13 variables measuring job satisfaction level and the last 12
measuring performance characteristics, with the employees themselves as evaluators. With the help of the developed
model, the human resources department is able to track employees' job satisfaction and performance levels, and
strategies for the different performance groups are developed. The model was applied in a manufacturing
company located in Istanbul, Turkey.
Keywords: Job Satisfaction-Performance Matrix, K-Means Clustering, Performance Management, Employee
Performance Evaluation, Job Satisfaction.
(Received 12 Aug 2011; Accepted in revised form 1 Feb 2012)

1. INTRODUCTION
Fast-developing new technologies and a changing world have made competitive market conditions harsh. Staying
competitive in the market, which is essential for organizations to survive, is possible only with the efficient use of
resources. While traditional organizations directed their efforts solely at increasing profitability and financial
strength, non-traditional organizations now analyze the input-output interaction of resources to find the reasons for low or
high profitability. Today the factors affecting the financial and non-financial performance of a company are analyzed in
detail, since being financially strong for the moment does not guarantee a long-running organization. In order to see the
whole picture, organizations have started to change their strategies according to performance management systems.
The most widely used performance management systems today are the Deming Prize Model developed in Japan in 1951, the Malcolm
Baldrige Quality Award Model developed in the U.S.A. in 1987, the American Productivity Centre Model, the EFQM
Excellence Model, the Performance Pyramid developed by Lynch and Cross (1991), the Balanced Scorecard developed by
Kaplan and Norton (1992), the Quantum Performance Management Model developed by Hronec (1993), the Performance
Prism by Neely and Adams (2001) and Neely et al. (2002), and the Skandia Navigator model.
The very first systematic studies on performance started at the beginning of the 20th century. Taylor (1911), in his book
The Principles of Scientific Management, discussed productivity, efficiency and optimization and proposed novel techniques
for increasing productivity. He later proposed a performance-based salary system for employees; although this idea was
intensely criticized at the time, many organizations use such systems today. This triggered research on employee
performance, and ergonomic factors were found to affect it. Beyond ergonomic factors, Mayo
(1933, 1949) and his colleagues proved, with the experiments conducted at Hawthorne, that employee performance is much
more strongly affected by behavioral factors; he demonstrated that teamwork, motivation and human relations have a much greater effect on
individual performance. There is an abundance of empirical studies on relationship among job performance, job
satisfaction and other factors in the literature (Saari and Judge, 2004; Shahu and Gole, 2008; Pugno and Depedri,
2009). The performance model used in this study, the details of which are given in the next section, groups employees
according to both performance and job satisfaction levels. We therefore analyze the relationship between them and
present a literature review on the relationships among job satisfaction, performance and other factors. Other factors affecting job
performance and job satisfaction include stress, organizational commitment, employee attitudes, employee morale, etc.
Several authors have studied the effect of job satisfaction and other factors on performance; Table 1 lists the
studies carried out on the relations among job satisfaction, performance and other factors. However, the relationship
among job satisfaction, performance and other factors remains controversial. The satisfaction-performance
model used in this study enables us to look at the issue from a different point of view: rather than modeling the
relationships among performance and related factors, the model groups employees according to job satisfaction
and performance. This supports a new approach to individual performance appraisals. Summarizing the
performance factors addressed in the literature yields the relationship diagram given in Figure 1.
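To make the grouping step concrete, the sketch below clusters employees into four groups from their mean satisfaction and performance scores using plain k-means in Python. The scores are randomly generated placeholders, and this is a simplified stand-in for the paper's two-step clustering procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical questionnaire scores: one row per employee,
# column 0 = mean of the 13 job-satisfaction items,
# column 1 = mean of the 12 performance items (1-5 Likert scale).
rng = np.random.default_rng(0)
scores = rng.uniform(1, 5, size=(60, 2))

# Plain k-means with k=4, mirroring the four satisfaction-performance groups
# (the paper itself uses a two-step clustering procedure).
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(scores)

for g in range(4):
    members = scores[km.labels_ == g]
    print(f"group {g}: n={len(members)}, "
          f"mean satisfaction={members[:, 0].mean():.2f}, "
          f"mean performance={members[:, 1].mean():.2f}")
```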

International Journal of Industrial Engineering, 19(7), 278-288, 2012.

AN INTRODUCTION TO DISTRIBUTION OPERATIONAL EFFICIENCY
Bernardo Villareal, Fabiola Garza, Imelda Rosas, David Garcia
Department of Engineering, Universidad de Monterrey
Department of Business, Universidad de Monterrey

The Lean Manufacturing approach to waste elimination can be applied in all sorts of operations. In this project it is applied
to the improvement of a supply chain in order to achieve high levels of chain efficiency. Warehousing and
transportation waste is identified at the chain level only in aggregate, which makes its identification within each of the two
processes difficult. This work provides an introduction to the concept of distribution operational efficiency and proposes a
scheme for eliminating waste in a distribution operation. The Operational Effectiveness Index used in TPM is adapted and
used as the main performance measure. Availability, performance and quality wastes are identified using Value Stream
Mapping. The scheme is illustrated by applying it to the distribution networks of several Mexican companies.
Keywords: Lean warehousing, Lean transportation, distribution waste, operational effectiveness index, supply chain
efficiency.
(Received 8 Sep 2011; Accepted in revised form 1 Feb 2012)

1. INTRODUCTION
A key feature of business is the idea that competition is made through supply chains and not between the companies
(Christopher, 1992), success or failure of supply chains is ultimately determined in the market-place by the end consumer.
Therefore, is extremely important the deployment of the right strategies to compete successfully. Fisher (1997) suggests
that supply chains must acquire capabilities to become efficient or agile accordingly to the type of products marketed (see
Figure 1). In particular, an efficient supply chain is suitable for selling functional products. The order winning factor in this
market is cost, having quality, lead time and service level as order qualifiers (Hill, 1993). The main supply chain strategy
recommended to become efficient is waste elimination (Towill et al., 2002).
The origin of waste elimination is associated with the concept of lean manufacturing. This can be traced back to the
early twentieth century, when Henry Ford revolutionised car manufacturing with the introduction of mass production. The most
important contribution to the development of lean manufacturing techniques since then came from the Japanese automotive firm
Toyota. Its success is based on its renowned Toyota Production System, which rests on a philosophy of
continuous improvement in which the elimination of waste is fundamental. The process of elimination is facilitated by the
definition of seven forms of waste, activities that add cost but no value: production of goods not yet ordered; waiting;
rectification of mistakes; excess processing; excess movement; excess transport; and excess stock.
Jones et al. (1997) have shown that these seven types of waste need to be adapted for the supply chain environment.
Hines and Taylor (2000) propose a methodology extending the lean approach to enable waste elimination throughout the
supply chain, and Rother et al. (1999) recommend the value stream map (VSM) and the supply chain mapping
toolkit described by Hines et al. (2000) as fundamental aids for identifying waste.
As lean expands towards supply chain management, the question of its adequate adaptation arises. Transportation and
warehousing represent good opportunities for its application and could yield important benefits if addressed properly. Both
activities are conventionally classified as waste; however, when markets are distant, they are certainly necessary
for attaining competitive customer service levels. McKinnon et al. (2003) and Ackermann (2007) observe that most
distribution networks carry significant waste and unnecessary costs. For the identification of waste between facilities and
installations in a supply chain, Jones et al. (2003) recommend Value Stream Mapping for the extended enterprise. When
mapping at the supply chain level, unnecessary inventories and transportation emerge as the important wastes. Unnecessary
transportation waste is related to location decisions aimed at improving performance at given points of the supply chain; the
solutions suggested for its elimination therefore concern the relocation and consolidation of facilities, a change of
transportation mode or the implementation of milk runs. In addition to transportation, warehousing is another important part
of a distribution network. Value stream mapping at the supply chain level emphasizes the identification of inventory
waste and does not consider the elimination of waste in warehousing operations. However, it is important to
realize that warehousing can have an important impact on the supply chain cost structure and on the capacity to respond
to customer needs. Lean transportation and warehousing are still new areas in full development.
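Because the paper's main measure is the TPM Operational Effectiveness Index, it helps to recall its conventional form: the product of availability, performance and quality rates. The Python sketch below computes it for an invented delivery shift; the figures are placeholders, and the paper's adapted distribution measures would replace these inputs.

```python
def oee(planned_time, downtime, ideal_cycle, units, defects):
    """Conventional TPM OEE: availability x performance x quality.
    Adapted measures for distribution would replace these inputs."""
    operating_time = planned_time - downtime
    availability = operating_time / planned_time
    performance = (ideal_cycle * units) / operating_time
    quality = (units - defects) / units
    return availability * performance * quality, (availability, performance, quality)

# Hypothetical delivery-shift figures: 480 planned minutes, 60 lost,
# 2.0 min ideal time per drop, 180 drops, 6 faulty deliveries.
index, parts = oee(480, 60, 2.0, 180, 6)
print(f"OEE = {index:.1%}, A/P/Q = " + ", ".join(f"{p:.1%}" for p in parts))
```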


International Journal of Industrial Engineering, 19(7), 289-296, 2012.

ALTERNATIVE CONSTRUCTIVE HEURISTIC ALGORITHM FOR PERMUTATION FLOW-SHOP SCHEDULING PROBLEM WITH MAKESPAN CRITERION
Vladimír Modrák, Pavol Semančo and Peter Knuth
Faculty of Manufacturing Technologies, TUKE, Bayerova 1, Presov, Slovakia
Corresponding author email: [email protected]
In this paper, a constructive heuristic algorithm is presented for the deterministic flow-shop scheduling problem with the
makespan criterion. The algorithm addresses the m-machine, n-job permutation flow-shop scheduling problem. The paper is
organized so that different scheduling approaches for flow-shop problems can be benchmarked: selected heuristic
techniques and a genetic algorithm are used as references against which the proposed algorithm is compared. The results of
the experiments show that the proposed algorithm gives better, or at least comparable, solutions than the benchmarked
constructive heuristic techniques. Finally, the average computational times (CPU time in ms) are compared for each problem
size.
Keywords: make-span, constructive heuristics, genetic algorithm, CPU time

(Received 13 Mar 2011; Accepted in revised form 1 Feb 2012)

1. INTRODUCTION
Dispatching rules are one of the most common application areas of heuristic methods used for factory scheduling (Caskey,
2001). The two basic production types, job shop and flow shop, face the scheduling problem of finding a feasible sequence of
jobs on given machines that optimizes some specific objective function. The criterion selected for this study, job completion
time (makespan), can be defined as the time span from material availability at the first processing operation to completion at
the last operation. Johnson (1954) showed that, in a 2-machine flow shop, an optimal sequence can be constructed. The
m-machine flow-shop scheduling problem (FSSP) was shown to be strongly NP-hard for m ≥ 3 (Garey et al., 1976). FSSPs can be
divided into two main categories: dynamic and static. Hejazi and Saghafian (2005) characterize the scheduling problem as an
effort to specify the order and timing of the processing of the jobs on machines, with one or more objectives respecting the
above-mentioned assumptions. This paper is concerned with the multi-machine FSSP, which belongs to the class of group shop
scheduling problems. The criterion of optimality in a flow-shop sequencing problem is usually specified as minimization of the
makespan. If there are no release times for the jobs, then the total completion time equals the total flow time. Maximum
criteria should be used when interest is focused on the whole system (Mokotoff, 2011). Pan and Chen (2004) studied the
re-entrant flow shop (RFS) with the objective of minimizing the makespan (Cmax) and the average flow time of jobs, proposing
optimization models based on integer programming and a heuristic procedure; in addition, they developed new dispatching rules
to accommodate the re-entry feature. In an RFS, all jobs have the same routing over the machines of the shop, and the same
sequence is traversed several times to complete the jobs. Chen et al. (2009) presented a hybrid genetic algorithm for the RFS
scheduling problem aimed at improving the performance of the Genetic Algorithm (GA) and of the heuristic methods proposed by
Pan and Chen (2004).
In some cases, specific constraints are assumed when calculating completion times. Such a situation arises in the FSSP, for
example, when no idle time is allowed on machines; this constraint reflects an important practical situation that occurs when
expensive machinery is employed (Chakraborty, 2009). The general scheduling problem for a classical flow shop
gives rise to (n!)^m possible schedules. To reduce the number of possible schedules, it is reasonable to
assume that all machines process jobs in the same order (Gupta, 1975). In the classical flow-shop scheduling problem,
queues of jobs are allowed at any of the m machines in the processing sequence, on the assumption that jobs may wait on or
between the machines (Allahverdi et al., 1999, 2008). Moreover, setup times are not considered when calculating the
makespan.
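The makespan of a given permutation can be computed with the standard completion-time recurrence C(j, m) = max(C(j-1, m), C(j, m-1)) + p(j, m). A minimal Python sketch follows; the 4-job, 3-machine processing times are invented for illustration.

```python
def makespan(sequence, p):
    """Makespan of a permutation flow shop.
    p[j][k] = processing time of job j on machine k; C[k] holds the
    completion time of the most recent job on machine k and is updated
    with C(j, k) = max(C(j-1, k), C(j, k-1)) + p[j][k]."""
    m = len(p[0])
    C = [0] * m
    for j in sequence:
        C[0] += p[j][0]
        for k in range(1, m):
            C[k] = max(C[k], C[k - 1]) + p[j][k]
    return C[-1]

# Hypothetical 4-job, 3-machine instance.
p = [[3, 2, 4], [2, 5, 1], [4, 1, 3], [3, 3, 2]]
print(makespan([0, 1, 2, 3], p))  # makespan of one candidate permutation
```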
The approximation algorithms reported to date can be categorized into two types: constructive methods and improvement
methods. Constructive methods include slope-index-based heuristics, the CDS heuristic and others. Most improvement
approaches are based on modern meta-heuristics such as Simulated Annealing, Tabu Search and Genetic Algorithms.
Modern meta-heuristic algorithms can be applied easily to various FSSPs and usually obtain better
solutions than constructive methods. However, Kalczynski and Kamburowski (2005) showed that many meta-heuristic

International Journal of Industrial Engineering, 19(7), 297-304, 2012.

MEDIA MIX DECISION SUPPORT FOR SCHOOLS BASED ON ANALYTIC NETWORK PROCESS
Shu-Hsuan Chang, Tsung-Chih Wu, Hwai-En Tseng, Yu-Jui Su, and Chen-Chen Ko
1 Department of Industrial Education and Technology, National Changhua University of Education, No. 2, Shida Rd., Changhua City 500, Taiwan, ROC
2 Department of Industrial Engineering and Management, National Chin-Yi University of Technology, 35, Lane 215, Section 1, Chung-Shan Road, Taiping City, Taichung County 411, Taiwan, ROC
3 Asia-Pacific Institute of Creativity, No. 110, Syuefu Rd., Toufen Township, Miaoli County 351, Taiwan, ROC
*Corresponding author: Shu-Hsuan Chang, [email protected]

Media selection is a multi-criteria decision making (MCDM) problem. Decision makers with budget constraints should
select the media vehicles with the greatest effect on audiences by simultaneously considering multiple, interdependent
evaluation criteria. This work develops a systematic decision support algorithm for media selection. The Analytic Network
Process (ANP) is adopted to determine the relative weights of the criteria, and an Integer Programming (IP) model is then
applied to identify the optimum combination of media under a fixed budget. An empirical example demonstrates the
computational process and effectiveness of the proposed model.
Significance: The decision model aims to develop a systematic decision-support hybrid algorithm that finds the best media
mix for student-recruiting advertising under budget constraints by simultaneously considering multiple,
interdependent evaluation criteria. An empirical example of media selection for school C
demonstrates the computational process and effectiveness of the proposed model.

Keywords: MCDM, Media Selection, Analytic Network Process (ANP), Integer Programming (IP)
(Received 9 Sep 2011; Accepted in revised form 1 Feb 2012)

1. INTRODUCTION
Consumers have benefited from the revolutionary growth in the number of TV and radio channels, magazines, newspapers
and outdoor media in recent decades. However, the time devoted to a single medium constantly shrinks, and the complexity
of the media landscape undermines the stability of media habits. As the attention of consumers is spread over more media
categories than ever before, only one conclusion is possible: an effective media strategy must take a multimedia selection
approach (Franz, 2000).The media mix decisions, a unique case of a resource allocation problem, is a complex
multi-faceted decision (Dyer, Forman, and Mustafa, 1992). Selecting the best media requires considering not only cost and
the number of readers, but also the efficiency with which the medium reaches the target audience. These developments
have influenced the media usage habits of target audiences as well as the fit between the product and the characteristics of
the medium. The media selection approach is defined as the process whereby the decision maker selects the media vehicles
that affect the audience effectively by simultaneously considering multiple and interdependent evaluation criteria, which is
a multi criteria decision making (MCDM) problem (Lgnizio, 1976; Dyer, Forman, and Mustafa, 1992). Many factors have
increased the complexity of the media selection decision. The criteria are usually interdependent (Gensch, 1973).
Moreover, since some criteria are uncertainty, qualitative, and subjective, consistent expert opinions are rare (Dyer, Forman
and Mustafa, 1992; Calantone, 1981). So far, the literature on media selection problems suggests that the criteria for
evaluating media are independent and ignore the interactions between the criteria (Lgnizio, 1976; Lee, 1972; Dyer, Forman
and Mustafa, 1992). Since the process for media selection is so complicated, an effective tool for assessing interdependent
criteria is needed. However, AHP models a decision-making framework that assumes a unidirectional hierarchical
relationship among decision levels (Triantaphyllou and Mann, 1995; Meade and Presley, 2002; Shen et al., 2010). Analytic
Network Process (ANP) is an effective tool when elements of the system are interdependent (Saaty, 2001). The ANP is
more accurate in complex situation due to its capability of modeling complexity and the way in which comparisons are
performed (Yang et al., 2010).
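Once ANP has produced relative weights for the candidate media, the budget-constrained selection stage can be viewed as a small 0/1 knapsack problem. The Python sketch below brute-forces that selection; the media names, weights and costs are hypothetical, and a real application would use the paper's IP model rather than enumeration.

```python
from itertools import combinations

# Hypothetical ANP-derived weights and costs (in thousands) for five media.
media = ["TV", "radio", "newspaper", "web", "outdoor"]
weight = [0.32, 0.11, 0.18, 0.27, 0.12]
cost = [500, 120, 200, 150, 180]
BUDGET = 600

# Enumerate every subset and keep the highest total weight within budget.
best_set, best_value = (), 0.0
for r in range(1, len(media) + 1):
    for subset in combinations(range(len(media)), r):
        if sum(cost[i] for i in subset) <= BUDGET:
            value = sum(weight[i] for i in subset)
            if value > best_value:
                best_set, best_value = subset, value

print([media[i] for i in best_set], round(best_value, 2))
```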
The ANP has been applied to many areas, including (1) evaluating and selecting alternatives; e.g., ANP has been utilized
to construct a model for selecting an appropriate project (Lee and Kim 2001; Shang et al., 2004; Chang, Yang, and Shen,
2007), a company partner (Chen et al., 2004), and an appropriate product design (Karsak et al., 2002); (2) optimizing a
product mix (Chung et al., 2005) and price allocation (Momoh and Zhu, 2003); (3) constructing models for assessing

International Journal of Industrial Engineering, 19(8), 305-319, 2012.

SOLVING CAPACITATED P-MEDIAN PROBLEM BY A NEW STRUCTURE OF NEURAL NETWORK
Hengameh Shamsipoor, Mohammad Ali Sandidzadeh, Masoud Yaghini
School of Railway Engineering, Iran University of Science & Technology, Kermanshah University of Technology, Iran
Corresponding author email: [email protected]
One of the most popular and renowned location-allocation problems is the Capacitated P-Median Problem (CPMP), in which the
locations of p capacitated medians are selected to serve a set of n customers so that the total distance between customers
and medians is minimized. In this paper we first present a new dynamic assignment method based on an urgency
function. We then propose a new formulation for the CPMP based on two types of decision variables with 2(n + p) linear
constraints. Based on this formulation, we propose a novel neural network structure
comprising five layers: a two-layered Hopfield neural network with location and
allocation layers, combined with three further layers that control the Hopfield network. The advantage of the proposed
network is that it always provides feasible solutions, and since the constraints are embedded in the network structure rather
than in the energy function, the need for tuning parameters is avoided. Under the computational dynamics of the new neural
network, the energy function always decreases or remains constant. The effectiveness and efficiency of the algorithm are
analyzed on standard and simulated problems of different sizes. Our results show that the proposed neural network
generates solutions of excellent, or at least acceptable, quality.
Keywords: Location-allocation, Capacitated p-Median Problem (CPMP), Neural Network, Hopfield Network.
(Received 2 Feb 2010; Accepted in revised form 1 Feb 2012)

1. INTRODUCTION
Location allocation problem has several applications in the areas of telecommunication, transportation and distribution and
has received a great deal of attention from many researchers recently. One of the most well-known location-allocation
problems is the capacitated p-median problem. Its aim is to locate p facilities within the given space to serve n demand(s)
with the minimum total cost possible. We illustrate a typical p-median model in Fig. 1. The total cost of the solution
presented is the sum of the distance between demand points and selected location which is presented by the black lines [1].


Figure 1. Typical output for the p-median problem

The p-median problem is NP-hard: the computation time of exact methods grows rapidly (in the worst case, exponentially) with
the size of the input. Consequently, many heuristic methods have been developed to solve it.
In this article we use neural network techniques to solve the p-median problem in which each facility can serve only a
limited number of demands.
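For orientation, the sketch below evaluates a candidate CPMP solution in Python: given a trial set of medians, it assigns each customer to the nearest median with spare capacity and sums the distances. The instance is invented, and the greedy assignment is only a simple stand-in for the urgency-based assignment and neural network developed in the paper.

```python
import math

# Hypothetical instance: customer coordinates and demands,
# a candidate set of p medians with a common capacity.
customers = [(1, 1), (2, 5), (6, 2), (7, 7), (3, 3), (8, 1)]
demand = [3, 2, 4, 3, 2, 4]
medians = [0, 3]          # candidate median indices (p = 2)
CAPACITY = 10

def d(a, b):
    return math.hypot(customers[a][0] - customers[b][0],
                      customers[a][1] - customers[b][1])

# Greedy assignment: customers in decreasing order of demand, each to the
# nearest median with spare capacity.
load = {m: 0 for m in medians}
total = 0.0
for c in sorted(range(len(customers)), key=lambda c: -demand[c]):
    open_m = [m for m in medians if load[m] + demand[c] <= CAPACITY]
    if not open_m:
        raise ValueError("infeasible candidate median set")
    m = min(open_m, key=lambda m: d(c, m))
    load[m] += demand[c]
    total += d(c, m)

print(round(total, 2), load)
```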

International Journal of Industrial Engineering, 19(8), 320-329, 2012.

ADOPTING THE HEALTHCARE FAILURE MODE AND EFFECT ANALYSIS TO IMPROVE THE BLOOD TRANSFUSION PROCESSES
Chao-Ton Su1,*, Chia-Jen Chou1, Sheng-Hui Hung2, Pa-Chun Wang2,3,4
1 Department of Industrial Engineering and Engineering Management, National Tsing Hua University, Hsinchu 30013, Taiwan, R.O.C.
2 Quality Management Center, Cathay General Hospital, Taipei 10630, Taiwan, R.O.C.
3 Fu Jen Catholic University School of Medicine, Taipei County 24205, Taiwan, R.O.C.
4 Department of Public Health, China Medical University, Taichung 40402, Taiwan, R.O.C.
*Corresponding author. Email: [email protected]

The aim of this study is to apply healthcare failure mode and effects analysis (HFMEA) to evaluate the risky and
vulnerable blood transfusion process. By implementing HFMEA, the research hospital plans to develop a safer blood
transfusion system capable of detecting potentially hazardous events in advance. In this case, eight possible failure
modes were identified in total; considering severity and frequency, seven of them had hazard
scores higher than 8. Five actions were undertaken to eliminate the potentially risky processes. After the completion
of the HFMEA improvement, from the end of July 2008 to December 2009, two adverse events occurred during the blood
transfusion processes, an error rate of 0.012%. HFMEA proved feasible and effective for predicting and preventing
potentially risky transfusion processes. We also successfully introduced information technology to improve the whole
blood transfusion process.
Keywords: healthcare failure mode and effect analysis (HFMEA), blood transfusion, hazard score.
(Received 30 Mar 2011; Accepted in revised form 1 Feb 2012)

1. INTRODUCTION

Reducing medical errors in a given healthcare process is critical to patient safety. Traditionally, risk assessment methods in
healthcare have analyzed adverse events individually. However, risk evaluation approaches should reflect healthcare
operations, which usually consist of sequential procedures; in other words, systematic, process-driven
risk prevention is necessary for every healthcare provider. Many studies have illustrated the necessity of
introducing risk analysis methods to prevent medical errors (Bonnabry et al., 2006; Bonan et al., 2009).
Healthcare Failure Mode and Effect Analysis (HFMEA) is a novel technique for evaluating healthcare processes
proactively. HFMEA was first introduced by the Department of Veterans Affairs (VA) system and developed by the
National Center for Patient Safety (NCPS) in the United States. HFMEA is a hybrid risk evaluation system that combines
the ideas behind Failure Mode and Effect Analysis (FMEA), Hazard Analysis and Critical Control Point (HACCP), and the
VA's root cause analysis (RCA) program. HFMEA usually includes an interdisciplinary team, process and subprocess flow
diagrams, identification of failure modes and their causes, a hazard scoring matrix, and a decision tree to determine system
weaknesses. Currently, the HFMEA method is encouraged by the American Society for Healthcare Risk Management for
hospitals in the United States (Gilchrist et al., 2008).
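The hazard scoring matrix works by rating each failure mode's severity and probability on small ordinal scales and multiplying them; modes whose scores exceed the study's threshold of 8 are escalated. The Python sketch below illustrates the mechanics with invented failure modes, not the eight identified in the study.

```python
# Minimal HFMEA-style hazard scoring sketch: severity and probability are
# rated on 1-4 scales and multiplied. The failure modes below are invented
# placeholders, not the eight modes identified in the study.
failure_modes = [
    ("wrong patient identification", 4, 2),
    ("mislabeled blood sample",      4, 3),
    ("delayed product transport",    2, 3),
]

THRESHOLD = 8  # scores above this level trigger corrective actions

for name, severity, probability in failure_modes:
    score = severity * probability
    flag = "ACT" if score > THRESHOLD else "monitor"
    print(f"{name}: hazard score {score} -> {flag}")
```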
Clinical research has identified blood transfusion as a significantly risky process (Klein, 2001; Rawn, 2008). Errors in
blood transfusion result in immediate and long-term negative outcomes, including increased rates of death, stroke,
renal failure, myocardial infarction, and infection, among others. Reducing the risks of blood transfusion is therefore a
major patient safety issue for all hospitals. The blood transfusion process sits at the top of the list for process analysis,
since it affects a large number of patients and the procedure is complex in nature (Burgmeier, 2002). Linden et al.
(2002) noted that blood transfusion is a complicated system involving the hospital blood bank, patient floor,
emergency department, operating room, transfusionist, and transporter. A more comprehensive and proactive risk analysis
of the blood transfusion process is necessary to improve patient safety.
A series of transfusion-related adverse events that took place at the research hospital urged the Patient Safety
Committee to take decisive action to prevent harmful medical errors resulting from transfusion-related processes. An
efficient risk prevention method was sought to reduce the number of adverse blood transfusion events at the research
hospital. The aim of this study is to conduct HFMEA to evaluate the risky and vulnerable blood transfusion process. By

International Journal of Industrial Engineering, 19(8), 330-340, 2012.

ECODESIGN CASE STUDIES FOR FURNITURE COMPANIES USING THE ANALYTIC HIERARCHY PROCESS
Miriam Borchardt, Miguel A. Sellitto, Giancarlo M. Pereira, Luciana P. Gomes
Vale do Rio dos Sinos University (UNISINOS)
Address: Av. Unisinos, 950, São Leopoldo, CEP 93200-000, RS, Brazil
Corresponding author e-mail: [email protected]
The purpose of this paper is to propose a method to assess the degree of implementation of ecodesign in manufacturing
companies. The method was developed based on a multi-criteria decision support method known as the analytic hierarchy
process (AHP) and was applied in three furniture companies. Ecodesign constructs were extracted from the literature on
environmental practices and weighted according to the AHP method, allowing the relative
importance of the constructs to be determined for each company. Finally, a team at each company answered a questionnaire to
check each item's degree of application. One year later, the method was applied again to the same three companies.
By comparing the assessed relative importance of each ecodesign construct with the degree of its application, we were able
to observe how the companies' priorities relate to their eco-conception.
Keywords: ecodesign, design for environment, sustainability, furniture industry, Analytic Hierarchy Process, eco-conception.
(Received 11 Sep 2011; Accepted in revised form 1 Feb 2012)

1. INTRODUCTION
One of the key contributing causes to the environmental degradation that threatens the planet is the increasing production
and consumption of goods and services. Some of the factors that contribute to environmental degradation are (a) the
lifestyle of some societies, (b) the development of emerging countries, (c) the aging of populations in developed countries,
(d) inequalities between the planets regions and (e) the increasingly short life cycles of products (Manzini and Vezzolli,
2005).
Environmental considerations, such as ecodesign (or design for (the) environment, DfE), cleaner production, recycling
projects and the development of sustainable products, promote a redesign of techniques for the conceptualization, design
and manufacturing of goods (Byggeth et al., 2007). A balance between the environmental cost and the functional
income of a production method is essential for achieving sustainable development, a requirement that has resulted in a
situation in which environmental issues must now be merged into classical product development processes (Luttropp and
Lagerstedt, 2006; Plouffe et al., 2011).
Out of this context, we can define ecodesign as a technique for establishing a product project in which the usual project
goals, manufacturing costs and product reliability are considered, along with environmental goals such as the reduction of
environmental risks, reduction in the use of natural resources, increase in recycling and the efficiency in the use of energy
(Fiksel, 1996). Such a technique makes it possible to relate the functions of a product or service to issues in environmental
sustainability, reducing environmental impact and increasing the presence of eco-efficient products, as well as encouraging
technological innovation (Manzini and Vezzoli, 2005; Santolaria et al., 2011).
The environmental practices observed in the ecodesign literature chiefly relate to the materials, components,
processes and characteristics of products, including the use of energy, storage, distribution, packing and material residuals
(Wimmer et al., 2005; Luttropp and Lagerstedt, 2006; Fiksel, 1996). However, even though these techniques have been
explored in the literature, the environmental practices related to ecodesign take a generic shape and are difficult to fit to
specific product projects and industrial processes (Borchardt et al., 2009).
Authors such as De Mendonça and Baxter (2004) and Goldstein et al. (2011) have worked to develop performance
indicators associated with ecodesign and have related ecodesign principles to environmental management, showing a
positive correlation between the two. Notably, however, there is no consensus on this topic. Although
environmental assessments are common in the literature, no objective method generates an ecodesign
measurement instrument for evaluating the degree of implementation. Such an instrument would help organizations
prioritize their efforts toward the most significant environmental gains.
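At the core of the AHP step is the derivation of construct weights from a pairwise-comparison matrix. The Python sketch below uses the common geometric-mean (row) approximation of Saaty's principal eigenvector on an invented 3x3 matrix; the actual study derives its weights from the companies' own judgments.

```python
import math

# Hypothetical pairwise-comparison matrix for three ecodesign constructs
# (Saaty's 1-9 scale); A[i][j] says how much construct i dominates j.
A = [
    [1,   3,   5],
    [1/3, 1,   2],
    [1/5, 1/2, 1],
]

# Geometric-mean (row) approximation of the principal eigenvector.
n = len(A)
gm = [math.prod(row) ** (1 / n) for row in A]
total = sum(gm)
weights = [g / total for g in gm]

print([round(w, 3) for w in weights])  # relative importance of the constructs
```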
There is a need for a structured approach to ecodesign that can address environmental concerns in a coherent way.
However, the limited capabilities and resources available to many companies frequently hamper the development of an

International Journal of Industrial Engineering, 19(9), 341-349, 2012.

APPLYING GENETIC LOCAL SEARCH ALGORITHM TO SOLVE THE JOB-SHOP SCHEDULING PROBLEM
Chuanjun Zhu1, Jing Cao1, Yu Hang2, Chaoyong Zhang2
1 School of Mechanical Engineering, Hubei University of Technology, Wuhan, 430068, P.R. China
2 State Key Laboratory of Digital Manufacturing Equipment & Technology, School of Mechanical Science and Engineering, Huazhong University of Science and Technology, Wuhan, 430074, P.R. China

This paper presents a genetic local search algorithm for the job-shop scheduling problem; the chromosome encoding is the
operation-based representation. To reduce the search space, schedules are constructed using a procedure that generates
active schedules. After a schedule is obtained, a local search heuristic based on the N6 neighborhood structure is applied
to improve the solution. To avoid the premature convergence typical of conventional genetic algorithms (GA), an improved
precedence operation crossover (IPOX) and a modified generation-alternation scheme are proposed. The approach is tested on
a set of standard instances taken from the literature, and the computational results validate the effectiveness of the
proposed algorithm.
Keywords: Genetic Algorithms; Local Search Algorithms; Job-Shop Scheduling Problem
(Received 1 Oct 2010; Accepted in revised form 1 Feb 2012)

1. INTRODUCTION
Generally the Job-Shop scheduling problem can be described as follows: a set of n jobs is to be processed on a set of m
machines that are continuously available from time zero onwards, and each job has special processing technology. Each job
consists of a sequence of operations, and each of the operations uses one of the machines for a fixed duration. The
scheduling problem is to find a schedule which optimizes some index by determining machining sequence of job in every
machine. The hypothesis is as follows:
(1) The processes of different jobs have no machining sequence constraint;
(2) Any process can not be interrupted once be begun, and every machine can only machining one job a certain time;
(3) Machines can not break down.
The objective of the problem is to find a schedule which minimizes the makespan (Cmax) or optimizes other indices by
determining start time and machining sequence of every job. The Job-Shop Scheduling problem can be simplified as
n/m/G/Cmax.
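In the operation-based representation used in this paper, each job ID appears in the chromosome once per operation, and the k-th occurrence of a job denotes its k-th operation. The Python sketch below decodes such a chromosome into a simple semi-active schedule and returns its makespan; the two-job instance is invented, and the paper itself uses an active schedule builder rather than this plain decoder.

```python
# Decoding an operation-based chromosome into a semi-active schedule.
# ops[j][k] = (machine, duration) of job j's k-th operation.
ops = [
    [(0, 3), (1, 2)],   # job 0: machine 0 then machine 1 (hypothetical data)
    [(1, 4), (0, 1)],   # job 1: machine 1 then machine 0
]
chromosome = [0, 1, 1, 0]   # each job ID appears once per operation

def decode(chrom):
    next_op = [0] * len(ops)      # next operation index of each job
    job_ready = [0] * len(ops)    # completion time of each job's last op
    mach_ready = {}               # completion time of each machine
    makespan = 0
    for j in chrom:
        m, dur = ops[j][next_op[j]]
        start = max(job_ready[j], mach_ready.get(m, 0))
        finish = start + dur
        job_ready[j] = mach_ready[m] = finish
        next_op[j] += 1
        makespan = max(makespan, finish)
    return makespan

print(decode(chromosome))
```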
The job-shop scheduling problem is a well-known NP-hard problem with wide applications in industry.
To tackle this hard problem, job-shop scheduling has been studied by a significant number of researchers for several
decades, and many theoretical results have been proposed. The main research achievements include heuristic
dispatching rules (Panwalkar et al., 1977), mathematical programming (Blazewicz et al., 1991), simulation-based
methods (Kim et al., 1994), and Artificial Intelligence (AI)-based methods (Foo et al., 1994). Heuristic
dispatching rules are straightforward and easy to implement, but they typically find only local optima of moderate quality.
With mathematical programming methods, the computing burden may increase exponentially with the scale of the
job-shop scheduling problem. Simulation methods can incur high computational cost and may not find the optimal solution. With
the development of computer technology, sophisticated optimization methods that mimic features of
biological evolution, physical systems and human behavior have developed rapidly. Consequently,
meta-heuristic methods such as genetic algorithms (GA) (Croce et al., 1995; Ibrahim et al., 2008), neural networks,
simulated annealing (SA) (Van Laarhoven et al., 1992) and tabu search (TS) (Taillard, 1994; Nowicki et al., 1996)
have become a research hotspot for the job-shop scheduling problem.
GA were originally developed by Professor J. Holland of the University of Michigan, who in 1975 published a
monograph systematically expounding the basic theory and methods of GA (Holland, 1975). GA draw on
the evolutionary criterion of the survival of the fittest in natural selection from Darwin's theory of evolution: they imitate
biological reproduction, mating and gene mutation through selection, crossover and mutation operations, and they search for
the best chromosome, on which the solution to the problem is encoded. GA are general-purpose optimization algorithms; their
coding techniques and genetic operations are comparatively simple, the optimization process is not bound by constraint
conditions, and they exhibit implicit parallelism and global search of the solution space, so GA have become widely used in
solving the job-shop scheduling problem. However, while GA have global search ability thanks to their parallel population
search, they have poor local search ability and are prone to premature convergence. A Local Search (LS) algorithm is used for
local search, but it is

International Journal of Industrial Engineering, 19(9), 350-358, 2012.

BIOBJECTIVE MODEL FOR REDESIGNING SALES TERRITORIES
Juan Gabriel Correa Medina1, Loecelia Guadalupe Ruvalcaba Sánchez1, Elias Olivares-Benitez2, Vittorio Zanella Palacios3
1 Department of Information Systems, Autonomous University of Aguascalientes
2 Metallurgical Engineer, National Polytechnic Institute
3 Department of Computer Engineering, Autonomous University of Puebla State

Designing and updating sales territories are strategic activities prompted by causes such as mergers and changes in the
markets, among others. The new territories must satisfy the planning characteristics defined by each company. In this paper
we propose a biobjective mixed integer programming model for redesigning sales territories, motivated by
the case of a company that distributes its products throughout Mexico. The model minimizes both the total sum of the
distances and the variation of each salesman's sales volume with respect to the current situation. The model is solved
using the ε-constraint method to obtain the true efficient set, and a heuristic method to obtain an approximate efficient set;
the two efficient sets are compared to determine the quality of the solutions obtained by the heuristic method.
Keywords: biobjective model, sales territory, integer programming, business strategies
(Received 23 Feb 2011; Accepted in revised form 1 Feb 2012)

1. INTRODUCTION
The design and constant updating of the sales territories are important strategic activities that have as intention to improve
the service level to customers through an efficient and effective covering of the markets. The updating of the sales
territories is required mainly because of mergers of firms and changes in the markets (expansion, contraction).
Sometimes just small sales territory realignment can have a big impact on the sales force productivity. Therefore it is a
critical and ongoing process to help maximize sales productivity and revenue. Some of the benefits of sales territory design
include: 1) a better coverage and customer service leading to increased productivity and sales revenue; 2) Increased sales
by prioritizing accounts with the greatest potential; 3) Reduced costs of sales through shorter and cheaper travel times; 4)
improved morale, performance and permanence of sales people due to equitable distribution of accounts and an impartial
system for achieving rewards; 5) competitive advantage through the ability to reach new opportunities faster than the
competitors.
Territory design or redesign groups small geographical areas, defined as sales coverage units (SCUs), into larger
geographical units known as territories. These territories must satisfy planning characteristics determined by the
firm's management, considering the assignment of customers, types of products, geographical areas, workload, sales volume
and territory dimensions for every salesman, among others.
The sales territory design problem is classified as a districting problem. Typical districting problems include the drawing
of political constituencies, school board boundaries, and sales or delivery regions (Bozkaya et al., 2003). Although multiple
exact and heuristic methods have been applied to this problem, generalization is difficult because the goals of
every firm differ. In addition, Pereira-Tavares et al. (2007) note that when there are multiple criteria the problem
is considered NP-hard, and Puppe and Tasnadi (2008) showed that, in discrete districting problems with geographical
limitations, determining an impartial redistricting is computationally intractable (NP-complete).
In this paper a biobjective mixed integer programming model is proposed for redesigning sales territories. The paper is
structured as follows. Section 2 describes the problem and its characteristics. Section 3 presents the mixed integer
programming model, the exact method and the heuristic algorithm used to solve it, and the comparison metrics. Section 4
explains the experiments and shows the results obtained. Section 5 presents the conclusions and future work for this
research.
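The ε-constraint method referred to above minimizes one objective while constraining the other by a bound ε that is swept over its range. The Python toy below applies the idea by brute force to a tiny territory-assignment instance with four SCUs and two salesmen; all numbers are invented, and the real model is solved with a MIP solver rather than enumeration.

```python
from itertools import product

# Toy redesign instance: assign each of 4 SCUs to one of 2 salesmen.
# f1 = total distance; f2 = total deviation of sales volume from each
# salesman's current volume. All numbers are hypothetical.
dist_to = [(2, 5), (4, 1), (3, 3), (6, 2)]   # distance of SCU i to salesman 0/1
sales = [10, 8, 6, 9]
current = [18, 15]                            # current volume per salesman

def objectives(assign):
    f1 = sum(dist_to[i][s] for i, s in enumerate(assign))
    vol = [0, 0]
    for i, s in enumerate(assign):
        vol[s] += sales[i]
    f2 = sum(abs(v - c) for v, c in zip(vol, current))
    return f1, f2

solutions = [objectives(a) for a in product(range(2), repeat=len(sales))]

# Epsilon-constraint sweep: minimize f1 subject to f2 <= eps.
efficient = set()
for eps in sorted({f2 for _, f2 in solutions}):
    feasible = [(f1, f2) for f1, f2 in solutions if f2 <= eps]
    efficient.add(min(feasible))
print(sorted(efficient))
```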

2. PROBLEM DEFINITION
The problem analyzed in this paper is motivated by a firm that sells its products throughout Mexico; it was originally
analyzed by Olivares-Benítez et al. (2009). To control its sales force, the firm has divided the Mexican Republic
into regions. In every region, the salesmen have inherited and enlarged their customer portfolios to improve their income
without intervention from the firm's management. This absence of control has produced unbalanced territories with regard

International Journal of Industrial Engineering, 19(10), 369-388, 2012.

REVERSE LOGISTICS: PERSPECTIVES, EMPIRICAL STUDIES AND RESEARCH DIRECTIONS
*Arvind Jayant1, P. Gupta2, S.K. Garg3
1,2 Department of Mechanical Engineering, Sant Longowal Institute of Engineering & Technology (Deemed to be University), Longowal, Punjab, India
3 Department of Mechanical Engineering, Delhi Technological University, Delhi-110042
*Corresponding Author E-mail address: [email protected]

Environmental and economic issues have significant impacts on reverse logistics practices in supply chain management and
are thought to form one of the developmental cornerstones of sustainable supply chains. A perusal of the literature shows
that a broad frame of reference for reverse logistics has not yet been adequately developed. Recent, although limited,
research has begun to show that sustainable supply chain practices, which include reverse logistics factors, lead to more
integrated supply chains, which ultimately can lead to improved economic performance. The objectives of this paper are to
report and review various perspectives on the design and development of reverse supply chains, planning and control issues,
coordination issues, and product remanufacturing and recovery strategies; to understand and appreciate the various
mechanisms available for the efficient management of reverse supply chains; and to identify gaps in the literature. Ample
opportunities exist for the growth of this field owing to its multi-functional and interdisciplinary focus, and it is
critical for organizations to consider reverse logistics from both an economic and an environmental perspective. The
characteristics of reverse logistics provided here can help researchers and practitioners advance their work in the future.
Significance: The objective of this study is to encourage research and provide researchers with future research directions
in the field of reverse logistics, for which empirical research methods alone are not sufficient. In addition, the
research directions suggested in the paper address several opportunities and challenges that currently face business
managers and academicians working in closed-loop supply chain management.
Keywords: Reverse supply chain management, Remanufacturing, Recycling, Reverse logistics.
(Received 11 May 2011; Accepted in revised form 1 Feb 2012)

1. INTRODUCTION
Reverse logistics, which is the management or return flow due to product recovery, goods return, or overstock, form a
closed-loop supply chain. The success of the closed-loop supply chain depends on actions of both manufacturers and
customers. Now, manufacturers require producing products which are easy for disassembly, reuse and remanufacturing
owing to the law of environmental protection. On the other hand, the number of customers supporting environmental
protection by delivering their used products to collection points is increasing (Lee and Chan, 2009). According to the
findings, the total cost spent in reverse logistics is huge. In order to minimize the total reverse logistics cost and high
utilization rate of collection points, selecting appropriate locations for collection points is critical issues in RSC/reverse
logistics. Reverse logistics receive increasing attention from both the academic world and industries in recent years. There
are a number of reasons for its attention. According to the findings of Rogers and Tibben-Lembke (1998), the total logistics
cost amounted to $862 billion in 1997 and the total cost spent in reverse logistics is enormous that amounted to
approximately $35 billion which is around 4% of the total logistics cost in the same year. The concerns about energy
saving, green legislation and the rise of electronic retaining are increasing. Also, the emergence of e-bay advocates product
reuse. Online shoppers typically return items such as papers, aluminum cans, and plastic bottles whose consumption and
return rates are high. Although most companies realize that the total processing cost of returned products is higher than the
total manufacturing cost, it is found that strategic collections of returned products can lead to repetitive purchases and
reduce the risk of fluctuating the material demand and cost.
Research on the reverse supply chain has been growing since the early 1970s (see, for example, Zikmund and Stanton, 1971;
Gilson, 1973; Schary, 1977; Fuller, 1978), and research on reverse logistics strategies and models appears in publications
from the 1980s onwards. However, efforts to synthesize this research into an integrated, broad-based body of knowledge have
been limited (Pokharel and Mutha, 2009). Most research focuses on only a small area of reverse logistics systems, such as
network design, production planning or environmental issues. Fleischmann et al. (1997) studied reverse logistics from the
perspectives of distribution planning, inventory control and production planning. Carter and Ellram (1998) focused on the
transportation and

International Journal of Industrial Engineering, 19(10), 389-400, 2012.

CONTINUOUS-REVIEW INVENTORY MODELS USING DIFFUSION APPROXIMATION FOR BULK QUEUES
Singha Chiamsiri1, Hui Ming Wee2 and Hsiao Ching Chen3
1 School of Management, Asian Institute of Technology, Klong Luang, Pathumthani 12120, Thailand
2 Industrial & Systems Engineering Department, Chung Yuan Christian University, Chungli 32023, Taiwan, ROC
3 Department of Business Management, Chungyu Institute of Technology, Keelung 20103, Taiwan, ROC
Corresponding author: H.M. Wee, e-mail: [email protected]

In this paper, two continuous-review inventory control models are developed using a steady-state diffusion approximation
method. Accuracy evaluations of the approximate optimal solutions are reported for selected Markovian-like queues used to
approximate the steady-state queue-size behavior of single-server queues with bulk arrivals and batch service. The diffusion
approximation method performs remarkably well for the base-stock-level, one-for-one ordering policy inventory model. The
approximation for the order-up-to inventory model with a replenishment lot size greater than one is also exceptionally good
at selected values of heavy traffic intensity and when the distributional characteristics of the replenishment service time
do not differ greatly from the exponential inter-arrival times of the demands.
Keywords: Inventory; Queueing; Continuous-review policy; Diffusion approximation
(Received 1 Apr 2010; Accepted in revised form 1 Feb 2012)

1. INTRODUCTION
There are many applications of diffusion approximations in population genetics modeling (Bahrucha-Reid 1960, Cox and
Miller 1968, and Feller 1966), the optimal control of a stochastic advertising model (Tapiero 1975), storage systems model
and inventory control model (Bather 1966, Harrison and Taylor 1976, and Puterman 1975), and in queuing models
(Kingman 1965, Chiamsiri and Leonard 1981, and Whitt 2004) and queuing networks/systems in computer applications
(Kleinrock 1976).
Diffusion models have been developed to mitigate the analytical and computational complexity of
performance measures and optimal solutions. For example, Chiamsiri and Leonard (1981) developed a diffusion process to
approximate the steady-state queue-size behavior of single-server queues with bulk arrivals and batch service, referred to
as bulk queues. Diffusion approximation solutions for various queue-size statistics were developed and evaluated for a
number of special Markovian-like bulk queues, and the method was shown to provide a robust solution for the queue-size
distribution under heavy traffic conditions. Rubio and Wein (1996) derived specific formulas for the base stock levels
in a multi-product production-inventory system by exploiting the make-to-stock structure and an open queueing network.
Perry et al. (2001) studied the problem of a broker in a dealership market whose buffer content (cash flow) is governed by
stochastic price-dependent demand and supply; three model variants are considered. In the first model, buyers and sellers
(borrowers and depositors) arrive independently according to price-dependent compound Poisson streams; the
second and third models are two variants of diffusion approximations. They developed an approach to analyze and
compute the cost function based on the optional sampling theorem. Wein (1992) noted that diffusion models require a
heavy traffic condition to be valid and used a diffusion process to model a multi-product, single-server make-to-stock
system.
The diffusion approximation method provides an approximate solution for a general class of queueing models and is
particularly valuable when compared with simulation, since both methods provide approximate numerical results; however,
the diffusion approximation method requires far less computation time to generate numerical results, especially for queues
under heavy traffic conditions.
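To give a flavor of such heavy-traffic approximations, the Python sketch below evaluates Kingman's classical G/G/1 formula for the mean waiting time, E[W] ≈ ρ/(1-ρ) · (c_a² + c_s²)/2 · E[S]. This standard formula is shown only for illustration; it is not the bulk-queue diffusion model developed in the paper, and the parameter values are hypothetical.

```python
def kingman_wait(rho, ca2, cs2, mean_service):
    """Kingman's heavy-traffic approximation for the mean waiting time in a
    G/G/1 queue: E[W] ~ rho/(1-rho) * (ca^2 + cs^2)/2 * E[S].
    Shown only to illustrate the flavor of heavy-traffic approximations;
    it is not the bulk-queue model developed in the paper."""
    return rho / (1.0 - rho) * (ca2 + cs2) / 2.0 * mean_service

# Hypothetical heavy-traffic settings: moderately variable arrivals and
# services, mean service time of 1 time unit.
for rho in (0.8, 0.9, 0.95):
    print(rho, round(kingman_wait(rho, 1.2, 0.8, 1.0), 2))
```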
Bather (1966) was the first author to develop a diffusion process model for an inventory control problem. The inventory
control problem considered was assumed to have instantaneous replenishments under a continuous-review (s, S) operating
policy; demands were assumed to follow a Wiener (Gaussian) process, and statistical decision theory was used to obtain
the optimal solution.
A more general diffusion process model for a storage system was considered by Puterman (1975). The diffusion process
model was found to be suitable for a storage system with an infinitely divisible commodity such as a liquid, e.g., oil,
blood, or whisky. Puterman (1975) also indicated that the model might be used to approximate more lumpy quantities such
as tires, whistles, or people, especially if the numbers are large. This is because sequences of stochastic input-output
processes such as queues, dams, and inventory systems often converge to limiting stochastic processes which are else

International Journal of Industrial Engineering, 19(9), 359-368, 2012.

A NURSE SCHEDULING APPROACH BASED ON SET PAIR ANALYSIS
Jianfeng Zhou1, Yuyun Fan2, Huazhi Zeng3
1 Department of Industrial Engineering, School of Mechatronics Engineering, Guangdong University of Technology, Guangzhou, China
2 Responsibility Nurse, Guangzhou Chest Hospital of China
3 Nursing Director, Guangzhou Chest Hospital of China

In practice, multiple sources of uncertainty need to be treated in nurse scheduling. The problem involves multiple
conflicting objectives, such as satisfying demand coverage requirements and maximizing nurses' preferences, subject to a
variety of constraints imposed by legal regulations, personnel policies and many other hospital-specific requirements. The
aim of this research is twofold: first, to apply set pair analysis (SPA) theory to the nurse scheduling problem (NSP) to
treat uncertainties and to model and solve the nurse schedule assessment problem; and second, to integrate the nurse
schedule assessment model with a genetic algorithm (GA) to establish a nurse scheduling approach. A case study of nurse
scheduling in a surgical unit of Guangzhou Chest Hospital in China is presented to validate the approach.
Keywords: nurse scheduling problem; set pair analysis; genetic algorithm
(Received 27 Feb 2011; Accepted in revised form 1 Feb 2012)

1. INTRODUCTION
Nurse scheduling problem (NSP) is a highly constrained scheduling problem which involves generating individual
schedules for nurses over a planning period. Usually, the period is a week or a number of weeks. At the end of a period, the
time table of the next period is to be determined. The nurses need to be assigned to possible shifts in order to meet the
constraints, and to maximize the schedule quality by meeting the nurses requests and wishes as much as possible.
Nurse scheduling is a NP complete problem. It is hard to obtain a high quality schedule via automatic approach due to
various constraints including legal regulations, management objectives, and requests of nurses need to be considered. Thus,
the nurse scheduling is often solved manually in many hospitals in practice.
A considerable number of studies of the nurse scheduling problem exist. The proposed approaches
can be divided into three types: mathematical programming approaches, heuristic approaches, and AI (Artificial
Intelligence) approaches (Cheang et al., 2003; Burke et al., 2004).
The mathematical programming approaches adopt traditional operational research methods, such as linear programming,
integer programming, and goal programming, to solve the optimization problem in nurse scheduling, with
objectives such as minimizing the number of nurses, maximizing the satisfaction of nurses' requests, and minimizing costs.
Warner (1976) proposed a nurse scheduling system that poses the scheduling decision as a large multiple-choice
programming problem whose objective function quantifies the preferences of individual nursing personnel concerning the
length of work stretches, rotation patterns, and requests for days off. Bartholdi et al. (1980) presented an integer linear
programming model with a cyclically structured 0-1 constraint matrix for cyclic scheduling. Bailey et al. (1985) utilized
linear programming for personnel scheduling when alternative work hours are permitted.
Heuristic approaches, especially meta-heuristic approaches, have shown their advantages in solving non-linear and
complex problems. They are generally better suited for generating an acceptable solution in cases where the constraint load
is extremely high and indeed in cases where even feasible solutions are very difficult to find. In recent years, the
meta-heuristic approaches, such as genetic algorithm, simulated annealing algorithm, and ant colony optimization algorithm,
have been adopted to solve nurse scheduling problem. Aickelin et al. (2003) presented a genetic algorithms approach to a
nurse scheduling problem arising at a major UK hospital. The approach used an indirect coding based on permutations of
the nurses, and a heuristic decoder that builds schedules from these permutations. Kawanaka et al. (2001) proposed a
genetic algorithm based method of coding and genetic operations with their constraints for NSP. The exchange of shifts is
done to satisfy the constraints in the coding and after the genetic operations. Thompson (1996) developed a
simulated-annealing heuristic for shift scheduling using employees having limited availability and, by comparing its
performance to that of an efficient optimal integer programming model, demonstrated its effectiveness. Gutjahr et al. (2007)
described the first ant colony optimization (ACO) approach applied to nurse scheduling, analyzing a dynamic regional
problem.
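The indirect encoding idea described above can be sketched as follows: a chromosome is a permutation of nurses, and a greedy decoder builds a roster from it. This is a minimal, hypothetical illustration with assumed coverage figures and a simple violation score, not the cited works' algorithms; random restarts stand in for the GA's population search.

# A minimal sketch of permutation encoding plus a greedy decoder for the NSP.
import random

NURSES, DAYS, NEEDED = 6, 7, 2          # staffing data (assumed)
requests_off = {(0, 0), (1, 0), (3, 5)} # (nurse, day) day-off wishes (assumed)

def decode(perm):
    """Greedy decoder: each day, put the least-loaded nurses on duty,
    breaking ties by their position in the chromosome (the permutation)."""
    load = {n: 0 for n in perm}
    violations = 0
    for d in range(DAYS):
        on_duty = sorted(perm, key=lambda n: load[n])[:NEEDED]
        for n in on_duty:
            load[n] += 1
            violations += (n, d) in requests_off
    return violations

best_perm, best_score = None, float("inf")
for _ in range(200):                    # stand-in for the GA's search
    perm = random.sample(range(NURSES), NURSES)
    score = decode(perm)
    if score < best_score:
        best_perm, best_score = perm, score
print("best permutation:", best_perm, "violated wishes:", best_score)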
Many results of artificial intelligence research have also been used to solve the NSP. Petrovic et al. (2003) proposed a new
scheduling technique for capturing rostering experience using case-based reasoning methodology. Examples of previously

International Journal of Industrial Engineering, 19(10), 401-411, 2012.

A FRAMEWORK OF INTEGRATED RECYCLABILITY TOOLS FOR AUTOMOBILE DESIGN
Novita Sakundarini1, Zahari Taha2, Raja Ariffin Raja Ghazilla1, Salwa Hanim Abdul Rashid1, Julirose Gonzales1
1 Department of Engineering Design and Manufacture, Center for Product Design and Manufacturing, University of Malaya, 50603 Kuala Lumpur, MALAYSIA
2 Faculty of Mechanical Engineering, University Malaysia Pahang, 26600 Pekan, Pahang, MALAYSIA
N. Sakundarini, email: [email protected]
Automobiles are a major transportation choice for society around the world. In many countries, the automotive industry is one of the drivers of economic growth, job creation, and technological advancement. Although the automotive industry gives promising returns, managing disposal at the end of an automobile's life is quite challenging. An automobile is a very complex product comprising thousands of components made from various materials that need to be treated separately. In addition, the short supply of natural resources has provided opportunities to reuse, remanufacture, or recycle automotive components. The End of Life Vehicle (ELV) Directive launched by the European Union mandates that the recyclability rate of automobiles must reach 85% by 2015. The aim of this legislation is to minimize the impact of end-of-life vehicles, contributing to the prevention, preservation, and improvement of environmental quality and energy conservation. Vehicle manufacturers and suppliers are requested to include these aspects at earlier stages of the development of new vehicles, in order to facilitate the treatment of vehicles at the time when they reach the end of their life. Therefore, the automobile industry has to establish a voluntary action plan for ELVs, with numerical targets to improve the ELV recycling rate, reduce automotive shredder residue (ASR) landfill volume, and reduce lead content. Many innovative approaches to improving recyclability have been implemented, but more intelligent solutions are still called for that integrate recyclability evaluation into the product development stage. This paper reviews some of the current innovative approaches used to improve recyclability and introduces a framework for an integrated recyclability tool to improve product recyclability throughout its development phase.
Keywords: End of Life Vehicle, disposal, product life cycle, ELV Directive, recyclability.
(Received 2 June 2009; Accepted in revised form 1 Feb 2012)

1. INTRODUCTION

The automobile industry meets an essential need of society by supporting ease of mobility. According to the OECD, the total number of vehicles is expected to increase by 32% from 1997 to 2020 (Kanari et al., 2003). In Europe, approximately 23 million vehicles were produced in 2007, while in Asia there were 30 million units, and these numbers increase every year (Pomykala et al., 2007). Automobiles comprise thousands of parts, of which 74-75% are made from ferrous and non-ferrous materials and 8-10% from plastics; typically less than 75% of vehicle weight is recycled and the rest is not. This situation leads to an increasing demand for landfill space. Unfortunately, there is little space left to treat this waste.
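As a concrete illustration of the recyclability-rate idea discussed here, the following sketch checks an assumed material breakdown against the Directive's 85% target, computing the rate simply as the recyclable mass fraction (an ISO 22628-style ratio); all masses and classifications are invented.

# A minimal sketch of an ELV recyclability-rate check (illustrative data only).
materials_kg = {
    "ferrous metal":   {"mass": 850, "recyclable": True},
    "aluminium":       {"mass": 120, "recyclable": True},
    "plastics":        {"mass": 110, "recyclable": False},
    "glass":           {"mass": 40,  "recyclable": True},
    "fluids_and_misc": {"mass": 80,  "recyclable": False},
}

total = sum(m["mass"] for m in materials_kg.values())
recyclable = sum(m["mass"] for m in materials_kg.values() if m["recyclable"])
rate = recyclable / total

print(f"recyclability rate: {rate:.1%} (target: 85.0%) ->",
      "meets target" if rate >= 0.85 else "falls short")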
According to Kumar and Putnam (2008), the automotive recycling infrastructure successfully recovers 75% of the material weight in end-of-life vehicles, mainly through ferrous metal separation. However, this industry faces significant challenges as automotive manufacturers increase the use of non-ferrous and non-metallic materials. Vehicle composition has been shifting toward light materials such as aluminium and polymers, with a consequently higher impact on the environment. Vehicles affect the environment throughout their entire life cycle through energy consumption, waste generation, greenhouse gases, hazardous substance emissions, and disposal at the end of their life (Kanari et al., 2003). To overcome this problem, the European Union has established the EU Directive on end-of-life vehicles, which requires that by 2015 the recyclability rate of automobiles must reach 85%. According to the EU Directive, recyclability means the potential for recycling of component parts or materials diverted from an end-of-life vehicle. Vehicle manufacturers and their suppliers are requested to include this aspect at the earlier stages of the development of new vehicles, in order to facilitate the treatment of vehicles when they reach their end of life. Many countries now refer to the EU legislation and try to demonstrate a strategy for fulfilling this requirement by using fewer non-recyclable materials in their products, accounting for energy usage, limiting waste streams, etc. Additionally, as consumption increases, raw

International Journal of Industrial Engineering, 19(11), 412-427, 2012.

THE OPERATION OF VENDING MACHINE SYSTEMS WITH STOCK-OUT-BASED, ONE-STAGE ITEM SUBSTITUTION
Yang-Byung Park, Sung-Joon Yoon
Department of Industrial and Management Systems Engineering, College of Engineering,
Kyung Hee University, 1 Seocheon-dong, Giheung-gu, Yongin-si, Gyeonggi-do 446-701, Republic of Korea
Corresponding author's e-mail: {Yang-Byung Park, [email protected]}
The operation of vending machine systems presents a decision-making problem consisting of item allocation to storage
compartments, inventory replenishment, and vehicle routing, all of which have critical effects on system profit. In this
paper, we propose a two-phase solution with an iterative improvement procedure for the operation problem with stock-out-based, one-stage item substitution in vending machine systems. In the first phase, the item allocation to storage
compartments and the replenishment intervals of vending machines are determined by solving a non-linear integer
mathematical model for each machine. In the second phase, vehicle routes for replenishing vending machine inventories are
determined by applying the savings-based algorithm, which minimizes the sum of transportation and shortage costs. The
accuracy of the solution is improved by iteratively executing the two phases. The optimality of the proposed solution is
evaluated on small test problems. We present an application of the proposed solution to an industry problem and carry out
computational experiments on test problems to evaluate the effectiveness of the stock-out allowance policy with one-stage
item substitution compared to the no-stock-out allowance policy with respect to system profit. The results show the
substantial economic advantage of the stock-out allowance policy. Sensitivity analysis indicates that some input variables
significantly impact the effectiveness of this policy.
Significance: A no-stock-out policy at vending machines may cause excess transportation and inventory costs. Allowing
stock-outs and substitutions for stock-out items might increase the profit of the vending machine system. The proposed two-phase heuristic generates high-quality solutions to the operation problem with stock-out-based, one-stage item substitution in vending machine systems. The results of computational experiments with the proposed heuristic demonstrate a substantial economic advantage of the stock-out allowance policy over the no-stock-out allowance policy and identify environments favorable to the stock-out allowance policy. The
proposed two-phase solution can be modified easily for application to various retail vending settings under a
vendor-managed inventory scheme.
Keywords: Vending machine system, inventory management, operation problem, item substitution
(Received 1 Jan 2012; Accepted in revised form 7 Oct 2012)

1. INTRODUCTION
Vending machines have become an essential part of daily life in many countries. Their spread is especially important from
an environmental perspective because they enable consumers in remote locations to make purchases without having to
drive long distances. The USA is estimated to have over four million vending machines, with retail sales over $30 billion
annually. Japan's vending machine density is the highest in the world. The number of vending machines in South Korea has
increased over 10% every year in recent years (Korea Vending Machine Manufacturers Association, 2009). Most vending
machines sell beverages, food, snacks, or cigarettes. Recently, they have expanded to include tickets, books, flower pots,
and medical supplies like sterile syringes.
Vending machine management companies manage a network of vending machines in dispersed locations. A company
assigns between 100 and 200 vending machines to different business offices based on location, and each business office manages its machines using 10 to 20 vehicles. An example of a vending machine system is depicted in Figure 1. Under a
vendor-managed inventory scheme, the business office is responsible for coordinating item allocation to vending machine
storage compartments, inventory replenishment, and vehicle routing, with the objective of maximizing system profit. These
decisions and management practices are referred to as the operation problem for vending machine systems (OPVMS).
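The savings-based routing idea used in the second phase can be sketched as follows. This is a bare Clarke-Wright merge on invented depot and machine coordinates; vehicle capacities and the shortage costs that the paper's algorithm also weighs are omitted for brevity.

# A minimal Clarke-Wright savings sketch for replenishment routes (illustrative).
import math

depot = (0.0, 0.0)
machines = {1: (2, 6), 2: (5, 5), 3: (6, 1), 4: (1, 3)}

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Savings s(i, j) = d(0, i) + d(0, j) - d(i, j), in decreasing order.
savings = sorted(
    ((dist(depot, machines[i]) + dist(depot, machines[j])
      - dist(machines[i], machines[j]), i, j)
     for i in machines for j in machines if i < j),
    reverse=True)

routes = {i: [i] for i in machines}          # start with one route per machine
for s, i, j in savings:
    ri, rj = routes[i], routes[j]
    # Merge only if i is the tail of one route and j the head of another.
    if ri is not rj and ri[-1] == i and rj[0] == j:
        merged = ri + rj
        for m in merged:
            routes[m] = merged

print([r for k, r in routes.items() if r[0] == k])  # each distinct route once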


International Journal of Industrial Engineering, 19(11), 428-443, 2012.

INDUSTRY ENERGY EFFICIENCY ANALYSIS IN NORTHEAST BRAZIL: PROPOSAL OF METHODOLOGY AND CASE STUDIES
Miguel Otávio B. C. Melo1, Luiz Bueno da Silva2, Sergio Campello3
1,2 Universidade Federal da Paraíba, Cidade Universitária,
[email protected], [email protected]
Tel.: +55 83 32167685; fax: +55 83 32167549.
João Pessoa - PB, Brazil 58051-970
3 Portal Tecnologia, Rua João Tude de Melo 77,
Recife - PE, Brazil 52060-010
[email protected]
Energy is of vital importance, since it can account for up to one-third of product cost. Energy can also be considered a strategic input for the establishment of any economic and social development policy. Electricity is the basis for industrial production and agriculture, as well as for the service chain; hence the need to reduce the cost of this input is vital. Doing so produces great benefits for the production chain by making companies more competitive, and people benefit because the final price of products becomes cheaper. The aim of this paper is to present a new methodology for assessing industrial energy efficiency, to identify points of energy loss and the sectors most affected within the production process, and to propose mitigation measures.
Keywords: Energy Efficiency; Clean Energy; Industrial Energy Management
(Received 8 Mar 2011; Accepted in revised form 3 Oct 2012)

1. INTRODUCTION
Energy management in industry or commerce should not be limited to concerns about meeting demand and taking energy-efficiency measures; it should also sustain knowledge of energy policies and rules, quality certificates, and environmental and CO2 certificates (Cullen et al. 2010, Siitonen et al. 2010).
Currently, several industrial sectors have already identified opportunities to improve energy efficiency in thermal systems, efficient motors, buildings with thermal insulation, efficient automated cooling, expert systems, and more efficient compressed air, chilled water, and boiler systems (Laurijssen et al. 2010, Hasanbeigi et al. 2010, Kirschen et al. 2009, Hammond, 2007).
In domestic industries, it is common to apply conventional techniques in the operation of motor systems. This reality drives us to undertake studies in this sector and to propose improvements to the production system. The following measures are noteworthy: replacement of conventional induction motors with high-yield motors, and motor drive methods using direct starters, star-delta starting devices, soft-starters, and frequency inverters, the latter mainly used in processes that require changing the motor shaft speed (Panesi, 2006).
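As a worked illustration of the motor-replacement measure above, the following sketch computes the energy savings and simple payback of a high-efficiency motor. All figures (shaft load, operating hours, tariff, efficiencies, price premium) are assumed for illustration.

# A minimal payback sketch for a high-efficiency motor retrofit (assumed data).
shaft_kw = 15.0          # mechanical load on the motor shaft
hours_per_year = 4000
tariff = 0.12            # electricity price, $/kWh
eta_old, eta_new = 0.88, 0.94
price_premium = 900.0    # extra cost of the high-efficiency motor, $

# Electrical input = shaft power / efficiency, so savings come from 1/eta.
kwh_saved = shaft_kw * hours_per_year * (1 / eta_old - 1 / eta_new)
annual_savings = kwh_saved * tariff
print(f"energy saved: {kwh_saved:.0f} kWh/yr, "
      f"payback: {price_premium / annual_savings:.1f} years")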
Energy is of vital importance, since it can account for up to one-third of product cost. Energy can also be considered a strategic input for the establishment of any economic and social development policy. Electricity is the basis for industrial production and agriculture, as well as for the service chain; hence the need to reduce the cost of this input is vital. Doing so produces great benefits for the production chain by making companies more competitive, and people benefit because the final price of products becomes cheaper.
The aim of this paper is to present a new methodology for assessing industrial energy efficiency, to identify points of energy loss and the sectors most affected within the production process, and to propose mitigation measures.

2. GENERAL CONSIDERATIONS
Within the scope of production chains, energy efficiency is concerned with productivity, which in turn is linked to economic
results and management. The management aspects are those that relate to project deployment and implementation, hiring,
training and retraining of personnel, as well as system evaluation in general (Jochen et al. 2007).
The most important energy-efficiency evaluation factors in economic terms are data consistency, behavior of the
consumers, and incentive for participation as well as implementation of energy-efficiency programs (Vine et al. 2010).

International Journal of Industrial Engineering, 19(11), 444-455, 2012.

PRODUCT PLATFORM SCREENING AT LEGO


Niels Henrik Mortensen1, Thomas Steen Jensen1 and Ole Fiil Nielsen2
Department of Mechanical Engineering1
Technical University of Denmark
Niels Koppels Allé, DTU Bygn. 404
DK-2800 Kgs. Lyngby, Denmark
Email: Niels Henrik Mortensen, [email protected], Ole Fiil Nielsen, [email protected]
Product and Marketing Development2
LEGO Group A/S
Hans Jensensvej/Systemvej
DK-7190 Billund, Denmark
Email: Thomas Steen Jensen, [email protected]
Product platforms offer great benefits to companies developing new products in highly competitive markets. Literature
describes how a single platform can be designed from a technical point of view, but rarely mentions how the process
begins. How do companies identify possible platform candidates, and how do they assess whether these candidates have enough potential to be worth implementing? The Danish toy manufacturer LEGO has systematically gone through this process twice. The first time, the results were poor: almost all platform candidates failed. The second time, though, was largely successful after a few changes had been applied to the initial process layout. This case study shows how companies must limit the number of simultaneous projects in order to maintain focus. Primary stakeholders must be involved from the very beginning, and short presentations of the platform concepts should be given to them throughout the whole process to ensure commitment.
Significance: Product platforms offer great benefits to companies developing new products in highly competitive markets.
Literature describes how a single platform can be designed from a technical point of view, but rarely mentions how the
process begins. This paper describes how platform candidates are identified and synchronized with product development.
Keywords: Product platform, Product family, Multi-product development, Product architecture, Platform assessment
(Received 8 Jul 2011; Accepted in revised form 3 Oct 2012)

1. INTRODUCTION
Numerous publications show the benefits of product platforms. Companies use platforms to develop not a single product, but multiple products (i.e. a product family) simultaneously. This may lead to increased sales due to more customized products as well as decreased costs due to reuse, making product platforms very profitable for product-developing companies.
Designing product platforms is not straightforward, though.
How do companies start designing a product platform? Often they start by looking for a suitable platform candidate.
Many good examples of product platforms exist in literature, and companies will often look for similar candidates within
their own company.
But what if no apparent low-hanging fruit is available? How does the company then start designing a product platform? Or what if the low-hanging fruit is too plentiful? How does the company then choose among these candidates, and can they all be undertaken simultaneously?
In the cases reported in the literature, the case company always starts with a generic product, which can then be analyzed and modularized. The problem for most companies, however, is that they have no generic product. Instead, they have a range of different products with different structures and different functions, and various restrictions, such as backwards compatibility, license agreements, and existing production equipment, prevent the company from changing this fact.
Secondly, how do companies know if their platforms will be beneficial? Can they simply assume that all candidates will
evolve into profitable platforms?
Although cases where platforms fail are very rare in the literature, they are not unheard of in industry. It is only natural that most companies would not want to share their unsuccessful platform experiences with the rest of the world. Still, many companies that have finally achieved some degree of success describe the process of getting to this level as a struggle in which several important platform initiatives failed along the way.

International Journal of Industrial Engineering, 19(12), 456-463, 2012.

MULTI-ASPIRATION GOAL PROGRAMMING FORMULATION


Hossein Karimi, Mehdi Attarpour
Department of Industrial Engineering, K.N. Toosi University of Technology, Tehran, Iran,
Postal Address: Iran, Tehran, 470 Mirdamad Ave. West, 19697, Postal Code: 1969764499.
Corresponding author email: [email protected]
Goal Programming (GP) is a significant analytical approach devised to solve many real-world problems. In many marketing or decision management problems, multi-segment aspiration levels and multi-choice goal levels may exist for decision makers. Such problems cannot be solved by current GP techniques such as multi-choice goal programming and multi-segment goal programming. This paper provides a new idea that integrates multi-segment goal programming and multi-choice goal programming in order to solve multi-aspiration problems. Moreover, it develops the concepts of these models significantly for real application; in addition, a real problem is provided to demonstrate the usefulness of the proposed model. The results of the problem are analyzed and, finally, conclusions are drawn.
Keywords: Multi-aspiration levels; Multi-segment goal programming; Multi-choice goal programming; Decision
making; Marketing.
(Received 14 Mar 2011; Accepted in revised form 1 Nov 2012)

1. INTRODUCTION
Goal programming is a form of linear programming that considers multiple goals that are often in conflict with each
other. With multiple goals, usually not all goals can be fully achieved. For example, an organization may want to:
(1) maximize profits and increase after-sales services; (2) increase product quality and reduce product cost and (3)
decrease credit sales and increase total sales. GP was originally introduced by Charnes and Cooper (1961). Then, it
was extended by Lee (1972), Ignizio (1985), Li (1996), Tamiz et al. (1998), Romero (2001), Chang (2004, 2007)
and Liao (2009). Goal programming seeks to minimize the deviations among the desired goals and the actual results
according to the assigned priorities. The objective function of a goal programming model is provided in terms of the
deviations from the target goals. The general GP model can be described as follows:
Minimize $\sum_{i=1}^{n} \lvert f_i(x) - g_i \rvert$    ... (1)

where $f_i(x)$ and $g_i$ are the linear function and goal of the $i$-th objective, respectively, and $n$ is the number of goals. The GP model mentioned above can be solved with many techniques, such as Lexicographic GP (LGP), Weighted GP (WGP), and so on. First, some of the GP model formulations are briefly explained.
In the WGP model, the achievement function consists of the unwanted deviation variables, where the weight of each one represents its importance. Ignizio (1976) provided the mathematical formulation of the WGP model, which is as follows:
min $\sum_{i=1}^{n} (\alpha_i d_i^+ + \beta_i d_i^-)$    ... (2)

subject to

$f_i(x) - d_i^+ + d_i^- = g_i, \quad i = 1, 2, \ldots, n$    ... (3)

$d_i^+, d_i^- \geq 0, \quad i = 1, 2, \ldots, n$    ... (4)

$x \in F$    ... (5)

where $d_i^+$ and $d_i^-$ are, respectively, the positive and negative deviations between the $i$-th objective and its goal, and $\alpha_i$ and $\beta_i$ are the positive weights for the deviations.
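A WGP instance of this form can be solved directly as a linear program. The following is a minimal sketch using scipy.optimize.linprog with two invented goals; the coefficients, targets, and weights are illustrative only.

# A minimal weighted goal programming (WGP) sketch via linear programming.
import numpy as np
from scipy.optimize import linprog

# Two goals over decision variables x1, x2 >= 0 (hypothetical data):
# f1(x) = 4*x1 + 2*x2 with target g1 = 60; f2(x) = 2*x1 + 3*x2 with g2 = 36.
A_goal = np.array([[4.0, 2.0],
                   [2.0, 3.0]])
g = np.array([60.0, 36.0])
alpha = np.array([1.0, 2.0])  # weights on positive deviations d+
beta = np.array([3.0, 1.0])   # weights on negative deviations d-

# Variable vector: [x1, x2, d1+, d2+, d1-, d2-]; minimize weighted deviations.
c = np.concatenate([np.zeros(2), alpha, beta])

# Goal constraints (3): f_i(x) - d_i+ + d_i- = g_i
A_eq = np.hstack([A_goal, -np.eye(2), np.eye(2)])
res = linprog(c, A_eq=A_eq, b_eq=g, bounds=[(0, None)] * 6, method="highs")

print("x =", res.x[:2], "objective =", res.fun)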
In the LGP model, an ordered vector forms the achievement function. The dimension of this vector matches Q, the number of priority levels presented in the model, and its components are the unwanted deviation variables of the goals placed in the corresponding priority levels. The mathematical formulation of an LGP model


International Journal of Industrial Engineering, 19(12), 464-474, 2012.

COMPATIBLE COMPONENT SELECTION UNDER UNCERTAINTY VIA EXTENDED CONSTRAINT SATISFACTION APPROACH
Duck Young Kim1, Paul Xirouchakis2, Young Jun Son3
1 Ulsan National Institute of Science and Technology, Republic of Korea
2 École Polytechnique Fédérale de Lausanne, Switzerland
3 University of Arizona, United States
Corresponding author email: [email protected]

This paper deals with compatible component selection problems, where the goal is to find combinations of components satisfying design constraints, given a product structure, the component alternatives available in a design catalogue for each subsystem of the product, and a preliminary design constraint. An extended Constraint Satisfaction Problem (CSP) is introduced to solve component selection problems considering uncertainty in the values of design variables. To handle the large number of possible combinations of components, the paper proposes a systematic filtering procedure and an efficient method for estimating a complex feasible design space, which facilitates the selection of component combinations having more feasible solutions. The proposed approach is illustrated and demonstrated with a robotic vacuum cleaner design example.
Keywords: Component Selection, Configuration, Design Constraint, Constraint Satisfaction Problem, Filtering
(Received 24 Apr 2012; Accepted in revised form 1 Nov 2012)

1. INTRODUCTION AND BACKGROUND


The product design process involves four main phases: (1) product specification, (2) conceptual design, (3) embodiment design, and (4) detailed design. At each phase, design teams first generate or search for several design alternatives, and then select the best one considering design criteria and constraints. In conceptual design, for instance, this generation and selection process consists of four main steps (Pahl and Beitz, 1988) (see Figure 1): (1) decomposition: establish a function structure of the product; (2) definition: search for components to fulfil the sub-functions and define a preliminary design constraint; (3) filtering: combine components to fulfil the overall function, select suitable combinations, and firm them up into concept variants; and (4) selection: evaluate concept variants against technical and economic design criteria and select the best one. This divergence and convergence of the search space in design is intended to allow design teams unrestrained creativity by producing many initial component alternatives for subsystems, as well as to support the filtering and selection processes in finding the best design alternatives for a product.
The focus of this paper is on filtering in Figure 1. In particular, we consider a constraint-based compatible component selection problem under uncertainty in the values of design variables, especially in redesign and variant design environments. This problem is a combinatorial selection problem in which a component satisfying design constraints is chosen for each subsystem from a pre-defined set (i.e., a design catalogue). It is compounded by multiple values or continuous spaces of design variables and discrete choices of components. In this work, it is assumed that a design catalogue (containing component alternatives for the subsystems comprising a product) and a preliminary design constraint are given as input information (see Table 3). By generalizing the problem characteristics found, we formulate the constraint-based component selection problem as an extended Constraint Satisfaction Problem (CSP). Finally, a systematic filtering procedure and an efficient method for estimating a complex feasible design space are proposed to select component combinations having more feasible solutions.
A product usually consists of a number of subsystems (see Table 1), each of which has its own design alternatives, namely component alternatives. Table 2 lists the major variables used in this paper. Any combination of components across all subsystems is a potential design alternative for a product. The selected components must be mutually compatible to achieve the overall functionality, where compatibility means that components are designed to work with others without adjustment (Patel et al., 2003). Therefore, design teams need to find the compatible combinations of components satisfying the design constraints from among all possible combinations.
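The filtering step can be illustrated with a brute-force sketch that enumerates a design catalogue and keeps only mutually compatible combinations. The catalogue and constraints below are invented, and the sketch ignores the uncertainty handling that the extended CSP adds.

# A minimal sketch of compatible component selection by exhaustive filtering.
from itertools import product

# Design catalogue: component alternatives per subsystem (name, power_W, mass_g).
catalogue = {
    "motor":   [("M1", 30, 400), ("M2", 45, 550)],
    "battery": [("B1", 35, 300), ("B2", 60, 700)],
    "chassis": [("C1", 0, 900), ("C2", 0, 1200)],
}

def feasible(combo):
    """Preliminary design constraints (assumed): the battery must cover the
    motor's power draw, and the total mass must stay below 1800 g."""
    motor, battery, chassis = combo
    return battery[1] >= motor[1] and (motor[2] + battery[2] + chassis[2]) <= 1800

# Filtering: keep only the mutually compatible combinations.
candidates = [c for c in product(*catalogue.values()) if feasible(c)]
for combo in candidates:
    print([part[0] for part in combo])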


International Journal of Industrial Engineering, 19(12), 476-487, 2012.

MONITORING TURNAROUND TIME USING AN AVERAGE CONTROL CHART IN THE LABORATORY
Shih-Chou Kao
Graduate School of Operation and Management, Kao Yuan University, Taiwan
Corresponding author email: [email protected]
A long turnaround time (TAT) prolongs patient waiting time, increases hospital costs, and decreases service satisfaction. None of the studies on control charts and medical care has applied control charts to monitor TAT or proposed a probability function for the distribution of the mean. This study proposes a general formula for the probability function of the distribution of the mean. The control limits of the average chart are determined according to the type I risk (α) and the standardized Weibull, lognormal, and Burr distributions. Furthermore, compared to control charts that use α = 0.0027, weighted variance (WV), skewness correction (SC), and traditional Shewhart control charts, the proposed control chart is superior in terms of the α-risks for a skewed process. An example of laboratory TAT for a medical center is presented to illustrate these findings.
Significance: This study proposes a control chart for the TAT of the complete blood count (CBC) test in the laboratory of a medical center. Constants of the average control chart are calculated for a fixed type I risk (α = 0.0027) under three distributions (Weibull, lognormal, and Burr), using the proposed general model for the probability density function of the distribution of the mean. The average control chart using the proposed method is superior to other control charts in terms of type I risks for a skewed process.
Keywords: Average control chart, distribution of the mean, skewed distribution, type I risk, turnaround time.
(Received 23 Nov 2011; Accepted in revised form 3 Oct 2012)

1. INTRODUCTION
Timeliness is one of the most important characteristics of a laboratory test, but its importance has often been
overlooked. The timeliness with which laboratory staff deliver test results is a manifest parameter of laboratory
service and a general standard by which clinicians and organizations judge laboratory performance (Valenstein,
1996).
The College of American Pathologists Q-Probes study in 1990 identified that the turnaround time (TAT) from
phlebotomy to reporting of results is the most important characteristic for laboratory testing and provided TATs for
various laboratory tests (Howanitz et al., 1992). Many studies also reported that poor laboratory performance in
terms of long TAT had a major impact on patient care (Vacek, 2002; Montalescot et al., 2004; Singer et al., 2005).
Until now, essentially all TAT studies have focused on inpatient testing (especially of an emergency nature),
outpatient testing and outfits (Howanitz and Howanitz, 2001; Novis et al. 2002; Steindel and Jones, 2002; Novis,
2004; Howanitz, 2005; Chien et al. 2007; Guss et al, 2008; Singer et al. 2008; Qureshi et al, 2010). Most of these
studies discussed the main factors that significantly affect the TAT, such as day of the week, drawing location,
ordering method and delivery method. Rare research proposed a statistical process control method to monitor the
TAT.
Valenstein and Emancipator (1989) noted that the distribution of TAT data is non-normal. The skewed nature of
TAT data distribution may result in specimens with excessively long TATs (Steindel and Novis, 1999). Hence, if the
traditional control charts based on the normality assumption are used to monitor a non-normal process, the probability of a type I error (α) in the control charts increases as the skewness of the process increases (Bai and Choi, 1995; Chang and Bai, 2001).
Most studies on statistical process control have used a simulation method to estimate the α and related constant values of an average control chart. No previous research has derived a probability function of the distribution of the mean for a skewed distribution. This study derives a general formula for the probability density function (pdf) of the distribution of the mean and proposes an average control chart that is both simple to use and more effective for monitoring an out-of-control signal in a TAT process.
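The effect the paper targets analytically can be previewed numerically: the following Monte Carlo sketch approximates the distribution of the subgroup mean for a Weibull-distributed TAT and places control limits so that an α-risk of 0.0027 is split equally between the tails. The shape, scale, and subgroup size are assumed illustration values, not the paper's derivation.

# A minimal Monte Carlo sketch of average-chart limits for skewed TAT data.
import numpy as np

rng = np.random.default_rng(42)
shape, scale, n, alpha = 1.5, 30.0, 5, 0.0027  # assumed Weibull TAT, subgroups of 5

# Approximate the distribution of the subgroup mean by simulation.
means = rng.weibull(shape, size=(200_000, n)).mean(axis=1) * scale

# Split the alpha-risk equally into the two tails.
lcl, ucl = np.quantile(means, [alpha / 2, 1 - alpha / 2])
print(f"LCL = {lcl:.2f} min, UCL = {ucl:.2f} min")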
In the area of statistical process control, α = 0.0027 is a well-known criterion for the design of a control chart or for comparison among control charts. To monitor a non-normal process, many studies have designed new control charts by splitting the α-risk equally into the two tails (Castagliola, 2000; Chan and Cui, 2003; Khoo

International Journal of Industrial Engineering, 20(1-2), 2-11, 2013.

A FRAMEWORK FOR SYSTEMATIC DESIGN AND OPERATION OF CONDITION-BASED MAINTENANCE SYSTEMS: APPLICATION AT A GERMAN SEA PORT
M. Lewandowski1, B. Scholz-Reiter1
1 BIBA - Bremer Institut für Produktion und Logistik GmbH, Germany
Corresponding author's email: Marco Lewandowski, [email protected]
Abstract: Ongoing improvement of logistics and intermodal transport leads to high requirements regarding availability of
machine resources like straddle carriers or gantry cranes. Accordingly, efficient maintenance strategies for port equipment
have to be established. The change to condition-based maintenance strategies promises to save resources while enhancing
availability and reliability. This paper introduces a framework of methods and tools that enable the systematic design of
condition-based maintenance systems on the one hand and offers integrated support for operating such systems on the other
hand. The findings are evaluated in a case study at a German seaport, which illustrates the use of the system for managing the process of equipping machines with sensors for condition monitoring as well as for bringing the system into the operation phase.
Keywords: Maintenance, Maintenance Management, Condition Monitoring, Condition-based Maintenance, Sensor
Application
(Received November 30, 2010; Accepted March 15, 2012)

1. INTRODUCTION
Globally distributed production structures and the corresponding supply networks have to work efficiently. Sea ports and transhipment terminals, the backbone of Europe's economy, have to possess lean structures and ensure seamless integration into the supply chain. The ongoing improvement of logistics and intermodal transport with respect to throughput time and cost reduction must satisfy future demands for scalable structures in times of economic growth and recession. Accordingly, this leads to high requirements regarding efficient maintenance strategies for port equipment such as straddle carriers and gantry cranes.
While cyclic and reactive maintenance actions are still the representative method in practice, the change to condition-based monitoring of equipment is ongoing. Current research focuses on condition-based concepts for different applications, including monitoring of tools, pumps, gearboxes, electrical equipment, etc. (e.g. Al-Habaibeh et al., 2000; Garcia et al., 2006; García-Escudero et al., 2011). Condition-based maintenance itself promises to make maintenance processes more efficient (Al-Najjar, 2007; Sandborn et al., 2007), among other reasons due to decentralized decision units in the form of cognitive sensor applications at crucial components, so that the machine itself is able to trigger a maintenance action in terms of automated control and cooperation (e.g. Scholz-Reiter et al., 2007). Hence, this paper presents the exploration of a fleet management case at a German seaport in which the specific requirements of the design and operating phases were examined and transferred into an adapted systematic procedure model and a methodology framework.
The paper is organized as follows: the first chapter introduces the maintenance topic with the specific requirements regarding port equipment. A state-of-the-art review refers to topical work on condition-based maintenance in general and to corresponding endeavours to build a comprehensive framework for such systems. Chapter two presents the methodologies that are part of a framework that enables and supports the design and operation of condition-based maintenance systems on top of existing assets. Its application, based on a case study at a German seaport, verifies its applicability in chapter three. The last chapter presents conclusions on the work done and gives an outlook on the further research and work needed to put such systems into practice.
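A minimal sketch of the condition-based triggering idea follows: smooth a synthetic degradation signal and raise a maintenance action when it crosses a condition limit. The signal, smoothing factor, and threshold are invented stand-ins for the sensor data and decision logic of a real CBM system.

# A minimal condition-based maintenance trigger sketch (illustrative data).
import numpy as np

rng = np.random.default_rng(7)
signal = 1.0 + 0.02 * np.arange(500) * rng.uniform(0.5, 1.5, 500)  # degrading trend

THRESHOLD = 6.0   # condition limit from equipment experience (assumed)
ALPHA = 0.1       # exponential smoothing factor

level = signal[0]
for t, reading in enumerate(signal):
    level = ALPHA * reading + (1 - ALPHA) * level  # filter measurement noise
    if level > THRESHOLD:
        print(f"Trigger maintenance at reading {t}: smoothed level {level:.2f}")
        break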
1.1 Maintenance in General
The term maintenance describes the combination of all technical and administrative actions that have to be performed to retain a technical system in, or restore it to, a state in which it can perform in the required manner. To this end, the main aim of maintenance is to secure the preferably continuous availability of machines. Based on this definition, the holistic view of the maintenance topic is clear. The several processes based on the typical maintenance tasks according to DIN EN 13306, as presented in Table 1 below, consequently require task-specific know-how.

International Journal of Industrial Engineering, 20(1-2), 12-23, 2013.

A HYBRID SA ALGORITHM FOR INLAND CONTAINER TRANSPORTATION
Won-Young Yun1, Wen-Fei Wang2, Byung-Hyun Ha1*
1 Department of Industrial Engineering, Pusan National University, 30 Jangjeon-Dong, Geumjeong-Gu, Busan 609-735, South Korea
2 Department of Logistics Information Technology, Pusan National University, 30 Jangjeon-Dong, Geumjeong-Gu, Busan 609-735, South Korea
*Corresponding author's e-mail: [email protected]

Abstract: Inland container transportation refers to container movements among customer locations, container terminals,
and inland container depots in a local area. In this paper, we consider the inland transportation problem where containers
are classified into four types according to the destination (inbound or outbound) and the container state (full or empty). In
addition, containers can be delivered not only by truck but also by train when time windows are satisfied. We propose a
graph model to represent and analyze the problem, and develop a mixed-integer programming model based on the graph
model. A hybrid simulated annealing algorithm is proposed to obtain the near-optimal transportation schedule of containers.
The performance of the proposed algorithm is investigated by numerical experiments.
Keywords: Inland container transportation; time windows; intermodal transportation; hybrid simulated annealing (SA)
(Received November 30, 2010; Accepted March 15, 2012)

1. INTRODUCTION
In inland transportation, containers are transported from their shippers to a terminal and delivered from another terminal to their receivers. This study deals with the inland container transportation problem, taking the total transportation cost into account. We consider four types of containers: inbound full, outbound full, inbound empty, and outbound empty. The
transportation process depends on the type of a container. For example, outbound freight transportation by a container is
briefly described as follows. First, a truck is assigned to carry an empty container to a customer and the empty container is
unloaded at the customer location. Then, freight is packed into the container and the container becomes a full one. The full
container is loaded onto the truck again and delivered to a terminal directly or a railway station where the container is
transported to a terminal by train subsequently. Finally, the container is transferred to another terminal by vessel, where the
container gets into another inland container transportation system. In addition, we consider multimodal transportation by
truck and train, and further impose the constraint of time windows when a container can be picked up and unloaded at its
origin and destination, respectively. Hence, containers can be delivered either by truck or by truck and train together, as
long as time windows are satisfied.
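The core of a simulated annealing approach to such a scheduling problem can be sketched as follows. The permutation representation, swap neighborhood, and toy cost function are illustrative stand-ins, not the paper's hybrid algorithm or its mixed-integer model.

# A minimal simulated annealing skeleton over a permutation of container jobs.
import math
import random

def cost(schedule):
    # Toy surrogate: penalize jobs served far from their preferred position
    # (a crude stand-in for routing cost with time windows).
    return sum(abs(pos - job) for pos, job in enumerate(schedule))

def simulated_annealing(n_jobs=20, T=50.0, cooling=0.995, T_min=1e-3):
    current = list(range(n_jobs))
    random.shuffle(current)
    best = current[:]
    while T > T_min:
        i, j = random.sample(range(n_jobs), 2)   # neighbor: swap two jobs
        candidate = current[:]
        candidate[i], candidate[j] = candidate[j], candidate[i]
        delta = cost(candidate) - cost(current)
        if delta < 0 or random.random() < math.exp(-delta / T):
            current = candidate                   # accept improving/uphill move
            if cost(current) < cost(best):
                best = current[:]
        T *= cooling                              # geometric cooling schedule
    return best, cost(best)

print(simulated_annealing())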
There are many papers in which various methods are proposed to find optimal or good solutions for the inland container transportation problem. Wen and Zhou (2007) developed a GA (genetic algorithm) to solve a container vehicle routing problem in a local area. Jula et al. (2005) formulated truck container transportation problems with time constraints as an asymmetric multi-traveling salesman problem with time windows (m-TSPTW) and applied a DP/GA (dynamic programming and genetic algorithm) hybrid algorithm to solve large-size problems. Zhang et al. (2009) addressed a similar problem: a graph model was built, a cluster method and a reactive tabu search were proposed, and their performance was compared with each other. Liu and He (2007) decomposed a vehicle routing problem into several sub-problems according to the vehicle-customer assignment structure and applied a tabu search algorithm to each sub-problem.
Intermodal transportation problems with time windows are more difficult to deal with, especially when a container is related to more than one time window. Some researchers have tried to transform and/or relax the constraints related to time windows. Lau et al. (2003) considered the vehicle routing problem with time windows under a limited number of vehicles, and they provided a mathematical model to obtain an upper bound by selecting one of the latest-possible times to return to

International Journal of Industrial Engineering, 20(1-2), 24-35, 2013.

A METHOD FOR SIMULATION DESIGN OF REFRIGERATED WAREHOUSES USING AN ASPECT-ORIENTED MODELING APPROACH
G.S. Cho1, H.G. Kim2
1 Department of Port Logistics System, Tongmyong University, Busan, 608-711, Korea
2 Department of Industrial & Management Engineering, Dongeui University, Busan, 614-714, Korea
Corresponding author's e-mail: GS Cho, [email protected]
Refrigerated warehouses serve a buffer function in a logistics system to meet the various demands of consumers. Over 50% of Korean refrigerated warehouses are located in Busan, and Busan has become a strategic region for cold chain industries. This paper suggests an Aspect-Oriented Modeling Approach (AOMA) for refrigerated warehouses, which can design and analyze system models using a simulation method that considers the design conditions.
Significance: The AOMA is an analytic approach that adds Aspect-Oriented Modeling, which considers crosscutting concerns, to existing Object-Oriented Modeling. The purpose of this paper is to suggest a simulation model using the AOMA for refrigerated warehouses. The suggested model can be utilized for redesigning refrigerated warehouses with easy control of reuse, extension, and modification.
Keywords: Simulation, Refrigerated Warehouse, System Operation, Aspect-Oriented Modeling Approach.
(Received November 30, 2010; Accepted March 15, 2012)

1. INTRODUCTION
1.1 Background and Purpose of this Research
The refrigerated warehouse industry has grown as the consumption of fresh foods has increased. The Busan area in Korea has become a strategic region for the refrigerated warehouse industry. Many refrigerated warehouses have been built since the 2000s to meet demand, but the industry has been in trouble due to excessive facilities, shared stevedoring, low storage fees, etc. (Kim et al., 2010). Refrigerated warehouses therefore need to provide high value-added services to customers, but so far their function has focused only on storing items. Although the minimum necessity is to consider design factors such as layouts, facilities, and items, operating alternatives are needed at the system level to enhance the operations and performance of refrigerated warehouses supporting these services. Until now, the main function of refrigerated warehouses has been restricted to storage, and there has been no systematic approach to solving the above-mentioned problems.
For complex systems, Object-Oriented Modeling (OOM) has been utilized in other industrial systems (Venketeswaran and Son, 2004). The OOM, along with applications in the computer science area, has long been the essential reference for object-oriented technology, which, in turn, has evolved to join the mainstream of industrial-strength software development. The OOM is a modeling paradigm mainly used in computer programming. The OOM emphasizes the use of discrete reusable code blocks that can stand on their own, take variables, perform a function, and return values. Aspect-Oriented Modeling (AOM), an extension of the OOM, may also contain interfaces to each model, because models also involve method interactions (Lemos et al., 2009). Such modeling techniques help separate out the different concerns implemented in a software system, especially those that cannot be clearly mapped to isolated units of implementation. The main idea of the AOM is to improve on the OOM by realizing the modularization of these types of concerns. The terms of the AOM can be used to describe the space of programmatic mechanisms for expressing crosscutting concerns (Kiczales et al., 1997). The AOM should be built upon a conceptual framework and be able to denote the space of modeling elements for specifying crosscutting concerns at a higher level of abstraction (Chavez and Lucena, 2002). Recently, an Aspect-Oriented Modeling Approach (AOMA) has been suggested and applied to enhance the performance of simulation models based on the AOM (France et al., 2004; Wu et al., 2010). In this paper, we suggest and develop a prototype system that implements the class model composition behavior specified in the composition metamodel to design the refrigerated warehouse system using the AOMA.
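The crosscutting-concern idea behind the AOM can be illustrated in miniature with a Python decorator acting as a woven aspect. The warehouse operations and the audit aspect below are hypothetical illustration code, not the paper's metamodel.

# A minimal sketch of a crosscutting concern woven into warehouse operations.
import functools
import time

def temperature_audit(func):
    """Aspect: log a timestamped audit record around every storage operation."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"[audit {time.strftime('%H:%M:%S')}] start {func.__name__}")
        result = func(*args, **kwargs)
        print(f"[audit {time.strftime('%H:%M:%S')}] end   {func.__name__}")
        return result
    return wrapper

@temperature_audit
def store_pallet(item, chamber):
    print(f"storing {item} in chamber {chamber}")

@temperature_audit
def retrieve_pallet(item, chamber):
    print(f"retrieving {item} from chamber {chamber}")

store_pallet("frozen fish", "A-3")
retrieve_pallet("frozen fish", "A-3")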

International Journal of Industrial Engineering, 20(1-2), 36-46, 2013.

A MULTI-PRODUCT DYNAMIC INBOUND ORDERING AND SHIPMENT SCHEDULING PROBLEM AT A THIRD-PARTY WAREHOUSE
B. S. Kim1, W. S. Lee2
1 Graduate School of Management of Technology, Pukyong National University, Busan 608-737, Korea
2 Department of Systems Management & Engineering, Pukyong National University, Busan 608-737, Korea
Corresponding author's email: [email protected]

Abstract: This paper considers a dynamic inbound ordering and shipment scheduling problem for multiple products that are transported from a supplier to a warehouse in common freight containers. The following assumptions are made: (i) each order in a period is shipped immediately in the same period, (ii) the total freight cost is proportional to the number of containers used, and (iii) demand is dynamic and backlogging is not allowed. The objective of this study is to identify effective algorithms that simultaneously determine inbound ordering lot-sizes and a shipment schedule that minimize the total cost, consisting of ordering cost, inventory holding cost, and freight cost. This problem can be shown to be NP-hard, and this paper presents a heuristic algorithm that exploits the properties of an optimal solution. Also, a shortest path reformulation model is proposed to obtain a good lower bound. Simulation experiments are presented to evaluate the performance of the proposed procedures.
(Received November 30, 2010; Accepted March 15, 2012)

1. INTRODUCTION
For the past couple of decades, the reduction of transportation and warehousing costs has been an important issue in enhancing logistics efficiency and demand visibility in a supply chain. Logistics alliances and specialized Third-Party Logistics (TPL) providers have been growing in order to reduce these costs. Over a dynamic planning horizon, the issue of transportation scheduling for inbound ordering and shipping of products to a TPL warehouse by proper transportation modes at scheduled times, and the issue of lot-size dispatching control, including inventory control, to the customers have become significantly important for production and distribution management. Each warehouse purchases multiple products and uses freight containers as the transportation unit to ship its purchased (or manufactured) products to retailers, which leads to managerial decision problems including lot-sizes for each product, the container types used, the loading policy in containers, and the number of containers used. This motivates us to investigate the optimal lot-sizing and shipment scheduling problem. Such managerial decision problems have also arisen in TPL.
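For a single product and a single container type, the structure of such problems can be sketched with a Wagner-Whitin-style dynamic program in which freight cost grows with the number of containers used. The demand and cost figures are invented, and the paper's multi-product, shared-container setting is substantially harder than this simplification.

# A minimal DP sketch for dynamic lot-sizing with container freight (illustrative).
import math

demand = [40, 30, 50, 20, 60]   # dynamic demand per period (assumed)
K, h = 100.0, 1.0               # ordering cost, unit holding cost per period
cap, freight = 45, 60.0         # container capacity and cost per container

T = len(demand)
INF = float("inf")
best = [INF] * (T + 1)          # best[t] = min cost of covering periods 0..t-1
best[0], pred = 0.0, [0] * (T + 1)

for j in range(1, T + 1):
    for i in range(j):          # an order in period i covers periods i..j-1
        q = sum(demand[i:j])
        holding = sum(h * demand[t] * (t - i) for t in range(i, j))
        c = best[i] + K + freight * math.ceil(q / cap) + holding
        if c < best[j]:
            best[j], pred[j] = c, i

j, orders = T, []
while j > 0:                    # recover the ordering periods
    orders.append(pred[j])
    j = pred[j]
print("order in periods:", sorted(orders), "minimum total cost:", best[T])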
Several articles have attempted to extend the classical Dynamic Lot-Sizing Model (DLSM) by incorporating production-inventory and transportation functions together. Hwang and Sohn (1985) investigated how to simultaneously determine the transportation mode and order size for a deteriorating product without considering capacity restrictions on the transportation modes. Lee (1989) considered a DLSM allowing multiple set-up costs consisting of a fixed charge cost and a freight cost, in which a single fixed container type with limited carrying capacity is considered and the freight cost is proportional to the number of containers used. Fumero and Vercellis (1999) proposed an integrated optimization model for production and distribution planning considering such operational decisions as capacity management, inventory allocation, and vehicle routing. The solution of the integrated optimization model was obtained using the Lagrangean relaxation technique. Lee et al. (2003) extended the work of Lee (1989) by considering multiple heterogeneous vehicle types to transport the finished product immediately in the same period it is produced. It is also assumed that each vehicle has a type-dependent carrying capacity and that the unit freight cost for each vehicle type depends on the carrying capacity. Lee et al. (2003) also considered a dynamic model for inventory lot-sizing and outbound shipment scheduling in the third-party warehousing domain. They presented a polynomial-time algorithm for computing the optimal solution. Jaruphongsa et al. (2005) analyzed a dynamic lot-sizing model in which replenishment orders may be delivered by multiple shipment modes with different lead times and cost functions. They proposed a polynomial-time algorithm based on the dynamic programming approach. However, the aforementioned works did not consider a multiple-product problem.
Anily and Tzur (2005) considered a dynamic model of shipping multiple items by capacitated vehicles. They presented an algorithm based on a dynamic programming approach. Norden and Velde (2005) dealt with a multiple-product problem of determining transportation lot-sizes in which the transportation cost function has piece-wise linear

International Journal of Industrial Engineering, 20(1-2), 47-59, 2013.

A STRUCTURAL AND SEMANTIC APPROACH TO SIMILARITY MEASUREMENT OF LOGISTICS PROCESSES
Bernardo Nugroho Yahya1,3, Hyerim Bae1*, Joonsoo Bae2
1 Department of Industrial Engineering, Pusan National University, 30-san Jangjeon-dong, Geumjong-gu, Busan 609-735, South Korea
*Corresponding author's email: {bernardo;hrbae}@pusan.ac.kr
2 Department of Industrial and Information Systems Engineering, Chonbuk National University, 664-14 Deokjin-dong, Jeonju, Jeonbuk 561-756, South Korea, [email protected]
3 School of Technology Management, Ulsan National Institute of Science and Technology, UNIST-gil 50, Eonyang-eup, Ulju-gun, Ulsan 689-798, South Korea, [email protected]

Abstract: The increased individuation and variety of logistics processes have spurred a strong demand for a new process customization strategy. Indeed, to satisfy the increasingly specific requirements and demands of customers, organizations have been developing more competitive and flexible logistics processes. This trend not only has greatly increased the number of logistics processes in process repositories but also has made selecting processes for business decision making hard. Organizations, therefore, have turned to process reusability as a solution. One such strategy employs similarity measurement as a precautionary measure limiting the occurrence of redundant processes. This paper proposes a structure- and semantics-based approach to similarity measurement of logistics processes. Semantic information and semantic similarity of logistics processes are defined based on a logistics ontology available in the supply chain operations reference (SCOR) model. By combining similarity measurements based on both structural and semantic information of logistics processes, we show that our approach improves on previous approaches in terms of accuracy and quality.
Keywords: Logistics process, SCOR, similarity measurement, business process, logistics ontology
(Received November 30, 2010; Accepted March 15, 2012)

1. INTRODUCTION
To adapt to dynamic business conditions and achieve a competitive advantage, a logistics organization must implement customized processes that meet customer requirements and further its business objectives. Thus, it could be said that process customizability is integral to an organization's competitive advantage. We define customizability as the ability of the logistics party to apply logistics process objectives to many different business conditions (Lee and Leu, 2010). Customization of reference processes or templates to reduce the time and effort required to design and deploy processes at all levels is common practice (Lazovik and Ludwig, 2007). Customization of reference processes usually involves adding, removing, or modifying process elements such as activities, control flow, and data flow connectors. However, the existence of a large number of customized processes can incur process redundancy. For example, many similar processes with only slight differences in terminology, structure, and semantics can exist in maritime supply chains involving the handling of containers. In such environments, the establishment of joint procedures among global communities such as the International Association of Ports and Harbors (IAPH), the International Network of Affiliated Ports (INAP), and the North American Inland Port Network (NAIPN) can increase process redundancy in some way.
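The combination of structural and semantic similarity can be sketched in a few lines: activity-set overlap stands in for structural similarity, and a toy SCOR-like concept hierarchy stands in for the logistics ontology. All data and the 0.5 weighting are assumed for illustration and are not the paper's actual measures.

# A minimal sketch of combined structural and semantic process similarity.
def jaccard(a, b):
    """Structural similarity: overlap of activity sets."""
    return len(a & b) / len(a | b)

# Toy ontology: activity -> parent concept (SCOR-like categories, assumed).
parent = {"unload": "deliver", "load": "deliver", "store": "make",
          "pick": "make", "inspect": "source", "receive": "source"}

def semantic(a, b):
    """Semantic similarity: credit activity pairs under the same concept."""
    matches = sum(1 for x in a for y in b if parent.get(x) == parent.get(y))
    return matches / (len(a) * len(b))

p1 = {"receive", "inspect", "store", "load"}
p2 = {"receive", "pick", "unload", "load"}

w = 0.5  # weight between structural and semantic components (assumed)
score = w * jaccard(p1, p2) + (1 - w) * semantic(p1, p2)
print(f"combined similarity = {score:.3f}")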
For example, suppose the three ports of the country of origin, the hub, and the destination belong to the same global communities (Fig. 1). The conceptual processes of container flows at the hub and the destination are the same; however, their operational processes might differ slightly according to the respective performers, which is to say, the country's relevant laws, the ports' processing capacities, and other factors. When the ports are in the same communities, they are supposed to have either similar or standardized processes for handling container flows. The existence of similar or standard processes inspires port community members to reuse existing processes instead of creating new ones. In this sense, process redundancy encourages organizations to prioritize process reusability. Process reusability is the ability to develop a process model once and use it

International Journal of Industrial Engineering, 20(1-2), 60-71, 2013.

FUZZY ONTOLOGICAL KNOWLEDGE SYSTEM FOR IMPROVING RFID RECOGNITION
H. K. Lee1, C. S. Ko2, T. Kim3*, T. Hwang4
1 Research associate, [email protected]; 2 Professor, [email protected]; 3 Professor, [email protected]; 4 Professor, [email protected]
1-3 Department of Industrial & Management Engineering, Kyungsung University, 314-79 Daeyeon-3 dong, Nam-gu, Busan, Korea
4 Department of Civil Engineering, Dongeui University, 176 Eumkwang-ro, Jin-gu, Busan, Korea

To remain competitive in business and to respond quickly in the warehouse and supply chain, the use of RFID has been increasing in many industry areas. RFID can identify multiple objects simultaneously as well as identify individual objects. Some limitations of RFID remain, however, in its low recognition rate and its sensitivity to the material type and ambient environment. Much effort has been made to enhance the recognition rate and make it more robust; examples include changes to tag design, antenna angle, search angle, and signal intensity, to name a few.
This paper proposes a fuzzy-logic-based ontological knowledge system for improving the recognition rate of RFID and the variance of recognition. In order to improve performance and reduce variance, the following sub-goals are pursued. First, an ontology of the environmental factors is constructed to be used as a knowledge base. Second, a fuzzy membership function is defined using the Forward Link Budget in RFID. Finally, a conceptual knowledge system is proposed and tested to verify the model in an experimental environment.
Keywords: RFID, Performance, Identification, Ontology, SWRL, Fuzzy
(Received November 30, 2010; Accepted March 15, 2012)

1. INTRODUCTION
The radio frequency identification (RFID) technology allows remote identification of objects using radio signals, without the need for line-of-sight or manual positioning of each item. With the rapid development of RFID technology and its applications, we expect a brighter future in object identification and control. The major advantage of RFID technology over the barcode is that the RFID system allows detection of multiple items simultaneously as they pass through a reader field. Additionally, each physical object has its own unique ID (even two products of the same type have two different IDs), enabling precise tracking and monitoring of the position of each individually labeled product.
There is no doubt that RFID technology has paid off in some areas, but the effect across industry is not as big as had been expected earlier. One of the limitations is the recognition rate of RFID. Many environmental factors affect the performance of RFID: material type, packaging type, tag type, reader type, and tag location, to name a few. As the variables affecting RFID performance are unpredictable in advance and change according to the domain, there is high demand for a reusable and robust knowledge system.
An ontology is a formal representation of the knowledge by a set of concepts within a domain and the relationships
between those concepts. It is used to reason about the properties of that domain, and may be used to describe the domain.
An ontology provides a shared vocabulary, which can be used to model a domain that is, the type of objects and/or
concepts that exist, and their properties and relations. The focus of the ontology lies on the representation of the RFID
domain. The ontology can act as a model for exploring various aspects of the domain. Since part of the ontology deals with
the classification of RFID applications and requirements, it can also be used for supporting decisions on the suitability of
particular RFID tags for different applications.
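To make the idea concrete, the following minimal Python sketch pairs a toy ontology of environmental factors with a triangular fuzzy membership function over the forward-link power margin. The factor names, penalty values, and membership breakpoints are invented for illustration; they stand in for, but are not, the paper's SWRL-based ontology and its Forward Link Budget formulation.

# A toy "ontology" of environmental factors and a triangular fuzzy
# membership function over the forward-link power margin. All names and
# numbers are illustrative assumptions, not the paper's model.
FACTOR_ONTOLOGY = {
    # concept -> (property it refines, assumed penalty on link margin, dB)
    "water_content_high": ("material", -6.0),
    "metal_surface": ("material", -9.0),
    "tag_on_cardboard": ("packaging", -1.0),
}

def triangular(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def recognition_grade(base_margin_db, factors):
    """Fuzzy grade in [0, 1] that a tag is reliably readable, after
    applying the ontology's penalties to the nominal link margin."""
    margin = base_margin_db + sum(FACTOR_ONTOLOGY[f][1] for f in factors)
    # Membership of "good recognition" assumed to peak at a 10 dB margin.
    return triangular(margin, 0.0, 10.0, 20.0)

print(recognition_grade(12.0, ["water_content_high"]))  # prints 0.6

Under these assumed numbers, a 12 dB nominal margin degraded by high water content yields a recognition grade of 0.6, which a rule layer could then map to an action such as raising signal intensity or repositioning the tag.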
Previous RFID research covers RFID devices, middleware, agents, ontologies, and industrial applications. Pitzek (2010) focused on ontologies as a representation of the domain for informational purposes, i.e., as a conceptual domain model, putting them into context with other domains, such as communication-capable devices and automatic identification
International Journal of Industrial Engineering, 20(1-2), 72-83, 2013.

COLLABORATION BASED RECONFIGURATION OF PACKAGE SERVICE NETWORK WITH MULTIPLE CONSOLIDATION TERMINALS

C. S. Ko1, K. H. Chung2, F. N. Ferdinand3, H. J. Ko4
1Department of Industrial & Management Engineering
Kyungsung University
309, Suyeong-ro, Nam-gu
Busan, 608-736, Korea
e-mail: [email protected]
2Department of Management Information Systems
Kyungsung University
309, Suyeong-ro, Nam-gu
Busan, 608-736, Korea
e-mail: [email protected]
3Department of Industrial Engineering
Pusan National University
Busandaehak-ro, Geumjeong-gu
Busan, 609-935, Korea
e-mail: [email protected]
4Department of Logistics
Kunsan National University
558 Daehangno, Gunsan
Jeonbuk, 573-701, Korea
Corresponding author's e-mail: [email protected]

Abstract: Competition in the Korean package delivery market is severe because a large number of companies have entered it. A package delivery company in Korea generally owns and operates a number of service centers and consolidation terminals to provide a high level of customer service. However, some service centers cannot generate profits because of low volume and instead act as cost-raising facilities. This challenge can be overcome through a collaboration strategy that improves competitiveness. In this regard, this study suggests an approach to the reconfiguration of package service networks under a collaboration strategy: we propose a multi-objective nonlinear integer programming model and a genetic algorithm-based solution procedure with which the participating companies maximize their profits. An illustrative numerical example from Korea is presented to demonstrate the practicality and efficiency of the proposed model.
Keywords: network reconfiguration, express package delivery, cutoff time, strategic partnership
(Received November 30, 2010; Accepted March 15, 2012)

1. INTRODUCTION
The Korean express package delivery market has expanded rapidly with the growth of TV home shopping and internet commerce. Accordingly, domestic express companies of various sizes have been established, and various foreign companies have also entered the Korean express market. As a result of this surplus of express companies, they struggle to remain competitive at a reasonable price while maintaining an appropriate level of customer satisfaction. In this regard, a collaboration or partnership strategy is a possible way to overcome such difficulties. Collaboration and partnership are becoming popular competitive strategies in all business sectors. Well-known examples can be seen in air transportation, such as SkyTeam, Star Alliance, and Oneworld, and in sea transportation, such as CKYH-The Green Alliance and the Grand Alliance. In addition, supply chain management regards collaboration as a critical factor for successful implementation.
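To convey the flavor of such a genetic algorithm-based procedure, the sketch below searches over which service centers the allied companies keep open, scoring each configuration with a toy profit function. All data, margins, and GA settings are invented assumptions; the actual model of this study is a multi-objective nonlinear integer program with far richer structure.

# Toy GA over open/closed decisions for service centers; illustrative only.
import random

random.seed(1)
N_CENTERS = 12
volume = [random.randint(50, 400) for _ in range(N_CENTERS)]     # parcels/day
fixed_cost = [random.randint(80, 200) for _ in range(N_CENTERS)]
MARGIN = 1.0          # assumed profit per parcel at an open center
REROUTE_MARGIN = 0.6  # assumed margin when volume is rerouted to a partner

def profit(x):  # x[i] = 1 if center i stays open
    return sum(MARGIN * volume[i] - fixed_cost[i] if x[i]
               else REROUTE_MARGIN * volume[i]       # served via the alliance
               for i in range(N_CENTERS))

def evolve(pop_size=40, gens=100, pm=0.05):
    pop = [[random.randint(0, 1) for _ in range(N_CENTERS)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=profit, reverse=True)
        elite = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, N_CENTERS)      # one-point crossover
            child = [1 - g if random.random() < pm else g
                     for g in a[:cut] + b[cut:]]      # bit-flip mutation
            children.append(child)
        pop = elite + children
    return max(pop, key=profit)

best = evolve()
print(best, round(profit(best), 1))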
In Korea, an express company generally operates its own service network consisting of customer zones, service centers, and consolidation terminals. Customer zones are geographical districts in which customers either ship or receive packages, and each is typically covered by one service center. A service center receives customer shipment requests and picks up parcels from its customer zones; the packages then wait until the cutoff time for transshipment in bulk to a consolidation terminal. In this way, the service center acts as a temporary storage facility connecting customers to a
International Journal of Industrial Engineering, 20(1-2), 84-98, 2013.

COMPARISON OF ALTERNATIVE SHIP-TO-YARD VEHICLES WITH THE CONSIDERATION OF THE BATCH PROCESS OF QUAY CRANES

S. H. Choi1, S. H. Won2, C. Lee3
1Port Management/Operation & Technology Department,
Korea Maritime Institute
1652, Sangam-dong, Mapo-gu,
Seoul, South Korea
2Department of Logistics,
Kunsan National University
558 Daehangno, Gunsan,
Jeonbuk, South Korea
3School of Industrial Management Engineering,
Korea University
Anam-dong, Seongbuk-gu,
Seoul, South Korea
Corresponding author's email: Seung Hwan Won, [email protected]

Container terminals around the world compete fiercely to increase their throughput and to accommodate new mega vessels. To increase port throughput drastically, new quay cranes capable of batch processing are being introduced. The tandem-lift spreader mounted on a quay crane, which can handle one to four containers simultaneously, has recently been developed. Such an increase in the handling capacity of quay cranes requires a significant increase in the transportation capacity of ship-to-yard vehicles as well. The objective of this study is to compare the performances of three alternative configurations of ship-to-yard vehicles in a conventional container terminal environment. We assume that the yard storage for containers is horizontally configured and that the quay cranes are equipped with tandem-lift spreaders. A discrete event simulation model of a container terminal is developed and validated. We compare the performances of the three alternatives under different cargo workloads and profiles, represented by different annual container handling volumes and different ratios of tandem-mode operations, respectively. The results show that the performances of the alternative vehicle types depend largely on the workload requirement and profile.
Keywords: ship-to-yard vehicle; simulation; container terminal; quay crane; tandem-lift spreader
(Received November 30, 2010; Accepted March 15, 2012)

1. INTRODUCTION
As trade volumes between countries have increased, the logistics environment around ports has been changing rapidly. World container traffic in 2008 was 540 million TEUs, 2.3 times the 230 million TEUs of 2000, and was forecast to grow at an annual average rate of about 9% through 2013. In response, the marine transportation industry has produced mega-carriers through mergers and acquisitions between shipping lines seeking to expand market dominance, and these carriers continue to make enormous investments in mega ships of over 10,000 TEUs in order to strengthen their competitiveness in shipping cost.
Following these changes in the shipping environment, large ports around the world are engaged in fierce competition to become continental hub ports that attract mega fleets, which leads them to strengthen port competitiveness by securing and operating efficient port facilities. In other words, the world's leading ports such as Singapore, Shanghai, Hong Kong, Shenzhen, Busan, Rotterdam, and Hamburg are not only developing large terminals but also investing in highly productive handling equipment for efficient port operation.
The handling equipment in a port generally consists of quay cranes (QCs), ship-to-yard vehicles (terminal trucks or automated guided vehicles), and yard cranes (YCs). Of these, QCs and ship-to-yard vehicles are most closely related to ships and are the most important factors determining ship turnaround time in a port.
Berthing a mega ship of over 10,000 TEUs requires sufficient water depth, adequately specified QCs, and high terminal productivity. Despite the increasing size of ships, shipping lines still expect the service times of the past; ports unable to meet such customer requirements may therefore see their customers desert them.
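A minimal discrete event sketch conveys the coupling this study investigates: a quay crane can only sustain its lift rate if enough ship-to-yard vehicles return in time. The single-crane scope, the exponential times, and all rates below are illustrative assumptions, not the validated simulation model of this paper.

# One quay crane served by a small vehicle fleet; event times via a heap.
import heapq, random

random.seed(7)

def qc_utilisation(n_vehicles, n_lifts=5000, lift_time=1.5, round_trip=9.0):
    """Fraction of time the quay crane spends lifting rather than
    waiting for a ship-to-yard vehicle to become available."""
    t, busy = 0.0, 0.0
    returns = []                             # min-heap of vehicle return times
    free = n_vehicles
    for _ in range(n_lifts):
        while returns and returns[0] <= t:   # reclaim returned vehicles
            heapq.heappop(returns)
            free += 1
        if free == 0:                        # crane starves until a return
            t = heapq.heappop(returns)
            free += 1
        lift = random.expovariate(1.0 / lift_time)
        t += lift
        busy += lift
        free -= 1                            # dispatch one loaded vehicle
        heapq.heappush(returns, t + random.expovariate(1.0 / round_trip))
    return busy / t

for n in (3, 5, 7):
    print(n, round(qc_utilisation(n), 3))

Printing the utilisation for fleets of 3, 5, and 7 vehicles shows the crane starving when the fleet is small, which is precisely the effect that higher tandem-lift capacity aggravates.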
International Journal of Industrial Engineering, 20(1-2), 99-113, 2013.

A HIERARCHICAL APPROACH TO VEHICLE ROUTING AND SCHEDULING WITH SEQUENTIAL SERVICES USING THE GENETIC ALGORITHM

K. C. Kim1, J. U. Sun2, S. W. Lee3
1School of Mechanical, Industrial & Manufacturing Engineering,
Oregon State University,
USA
2School of Industrial & Management Engineering,
Hankuk University of Foreign Studies,
Korea
3Department of Industrial Engineering,
Pusan National University
Korea
Corresponding author's email: [email protected]

Abstract: To survive in today's competitive market, material handling activities need to be planned carefully to satisfy business and customer demand. Vehicle routing and scheduling problems have been studied extensively for various industries with special needs. In this paper, a vehicle routing problem reflecting the unique characteristics of the electronics industry is considered. A mixed-integer nonlinear programming (MINP) model is presented to minimize the traveling time of delivery and installation vehicles. A hierarchical approach using the genetic algorithm is proposed and implemented to solve problems of various sizes. The computational results show the effectiveness and efficiency of the proposed hierarchical approach. A performance comparison between the MINP approach and the hierarchical approach is also presented.
Keywords: Vehicle Routing Problem, Delivery and Installation, Synchronization of Vehicles, Genetic Algorithm, Electronics Industry
(Received November 30, 2010; Accepted March 15, 2012)

1. INTRODUCTION
To survive in this competitive business environment, a company must handle its various materials cost-effectively. In manufacturing industries, material handling activities for raw materials and work-in-process are as important as those for final products. So that material handling activities satisfy business and customer demand effectively, vehicle routing and scheduling problems have been studied and implemented extensively for various industries with special needs (Golden and Wasil, 1987; List and Mirchandani, 1991; Chien and Spasovic, 2002; Zografos and Androutsopoulos, 2004; Ripplinger, 2005; Prive et al., 2006; Claassen and Hendricks, 2007; Ji, 2007). This paper presents a variant of the vehicle routing problem (VRP) characterized by the electronics industry's unique material handling needs, which have arisen as the distribution paradigm has shifted.
In recent years, the electronics industry has experienced rapidly emerging changes in its post-sales services, i.e., delivery and installation. In the past, local stores were individually responsible for delivery and installation services. However, owing to the growing demand for direct orders from customers and the increasing complexity of advanced electronics products, electronics manufacturers are increasingly required to deliver their goods directly to customers and to provide on-site professional installation. Sales of electronics via e-commerce, large discount stores, general merchandise stores, department stores, and the like are increasing very rapidly. In addition, electronics manufacturers make intensive efforts to increase sales through professional electronics franchises such as Staples and OfficeMax, which do not provide such delivery and installation. These trends shift the responsibility for delivery and installation onto electronics manufacturers, and the number of direct deliveries from electronics manufacturers to customers is growing at an explosive pace.
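As a rough illustration of the hierarchical idea described in the abstract, the sketch below lets an outer genetic algorithm search customer-to-vehicle assignments while an inner nearest-neighbour heuristic sequences each vehicle's stops. The synchronization of delivery and installation vehicles, which is central to this paper, is deliberately omitted, and the coordinates, fleet size, and GA settings are invented assumptions.

# Outer GA assigns customers to vehicles; inner heuristic orders the stops.
import math, random

random.seed(3)
CUSTOMERS = [(random.uniform(0, 100), random.uniform(0, 100))
             for _ in range(20)]
DEPOT, N_VEHICLES = (50.0, 50.0), 3

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def route_time(stops):
    """Nearest-neighbour tour from the depot through the given stops."""
    t, here, todo = 0.0, DEPOT, list(stops)
    while todo:
        nxt = min(todo, key=lambda c: dist(here, CUSTOMERS[c]))
        t += dist(here, CUSTOMERS[nxt])
        here = CUSTOMERS[nxt]
        todo.remove(nxt)
    return t

def total_time(assign):   # assign[i] = vehicle serving customer i
    return sum(route_time([i for i, v in enumerate(assign) if v == k])
               for k in range(N_VEHICLES))

def ga(pop_size=30, gens=200, pm=0.1):
    pop = [[random.randrange(N_VEHICLES) for _ in CUSTOMERS]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=total_time)
        elite = pop[:pop_size // 2]
        pop = elite + [              # mutation-only reproduction, for brevity
            [g if random.random() > pm else random.randrange(N_VEHICLES)
             for g in random.choice(elite)]
            for _ in range(pop_size - len(elite))]
    return min(pop, key=total_time)

print(round(total_time(ga()), 1))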
Another unique characteristic of the electronics industry can be identified in installation service. Some products, such as air conditioners, have required professional installation even in the past. Many newly emerging products require not
International Journal of Industrial Engineering, 20(1-2), 114-125, 2013.

THE PROBLEM OF COLLABORATION IN MANUFACTURED GOODS EXPORTATION THROUGH AUTONOMOUS AGENTS AND SYSTEM DYNAMIC THEORIES

V. M. D. Silva1, A. G. Novaes2, B. Scholz-Reiter3, J. Piotrowski3
1Department of Production Engineering
Federal Technological University of Paraná,
Ponta Grossa, PR, 84016-210,
Brazil
Corresponding author's e-mail: [email protected]
2Federal University of Santa Catarina
Florianopolis, SC, 88040-900
Brazil
e-mail: [email protected]
3BIBA - Bremer Institut für Produktion und Logistik,
University of Bremen,
Hochschulring 20, 28359,
Bremen, Germany
e-mail: [email protected]
e-mail: [email protected]

Abstract: Along export chains, transportation carries an important cost that directly affects the efficiency of the whole chain. Experiments with Collaborative Transportation Management (CTM) show satisfactory results in terms of reduced delivery times, increased productivity of transportation resources, and economies of scale. In this context, this incipient study presents a real Brazilian problem concerning exports of manufactured products by maritime transportation and introduces the concept of CTM as a tool to help companies in their decision making. It identifies the major parameters that could support the maritime logistics of manufactured products, and the Autonomous Agents and System Dynamics theories are described as possible methods with which to model and analyze this logistics problem. As a preliminary study, it is intended to awaken the reader's interest in these emergent concepts applied to such an important problem and to contribute to reducing the costs of the export chain.
Key-Words: Collaborative transportation management, Manufactured exporters, Maritime shippers, Autonomous agents, Decision-making, System Dynamics
(Received November 30, 2010; Accepted March 15, 2012)

1. INTRODUCTION
In Brazil, foreign trade has not been used as a pro-active factor in development strategy because, historically, negotiations between the different participants of the export chain have been marked by conflict. Each link tends to minimize its individual costs, which normally does not converge to the global optimum of the supply chain. Therefore, companies are being obliged to re-analyze their procedures, apply reengineering techniques, and redefine the relations and models of their supply chains to reduce costs, increase efficiency, and gain competitive advantage.
To reduce such problems, the concept of CTM has recently emerged within the broader notion of collaborative logistics. It has spread since the year 2000 through the Collaborative Planning, Forecasting and Replenishment (CPFR) approach, and CTM has been described by experts as a helpful tool for reducing transaction costs and risks, enhancing service performance and capacity, and achieving a more dynamic supply chain (Silva et al., 2009).
As Brazilian exporting companies look for higher competitiveness, they should stop acting individually and start acting collaboratively. This requires detailed sharing of data and information by the agents of the logistics chain in order to build a solid partnership. An agent here is each member of the chain; in the maritime logistics chain these are the producer company, road carriers, shipowners, and maritime shippers.
Bibliographic study and contact with entrepreneurs in this area show that there is restricted scientific work on this subject encompassing manufacturing industries, freight contractors, and maritime shippers with a view to supporting exportation. Therefore, this study, which is part of a Ph.D. thesis in development, intends to present a brief overview of Brazilian exportation and the operation of the manufactured-goods export chain using maritime shippers, the
International Journal of Industrial Engineering, 20(1-2), 126-140, 2013.

EVOLUTION OF INTER-FIRM RELATIONSHIPS: A STUDY OF SUPPLIER-LOGISTICAL SERVICES PROVIDER-CUSTOMER TRIADS

P. Childerhouse1, W. Luo1, C. Basnet1, H. J. Ahn2, H. Lee3, G. Vossen4
1Department of Management Systems, Waikato Management School,
University of Waikato,
Hamilton 3216, New Zealand
2College of Business Administration,
Hongik University,
Seoul, Korea
3Brunel Business School,
Brunel University, Uxbridge,
Middlesex, UK
4School of Business Administration and Economics,
University of Muenster, 48149 Muenster,
Germany
Corresponding author's email: Paul Childerhouse, [email protected]
The concept of supply chain management has evolved from focussing initially on functional co-ordination within an
organisation, then to external dyadic integration with suppliers and customers and more recently towards a holistic
network perspective. The focus of the research described in this paper is to explore how and why relationships within
supply chain networks change over time. Since a triad is the simplest meaningful sub-set of a network, we use triads as
the unit of analysis in our research. In particular, we consider triads consisting of a supplier, their customer, and the
associated logistics services provider. An evolutionary triadic model with eight relational states is proposed and the
evolutionary paths between the states hypothesised, based on balance theory. The fundamental role of logistical service
providers is examined within these alternative triadic states with a specific focus on the relationships between the actors
in the triad. Empirical evidence is collected from three very different triads and cross-referenced with our proposed
model. How the interactions and relationships change over time is the central focus of the case studies and the
conceptual model. Our findings indicate that some networks are more stable than others and depending on their
position in a triad some actors can gain power over their business partners. Further, those organisations that act as
information conduits seem to have greater capacity to influence their standing in a supply chain network.
Significance: We make a conceptual contribution to supply network theory, as well as reporting an empirical investigation of the theory.
Keywords: Supply networks, Inter-firm relationships, Triads, Balance theory, Logistical service providers.
(Received November 30, 2010; Accepted March 15, 2012)

1. INTRODUCTION
This paper investigates inter-firm relationships from a social network perspective. In particular, we examine the relationship dynamics of a network of inter-connected firms with shared end consumers. The social network perspective has gained significant momentum in the management literature (Wang and Wei, 2007). In this paper, we use the psychological concept of balance theory (Simmel, 1950; Heider, 1958) to make sense of the dynamic inter-relationships in a supply chain network. The most important dimensions of change in business networks that will be focussed upon concern the development of activity links, resource ties, and actor relationship bonds (Gadde and Hakansson, 2001).
A triad is the smallest meaningful sub-set of a network (Madhavan, Gnyawali and He, 2004) and as such will be used as the unit of analysis throughout this paper. Figure 1 is a simplistic representation of the multi-layered complex business interactions that make up supply chain networks. The actors are represented by nodes (circles) and the connections between them as links. A triadic sub-set of the entire network is illustrated as the grey shaded area in Figure 1. Three actors, A, B, and C, are highlighted, together with their three links: A with B, A with C, and B with C. Each actor also has a potential mediating role in the relationship between the other two, as indicated by the dashed arrow from actor A to the link between B and C. Thus, we contend that a representative sub-set of a network can be investigated via triads. This cannot be said for dyads, which overly simplify the social complexities of real-world business interactions.
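Balance theory gives the triad its combinatorics: with each of the three links scored positive or negative, there are exactly eight sign states, and Heider's classic rule calls a triad balanced when the product of the three signs is positive. The short sketch below enumerates these states; whether they map one-to-one onto the eight relational states of our model is left open here, so the correspondence should be read as illustrative.

# Enumerate the eight sign states of a triad under Heider's balance rule.
from itertools import product

LINKS = ("supplier-customer", "supplier-LSP", "customer-LSP")

for signs in product((+1, -1), repeat=3):
    balanced = signs[0] * signs[1] * signs[2] > 0
    state = ", ".join(f"{link}:{'+' if s > 0 else '-'}"
                      for link, s in zip(LINKS, signs))
    print(f"{state}  ->  {'balanced' if balanced else 'imbalanced'}")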
International Journal of Industrial Engineering, 20(1-2), 141-152, 2013.

OPERATION PLANNING FOR MARITIME EMPTY CONTAINER REPOSITIONING

Y. Long1, L. H. Lee2, E. P. Chew3, Y. Luo4, J. Shao5, A. Senguta6, S. M. L. Chua7
1,2,3,4,5Department of Industrial and Systems Engineering,
National University of Singapore,
Singapore, 119260
1Corresponding author's email: [email protected]
2[email protected]
3[email protected]
4[email protected]
5[email protected]
6,7Neptune Orient Lines Ltd.,
Singapore, 119962
6[email protected]
7[email protected]

Abstract: One of the challenges that liner operators face today is to operate empty containers effectively to meet demand and to reduce inefficiency. In this study, we develop a decision support tool to help the liner operator manage maritime empty container repositioning efficiently. The tool considers the actual operations and constraints of the problems faced by the liner operator and uses mathematical programming approaches to solve them. We present a case study that considers 49 ports and 44 services. We also compare our proposed model with a simple rule that attempts to mimic the actual operation of a shipping liner. The numerical results show that the proposed model is promising. Moreover, our model is able to identify potential transshipment hubs for intra-Asia empty container transportation.
Keywords: Empty Container Repositioning; Optimization; Network; Transshipment Hub; Decision Support Tool
(Received November 30, 2010; Accepted March 15, 2012)

1. INTRODUCTION
Since the 1970s, containerization has become increasingly popular in global freight transportation, especially on international trade routes. Containerization improves port handling efficiency, reduces handling costs, and increases trade flows. To increase the utilization of containers, containers should be reloaded with new cargo as soon as possible after reaching their destination. However, this is not always possible because of trade imbalances between regions of the world, which force ocean liners to hold large inventories of empty containers and thereby increase operating costs. Generally, export-dominated ports need a large number of empty containers, while import-dominated ports hold a large surplus of them. Under this imbalance, a profitable movement of a laden container usually generates an unprofitable empty container movement. The main challenge is to decide when, and how many, empty containers should be moved from import-dominated ports to export-dominated ports so as to meet customer demand while reducing operational cost.
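Stripped of services, schedules, and time periods, the core trade-off can be written as a small transportation linear program: move empties from surplus ports to deficit ports at minimum cost. The sketch below, with invented port names and costs, is a single-period caricature of the decision support tool developed in this study.

# Single-period empty-container repositioning as a transportation LP.
import numpy as np
from scipy.optimize import linprog

surplus = {"PortA": 120, "PortB": 80}     # TEUs available (assumed)
deficit = {"PortC": 90, "PortD": 110}     # TEUs required (assumed)
cost = np.array([[4.0, 7.0],              # cost[i][j]: surplus i -> deficit j
                 [6.0, 3.0]])

m, n = cost.shape
A_eq, b_eq = [], []
for i in range(m):                        # ship out all of port i's surplus
    row = np.zeros(m * n)
    row[i * n:(i + 1) * n] = 1
    A_eq.append(row)
    b_eq.append(list(surplus.values())[i])
for j in range(n):                        # satisfy each deficit exactly
    row = np.zeros(m * n)
    row[j::n] = 1
    A_eq.append(row)
    b_eq.append(list(deficit.values())[j])

res = linprog(cost.flatten(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=[(0, None)] * (m * n))
print(res.x.reshape(m, n))                # optimal flows
print(res.fun)                            # total repositioning cost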
There are a large number of studies on the empty container repositioning problem. One stream uses inventory-based control mechanisms for empty container management (e.g., Li et al., 2004; Song, 2007; Song and Earl, 2008). Another stream applies dynamic network programming methods to the container management problem (e.g., Lai et al., 1995; Shen and Khoong, 1995; Shintani et al., 2007; Liu et al., 2007; Erera et al., 2009). Some studies in this stream focus on inland container flows (e.g., Crainic et al., 1993; Jula et al., 2006), while others address maritime transportation (e.g., Francesco et al., 2009). Besides these, some studies develop intermodal models that consider both inland and maritime transportation (e.g., Choong et al., 2002; Erera et al., 2005; Olivo et al., 2005). The general maritime network model for empty container repositioning was proposed by Cheung and Chen (1998). They developed a time-space network model, and their study paves the way for maritime empty container repositioning network modeling. To apply general networking techniques to the shipping industry, researchers in the last decade have tended to consider actual services and real-scale networks. An actual service schedule is considered by Lam et al. (2007), who develop an approximate dynamic programming approach for deriving operational strategies for the relocation of empty containers. Although the actual service schedule is considered in their study, the proposed approximate dynamic programming is limited to small-scale problems. One paper that
International Journal of Industrial Engineering, 20(1-2), 153-162, 2013.

OPTIMAL PRICING AND GUARANTEED LEAD TIME WITH LATENESS PENALTIES

K. S. Hong1, C. Lee2
1,2Division of Industrial Management Engineering
Korea University
Anamdong 5-ga, Seongbuk-gu, 136-713
Seoul, Republic of Korea
1[email protected]
2Corresponding author's e-mail: [email protected]

This paper studies the price and guaranteed lead time decisions of a supplier that offers a fixed guaranteed lead time for a product. If the supplier is unable to meet the guaranteed lead time, it must pay a lateness penalty to customers. Expected demand is therefore a function of the price, the guaranteed lead time, and the lateness penalty. We first develop a mathematical model, for a given supply capacity, to determine the optimal price, guaranteed lead time, and lateness penalty that maximize total profit. We then consider the case where the supplier can also increase capacity, and we compute the optimal capacity.
Keywords: Time-based competition, Guaranteed lead time, Pricing, Lateness penalty decision, Price and time sensitive market
(Received November 30, 2010; Accepted March 15, 2012)

1. INTRODUCTION
Increased competition has forced service providers and manufacturers to introduce new products into the market, and time has evolved as a competitive paradigm (Blackburn 1991, Hum and Sim 1996). As time has become a key to business success, lead time reduction has emerged as a key competitive edge in service and manufacturing (Van Beek and Van Putten 1987, Suri 1998, Hopp and Spearman 2000, White et al. 2009). This new competitive paradigm is termed time-based competition.
Suppliers exploit customers' sensitivity to time by charging higher prices in return for shorter lead times. For instance, amazon.com charges more than double the standard shipping cost to guarantee delivery in two days, while its normal delivery time may be as long as a week (Ray and Jewkes, 2004). Likewise, suppliers differentiate their products based on lead time in order to maximize revenue (Boyaci and Ray, 2003). In this way, lead time reduction provides suppliers with new opportunities. Additionally, in today's global economy, suppliers increasingly depend on fast response times as an important source of sustainable competitive advantage. As a result, one needs to consider the influence of lead time on demand.
This paper considers a supplier that uses a guaranteed lead time to attract customers and supplies a product in a price- and time-sensitive market. Time-based competition was first studied by Stalk and Hout (1990), who addressed the effect of time on strategic competitiveness. Hill and Khosla (1992) developed an optimization model to calculate the optimal lead time reduction and compared the costs and benefits of lead time reduction. Palaka et al. (1998), So and Song (1998), and Ray and Jewkes (2004) assumed that demand is sensitive to both the price and the guaranteed lead time and investigated the optimal pricing and guaranteed lead time decisions. Palaka et al. (1998) employed an M/M/1 queueing model and developed a mathematical model to determine the optimal guaranteed lead time, capacity utilization, and price under a linear demand function. So and Song (1998) extended the results of Palaka et al. (1998) to a log-linear (Cobb-Douglas) demand function and analyzed the impact of using delivery time guarantees as a competitive strategy in service industries. Ray and Jewkes (2004) assumed that the mean demand rate is a function of price and guaranteed lead time and that the price is determined by the length of the lead time, and developed an optimization model to determine the optimal guaranteed lead time. They also extended their results by incorporating economies of scale, where the unit operating cost is a decreasing function of the mean demand rate.
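A small numeric sketch in the spirit of these M/M/1 models makes the trade-off tangible: raising the price or lengthening the guaranteed lead time depresses demand, while a short guarantee inflates the lateness probability, which for an M/M/1 queue is P(W > L) = exp(-(mu - lambda)L). All coefficients below are invented for illustration and do not reproduce the formulation of any of the cited papers.

# Grid search for price p and guaranteed lead time L; illustrative only.
import math

MU = 10.0                              # service capacity (orders per day)
A, B_PRICE, B_LEAD = 14.0, 0.5, 2.0    # assumed linear demand coefficients
UNIT_COST, PENALTY = 4.0, 5.0          # unit cost; penalty per late order

def profit(p, L):
    lam = max(0.0, A - B_PRICE * p - B_LEAD * L)   # demand rate lambda
    if lam >= MU:                                  # queue must be stable
        return float("-inf")
    late_prob = math.exp(-(MU - lam) * L)          # M/M/1 tail: P(W > L)
    return lam * (p - UNIT_COST - PENALTY * late_prob)

best = max(((profit(p / 10, L / 10), p / 10, L / 10)
            for p in range(40, 200) for L in range(1, 50)),
           key=lambda t: t[0])
print("profit %.2f at price %.1f and lead time %.1f" % best)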
So (2000), Tsay and Agarwal (2000), Pekgun et al. (2006), and Allon and Federgruen (2008) also developed mathematical models to determine the optimal price and the optimal guaranteed lead time in a competitive setting where suppliers selling a product compete on price and lead time. However, their models do not consider the lateness penalty decision. When suppliers employ a guaranteed lead time strategy, they may have to pay a lateness penalty to customers
International Journal of Industrial Engineering, 20(1-2), 163-175, 2013.

OPTIMAL CONFIGURATION OF STORAGE SYSTEMS FOR MIXED PYRAMID STACKING

D. W. Jang1, K. H. Kim2
1Port Research Division, Port Management/Operation & Technology Department
Korea Maritime Institute
Seoul, Korea
Email: [email protected]
2Department of Industrial Engineering
Pusan National University
Busan, Korea
Corresponding author's email: [email protected]

Abstract: Pyramid stacking is a type of block stacking for cylindrical unit loads such as drums, coils, and paper rolls. This study addresses how to determine the optimal configuration of a storage system for mixed pyramid stacking of multi-group unit loads. It is assumed that multiple groups of unit loads, whose retrieval rates and durations of stay differ from one another, are stored in the same storage system. The configuration of a storage system is specified by the number of bays, the assignment of groups to each bay, and the height and width of each bay. A cost model considering the handling cost and the space cost is proposed. Numerical experiments illustrate the optimization procedures of this study.
(Received November 30, 2010; Accepted March 15, 2012)

1. INTRODUCTION
Pyramid stacking is a storage method in which cylindrical unit loads are stacked on the floor as shown in Figure 1. It achieves high space utilization at the price of a high re-handling cost. The bay of the pyramid stack in Figure 1 consists of 3 tiers with 4 rows at the bottom, giving 9 unit loads in total: 4, 3, and 2 unit loads per tier, counted from the bottom. When a retrieval order is issued for a unit load at a low tier, one or more relocations must be performed before the target unit load can be retrieved. Such relocations are a major source of inefficiency in handling activities in pyramid stacking systems.
Figure 2 shows the total number of handlings needed to retrieve each unit load in the pyramid stacking system of Figure 1. Here k is the index of the tier from the top and l is the index of the position in each tier from the left-hand side; s represents the total number of handlings required to retrieve a target unit load from the corresponding position. For the unit load at (3,2), retrieval requires relocating the unit loads at (1,1), (1,2), (2,1), and (2,2), so the total number of handlings becomes 5.
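The count generalizes directly, because the unit load at tier k, position l is buried under a triangle of unit loads on the tiers above it. A minimal sketch, assuming tier i (counted from the top) of an n-tier bay with w bottom positions holds w - (n - i) unit loads:

def handlings(n, w, k, l):
    """Total handlings to retrieve the unit load at tier k (from the top),
    position l (from the left): one lift for the target itself plus one
    relocation per unit load stacked above it."""
    count = 1                          # the retrieval of the target itself
    for i in range(1, k):              # every tier above tier k
        width_i = w - (n - i)          # tier i holds w - (n - i) unit loads
        lo = max(1, l - (k - i))       # leftmost blocking position on tier i
        hi = min(width_i, l)           # rightmost blocking position on tier i
        count += max(0, hi - lo + 1)
    return count

# The bay of Figure 1: n = 3 tiers with w = 4 unit loads at the bottom.
assert handlings(3, 4, 3, 2) == 5      # reproduces the (3,2) example above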
For a given number of unit loads in a bay, decreasing the number of unit loads at the lowest tier forces the number of tiers to increase, which increases the expected number of relocations per retrieval. However, the height of a pyramid stacking bay cannot exceed the number of unit loads at the lowest tier, because the number of unit loads per tier decreases by one with each tier going up. Increasing the number of unit loads at the lowest tier, on the other hand, increases the space required for the bay. Park and Kim (2010) estimated the number of re-handles for a given number of unit loads at the lowest tier and a given number of tiers when all the unit loads are heterogeneous, meaning that every unit load in the bay differs from the others and a retrieval order is issued for one specific unit load. This study extends Park and Kim (2010) to the case where multiple unit loads in a bay are of the same type, so that a retrieval order is issued for any unit load of that type. Figure 3-(a) illustrates the case where all the unit loads in a bay are of different types, while Figure 3-(b) illustrates the case of three unit loads in each of three types.
Many researchers have analyzed the re-handling operation. Watanabe (1991) analyzed the handling activities in container yards and proposed a simple index, termed the index of accessibility, to express the accessibility of a stack in terms of the number of relocations required to lift a container. Castilho and Daganzo (1993) analyzed the handling activities in inbound container yards; based on a simulation study, they proposed a formula for estimating the number of relocations for the random retrieval of a container. Kim (1997) proposed a formula for estimating the number of relocations for a random retrieval of an inbound container from a bay. Kim and Kim (1999) analyzed the handling activities for relocations in inbound container yards and used the result to determine the number of devices and the amount of space
International Journal of Industrial Engineering, 20(1-2), 176-187, 2013.

PLANNING FOR SELECTIVE REMARSHALING IN AN AUTOMATED CONTAINER TERMINAL USING COEVOLUTIONARY ALGORITHMS

K. Park1, T. Park1, K. R. Ryu1
1Department of Computer Engineering
Pusan National University
Busan, Korea
Corresponding author's email: [email protected]

Abstract: Remarshaling in a container terminal refers to the task of rearranging containers stored in the stacking yard to improve the efficiency of subsequent loading onto a vessel. When the time allowed for such preparatory work is limited, only a selected subset of containers can be rearranged. This paper proposes a cooperative coevolutionary algorithm (CCEA) that decomposes the planning problem into three subproblems, selecting containers, determining target locations, and finding a moving order, and conducts a cooperative parallel search to find a good solution for each subproblem. To cope with the uncertainty of crane operation in real terminals, the proposed method iteratively replans at regular intervals to minimize the gap between the plan and its execution. For an efficient search under the real-time constraint of iterative replanning, our CCEA reuses the final populations of the previous iteration instead of restarting from scratch.
Significance: This paper deals with an optimization problem having three constituent subproblems that are not independent of one another. Instead of solving the subproblems in turn and/or heuristically, which sacrifices solution quality for efficiency, we employ a CCEA that conducts a cooperative parallel search to find a good solution efficiently. For real-world application, issues such as real-time constraints and uncertainty are also addressed.
Keywords: Automated container terminal, remarshaling, container selection, iterative replanning, cooperative coevolutionary algorithm
(Received November 30, 2010; Accepted March 15, 2012)

1. INTRODUCTION
The productivity of a container terminal depends critically on vessel dwell time, which is mainly determined by how efficiently the export containers are loaded onto the vessels. The efficiency of the loading operation depends on how the containers are stacked in the stacking yard, where containers are temporarily stored. Export containers must be loaded in a predetermined sequence that takes into account the weight balance of the vessel and the convenience of operations at the intermediate and final destination ports. If the container to be fetched next is stored under other containers, additional operations are required for the yard crane to relocate the containers above it. This rehandling is the major source of loading inefficiency, causing delays at the quayside. The loading operation is also delayed if a yard crane must travel a long distance to fetch a container. The delays caused by rehandling or long travel can be avoided if the export containers are arranged in an ideal configuration respecting the loading sequence. There have been many studies on deciding ideal stacking positions for export containers coming into the yard (Kim et al., 2000, Duinkerken et al., 2001, Dekker et al., 2006, Yang et al., 2006, Park et al., 2010a, and Park et al., 2010c). However, appropriate stacking of incoming containers is difficult because most containers are carried into the terminal before the loading plan is available. Remarshaling refers to the preparatory task of rearranging the containers during the idle times of yard cranes so as to avoid rehandling and long travel at loading time. In real container terminals, however, not all export containers can be remarshaled, because the crane idle time is not long enough and the loading plan is fixed only a few hours before the loading operation starts.
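The cooperative coevolutionary mechanism can be sketched compactly: one population per subproblem, with an individual's fitness obtained by combining it with the current best collaborators from the other populations. The three toy representations and the stand-in fitness function below are invented for illustration; they mirror only the decomposition used in this paper, not its actual encodings or objective.

# Skeleton of a cooperative coevolutionary algorithm (CCEA).
import random

random.seed(5)
N = 8                                    # containers eligible for remarshaling

def random_individual(sub):
    if sub == "select":                  # which containers to move
        return [random.randint(0, 1) for _ in range(N)]
    if sub == "slots":                   # target slot for each container
        return [random.randrange(N) for _ in range(N)]
    return random.sample(range(N), N)    # "order": a move sequence

def fitness(select, slots, order):
    """Toy objective rewarding moves to low-numbered slots made early in
    the sequence; a stand-in for the real loading-efficiency model."""
    return sum((N - slots[c]) + (N - order.index(c))
               for c in range(N) if select[c])

SUBS = ("select", "slots", "order")
pops = {s: [random_individual(s) for _ in range(20)] for s in SUBS}
best = {s: pops[s][0] for s in SUBS}

for _ in range(100):                     # cooperative evolution loop
    for s in SUBS:
        def f(ind, s=s):                 # evaluate with best collaborators
            ctx = {**best, s: ind}
            return fitness(ctx["select"], ctx["slots"], ctx["order"])
        pops[s].sort(key=f, reverse=True)
        best[s] = pops[s][0]
        # mutation-only reproduction, kept deliberately simple
        pops[s] = pops[s][:10] + [random_individual(s) for _ in range(10)]

print(fitness(best["select"], best["slots"], best["order"]))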
In this paper, we propose a cooperative coevolutionary algorithm (CCEA) that can derive a remarshaling plan for a selected subset of the export containers under a time constraint. The idea of a CCEA is to search efficiently for a solution in a reduced search space by decomposing a given problem into subproblems (Potter et al., 2000). In CCEAs, there is a population of candidate solutions for each subproblem, and these populations evolve cooperatively through mutual information exchange. Park et al. (2009) developed a planning method for remarshaling all the export containers using a CCEA, assuming no time constraint. In their CCEA, the remarshaling problem is decomposed into two subproblems: determining the target slots to which the containers are relocated, and determining the order in which to move them. Another work by Park et al. (2010b) addressed the problem of selective remarshaling and proposed a genetic algorithm
International Journal of Industrial Engineering, 20(1-2), 188-210, 2013.

SEASONAL SUPPLY CHAIN AND THE BULLWHIP EFFECT

D. W. Cho1, Y. H. Lee2
1Department of Industrial and Management Engineering
Hanyang University, Ansan, Gyeonggi-Do, 426-791, South Korea,
e-mail: [email protected]
2*Department of Industrial and Management Engineering
Hanyang University, Ansan, Gyeonggi-Do, 426-791, South Korea,
Corresponding author's e-mail: [email protected]

Abstract: In this study, we quantify the bullwhip effect in a seasonal two-echelon supply chain with stochastic lead time. The bullwhip effect is the amplification of demand variability as one moves away from the customer toward the supplier in a supply chain. This amplification poses very severe problems for a supply chain. The retailer faces external demand for a single product from end customers, where the underlying demand process is a seasonal autoregressive moving average, SARMA(1,0)×(0,1)s, process. The retailer employs a base-stock periodic review policy and replenishes its inventory from the upstream party every period using the minimum mean-square error forecasting technique. We investigate which parameters influence the bullwhip effect and how strongly each parameter affects it. In addition, we investigate how the seasonal period interacts with the lead time, the seasonal moving average coefficient, and the autoregressive coefficient in shaping the bullwhip effect in a seasonal supply chain.
Keywords: Supply chain management, Bullwhip effect, Seasonal autoregressive moving average process, Stochastic lead time
(Received November 30, 2010; Accepted March 15, 2012)

1. INTRODUCTION
Seasonal supply chains are affected by seasonal behavior that impacts material and information flows both within and between facilities, including vendors, manufacturing and assembly plants, and distribution centers. Seasonal patterns of demand, which exist when time series data fluctuate according to some seasonal factor, are a common occurrence in many supply chains. They may intensify the bullwhip effect, which causes severe problems in supply chains. Seasonal peaks of demand may increase the amplification of demand variability across the supply chain. In the long run, this can reduce supply chain profitability, the difference between the revenue generated from the final customer and the total cost across the supply chain.
A basic approach to maintaining supply chain profitability is for each independent entity of a supply chain to maintain stable inventory levels that fulfill customer requests at minimum cost. The main barrier, both internal and external, to achieving this objective is the bullwhip effect: the increasing amplification of order variability within a supply chain as one moves upstream. This amplification includes demand distortion, a phenomenon in which orders to the supplier tend to have larger variance than sales to the buyer. The occurrence of the bullwhip effect in a supply chain causes severe problems such as lost revenues, inaccurate demand forecasts, low capacity utilization, missed production schedules, ineffective transportation, excessive inventory investment, and poor customer service (Lee et al., 1997a, b).
Forrester (1969) provides evidence of the bullwhip effect, and Sterman (1989) exhibits the same phenomenon through an experiment known as the beer game. In addition, Lee et al. (1997a, b) identify five main sources of the bullwhip effect: demand signal processing, non-zero lead time, order batching, the rationing game under shortage, and price fluctuations and promotions. They argue that eliminating these main causes may significantly reduce the bullwhip effect. Concretely, the demand process, lead times, inventory policies, supply shortages, and forecasting techniques all have a significant influence on the bullwhip effect. Among these, the forecasting technique, the inventory policy, and, to some extent, the replenishment lead time are controllable by supply chain members and hence can be chosen so as to mitigate the bullwhip effect. The demand process, however, whether seasonal or not, is uncontrollable, because it is an external parameter originating with the customer; supply chain members can only respond appropriately to the demand process they face. Changing demand trends have a significant influence on supply chain performance measures (Byrne and Heavey, 2006). Therefore, it is important to understand the impact of the seasonal demand process on the bullwhip effect in a seasonal supply chain.
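The quantity at stake can also be illustrated by a short simulation: generate seasonal demand, forecast it, place order-up-to orders, and compare variances, with the bullwhip effect measured as Var(orders)/Var(demand). The sketch below uses sinusoidal-plus-AR(1) demand and a moving-average forecast purely for illustration; this study instead derives the measure analytically for SARMA demand with MMSE forecasting.

# Simulated bullwhip ratio under an order-up-to policy; illustrative only.
import math, random, statistics

random.seed(11)
S, PHI, LEAD, WINDOW = 12, 0.7, 2, 12   # season, AR(1) coeff., lead, window

demand, orders = [], []
noise, prev_target = 0.0, None
for t in range(600):
    noise = PHI * noise + random.gauss(0, 5)            # AR(1) disturbance
    d = 100 + 20 * math.sin(2 * math.pi * t / S) + noise
    demand.append(d)
    if t >= WINDOW:
        forecast = statistics.fmean(demand[-WINDOW:])   # moving average
        target = (LEAD + 1) * forecast                  # order-up-to level
        if prev_target is not None:
            orders.append(d + target - prev_target)     # q_t = d_t + S_t - S_{t-1}
        prev_target = target

bullwhip = statistics.variance(orders) / statistics.variance(demand[WINDOW + 1:])
print(round(bullwhip, 2))   # > 1 indicates variance amplification upstream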
There have been many studies of the bullwhip effect covering the demand process, forecasting techniques, lead times, and ordering policies. Alwan et al. (2003) studied the bullwhip effect under an order-up-to policy by applying the mean-squared-error-optimal forecasting method to an AR(1) process, and investigated the stochastic nature of the ordering process for an incoming ARMA(1,1) process under the same inventory policy and forecasting technique. Chen et al. (2000a, 2000b), Luong (2007), and Luong and Phien (2007) studied the bullwhip effect resulting from an order-up-to policy
International Journal of Industrial Engineering, 20(1-2), 211-224, 2013.

SCHEDULING ALGORITHMS FOR MOBILE HARBOR: AN EXTENDED M-PARALLEL MACHINE PROBLEM

I. Sung1, H. Nam1, T. Lee1
1Korea Advanced Institute of Science and Technology
Korea, Republic of
Corresponding author's email: [email protected]

Abstract: Mobile Harbor is a movable floating structure with container loading/unloading equipment on board. A Mobile Harbor unit is equivalent to a berth with a quay crane in a conventional port, except that it works with a container ship anchored on the open sea. A Mobile Harbor-based system typically deploys a fleet of Mobile Harbor units to handle a large number of containers, and operations scheduling for the fleet is essential to the productivity of the system. In this paper, a method for computing scheduling solutions for a Mobile Harbor fleet is proposed. Jobs are assigned to Mobile Harbor units, and their operation sequence is determined, with the objective of minimizing the sum of the completion times of all container ships. The problem is formulated as a mixed integer programming (MIP) problem, modified from an m-parallel machine problem. A heuristic approach using a Genetic Algorithm is developed to obtain a near-optimal solution with reduced computation time.
(Received November 30, 2010; Accepted March 15, 2012)
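As a baseline for the extended problem treated in this paper, recall the classical m-parallel machine result: with identical units and one job per ship, dispatching jobs shortest-processing-time-first to the earliest-available unit minimizes the sum of completion times. The sketch below uses made-up job times; the MIP and Genetic Algorithm of this paper address the Mobile Harbor features this baseline ignores.

# SPT list scheduling on m identical parallel machines; illustrative data.
import heapq

def spt_schedule(job_times, m):
    """Return (assignment, total completion time) for m identical units."""
    units = [(0.0, k) for k in range(m)]      # (available time, unit id)
    heapq.heapify(units)
    assignment, total = [], 0.0
    for j, t in sorted(enumerate(job_times), key=lambda jt: jt[1]):
        avail, k = heapq.heappop(units)       # earliest-available unit
        done = avail + t
        assignment.append((j, k, done))       # job j on unit k, done at 'done'
        total += done
        heapq.heappush(units, (done, k))
    return assignment, total

jobs = [7.0, 3.0, 9.0, 2.0, 5.0, 4.0]         # hours per ship (assumed)
plan, total = spt_schedule(jobs, m=2)
print(plan)
print(total)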

1. INTRODUCTION
In today's global economy, demand for maritime container transportation has been steadily increasing. This, in turn, has stimulated the adoption of very large container ships of over 8,000 TEU1 capacity in an effort to reduce transportation costs. With the introduction of such large container ships, container terminals now face the challenge of dramatically improving their service capability. The challenges include providing sufficient water depth at their berths and in their sea routes, and improving container handling productivity to reduce the port staying time of container ships. Resolving these problems by conventional approaches, expanding existing ports or building new ones, requires massive investment and raises environmental concerns.
Mobile Harbor is a new concept developed by a group of researchers at the Korea Advanced Institute of Science and Technology (KAIST) as an alternative solution to this problem. Mobile Harbor is a container transportation system that can load/unload containers from a container ship anchored on the open sea, transferring containers from the ship to a container terminal and vice versa. A concept design and the dimensional specifications of Mobile Harbor are shown in Figure 1 and Table 1, respectively.

Figure 1. A concept design of Mobile Harbor

An illustrative operational scenario of Mobile Harbor is as follows:
A container ship calls at a port and, instead of berthing at a terminal, anchors at an anchorage remotely located from the terminal,
1 TEU stands for twenty-foot equivalent unit.

International Journal of Industrial Engineering, 20(1-2), 225-240, 2013.

SHORT SEA SHIPPING AND RIVER-SEA SHIPPING IN THE MULTI-MODAL TRANSPORT OF CONTAINERS

J. R. Daduna
Berlin School of Economics and Law
Badensche Str. 52
D - 10825 Berlin, Germany
e-mail: [email protected]

Abstract: The constantly increasing quantitative and qualitative requirements of terrestrial container and Ro/Ro transport cannot be met in the upcoming years by road and rail freight transport and inland waterway transport alone. Suitable solutions must also be found that include other modes of transport, for which both economic and ecological factors and macroeconomic considerations are important. One possible approach is to increase the use of Short Sea Shipping and River-Sea Shipping, which have seen little application so far. In this contribution, the underlying structures are presented and reviewed for their advantages and disadvantages. Potential demand structures are identified and illustrated by various examples. The paper concludes with an analysis and evaluation of these concepts and a summary of the measures necessary for their implementation.
(Received November 30, 2010; Accepted March 15, 2012)

1. POLITICAL FRAMEWORK FOR FREIGHT TRANSPORT


At present, cargo traffic, both as inland and as port hinterland transport, is carried largely by road freight. This situation contradicts the transport policy objectives increasingly coming to the fore worldwide, which call for a sustainable change of modal split in favor of rail freight transport and freight transport on inland waterways. Considerations regarding the efficient use of resources and the reduction of mobility-based pollution receive priority here. However, the results of realizing these goals should not be overestimated. The critical question is to what extent a modal shift can actually be achieved under the existing technical and organizational framework and the requirements of operational processes in logistics (see e.g. Daduna 2009). In general, this concept does not exclude measures to shift road transport to other modes of transport, but more importantly the existing potentials should be exhausted, especially in long-distance haulage.
Targeted governmental measures in various countries, for example the introduction in the Federal Republic of Germany of a road toll for heavy trucks over 12 tonnes admissible gross vehicle weight on highways, which caused an administratively enforced increase in the cost of road transport, have not shown the aspired effect (see e.g. Bulheller 2006; Bühler / Jochem 2008). Only the economical behavior of road transport service providers has led to noticeable ecological effects, for example through the increasing use of vehicles in lower pollutant categories (see e.g. BAG 2009: 19p).
The often existing and desired political prioritization of multi-modal road / rail freight transport has not yet led to the expected results regarding a significant change in modal split, since from the user perspective adequate process efficiency and quality of service are in many cases not provided. In addition, rail transport often suffers from capacity restrictions in the available network infrastructure as well as, for example within the European Communities (EC), sometimes significant interoperability problems in cross-border traffic. These especially concern monitoring and control technology and energy supply, as well as the legal framework (see e.g. Pachl 2004: 16pp).
Another possibility is the inclusion of inland waterway and maritime navigation in the structures of multi-modal transport, regardless of process-related limits. Inland waterway navigation can offer only limited shift potential because of capacity restrictions (referring to the authorized breadth and draught of inland waterway vessels) and the geographical structure of the available network. In maritime navigation, corresponding restrictions concern possible access to the often customer-proximate but smaller ports and therewith to the hinterland, for example in view of a further increase in ocean-going vessel sizes (see e.g. Imai et al. 2008).
An increasingly discussed solution, already implemented worldwide in various areas, is the concept of Short Sea Shipping (SSS), also in the context of feeder traffic, whereby a larger number of smaller ports of local and / or regional importance is involved in the configuration of transport processes. The focus of attention is on multi-modal transport chains in which primarily (coastal) shipping is efficiently linked with the (classical) terrestrial modes of transport. A specific extension of these considerations results from the integration of River-Sea Shipping (RSS), because not only coastal transport routes are used here, but with a suitably designed inland network of waterways also access to the
International Journal of Industrial Engineering, 20(3-4), 241-251, 2013.

A PRELIMINARY MATHEMATICAL ANALYSIS FOR UNDERSTANDING TRANSMISSION DYNAMICS OF NOSOCOMIAL INFECTIONS IN A NICU

Taesu Cheong1 and Jennifer L. Grant2
1Department of Industrial and Systems Engineering, National University of Singapore,
Singapore 117576, Singapore
2Rollins School of Public Health, Department of Health Policy and Management,
Emory University, Atlanta, GA 30322, USA

Nosocomial infections (NIs) have been a major concern in hospitals, especially among high-risk populations. Neonates hospitalized in the intensive care unit (ICU) have a higher risk of acquiring infections during hospitalization than other ICU populations, and these infections often result in prolonged and more severe illness and possibly death. The corresponding economic burden is immense, not only for parents and insurance companies but also for hospitals faced with increased patient load and resource utilization. In this paper, we attempt to systemically understand the transmission dynamics of NIs in a neonatal intensive care unit (NICU). For this purpose, we present a mathematical model, perform sensitivity analysis to evaluate effective intervention strategies, and discuss numerical findings from the analysis.
Keywords: Nosocomial infections, Hospital-acquired infections, Infection control, Mathematical model, Pathogen spread

1. INTRODUCTION
Nosocomial infection (NI; also known as hospital-acquired infection or HAI) is defined as an infection acquired during hospitalization that was not present or incubating at the time of admission, according to the US Department of Health and Human Services, Centers for Disease Control and Prevention (CDC) (Lopez Sastre et al., 2002). CDC data suggest that 1 in 10 hospitalized patients in the United States acquires an infection each year (Buus-Frank, 2004). This translates to approximately two million hospitalized patients with NIs and approximately 90,000 resulting deaths. The associated economic burden is also immense: approximately USD 6.7 billion is spent annually, primarily on costs associated with increased length of stay.
HAIs often lead to morbidity and mortality for neonates in intensive care. A 2007 study by Gastmeier et al. (2007) compared reports of HAI outbreaks in the NICU to those in other intensive care units (ICUs). They found that, out of 729 outbreaks in all ICUs, 276 were in NICUs, 37.9% of all ICU outbreaks. NICU outbreaks involved 5718 patients, making the NICU the most frequent subgroup in ICU outbreaks.
Critically ill infants cared for in the intensive care environment are among the patient groups most vulnerable to HAIs. Because these babies are underdeveloped and have fragile skin, they have a higher risk of acquiring such infections. The immunologic immaturity of this patient population, the need for prolonged stays, and the extensive use of invasive diagnostic and therapeutic procedures also contribute to higher rates of infection in the NICU than in pediatric and adult ICUs (Mammina et al., 2007). Infection rates have varied from 6% to 40% of neonatal patients, with the highest rates occurring most often in facilities with larger proportions of very-low-birth-weight infants or neonates requiring surgery (Brady, 2005). This group of infants also experiences more severe illness as a result of these infections, mainly because of their profound physiologic instability and the diminished functional capacity of their immune system. Efforts to protect NICU infants from infections must therefore be correspondingly intensive. Children's Healthcare of Atlanta (CHOA)1 has three ICUs, of which the NICU has the highest infection rate. This concerns management, since these rates are also higher than the national average.
A systemic approach and mathematical modeling have been used increasingly to understand the transmission dynamics of infectious diseases in hospitals, particularly to test hypotheses about transmission and to explore the transmission dynamics of pathogens (Grundmann et al., 2006). In this study, we perform a preliminary mathematical analysis of the spread of NIs in the CHOA NICU. We then numerically evaluate, through sensitivity analysis, the effectiveness of different intervention strategies, including increased hand hygiene, patient screening at admission, and NICU-wide explicit contact precautions for colonized or infected patients. We note that, in the field of industrial engineering, the literature has mainly discussed the application of quality control charts to detect the outbreak of infections (e.g., Benneyan, 2008).
We also consider facility surface disinfection including medical equipment and devices as an intervention strategy.
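As a rough illustration of the kind of model such an analysis rests on, the sketch below implements a generic patient-worker colonization ODE. It is not the paper's actual model; the compartments, all rate values, and the hand-hygiene parameter eta are illustrative assumptions.

```python
# A minimal sketch, assuming a generic two-compartment transmission model:
# fractions of colonized patients and contaminated health-care workers,
# with a hand-hygiene compliance parameter eta scaling transmission.
# All parameter values are invented for illustration.
import numpy as np
from scipy.integrate import solve_ivp

beta_ph = 0.08   # contamination rate of workers by colonized patients
beta_hp = 0.06   # colonization rate of patients by contaminated workers
gamma_p = 0.10   # patient decolonization / discharge-replacement rate
gamma_h = 2.0    # worker decontamination rate (hand washing, glove change)
eta = 0.5        # hand-hygiene compliance: 0 = none, 1 = perfect

def rhs(t, y):
    p, h = y  # fractions: colonized patients, contaminated workers
    dp = beta_hp * (1 - eta) * h * (1 - p) - gamma_p * p
    dh = beta_ph * (1 - eta) * p * (1 - h) - gamma_h * h
    return [dp, dh]

sol = solve_ivp(rhs, (0, 60), [0.05, 0.0])
print(f"colonized patient fraction after 60 days: {sol.y[0, -1]:.3f}")
```

Raising eta in such a sketch and re-integrating is the crude analogue of the sensitivity analysis over intervention strategies described above.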
1 A non-profit pediatric hospital (http://www.choa.org/)



International Journal of Industrial Engineering, 20(3-4), 252-261, 2013.

AUTOMATED METHODOLOGY FOR SCENARIO GENERATION AND ITS FEASIBILITY TESTING
Sang Chul Park, Euikoog Ahn, Yongjin Kwon
Department of Industrial Engineering
Ajou University, Suwon, 443-749 South Korea
Email: [email protected]
The main purpose of this study is to devise a novel methodology for automated scenario generation, which
simultaneously checks the feasibility and the correctness of scenarios in terms of event sequence, logical propagation,
and violation of constraints. Modern-day warfare is highly fluid, fast moving, and unpredictable. Such situations demand
fast decision making and rapid deployment of fighting forces. Management of combat assets and utilization of
battlefield information therefore become the key factors that decide the outcome of an engagement. In this context, the
Korean Armed Forces are building a framework in which commanders can rapidly and efficiently evaluate
every conceivable engagement scenario before committing real assets. The methodology is derived from the Conflict
Table, event transition probabilities, the DEVS formalism, and the DFS algorithm. The presented example illustrates a
one-on-one combat engagement scenario between two submarines, whose results validate the effectiveness of the proposed
methodology.

Keywords: Defense M&S; DEVS; DFS; Automated scenario generation; Conflict Tables; Event transition
probabilities.

1. INTRODUCTION

Defense modeling and simulation (M&S) technology enables countless testing and engagement scenarios to be
evaluated without having to commit real assets (Lee et al. 2008). In defense M&S, real objects (e.g., soldiers, trucks,
tanks, and defense systems) are modeled as combat entities and embedded into a computer-generated synthetic
battlefield. The interaction between the combat entities and the synthetic battlefield is dictated by the rules within the
sequence of events, which is basically an engagement scenario. Defense M&S has two broad categories: (1) the testing
of weapons effectiveness; and (2) virtual engagement. The first category is highly favored due to its many
benefits, including cost savings, less environmental damage, and reduced safety hazards. The second category
represents virtual war games or engagements, depending on the size of forces and theaters involved. By examining the
engagement scenarios, war strategists can formulate the factors important to the conduct of battles and visualize the
tactical merits and flaws that are otherwise difficult to identify. One problem, however, is that the scenarios must be
composed manually, which incurs much time and effort (Yoon, 2004). Due to the complex and unpredictable nature of
modern warfare, every possible scenario needs to be evaluated to increase the chance of operational success. Manual
composition of engagement scenarios has therefore been a great hindrance to defense M&S.
To cope with the problem, a new method is needed that is automatic and self-checking. In other words, it
automatically composes scenarios for every possible eventuality, while automatically ascertaining the correctness of the
scenarios. Such a notion is well aligned with the concept of concurrent engineering (CE), which intends to improve
operational efficiency by simultaneously considering and coordinating disparate activities spanning the entire
development process (Evanczuk, 1990; Priest et al. 2001; Prasad, 1995; Prasad, 1996; Prasad, 1997; Prasad, 1998;
Prasad, 1999). CE is known to successfully reduce product development cycle time, and the same can be true for the
defense M&S development process. In this context, the automated scenario generation is based on the atomic model of
the DEVS (Discrete Event System Specification) formalism and the DFS (depth-first search) algorithm. DEVS provides a
formal framework for specifying discrete event models in a hierarchical and modular manner (DEVS, 2010). It is
represented by state transition tables. Many defense-related studies capitalize on the DEVS formalism (Kim et al.
1997), such as a small-scale engagement (Park, 2010), a simulation model for war tactics managers (Son, 2010), and
defense-related spatial models using Cell-DEVS (Wainer et al. 2005). Correctness is checked by the Conflict
Table, which represents the possible or impossible pathways for the events. In this study, any discernible activities are
referred to as events, and each event should propagate into the next event on a continuous time scale. The Conflict
Table controls the transition of any event from one state to the next. By doing so, an event can unfold only along a
feasible path. While the Conflict Table specifies only the admissible event transitions, the probabilities of one
event becoming the next differ when there are many subsequent events to propagate into. Therefore, the
event transition probabilities (ETP) can be prescribed by the simulation planners (Figure 1).
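As a loose illustration of the Conflict Table plus DFS combination described above, the sketch below enumerates feasible event sequences by depth-first search over a hypothetical table. The event names, the table contents, and the depth bound are all invented; the paper's actual table and DEVS coupling are not reproduced.

```python
# A minimal sketch, assuming a hypothetical conflict table: table[e] lists the
# events that may legally follow event e (infeasible transitions are absent).
from typing import Dict, List

conflict_table: Dict[str, List[str]] = {   # illustrative events only
    "detect": ["track", "evade"],
    "track": ["engage", "evade"],
    "engage": ["hit", "miss"],
    "evade": [],
    "hit": [],
    "miss": ["track"],
}

def enumerate_scenarios(event: str, path: List[str], out: List[List[str]],
                        max_len: int = 6) -> None:
    path = path + [event]
    nxt = conflict_table.get(event, [])
    if not nxt or len(path) == max_len:      # terminal event or depth bound
        out.append(path)
        return
    for e in nxt:                            # DFS over feasible transitions
        enumerate_scenarios(e, path, out, max_len)

scenarios: List[List[str]] = []
enumerate_scenarios("detect", [], scenarios)
for s in scenarios:
    print(" -> ".join(s))
```

In the full methodology each admissible transition would additionally carry a planner-specified ETP, turning the enumerated tree into a weighted scenario space.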
For war tacticians and military commanders, the result of this study brings about an immediate enhancement in their

International Journal of Industrial Engineering, 20(3-4), 262-272, 2013.

A NEW APPROXIMATION FOR INVENTORY CONTROL SYSTEM WITH DECISION VARIABLE LEAD-TIME AND STOCHASTIC DEMAND
Serap AKCAN1, Ali KOKANGUL2
1 Department of Industrial Engineering
University of Aksaray
68100, Aksaray, Turkey
E-mail: [email protected]
2 Department of Industrial Engineering
University of Çukurova
01330, Adana, Turkey
E-mail: [email protected]
Demand for any material in a hospital depends on a random patient arrival rate and a random length of stay in each unit.
The demand for any material therefore shows stochastic characteristics, which makes determining the optimal levels of r
and Q more difficult. Thus, in this study, a single-item inventory system for healthcare was developed using a
continuous review (r, Q) policy. A simulation meta-model was constructed to obtain equations for the average on-hand
inventory and average number of orders per year. Then, the equations were used to determine the optimal levels of r and
Q while minimizing the total cost in an integer non-linear model. The same problem investigated in this study was also
solved using OptQuest optimization software.

Significance: In this study, an applicable new approximation for inventory control system is constructed and this
approximation is examined by presenting a healthcare case study.
Keywords: Healthcare systems; Inventory control; (r, Q) policy; Integer non-linear programming; Simulation meta-

modeling

1. INTRODUCTION
There is a growing number of studies on continuous review inventory systems. The majority of these studies relate to
production applications in which backordering and shortages are allowed. However, there are very few studies concerning
the area of healthcare (Sees, 1999). Thus, this study aimed to determine the optimal reorder point (r) and the order
quantity (Q) required to minimize the expected annual total cost considering a single-item continuous review (r, Q)
policy for a hospital.
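To make the meta-model idea concrete, the sketch below simulates a continuous-review (r, Q) policy under Poisson daily demand and estimates the two responses the paper's meta-model approximates: average on-hand inventory and orders per year. The demand rate, lead time, and (r, Q) values are illustrative assumptions, not the paper's data.

```python
# A minimal sketch, with made-up parameters: estimating average on-hand
# inventory and annual order frequency for a continuous-review (r, Q) policy.
import numpy as np

rng = np.random.default_rng(0)

def simulate_rq(r, Q, lam=5.0, lead_time=7, days=3650):
    """Daily-step simulation; demand ~ Poisson(lam), fixed lead time in days."""
    inv = r + Q                      # starting on-hand stock
    arrivals = {}                    # due day -> quantity on order
    onhand_total, orders = 0.0, 0
    for day in range(days):
        inv += arrivals.pop(day, 0)              # receive any lot due today
        inv = max(inv - rng.poisson(lam), 0)     # demand; unmet demand is lost
        on_order = sum(arrivals.values())
        if inv + on_order <= r:                  # inventory position at/below r
            arrivals[day + lead_time] = arrivals.get(day + lead_time, 0) + Q
            orders += 1
        onhand_total += inv
    return onhand_total / days, orders / (days / 365)

avg_onhand, orders_per_year = simulate_rq(r=45, Q=120)
print(f"avg on-hand: {avg_onhand:.1f} units, orders/year: {orders_per_year:.1f}")
```

Regressing such simulated responses on (r, Q) over a designed grid is what produces the meta-model equations that the integer non-linear program then optimizes.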
Many models have been developed for continuous review (r, Q) policies. Çakanyıldırım et al. (2000) modeled a (Q, r)
policy where the lead-time depends on lot size. Salameh et al. (2003) considered a continuous inventory model under
permissible delays in payments. In this model, it was assumed that expected demand was constant over time and the
order lead-time was random. Durán et al. (2004) developed a continuous review inventory model to find the optimal
inventory algorithm when there was an expediting option. In their inventory policy, decision variables were integers.
They also discussed the case when the decision variables were real values. Mitra and Chatterjee (2004) modified a
continuous review model for two-stage serial systems first developed by De Bodt and Graves. The model was examined
for fast-moving items. Park (2006) used analytic models in the design of inventory management systems. Chen and Levi
(2006) examined a continuous review model with an infinite horizon and a single product; pricing and inventory decisions
were made simultaneously and ordering cost included a fixed cost. Mohebbi and Hao (2006) investigated a problem of
random supply interruptions in a continuous review inventory system with compound Poisson demand, Erlang-distributed
lead-times and lost sales. Axsäter (2006) developed a single-echelon inventory model controlled by a
continuous review (r, Q) policy in which it was assumed that the lead-time demand was normally distributed and in
which the aim was to minimize holding and ordering cost under a fill rate constraint. Lee and Schwarz (2007) considered
a continuous review (Q, r) inventory system with a single item from an agency perspective, in which the agent's effort
influences the item's replenishment lead-time. Their findings revealed that the possible influence of the agent on the
replenishment lead-time could be large, but that a simple linear contract was capable of recapturing most of the cost
penalty of ignoring agency. Hill (2007) investigated continuous review lost-sales inventory models with no fixed order
cost and a Poisson demand process. In addition, Hill et al. (2007) modeled a single-item, two-echelon, continuous
review inventory model. In their model, demands made on the retailers follow a Poisson process and warehouse lead-time
cannot exceed retailer transportation time. Darwish (2008) examined a continuous review model to determine the

International Journal of Industrial Engineering, 20(3-4), 273-281, 2013.

MANPOWER MODELING AND SENSITIVITY ANALYSIS FOR AFGHAN EDUCATION POLICY
Benjamin Marlin1,2 and Han-Suk Sohn2*
1 United States Army TRADOC Analysis Center, TRAC-WSMR, White Sands MR, NM 88002, USA
2 Dept. of Industrial Engineering, New Mexico State University, Las Cruces, NM 88003, USA
E-mail address: [email protected] (B. Marlin) and [email protected] (H. Sohn)

This paper provides a demand-based balance-of-flow manpower model, premised on mathematical programming, to
provide insight into the potential futures of the Afghan education system. Over the previous three decades, torn by
multiple wars and an intolerant governing regime, the education system in Afghanistan has been decimated. Over the
past 10 years, Afghanistan and the international community have dedicated substantial resources to educating the youth
of Afghanistan. By forecasting student demand, we are able to determine points of friction in the teacher
production policy regarding grade level, gender, and province across a medium-term time horizon. We modify the
model to provide sensitivity analysis to inform policies; examples of such policies are accounting for the length of
teacher training programs and encouraging inter-provincial teacher moves. By later applying a stochastic
optimization model, we highlight potential outcomes regarding changes in teacher retention attributable to policy
decisions, incentives to teach, or security concerns. This model was developed in support of the validation of a
large-scale simulation regarding the same subject.

Keywords: Manpower model, sensitivity analysis, Afghanistan, education policy, mixed integer linear program.

1. BACKGROUND
Over the previous three decades, torn by multiple wars and an intolerant governing regime, the education system in
Afghanistan has been decimated. Only in the recent decade has there been a unified effort toward the improvement of
education. This emphasis on education has provided benefits, but has also brought unexpected problems. There
has been a sevenfold increase in the demand for primary and secondary education, with nearly seven million children
enrolled in school today (Ministry of Education, 2011). Unfortunately, in a country with 27% adult literacy, an ongoing
war upon its soil, an opium trade as a primary component of gross domestic product, and an inefficient use of
international aid, meeting the increasing demand for education is difficult at best (Sigsgaard, 2009). The Afghanistan
Ministry of Education (MOE) has stated that the future of Afghanistan depends on the capacity of its people to improve
their own lives, the well-being of their communities, and the development of the nation. Concurrently, the United
Nations (UN) has supported a substantial body of research stating that primary and secondary education is directly
linked to the ability of a people to better their lives and their community (Dickson, 2010). This has resulted in the UN
charter for universal primary education and improved secondary education by 2015.
As of 2012, there are 56 primary donors who have donated approximately $57 billion U.S. to Afghanistan
(Margesson, 2009). The UN Coalition is dedicated to the security and infrastructure improvement of Afghanistan in
order to ensure Afghan Government success. In 2014, with the anticipated withdrawal of coalition forces and a newly
autonomous Afghan state, the future is uncertain. The purpose of this research is to use mathematical modeling to
demonstrate potential outcomes and points of friction regarding the demand for teachers in Afghanistan given the
substantial forthcoming changes in the country.

2. INTRODUCTION
Teacher management is a critical governance issue in fragile state contexts, and especially those in which the education
system has been destroyed by years of conflict and instability (Kirk, 2008). For this reason, this research focuses on the
capacity for teacher training in Afghanistan as it pertains to the growing demand for education. Although the current
pool of teachers has a mixed training background (73% of teachers have not met the grade 14 graduate requirement
(Ayobi, 2010)), the Afghanistan Ministry of Education requires two years of teacher training college (TTC) after a
potential teacher has passed the equivalent of 12th grade (Ministry of Education, 2011). It is therefore important
to determine the number of future teachers required to enter the training base each year to support the increasing
education demand. Of equal importance is discovering potential weaknesses in training capacity, and where these
potential friction points exist. The issues cannot be remedied in the short run; therefore, it is beneficial to use insights
gained through modeling to inform policy decisions.
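As a toy illustration of the balance-of-flow idea (not the authors' model; the demand figures, attrition rate, horizon, and units are invented), the sketch below sizes annual TTC intake so that the teacher stock, after attrition and the two-year training lag, covers forecast demand:

```python
# A minimal sketch, assuming illustrative numbers: choose annual TTC intake so
# that the teacher stock meets demand after attrition and a 2-year training lag.
from pulp import LpMinimize, LpProblem, LpVariable, lpSum

plan_years = range(2013, 2019)
intake_years = range(2011, 2019)          # intakes start two years before need
demand = {2013: 150, 2014: 165, 2015: 180, 2016: 200, 2017: 215, 2018: 230}
attrition, lag, stock0 = 0.05, 2, 140     # 5%/yr loss, 2-yr TTC, initial stock

m = LpProblem("teacher_flow", LpMinimize)
intake = {y: LpVariable(f"intake_{y}", lowBound=0) for y in intake_years}
stock = {y: LpVariable(f"stock_{y}", lowBound=0) for y in plan_years}

prev = stock0
for y in plan_years:
    m += stock[y] == (1 - attrition) * prev + intake[y - lag]  # balance of flow
    m += stock[y] >= demand[y]                                 # meet demand
    prev = stock[y]

m += lpSum(intake.values())               # minimize total training seats used
m.solve()
for y in plan_years:
    print(y, f"intake={intake[y - lag].value():.1f}", f"stock={stock[y].value():.1f}")
```

Repeating such a solve while perturbing attrition or demand is the simplest form of the sensitivity analysis the paper performs at much larger scale, across provinces, genders, and grade levels.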
The technique presented in this paper is based on a network flow integer program which has been successfully applied

International Journal of Industrial Engineering, 20(3-4), 282-289, 2013.

QUICK RELIABILITY ASSESSMENT OF TWO-COMMODITY TRANSMISSION THROUGH A CAPACITATED-FLOW NETWORK
Yi-Kuei Lin
Department of Industrial Management
National Taiwan University of Science and Technology
Taipei 106, Taiwan, R.O.C.
Tel: +886-2-27303277, Fax: +886-2-27376344
[email protected]
Each arc in a capacitated-flow network has discrete, multiple-valued random capacities. Many studies have evaluated
the probability, named system reliability herein, that the maximum flow from source to sink in a capacitated-flow
network is no less than a demand d. Such studies only considered commodities of the same type transmitted throughout
the network. Many real-world systems allow commodities of multiple types to be transmitted simultaneously, especially
when different types of commodities consume arc capacity differently. For simplicity, this article assesses the
system reliability for a two-commodity case as follows. Given the demand (d1, d2), where d1 and d2 are the demands of
commodities 1 and 2 at the sink, respectively, an algorithm is proposed to find all lower boundary points for (d1, d2).
The system reliability can be computed quickly in terms of such points. The computational complexity of the proposed
algorithm is also analyzed.
Keywords: Reliability; two-commodity; capacitated-flow networks; minimal paths

1. INTRODUCTION
A minimal path (MP) is a path whose proper subsets are not paths, and a minimal cut (MC) is a cut whose proper subsets
are not cuts. When the system is binary-state and composed of binary-state components (Endrenyi, 1978; Henley, 1981),
the typical method uses MPs or MCs to compute the system reliability, the probability that the source node s connects to
the sink node t. When the system is multistate (Aven, 1985; Griffith, 1980; Hudson and Kapur, 1985; Xue, 1985), the
system reliability, the probability that the system state is not less than a state d, can be evaluated in terms of d-MPs or
d-MCs. Note that a d-MP (not an MP) and a d-MC (not an MC) are both vectors denoting the state of each arc. In the case
that the considered multistate system is a single-commodity capacitated-flow network (i.e., flow is considered), the
system reliability is the probability that the maximum flow (from s to t) is not less than a demand d. The typical
approach to assess such reliability is to first search for the set of d-MPs (Lin et al., 1995; Lin, 2001, 2003, 2010a-d;
Yeh, 1998) or d-MCs (Jane et al., 1993; Lin, 2007, 2010e).
However, in the real world, many capacitated-flow networks allow commodities of multiple types to be transmitted from
s to t simultaneously, especially when different types of commodities consume the capacity on an arc differently.
A broadband telecommunication network is one such flow network, as several types of services (audio, video, etc.)
share the bandwidth (the capacity of an arc) simultaneously. The purpose of this article is to extend the reliability
assessment from the single-commodity case to a two-commodity case. The source node s supplies commodities
without limit. The demands of commodities 1 and 2 at the sink t are d1 and d2, respectively. An algorithm is first proposed
to generate all lower boundary points for (d1, d2), called (d1, d2)-MPs, in terms of MPs. Then the system reliability, the
probability that the system satisfies the demand (d1, d2), can be computed in terms of (d1, d2)-MPs. The remainder of this
paper is organized as follows. The two-commodity capacitated-flow model is presented in section 2. The theory and the
algorithm are proposed in sections 3 and 4, respectively. In section 5, a numerical example is presented to illustrate the
approach and to show how the system reliability can be calculated. The analysis of computational time complexity is
given in section 6.

2. TWO-COMMODITY CAPACITATED-FLOW NETWORK


Notation and Nomenclature
G = (A, N, C): a capacitated-flow network
A = {ai | 1 ≤ i ≤ n}: the set of arcs
N: the set of nodes
C = (C1, C2, ..., Cn): Ci is the maximal capacity of ai
s, t: the unique source node and the unique sink node
wik: (real number) the weight of commodity k (k = 1, 2) on ai; it measures the amount of capacity consumed on ai
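Under this notation, a capacity vector is feasible for given commodity flows when each arc's weighted load stays within its capacity. The sketch below checks that condition for hypothetical numbers; the arcs, weights, path set, and flows are invented, and this is only the feasibility test, not the paper's (d1, d2)-MP search.

```python
# A minimal sketch: check whether capacity vector x can carry flows (f1, f2)
# assigned to minimal paths, i.e. the weighted load on every arc ai satisfies
# sum over k of wik * (flow of commodity k crossing ai) <= xi.
from typing import List

def feasible(x: List[int], mps: List[List[int]], w: List[List[float]],
             f1: List[float], f2: List[float]) -> bool:
    load = [0.0] * len(x)
    for j, path in enumerate(mps):           # path = list of arc indices
        for i in path:
            load[i] += w[i][0] * f1[j] + w[i][1] * f2[j]
    return all(load[i] <= x[i] for i in range(len(x)))

# two arcs, one MP using both; commodity 2 consumes capacity twice as fast
x = [4, 5]
mps = [[0, 1]]
w = [[1.0, 2.0], [1.0, 2.0]]
print(feasible(x, mps, w, f1=[2.0], f2=[1.0]))   # load = 4 on each arc -> True
```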


International Journal of Industrial Engineering, 20(3-4), 290-299, 2013.

APPLICATIONS OF QUALITY IMPROVEMENT AND ROBUST DESIGN METHODS TO A PHARMACEUTICAL RESEARCH AND DEVELOPMENT
Byung Rae CHO1, Yongsun CHOI2 and Sangmun SHIN2*
1 Department of Industrial Engineering, Clemson University
Clemson, South Carolina 29634, USA
2 Department of Systems Management & Engineering, Inje University
Gimhae, GyeongNam 621-749, South Korea
Researchers often identify robust design, based on the concept of building quality into products or processes, as one of
the most important systems engineering design concepts for quality improvement and process optimization. Traditional
robust design principles have often been applied to situations in which the quality characteristics of interest are typically
time-insensitive. In pharmaceutical manufacturing processes, time-oriented quality characteristics, such as the
degradation of a drug, are often of interest. As a result, current robust design models for quality improvement which
have been studied in the literature may not be effective in finding robust design solutions. In this paper, we show how
the robust design concepts can be applied to the pharmaceutical production research and development by proposing
experimental and optimization models which should be able to handle the time-oriented characteristics. This is perhaps
the first attempt in the robust design field. An example is given and comparative studies are discussed for model
verification.
Keywords: Robust design; mixture experiments, pharmaceutical formulations, censored data, Weibull distribution,
maximum likelihood estimation.

1. INTRODUCTION
Continuous quality improvement has become widely recognized by many industries as a critical concept in maintaining
a competitive advantage in the marketplace. It is also recognized that quality improvement activities are efficient and
cost-effective when implemented during the design stage. Based on this awareness, Taguchi (1986) introduced a
systematic method for applying experimental design, which has become known as robust design and is often referred to
as robust parameter design. The primary goal of this method is to determine the best design factor settings by
minimizing performance variability and product bias, i.e., the deviation from the target value of a product. Because of
its practicality in reducing the inherent uncertainty associated with system performance, the widespread application
of robust design techniques has resulted in significant improvements in product quality, manufacturability, and
reliability at low cost. Although the main robust design principles have been implemented in a number of different
industrial settings, our literature study indicates that robust design has rarely been addressed in the pharmaceutical
design process.
In the pharmaceutical industry, the development of a new drug is a lengthy process involving laboratory
experiments. When a new drug is discovered, it is important to design an appropriate pharmaceutical dosage or
formulation for the drug so that it can be delivered efficiently to the site of action in the body for the optimal therapeutic
effect on the intended patient population. The Food and Drug Administration (FDA) requires that an appropriate assay
methodology for the active ingredients of the designed formulation be developed and validated before it can be applied
to animal or human subjects. Given this fact, one of the main challenges faced by many researchers during the past
decades is the optimal design of pharmaceutical formulations to identify better approaches to various unmet clinical
needs. Consequently, the pharmaceutical industrys large investment in the research and development (R&D) of new
drugs provides a great opportunity for research in the areas of experimentation and design of pharmaceutical
formulations. By definition, pharmaceutical formulation studies are mixture problems. These types of problems take
into account the proportions within the mixture, not the amount of the ingredient; thus, the ingredients in such
formulations are inherently dependent upon one another and consequently experimental design methodologies
commonly used in many manufacturing settings may not be effective. Instead, for mixture problems, a special kind of
experimental design, referred to as a mixture experiment, is needed. In mixture experiments, typical factors in question
are the ingredients of a mixture, and the quality characteristic of interest is often based on the proportionality of each of
those ingredients. Hence, the quality of the pharmaceutical product is influenced by such designs when they are applied
in the early stages of drug development.
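Since the paper's keywords point to censored data, the Weibull distribution, and maximum likelihood estimation, the sketch below fits Weibull parameters to right-censored failure times as a generic reference point. The data, censoring point, and starting values are invented; this is not the authors' experimental procedure or optimization model.

```python
# A minimal sketch: Weibull MLE with right-censoring. For a failure at time t
# the likelihood uses the density f(t); for a unit still surviving at the end
# of the test it uses the survival function S(t).
import numpy as np
from scipy.optimize import minimize

times = np.array([12., 18., 25., 31., 40., 40., 40.])   # hypothetical test times
observed = np.array([1, 1, 1, 1, 1, 0, 0], dtype=bool)  # False = censored at 40

def neg_log_lik(params):
    k, lam = params                       # shape, scale
    if k <= 0 or lam <= 0:
        return np.inf
    z = times / lam
    log_pdf = np.log(k / lam) + (k - 1) * np.log(z) - z**k
    log_surv = -z**k
    return -(log_pdf[observed].sum() + log_surv[~observed].sum())

res = minimize(neg_log_lik, x0=[1.0, 30.0], method="Nelder-Mead")
k_hat, lam_hat = res.x
print(f"shape={k_hat:.3f}, scale={lam_hat:.3f}")
```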
In this paper, we propose a new robust design model in the context of pharmaceutical production R&D. The main
contribution of this paper is two-fold. First, traditional experimental design methods have often been applied to situations in

International Journal of Industrial Engineering, 20(3-4), 300-310, 2013.

USING A CLASSIFICATION SCHEMA TO COMPARE BUSINESS-IT ALIGNMENT APPROACHES
Marne de Vries
University of Pretoria
South Africa
Department of Industrial and Systems Engineering
Enterprise engineering (EE) is a new discipline that emerged from existing disciplines, such as industrial engineering,
systems engineering, information science and organisation science. EE has the objective to design, align and govern the
development of an enterprise in a coherent and consistent way. Within the EE discipline, knowledge about the
alignment of business components with IT components is embedded in numerous business-IT alignment frameworks
and approaches, contributing to a fragmented business-IT alignment knowledge base. This paper presents the Business-IT Alignment Model (BIAM) as a conceptual solution to the fragmented knowledge base. The BIAM provides a
common frame of reference to compare existing business-IT alignment approaches. The main contribution of this article
is a demonstration of BIAM to compare two business-IT alignment approaches: the foundation for execution approach
and the essence of operation approach.
Significance: To provide enterprise designers/architects with a qualitative analysis tool for understanding and
comparing the intent, scope and implementation means of existing/already-implemented business-IT alignment
approaches.
Keywords: enterprise engineering, enterprise architecture, enterprise ontology, enterprise design, business-IT alignment

1. INTRODUCTION
Enterprise systems of the 21st century are exceedingly complex and, in addition, need to be dynamic to stay ahead of
the competition. Information technology has opened up new opportunities for enterprises to extend enterprise
boundaries by offering complementary services, entering new business domains and creating networks of collaborating
enterprises. The extended enterprise, however, still needs to comply with corporate governance rules and legislation,
and needs to be flexible and adaptable to seize new opportunities (Hoogervorst, 2009).
Supporting an overall view of a complex enterprise, enterprise engineering (EE) emerged as a new discipline for
designing, aligning and governing the development of an enterprise. EE consists of three subfields: enterprise ontology,
enterprise governance, and enterprise architecture (Barjis, 2011). One of the potential business benefits of EE is to
design and align the entire enterprise (Kappelman et al., 2010). A strong theme within enterprise alignment, however, is
alignment between business components and IT components, called business-IT alignment. Although various theoretical
approaches and frameworks have emerged in the literature (Schekkerman, 2004) to facilitate business-IT alignment, a
study performed by OVUM (Blowers, 2012) indicates that 66% of enterprises had developed their own customised
framework, with one third of the participants making use of two or more theoretical frameworks. The expanding
number of alignment approaches and frameworks creates difficulties in comparing or extending a current alignment
approach with knowledge from the existing business-IT alignment knowledge base. Previous studies circumvented this
problem by providing a common reference model, the Business-IT Alignment Model (BIAM) (De Vries, 2010,
2012), for understanding and comparing alignment approaches.
This article applies the BIAM in contextualising two business-IT alignment approaches, the foundation for execution
approach (Ross et al., 2006) and the essence of operation approach (Dietz, 2006). The aim is to enhance the foundation
for execution approach, due to certain method deficiencies of its associated operating model (OM), with another
approach, the essence of operation approach.
The main contribution of the article is to demonstrate how the classification categories of the BIAM are used to
compare two alignment approaches in confirming their compatibility. As demonstrated by the comparison example,
BIAM is useful to enterprise engineering practitioners for contextualising current alignment approaches implemented at
their enterprise, to identify similarities and differences between current approaches and opportunities for extension.
The paper is structured as follows: Section 2 provides background on the topic of business-IT alignment, the
Business-IT Alignment Model (BIAM) and two alignment approaches, the foundation for execution approach and the
essence of operation approach. Section 3 defines the problem of assessing the feasibility of combining current
alignment approaches. Section 4 suggests the use of the BIAM components as comparison categories in contextualising
the two alignment approaches, presenting the results of the comparison demonstration in section 5. Section 6 concludes
with opportunities for follow-up research.

International Journal of Industrial Engineering, 20(3-4), 311-318, 2013.

VARIABLE SAMPLE SIZE AND SAMPLING INTERVALS WITH FIXED TIMES HOTELLING'S T2 CHART
M. H. Lee
School of Engineering, Computing and Science,
Swinburne University of Technology (Sarawak Campus),
93350 Kuching, Sarawak, Malaysia.
Email: [email protected]
The idea of variable sample size and variable sampling interval with sampling at fixed times is extended to the
Hotelling's T2 chart in this study. This chart is called the variable sample size and sampling intervals with fixed times
(VSSIFT) Hotelling's T2 chart, in which samples of size n are always taken at specified, fixed, equally spaced time
points, but additional samples larger than n are allowed between these time points whenever there is some indication of a
process mean shift. The numerical comparison shows that the VSSIFT Hotelling's T2 chart and the variable sampling
interval and variable sample size (VSSI) Hotelling's T2 chart give almost the same effectiveness in detecting shifts in the
process mean. However, from the administrative viewpoint, the VSSIFT chart is considered to be more convenient than
the VSSI chart.
Keywords: sampling at fixed times; steady-state average time to signal; variable sample size; Hotelling's T2 chart;
Markov chain method

1. INTRODUCTION
The usual practice in using control charts is to take samples of fixed size from the process at a fixed sampling interval.
Recently, the variable sample size and variable sampling interval (VSSI) Hotelling's T2 chart has been shown to give
substantially faster detection of most process mean shifts than the standard Hotelling's T2 chart (Aparisi and Haro, 2003).
In the design of the VSSI chart, the sample size and the sampling interval are allowed to change based on the chart
statistic. It is reasonable to relax the control by taking the next sample after a long sampling interval with a small sample
size if the current sampling point is close to the target. On the other hand, it is reasonable to tighten the control by taking
the next sample after a short sampling interval with a large sample size if the current sampling point is far from the target
but still within the control limit. Thus the actual number of samples taken in any time period is a random variable, and
the time points at which the samples are taken are unpredictable. The variability in the sampling intervals may be
inconvenient from an administrative viewpoint and also undesirable for drawing inferences about the process (Reynolds,
1996a; Reynolds, 1996b). To alleviate the disadvantage of unpredictable sampling times, Reynolds (1996a; 1996b)
proposed a modification of the variable sampling interval (VSI) idea for the X̄ chart in which samples are always taken
at specified, fixed, equally spaced time points, but additional samples are allowed between these time points whenever
there is some indication that the process has shifted from the target. This chart is called the variable sampling interval
with sampling at fixed times (VSIFT) control chart. The VSIFT control chart may conform more closely to the natural
periods of the process and be more convenient to administer. It seems reasonable to increase the size of such samples to
improve the performance of the control chart, since the additional samples are taken only when there is some indication
that the process has changed (Costa, 1998). Lin and Chou (2005) extended this idea of sampling at fixed times to the
VSSI X̄ chart and showed that the VSSI X̄ chart with sampling at fixed times gives almost the same detection ability as
the original VSSI X̄ chart. From the practical viewpoint of administration, the variable sample size and sampling
intervals with fixed times (VSSIFT) X̄ chart is relatively easy to set up and implement. In this study, the VSSIFT feature
is extended to the multivariate chart, namely the Hotelling's T2 chart.

2. VSSIFT HOTELLING'S T2 CHART
Consider a process in which p quality characteristics of interest are observed for each item over time, and the
distribution of the observations is p-variate normal with mean vector μ0 and covariance matrix Σ0 when the process is
in control. Assume that a sample of size n is taken at every sampling point, and let X̄t be the average vector for the tth
sample. Then the chart statistic

Tt² = n (X̄t − μ0)′ Σ0⁻¹ (X̄t − μ0)    (1)

is plotted in the Hotelling's T2 chart with control limit CL = χ²p,α, where χ²p,α is the upper α percentage point of the
chi-square distribution with p degrees of freedom. As pointed out by Aparisi (1996), α = 0.005 has been widely
employed in
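As a quick numerical illustration of the statistic in equation (1), the sketch below computes Tt² for simulated in-control samples and compares each value against the chi-square control limit. The dimension, sample size, and parameter values are illustrative assumptions.

```python
# A minimal sketch: plotting statistic for a Hotelling's T^2 chart with known
# in-control mean mu0 and covariance sigma0, signalled when T^2 exceeds CL.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)
p, n, alpha = 3, 5, 0.005
mu0 = np.zeros(p)
sigma0 = np.eye(p)
sigma0_inv = np.linalg.inv(sigma0)
CL = chi2.ppf(1 - alpha, df=p)           # control limit: upper alpha point

for t in range(10):
    sample = rng.multivariate_normal(mu0, sigma0, size=n)
    xbar = sample.mean(axis=0)
    d = xbar - mu0
    T2 = n * d @ sigma0_inv @ d          # equation (1)
    flag = " -> signal" if T2 > CL else ""
    print(f"sample {t}: T2 = {T2:.2f}{flag}")
```

The VSSIFT scheme layers on top of this: the value of Tt² relative to warning and control limits decides whether an extra, larger sample is taken before the next fixed time point.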

International Journal of Industrial Engineering, 20(5-6), 319-328, 2013

SYSTEM RELIABILITY WITH ROUTING SCHEME FOR A STOCHASTIC COMPUTER NETWORK UNDER ACCURACY RATE
Yi-Kuei Lin and Cheng-Fu Huang
Department of Industrial Management
National Taiwan University of Science & Technology
Taipei 106, Taiwan, R.O.C.
Under the assumption that each branch capacity of the network is deterministic, the quickest path problem is to
find a path sending a specified amount of data from the source to the sink such that the transmission time is minimized.
However, in many real-life networks such as computer systems, the capacity of each branch is stochastic, with a
transmission accuracy rate. Such a network is named a stochastic computer network. Hence, we try to compute the
probability that d units of data can be sent through the stochastic computer network within both the time and
accuracy rate constraints according to a routing scheme. Such a probability is a performance indicator provided to
managers for improvement. This paper mainly proposes an efficient algorithm to find the minimal capacity vectors
meeting such requirements. The system reliability with respect to a routing scheme can then be calculated.
Keywords: Accuracy rate; Time; Quickest path; Routing scheme; Stochastic computer network; System reliability.

1. INTRODUCTION
From the perspectives of network operations, management, and engineering, service level agreements (SLAs) are an
important part of the networking industry. SLAs are used in contracts between network service providers and their
customers. An SLA can be measured by many criteria: for instance, availability, delay, loss, and out-of-order
packets. A basic index is the accuracy rate, which is often used to measure the performance of enterprise networks.
Therefore, from the viewpoint of quality of service (QoS) (Sausen et al., 2010; Wei et al. 2008), maintaining a high
network traffic accuracy rate is essential for enterprises to survive in a competitive environment. Many researchers
have discussed issues related to measuring local area network (LAN) traffic (Amer, 1982; Chlamtac, 1980; Jain and
Routhier, 1986), and previous studies have considered flow accuracy in traffic classification, where high-volume flows
are called elephant flows. Because high packet-rate flows have a great impact on network performance, identifying them
promptly is important in network management and traffic engineering (Mori et al., 2007). A conventional method
for estimating the accuracy rate of large or elephant flows is the use of packet sampling. However, packet sampling
is the main challenge in network or flow measurements. Feldmann et al. (2001) presented a model for traffic
demands to support traffic engineering and performance debugging of large Internet service provider networks.
Choi et al. (2003) used packet sampling to accurately estimate large flows under dynamic traffic conditions. The file
is said to be transmitted correctly only if the file received at the sink is identical to the original file. In fact, data
transfer is done through packet transmission. The network supervisor should monitor the number of error packets to
assess the accuracy rate of the network. However, the previous papers did not involve system reliability when
measuring the accuracy rate.
Nowadays, computer technology is becoming more important to modern enterprises. Computer networks are the
major medium for transmitting data/information in most enterprises. As the stability of computer networks strongly
influences the quality of data transmissions from a source to a sink, especially for accurate traffic measurement and
monitoring, the system reliability of the computer network is always of concern for information technology
departments. Many enterprises regard system reliability evaluation or improvement as crucial for network
management, traffic engineering, and security tasks. In general, a computer network is usually modeled as a network
topology with nodes and branches, in which each branch represents a transmission line and each node represents a
transmission device such as a hub, router, or switch. In fact, a transmission line is combined with several physical lines
such as twisted pairs, coaxial cables, or fiber cables. Each physical line may provide a capacity or may fail; this
implies that a transmission line has several states where state c means that c physical lines are operational. Hence, the
capacity of each branch has several values. In other words, the computer network should be multistate due to the
various capacities of each transmission line. Such a network is a typical stochastic flow network (Aven, 1985; Cheng,
1998; Jane et al., 1993; Levitin, 2001; Lin et al., 1995; Lin, 2001, 2003, 2007a, 2007b, 2009a-c; Yeh, 1998, 2004, 2005)
and is called a stochastic computer network (SCN) herein.
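As a small numerical illustration of this multistate view (with invented numbers, not the paper's network): if each physical line in a bundle works independently with some probability, the transmission line's capacity is binomially distributed, and the chance of offering at least c working lines follows directly:

```python
# A minimal sketch: a transmission line bundling m physical lines, each up
# independently with probability q, is in state c when c lines work, so its
# capacity is Binomial(m, q).
from math import comb

def prob_capacity_at_least(m: int, q: float, c: int) -> float:
    return sum(comb(m, k) * q**k * (1 - q)**(m - k) for k in range(c, m + 1))

# a line made of 5 twisted pairs, each up with probability 0.9
print(f"P(capacity >= 4) = {prob_capacity_at_least(5, 0.9, 4):.4f}")
```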
Another important issue is transmission time for the computer network. From the point of view of quality
management and decision making, it is an important task to reduce the transmission time through a computer
network. When data are transmitted through the computer network, it should select a shortest delayed path to


International Journal of Industrial Engineering, 20(5-6), 329-338, 2013.

OBSERVED BENEFITS FROM PRODUCT CONFIGURATION SYSTEMS


Lars Hvam1, Anders Haug2, Niels Henrik Mortensen3, Christian Thuesen4
Department of Management Engineering1
Operations Management
Technical University of Denmark
Building 426, DK-2800 Kgs. Lyngby
Email: [email protected]
Department of Entrepreneurship and Relationship Management2
University of Southern Denmark
Engstien 1, DK-6000 Kolding
Email: [email protected]
Department of Mechanical Engineering3
Product Architecture Group
Technical University of Denmark
Building 426, DK-2800 Kgs. Lyngby
Email: [email protected]
Department of Management Engineering4
Production and Service Management
Technical University of Denmark
Building 426, DK-2800 Kgs. Lyngby
Email: [email protected]
This article presents a study of the benefits obtained from applying product configuration systems, based on a case
study in four industry companies. The impacts are described according to the main objectives in the literature for
implementing product configuration systems: lead time in the specification processes, on-time delivery of the
specifications, resource consumption for making specifications, quality of specifications, optimization of products and
services, and other observations.
The purpose of the study is partly to identify specific impacts observed from implementing product configuration
systems in industry companies and partly to assess whether the suggested objectives are appropriate for describing the
impact of product configuration systems and for identifying other possible objectives. The empirical study of the
companies also gives an indication of more overall performance indicators being affected by the use of product
configuration systems, e.g., increased sales, a decrease in the number of SKUs, improved ability to introduce new
products, and cost reductions.
Significance: Product configuration systems are increasingly used in industrial companies as a means for efficient
design of customer-tailored products. There are examples of companies that have gained significant benefits from
applying product configuration systems. However, companies considering the use of product configuration systems face
a challenge in assessing the potential benefits of applying them. This article provides a list of potential benefits based on
a case study of four industry companies.
Keywords: Mass Customization, product configuration, engineering processes, performance measurement, complexity management.

1. INTRODUCTION
Customers worldwide require personalised products. One way of obtaining this is to customise the products by use
of product configuration systems (Tseng and Piller, 2003; Forza and Salvador, 2007; Hvam et al., 2008). Product
configuration systems are increasingly used as a means for efficient design of customer-tailored products, and this
has led to significant benefits for industry companies. However, the specific benefits gained from product configuration
are difficult to measure. This article discusses how to assess the benefits from the use of product configuration based on
a suggested set of measurements and an empirical study of four industry companies.
Several companies have acknowledged the opportunity to apply product configuration systems to support the
activities of the product configuration process (see for example www.configurator-database.com). Companies like
Dell Computer and American Power Conversion (APC) rely heavily on the performance of their configuration sys-


International Journal of Industrial Engineering, 20(5-6), 339-371, 2013.

Decision Support for the Global Logistics Positioning


Chun-Wei R. Lin a, Sheng-Jie J. Hsu b,c,*
a Department of Industrial Engineering and Management, National Yunlin University of Science and Technology,
123, University Road Section 3, Douliou, Yunlin, Taiwan, 640, R.O.C.
E-mail: [email protected]
b Graduate School of Management, National Yunlin University of Science and Technology, Douliou, Yunlin,
Taiwan, 640, R.O.C.
E-mail: [email protected]
c Department of Information Management, Transworld Institute of Technology, Douliou, Yunlin, Taiwan, 640,
R.O.C.
E-mail: [email protected]

As an enterprise globalizes its operations, its global logistics system must be coordinated accordingly. It is clear that
global logistics (GL) is more complicated than local logistics; however, a generic structure for GL positioning to
support decision-making is lacking.
Therefore, this article proposes a Global Logistics Positioning (GLP) framework by means of a literature review
and practical experience, and builds the variables in this framework into a Decision Support System (DSS) that is
useful for GLP decision-making. This DSS helps the decision-maker determine the positions of the operation
headquarters, research and development bases, production bases, and distribution bases.
For efficiency, this article proposes a four-phase algorithm that integrates goal programming, a revised Analytic
Hierarchy Process method, Euclidean distance, and a fitness concept to execute the GLP computation.
Finally, a virtual example, the ABC Company, is used to verify the theoretical feasibility of the GLP.
Keywords: Global Logistics Management, Global Logistics Positioning, Framework, Decision Support System.
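Since one phase of the algorithm rests on a revised Analytic Hierarchy Process, the sketch below shows the classic AHP priority computation as a generic reference point. The criteria and the pairwise judgments are invented, and the authors' revision of AHP is not reproduced here.

```python
# A minimal sketch of classic AHP weighting (not the authors' revised method):
# priorities are the normalized principal eigenvector of the pairwise matrix.
import numpy as np

# Illustrative pairwise comparisons of three location criteria
# (cost, market proximity, infrastructure) on Saaty's 1-9 scale.
M = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

vals, vecs = np.linalg.eig(M)
k = np.argmax(vals.real)                     # principal eigenvalue
w = np.abs(vecs[:, k].real)
w /= w.sum()                                 # priority weights
CI = (vals[k].real - len(M)) / (len(M) - 1)  # consistency index
print(f"weights = {np.round(w, 3)}, CI = {CI:.3f}")
```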

1. INTRODUCTION
Many organizations have a significant and growing presence in resource and/or demand markets outside their
country of origin. Current business conditions blur the distinctions between domestic and international logistics.
Successful enterprises have realized that to survive and prosper they must go beyond the strategies, policies, and
programs of the past and adopt a global view of business, customers, and competition. [Stock and Lambert, 2001]
Therefore, an enterprise extends its operations globally and becomes a Multi-National Enterprise (MNE), and its
logistics system must match the enterprise strategy by becoming a global logistics system. Dornier et al. [1998] argued
that geographical boundaries are losing their importance. Companies view their network of worldwide facilities as a
single entity. Implementing worldwide sourcing, establishing production sites on each continent, and selling in
multiple markets all imply the existence of an operations and logistics approach designed with more than national
considerations in mind. Bowersox et al. [1999] argued that the business model of successful global operations is


International Journal of Industrial Engineering, 20(5-6), 372-386, 2013.


A CLASSIC AND EFFECTIVE APPROACH TO INVENTORY MANAGEMENT


J. A. López, A. Mendoza, and J. Masini
Department of Industrial Engineering
Universidad Panamericana
Guadalajara, Jalisco 45010, MEXICO
Corresponding author's email: {Abraham Mendoza, [email protected]}
Many organizations base their demand forecasts and replenishment policies only on judgmental or qualitative approaches.
This paper presents an application where quantitative demand forecasting methods and classic inventory models are used
to achieve a significant inventory cost reduction and improved customer service levels at a company located in
Guadalajara, Mexico. The company currently uses a naive method to forecast demand. By proposing the use of Winters'
method, the forecast accuracy was improved by 41.12%. Additionally, as a result of an ABC analysis of the product
under study, a particular component was chosen (it accounts for 70.24% of total sales and 60.06% of total volume) and
two inventory policies were studied for that component. The first inventory policy considers the traditional EOQ model,
whereas the second one uses a continuous-review (Q,R) policy. The best policy achieves a 43.69% total cost reduction
relative to the current inventory policy. This policy translates into several operational benefits for the company, e.g.,
improved customer demand planning, simplified production and procurement planning, a lower level of uncertainty and
a better service level.
Significance: While many organizations base their demand forecast and replenishment decisions only on judgmental or
qualitative approaches, this paper presents an application where forecasting methods and classic inventory models are used
to achieve a significant inventory cost reduction and improved customer service levels at a company located in Guadalajara,
Mexico.
Keywords: Inventory Management, Forecasting Methods.

1. INTRODUCTION
On one hand, small and medium companies seem to be characterized by the poor efforts they make in optimizing their
inventory management systems. They are mainly concerned with satisfying customer demand by any means and are
barely aware of the benefits of using scientific models for calculating optimal order quantities and reorder points while
minimizing inventory costs (e.g., holding and setup costs) and increasing customer service levels. On the other hand, large
companies have developed stricter policies for controlling inventory. However, most of these efforts are not supported by
scientific models either.
Many authors have proposed inventory policies based on mathematical models that are easy to implement in practical
situations. For example, Harris introduced the well-known Economic Order Quantity (EOQ) model to calculate optimal
inventory policies for situations in which demand is relatively constant (Harris, 1990). This model has been extended to
include transportation freight rates, production rates, quantity discounts, quality constraints, stochastic environments and
multi-echelon systems. The reader is referred to Silver, Pyke and Peterson (1998), Nahmias (2001), Chopra and Meindl
(2007), Mendoza (2007), and Hillier and Lieberman (2010) for more detailed texts on these extensions. Moreover, the EOQ
has been successfully applied by some companies around the world. For instance, Presto Tools, at Sheffield, UK, obtained
a 54% annual reduction in their inventory levels (Liu and Ridgway, 1995).
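For reference, the classic EOQ calculation that such applications build on is a one-liner; the figures below are purely illustrative:

```python
# A minimal sketch of the EOQ trade-off, Q* = sqrt(2*D*K / h), plus the
# resulting annual ordering-plus-holding cost. Numbers are invented.
import math

D, K, h = 12_000, 150.0, 2.5   # annual demand, order cost, holding cost/unit/yr
Q_star = math.sqrt(2 * D * K / h)
annual_cost = D / Q_star * K + Q_star / 2 * h
print(f"EOQ = {Q_star:.0f} units, ordering+holding cost = {annual_cost:.2f}")
```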
Despite the benefits shown in some companies, in these days of advanced information technology, many companies are
still not taking advantage of fundamental inventory models, as stated by Piasecki (2001). For example, companies do not
rely on the effectiveness of the EOQ model because of its apparent simplicity. Part of the problem is due to the lack of
thorough knowledge of the model's assumptions and benefits. Along these lines, Piasecki (2001) stated: "many ERP
packages have built-in calculations for EOQ that work automatically, so the users do not know how it is calculated and
therefore do not understand the data inputs and system set-up that control the output."
The success of any inventory policy depends on effective customer demand planning (CDP), which begins with
accurate forecasts (Krajewski and Ritzman, 2005). At least in small companies, the application of quantitative methods
for forecasting, as well as the implementation of replenishment policies through scientific models, is not well known.
Many organizations base their demand forecasts and replenishment policies only on judgmental or qualitative approaches.
This paper presents an application where quantitative demand forecasting methods and classic inventory models are used
to achieve a significant reduction in inventory costs. We offer an analysis of the current inventory replenishment policies of
a company located in Guadalajara, Mexico, and propose significant cost improvements. Because of confidentiality issues,

International Journal of Industrial Engineering, 20(5-6), 401-411, 2013.

PRACTICAL DECOMPOSITION METHOD FOR T2 HOTELLING CHART
Manuel R. Piña-Monarrez
Department of Industrial and Manufacturing Engineering
Institute of Engineering and Technology
Universidad Autónoma de Ciudad Juárez
Cd. Juárez, Chih., México, C.P. 32310
Ph: (656) 688-4843, Fax: (656) 688-4813
Corresponding author's e-mail: [email protected]
In multivariate process control, the T2 Hotelling chart has been shown to be useful for efficiently detecting a change in a
system, but it is not capable of diagnosing the root causes of the change. This is because the MTY decomposition method
used presents p! different possible decompositions of the T2 statistic and p·2^(p−1) terms to be estimated across the
possible decompositions; so when p is large, the estimation of the terms and their diagnosis becomes too complex. In this
article, by taking the inverse of the phase I covariance matrix as the standard one, a practical decomposition method,
based on the relations of each pair of variables, is proposed. In the proposed method only p×p different terms are
estimated, and the decomposition gives each variable's contribution due to its variance and due to its covariance with
each of the other (p−1) variables. Since the proposed method is a transformation of the T2 polynomial, the estimated T2
and its corresponding decomposition always hold. A detailed guide for the application of the T2 chart and numerical
applications to sets of three and twelve variables are given.
Significance: Since the proposed method lets practitioners determine which variable(s) generate the out-of-control
signal, and because it quantifies the proportion of the estimated T2 statistic that is due to the variance and due to the
correlation, its application to a multivariate control process is useful.
Keywords: Decomposition method, T2 Hotelling chart, Mahalanobis distance, Multivariate control process.

1. INTRODUCTION
Nowadays, the manufacturing processes are more complex and products are multifunctional, so they have more than
one quality characteristic to be controlled. For these processes, one of the most useful multivariate control charts is the
T2 Hotelling chart, which is based on the multivariate normal distribution theory (for details see Alvin 2002). When a
multivariate control chart signals, it is necessary to identify the variable(s) which causes the out of control signal. With
this particular purpose the Minitab 16 (MR) software, presents a decomposition method based on the MTY method
proposed by (Mason et. al 1995, 1997,1999), (for details go in Minitab16 to Stat>Control Charts>Multivariate
Charts>TsquaredGeneralized variance>Help>see also>methods and formulas >Decomposed T2 statistic).
Unfortunately since the Mahalanobis distance (MD) used in the T2 Hotelling chart is estimated as a nested process (see
section 3.1 and Pia 2011), its individual decomposition, and the estimated MD does not hold (see section 3 for details).
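The pairwise view underlying the proposed method can be illustrated generically: writing T2 = d′S⁻¹d as a p×p grid of terms lets each variable's contribution be split into a variance part (its diagonal term) and a covariance part (its row's off-diagonal terms). The sketch below does this for invented numbers; it illustrates the idea only, not the paper's exact procedure.

```python
# A minimal sketch: decompose T^2 = d' A d, with A = inv(S), into a p x p grid
# of terms, then report each variable's variance and covariance contributions.
import numpy as np

S = np.array([[1.0, 0.6, 0.2],
              [0.6, 1.0, 0.4],
              [0.2, 0.4, 1.0]])       # phase-I covariance (assumed known)
d = np.array([1.2, -0.5, 2.0])        # observation minus in-control mean
A = np.linalg.inv(S)

terms = np.outer(d, d) * A            # term[i, j] = d_i * A_ij * d_j
T2 = terms.sum()                      # equals d @ A @ d
for i in range(len(d)):
    var_part = terms[i, i]
    cov_part = terms[i, :].sum() - terms[i, i]
    print(f"x{i+1}: variance {var_part:+.3f}, covariance {cov_part:+.3f}")
print(f"T2 = {T2:.3f}")
```

Because the grid of terms always sums exactly to T2, the decomposition holds by construction, which is the property the abstract emphasizes.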
Other decomposition methods have been proposed. Among them, the literature includes the methods proposed by Roy
(1958), Murphy (1987), Doganaksoy et al. (1991), Hawkins (1991, 1993), Timm (1996) and Runger et al. (1996).
Recently, Alvarez et al. (2007) proposed a method called the Original Space Strategy (OSS); unfortunately, as Alvarez
mentions (pp. 192), there exist several ways in the approach to calculate the R value used, so the decision to choose the
R value can be very subjective and, consequently, a certain amount of the available information can be lost in the
projection (for details see Alvarez et al. (2007)). Li et al. (2008), with the objective of reducing the computational
complexity of the MTY method, proposed the causation T2 decomposition method, which integrates the causal
relationships revealed by a Bayesian network with the traditional MTY approach; through theoretical analysis and
simulation studies, they demonstrated that their method substantially reduces the computational complexity and
enhances diagnosability compared with the traditional MTY approach. Mason et al. (2008) presented an interesting
analysis applying the MTY method to phase I data in order to use it as the standard in phase II. Alfaro et al. (2009)
proposed a boosting approach that trains a classification method with phase I data and then uses the trained method in
phase II to determine the variable causing the out-of-control signal. In their study, they used data sets of 2, 3, and 4
variables, and found their method inconsistent for the 2-variable case, while for the 3-variable case the error was below
5% (for details see Alfaro et al. (2009)). Cedeño et al. (2012), because the MTY approach has p! different but
non-independent partitions of the overall T2, proposed a decomposition method based only on the first two unconditional
elements of the MTY method. Nevertheless, when the

International Journal of Industrial Engineering, 20(5-6), 412-418, 2013.

COST ASSESSMENT FOR INTEGRATED LOGISTIC SUPPORT ACTIVITIES
Maria Elena Nenni
Department of Industrial Engineering, University of Naples Federico II, Italy
An Integrated Logistic Support (ILS) service has the objective of improving system availability at an optimum life
cycle cost. It is usually offered to the customer by the system constructor, who becomes the Contractor Logistic
Support (CLS). The aim of this paper is to develop a clear and substantial cost assessment method to support the
CLS's budgetary decisions. The assessment concerns the cost element structure for ILS activities and includes an
economic analysis to provide details among competing alternatives. A simple example derived from an industrial
application is also provided in order to illustrate the idea.
Significance: Many documents and standards about ILS have been produced by the military, but the focus is always on
performance or life cycle cost; the CLS perspective is left entirely unattended. Even models from the scientific
literature are not useful for supporting CLS decisions, because they seem too far from ILS or too general to be
implemented effectively. The lack of specific models has become a general problem because, although the ILS service
was originally developed for military purposes, it is now applied in commercial product support and customer
service organizations as well. Therefore, many CLSs require a deeper and more wide-ranging investigation of the
topic. The method developed in this paper approaches the problem from the perspective of the CLS, and it is
specifically tailored to the main issues of an ILS service.
Keywords: Logistic Support, maintenance, cost model, lifecycle management, after-sale contract.

1. INTRODUCTION
The Integrated Logistic Support (ILS) aims at ensuring the best system capability at the lowest possible life cycle cost (DOD Directive, 1970). To this end, the system owner builds a partnership with the Contractor Logistic Support (CLS), who implements the ILS process in a continuous way throughout the life cycle, which is frequently very long (30 or 35 years).
The CLS usually has specific technical skills on the system, but needs to improve decision-making about costs from the early stages (Mortensen et al., 2008). The literature is not really exhaustive: many documents and standards have been produced about ILS by the military (MIL-STD-1388/1A, 1983; Def-Stan 00-60, 2002; Army Regulation 700-127, 2007), and they do not address the CLS perspective.
Basically, the CLS requires appropriate methods to optimize overall costs in the operation phase, which is the longest and the most costly (Asiedu and Gu, 1998; Choi, 2009), but approaches from the scientific literature are often inadequate. Many authors have devoted themselves to developing optimization models: Kaufman (1970) provided a first original contribution on the structure of life cycle costs in general; other authors (Lapa et al., 2006; Chen and Trivedi, 2009; Woohyun and Suneung, 2009) focused more specifically on the costs of the operation phase with the aim of optimizing preventive maintenance policies. Hatch and Badinelli (1999) instead studied how to gather two conflicting components, Life Cycle Cost (LCC) and system availability (A), into a single objective function. All these contributions address the issue only partially, and they fail to consider the problem from the perspective of the CLS actor. The closest paper is by the same author (Nenni, 2013), but it is very recent and takes only a first step on the topic, highlighting the discrepancies between the Life Cycle Management approach and cost management from the perspective of the CLS, and proposing a basic cost element structure.
The aim of this paper is to develop a cost assessment method based on a clear and substantial cost element structure. Moreover, the author proposes a simulation of a real case to point out the importance of determining sensitivity to key inputs in order to find the best-value solution among competing alternatives.

2. THE COST MODEL
The CLS needs cost estimates to develop annual budget requests, to evaluate resource requirements at key decision points, and to make investment choices. A specific cost model, closely fitting the ILS issues, is the basis for the estimation. The author proposes a cost model in which most of the elements are derived from the DOD Guide (2005), but the link between cost and performance is original, as are some key decision parameters (Nenni, 2013).
Before going through the cost model, it is necessary to describe some assumptions. The first one concerns the area in which it runs: in this paper, only costs for activities in maintenance planning and supply support have been
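As background for the cost-performance link discussed above, a standard operational availability relation and a discounted cost summation are sketched below; these are textbook expressions with generic cost element names, not the specific model proposed in this paper.

```latex
A_o = \frac{\mathrm{MTBF}}{\mathrm{MTBF} + \mathrm{MTTR} + \mathrm{MLDT}} ,
\qquad
\mathrm{LCC} = C_{\mathrm{acq}} + \sum_{t=1}^{T} \frac{C_{\mathrm{maint}}(t) + C_{\mathrm{supply}}(t)}{(1+r)^{t}}
```

Here MTBF is the mean time between failures, MTTR the mean time to repair, MLDT the mean logistic delay time, and r a discount rate over a support horizon of T years; ILS decisions trade maintenance and supply spending against the availability A_o they buy.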
International Journal of Industrial Engineering, 20(5-6), 419-428, 2013.

AXIOMATIC DESIGN AS SUPPORT FOR DECISION-MAKING IN A DESIGN FOR MAINTENANCE IMPROVEMENT MODEL: A CASE STUDY
Jorge Pedrozo, Alfonso Aldape, Jaime Sánchez and Manuel Rodríguez
Graduate Studies & Research Division
Juarez Institute of Technology
Ave. Tecnológico 1340, Cd. Juárez, Chih. 32500 México
Corresponding author's e-mail: Jorge Pedrozo, [email protected]
Decision-making is one of the most critical issues in design models. The design of new maintenance methodologies that improve the levels of reliability, maintainability, and availability of equipment has aroused great interest in recent years. Axiomatic Design (AD) is a design theory that provides a framework for decision-making in the design process. The objective of this paper is to present the validation of a new maintenance improvement model as an alternative model for improving the maintenance process.
Significance: The usage of the information axiom as a decision-making tool is examined; this paper presents an example describing how AD was applied to select the best maintenance model in order to meet the maintenance functional requirements.
Keywords: Decision Making, Axiomatic Design, Information Axiom, Maintenance, Reliability, Availability

1. INTRODUCTION
Axiomatic Design (AD) theory provides a valuable framework for guiding designers through the decision process to achieve positive results in terms of the final design object (Nordlund and Suh, 1996). Several companies have successfully used the axiomatic design methodology to develop new products, processes, and even approaches. AD was born about 20 years ago and was conceived as a systematic model for engineering education and practice (Suh, 1990). It guides designers through the complex process of design; at the same time, it is catalogued as one of the most difficult tools to master (Eagan et al., 2001).
Maintenance has evolved historically (Moubray, 1997); from the maintenance point of view, we can differentiate best-practice approaches, each applied in a certain period. For a better understanding of the evolution and development of maintenance from its beginnings to the present day, Moubray distinguishes three generations (see Fig. 1).
First generation: It covers the period until the end of the Second World War. At that time industries had few machines; they were very simple, easy to repair, and normally oversized. Production volumes were low, which is why downtimes were not important. The prevention of equipment failures was not a high priority for management; only reactive or corrective maintenance was applied, and the maintenance policy was run-to-failure.
Second generation: It was born as a result of the war. More complex machinery was introduced, and unproductive time became a preoccupation of management, since new demands made forgone gains visible. From this arose the idea that equipment failures could and must be prevented, an idea that would take the name of preventive maintenance. In addition, new maintenance control and planning systems started to be implemented, in other words, overhauls at predetermined intervals. This change of strategy made it possible not only to plan maintenance activities but also to start controlling maintenance performance, costs, and production asset availability.
Third generation: It begins in the mid-Seventies, when changes accelerated as a result of technological advances and new research. Mechanization and automation in industry increased, production volumes grew, downtime gained importance owing to the cost of production losses, machinery reached greater complexity and our dependency on it increased, quality products and services were demanded with attention to safety and environmental aspects, and the development of preventive maintenance was consolidated.
In recent years we have seen very important growth in new maintenance concepts and methodologies applied to maintenance management (Durán, 2000). Up to the end of the 1990s, the developments achieved in the third generation of maintenance included:
- Decision-aid tools and new maintenance techniques.
- Design teams giving high relevance to reliability and maintainability.
- An important change in organizational thinking towards participation, teamwork, and flexibility.
International Journal of Industrial Engineering, 20(5-6), 429-443, 2013.


A STUDY OF KEY FACTORS IN THE INTRODUCTION OF RFID INTO SUPPLY CHAINS THROUGH THE ADAPTIVE STRUCTURATION THEORY
Mei Ying Wu,
Department of Information Management,
Chung-Hua University, 707, Sec.2, WuFu Road, Hsinchu 300.
Chun Wei Ku,
Department of Information Management, Chung-Hua University
Taiwan, Province of China
Since 2003, Radio Frequency Identification (RFID) technology has gained importance and has been widely applied. Numerous statistics indicate high potential for RFID development in the near future. This study focuses on the issues arising from RFID technology and explores the impact of its introduction into supply chains. Based on the framework of the Adaptive Structuration Theory (AST), a questionnaire is designed for collecting research data, and Structural Equation Modeling (SEM) is adopted in order to identify the relationships among research constructs.
The research findings indicate that technological features, promoters, and group cooperation systems of RFID have significant effects on the supply chain operation structure and indirectly influence factors of RFID introduction. It is evident from this study's results that certain factors of RFID and a good supply chain operation structure have positive effects on the introduction of RFID into supply chains.
Keywords: Radio Frequency Identification System, Adaptive Structuration Theory, Structural Equation Modeling,
Supply Chain Operation Structure, Introduction of RFID.

1. INTRODUCTION
The objective of this study is to investigate the effects of the introduction of RFID into supply chains and the
interactions between upstream and downstream firms. This objective is similar to that of the Adaptive Structuration
Theory (AST) proposed by DeSanctis and Poole (1994). AST was developed in order to examine the interactions
among groups and organizations using information technology (IT). Thus, based on the AST framework, Structural Equation Modeling (SEM) is adopted in order to analyze the relationships among research constructs.
This study focuses on firms in a supply chain from upstream to downstream, whose businesses encompass
manufacturing, logistics, warehousing, retailing, and selling. The firms are selected from the list of Top 500
Manufacturers released by Business Weekly; members of the Taiwan Association of Logistics Management; and the
database of publicly-listed manufacturers, logistics firms, and retailers owned by the Department of Commerce in the
Ministry of Economic Affairs. The results are expected to serve as a reference for enterprises that are planning or
preparing for RFID introduction.
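To make the SEM step concrete, here is a minimal sketch using the semopy Python library; the construct and indicator names, and the survey file name, are hypothetical placeholders rather than the study's actual questionnaire items.

```python
# A minimal SEM sketch (illustrative only; construct/indicator names are
# hypothetical placeholders, not this study's questionnaire items).
import pandas as pd
from semopy import Model

desc = """
# measurement model: latent constructs =~ observed questionnaire items
TechFeatures =~ tf1 + tf2 + tf3
OperationStructure =~ os1 + os2 + os3
Introduction =~ in1 + in2 + in3

# structural model: hypothesized paths among constructs
OperationStructure ~ TechFeatures
Introduction ~ OperationStructure + TechFeatures
"""

data = pd.read_csv("survey_responses.csv")  # hypothetical survey data file
model = Model(desc)
model.fit(data)         # estimate loadings and path coefficients
print(model.inspect())  # estimates with standard errors and p-values
```

Significant paths from the technology constructs to the operation structure, and from the structure to introduction, would correspond to the direct and indirect effects reported above.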

2. LITERATURE REVIEW
2.1 Introduction to RFID
RFID was created in an attempt to replace the widely used barcode technology. Thus far, it has garnered much attention
and has been extensively applied. Capable of wirelessly reading a large amount of data of various types, RFID can be
used to create an information system that can easily identify an object and extract its attributes. Table 1 presents the
features of RFID technology that have been mentioned in previous studies.

International Journal of Industrial Engineering, 20(5-6), 444-452, 2013.

ORDER PROCESS FLOW OF MASS CUSTOMIZATION BASED ON SIMILARITY EVALUATION
Xuanguo XU, Zhongmei LIANG
Department of Economics & Management
Jiangsu University of Science and Technology
Mengxi street No.2
Zhenjiang, China 212003
Email: Xuanguo XU, [email protected]
The main result presented in this paper is an order process flow for mass customization based on similarity evaluation. The order process flow is put forward with a view to meeting customers' specific needs as well as reducing difficulties in subsequent production. As the basis of this order process flow, we assume that all orders in the accepted order pool have been confirmed as profitable by the market. A similarity evaluation method is put forward which includes the following steps: determine whether an order is acceptable or not; put profitable orders into the accepted pool; for those not profitable, negotiate with the customer to determine which pool they belong to; analyze order similarity with a system clustering method; arrange batch production for orders with high similarity; arrange completely customized production for specific orders; and arrange order insertion for orders with little similarity that can nevertheless be inserted into the scheduled plan. At the end of this paper, a case study of a Chinese air-conditioning company is presented to illustrate the application of the process flow and the similarity evaluation method.
Significance: Order acceptance has mostly been studied in mass production. This paper discusses how to process orders in mass customization after they have been accepted, so as to make more profit.
Keywords: Order process, Mass customization, System clustering, Similarity evaluation

1. INTRODUCTION
Since the 1990s, customer requirements have become increasingly diversified and individualized. Manufacturing enterprises are gradually shifting their production mode from traditional mass production to customization in order to survive severe competition. In recent years especially, owing to demand diversification, orders have taken on increasingly personalized characteristics, and customized production has become very popular in manufacturing. Production orders can be customized as needed to provide customers with personalized products and services. On the other hand, complete customization entails excessive expense, long delivery times, low productivity, and low capacity utilization (Shao and Ji, 2008). In this context, mass customization (MC) came into being.
As customers' requirements differ from each other, manufacturing enterprises must analyze those requirements and adopt specific procedures according to the customization requests and the degree of customization. Order acceptance is a critical decision-making problem at the interface between customer relationship management and production planning in order-driven manufacturing systems under MC. To solve this problem, the key issue is order selection to obtain the maximum profit through capacity management. Over the past decade, the strategic importance of order acceptance has been widely recognized in practice as well as in academic research on mass production (MP) and make-to-order (MTO) systems. Some papers have discussed order acceptance decisions when capacity is limited and there are penalties for late delivery (Slotnick et al., 2007). Others use different algorithms to solve order acceptance problems (Rom et al., 2008). Simultaneous order acceptance and scheduling decisions have been examined in a single-machine environment (Oguz et al., 2010). All these papers studied order properties such as release dates, due dates, processing times, setup times, and revenues, and offer a trade-off between earnings and cost. This strategy is more suitable for accepting orders separately in inventory-driven or order-driven production (make to stock and make to order), where only profits and capacity need to be considered. In make-to-stock (MTS) mode, therefore, the problem is to use earnings and profits as the condition for accepting or rejecting orders, since subsequent production follows high-volume production to complete the final product as quickly as possible.
Unlike the MTS mode, which holds finished products in stock as a buffer against demand variability, MC production systems must hold production capacity and work-in-process (WIP) inventories and accept only orders of the most profitable type. Generally speaking, customers' requests can be divided into three categories from the enterprise's point of view: standard parts, simple customization, and special customization. A standard part is a commonly used accessory in the customized product. Simple customization can be further divided into customization based on parameters and customization based on configurations. If customers' customization needs go beyond the scope of simple customization, such as dramatically changing the product's shape or adding some functions which are not
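To make the system clustering step concrete, here is a minimal hierarchical-clustering sketch in Python; the order attributes and the distance threshold are hypothetical placeholders, not values from the air-conditioning case study.

```python
# Sketch of the "system clustering" step: group accepted orders by similarity.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# each row is an accepted order described by normalized attributes,
# e.g. [customization degree, due-date slack, lot size] (illustrative)
orders = np.array([
    [0.10, 0.80, 0.90],
    [0.15, 0.75, 0.85],
    [0.90, 0.20, 0.10],
    [0.85, 0.25, 0.20],
    [0.50, 0.50, 0.50],
])

Z = linkage(orders, method="ward")                  # agglomerative clustering
labels = fcluster(Z, t=0.5, criterion="distance")   # cut dendrogram at distance 0.5

# orders sharing a label are similar enough to be batched together;
# singleton clusters go to customized production or order insertion
for cluster_id in np.unique(labels):
    batch = np.where(labels == cluster_id)[0]
    print(f"cluster {cluster_id}: orders {batch.tolist()}")
```

With these made-up attributes the cut yields two batchable pairs and one singleton, mirroring the batch/customized/insertion split described in the abstract.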
International Journal of Industrial Engineering, 20(7-8), 453-467, 2013

MEAN SHIFTS DIAGNOSIS AND IDENTIFICATION IN BIVARIATE PROCESS USING LS-SVM BASED PATTERN RECOGNITION MODEL

Cheng Zhi-Qiang 1, Ma Yi-Zhong 1, Bu Jing 2, Song Hua-Ming 1
1 Department of Management Science and Engineering, Nanjing University of Science and Technology, Nanjing, Jiangsu, 210094, P.R. China. [email protected], [email protected], [email protected]
2 Automation Institute, Nanjing University of Science and Technology, Nanjing, Jiangsu, 210094, P.R. China. [email protected]

This study develops a least squares support vector machine (LS-SVM) based model for bivariate processes to diagnose abnormal patterns of the process mean vector and to help identify the abnormal variable(s) when Shewhart-type multivariate control charts based on Hotelling's T2 are used. On the basis of studying and defining the normal/abnormal patterns of bivariate process mean shifts, an LS-SVM pattern recognizer is constructed in this model to identify the abnormal variable(s). The model in this study can be a strong supplement to Shewhart-type multivariate control charts. Furthermore, the LS-SVM techniques introduced in this research can meet the requirements of process abnormality diagnosis and cause identification under small sample sizes. An industrial case application of the proposed model is provided. The performance of the proposed model was evaluated by computing the classification accuracy of the LS-SVM pattern recognizer. Results from simulation case studies indicate that the proposed model is a successful method for identifying the abnormal variable(s) of process mean shifts, and demonstrate that it provides excellent abnormal pattern recognition performance. Although identifying the abnormal variable(s) of bivariate process mean shifts is a particular application, the model and methodology here can potentially be applied to multivariate SPC in general.
Key words: multivariate statistical process control; least squares support vector machines; pattern recognition;
quality diagnosis; bivariate process

1. INTRODUCTION
In many industries, and complex product manufacturing in particular, statistical process control (SPC) [1] is a widely used quality diagnosis tool, applied to monitor process abnormalities and minimize process variations. According to Shewhart's SPC theory, there are two kinds of process variation: common cause variations and special cause variations. Common cause variations are considered to be induced by the inherent nature of the normal process. Special cause variations are defined as abnormal variations of the process, induced by assignable causes. Traditional univariate SPC control charts are the most widely used tools for revealing abnormal variations in a monitored process. Abnormal variations should be identified and signaled as soon as possible so that quality practitioners can eliminate them in time and bring the abnormal process back to the normal state.
In many cases, the manufacturing process of complex products may have two or more correlated quality characteristics, and a suitable method is needed to monitor and identify all these characteristics simultaneously. For the purpose of monitoring the multivariate process, a natural solution is to maintain a univariate chart for each of the process characteristics separately. However, this method can result in higher false alarm rates when the process characteristics are highly correlated [2] (Loredo, 2002). This situation has brought about the extensive research performed in the field of multivariate quality control since the 1940s, when Hotelling introduced that the
Corresponding author of this paper. E-mail address: [email protected], [email protected]

International Journal of Industrial Engineering, 20(7-8), 468-486, 2013

PARALLEL KANBAN-CONWIP SYSTEM FOR BATCH PRODUCTION IN ELECTRONICS ASSEMBLY
Mei Yong Chong, Joshua Prakash, Suat Ling Ng, Razlina Ramli, Jeng Feng Chin
School of Mechanical Engineering, Universiti Sains Malaysia (USM), Engineering Campus, Malaysia
This paper describes a novel pull system based on value stream mapping (VSM) in an electronics assembly plant. Production in an assembly plant can be characterized as multi-stage, high-mix, by batch, unbalanced, and asynchronous. The novelty of the system lies in the two kanban systems working simultaneously: a standard lot size kanban system for high-demand products (high runners) and a variable lot size constant work-in-process (ConWIP) system for low-demand products (low runners). The pull system is verified through computer simulation and discussions with production personnel. Several benefits are achieved, including level scheduling and a significant reduction in the work-in-process (WIP) level. Production flows are regulated through a pacemaker process, which involves varying the standard lot size and the number of ConWIP kanban. The available interval time could be utilized for other non-kanban-driven parts and routine maintenance (5S). Only a moderate decline in production output relative to the target is seen, owing to the increase in overall set-up time that accompanies small lot size production.
Keywords: Pull system, kanban system, ConWIP system, batch production

1. INTRODUCTION
The philosophy of lean manufacturing originated from the Toyota Production System (TPS) and was envisioned by Taiichi
Ohno and Eiji Toyoda (Liker, 2004). This practice considers the deployment of resources only for activities that add value
from the perspective of end customers. Other activities that depart from this intention are viewed as wasteful and should be a
target for total elimination. Taiichi Ohno identified seven forms of waste: overproduction, queue, transportation, inventory,
motion, over-processing, and defective product (Heizer and Render, 2008). Ultimately, the production must be a continuous
single flow throughout the shop floor, driven by customer demand.
However, this ultimate objective can take years to realize. Moreover, work-in-process (WIP), even if considered waste, is still needed. The main function of WIP is to decouple parts among machines running at different capacities, set-up times, and failure rates. An excessive amount of WIP prolongs lead time, whereas an insufficient amount results in occasional starving and blocking of machines during production (Hopp and Spearman, 2000; Silver et al., 1998). Thus, the pertinent question is how to maintain the minimum amount of WIP in the manufacturing system. One way is to move WIP only when needed, rather than pushing it onto the next machine. This is the essence of the pull system. Specifically, a preceding machine produces parts only after receiving a request from its succeeding machine for the immediate replacement of items removed from stock. Therefore, the flow of information is in the opposite direction to the material flow (Bonney et al., 1999). Lean practitioners often use a kanban (card) to signal the production authorization for the next container of material (Gaury et al., 2000).
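For background, the classic card-count rule of thumb for sizing such a kanban loop is the following; this is standard lean/TPS textbook material (see, e.g., Hopp and Spearman, 2000), not a formula from this paper.

```latex
N = \left\lceil \frac{D \, L \, (1 + \alpha)}{C} \right\rceil
```

where N is the number of kanban cards, D the demand rate, L the replenishment lead time, alpha a safety factor, and C the container (lot) size. A ConWIP loop is the limiting case in which a single card count caps the total WIP of the whole line rather than of each stage.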
A review of the literature (Berkley, 1992; Lage Junior and Godinho Filho, 2010) reveals at least 20-30 existing kanban systems, all of which differ in terms of the medium used, lot sizes, and transfer mechanism. Hybrid systems involving multiple types of kanbans have also been established and studied. This development has led to the belief that future kanban systems will be increasingly complex. Some systems, such as that of Takahashi and Nakamura (1999), require the aid of software for real-time adjustments. Other researchers (Markham et al., 2000; Zhang et al., 2005; Moattar Husseini et al., 2006) have ventured into creating optimum kanban settings using advanced computer techniques, such as artificial intelligence.
In this paper, we offer a new hybrid system based on a value stream mapping (VSM) exercise. The system is a combination
of two well-known techniques: kanban and ConWIP. To the best of our knowledge, even though the mechanism employed is
simple and naturally fits into the production under study, this system has yet to be proposed elsewhere. The system is also
sufficiently generic to warrant wider applications, especially as the production setting and problems faced in the case study
are not unique.
The paper begins with an introduction on the pull system and its various types. Afterwards, VSM is introduced as the main
methodology, and the sequences of its implementation leading to the final value stream map are presented. The description of
the proposed system is then given. Finally, the setup of computer simulation and discussions on the results obtained are
provided.
International Journal of Industrial Engineering, 20(7-8), 487-501, 2013

LEAN INCIPIENCE SPIRAL MODEL FOR SMALL AND MEDIUM ENTERPRISES
Mei Yong Chong, Jeng Feng Chin, Wei Ping Loh
Universiti Sains Malaysia
Malaysia
School of Mechanical Engineering, Universiti Sains Malaysia (USM), Engineering Campus, 14300 Nibong Tebal,
Penang, Malaysia

Small and medium enterprises (SMEs) support a balanced local economy by providing job opportunities and
industry diversity. However, weak management practices result in suboptimal operations and productivity in SMEs.
Few generic lean models are conceived with SMEs in mind. In this light, a lean transformation framework for SMEs is conceived, known as the lean incipience spiral model (LISM). It aims to introduce lean concepts effectively and later to facilitate sustainable transformation in SMEs that have limited prior exposure to the concepts. The model builds upon a steady and parsimonious diffusion of lean practices. Progressive implementation is promoted through a spiral life cycle model, where each cycle undergoes four phases. The lean transformation is guided by value stream mapping and a commercial lean assessment tool. Finally, the model was implemented in a suitable case study.
Keywords: Lean manufacturing, lean enterprise, small and medium enterprises, value stream mapping

1. INTRODUCTION
In general, small and medium enterprises (SMEs) are business enterprises operating with minimal resources for a
small market. However, the actual definition tends to vary among countries and is subject to constant revision. In
Malaysia, a manufacturing company with less than 150 employees or less than RM 25 million sales turnover is
categorized as an SME (Small and Medium Industries Development Corporation, 2011).
SMEs are acknowledged as key contributors to the development of the economy, increase in job opportunities, and
general health and welfare of global economies. SMEs provide more than half of the employment and value-added
services in several emerging countries; their impact is bigger in developed countries. Nevertheless, SMEs always
fall behind large enterprises in terms of gross domestic product (GDP). For example, a 2006 report cited in Kaloo (2010) showed that although SMEs accounted for 99.2% of the total establishments in Malaysia and employed 65.1% of the total workforce, they generated only 47.9% of the GDP.
Kaloo (2010) reasoned that large enterprises have a size advantage and are able to acquire mass production
technologies to reduce production cost. The required setup costs and relevant technologies may be financially
infeasible or unavailable for SMEs. With thinner profit margin, SMEs are also more vulnerable to financial losses
than large enterprises.
SMEs also face fierce competition due to low marketing channels and small niche market share (Analoui and
Karami, 2003). Most SMEs heavily rely on general machines that are shared by a high variety of products. To
maximize machine utilization, batch production is adopted with ad-hoc scheduling. Inefficient management
practices further add to the variability of production forms. This entails long production lead times, impeding rapid
response to customer demand.
SMEs are seed beds for future large enterprises; eventually, an SME needs to grow into a large enterprise. The inability of an SME to capitalize on the benefits of knowledge and experience accumulated over long periods may be a sign that something is amiss. On this premise, the constant upgrading of operations is vital. Best practices have to be introduced and adopted for SMEs to achieve high performance in operational areas, in step with the stage of expansion. Unfortunately, case studies by Davies and Kochhar (2000) found a predominance of malpractices and a high rate of fire-fighting in SMEs. The mixed implementation of best practices also did not appear to be the result of a structured selection process, and a poor understanding of the relationship between practices and the effects of implementing them was ascertained. With protecting capital investment as the top priority, SMEs largely adopt a conservative, follower mindset that prefers short-term benefits.

International Journal of Industrial Engineering, 20(7-8), 502-514, 2013

DESIGN AND IMPLEMENTATION OF LEAN FACILITY LAYOUT SYSTEM OF A PRODUCTION LINE
Zhenyuan Jia, Xiaohong LU, Wei Wang, Defeng Jia
To address the common problem in Chinese manufacturing workshops that an unreasonable facility layout of a production line directly or indirectly leads to low production efficiency, a lean facility layout system for a production line is designed and developed. By analyzing the factors influencing facility layout, the optimization objectives and constraint conditions of facility layout are summarized. A functional model and a design structure model of the lean layout system are built. Based on in-depth analyses of the mathematical model formulated to express the optimization of the facility layout of a production line, a prototype lean facility layout system is developed. The results of applying the system to a cylinder liner production line show that the designed lean facility layout system can effectively enhance productivity and increase equipment utilization.

Key words: production line, lean, facility layout, model, design

1. INTRODUCTION
Because the phenomenon of an unreasonable production line facility layout directly or indirectly causing low production efficiency is very common in Chinese manufacturing workshops, research on the facility layout of production lines has always been a key research area in the industrial engineering domain (Sahin and Türkbey, 2009; Zhang et al., 2009; Diego-Mas et al., 2009; Raman et al., 2009). The facility layout form of a production line depends on the type of enterprise and the form of production organization (Khilwani et al., 2008; Amaral, 2008). Facility layout types are divided into technological layout, product layout, fixed-station layout (Suo and Liu, 2007), chain distribution (Sun, 2005), particular layouts combined with the actual situation (Cao et al., 2005), and so on.
Traditional qualitative methods of facility layout mainly include the modeling method, the sand table method, the drawing method, and the graphic illustration method (Qu and Mu, 2007); these methods rely mainly on personal experience and lack a scientific basis. When there are many production units, the relationships between facilities become more complex and qualitative layout methods are often unable to meet the requirements of the workshop; thus quantitative layout technologies emerged. Quantitative layout methods mainly include the process flow diagram method, the from-to table method, the work-unit relationship method, and the SLP method (advanced by Richard Muther) (Zhu and Zhu, 2004). The SLP method provides a layout planning approach that takes the relationship analysis of the logistics and non-logistics of the production units as the main line of planning, and it is the most typical systematic layout planning method (Muther, 1988). In recent years, with improvements in computer performance and the development of digital analysis methods, the computer-aided system layout planning (CASLP) method has appeared, applying computers and related technologies to the SLP method (Chang and Ma, 2008). The CASLP method not only greatly speeds up the layout planning process but also provides a simulation display of the layout scheme, drawing on advanced human-machine interaction and computer-aided drawing functions.
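For reference, the facility layout optimization alluded to above is classically stated as a quadratic assignment problem; the formulation below is the generic Koopmans-Beckmann statement, not the authors' exact mathematical model.

```latex
\min_{\pi \in S_n} \; \sum_{i=1}^{n} \sum_{j=1}^{n} f_{ij} \, d_{\pi(i)\pi(j)}
```

where f_ij is the material flow between facilities i and j, d_kl is the distance between candidate locations k and l, and pi assigns each facility to a location; workshop-specific constraints (areas, aisles, fixed stations) enter as restrictions on the feasible assignments pi.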

International Journal of Industrial Engineering, 20(7-8), 515-525, 2013

A CRITICAL PATH METHOD APPROACH TO A GREEN PLATFORM SUPPLY VESSEL HULL CONSTRUCTION
Eda TURAN a, Mesut GÜNER b
a Department of Naval Architecture and Marine Engineering, Yildiz Technical University, 34349 Besiktas, Istanbul, Turkey. E-mail: [email protected], Phone: +902123833156, Fax: +902122364165
b Department of Naval Architecture and Marine Engineering, Yildiz Technical University, 34349 Besiktas, Istanbul, Turkey. E-mail: [email protected], Phone: +902123832859, Fax: +902122364165
This study develops a critical path method approach for the first Green Platform Supply Vessel hull constructed in Turkey. The vessel was constructed and partly outfitted in a Turkish shipyard and delivered to Norway. The project management of the vessel was conducted utilizing the Critical Path Method (CPM), and the critical paths during the construction and partial outfitting period of this sophisticated vessel are presented. Additionally, precautions to prevent delay of the project are discussed.
Keywords: Project Management, Production, Critical Path Method (CPM), Green Vessel, Platform Supply Vessel.

1. INTRODUCTION
A Platform Supply Vessel (PSV) carries various types of cargoes such as chemicals, water, diesel oil, fuel oil, mud,
brine oil etc. between the platforms and ports. She supplies the requirements of the platforms during operations and
brings the wastes to the port.
Platform supply vessels are separated into three groups according to their deadweight tonnage: small-sized, medium-sized, and large-sized. Vessels with a capacity of less than 1500 DWT are classed as small-sized, those between 1500 DWT and 4000 DWT as medium-sized, and those above 4000 DWT as large-sized.
The vessel in this paper is a large-sized platform supply vessel with a capacity of 5500 DWT. This type of construction is the first of its kind in Turkish shipyards. The vessel is the first and biggest merchant ship using a fuel cell to produce power on board. The length of the vessel is 92.2 meters and the beam is 21 meters. After completion of the hull construction and partial outfitting in a Turkish shipyard, the vessel was delivered to Norway; the remaining works were completed in a Norwegian shipyard. The vessel operates in the North Sea.
The vessel uses not only heavy oil or diesel but also liquefied natural gas engines and a fuel cell, which distinguishes her from other merchant vessels. SOx, NOx, and CO2 emissions are reduced by the combination of gas engines and the fuel cell on board.
The construction of platform supply vessels is more difficult and complicated than that of the vessel types Turkish shipyards are experienced in, such as chemical carriers and cargo vessels. These vessels are shorter than conventional cargo vessels; however, since their steel weights are greater, shipyards see high demand to build them. Nowadays, for the above reasons, these vessels have become among the most demanded vessels for construction.
Ship production is a project-type production; therefore, project management is a vital factor in the construction of a vessel. The most common planning type in Turkish shipyards is block planning. In this planning approach, the ship is divided into blocks of various sizes before construction commences. Blocks are first constructed separately and then erected on the slipway after completion (Odabasi, 1996). Since block weights are determined according to the crane capacities of the shipyards, they may vary from one shipyard to another.
There are various processes during a shipbuilding project. In order to complete the ship construction profitably and on time, information, material, workmanship, and workflows should be kept under control in a manner appropriate to the shipyard (Turan, 2008).
The material flow is also significant for on-time delivery of vessels. Required materials should be present at the needed time and location; delays in material supply may slow down production or even stop it (Acar, 1999).
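To illustrate the CPM computation this kind of project management relies on, here is a minimal forward/backward-pass sketch in Python; the activities and durations are invented for illustration and are not taken from the shipyard case.

```python
# Minimal CPM: earliest/latest times and the zero-float critical path.
tasks = {  # name: (duration in days, list of predecessors) -- hypothetical
    "cut_steel":   (10, []),
    "build_block": (25, ["cut_steel"]),
    "outfitting":  (15, ["build_block"]),
    "erection":    (20, ["build_block"]),
    "launch":      (5,  ["outfitting", "erection"]),
}

# forward pass: earliest start/finish (tasks are listed in precedence order)
es, ef = {}, {}
for t in tasks:
    dur, preds = tasks[t]
    es[t] = max((ef[p] for p in preds), default=0)
    ef[t] = es[t] + dur

# backward pass: latest start/finish
project_end = max(ef.values())
ls, lf = {}, {}
for t in reversed(list(tasks)):
    dur, _ = tasks[t]
    succs = [s for s in tasks if t in tasks[s][1]]
    lf[t] = min((ls[s] for s in succs), default=project_end)
    ls[t] = lf[t] - dur

critical = [t for t in tasks if es[t] == ls[t]]  # activities with zero total float
print("project duration:", project_end, "days; critical path:", critical)
```

Delaying any activity on the printed critical path delays the delivery date one-for-one, which is why the precautions discussed in this paper target those activities first.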
In the literature, Yang and Chen (2000) performed a study to determine the critical path in an activity network. Lu et al. (2008) deal with resource-constrained critical path analysis for the construction area. Duan and Liao (2010) evaluated an improved ant colony optimization for determining project critical paths. Guerriero and Talarico (2010),
International Journal of Industrial Engineering, 20(9-10), 526-533, 2013

RECENT DIRECTIONS IN PRODUCTION AND OPERATION MANAGEMENT: A SURVEY
Vladimir Modrak 1 and Ion Constantin Dima 2
1 Technical University of Kosice, Bayerova, Nr. 1, Presov, Slovakia, e-mail: [email protected]
2 Valahia University of Targoviste, B-dul Regele Carol I, Nr. 2, Targoviste, Romania, e-mail: [email protected]

Although overviews of overall historical developments in a given cognitive domain are useful, this survey treats the modern era of operations management. The work starts by describing the development and current position of operations management in the production sector. Subsequently, the decisive development features of operations management are articulated and analyzed. Finally, the opportunities and challenges of modern operations management for practitioners are discussed.
Keywords: strategic management, operations strategy, organizational change, innovation

1. INTRODUCTION
Operations management (often called production management) may be defined in different ways depending on the angle of view. Since the discipline is a field of management, it focuses on carefully managing processes to produce and distribute products faster, better, and cheaper than competitors. Operations Management (OM) concerns practically all operations within the organization, and the objectives of its activities focus on the efficiency and effectiveness of processes. The modern history of production and operations management began in the 1950s with the extensive development of operations research tools: waiting line theories, decision theories, mathematical programming, scheduling techniques, and others. However, the material covered in higher education was quite fragmented, without the umbrella of what is now called production and operations management (POM). Subsequently, the first publications, "Analysis of Production Management" by Bowman and Fetter (1957) and "Modern Production Management" by Elwood Buffa (1961), represented an important transition from industrial engineering to operations management. Operations management finally appears to be gaining a position as a respected academic discipline. OM as a discipline went through its own evolution, which has been comprehensively characterized by Chase and Aquilano (1989). Thus, this may be a good time to update the evolution of the field. To achieve this goal, the major publications/citations in this field and their evolving research utility over the decades will be identified in this paper.

2. OPERATION MANAGEMENT IN THE CONTEMPORARY ERA
The process of building operations management theory and defining its scope has been treated by a number of authors. As mentioned above, the modern era of POM is closely connected with the history of industrial engineering (IE). The development of the IE discipline has been greatly influenced by the impact of operations research (Turner et al., 1993). Operations research (OR) was originally aimed at solving difficult war-related problems through the use of mathematics and other scientific branches. The diffusion of new mathematical models, statistics, and algorithms to aid decision-making had a dramatic impact on industrial engineering development. Major industrial companies established operations research groups to help solve their problems. In the 1960s, expectations of OR were extremely high, and as Luss and Rosenwein (1997) commented, over the years it often appeared that the mathematics of OR became the goal rather than the means of supporting the solution of real problems. As a result, OR groups in companies were transferred to traditional organizational units. In reaction to this disappointment, Corbett and Van Wassenhove (1993) classified OR specialists into three classes: theoreticians; management consultants, who focus on using available methods to solve practical problems; and the in-between specialists called operations engineers, who adapt and enhance methods and approaches in order to solve practical problems. The term "operations engineers" was coined for lack of a better term; accordingly, the group could also be called operations managers, and the field conducting applied research that
International Journal of Industrial Engineering, 20(9-10), 534-547, 2013

A CASE STUDY OF APPLYING FUZZY DEMATEL METHOD TO EVALUATE PERFORMANCE CRITERIA OF EMPLOYMENT SERVICE OUTREACH PROGRAM
Jiunn-I Shieh 1, Hsuan-Kai Chen 2 (Corresponding Author), Hsin-Hung Wu 3
1 Department of Information Science and Applications, Asia University, No. 500, Lioufeng Rd., Wufeng, Taichung County, Taiwan 41354. E-mail: [email protected]
2 Department of Marketing and Logistics Management, Chaoyang University of Technology, No. 168, Jifong E. Rd., Wufeng, Taichung County 41349, Taiwan. E-mail: [email protected]
3 Department of Business Administration, National Changhua University of Education, No. 2 Shida Road, Changhua City, Taiwan 500. E-mail: [email protected]

The economic and financial crisis led to deterioration of the employment market in Taiwan. The Bureau of Employment and Vocational Training, Council of Labor Affairs of the Executive Yuan has been aggressively conducting the Employment Service Outreach Program to resolve this tough issue. Under this program, outreach personnel are recruited, trained, and supervised to perform duties including identifying unemployed persons and then providing them with job information, using social resource links to increase employment opportunities, conducting employer forums or workshops for job-seekers, and so on. This study applies the fuzzy decision-making trial and evaluation laboratory (DEMATEL) method not only to evaluate the importance of the criteria but also to construct the causal relationships among the criteria for evaluating outreach personnel. The results show that job-seeking service is the most critical of the three first-tier criteria. In addition, the number of unemployed people identified and the number of follow-up visits are the two most important causes under the category of job-seeking service when the performance of outreach personnel in the Employment Service Outreach Program is evaluated.
Keywords: Employment service outreach program, Outreach personnel, Fuzzy theory, Fuzzy DEMATEL

1. INTRODUCTION
In early 2008, the unemployment rate in Taiwan was 3.80%. Because of the economic and financial crisis, the average unemployment rate in 2009 increased to 5.85%. Furthermore, the highest unemployment rate occurred in August 2009, at 6.13%, representing 672 thousand unemployed persons. As a result, reducing the unemployment rate has become a tough issue faced by the government. In order to ease the negative impact of unemployment, the Bureau of Employment and Vocational Training, Council of Labor Affairs of the Executive Yuan has been aggressively conducting the Employment Service Outreach Program.
This program, performed by outreach personnel, consists of identifying unemployed persons and then
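To make the computational core of DEMATEL concrete, here is a minimal crisp sketch in Python; the fuzzy variant used in the paper first defuzzifies expert ratings into such a crisp matrix, and the 3x3 direct-influence matrix below is invented for illustration.

```python
# Crisp DEMATEL core: normalize, compute the total-relation matrix,
# then classify criteria as causes or effects (synthetic 3x3 example).
import numpy as np

A = np.array([          # A[i, j]: how strongly criterion i influences j (0-4 scale)
    [0, 3, 2],
    [1, 0, 3],
    [2, 1, 0],
], dtype=float)

D = A / A.sum(axis=1).max()           # normalize by the largest row sum
T = D @ np.linalg.inv(np.eye(3) - D)  # total-relation matrix T = D (I - D)^-1

r = T.sum(axis=1)                     # influence each criterion dispatches
c = T.sum(axis=0)                     # influence each criterion receives
for i, name in enumerate(["criterion_1", "criterion_2", "criterion_3"]):
    role = "cause" if r[i] - c[i] > 0 else "effect"
    print(f"{name}: prominence r+c = {r[i] + c[i]:.2f}, relation r-c = {r[i] - c[i]:.2f} ({role})")
```

Criteria with positive r-c are the "causes" the study highlights (e.g., the number of unemployed people identified), while prominence r+c ranks overall importance.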
International Journal of Industrial Engineering, 20(9-10), 548-561, 2013

UTILIZING SIGN LANGUAGE GESTURES FOR GESTURE-BASED INTERACTION: A USABILITY EVALUATION STUDY
Minseok Son 1, Woojin Park* 1, Jaemoon Jung 1, Dongwook Hwang 1 and Jungmin Park 2
1 Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, Korea, 151-744
2 Korea Institute of Science and Technology, 5 Hwarang-ro, Seongbuk-gu, Seoul, Korea, 136-791
* Woojin Park is the corresponding author of the paper

Utilizing gestures of major sign languages (signs) for gesture-based interaction seems to be an appealing idea as it
has some obvious advantages, including: reduced time and cost for gesture vocabulary design, immediate
accommodation of existing sign language users and supporting universal design and equality by design. However, it
is not well understood whether or not sign language gestures are indeed adequate for gesture-based interaction,
especially in terms of usability. As an initial effort to enhance our understanding of the usability of sign language
gestures, the current study evaluated Korean Sign Language (KSL) gestures employing three usability criteria:
intuitiveness, preference and physical stress. A set of 18 commands for manipulating objects in virtual worlds was
determined. Then, gestures for the commands were designed using two design methods: the sign language method
and the user design method. The sign language method consisted of simply identifying the KSL gestures
corresponding to the commands. The user design method involved having user representatives freely design
gestures for the commands. A group of evaluators evaluated the resulting sign language and user-designed gestures
in intuitiveness and preference through subjective ratings. Physical stresses of the gestures were quantified using an
index developed based on Rapid Upper Limb Assessment. The usability scores of the KSL gestures were compared
with those of the user-designed gestures for relative evaluation. Data analyses indicated that overall, the use of the
KSL gestures cannot be regarded as an excellent design strategy when viewed strictly from a usability standpoint,
and the user-design approach would likely produce more usable gestures than the sign language approach if design
optimization is performed using a large set of user-designed gestures. Based on the study findings, some gesture
vocabulary design strategies utilizing sign language gestures are discussed. The study findings may inform future
gesture vocabulary design efforts.
Keywords: sign language, gesture, gesture-based interaction, gesture vocabulary, usability

1. INTRODUCTION
Gesture-based interaction has been actively researched in the human computer interaction (HCI) community as it
has a potential to improve human-machine interaction (HMI) in various circumstances (Nielsen et al., 2003; Cabral
et al., 2005; Bhuiyan et al., 2009; Wachs et al., 2011; Choi et al., 2012). Compared with other modalities of
interaction, the use of gestures has many distinct advantages: first, gestures are the most basic means of human-to-human communication along with speech, and thus may be useful for realizing natural, intuitive and comfortable interaction (Baudel and Beaudouin-Lafon, 1993). Second, human gestures are rich in expression and can convey many different meanings and concepts, as can be seen in existing sign languages' extensive gesture vocabularies.
Third, gesture-based interaction can be utilized in situations where the use of other interaction methods is
inadequate. For example, covert military operations in battle fields would preclude the use of voice-based or
keyboard and mouse-based interaction. Fourth, the use of touchless gestures would be ideal in environments that
require absolute sanitation, such as operating rooms (Stern et al., 2008a; Wachs et al., 2011). Fifth, gestures may
promote chunking, and therefore, may alleviate cognitive burden during human-computer interaction (Baudel and
Beaudouin-Lafon, 1993; Buxton, 2013). Sixth, gesture can be combined easily with other input modalities,
including voice, to enhance ease of use and expressiveness (Buxton, 2013). Finally, the use of the hands (or other
body parts) as the input device eliminates the needs for intermediate transducers, and thereby, may help reduce
physical stresses on the human body (Baudel and Beaudouin-Lafon, 1993).
One of the key research issues related to gesture-based interaction is the design of gestures. Typically, a gesture
design problem is defined as determining the best set of gestures for representing a set of commands necessary for
an application. Such set of gesture-command pairs is often referred to as a gesture vocabulary (GV) (Wachs et al.,
2008; Stern et al., 2008a). Gesture design is important because whether or not gesture-based interaction achieves
naturalness, intuitiveness and comfort largely depends on the qualities of designed gestures.
International Journal of Industrial Engineering, 20(9-10), 562-573, 2013

A STUDY ON PREDICTION MODELING OF KOREA MILITARY AIRCRAFT ACCIDENT OCCURRENCE
Sung Jin Yeoum 1, Young Hoon Lee 2
1 Department of IIE, Yonsei University
2 Department of IIE, Yonsei University, Republic of Korea
This research reports an analysis of the causes of accidents, with case studies spanning the last 30 years, in order to proactively predict the chances of accident occurrence for the Republic of Korea Air Force (ROKAF). Systematic, engineered analytical methods, namely the artificial neural network (ANN) and logistic regression, are employed to develop prediction models and to identify the superior technique of the two. Experimentation reveals that ANN outperforms the logistic regression technique in terms of prediction rate.
Significance: This research proposes accident prediction models which are anticipated to perform effectively in terms of accident prediction and prevention for military aircraft. Moreover, this research also provides an academic base, data, and direction for future research on this specific topic.
Keywords: Prediction Modeling, Accident Prediction Rate, Artificial Neural Network

1. INTRODUCTION
The ROKAF faces the chronic challenge of one or two aircraft accidents per year in the course of its scheduled air operations and training exercises. These accidents inevitably incur high aircraft costs and result in the loss of precious pilots' lives, with detrimental effects in terms of lowered morale and great grief among citizens. The ROKAF is making its best effort to address this challenge and has established an Air Safety Management Wing for this purpose. A few scientific and realistic improvements over the existing situation have been reported, but complete accident prevention is yet to be achieved (Byeon et al., 2008; Myung, 2008). Extensive research focusing on pilot error has been conducted, but no research focusing on jet fighter accident variable determination and consequent accident prevention models is available. The reason for this shortcoming is that data related to jet fighter accidents are restricted, off-limits, and inaccessible due to security issues.
For the aforementioned reason, accident prevention models have been developed for nuclear facilities, civilian aircraft, and so on, but no research has been conducted on jet fighter accident prediction and prevention. This research is one of a kind because it analyzes a total of 38 major jet fighter (F-A, F-B, F-C and F-D types) accidents over the span of the last 30 years (from 1980 to 2010) in an effort to comprehensively determine all factors and variables affecting military aircraft accidents. Instead of using traditional qualitative accident prevention variables, a quantitative analysis is engineered to extract accident prevention data. To increase the credibility of these data, we have used two data mining and analysis techniques, namely logistic regression analysis and ANN.
Causal jet fighter accident causes have also been included in the proposed accident prevention model as applicable variables, along with other factors or variables depicting major accident causes. Individual flight capability (e.g., fighter jet pilot age of 23 years, 2400 hours of flight time, experience as safety flight leader and squadron leader) is included in the crash prediction model. It is worth mentioning that the literature on theoretical considerations, suitable research methods, and safety management and crash prediction theories related to this specific domain was studied in detail before this research. Two groups were formed prior to collecting data via basic statistical analysis (a t-test) in order to distinguish accident-prone variables from accident-free variables. The Durbin-Watson statistic was used to address variable independence, multicollinearity, tolerance limits, variance inflation factors, condition indices, and degree-of-dispersion issues.
Crash prediction models were built through the analysis of the aforementioned data via logistic regression and ANN (using SPSS 18). The models were also verified using test data for validation, and the superiority of one model over the other is determined by the better prediction rate. A comprehensive literature survey related to this area of research is presented in Section 2. In Sections 3-5, models for jet fighter accident prediction are developed and validated using test data gathered over the span of the last 30 years. In Sections 6-7, conclusions are drawn along with future research directions and suggested implications.
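To make the model comparison concrete, here is a minimal sketch using scikit-learn in place of SPSS; the five predictors and the labels are synthetic stand-ins, since the real accident data are restricted.

```python
# Logistic regression vs. a small neural network on synthetic stand-in data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))  # 5 hypothetical predictors (age, flight hours, ...)
# a mildly nonlinear ground truth, so the ANN has something to exploit
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.5, 200) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

logit = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
ann = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(X_tr, y_tr)

print("logistic regression accuracy:", accuracy_score(y_te, logit.predict(X_te)))
print("ANN (MLP) accuracy:          ", accuracy_score(y_te, ann.predict(X_te)))
```

Comparing held-out prediction rates in this way mirrors the paper's criterion for declaring one technique superior to the other.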

International Journal of Industrial Engineering, 20(9-10), 574-588, 2013

A COMBINED APPROACH OF CYCLE TIME ESTIMATION IN MASS CUSTOMIZATION ENTERPRISE
Feng Liang 1*, Richard Y K Fung 2 and Zhibin Jiang 3
1 Dept. of Industrial Engineering, Nankai University, Tianjin 300457, China
2 Department of Manufacturing Engineering & Engineering Management, City University of Hong Kong, Hong Kong, China
3 Dept. of Industrial Engineering, Shanghai Jiao Tong University, Shanghai 200240, China

To enhance customer satisfaction and improve the ability to respond quickly, the mass customization production mode is advocated in many manufacturing enterprises. In mass customization enterprises, however, customization demands influence standard cycle time estimation, which is essential in contract negotiation, capacity planning, and due date assignment. Hence, in this paper a combined methodology employing an analytical model and a statistical regression method is proposed to facilitate cycle time estimation in the mass customization enterprise. Using inferential reasoning on the analytical optimal model for cost minimization, it is deduced that the relationship between the customization degree and the cost coefficient provides an efficient way to estimate the cycle time accurately, and this relationship is described with a statistical regression method. Finally, a case study from a bus manufacturing enterprise is used to illustrate the detailed estimation procedures, and a further discussion is presented to explain the significance for practice.
Key words: Cycle Time; Mass Customization; Statistics Regression; Customization Degree; Cost Coefficient

1. INTRODUCTION

One of the essential criteria for making reliable due date commitments and maintaining a high level of customer service is having accurate estimates of cycle time. Owing to the lack of fast and accurate cycle time estimation methods in the mass customization enterprise, practitioners often use constant cycle times as the basis for due date assignment and scheduling. However, the constant cycle time is so simplified that due dates and schedules may not be assigned and constructed with acceptable accuracy. In many production systems, this approach results in a high rate of late deliveries, as the mean cycle time is used as the basis for determining the delivery date. Therefore, the development of a model for cycle time estimation in the mass customization enterprise is essential, though it may be rather complex. Beyond the objective of due date setting, accurate cycle time estimates are also needed for better management of shop floor control activities, such as order review/release, evaluation of shop performance, identification of jobs that require expediting, lead time comparisons, etc. All these application areas make the cycle time estimation problem as important as other shop floor control activities (Sabuncuoglu and Comlekci, 2002).
In fact, in the mass customization enterprise, the difficulty of cycle time estimation is due not only to the complexity of manufacturing systems, but also to the high customization degree. It is well known that the actual cycle time may vary from the theoretical cycle time because of customization demands. For example, in an automobile enterprise, a typical mass customization enterprise, there are 35 important parts of which 14 are optional for the customers. The cycle time therefore varies from 20 days to 24 days. If the average cycle time is set as the promised cycle time, the late delivery rate may reach 23%. Hence, in order to avoid late deliveries, the actual cycle time has to be determined according to the required customization degree.
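
A minimal sketch of the statistical-regression half of such a combined approach is given below, regressing observed cycle time on customization degree (the fraction of optional parts selected). The sample numbers loosely echo the 20-24 day automobile example above and are invented, not the paper's bus-manufacturing data.

import numpy as np

# customization degree = optional parts selected / optional parts available
degree = np.array([0.0, 0.2, 0.35, 0.5, 0.7, 0.85, 1.0])
cycle_days = np.array([20.1, 20.8, 21.5, 22.0, 22.9, 23.4, 24.0])

b, a = np.polyfit(degree, cycle_days, 1)   # least-squares line: t = a + b*d
print(f"cycle time ~ {a:.2f} + {b:.2f} * degree")
print(f"promised cycle time at degree 0.6: {a + b * 0.6:.1f} days")

Quoting the due date from the fitted line at each order's actual customization degree, rather than from a constant average, is what avoids the late deliveries described above.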
However, in a mass customization environment, estimating the cycle time, as the important basis for meeting due dates on time and utilizing existing capacity efficiently, is a more complex problem than in other production systems, as the example below shows.
According to statistics and analysis of historical production and after-sales service data in a bus manufacturing company, the measures of customer satisfaction on six factors are as shown in Figure 1.

* Corresponding author: Dr Feng Liang, Dept. of Industrial Engineering, Nankai University, 23 Hongda Street, Tianjin Economical Development Area, Tianjin 300457, China. Phone: (+86) 22 6622 9204, Fax: (+86) 22 6622 9204, Email: [email protected].


International Journal of Industrial Engineering, 20(11-12), 589-601, 2013

SYSTEM ENGINEERING APPROACH TO BUILD AN INFORMATION
SYSTEM FOR EMERGENCY CESAREAN DELIVERIES IN SMALL
HOSPITALS
Gyu M Lee
Department of Industrial Engineering, Pusan National University, Republic of Korea
Humans are imperfect beings, limited in their ability to perceive situations appropriately and make the right decisions quickly. Their perceptions and decisions often derive from personal experiences and characteristics. This vulnerability leads to frequent errors and mistakes, and aggravates matters in emergency situations marked by time pressure and confusion. In a situation where an emergency cesarean delivery (ECD) is required, immediate and appropriate medical care is vital to the fetus and the mother. However, the number of high-risk pregnancy obstetricians has been decreasing in recent years, and more medical staff are greatly needed. The American College of Obstetricians and Gynecologists (ACOG) stated in 1989 that hospitals with obstetric services should have the capability to begin an ECD within 30 minutes of the decision. This requirement places intense time pressure on the preparation and surgical teams. A distributed, mobile communication and information system to facilitate ECDs has been developed together with the healthcare staff at Pusan National University Hospital. The developed ECD Facilitator has been demonstrated to the staff at the hospital, and their responses have been obtained to assess whether such a system would reduce the decision-to-incision interval (DII) to well below the 30-minute ACOG guideline and reduce the likelihood of human errors that compromise patient safety. This system engineering approach can be readily adapted to other emergency and disaster situations.

1. INTRODUCTION
The operating room (OR) in hospitals is a complex system in which the effective integration of personnel, equipment, and
information is essential to the delivery of high quality health care services. A team of surgeons, nurses, and technicians
with appropriate knowledge and skills must be assembled to perform many complex tasks necessary to properly prepare
for and successfully complete the surgery. Then, they must have the appropriate equipment, supplies, and materials at
hand, and those items must not only be present, but properly placed and correctly configured to be used by the OR team.
Besides the knowledge and skills they bring to the OR, team members require additional information to support their
decisions and guide their actions, including accurate vital data, proper protocols and procedures, and medical reference
information, particularly if they encounter unfamiliar situations or complications in the course of the surgery. All of these
components in the complex OR system must be properly coordinated. The surgery must be carefully planned, personnel
must be in the right places at the right times, activities must be properly synchronized, logistics must be executed
efficiently, and the right information must be available when and where needed.
This coordination is made more difficult by the fact that emergency surgery is time-critical, where the life of the patient
may depend on the hospital's ability to assemble the OR team, prepare the OR and equipment, and provide the necessary
information to begin a surgical procedure within minutes. Emergency surgeries challenge even the largest, most capable
hospitals, but they are especially challenging for small, rural hospitals that do not have enough personnel and resources.
When a patient needing emergency surgery presents at a small hospital, the medical staff may be at home, on call, and
must be contacted and summoned to the hospital. As the team members begin arriving, they must start preparing the
patient, the OR and equipment for surgery. This is often complicated by the fact that small hospitals have few ORs and it
may not be practical to have one always ready for any specific class of emergency surgical procedure, thus requiring a
more lengthy preparation process. Moreover, small, rural hospitals often lack the information infrastructure needed to
deliver patient data, procedural knowledge, and medical reference information in an effective and timely manner.
The potential chaos and confusion of an emergency surgery in the middle of the night is compounded by the fact that the
medical personnel involved in the case are human, and human beings are fallible. Human beings are limited by nature in their abilities to sense, perceive, and act accurately and quickly, and innate cognitive biases compromise their judgment and decision-making capabilities. These fallibilities combine and interact with characteristics of the complex system and complicated situation, as described above, to yield delays and errors that may lead to further harm to the emergency patient or even, in some cases, death.
With that principle in mind, we utilize medical knowledge and engineering methods to design efficient, best-practice
processes and to create information and communication systems to facilitate emergency surgeries in small, rural hospitals.
The developed Emergency Cesarean Delivery Facilitator (ECD Facilitator) is a job performance aid to help summon,

International Journal of Industrial Engineering, 20(11-12), 602-613, 2013

EXPLORING BUSINESS MODELS FOR APPLICATION SERVICE
PROVIDERS WITH RESOURCE BASED REVIEW
JrJung Lyu a, Chia-Hua Chang b,*
a Department of Industrial and Information Management, National Cheng Kung University, Tainan 701, Taiwan, ROC, [email protected]
b Department of Management and Information Technology, Southern Taiwan University of Science and Technology, Tainan City 710, Taiwan, ROC, [email protected]
The Application Service Provider (ASP) concept extends traditional information application outsourcing and is currently used by numerous industries for information applications. Although value-added services can be generated with ASPs, failure rates in ASP markets remain high. This research applies Resource Based Theory (RBT) to evaluate the business models of ASPs in order to assess their positions and provide suggestions for development directions. The top ten application service providers among the fifty most significant ASPs were first selected to investigate the global ASP market and service trends; three of them were then explored to illustrate the RBT review. Based on the market review and the empirical investigation, it was found that only a few ASPs can provide integrated service contents that adapt to fit the real demands of customers. ASPs should focus on the ecosystem perspective and consider employing strategic alliances in order to provide an integrated solution for their customers and sustain competitive advantage.
Keywords: Application Service Provider, Resource Based Theory, Outsourcing Strategy, Business Model

1. INTRODUCTION
Information technology (IT) has become one of the most critical survival strategies for enterprises wishing to adapt to rapidly evolving environments arising from the advent of the network economy. To retain competitive advantage, enterprises must seek out more efficient ways to utilize available resources. Thus, when internal resources cannot meet environmental changes, enterprises may turn to an outsourcing strategy and ally with their suppliers to better use external resources. In this way, industries can broaden their services, reduce transaction costs, maintain core competence, and increase profit margins through such combinations of outsourced resources (Cheon et al., 1995). Since information technology has become a critical resource for business, an outsourcing strategy is an option involving the commitment of all or part of information system (IS) activities, manpower and other IS resources to external suppliers (Adeley et al., 2004). The most critical reason for employing an IS outsourcing strategy is to decrease the inherent risks and compensate for the lack of ability to develop such strategic applications in-house. Application service providers (ASPs) have emerged in recent years offering services like traditional outsourcing, and have received much attention in IS markets. Although the market scale of ASPs shows continued growth, most enterprises do not realize or are not familiar with the way to outsource through ASPs (Currie and Seltsikas, 2001; Chen and Soliman, 2002). Therefore, the purpose of this research is to develop an evaluation structure for exploring and evaluating ASPs from a supply-side perspective. The results could then help ASPs recognize corresponding strategic marketing directions in the future.


International Journal of Industrial Engineering, 20(11-12), 614-630, 2013

HYBRID FLOW SHOP SCHEDULING PROBLEMS INVOLVING SETUP
CONSIDERATIONS: A LITERATURE REVIEW AND ANALYSIS
Márcia de Fátima Morais, Moacir Godinho Filho, Thays Josyane Perassoli Boiko
Federal University of São Carlos
Department of Industrial Engineering
Rodovia Washington Luiz, km 235 - São Carlos - SP - Brazil
email: [email protected]
This research is dedicated to the Production Scheduling Problem in a hybrid flow shop with setup times separated
from processing times. The goal is to survey and analyze the current literature to identify papers that develop
methods to solve this problem. In this review, it was possible to identify and analyze 72 papers that have addressed
this issue since 1991. Analyses were performed using the number of papers published over the years, the approach
used in the development of the methods for the solutions, the type of objective function, the performance criterion
adopted, and the additional constraints considered. The analysis results provide some conclusions about the state of
the art in the subject and also enable us to identify suggestions for future research in this area.
Keywords: Production Scheduling, Hybrid Flow Shop, Sequence-Dependent Set-up Time, Sequence-Independent
Set-up Time.

1. INTRODUCTION
In scheduling theory, a multi-stage production process with the property that all of the products must pass through a
number of stages in the same order is classified as a flow shop. In a simple flow shop, each stage consists of a single
machine that handles at most one operation at a time. When it is assumed that, at least in one stage, a number of
machines that operate in parallel are available, this model is known as a hybrid flow shop (Sethanan, 2001).
According to Ruiz and Vázquez-Rodríguez (2010), a hybrid flow shop (HFS) system processes jobs in a series of production stages, each containing parallel machines, with the aim of optimizing one or more objective functions. Solving the production scheduling problem in such an environment is, in most cases, NP-hard.
Many real manufacturing systems are hybrid flow shop systems. The products manufactured in such an
environment can differ in certain optional components; consequently, the processing time on a machine differs from
one product to the next, and the need to prepare one or more machines before beginning a job or after finishing a job
is frequently present. In scheduling theory, the time required to shift from one job to another on a given machine is
defined as the additional production cost or the setup time. The corresponding scheduling problems, which consider
the setup times, have a higher computational complexity (Burtseva, Yaurima and Parra, 2010).
An explicit treatment of setup times is required in most applications and is of special interest, as
machine setup time is a significant factor for production scheduling in many practical cases. Setup time could easily
consume more than 20% of the available machine capacity if it is not handled well (Pinedo, 2008). Many examples
of scheduling problems that consider separable setup times are given in the literature, including electronics
manufacturing, automobile assembly plants, the packaging industry, the textile industry, steel manufacturing,
airplane engine plants, label sticker manufacturing companies, the semiconductor industry, maritime container
terminals, and the ceramic tile manufacturing sector, as well as in the electronics industry in sections for inserting
components on printed circuit boards (PCB), where this type of problem occurs frequently. Hybrid flow shop
scheduling problems that consider setup times are among the most difficult classes of scheduling problems.
Research in production scheduling began in the 1950s; however, until the mid-1990s, most research considered setup times to be irrelevant or of minor importance and usually included them in the processing times of jobs and/or batches (Allahverdi, Gupta and Aldowaisan, 1999). Ruiz and Vázquez-Rodríguez (2010) show that studies that address
hybrid flow shop scheduling and that consider separate setup cost or time arose in the early 1990s. Within this
context, the main goal of this research is to perform a literature review on hybrid flow shop scheduling problems
with setup considerations. After the literature review, this paper also presents an analysis of this review, attempting
to find literature gaps and suggestions for future research in this field.
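
For concreteness, the sketch below (invented jobs, times, and setup rule; not taken from any surveyed paper) evaluates the makespan of a job sequence in a two-stage hybrid flow shop with setup times separated from processing times, assigning each job greedily to the earliest-available parallel machine at each stage.

import itertools

machines_per_stage = [2, 1]                       # parallel machines per stage
proc = {"A": [4, 3], "B": [3, 5], "C": [5, 2]}    # proc[job][stage]

def setup(prev, job):
    # toy sequence-dependent setup, separated from the processing time
    return 1 if prev in (None, job) else 2

def makespan(sequence):
    ready = {j: 0 for j in sequence}              # completion at previous stage
    for s, m in enumerate(machines_per_stage):
        free, last = [0] * m, [None] * m          # per-machine state at stage s
        for j in sequence:
            k = min(range(m), key=free.__getitem__)
            start = max(ready[j], free[k] + setup(last[k], j))
            free[k], last[k] = start + proc[j][s], j
            ready[j] = free[k]                    # next stage sees this time
    return max(ready.values())

best = min(itertools.permutations(proc), key=makespan)
print(best, makespan(best))

Even this toy instance is solved by full enumeration; the NP-hardness noted above is why the surveyed literature relies on heuristics and metaheuristics for realistic sizes.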


International Journal of Industrial Engineering, 21(1), 1-17, 2014

DEVELOPING A ROBUST PROGRAMMING APPROACH FOR THE
RESPONSIVE LOGISTICS NETWORK DESIGN UNDER UNCERTAINTY
Reza Babazadeh, Fariborz Jolai, Jafar Razmi, Mir Saman Pishvaee
Department of Industrial Engineering, College of Engineering,
University of Tehran, Tehran, Iran
Operational and disruption risks derived from the environment have forced firms to design responsive supply chain
networks. This paper presents a multi-stage multi-product robust optimization model for responsive supply chain
network design (SCND) under operational and disruption risks. First, a deterministic mixed-integer linear programming
(MILP) model is developed considering different transportation modes, outsourcing, flexibility and cross-docking
options. Then, the robust counterpart of the presented model is developed to deal with the inherent uncertainty of input
parameters. The proposed deterministic and robust models are assessed under both operational and disruption risks.
Computational results show the superiority of the proposed robust model in managing risks, with a reasonable increase in total costs compared to the deterministic model.
Keywords: Robust Optimization, Responsive Supply Chain Network Design, Operational & Disruption Risks.
1. INTRODUCTION
Facility location is one of the most important decisions in the supply chain network design (SCND) problem and plays a
crucial role in the overall performance of the supply chain. Generally, the SCND problem includes determining the
numbers, locations and capacities of facilities, as well as the amount of shipments between them (Amiri, 2006).
Nowadays, time and cost are common gauges used to assess the performance of supply chains, and both are minimized when treated simultaneously. Treating the delivery time criterion as an individual objective leads to a bi-objective problem in which the delivery time and cost objectives conflict with each other (Pishvaee and Torabi, 2010); that is, quick delivery implies high costs. The time minimization objective, however, can be integrated into the cost objective when it is expressed in terms of monetary value. Increased environmental changes in competitive markets force manufacturing companies to be more flexible and improve their responsiveness (Gunasekaran and Kobu, 2007). Some options, such as direct shipments from the supply centres to customers, decisions on opening or closing facilities (plants, distribution centres, etc.) for forthcoming seasons (Rajabalipour et al., 2013), and utilizing different transportation modes, can improve the flexibility of an SCN. Cross-docking is a logistics function in which products are shipped directly from the origin to the destination, without being stored in warehouses or distribution centres (Choy et al., 2012). Utilizing cross-dock centres as an intermediary stage between supply centres and customer zones leads to significant advantages for the manufacturing and service industries (Bachlaus et al., 2008). In recent decades, some companies, including Wal-Mart, have used cross-docks at different sites to achieve competitive advantages in distribution activities. Although inventory holding is not attractive, especially in lean production systems, it can play a significant role in dealing with supply and demand uncertainty (You and Grossmann, 2008). In today's world, the increased diversity of customer needs prevents manufacturing and service industries from making fast changes unless this is done through outsourcing. Outsourcing is performed for many reasons, such as cost savings, focus on core business, quality improvement, access to knowledge, reduced time to market, enhanced capacity for innovation and risk management (Kang et al., 2012). Some companies, like Gina Tricot and Zara, which use the outsourcing approach, have gained a massive advantage (Choudhury and Holmgren, 2011).
Many previously presented models consider fixed capacities for all facilities, whereas determining the capacity of facilities is often difficult in practice (Wang et al., 2009). Therefore, the capacity level of facilities should be determined as a decision variable in mathematical programming models. Since opening and closing facilities are strategic and time-consuming decisions (Pishvaee et al., 2009), an SCN should be designed in a way that can be sustained under operational and disruption risks. Chopra and Sodhi (2004) and Chopra et al. (2005) mentioned that organizations should consider uncertainty in its various forms in supply chain management to deal with its destructive and burdensome effects on the supply chain.
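
For orientation, the deterministic core of such an SCND model is a capacitated facility location MILP. The sketch below, written with the open-source PuLP package and invented sets, costs, and capacities, shows only that skeleton; the paper's model layers transportation modes, outsourcing, cross-docking options and the robust counterpart on top of it.

import pulp

plants, customers = ["p1", "p2"], ["c1", "c2", "c3"]
fixed = {"p1": 100, "p2": 80}                       # opening cost per plant
cap = {"p1": 60, "p2": 50}                          # plant capacity
demand = {"c1": 30, "c2": 25, "c3": 20}
ship = {(p, c): 2 + i + j                           # unit shipping cost
        for i, p in enumerate(plants) for j, c in enumerate(customers)}

m = pulp.LpProblem("scnd_core", pulp.LpMinimize)
y = pulp.LpVariable.dicts("open", plants, cat="Binary")
x = pulp.LpVariable.dicts("flow", ship, lowBound=0)

m += (pulp.lpSum(fixed[p] * y[p] for p in plants)
      + pulp.lpSum(ship[p, c] * x[p, c] for p in plants for c in customers))
for c in customers:                                 # meet every demand
    m += pulp.lpSum(x[p, c] for p in plants) == demand[c]
for p in plants:                                    # ship only from open plants
    m += pulp.lpSum(x[p, c] for c in customers) <= cap[p] * y[p]

m.solve(pulp.PULP_CBC_CMD(msg=False))
print({p: int(y[p].value()) for p in plants}, pulp.value(m.objective))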
A review of various sources shows that most works in the SCND area assume that input parameters, such as demands, are deterministic (see Melo et al., 2009; Klibi et al., 2010). Although some studies have considered SCND under uncertain conditions, most of them used stochastic and chance-constrained programming methods (Alonso-Ayuso et al., 2003; Santoso et al., 2005; Listes and Dekker, 2005; Salema et al., 2007). The major drawbacks

International Journal of Industrial Engineering, 21(1), 18-32, 2014

DEVELOPMENT OF A CLOSED-LOOP DIAGNOSIS SYSTEM FOR
REFLOW SOLDERING USING NEURAL NETWORKS AND SUPPORT
VECTOR REGRESSION
Tsung-Nan Tsai1 and Chiu-Wen Tsai2
1 Department of Logistics Management, Shu-Te University, Kaohsiung, 82445, Taiwan
2 Graduate School of Business and Administration, Shu-Te University, Kaohsiung, 82445, Taiwan
Corresponding author's e-mail: Tsung-Nan Tsai, [email protected]

This study presents an industrial application of artificial neural networks (ANN) and support vector regression (SVR) to diagnose and control the reflow soldering process in a closed-loop framework. Reflow soldering is the principal process for the fabrication of a variety of modern computer, communication, and consumer (3C) electronics products. It is important to achieve robust electrical connections without changing the mechanical and electronic characteristics of the components during the reflow soldering process. In this study, a 3^(8-4) experimental design was conducted to collect structured process information. The experimental data were then used for training via the ANN and SVR techniques to investigate both the forward and backward relationships between the heating factors and the resultant reflow thermal profile (RTP), so as to develop a closed-loop reflow soldering diagnosis system. The proposed system includes two modules: (1) a forward-flow module used to predict the output elements of the RTP and evaluate its performance based on ANN and a multi-criteria decision-making (MCDM) criterion; (2) a backward-flow module employed to ascertain the set of heating parameter combinations that best fulfills the production requirements of the expected throughput rate, product configuration, and the desired solderability. The efficiency and cost-effectiveness of this methodology were empirically evaluated, and the results show promise for improving soldering quality and productivity.
Significance: The proposed closed-loop reflow soldering process diagnosis system can predict the output elements
of a reflow temperature profile according to process inputs. This system is also able to ascertain the set of heating
parameter combinations which best fulfill the production requirements and the desired solderability. The empirical
evaluation demonstrates the efficiency and cost-effectiveness of the improvements in soldering quality and productivity.
Keywords: SMT, analytic hierarchy process, neural network, reflow soldering, support vector regression

INTRODUCTION

High-speed surface mount technology (SMT) is an important development for fabricating many types of modern 3C products in the electronics assembly industry. An SMT assembly process consists of three main process steps: stencil printing, component placement, and reflow soldering. Reflow soldering is the principal process used to melt the powder particles in the solder paste and then solidify them to create strong metallurgical joints between the pads of the printed circuit board (PCB) and the surface mounted devices (SMDs) in a reflow oven. The reflow soldering operation is widely recognized as a key determinant of production yield in PCB assembly (Soto, 1998; Prasad, 2002). A poor understanding of reflow soldering behavior can result in considerable troubleshooting time, soldering defects, and manufacturing costs.
The required function of a reflow oven is to heat the assembled boards to a predefined temperature at the proper heating rates for a specific elapsed time. The forced convection reflow oven is the most commonly used heating source in surface mount assembly since it meets the economic and technical requirements of mass production. A reflow thermal profile (RTP) is a time-temperature graph used to monitor and control the heating phases and their duration, so that the assembled boards are heated enough to form reliable solder joints without changing the mechanical and electronic characteristics of the components. An inhomogeneous and inefficient reflow temperature profile may cause various soldering failures (Illés, 2010), as shown in Figure 1. A typical RTP using a leaded solder paste is comprised of preheating, soaking, reflowing and cooling phases, as shown in Figure 2. During the preheating phase, the board and the relevant components are heated quickly from room temperature to about 150 °C. In the soaking phase, the temperature continues rising to approximately 180 °C; at the same time, flux is activated to gradually wet and clean oxidation from the surfaces of the metal pads and component leads. The solder paste melts and changes into a liquid solder mass in the reflowing phase. Eventually, during the cooling phase, electrical connections form between the component leads and the PCB pads. The grey area between the contoured and faint lines shows the acceptable temperature range that should produce acceptable soldering quality according to the specification provided by the solder paste maker (Itoh, 2010).
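
As a small illustration of the profile elements that a forward ANN/SVR model predicts, the sketch below extracts two typical RTP outputs (peak temperature and time above liquidus) from a sampled time-temperature curve. The synthetic curve and the 183 °C Sn-Pb liquidus are assumptions for illustration, not values from the study.

import numpy as np

t = np.arange(0.0, 300.0, 5.0)                  # sample times, seconds
temp = np.interp(t, [0, 90, 180, 240, 299],     # synthetic leaded-paste RTP
                    [25, 150, 180, 215, 60])    # preheat, soak, reflow, cool

liquidus = 183.0                                # assumed Sn-Pb liquidus, deg C
above = temp > liquidus
print(f"peak temperature: {temp.max():.0f} C")
print(f"time above liquidus: {above.sum() * 5.0:.0f} s")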


International Journal of Industrial Engineering, 21(1), 33-44, 2014

EFFICIENT DETERMINATION OF HELIPORTS IN THE CITY OF RIO DE
JANEIRO FOR THE OLYMPIC GAMES AND WORLD CUP: A FUZZY
LOGIC APPROACH
Claudio S. Bissoa, Carlos Patricio Samanezb
a Production Engineering Program, Federal University of Rio de Janeiro (UFRJ), COPPE, Brazil
b Industrial Engineering Department, Pontifical Catholic University of Rio de Janeiro (PUC-Rio), Brazil
The purpose of this study was to determine a method of evaluation for the use and adaptation of Helicopter Landing
Zones (HLZs) and their requirements for registered public-use for the Olympic Games and the World Cup. The
proposed method involves two stages. The first stage consists of clustering the data obtained through the Aerial and
Maritime Group/Military Police of the State of Rio de Janeiro (GAM/PMERJ). The second stage uses the weighted
ranking method. The weighted ranking method was applied to a selection of locations using fuzzy logic, linguistic
variables and a direct evaluation of the alternatives. Based upon the selection of four clusters, eight HLZs were obtained
for ranking. The proposed method may be used to integrate the air space that will be used by the defense and state
assistance agencies with the locations of the sporting events to be held in 2014 and 2016.
Significance: In this paper, we propose a model for evaluating the use and adaptation of Helicopter Landing Zones. This method involves clustering data and selecting locations using fuzzy logic and a direct evaluation of the alternatives. The proposed method allowed for a precise ranking of the selected locations (HLZs), contributing to the development of public policies aimed at reforming the local aerial resources.
Keywords: Fuzzy logic, Site selection, Transport, Public Policy.

1. INTRODUCTION
The city of Rio de Janeiro will host the 2014 World Cup and the 2016 Olympic competitions. Thus, more effective technical mapping is urgently needed to rationalize the use of the aerial resources (helicopters) that belong to the state of Rio de Janeiro. Consequently, the helicopters will better meet the demands of human health and safety, as well as actively participate in these large sporting events.
The main objective of this study was to determine a method that could be used to justify potential investment
opportunities in registered public-use heliports based on their requirements and their locations relative to points of
public interest. To accomplish this task, Helicopter Landing Zones, or HLZs, were mapped and identified by the Aerial
and Maritime Group (Grupamento Aéreo e Marítimo, GAM) of the Military Police of the State of Rio de Janeiro (Polícia Militar do Estado do Rio de Janeiro, PMERJ).
In the city of Rio de Janeiro, various zones were identified by the GAM as HLZs. Yet, these zones do not have the
appropriate identification, illumination or signage. Thus, these HLZs do not meet the technical standards that would qualify them as appropriate for helicopter landing. Here, several aspects, including the proximity
of the HLZs to hospitals, PMERJ (Military Police of the State of Rio de Janeiro) units, Fire Department (CBMERJ),
Civil Police (PCERJ) and the major sporting competition locations, were used to identify the most relevant HLZs in the
city of Rio de Janeiro (according to these criteria). In addition, this study serves to stimulate the use of the HLZs and
provide subsidies for developing public policies for streamlining the existing aerial resources (helicopters) that belong
to corporations within the state of Rio de Janeiro.
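
To illustrate the second stage, the sketch below implements a minimal version of weighted ranking with fuzzy linguistic variables: ratings map to triangular fuzzy numbers, and a centroid defuzzification of the weighted sum yields a crisp score per HLZ. The terms, weights, and the two example HLZs are invented, not the GAM/PMERJ data.

TERMS = {"poor": (0, 0, 4), "fair": (3, 5, 7), "good": (6, 10, 10)}  # (l, m, u)

def crisp_score(ratings, weights):
    # weighted sum of triangular fuzzy numbers, defuzzified by centroid
    l, m, u = (sum(w * TERMS[r][k] for r, w in zip(ratings, weights))
               for k in range(3))
    return (l + m + u) / 3.0

weights = [0.5, 0.3, 0.2]      # e.g. hospitals, police units, sporting venues
hlzs = {"HLZ-1": ["good", "fair", "poor"], "HLZ-2": ["fair", "good", "good"]}
for h in sorted(hlzs, key=lambda h: crisp_score(hlzs[h], weights), reverse=True):
    print(h, round(crisp_score(hlzs[h], weights), 2))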
Considering that it is unlikely that the city will have conventional terrestrial transport capable of handling the numerous tourists and authorities, Rio de Janeiro will face an increased demand for air transport via helicopter to move between the different sports facilities, integrated with the local assistance and defense agencies.
Today, Rio de Janeiro faces a challenge that it has never faced before. Investments in various sectors, led by the oil and gas industry, sum, according to the Federation of Industries of the State of Rio de Janeiro (FIRJAN), to $76 billion during the period from 2011 to 2013. This is one of the largest concentrations of investment in the world, given the volume of investments relative to the small territorial dimension of the state.
Air transport demand brought by those investments, combined with the fact that the city will host the 2014 World Cup
and the 2016 Olympic Games, requires a focused technical mapping that allows streamlining the aerial resources

International Journal of Industrial Engineering, 21(1), 45-51, 2014

AN APPLICATION OF CAPACITATED VEHICLE ROUTING PROBLEM TO
REVERSE LOGISTICS OF DISPOSED FOOD WASTE
Hyunsoo Kim1, Jun-Gyu Kang2*, and Wonsob Kim3
1,3 Department of Industrial and Management Engineering, University of Kyonggi, San 94-6, Iui-dong, Yeongtong-gu, Suwon, Gyeonggi-do 443-760, Republic of Korea
2 Department of Industrial and Management Engineering, Sungkyul University, Sungkyul Daehak-Ro 53, Manan-gu, Anyang, Gyeonggi-do 430-742, Republic of Korea
* Corresponding author's e-mail: [email protected]

Reverse logistics for the transportation of food waste from local collecting areas to designated treatment facilities produces enormous amounts of greenhouse gas. The Korean government has recently introduced RFID technology in hopes of reducing CO2 production. In this study, we evaluated the reduction in the total route distance required for reverse logistics based on the total weight of food waste in each collecting area. We defined the testing environment as a CVRP (capacitated vehicle routing problem) based on actual field data. As our first alternative method, we introduced a Fixed CVRP for the improvement of current reverse logistics, and we also applied a daily Dynamic CVRP, which considers daily weight information on the total food waste at each collecting area in order to determine the optimum routes for reverse logistics. We compared experimental results for total routing distance under three different scenarios: current practice, Fixed CVRP, and daily Dynamic CVRP.
Key words: Reverse logistics, Food waste, CVRP, Sweep method, RFID, Greenhouse gas (CO2)

1. INTRODUCTION
The amount of disposed food waste has been increasing continuously since January 2013, when the Korean government prohibited the dumping of food waste into the sea. This act was in line with the 1996 Protocol to the London Dumping Convention to stop marine pollution by the dumping of waste and other matter (Daniel, 2012). Food waste is the largest portion (28.8%) of domestic municipal waste, and the disposed amount has been increasing continuously since 2005: 65 tons/day (2005), 72.1 tons/day (2007), 79.1 tons/day (2009), and 79.8 tons/day (2011) (Seoul Metropolitan Government, 2012).
In order to reduce and properly manage disposed food waste, the Ministry of Environment and the Ministry of Public Administration and Security started a pilot project in 2011 on a weight-based charging system, under which the fee charged increases in proportion to the weight of the food waste discarded, measured using RFID technology. The system can charge a monthly fee to an individual based on the total amount of disposed food waste measured via RFID-reader-equipped containers. Originally operational in only 10 of 229 local governments, this system had spread to 129 local governments as of June 2013. According to a report from the Gimcheon-gu local government in the Gyeongsangbuk-do provincial government, which has already adopted this system, disposed food waste has been reduced by 47% since 2011 (Korea Environment Corporation, 2012).
With the advent of RFID technology, it has become possible to take full advantage of all this information (Zhang et al., 2011). Currently, the RFID technology is used solely for identifying the individual who disposes of food waste and measuring its weight. Unfortunately, however, the important information on the total weight of food waste disposed at each container, which can be obtained from the current RFID system, is not being used by the reverse logistics providers (collectors) who collect the disposed food waste from each container. Therefore, fixed routings based on fixed schedules are still applied for the reverse logistics of disposed food waste.
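
The sweep method listed in the keywords can be sketched in a few lines: collection points are sorted by polar angle around the depot and packed into routes until the vehicle capacity is reached. The coordinates and RFID-measured daily weights below are invented for illustration, not the study's field data.

import math

depot = (0.0, 0.0)
points = {                       # point -> ((x, y), daily waste weight in kg)
    "a": ((2, 1), 300), "b": ((1, 3), 500), "c": ((-2, 2), 400),
    "d": ((-1, -3), 600), "e": ((3, -1), 200),
}
capacity = 1000                  # vehicle capacity in kg

def polar_angle(p):
    (x, y), _ = points[p]
    return math.atan2(y - depot[1], x - depot[0])

routes, current, load = [], [], 0
for p in sorted(points, key=polar_angle):      # sweep around the depot
    w = points[p][1]
    if load + w > capacity:                    # truck full: close the route
        routes.append(current)
        current, load = [], 0
    current.append(p)
    load += w
routes.append(current)
print(routes)                    # each route is then sequenced like a small TSP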
The problem dealt with in this paper is considered a Vehicle Routing Problem (VRP, hereafter), which is a combinatorial optimization and integer programming problem of designing optimal delivery or collection routes from one or several depots to a number of geographically scattered cities or customers with a fleet of vehicles (Laporte, 1992). In general, the VRP comprises two combinatorial optimization problems, i.e., the Bin-Packing Problem (BPP) and the Travelling Salesman Problem (TSP). Assigning each customer to a vehicle is a BPP, while designing the optimal route for each vehicle and its assigned customers is a TSP (Fenghe and Yaping, 2010). The VRP is the intersection of these two difficult problems, both known to be NP-hard, and it becomes more difficult or even impossible to solve exactly as the number of customers or vehicles increases (Lin et al., 2014). Since its first proposal by Dantzig and Ramser in 1959, the VRP has played an important role in the fields of transportation, distribution and logistics. Depending on additional practical restrictions, a wide variety of VRPs exists. Along with the traditional variations of the VRP, Capacitated VRP (CVRP), VRP with Time Window (VRPTW), Multi depot VRP (MDVRP), and

International Journal of Industrial Engineering, 21(2), 53-65, 2014

DELIVERY MANAGEMENT SYSTEM USING THE CLUSTERING BASED
MULTIPLE ANT COLONY ALGORITHM: KOREAN HOME APPLIANCE
DELIVERY
Taeho Kim and Hongchul Lee
School of Industrial Management Engineering, Korea University, Seoul, Republic of Korea

This paper deals with the heterogeneous fleet vehicle routing and scheduling problem with time windows (HFVRPTW) in the area of Korean home appliance delivery. The suppliers of modern home appliance products in Korea not only have to provide the traditional service of simply delivering goods to customers within the promised time, but they also need to perform additional services such as installation of the product and explanation of the product's functions. Therefore, reducing delivery costs while improving the quality of the service experienced by customers is an important issue for businesses. In order to meet these two demands, we generated a delivery schedule by using a heuristic clustering-based multiple ant colony system (MACS) algorithm. In addition, to improve service quality, we set up an expert system composed of a manager system and an Android-based driver system. The system was tested on home appliance delivery in Korea. This paper is significant in that it constructs an expert system for the entire process of distribution, from the generation of an actual schedule to management system setup.
Keywords: HFVRPTW, Ant colony algorithm, Home appliance delivery, Android, Information System

1. INTRODUCTION
The physical distribution industry is facing a rapid change in its business environment due to the development of
information and communication technology and the spread of Internet accessibility. Products are ordered both online and
offline. Through online communities, customers can freely share information relating to the entire process of product
purchasing such as product functions, delivery and installation. In particular, products handled in home appliance delivery
in recent years, like Smart TVs, notebooks, air conditioners and refrigerators, have complex functions in contrast to their
predecessors. Hence, it is important to provide installation and demonstration services while guaranteeing accurate and timely delivery. Such extended services have actually become an important factor for customers in building an image of a given company. Accordingly, separately from the traditional work of simply delivering a product to a customer, qualitative improvements of service, like product installation and explanation of product functions, have become an important part of home appliance delivery in Korea (Kim et al., 2013). From the companies' point of view, reducing delivery costs while improving the quality of delivery service experienced by customers is an important problem.
Basically, the problem of satisfying the constraints of delivery time desired by customers while finding the shortest
traveling route for vehicles is known as the Vehicle Routing Problem with Time Windows (VRPTW). The VRPTW model
is a representative NP-hard problem (Lenstra and Rinnooy Kan, 1981; Savelsbergh, 1985). There are many studies that have used
metaheuristics to solve this problem (Cordeau et al., 2001; Haghani and Banihashemi, 2002; Sheridan et al., 2013). In this
paper, we used the ant colony system (ACS) among the various metaheuristic methods to generate schedules (Dorigo and
Gambardella, 1997a, 1997b). ACS has the advantage of being able to respond flexibly even when the constraint rules
change. We also utilized a heuristic clustering algorithm in this paper to improve the calculation speed of the local search
part that requires the longest calculation time among the ACS processes (Dondo and Cerdá, 2007).
A delivery management system is required for qualitative delivery service improvement (Santos et al., 2008; Moon et al., 2012). We constructed an Android-based delivery management system to flexibly handle such problems as delivery
delays and delivery sequence changes that can occur due to the characteristics of delivery work. With this system,
managers can easily manage various accidents that can occur during deliveries and more effectively monitor the locations
of drivers and manage the delivery progress rate as well as idle drivers.
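
At the heart of the ACS used here is the pseudo-random-proportional rule of Dorigo and Gambardella (1997a): with probability q0 an ant exploits the edge with the best pheromone-times-visibility value, and otherwise it samples proportionally to that value. The sketch below illustrates only that rule, with invented distances and parameters; the full scheduler adds time windows, heterogeneous fleet constraints, clustering and pheromone updates.

import random

def next_customer(current, unvisited, tau, dist, beta=2.0, q0=0.9):
    # weight(j) = pheromone * visibility^beta, with visibility = 1/distance
    weight = {j: tau[current, j] * (1.0 / dist[current, j]) ** beta
              for j in unvisited}
    if random.random() < q0:                      # exploitation
        return max(weight, key=weight.get)
    r, acc = random.random() * sum(weight.values()), 0.0
    for j, w in weight.items():                   # biased exploration
        acc += w
        if acc >= r:
            return j
    return j                                      # guard against rounding

dist = {(0, 1): 2.0, (0, 2): 4.0, (0, 3): 3.0}    # invented travel times
tau = {k: 1.0 for k in dist}                      # uniform initial pheromone
print(next_customer(0, [1, 2, 3], tau, dist))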

2. LITERATURE REVIEW
Ever since Dantzig and Ramser (1959) attempted to solve the vehicle routing problem (VRP) by using an LP heuristic,
many researchers have introduced various mathematical models and solutions. Of the VRP types, VRPTW is the VRP
with a customer-demanded time constraint. Since VRPTW is an NP-hard problem, an optimum solution cannot be found within a restricted time. Studies related to VRPTW have advanced greatly with insertion heuristic research
(Solomon, 1987) as the starting point. Supported by recent advances in computer technology, studies applying
metaheuristic methods such as simulated annealing (Osman, 1993; Czech and Czarnas, 2002; Lin et al., 2011), tabu search

International Journal of Industrial Engineering, 21(2), 66-73, 2014

INFLUENCE OF DATA QUANTITY ON ACCURACY OF PREDICTIONS
IN MODELING TOOL LIFE BY THE USE OF GENETIC ALGORITHMS
Pavel Kovac, Vladimir Pucovsky, Marin Gostimirovic, Borislav Savkovic, Dragan Rodic
University of Novi Sad, Faculty of Technical Science, Trg Dositeja Obradovica 6, 21000 Novi Sad, Serbia
[email protected], [email protected], [email protected], [email protected], [email protected]
It is widely known that genetic algorithms can be used in search-space and modeling problems. In this paper, their ability to model a function while varying the amount of input data is tested. The function used in this research is a tool life function. This concept was chosen because, by being able to predict tool life, workshops can optimize their production rate to expenses ratio. They would also gain by minimizing the number of experiments necessary to acquire enough input data in the process of modeling the tool life function. Tool life is by its nature a problem dependent on multiple factors. By using four factors to acquire an adequate tool life function, realistic complexity is simulated while an acceptable computational time is maintained. As a result, a fairly clear threshold is noticed in the quantity of data that must be input to the optimization model to obtain acceptable accuracy in the output function.
Keywords: Modeling; Genetic Algorithms; Tool Life; Milling; Heuristic Crossover

1. INTRODUCTION
From early days when artificial intelligence was introduced, there is a prevailing trend of discovering capabilities
which lies inside this branch of science. As all machine related domain, with this one being no exception, there are
limits. These limits and boundaries of usage are often expanded and new purposes are constantly discovered. To be
able to achieve this goal one must be a very good student of the best teacher that is known to mankind; mother
nature. With an experience of more than five billion years our nature is a number one scientist and we are all proud
that we have an opportunity to learn whatever she has to offer. Mastery of creation such a variety of living beings is
no easy task and maintaining this delicate balance between species is something that requires time, experience and
understanding. No scientist is able to create something graceful, like variety of life on Earth, by share coincidence.
There has to be a consistency in process of creating and maintaining this complexity of living beings. Law which
lies behind this consistency had prevailed more than we can remember and is a simple postulate which tells us that
only those who are most adaptable to their environment will survive. By surviving more than others, less adaptable
individuals, every living organism is increasing chance to mate, with equally adaptable member of same specie and
creating offspring which posses the same, or higher level of adaptability to their environment. This law of selection
is something that enabled creation of this world that we live in. Seeing its effectiveness yet understanding simplicity
of this concept, we decided to model it. One way of succeeding in this is through genetic algorithms (GA). Since
they have been introduced, in early 1970s, GA present a very powerful tool in space search and optimization fields.
Introduce them to a certain area and, with a proper guidance, they will create a population of their own and
eventually yield individuals with highest attributes.
Over time, many scientists have managed to implement GAs successfully as a problem-solving technique. Sovilj et al. (2009) developed a model for predicting tool life in the milling process. Pucovsky et al. (2012) studied the dependence between the ability to model tool life with a genetic algorithm and the type of function. Čuš and Balič (2003) used a GA to optimize cutting parameters in the milling process. A similar procedure for optimizing parameters in turning processes was employed by Srikanth and Kamala (2008), and the optimization of multi-pass turning operations using genetic algorithms for the selection of cutting conditions and cutting tools with tool-wear effect was successfully reported by Wang and Jawahir (2005). Zhu (2012) managed to implement a genetic algorithm with local search to solve the job shop scheduling problem. Since job shop scheduling is a major area of interest and progress, Wang et al. (2011) succeeded in constructing a genetic algorithm with a new repair operator for the assembly procedure. Ficko et al. (2005) reported positive experiences in using GAs to form a flexible manufacturing system. Regarding tool life in face milling, a statistical approach using the response surface method has been covered by Kadirgama et al. (2008). Khorasani et al. (2011) used both Taguchi's design of experiments and artificial neural networks for tool life prediction in face milling. Pattanaik and Kumar (2011), using a bi-criterion evolutionary algorithm to identify Pareto optimal solutions, developed a system for product family formation in the area of reconfigurable manufacturing. The knapsack problem is now widely considered a classical example of GA implementation (Ezzaine, 2002).
Considering the weight and importance of modeling milling tool life with evolutionary algorithms, only a very small number of articles on this subject was found. Moreover, no papers discuss the influence of the quantity of input data on the results of the genetic algorithm's optimization function. Given these two facts, this article is presented as a way to, at least partially, fill the existing gap.
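
To make the setup concrete, the sketch below shows a small real-coded GA with heuristic crossover (as named in the keywords) fitting the coefficients of an extended Taylor-type tool life model T = C·v^a·f^b·d^c·B^e from experimental points. The coefficients and the synthetic data generated from them are assumptions for illustration, not the paper's milling experiments.

import random

random.seed(1)
TRUE = [8.0e4, -1.8, -0.7, -0.4, -0.3]        # assumed C, a, b, c, e

def tool_life(coef, x):
    C, a, b, c, e = coef
    v, f, d, B = x           # cutting speed, feed, depth, a fourth factor
    return C * v**a * f**b * d**c * B**e

# synthetic "experiments" on a 2^4 grid of the four factors
data = [((v, f, d, B), tool_life(TRUE, (v, f, d, B)))
        for v in (120, 180) for f in (0.1, 0.2) for d in (1, 2) for B in (1, 2)]

def fitness(coef):                            # negative sum of squared errors
    return -sum((tool_life(coef, x) - T) ** 2 for x, T in data)

def heuristic_cross(p1, p2):
    # offspring extends beyond the better parent, away from the worse one
    better, worse = (p1, p2) if fitness(p1) > fitness(p2) else (p2, p1)
    r = random.random()
    return [b + r * (b - w) for b, w in zip(better, worse)]

pop = [[random.uniform(1e4, 1e5)] + [random.uniform(-2, 0) for _ in range(4)]
       for _ in range(40)]
for _ in range(200):                          # elitism + heuristic crossover
    pop.sort(key=fitness, reverse=True)
    pop = pop[:10] + [heuristic_cross(*random.sample(pop[:10], 2))
                      for _ in range(30)]
print([round(c, 2) for c in max(pop, key=fitness)])

Varying the number of experimental points in data, as this paper does at full scale, is what exposes the threshold in input-data quantity below which the fitted function loses accuracy.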

International Journal of Industrial Engineering, 21(2), 74-85, 2014

PROACTIVE IDENTIFICATION OF SCALABLE PROGRAM ARCHITECTURES:
HOW TO ACHIEVE A QUANTUM-LEAP IN TIME-TO-MARKET
Christian Lindschou Hansen & Niels Henrik Mortensen
Department of Mechanical Engineering
Product Architecture Group
The Section of Engineering Design & Product Development
Technical University of Denmark
Building 426
DK-2800 Kgs. Lyngby
Email: [email protected], [email protected]
This paper presents the Architecture Framework for Product Family Master Plan. This framework supports the identification
of a program architecture (the way cost competitive variance is provided for a full range of products) for a product program
for product-based companies during the early stages of a product development project. The framework consists of three basic aspects: the market, the product program, and production, plus a time aspect captured in a multi-level roadmap. One of the unique
features is that these aspects are linked, allowing for an early clarification of critical issues through a structured process. The
framework enables companies to identify a program architecture as the basis for improving time-to-market and R&D
efficiency for products derived from the architecture. Case studies show that significant reductions of development lead time, of up to 50%, are possible.
Significance: Many companies are front-loading different activities when designing new product programs. This paper
suggests an operational framework for identifying a program architecture during the early development phases, to enable a
significantly improved ability to launch new competitive products with fewer resources.
Keywords: Product architecture, program architecture, product family, platform, time-to-market, scalability

1. INTRODUCTION
Many industrial companies are experiencing significant challenges in maintaining competitiveness. There are many
individual explanations behind these, but some of the common challenges that are often recorded from companies are:
Need to reduce time-to-market in R&D:
o Shorter product life cycles are increasing the demand for faster renewal of the product program in order to
postpone price drops and maintain competitive offerings (Manohar et al., 2010)
o Loss of market share in highly competitive markets call for improved launch responsiveness to match and
surpass the offerings of competitors (Chesbrough, 2013)
o Protection of niche markets and their attractive price levels requires continuous multi-launches of
competitive products (Hultink et al., 1997)
Need for achieving attractive cost and technical performance levels for the entire product program
o Increased competitiveness requires all products to be attractive both cost wise and performance wise
(Mortensen et al., 2010)
o Focusing of engineering resources requires companies to scale solutions to fit across the product program
(by sharing) and prepare them for future product launches (by reuse) (Kester et al., 2013)
o Sales forecasts from global markets are affected by an increasing number of external influences making it
more and more difficult to predict the sales of individual product variants, thus leaving no room for
compromising competitive cost and performance for certain product variants (Panda and Mohanty, 2013)
These externally induced challenges pose a major task for the whole company. Many approaches exist to handle these challenges, of an organizational, process, tool, and competence nature, originating within research from sciences across business, marketing, organization, technology, socio-technical studies, and engineering design. The research presented here originates within engineering design and product development, focusing on the development of a program architecture for a company. Although originating from the engineering design domain, which is naturally centered in the R&D function of a company, the development of program architectures has relations that stretch far into the marketing, product planning, sourcing, production, and supply chain domains, as well as into the company's overall product strategy.

International Journal of Industrial Engineering, 21(2), 86-99, 2014

AN APPROACH TO CONSIDER UNCERTAIN COMPONENT
FAILURE RATES IN SERIES-PARALLEL RELIABILITY
SYSTEMS WITH REDUNDANCY ALLOCATION
Ali Ghafarian Salehi Nezhada,*, Abdolhamid Eshraghniaye Jahromib, Mohammad Hassan Salmanic, Fereshte Ghasemid
a M.Sc. graduate in Industrial Engineering, Sharif University of Technology, Tehran, 14588-89694, Iran
b Associate Professor of Industrial Engineering, Sharif University of Technology, Tehran, 14588-89694, Iran
c Ph.D. student of Industrial Engineering, Sharif University of Technology, Tehran, 14588-89694, Iran
d M.Sc. graduate in Industrial Engineering, Amirkabir University of Technology, Tehran, 15875-4413, Iran
* Phone: +98 936 337 7547; Fax: +98 331 262 4268
Emails: [email protected], [email protected], [email protected], [email protected]

The Redundancy Allocation Problem (RAP) is a combinatorial problem that maximizes system reliability through the discrete selection of available components. The main purpose of this study is to demonstrate the effectiveness of robust optimization in solving the RAP. In this study, components' failures are assumed to follow an Erlang density, under which robust optimization is implemented. We suppose that the failure rate takes dynamic values instead of exact, fixed values; therefore, a new calculation method is presented to handle dynamic failure rate values in the RAP. Another assumption is that each subsystem can adopt either a cold-standby or an active redundancy strategy. Moreover, due to the complexity of the RAP, two algorithms, Simulated Annealing (SA) and Ant Colony Optimization (ACO), are designed to determine the robust system with respect to uncertain parameter values. In order to solve this problem and demonstrate the efficiency of the proposed algorithms, a benchmark problem from the literature is solved and discussed.
Keywords: Reliability Optimization; Robust Optimization; Series-Parallel System; Uncertain Failure Rate; Ant
Colony Optimization; Simulated Annealing.

1. INTRODUCTION
In general, reliability is the ability of a system to perform and maintain its functions in routine circumstances, as well
as in hostile or unexpected circumstances. The Redundancy Allocation Problem (RAP) is one of the classical problems in
engineering and other sciences to plan the selection of components for a system simultaneously, where these
components can be combined by different strategies. Generally, this problem is defined to maximize the system
reliability in such a way that predetermined constraints, such as total weight, total cost, and total volume, are
satisfied. The attractiveness of this problem in designing an appropriate system arises for products that require
high reliability. In general, it is possible to categorize series-parallel system problems into three major classes: the
reliability allocation, the redundancy allocation and the reliability and the redundancy allocation. In the reliability
allocation problems, the reliability of the components is determined such that the consumption of a resource under a
reliability constraint is minimized while the redundancy allocation problem generally involves the selection of
components and redundancy levels to maximize the system reliability given various system-level constraints [1]. In
fact, we can implement two approaches to improve the reliability of such a system using RAP. The first one is to
increase the reliability of the system components while the second one is using redundant components in various
subsystems in the system [2; 3]. This problem also has four major inputs: $\lambda_{iz_i}$, which represents the failure
rate of component $z_i$ in subsystem $i$; $C_{iz_i}$ and $W_{iz_i}$, which are the cost and weight of component $z_i$ in
subsystem $i$, respectively; and $\rho_i(t)$, which is the switch reliability in subsystem $i$ at a predetermined time
$t$. The general structure of series-parallel systems is shown in Fig. 1, where $i$ indicates the index of each subsystem.
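To make the series-parallel structure concrete, here is a minimal Python sketch (ours, not the authors') that evaluates system reliability under active redundancy, assuming exponentially distributed component lifetimes; the paper itself also treats Erlang densities and cold-standby strategies. All failure rates below are invented.

```python
import math

def component_reliability(failure_rate, t):
    # Exponential survival assumption (illustrative); the paper uses Erlang densities.
    return math.exp(-failure_rate * t)

def subsystem_reliability(failure_rates, t):
    # Active redundancy: the subsystem works if at least one component survives.
    unreliability = 1.0
    for lam in failure_rates:
        unreliability *= 1.0 - component_reliability(lam, t)
    return 1.0 - unreliability

def system_reliability(subsystems, t):
    # Series arrangement: every subsystem must work.
    r = 1.0
    for failure_rates in subsystems:
        r *= subsystem_reliability(failure_rates, t)
    return r

# Toy instance: three subsystems with 2, 3 and 2 redundant components.
subsystems = [[0.01, 0.02], [0.015, 0.015, 0.03], [0.02, 0.025]]
print(f"R(t=100) = {system_reliability(subsystems, 100):.4f}")
```

Making the failure rates interval-valued rather than fixed, as the paper proposes, would amount to evaluating this function over a set of rate scenarios and optimizing the worst case.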
Generally, previous studies were conducted in a deterministic environment in which the failure rate of each component
is constant. Conversely, in the real world the precise failure rate of each component is usually very hard to estimate,
and it is more practical to consider flexible values for this group of parameters. This assumption is all the more
valuable because failure rates can be affected by factors such as labor, machines, environmental conditions, and the way
components are used. In this study, it is assumed that no deterministic values are available for failure rates. The
major goal of this study is therefore to solve RAP under uncertain failure-rate values by implementing a robust
optimization approach.
The general structure of this paper is as follows. First, a concise and comprehensive review of the various studies
conducted in recent decades is presented. Afterward, an extensive definition of robustness in RAP is proposed, and
according to this definition, an appropriate mathematical model is developed. Following these sections, we present the
SA and ACO algorithms in sections 5 and 6, respectively. Then, the proposed
International Journal of Industrial Engineering, 21(2), 100-116, 2014

A MODEL BASED QUEUING THEORY TO DETERMINE THE CAPACITY SUPPORT FOR TWIN-FAB OF WAFER FABRICATION
Ying-Mei Tu a, Chun-Wei Lu b
a. Department of Industrial Management, Chung Hua University, 707, Sec. 2, WuFu Rd., Hsinchu, Taiwan 30012, R.O.C.
b. Ph.D. Program of Technology Management - Industrial Management, Chung Hua University, 707, Sec. 2, WuFu Rd., Hsinchu, Taiwan 30012, R.O.C.
Corresponding author's e-mail: [email protected]

The twin-fab concept has been widely adopted over the past decade owing to cheaper facility cost, faster
installation, and more flexible productivity management. Nevertheless, without complete backup policies, the
benefits of a twin-fab decrease significantly, particularly in production flexibility and effectiveness. In this work,
a capacity support control policy is established and two control thresholds are developed. The first is the threshold
on the Work in Process (WIP) amount, which acts as a trigger for backup action; the concept of protective capacity
is applied to set this threshold. To ensure the effectiveness of WIP transfer between the twin fabs, a threshold on
the WIP amount difference (WDTH) is set as a control gate. WDTH is designed to maximize the expected saved
cycle time. The GI/G/m model is applied to develop equations for calculating the expected saved time. Finally, the
capacity support policy is validated by a simulation model. The results show that this policy is both feasible and
efficient.
Keywords: Twin-fab, Capacity support policy, Protective capacity, Queuing theory

1. INTRODUCTION
Compared with other industries, the manufacturing processes of wafer fabrication are more complicated, involving
re-entrant flows, batch processing, and time constraints (Rulkens et al., 1998; Robinson and Giglio, 1999; Tu et al.,
2010). In order to maintain high competitiveness, capacity expansion and upgrades to advanced technology are
necessary. However, managers face many difficulties in these situations: market demand changes quickly and
equipment costs are high. Given this situation, expanding capacity in dynamic environments is risky (Chou et al., 2007).
Over the past decades, many semiconductor manufacturing companies have adopted the twin-fab concept where
two neighboring fabs are installed in the same building and connected to each other through an Automatic Material
Handling System (AMHS). The advantages of twin-fab are as follows.
1. Reducing the cost of capacity expansion by sharing essential facilities, such as gas pumps and polluted-water
recycling systems.
2. Because the building and basic facilities are established in the initial stage, the construction time of the second fab
is reduced.
3. As a twin-fab consists of two neighboring fabs, real-time capacity backup can be achieved through the AMHS.
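The expected-saved-time calculation mentioned in the abstract rests on GI/G/m queueing formulas. As a rough, illustrative stand-in (the paper's own equations are not reproduced here), the following Python sketch estimates the mean queueing delay at a tool group using an Allen-Cunneen-type GI/G/m approximation built on the Erlang C formula; all rates and variability coefficients are hypothetical.

```python
import math

def erlang_c(m, rho):
    # Probability of waiting in an M/M/m queue (Erlang C), with rho = lam/(m*mu) < 1.
    a = m * rho
    s = sum(a**k / math.factorial(k) for k in range(m))
    last = a**m / (math.factorial(m) * (1 - rho))
    return last / (s + last)

def gigm_wq(lam, mu, m, ca2, cs2):
    # Allen-Cunneen-style approximation of mean waiting time in a GI/G/m queue:
    # scale the M/M/m wait by the average of the arrival and service SCVs.
    rho = lam / (m * mu)
    wq_mmm = erlang_c(m, rho) / (m * mu - lam)
    return (ca2 + cs2) / 2.0 * wq_mmm

# Toy workstation: 20 lots/hr arriving at 3 tools, each processing 8 lots/hr.
print(f"E[Wq] = {gigm_wq(20, 8, 3, ca2=1.2, cs2=0.8):.3f} hr")
```

Comparing such an estimate before and after a WIP transfer gives a feel for how a "saved cycle time" quantity can be derived from queueing parameters.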

International Journal of Industrial Engineering, 21(3), 117-128, 2014

LOCATION DESIGN FOR EMERGENCY MEDICAL CENTERS BASED ON CATEGORY OF TREATABLE MEDICAL DISEASES AND CENTER CAPABILITY
Young Dae Ko 1, Byung Duk Song 2, James R. Morrison 2 and Hark Hwang 2
1 Deloitte Analytics, Deloitte Anjin LLC, Deloitte Touche Tohmatsu Limited, One IFC, 23, Yoido-dong, Youngdeungpo-gu, Seoul 150-945, Korea
2 Department of Industrial and Systems Engineering, Korea Advanced Institute of Science and Technology, Guseong-dong, Yuseong-gu, Daejeon 305-701, Korea
Corresponding author's e-mail: [email protected]

With the proper location and allocation of emergency medical centers, the mortality rate of emergency patients could be
reduced by providing the required treatment within an appropriate time. This paper deals with the location design of
emergency medical centers in a given region under the closest assignment rule. It is assumed that the capability and
capacity to treat various categories of treatable medical diseases are provided for each candidate medical center as a
function of possible subsidies provided by the government. It is further assumed that the number of patients occurring at
each patient group node during a unit time is known along with the categories of their diseases. Additionally, to emphasize
the importance of timely treatment, we use the concept of a survival rate dependent on patient transportation time as well
as the category of disease. With the objective of minimizing the total subsidies paid, we select from among the candidate
medical centers subject to minimum desired survival rate constraints.
Keywords: Emergency Medical Center, Location Design, Closest Assignment Rule, Genetic Algorithm, Simulation, and
Survival Rate
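To illustrate the two modeling ingredients named in the abstract, a minimal Python sketch follows: a hypothetical exponentially decaying survival-rate curve and the closest assignment rule on plane coordinates. The decay rate, coordinates, and the 3-minutes-per-distance-unit conversion are invented for illustration and are not the paper's data.

```python
import math

def survival_rate(transport_min, decay_per_min):
    # Hypothetical curve: survival probability decays with transport time (minutes)
    # at a disease-specific rate; the paper's actual curves are not reproduced here.
    return math.exp(-decay_per_min * transport_min)

def closest_assignment(patient_nodes, open_centers):
    # Under the closest assignment rule each node is served by its nearest open center.
    assignment = {}
    for node, (x, y) in patient_nodes.items():
        best = min(open_centers, key=lambda c: math.hypot(x - c[0], y - c[1]))
        assignment[node] = best
    return assignment

patient_nodes = {"g1": (0, 0), "g2": (5, 2), "g3": (9, 9)}
open_centers = [(1, 1), (8, 8)]
for node, center in closest_assignment(patient_nodes, open_centers).items():
    d = math.hypot(patient_nodes[node][0] - center[0], patient_nodes[node][1] - center[1])
    # Assume 3 minutes of transport per distance unit (invented conversion).
    print(node, "->", center, f"survival ~ {survival_rate(d * 3, 0.02):.2f}")
```

A location-design search then amounts to choosing which candidate centers to open (and subsidize) so that every node's survival rate under this assignment clears its minimum threshold.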

1. INTRODUCTION
1.1 Background
A medical emergency is an injury or illness that is acute and poses an immediate risk to a person's life or long-term health.
For emergencies starting outside of medical care, two key components of providing proper care are to summon the
emergency medical services and to arrive at an emergency medical center where the necessary medical care is available.
To facilitate this process, each country provides its own national emergency telephone number (e.g., 911 in the USA, 119
in Korea) that connects a caller to the appropriate local emergency service provider. Appropriate transportation, such as an
ambulance, will be dispatched to deliver the emergency patient from the site of the medical emergency to an available
emergency medical center.
In Korea, there are four classes of emergency medical center: regional emergency medical center, specialized care
center, local emergency medical center, and local emergency medical facilities. One regional emergency medical center
can be assigned to each metropolitan city or province based on the distribution of medical infrastructure, demographics
and population. Specialized care centers can be allocated by the Korean Ministry of Health, Welfare and Family Affairs
with the special purpose of treating illnesses caused by poison, trauma and burns. According to Act 30 of the Korean
Emergency Medical Service Law, one local emergency medical center should be operated per 1 million people in
metropolitan cities and major cities. One such center per 0.5 million people is provided in the provinces. The facility to be
designated as such a center should be selected from among the general hospitals in a region based on the accessibility to
local residents and capability to address the needs of emergency patients with serious medical concerns. To retain the
designation as a local emergency medical center, the general hospital should provide more than one specialist in the fields
of internal medicine, surgery, pediatrics, obstetrics and gynecology and anesthesiology. Local emergency medical
facilities may be appointed from among the local hospitals to support the local emergency medical center and to treat less
serious conditions.
A flow chart depicting the Korean emergency medical procedure is provided in Figure 1; it is from the National
Emergency Medical Center of Korea (National Emergency Medical Center, 2013). Initially, the victim(s) or a first
responder calls 119 to request emergency medical service. The Emergency Medical Information Center (EMIC) then
dispatches an ambulance to the scene. When the ambulance arrives, on-scene treatment is first performed by an
emergency medical technician (EMT). The patient(s) are then transported to an emergency medical service (EMS)
facility by ambulance. During transport, information on the patient's condition may be communicated to the EMIC.
International Journal of Industrial Engineering, 21(3), 129-140, 2014

OPTIMAL JOB SCHEDULING OF A RAIL CRANE IN A RAIL TERMINAL


Vu Anh Duy Nguyen and Won Young Yun
Department of Industrial Engineering,
Pusan National University,
30 Jangjeon-Dong, Geumjeong-Gu,
Busan 609-735, South Korea
Corresponding author's e-mail: [email protected]

This study investigates the job sequencing problem of a rail crane at rail terminals with multiple train lanes. Two kinds
of containers are carried between trains and trucks. Inbound containers are loaded onto trains and outbound containers
are unloaded from trains. We consider the dual-cycle operation of the crane to load and unload containers between trains
and trucks. A branch-and-bound algorithm is used to obtain the optimal solution. A parallel simulated annealing algorithm
is also proposed to obtain near optimal solutions to minimize the makespan in job sequencing problems of large size.
Numerical examples are studied to evaluate the performance of the proposed algorithm. Finally, three different layouts
for rail terminals with different temporary storage areas are considered, and the performance of the three layouts is
compared numerically.

1. INTRODUCTION
Rail transportation is becoming more important in intermodal freight transportation as a means to cope with the rapid
changes taking place in global trade. However, the percentage of goods carried by trains within Europe dropped from
19.7% in 2000 to 16.5% in 2009 (Boysen et al. 2012). The main reasons for this decrease are the difficulties in
door-to-door transportation and the enormous initial investments involved in the construction of railroad infrastructure.
On the other hand, unit transportation costs decrease as the transportation distance increases, and rail transportation is
more environmentally friendly than road transportation.
Inbound and outbound containers are loaded and unloaded by rail cranes (RMGC, RTGC), forklifts and reach stackers
at rail stations, so that the handling equipment plays an important role of the infrastructure at rail terminals. When the
trains arrive at the rail station, outbound containers must first be unloaded from the trains, after which inbound containers
that are located in the container yard need to be loaded onto the trains. We consider the job sequencing problem for a rail
crane because its performance affects significantly the dwelling duration of trains at rail terminals and the throughput of
the terminals.
In this paper, we deal with the job sequencing problem for a rail crane at rail terminals and aim to minimize the
makespan of the unloading and loading operations. The dual-cycle operation of a crane is defined as follows: 1) picking
up an inbound container from a truck, 2) loading it onto one of the flat wagons of a train, 3) picking up an outbound
container from the train, and 4) loading it onto a truck that moves it to the terminal yard.
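The parallel simulated annealing used in the paper is not reproduced here, but the following minimal single-threaded Python sketch shows the skeleton such a heuristic follows for job sequencing: a swap neighborhood, a Metropolis acceptance test, and geometric cooling. The makespan function is a crude placeholder, not the paper's crane-travel model, and all job times are invented.

```python
import math
import random

def makespan(sequence, job_time):
    # Placeholder objective: per-job handling times plus a crude sequence-dependent
    # empty-travel penalty; the paper's crane-travel model is far more detailed.
    total = 0.0
    for i, job in enumerate(sequence):
        total += job_time[job]
        if i > 0:
            total += 0.1 * abs(job - sequence[i - 1])
    return total

def simulated_annealing(jobs, job_time, temp=10.0, cooling=0.995, iters=5000):
    current = jobs[:]
    random.shuffle(current)
    best = current[:]
    for _ in range(iters):
        cand = current[:]
        i, j = random.sample(range(len(cand)), 2)
        cand[i], cand[j] = cand[j], cand[i]          # swap-neighborhood move
        delta = makespan(cand, job_time) - makespan(current, job_time)
        # Accept improvements always, worse moves with Boltzmann probability.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current = cand
            if makespan(current, job_time) < makespan(best, job_time):
                best = current[:]
        temp *= cooling
    return best, makespan(best, job_time)

job_time = {k: random.uniform(2, 6) for k in range(10)}
seq, obj = simulated_annealing(list(job_time), job_time)
print("best sequence:", seq, f"makespan ~ {obj:.2f}")
```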
The operational problems in rail stations including layout design, load planning and rail crane scheduling have been
studied in the past. Kozan (1997) considered a heuristic decision rule for the crane split and a dispatching rule to assign
trains to rail tracks. Ballis and Golias (2002) considered the optimal design problem of some main design parameters of
a rail station, including length and utilization of transshipment tracks, train and truck arrival behavior, type and number
of handling equipment, and stacking height in storage areas. Abacoumkin and Ballis (2004) studied a design problem with
a number of user-defined parameters and equipment selections.
Feo and González-Velarde (1995) proposed a branch-and-bound algorithm and a greedy randomized adaptive search
procedure to optimally assign highway trailers to railcar hitches. Bostel and Dejax (1998) studied the process of loading
containers onto trains in a rail-rail transshipment shunting yard. They proposed both optimal and heuristic methods to
solve it. Corry and Kozan (2006) studied the load planning problem on flat wagons, considering a number of uncertain
parameters including dual-cycle operations and mass distribution. Bruns and Knust (2012) studied an optimization
problem to assign containers to wagons in order to minimize the set-up and transportation costs along with the aim of
maximizing the utilization of the train when a rail terminal is developed.
Boysen and Fliedner (2010) determined the disjunctive working area for each rail crane by dynamic programming,
although they did not consider the job sequence of the rail cranes. They employed simple workflow measures to separate
the crane working areas. Jeong and Kim (2011) dealt with the scheduling problem of a rail crane and the parking position
problem of trucks in rail stations located at seaport container terminals. In their scheduling problem, a single crane covers
each train and moves in one direction along the train. Pap et al. (2012) developed a branch and bound algorithm to
determine optimally the crane scheduling arrangement. They focused on the operation of a single crane, which is used to
transfer containers directly between the container yard and the train. Guo et al. (2013) dealt with a scheduling problem
of loading and unloading containers between a train and yards. They assumed that multiple gantry cranes are used, a
safety distance is required, and cranes cannot cross each other. However, the article assumed one-dimensional (gantry)
travel of the cranes and did not consider the dual-cycle operation or the re-handling issues of transferring containers.
International Journal of Industrial Engineering, 21(3), 141-152, 2014

BAYESIAN NETWORK LEARNING FOR PORT-LOGISTICS-PROCESS KNOWLEDGE DISCOVERY
Riska Asriana Sutrisnowati 1, Hyerim Bae 1 and Jaehun Park 2
1 Pusan National University, Republic of Korea
2 Worcester Polytechnic Institute, United States

A Bayesian network is a powerful tool for various analyses (e.g. inference analysis, sensitivity analysis, evidence
propagation, etc.); however, it is first necessary to obtain the Bayesian network structure of a given dataset, which is
an NP-hard problem and not an easy task. An enhancement approach has therefore been followed in order to learn a
Bayesian network from event logs. In the present study, a genetic-algorithm-based method for generating a Bayesian
network is developed and compared with a dynamic programming method. We also present the useful knowledge found
using our inference method.
Keywords: Bayesian network learning, mutual information, event logs

1. INTRODUCTION

Currently many businesses are supported by information systems that provide insight into what actually happens in
business process execution. This abundant data has been studied mainly in the growing research area of process mining
(Weijters et al., 2006; Goedertier et al., 2007; Gunther and van der Aalst, 2007; Song et al., 2009; van der Aalst, 2011;
De Weerdt et al., 2012). There are four perspectives on process mining (van der Aalst, 2011): control flow, organizational
flow, time, and data. Current process mining techniques for the most part can accommodate only one of these. A Bayesian
network, however, can handle two perspectives at once (e.g. control flow and data).
In our previous work (Sutrisnowati et al., 2012), we used a dependency graph, retrieved by Heuristic Miner (Weijters
et al., 2006), and decomposed any cycles found into a non-cycle structure. This methodology, though enabling quick
retrieval of the constructed Bayesian network, has drawbacks relating to the fact that its non-cycle structure is dependent
solely on the structure of the dependency graph. In other words, we have to take note of the fact that the structure is
supported only by the successive occurrences between activities and not by the common information shared. To remedy
this shortcoming, we developed a dynamic programming procedure (Sutrisnowati et al., 2013) based on a mutual
information score using the Mutual Information Test (MIT) (De Campos, 2006). The data used to calculate the MIT
score was originally not in the form of event logs, and, indeed, MIT was not designed for the business process
management field. Therefore, the formula was modified to accommodate the problem at hand. However, dynamic
programming, while capable of delivering the optimal score, still lacks performance. Therefore, a genetic algorithm,
along with a comparison against dynamic programming, is also presented in this paper.
This paper is organized as follows: section 2 discusses the background; sections 3 and 4 introduce the proposed method
and a case study from a real-world example, respectively; section 5 offers a discussion, and finally, section 6 concludes
our work.

2. BACKGROUND

2.1 Process Structure
The dataset used in the present study was in the form of an event log, denoted $L$. According to the hierarchical
structure of process execution event logs proposed by van der Aalst (2011), a process consists of cases, denoted $c$,
and each case consists of events, denoted $e$, such that an event is always related to one case. For instance, the tuples
$\langle A, B, C, D \rangle$ and $\langle A, C, B, D \rangle$ each represent an event-log case; in the first, an event $A$
is followed by an event $B$, then an event $C$ and, eventually, an event $D$. Since a case $c$ in the event log contains
a sequential process execution, we can assume that the data in the event log is ordered.
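For intuition, the following Python sketch computes an empirical mutual information between the occurrence indicators of two activities over a toy event log. This is a simplification: the paper's MIT-based score is richer and operates on candidate parent sets rather than single activity pairs, and the log below is invented.

```python
import math
from collections import Counter

# Toy event log: each case is an ordered list of activities.
log = [["A", "B", "C", "D"], ["A", "C", "D"], ["A", "B", "D"], ["A", "B", "C", "D"]]

def mutual_information(log, x, y):
    # Empirical MI between the per-case occurrence indicators of activities x and y.
    n = len(log)
    joint = Counter((x in case, y in case) for case in log)
    px = Counter(x in case for case in log)
    py = Counter(y in case for case in log)
    mi = 0.0
    for (a, b), c in joint.items():
        p_ab = c / n
        mi += p_ab * math.log(p_ab / ((px[a] / n) * (py[b] / n)))
    return mi

print(f"MI(B, C) = {mutual_information(log, 'B', 'C'):.4f}")
```

Higher scores indicate activities that share more information, which is the signal the structure-learning search exploits when proposing parents.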
For convenience, we assume that each event in the event log is represented by one random variable $X$, so that $X_A$
represents the random variable of an event $A$. $\widetilde{Pa}(X_i)$ and $Pa(X_i)$ denote the set of candidate
parent(s) and the actual parent(s) of an event in the event log, respectively. We can say that
$Pa(X_i) \subseteq \widetilde{Pa}(X_i)$ always holds, since a candidate parent that makes no higher contribution in the
iterative calculation of $MI_L(X_i, Pa(X_i))$ cannot be considered an actual parent. For example, event $A$ has an
empty candidate parent set, since event $A$ is the start event, denoted $\widetilde{Pa}(X_A) = \{\}$,
International Journal of Industrial Engineering, 21(3), 153-167, 2014

A HYBRID ELECTROMAGNETISM-LIKE ALGORITHM FOR A MIXED-MODEL ASSEMBLY LINE SEQUENCING PROBLEM
Hong-Sen Yan, Tian-Hua Jiang, and Fu-Li Xiong
MOE Key Laboratory of Measurement and Control of Complex Systems of Engineering, and School of Automation,
Southeast University, Nanjing, China
Corresponding author's e-mail: [email protected]

With the growth in customer demand diversification, research on mixed-model assembly lines has been given
increasing attention in the field of management. Sequencing decisions are crucial for managing mixed-model assembly
lines. To improve production efficiency, the product sequencing problem with skip utility work strategy and
sequence-dependent setup times is focused on in this study, and its mathematical model is established, whereby the idle
cost, the utility cost and the setup cost are optimized simultaneously. A necessary condition for the skip policy of the
system is set, and a lower bound on the utility work cost is given and theoretically proved. Strong NP-hardness of the
problem is confirmed. Addressing the main features of the problem, a hybrid EM-VNS (electromagnetism-like
mechanism - variable neighborhood search) algorithm is developed. To enhance the local search ability of EM, a VNS
algorithm is employed and five neighborhood structures are designed. With the aid of the VNS algorithm, a fine
neighborhood search around the optimum individual becomes available, improving the solution to a certain extent.
Simulation results demonstrate that the algorithm is feasible and valid.
Significance: This paper presents a hybrid EM-VNS algorithm to solve the product sequencing problem of a
mixed-model assembly line with a skip utility work strategy and sequence-dependent setup times. The simulation results
demonstrate that the proposed method is feasible and valid.
Keywords: Scheduling, Mixed-model assembly line sequencing, Skip utility work strategy, Sequence-dependent setup
times, Hybrid EM-VNS algorithm
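As a flavor of the VNS component, the Python sketch below applies two illustrative neighborhood structures (swap and insertion; the paper designs five) to a toy sequencing instance whose objective counts only sequence-dependent setup costs, omitting the idle and utility-work terms. All data and function names are invented.

```python
import random

def cost(seq, setup):
    # Illustrative objective: total sequence-dependent setup cost between
    # consecutive models only.
    return sum(setup[(seq[i], seq[i + 1])] for i in range(len(seq) - 1))

def swap(seq):
    s = seq[:]
    i, j = random.sample(range(len(s)), 2)
    s[i], s[j] = s[j], s[i]
    return s

def insertion(seq):
    s = seq[:]
    i, j = random.sample(range(len(s)), 2)
    s.insert(j, s.pop(i))
    return s

def vns(seq, setup, iters=2000):
    neighborhoods = [swap, insertion]   # two of the paper's five structures
    best = seq[:]
    for _ in range(iters):
        k = 0
        while k < len(neighborhoods):
            cand = neighborhoods[k](best)
            if cost(cand, setup) < cost(best, setup):
                best, k = cand, 0       # improvement: restart from first neighborhood
            else:
                k += 1                  # no improvement: try the next neighborhood
    return best

models = list("AABBCC")
setup = {(a, b): (0 if a == b else random.uniform(1, 3))
         for a in "ABC" for b in "ABC"}
best = vns(models, setup)
print("sequence:", "".join(best), "cost:", round(cost(best, setup), 2))
```

In the hybrid scheme, this local search refines the best individual produced by the electromagnetism-like population step.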

1. INTRODUCTION
To cope with the diversification of customers' demand, mixed-model assembly lines have gained increasing importance
in the field of management. A mixed-model assembly line (MMAL) is a type of production line where a variety of product
models similar in product characteristics are assembled. Two important decisions for managing mixed-model assembly
lines are balancing and sequencing. Sequencing is a problem of determining a sequence of the product models, whereby
a major emphasis is placed on maximizing the line utilization. In MMAL, products are transported on the conveyor belt
and operators move along with the belt while working on a product. An operator can work on a product only when it is
within his work zone limited by upstream and downstream boundaries. Whenever multiple labor-intensive models, e.g.,
all having an elaborate option, follow each other in direct succession at a specific station, a work overload situation
occurs, which means that the operator cannot finish work on a product before it leaves his station. Many outstanding
results have been achieved in this field. Okamura and Yamashina (1979) developed a sequencing method for
mixed-model assembly lines to minimize line stoppage. Yano and Rachamadugu (1991) addressed the problem of
sequencing mixed-model assembly lines to minimize work overload. Miltenburg and Goldstein (1991) developed
heuristic approaches to smooth production times by minimizing loading variation. Kim and Cho (2003) studied the
sequencing problem in a mixed-model final assembly line with multiple objectives by using simulated annealing
algorithm. Zhao and Ohno (1994, 1997) proposed a branch-and-bound method for finding an optimal or sub-optimal
sequence of mixed models that minimizes the total conveyor stoppage time. Chutima et al. (2003) applied fuzzy genetic
algorithm to the sequencing problem of mixed-model assembly line with processing time. Simaria and Vilarinho (2004)
presented an iterative genetic algorithm-based procedure for the mixed-model assembly line balancing problem with
parallel workstations to maximize the production rate of the line for a pre-determined number of operators. Akpinar and
Baykasolu (2014) proposed a multiple colony hybrid bee algorithm to solve the mixed-model assembly line balancing
problem with setups.
To simultaneously optimize the idle and overload costs, Sarker and Pan (1998) studied the MMAL design problem in
the cases of closed and open workstations. Yan et al. (2003) presented three heuristic methods combining tabu search
with quick schedule simulation for optimizing the integrated production planning and scheduling problem on
automobile assembly lines to minimize the idle and setup cost. Moghaddam and Vahed (2006) addressed a
multi-objective mixed assembly line sequencing problem to optimize the costs of utility work, productivity and setup
simultaneously. Tsai (1995) studied a class of assembly line sequencing problem to minimize the utility work and the
risk of line stop simultaneously. Fattahi and Salehi (2009) addressed a mixed-model assembly line sequencing
optimization problem with variable production cycle time to minimize the idle time and utility work costs. Bard et al.
(1994) developed a mathematical model that involved two objective functions in the mixed model assembly line
sequencing (MMALS): minimizing the overall line length and keeping a constant rate of part usage. They combined the
two objectives using a weighted sum and suggested a tabu search algorithm. Mohammadi and Ozbayrak (2006)
International Journal of Industrial Engineering, 21(3), 168-178, 2014

A VARIANT PERSPECTIVE TO PERFORMANCE APPRAISAL SYSTEM: FUZZY C MEANS ALGORITHM
Coskun Ozkan a, Gulsen Aydin Keskin b,*, Sevinc Ilhan Omurca c
[email protected], [email protected], [email protected]
a Yıldız Technical University, Mechanical Engineering Faculty, Industrial Engineering Department, Istanbul, Turkey,
Tel: +90 212 383 2865, Fax: +90 212 383 2866
b Kocaeli University, Engineering Faculty, Industrial Engineering Department, Umuttepe Campus, Kocaeli, Turkey
c Kocaeli University, Engineering Faculty, Computer Engineering Department, Umuttepe Campus, Kocaeli, Turkey

Performance appraisal and evaluating employees for rewards is an important issue in human resource management.
In performance appraisal systems, ranking scales and 360-degree feedback are the most commonly used evaluation
methods, in which the evaluator gives a score for each criterion to assess all employees. Ranking scales are relatively
simple assessment methods. Although ranking scales allow management to complete the evaluation process in a short
time, they have some disadvantages. In addition, although the various performance appraisal methods evaluate
employees in different ways, the employees receive scores for each evaluation criterion and their performances are
then judged according to total scores.
In this paper, the fuzzy c-means (FCM) clustering algorithm is applied as a new method to overcome the common
disadvantages of the classical appraisal methods and help managers make better decisions in a fuzzy environment.
The FCM algorithm not only selects the most appropriate employee(s), but also clusters them with respect to the
evaluation criteria. To explain the FCM method clearly, a performance appraisal problem is discussed and employees
are clustered both by the proposed method and by the conventional method. Finally, the results obtained by the current
system and by FCM are presented comparatively. This comparison concludes that, in performance appraisal systems,
FCM is more flexible and satisfactory than the conventional method.
Key words: Performance appraisal, fuzzy c-means algorithm, fuzzy clustering, multi-criteria decision making,
intelligent analysis.

1. INTRODUCTION
Employee performances such as capability, knowledge, skill, and other abilities are significantly important for the
organizations (Gungor et al., 2009). Hence, accurate personnel evaluation has a significant role in the success of an
organization. Evaluation techniques that allow companies to identify the best employee from the personnel are the key
components of human resource management (Sanyal and Guvenli, 2004). However, this process is highly complicated
due to human nature. The objective of an evaluation process depends on appraising the differences between employees and
estimating their future performances. The main goal of a manager is to attain ranked employees who have been
evaluated with regard to some criteria. Therefore, the development of efficient performance appraisal methods has
become a main issue. Some authors define the performance appraisal problem as an unstructured decision problem, that
is, no processes or rules have been defined for making decisions (Canos and Liern, 2008).
Previous research has shown that performance appraisal information is used especially in making decisions
requiring interpersonal comparisons (salary determination, promotion, etc.), decisions requiring personal comparison
(feedback, personal educational need, etc.), decisions orientated to the continuation of the system (target determination,
human force planning, etc.) and documentation. It is clear that in a conventional way, there are methods and tools to do
those tasks (Gürbüz and Albayrak, 2014); however, each traditional method has certain drawbacks. In this paper, the
fuzzy c-means (FCM) clustering algorithm is proposed to make performance evaluation more efficient by removing
these drawbacks.
The proposed method enables managers to group their employees with respect to several criteria. Thus, managers can
determine the most appropriate employee(s) in cases of promotion, salary determination, and so on. In addition, in the
case of personal educational requirements, the proposed method shows which employee(s) need training.
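For concreteness, a minimal fuzzy c-means iteration is sketched below in Python on invented employee scores; the membership and center updates follow the standard FCM formulas, while the criteria, scores, and cluster count are illustrative only and are not the paper's case data.

```python
import random

def fcm(points, c=2, m=2.0, iters=50):
    # Minimal fuzzy c-means: points is a list of feature vectors (criterion scores).
    centers = random.sample(points, c)
    for _ in range(iters):
        # Membership update: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
        u = []
        for p in points:
            d = [max(1e-9, sum((a - b) ** 2 for a, b in zip(p, v)) ** 0.5)
                 for v in centers]
            u.append([1.0 / sum((d[j] / d[k]) ** (2 / (m - 1)) for k in range(c))
                      for j in range(c)])
        # Center update: membership-weighted mean with weights u_ij^m
        centers = []
        for j in range(c):
            w = [u[i][j] ** m for i in range(len(points))]
            centers.append([sum(w[i] * points[i][t] for i in range(len(points))) / sum(w)
                            for t in range(len(points[0]))])
    return u, centers

# Employees scored on two hypothetical criteria (e.g. work quality, teamwork), 1-10.
scores = [(9, 8), (8, 9), (3, 2), (2, 3), (5, 5)]
u, centers = fcm([list(s) for s in scores])
for s, memb in zip(scores, u):
    print(s, [round(x, 2) for x in memb])
```

Unlike a hard ranking, the printed memberships show how strongly each employee belongs to every cluster, which is the flexibility the paper argues for.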
This paper proposes an alternative approach to the performance appraisal system. After a brief review of performance
appraisal in Section 2, the FCM algorithm is described in Section 3. A real-life problem is solved both by FCM and by
the conventional method to evaluate their performance, and the findings are discussed in Section 4. Finally, the paper
closes with a discussion and conclusions.

International Journal of Industrial Engineering, 21(4), 179-189 , 2014

AN EARLY WARNING MODEL FOR THE RISK MANAGEMENT OF GLOBAL LOGISTICS SYSTEMS BASED ON PRINCIPAL COMPONENT REGRESSION
Jean Betancourt Herrera 1, Yang-Byung Park 2
Department of Industrial and Management Systems Engineering, College of Engineering, Kyung Hee University,
Seocheon-dong, Giheung-gu, Yongin-si, Gyeonggi-do 446-701, Republic of Korea
1 [email protected], 2 [email protected] (corresponding author)

This paper proposes an early warning model for the risk management of global logistics systems based on principal
component regression (PCR) that predicts a country's global logistics system risk, identifies risk sources with
probabilities, and suggests ways of mitigating risk. Various quantitative and qualitative global logistics indicators are
utilized for monitoring the global logistics system. The Enabling Trade Index is used to represent the risk level of a
country's global logistics system. Principal component analysis is applied to identify a small set of global logistics
indicators that account for a large portion of the total variance in the original set. An empirical study is carried out to
validate the predictive ability of PCR using datasets of years 2010 and 2012 published by the World Economic Forum.
Furthermore, the superiority of PCR is evaluated by comparing its performance with that of a neural network with
respect to the correlation coefficient and coincident rate. Finally, a real-life example of the South Korean global
logistics system is presented.
Keywords: early warning model, global logistics system, risk management, principal component regression, neural
network.

1. INTRODUCTION
Global logistics is a collection of moving and storage activities required for trade between countries. In general, global
logistics is much more complicated and difficult to perform than domestic logistics because the goods flow over borders
and thus take a long time to transport. Complex administrative processes are involved, and more than one mode of
transportation is required (Shamsuzzoha and Helo, 2012). The components of a country's typical global logistics
system are tariffs, customs, documentation, transport infrastructure and services, information and communication
services, regulations, and security (Gourdin, 2006).
As global trade continues to expand, a country's sustainable global logistics system plays a crucial role in
achieving global competitiveness by shortening the logistics process time, reducing logistics costs, and securing
interoperability between different logistics sectors (Yahya et al., 2013). Establishing a sustainable global logistics
system requires a large government investment and takes multiple years. If a country cannot provide traders with a
satisfactory global logistics system, it will lose valuable customers and experience a significant drop in trade.
Therefore, it is very important for a country to predict its global logistics system risk in advance, identify the risk
sources where improvements are most needed, and investigate effective ways of mitigating risk.
An early warning system monitors system conditions and determines when to issue a warning signal through the
analysis of various system indicators. Thus, an early warning system is an effective tool for operating a sustainable
global logistics system for a country. It may also provide the relevant government ministries with strong evidence for
improving certain areas of the global logistics system, especially when allocating limited resources or establishing
global logistics policies.
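A compact Python sketch of the principal component regression idea follows, using synthetic data in place of the World Economic Forum indicators: standardize the indicator matrix, project onto the leading principal components, and regress the risk index on the component scores. All data, dimensions, and function names are invented for illustration.

```python
import numpy as np

def pcr_fit(X, y, n_components):
    # Standardize indicators, extract leading principal components via SVD,
    # then least-squares regress y on the component scores (plus intercept).
    mu, sigma = X.mean(axis=0), X.std(axis=0)
    Xs = (X - mu) / sigma
    _, _, vt = np.linalg.svd(Xs, full_matrices=False)
    components = vt[:n_components].T
    scores = Xs @ components
    beta, *_ = np.linalg.lstsq(
        np.column_stack([np.ones(len(y)), scores]), y, rcond=None)
    return components, beta, mu, sigma

def pcr_predict(X, components, beta, mu, sigma):
    scores = ((X - mu) / sigma) @ components
    return np.column_stack([np.ones(len(X)), scores]) @ beta

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 8))                 # 8 hypothetical logistics indicators
y = X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.1, size=30)  # synthetic risk index
comp, beta, mu, sigma = pcr_fit(X, y, n_components=3)
print("in-sample corr:",
      round(np.corrcoef(y, pcr_predict(X, comp, beta, mu, sigma))[0, 1], 3))
```

Keeping only a few components reduces the multicollinearity among correlated indicators, which is the usual motivation for PCR over ordinary regression.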
A few researchers have studied the development of risk early warning systems. Fordyce et al. (1992) proposed a
method for monitoring the manufacturing flow of semiconductor facilities in a logistics management system. Xie et al.
(2009) developed an early warning and control management process for internal logistics risk in small manufacturing
enterprises based on a label-card system equipped with RFID, through which an enterprise can monitor the quantity and
quality of work in process in a dynamic manner. Xu et al. (2010) presented an early warning model for food supply
chain risk based on principal component analysis and logistic regression. Li et al. (2010) presented a novel framework
for early warning and proactive control systems in food supply chain networks that combine expert knowledge and data
mining methods. Feng et al. (2008) proposed a simple early warning model with thresholds for four indicators as a
subsystem of the decision support system for price risk management of the vegetable supply chain in China. Xia and
Chen (2011) proposed a decision-making model for optimal selection of risk management methods and tools based on
International Journal of Industrial Engineering, 21(4), 190-208, 2014

AGILE AND FLEXIBLE SUPPLY CHAIN NETWORK DESIGN UNDER UNCERTAINTY
Morteza Abbasi, Reza Hosnavi, Reza Babazadeh
Department of Management and Soft Technologies, Malek Ashtar University of Technology, P.O. Box 1774/15875, Tehran, Iran

The agile supply chain has proved its efficiency and capability in dealing with the disturbances and turbulence of
today's competitive markets. This paper addresses strategic- and tactical-level decisions in agile supply chain
network design (SCND) under interval data uncertainty. An efficient mixed integer linear programming (MILP)
model is developed that considers the key characteristics of an agile supply chain, such as direct shipments,
outsourcing, different transportation modes, discounts, alliance (process and information integration) among opened
facilities, and the maximum waiting time of customers for deliveries. In addition, in the proposed model the capacity
of facilities is determined as a decision variable, whereas it is often assumed to be an input parameter. The robust
counterpart of the presented model is then developed, according to recent advances in robust optimization theory, to
deal with the inherent uncertainty of the input parameters. Computational results illustrate that the proposed robust
optimization model has a high degree of responsiveness in dealing with uncertainty compared with the deterministic
model. Therefore, the robust model can be applied as a powerful tool in agile and flexible SCND, which faces
different risks in competitive environments.
Keywords: Robust optimization, Agile supply chain network design, Flexibility, Outsourcing, Responsiveness.

1. INTRODUCTION
Today, high fluctuations and disturbances in business environments have driven supply chains to seek effective ways
to deal with the undesirable uncertainties that affect overall supply chain performance. Supply chain network design
(SCND) decisions, the most important strategic-level decisions in supply chain management, are concerned with the
complex interrelationships between various tiers, such as suppliers, plants, distribution centers and customer zones,
as well as with determining the number, location and capacity of facilities to meet customer needs effectively. Supply
chain management integrates the interrelationships between various entities by creating alliances, i.e.
information-system integration and process integration, between entities to improve the response to customers in
various respects, such as higher product variety and quality, lower costs and quick responses.
Typically, risks in SCND are classified into two categories: operational or internal risk factors and disruption or
external risk factors. Operational risks occur because of internal factors in the supply chain, such as improper
coordination between entities in various tiers, and include production risk, distribution risk, supply risk and demand
risk. In contrast, disruption risks result from external risk factors that arise from the interaction between the supply
chain and its environment, such as natural disasters, exchange rate fluctuations and terrorist attacks (Singh et al., 2011).
The increasing use of outsourcing, through sub-contracting some customer demands, together with the reduction in
product life cycles caused by customers' enthusiasm for fashion goods rather than commodities, has increased the
uncertainties in competitive environments. Therefore, the supply chain network (SCN) should be designed in a way
that can withstand such uncertainties. Chopra and Sodhi (2004) noted that organizations should consider uncertainty
in its various forms in planning and supply chain management in order to deal with its destructive and burdensome
effects on the supply chain network.
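As a toy illustration of interval-data robustness (not the paper's MILP), the Python sketch below enumerates facility-opening plans and picks the one whose worst-case cost over an interval-uncertain demand is smallest, i.e. a min-max decision. The costs and single-echelon structure are invented.

```python
from itertools import combinations

# Hypothetical candidate facilities with fixed opening and unit serving costs.
facilities = {"F1": {"fixed": 100, "unit": 2.0},
              "F2": {"fixed": 140, "unit": 1.5}}
demand_interval = (40, 70)        # total demand known only to lie in this interval

def worst_case_cost(open_set):
    if not open_set:
        return float("inf")       # demand cannot be served at all
    cheapest = min(facilities[f]["unit"] for f in open_set)
    fixed = sum(facilities[f]["fixed"] for f in open_set)
    # Cost is increasing in demand, so the worst case sits at the upper bound.
    return fixed + cheapest * demand_interval[1]

plans = [frozenset(c) for r in range(len(facilities) + 1)
         for c in combinations(facilities, r)]
best = min(plans, key=worst_case_cost)
print("robust plan:", sorted(best), "worst-case cost:", worst_case_cost(best))
```

A robust counterpart of a full MILP generalizes this idea: every uncertain coefficient is replaced by constraints that remain feasible for all realizations in its interval.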
One of the vital challenges for organizations in today's turbulent markets is the need to respond to customer
needs of varying volume and vast variety quickly and efficiently (Amir, 2011). Agility, in its various contexts, is
the most popular approach enabling organizations to cope with unstable and highly volatile customer demands. The
most important concepts of agility are described in the next section. It should be noted here that agility concepts
should be applied in the upstream and downstream relationships of supply chain management, including supplier
selection, logistics, information systems, etc. Since SCND is the most important strategic-level decision affecting
the overall performance of the supply chain, it is necessary to consider agility concepts such as response to
customers within a maximal allowable time, direct shipment, alliance (i.e., information and process integration)
between entities in different echelons, discounts to achieve a competitive supply chain, outsourcing and different
transportation modes to achieve flexibility, as well as safety stock to improve responsiveness. Considering agility
concepts in SCND evidently plays a crucial role in the agility of the overall supply chain. To date, many researchers
have addressed the most important factors in agile supply chain management conceptually, but this context has been
omitted in the mathematical modeling literature, especially in the supply chain network design area.

International Journal of Industrial Engineering, 21(4), 209-230, 2014


SIMULATION OPTIMIZATION OF FACILITY LAYOUT DESIGN PROBLEM WITH SAFETY AND ERGONOMICS FACTORS
Ali Azadeh 1, Bita Moradi 1
1 Department of Industrial Engineering, College of Engineering, University of Tehran, Tehran, Iran
Corresponding author's e-mail: [email protected]

This paper presents an integrated fuzzy simulation-fuzzy data envelopment analysis (FDEA)-fuzzy analytic hierarchy
process (FAHP) algorithm for optimization of flow shop facility layout design (FSFLD) problem with safety and
ergonomics factors. Almost all FSFLD problems are solved and optimized without safety and ergonomics factors.
First, safety and ergonomics factors are retrieved from a standard questionnaire. Then, feasible layout alternatives are
generated by a software package. Third, FAHP is used to weight the non-crisp ergonomics and safety factors in
addition to the maintainability, accessibility and flexibility (or qualitative) indicators. Fuzzy simulation is then used
to incorporate the ambiguity associated with processing times in the flow shop by considering all generated layout
alternatives with uncertain inputs. The outputs of fuzzy simulation, i.e. the non-crisp operational indicators,
are average waiting time in queue, average time in system and average machine utilization. Finally, FDEA is used for
finding the optimum layout alternatives with respect to ergonomics, safety, operational, qualitative and dependent
indicators (distance, adjacency and shape ratio). The integrated algorithm provides a comprehensive analysis on the
FSFLD problems with safety and ergonomics issues. The results have been verified and validated by DEA, principal
component analysis and numerical taxonomy. The unique features of this study are its ability to deal with multiple
non-crisp inputs and outputs, including ergonomics and safety factors, and its use of fuzzy mathematical programming
to select optimum layout alternatives by considering safety and ergonomics factors as well as other standard indicators.
Moreover, it is a practical tool and may be applied in real cases by considering safety and ergonomics issues within
FSFLD problems.
Keywords: Simulation Optimization; Flow Shop Facility Layout Design; Fuzzy DEA; Safety; Ergonomics
Motivation and Significance: Almost all FSFLD problems are solved and optimized without safety and ergonomics
factors. Moreover, only standard factors related to operational and layout-dependent issues are considered in such
problems. There are usually missing data, incomplete data or a lack of data with respect to FSFLD problems in general
and safety and ergonomics issues in particular. This means the data cannot be collected and analyzed by deterministic
or stochastic models, and new approaches for tackling such problems are required. This gap motivated the authors to
develop a unique simulation optimization algorithm to handle these gaps in FSFLD problems.
The integrated fuzzy simulation-fuzzy DEA-fuzzy AHP algorithm presents an exact solution to FSFLD problems
with safety and ergonomics issues, whereas previous studies present incomplete and inexact alternatives. It also
provides a comprehensive analysis of FSFLD problems under uncertainty by incorporating non-crisp ergonomics and
safety indicators in addition to fuzzy operational, dependent and qualitative indicators. Moreover, it provides complete
and exact rankings of the plant layout alternatives with uncertain and fuzzy inputs. The superiority and effectiveness of
the proposed integrated algorithm is compared with previous DEA-Simulation-AHP, AHP-DEA, AHP-principal
component analysis (PCA), and numerical taxonomy (NT) methodologies through a case study. The unique features of
the proposed integrated algorithm are its ability to deal with multiple fuzzy inputs and outputs (ergonomics and
safety in addition to operational, qualitative and dependent), its optimization of layout alternatives through fuzzy DEA,
and, third, its practicality owing to the consideration of the ergonomics, safety, operational and dependent aspects of the
manufacturing process within FSFLD problems.
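To hint at what fuzzy simulation of processing times can look like, the Python sketch below represents each processing time as a triangular fuzzy number, samples it, and estimates a makespan distribution for a single-machine pass. This is a stand-in under our own assumptions, not the authors' fuzzy-simulation machinery, and all job data are invented.

```python
import random
import statistics

def sample_triangular(low, mode, high):
    # Treat the (low, mode, high) triangular fuzzy number as a sampling
    # distribution for simulation purposes.
    return random.triangular(low, high, mode)

def simulate_flow(job_fuzzy_times, replications=1000):
    makespans = []
    for _ in range(replications):
        finish = 0.0
        for low, mode, high in job_fuzzy_times:   # single-machine pass, in order
            finish += sample_triangular(low, mode, high)
        makespans.append(finish)
    return statistics.mean(makespans), statistics.stdev(makespans)

jobs = [(2, 3, 5), (1, 2, 4), (3, 4, 6)]          # (low, mode, high) per job
mean, sd = simulate_flow(jobs)
print(f"makespan ~ {mean:.2f} +/- {sd:.2f}")
```

Running such a simulation for every candidate layout yields the non-crisp operational indicators that the fuzzy DEA stage then ranks.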

1. INTRODUCTION
Facility layout design (FLD) is a critical issue for productivity and profitability when redesigning, expanding, or
designing manufacturing systems, e.g. flow shop systems (FSFLD). Zhenyuan et al. (2013) showed that a lean facility
layout system can increase productivity efficiency. Also, Mortensen et al.



International Journal of Industrial Engineering, 21(4), 231-242, 2014

A CONCURRENT APPROACH FOR FACILITY LAYOUT AND AMHS DESIGN IN SEMICONDUCTOR MANUFACTURING
Dongphyo Hong 1, Yoonho Seo 1,*, Yujie Xiao 2
1 School of Industrial Management Engineering, Korea University, Seoul, Korea
2 Department of Logistics Management, School of Marketing & Logistics Management, Nanjing University of Finance & Economics, Nanjing, People's Republic of China
* Corresponding author: [email protected]

This paper presents a concurrent approach to solve the design problem of facility layout and automated material
handling system (AMHS) for semiconductor fabs. The layout is composed of bays which are unequal-area blocks with
equal height but flexible width. In particular, the bay width and locations of a shortcut, bays, and their stockers, which
are major fab design considerations, are concurrently determined in this problem. We developed a mixed integer
programming (MIP) model for this problem to minimize the total travel distance (TTD) based on unidirectional
inter-bay flow and a bidirectional shortcut. To solve large-sized problems, we developed a five-step heuristic algorithm to
exploit and explore the solution space based on the MIP model. The computational results show that the proposed
algorithm is able to find optimal solutions of small-sized problems and to solve large-sized problems within acceptable
time.
Keywords: Facility Layout, Bay Layout, AMHS Design, Concurrent Approach, Semiconductor Manufacturing

1. INTRODUCTION
In a 300 mm wafer fabrication facility (fab), a wafer typically travels about 13-17 km and visits about 250 pieces of
process equipment during processing (Agrawal and Heragu, 2006). An effective facility layout and material handling
system design can significantly reduce the total travel distance of wafers. As stated by Tompkins et al. (2010), 20-50%
of the total operating expenses within manufacturing are attributed to material handling, and an efficient design of the
facility layout and material handling can reduce operational cost by at least 10-30%. Therefore, two challenges are
presented to a fab designer: (1) facility layout and (2) automated material handling system (AMHS) design
(Montoya-Torres, 2006).
This paper focuses on the fab design comprising a bay layout and AMHS design with a spine configuration, which
usually has a unidirectional flow and bidirectional shortcuts, as presented by Peters and Yang (1997). Here, each bay is
composed of equipment that performs similar processes and forms a rectangular shape. However, they approached the
two sub-problems in a sequential manner, which may result in a locally optimal solution. In this study, a concurrent
approach is proposed to find the optimal solution of the two sub-problems simultaneously, as shown in Figure 1.

Figure 1. Layout example using a shortcut

Figure 2. Representation of the problem
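The objective driving the MIP can be illustrated with a few lines of Python: total travel distance over a unidirectional loop with one bidirectional shortcut. The positions, flows, and shortcut geometry below are invented for illustration, not taken from the paper.

```python
def loop_dist(a, b, L):
    # Unidirectional loop: travel only in the direction of increasing position.
    return (b - a) % L

def travel(a, b, L, s1, s2, shortcut_len):
    # Either stay on the loop, or traverse the bidirectional shortcut connecting
    # loop positions s1 and s2 (in either direction), whichever is shortest.
    via1 = loop_dist(a, s1, L) + shortcut_len + loop_dist(s2, b, L)
    via2 = loop_dist(a, s2, L) + shortcut_len + loop_dist(s1, b, L)
    return min(loop_dist(a, b, L), via1, via2)

def total_travel_distance(flow, pos, L, s1, s2, shortcut_len):
    # flow[(i, j)] = number of lots moved from bay i's stocker to bay j's stocker.
    return sum(q * travel(pos[i], pos[j], L, s1, s2, shortcut_len)
               for (i, j), q in flow.items())

pos = {"B1": 0, "B2": 25, "B3": 50, "B4": 75}     # stocker positions on a 100 m loop
flow = {("B1", "B3"): 10, ("B3", "B1"): 8, ("B2", "B4"): 5}
print("TTD =", total_travel_distance(flow, pos, 100, s1=10, s2=60, shortcut_len=20))
```

The design problem is then to choose the bay widths, stocker positions, and shortcut location that minimize this quantity, which is what the MIP and the five-step heuristic do jointly.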

The facility layout problem (FLP) is to determine the physical placement of departments within the facility (Kusiak
and Heragu, 1987). In semiconductor manufacturing, the layout design usually has a bay structure with a spine
configuration to enhance the utilization of process equipment and to facilitate frequent maintenance (Agrawal and Heragu, 2006).
Peters and Yang (1997) suggested a methodology for an integrated layout and AMHS design which enables spine and
perimeter configurations in a semiconductor fab. Azadeh and Izadbakhsh (2008) presented an analytic hierarchy process
and principal component analysis to solve the FLP. Ho and Liao (2011) proposed a two-row dual-loop bay layout. They
International Journal of Industrial Engineering, 21(5), 243-252, 2014

ASSISTING WHEELCHAIR USERS ON BUS RAMPS: A POTENTIAL CAUSE OF LOW BACK INJURY AMONG BUS DRIVERS
Piyush Bareria 1, Gwanseob Shin 2
1 Department of Industrial and Systems Engineering, State University of New York at Buffalo, Buffalo, New York, USA
2 School of Design and Human Engineering, Ulsan National Institute of Science and Technology, Ulsan, Korea
Corresponding author's e-mail: [email protected]
Manual assistance to wheelchair-users while boarding and disembarking a bus may be an important risk factor for
musculoskeletal disorders of bus drivers, but no study has yet assessed biomechanical loads associated with the manual
assist operations. In this study, off-duty bus drivers simulated wheelchair-user assisting operations using forward and
backward strategies for boarding and disembarking ramps. Low-back compression and shear forces, shoulder moments and
percent population capable of generating required shoulder moment were estimated using the University of Michigan
Three-Dimensional Static Strength Prediction Program. L4-L5 compression force ranged from 401.6 N for forward
boarding to 2169.3 N for backward boarding (pulling), and from 2052.4 N for forward disembarking to 434.2 N for
backward disembarking (pushing). The shoulder moments were also consistently higher for the pushing tasks. It is
recommended that bus drivers adopt backward boarding and forward disembarking strategies to reduce the biomechanical
loads on the low back and shoulder.
Keywords: musculoskeletal injury, bus driver, wheelchair pushing/pulling, bus access ramp
(Received on September 9, 2012; Accepted on September 15, 2014)
1. INTRODUCTION
Bureau of Labor Statistics data (BLS, 2009) indicate that among bus drivers (transit and intercity), back injuries and
disorders constitute about 25% of reported cases of nonfatal work-related injuries and illnesses resulting in days away from
work. Data from the same year reports a work-related nonfatal back injury/illness incidence rate (IR) of 12.3 per 10,000 full
time bus drivers, which was greater than that of construction laborers (IR = 10.6). A number of studies have also evaluated
the prevalence of work-related musculoskeletal disorders in the upper body quadrant (neck, upper back, shoulder, elbow,
wrist, etc.) in drivers of different types of vehicles (Greiner & Krause, 2006; Langballe, Innstrand, Hagtvet, Falkum, &
Aasland, 2009; Rugulies & Krause, 2008). The prevention of musculoskeletal injuries of bus drivers and associated
disability has become a major challenge for employers, insurance carriers, and occupational health specialists.
Physical risk factors that have been associated with the high prevalence of work-related musculoskeletal disorders of
drivers include frequent materials handling activities as well as prolonged sitting and exposures to whole body vibration
(WBV) (Magnusson, Pope, Wilder, & Areskoug, 1996; Szeto & Lam, 2007). Specifically, bus drivers in public
transportation may also be exposed to the risks of heavy physical activities from manual assisting of wheelchair users.
Bus drivers of public transit systems are mandated by law to assist a person in a wheelchair to board and disembark
buses if needed. Sub-section 161(a) of the Code of Federal Regulations on transportation services for individuals with
disabilities (49 CFR 37, U.S.) requires that public and private entities providing transportation services "maintain in operative
condition those features of facilities and vehicles that are required to make the vehicles and facilities readily accessible to
and usable by individuals with disabilities." In addition, 49 CFR 37 sub-section 165(f) states that, where necessary or upon
request, "the entity's personnel shall assist individuals with disabilities with the use of securement systems, ramps and lifts.
If it is necessary for the personnel to leave their seats to provide this assistance, they shall do so." With an estimated 1.6
million non-institutionalized wheelchair users in the U.S., of which about 90% use hand-rim propelled, so-called manual,
wheelchairs (Kaye, Kang, & LaPlante, 2005), bus drivers are likely to assist wheelchair users during their daily shifts,
which could involve manually lifting, pushing and/or pulling the occupied wheelchair.
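For a sense of the magnitudes involved, a quasi-static estimate of the force needed to push an occupied wheelchair up a ramp is sketched below in Python (gravity component along the incline plus rolling resistance); this simple physics model and its parameter values are ours and are not the 3DSSPP analysis used in the study.

```python
import math

def ramp_push_force(total_mass_kg, slope_ratio, rolling_coeff=0.02, g=9.81):
    # Quasi-static push force on an incline: gravity component along the ramp
    # plus rolling resistance; dynamic and postural effects are ignored.
    theta = math.atan(slope_ratio)
    gravity = total_mass_kg * g * math.sin(theta)
    rolling = rolling_coeff * total_mass_kg * g * math.cos(theta)
    return gravity + rolling

# ADA-compliant ramp (1:12 slope), assumed 100 kg occupant-plus-wheelchair mass.
print(f"push force ~ {ramp_push_force(100, 1 / 12):.0f} N")
```

Even this rough estimate (on the order of 100 N sustained over the ramp length) illustrates why ramp assistance is a plausible overexertion hazard.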
Pushing a wheelchair could cause overexertion and lead to injury since even ramps that comply with the Americans
with Disabilities Act (ADA) standards can be difficult to climb for wheelchair pushers of any strength (Kordosky, Perkins,
International Journal of Industrial Engineering, 21(5), 253-270, 2014

OPEN INNOVATION STRATEGIES OF SMARTPHONE MANUFACTURERS: EXTERNAL RESOURCES AND NETWORK POSITIONS
Jiwon Paik 1, Hyun Joon Chang 2
1,2 Graduate School of Innovation and Technology Management, Korea Advanced Institute of Science and Technology, Daejeon 305-701, South Korea
Corresponding author's e-mail: [email protected]

A smartphone is not only a product made up of various integrated components, but also a value-added service. As the
smartphone ecosystem has evolved within the value chain of the ICT industry, smartphone manufacturers can benefit from
open innovation, such as by making use of external resources and collaboration networks. However, most studies on
smartphones have focused on aspects of product innovation, such as functional differentiation, usability, and market
penetration rather than on innovation networks. The aim of this study is to examine how the open innovation approaches and
strategic fit of smartphone manufacturers function in delivering innovation outcomes and business performance. This
research examines the relationship between seven smartphone manufacturers and their collaboration partners during a recent
three-year period, by analyzing four specific areas: hardware components, software, content services, and
telecommunications.
Keywords: smartphone, open innovation, external resources, network positions
(Received on September 7, 2012; Accepted on August 7, 2014)
1. INTRODUCTION
Information and communications technology (ICT) firms are now experiencing a new competitive landscape that is
redefining and eroding the boundaries between software, hardware, and services. In 2000, the first device marketed as a
smartphone was released by Ericsson; it was the first to use an open operating system and to combine the functions of a
mobile phone and a personal digital assistant (PDA). Then in 2007, the advent of the iPhone redefined the smartphone
product category, with the convergence of traditional mobile telephony, Internet services, and personal computing
representing a paradigm shift for this emerging industry (Kenney and Pon 2011).
Smartphones are becoming increasingly popular: smartphone sales to end users accounted for 19 percent of total mobile
communications device sales in 2010, a 72.1 percent increase over 2009. In comparison, worldwide mobile device sales to
end users increased by 31.8 percent during the same period [a].
The smartphone industry is undergoing rapid and seismic change. Within two years, the iPhone went from nothing to
supplying 30% of Apple's total revenue. Indeed, the iPhone has been the best performer in terms of global sales, capturing
more than 14% of the market in 2009; whereas Nokia, once the smartphone industry leader, has seen its market share fall
dramatically. Stephen Elop, the former chief executive officer of Nokia, expressed a sense of crisis in February 2011: "We are
standing on a burning platform." Figure 1 shows the global market share of eight smartphone manufacturers, and provides an
indication of the fierce competition in this industry.
During the remarkable flourishing of the smartphone industry, most theoretical analysis has strongly emphasized either
the product or the service aspects of smartphones, such as their usability, diffusion, software development, and service
provision (Funk 2006; Kenney and Pon 2011; Eisenmann et al. 2011; Doganova and Renault 2011). In contrast, the
competitive management aspects, such as integration or collaboration, have been relatively neglected.
The purpose of this paper is to analyze the open innovation strategy of smartphone manufacturers who have experienced
sudden performance changes; examples of such open innovation strategies include managing complementary assets and
integrating or collaborating with other companies. This paper examines the impact of utilizing external resources on the
innovation output, performance, and network position of smartphone manufacturers; and also formulates and tests several
hypotheses, by means of theoretical analyses and empirical research.
This paper is organized as follows: the next section provides an overview of the relevant literature, and the hypotheses are
defined in accordance with the theoretical analysis; in the third section, the dataset and methodology are explained; the fourth
[a] Gartner press release, 2011: Gartner Says Worldwide Mobile Device Sales to End Users Reached 1.6 Billion Units in 2010; Smartphone Sales Grew 72 Percent in 2010.


International Journal of Industrial Engineering, 21(5), 271-283 2014

ENHANCING PERFORMANCE OF HEALTHCARE FACILITY VIA NOVEL
SIMULATION METAMODELING BASED DECISION SUPPORT FRAMEWORK
Farrukh Rasheed1, Young Hoon Lee2
1,2 Department of Information and Industrial Engineering
Yonsei University College of Engineering
50 Yonsei-ro, Seodaemun-gu, Seoul
120-749, Republic of Korea
Corresponding author's e-mail: [email protected]
A simulation model of patient throughput in a community healthcare center (CHC) located in Seoul, Korea, is developed.
The CHC provides primary, secondary, and tertiary healthcare (HC) services, i.e., diagnostic, illness treatment, health
screening, immunization, family planning, ambulatory care, and pediatric and gynecologic services, along with various
other support services, to uninsured, under-insured, and low-income patients residing in nearby medically underserved
areas. The prime aim of this investigation is to identify the main imperative variables via statistical analysis of a
de-identified customer tracking system dataset combined with expert opinion, and then to use the proposed novel simulation
metamodeling based decision support framework to gauge their impact on the performance measures of interest. The
identified independent variables are resource shortage and the stochastic demand pattern, while the performance measures
of interest are the average length of stay (LOSa), balking probability (Pb), reneging probability (Pr), overcrowding, and
resource utilization.
Significance: The methodology presented in this research recognizes that a single meta-model represents a single
performance measure, so the solution it yields may be sub-optimal, with a detrimental effect on other crucial performance
measures if they are not considered. Hence, it is emphasized that meta-models be developed for each of the crucial
performance measures individually, so that the final solution overcomes this drawback and qualifies as a truly optimal
solution.
Keywords: simulation, regression, performance analysis, healthcare system and application, decision making.
(Received on December 18, 2012; Accepted on September 15, 2014)
1. INTRODUCTION
Today's highly competitive HC sector must be able to adjust to customers' ever-changing requirements in order to survive.
The specific HC installation considered in this research is a CHC located in Seoul, Korea, serving medically underserved
areas. A HC facility can only survive by delivering high-quality service at reduced cost while promptly responding to the
associated challenges: swift changes in technology, patient load fluctuations, longer patient LOS, sub-optimal resource
utilization, unnecessary inter-process delays, inefficient information access and control, compromised patient safety,
overcrowding, surges, emergency department (ED) use and misuse, and medication errors (Erik et al. (2010), Mare et al.
(1995), Nan et al. (2009), Nathan and Dominik (2009)). With the foregoing in view, the CHC administration was looking
for ways to improve service quality; when the systems under investigation are multifaceted, as in numerous practical
situations, mathematical solutions become impractical and simulation is used as a tool for system evaluation.
Simulation represents transportation, manufacturing, and service systems in a computer program for performing
experiments, which enables the testing of design changes without disruption to the system being modelled; that is, the
representation mimics the system's pertinent outward characteristics (Wang et al. (2009)).
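To make the metamodeling idea above concrete, the following minimal sketch (hypothetical staffing levels, arrival rates, and coefficients; not the authors' code or data) fits one least-squares regression metamodel per performance measure from designed simulation runs and then screens candidate settings against all metamodels jointly, in the spirit of the Significance note in the abstract.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical designed-experiment outputs from a DES model:
# columns = (number of nurses, mean arrivals per hour)
X = np.array([[n, lam] for n in (2, 3, 4, 5) for lam in (10, 15, 20)], float)
los = 20 + 4 * X[:, 1] / X[:, 0] + rng.normal(0, 1, len(X))       # avg length of stay (min)
p_balk = 0.02 * X[:, 1] / X[:, 0] + rng.normal(0, 0.005, len(X))  # balking probability

def fit_metamodel(X, y):
    """Least-squares fit of y ~ b0 + b1*n + b2*lam + b3*(lam/n)."""
    A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1], X[:, 1] / X[:, 0]])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda n, lam: beta @ np.array([1.0, n, lam, lam / n])

m_los, m_balk = fit_metamodel(X, los), fit_metamodel(X, p_balk)

# Screen candidate staffing levels against *all* metamodels, not just one:
for n_nurses in (2, 3, 4, 5):
    print(n_nurses, round(m_los(n_nurses, 18), 1), round(m_balk(n_nurses, 18), 3))

The point of fitting every metamodel is exactly the drawback noted in the abstract: a staffing level that minimizes predicted LOS alone may be rejected once its predicted balking probability is also examined.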
Many HC experts have used simulation for the analysis of different situations aimed at better service quality and
improved performance. Hoot et al. (2007, 2008, 2009) used real-time simulation to forecast crowding in an ED. Kattan and
Maragoud (2008) used simulation to address problems of an ambulatory care unit in a large cancer center, where operations
and resource utilization challenges led to overcrowding, excessive delays, and concerns regarding the safety of critical
patients. Santibanez et al. (2009) analyzed the impact of operations, scheduling, and resource allocation on patient waiting
time, clinic overtime, and resource utilization using simulation. Zhu et al. (2009) analyzed appointment scheduling systems
in specialist outpatient clinics to determine the optimal number of appointments to be planned under different performance
indicators and consultation room configurations. Wang et al. (2009) modelled ED services using ARIS / ARENA software.
Su et al. (2010) used simulation to improve the hospital registration process by re-engineering the actual process. Blasak et
al. (2003) used simulation to evaluate hospital operations between the emergency department (ED) and a medical treatment
unit to suggest improvements. Samaha et al. (2003) proposed a framework to reduce patient LOS using simulation. Holm
and Dahl (2009) used simulation to analyze the effect of replacing nurse triage with physician triage. Reindl et al. (2009)
used simulation to analyze and suggest improvements for the cataract surgery process.

International Journal of Industrial Engineering, 21(5) ,284-294, 2014

ECONOMIC-STATISTICAL DESIGN OF THE MULTIVARIATE SYNTHETIC
T2 CHART USING LOSS FUNCTION
Wai Chung Yeong1, Michael Boon Chong Khoo1, Mohammad Shamsuzzaman2, and Philippe Castagliola3
1 School of Mathematical Sciences, Universiti Sains Malaysia, 11800 Penang, Malaysia
Corresponding author's e-mail: [email protected]
2 Industrial Engineering and Management Department, College of Engineering, University of Sharjah, Sharjah, United Arab Emirates
3 LUNAM Université, Université de Nantes & IRCCyN UMR CNRS 6597, Nantes, France

This paper proposes the economic-statistical design of the synthetic T2 chart, where the optimal chart parameters are
obtained by minimizing the expected cost function, subject to constraints on the in-control and out-of-control average run
lengths (ARL0 and ARL1). The quality cost is calculated by adopting a multivariate loss function. This paper also
investigates the effects of input parameters, shift sizes, and multivariate loss coefficients on the optimal cost, the choice of
chart parameters, and the ARLs. Interaction effects are identified through factorial design. In addition, the significant
parameters of the synthetic T2 chart are compared with those of the Hotelling's T2 and Multivariate Exponentially
Weighted Moving Average (MEWMA) charts. Conditions where the synthetic T2 chart shows better economic-statistical
performance than the Hotelling's T2 and MEWMA charts are identified. The synthetic T2 chart compares favorably with the
other two charts in terms of cost, while showing better ability to detect shifts.
Keywords: economic-statistical design; factorial design; Hotelling's T2 chart; MEWMA chart; multivariate loss function;
synthetic chart
(Received on April 1, 2014; Accepted on September 25, 2014)
1. INTRODUCTION
Multivariate control charts are used when two or more correlated variables need to be monitored simultaneously. The
Hotelling's T2 control chart is one of the most popular multivariate control charts used in practice. However, this chart is
not very efficient in detecting small to moderate shifts. To improve the performance of the Hotelling's T2 chart, Ghute and
Shirke (2008) combined the Hotelling's T2 chart with the conforming run length (CRL) chart, leading to a multivariate
synthetic T2 chart. The multivariate synthetic T2 chart operates by defining a sample as non-conforming if the T2 statistic is
larger than CL_{T2/S}, the control limit of the T2 sub-chart. Unlike the T2 chart, an out-of-control signal is not immediately
generated when the T2 statistic is larger than CL_{T2/S}. An out-of-control signal is generated only when the number of
conforming samples between two successive non-conforming samples is smaller than or equal to L, the lower control limit
of the CRL sub-chart. Ghute and Shirke (2008) have shown that the multivariate synthetic T2 chart gives better ARL
performance than the Hotelling's T2 chart. Some recent studies on control charts include Zhao (2011), Chen et
al. (2011), Kao (2012a), Kao (2012b), and Pina-Monarrez (2013), among others.
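The two-stage operating rule just described is easy to simulate. The sketch below (illustrative chart parameters, known mean vector and identity covariance; not the optimized design from the paper) estimates the in-control ARL of a synthetic T2 chart by Monte Carlo, signaling only when a non-conforming sample follows the previous one within L samples.

import numpy as np

def run_length(shift, p, n, cl_t2, L, rng, max_t=1_000_000):
    """Samples until the synthetic T2 chart signals; returns the run length.
    In-control process is N(0, I); 'shift' is the mean shift vector."""
    last_nc = 0                       # head-start: sample 0 counts as non-conforming
    for t in range(1, max_t + 1):
        xbar = shift + rng.standard_normal(p) / np.sqrt(n)   # sample mean, Sigma = I
        t2 = n * xbar @ xbar          # Hotelling's T2 with known mu0 = 0, Sigma = I
        if t2 > cl_t2:                # non-conforming on the T2 sub-chart
            if t - last_nc <= L:      # CRL <= L on the CRL sub-chart: signal
                return t
            last_nc = t               # the conforming-run count restarts here
    return max_t

p, n, cl_t2, L = 2, 5, 6.9, 5         # illustrative parameters (theoretical ARL0 ~ 210)
rng = np.random.default_rng(1)
rl = [run_length(np.zeros(p), p, n, cl_t2, L, rng) for _ in range(500)]
print("estimated in-control ARL0:", round(float(np.mean(rl)), 1))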
Duncan (1956) developed an economic design of X̄ control charts for the purpose of selecting optimal control chart
design parameters. This approach was generalized by Lorenzen and Vance (1986) so that it can be applied to other charts.
The major weakness of the economic design of control charts is that it ignores the statistical performance of the control
charts. Woodall (1986) criticized the economic approach because its Type I error probability is considerably higher than in
statistical designs, which leads to more false alarms. To improve the poor statistical performance of the economically
designed control chart, Saniga (1989) proposed an economic-statistical design of the univariate X̄ and R charts. In the
economic-statistical design, statistical constraints are incorporated into the economic model. The economic-statistical
design can be viewed as a cost improvement approach to statistical designs, or as a statistical performance improvement
approach to economic designs.
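Schematically, an economic-statistical design couples a cost function with the ARL constraints just described. The sketch below uses the zero-state ARL expression commonly quoted for synthetic charts, ARL = 1 / (P(1 - (1 - P)^L)) with P the probability of a non-conforming sample, together with a stand-in cost model (not the Lorenzen-Vance function used in the paper) to search chart parameters subject to ARL0 >= 370 and ARL1 <= 5.

import numpy as np
from scipy.stats import chi2, ncx2

def synthetic_arl(cl, L, p, lam):
    """Zero-state ARL of a synthetic T2 chart; lam is the noncentrality."""
    P = ncx2.sf(cl, p, lam) if lam > 0 else chi2.sf(cl, p)
    return 1.0 / (P * (1.0 - (1.0 - P) ** L))

p, n, d = 2, 5, 1.0                   # dimensions, sample size, shift size
lam1 = n * d * d                      # noncentrality under the shift
best = None
for L in range(1, 11):
    for cl in np.linspace(8, 20, 121):
        arl0 = synthetic_arl(cl, L, p, 0.0)
        arl1 = synthetic_arl(cl, L, p, lam1)
        if arl0 < 370 or arl1 > 5:    # statistical constraints
            continue
        cost = 100.0 / arl0 + 20.0 * arl1   # stand-in hourly cost model
        if best is None or cost < best[0]:
            best = (cost, round(cl, 2), L, round(arl0), round(arl1, 2))
print(best)

The constrained search is the essence of the approach: a pure economic design would drop the two ARL tests and often land on a chart with an unacceptably high false-alarm rate.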


International Journal of Industrial Engineering, 21(6), 295-303, 2014

THE EFFECT OF WRIST POSTURE AND FOREARM POSITION ON THE
CONTROL CAPABILITY OF HAND-GRIP STRENGTH
Kun-Hsi Liao
Department of Product Development and Design
Taiwan Shoufu University
Tainan, Taiwan
Corresponding author's e-mail: [email protected]
Economic and industrial developments have yielded an increase in automated workplace operations; consequently,
employees must learn to operate various hand tools and equipment. The hand grip strength exerted by workers during
machinery operation has received increasing attention from engineers and researchers. However, research on the
relationship between hand grip strength and posture, a crucial issue in ergonomics, is scant. Therefore, in this study, the
relationships among wrist posture, forearm position, and hand grip strength were examined among 72 university students.
Three wrist postures and forearm positions at a given grip span were tested to identify the maximum volitional contraction
(MVC) and hand gripping control (HGC) required for certain tasks. A one-way analysis of variance was conducted using
MVC and HGC as dependent variables, and the optimal wrist posture and forearm position were identified. The findings
provide a reference for task and instrument design and for protecting industrial workers from disease.
Keywords: wrist posture; forearm position; hand gripping control; maximum volitional contraction
(Received on November 19, 2013; Accepted on September 25, 2014)
1. INTRODUCTION
Hand-grip strength is crucial in determining the ability to handle and control an object. Two types of hand-grip strength are
associated with tool operation: maximal grip force and hand-gripping control strength (HGC). Numerous previous studies
have elucidated the factors associated with the design of industrial safety equipment and tools based on hand-grip strength
and maximum volitional contraction (MVC), the maximum force that a human subject can produce in a specific isometric
exercise (Hallbeck and McMullin, 1993; Carey and Gallwey, 2002; Kong et al., 2008; Lu et al., 2008; Schlüssel et al.,
2008; Liao, 2009; Liao, 2010a, 2010b; Shin, 2012; Boonprasurt and Nanthavanij, 2012; Liao, 2014). Those studies have
shown that hand-grip strength is a critical source of power for operating equipment and tools in the workplace. HGC
represents a controlled force precisely exerted using the palm of the hand (Murase et al., 1996; Hoeger and Hoeger, 2002).
For example, HGC can indicate the force required to cut electrical wire or to tighten a screw. Hoeger and Hoeger (2002)
applied MVC to standardize test results: for example, 70% MVC (MVC-70%), the value equal to seventy percent of the
maximum volitional contraction force, is a typical measurement standard. Numerous previous studies have applied HGC to
measure the force exerted during daily tasks, work performance, and tool operation (Mackin, 1984; Murase, 1996; Kuo,
2003). Moreover, it has been proposed that hand-grip strength could predict mortality and the expectancy of being able to
live independently. Hand-grip strength measurement is a simple and economical test that provides practical information
regarding muscle, nerve, bone, or joint disorders. Thus, measuring the HGC required for work tasks can provide a useful
reference for designing new hand tools. Numerous studies have shown that hand-grip strength is moderated by factors such
as age, gender, posture, and grip span (Carey and Gallwey, 2002; Watanabe et al., 2005; Liao, 2009). In specific
circumstances, posture is the most critical factor affecting grip strength; thus, measuring grip strength can provide crucial
knowledge for tool designers.
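As a concrete illustration of the analysis style described in the abstract, the sketch below runs a one-way ANOVA on grip-strength measurements across three wrist postures and computes an MVC-70% target; all numbers are randomly generated stand-ins, not study data.

import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(2)
neutral = rng.normal(380, 40, 24)     # MVC (N) at a neutral wrist posture
flexion = rng.normal(320, 40, 24)     # MVC (N) at wrist flexion
extension = rng.normal(340, 40, 24)   # MVC (N) at wrist extension

F, p = f_oneway(neutral, flexion, extension)
print(f"F = {F:.2f}, p = {p:.4f}")    # a small p suggests posture affects grip strength

# An MVC-70% control target, in the spirit of Hoeger and Hoeger's standardization:
print("MVC-70% target:", round(0.7 * neutral.mean(), 1), "N")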
The American Academy of Orthopedic Surgeons (1965) and Eijckelhof et al. (2013) have identified the following
three types of wrist and forearm posture scaling for observational job analysis: (1) flexion/extension; (2) radial/ulnar
deviation; and (3) pronation/supination (Figure 1), with joint range of motion (ROM) boundaries of 85° to -95°,
70° to -45°, and 130° to -145°, respectively.
Numerous previous studies have reported various grip strengths based on differing postures (O'Driscoll et al., 1992;
Hallbeck and McMullin, 1993; Mogk and Keir, 2003; Shih et al., 2006; Arti et al., 2010; Barut and Demirel, 2012). Kattel
et al. (1996) indicated that shoulder abduction, elbow and wrist flexion, and ulnar deviation significantly affect grip force.
Regarding wrist posture, numerous studies have consistently shown that large deviations from the neutral position weaken
grip force (Kraft and Detels, 1972; Pryce, 1980; Lamoreaux and Hoffer, 1995; Subramanian and Mita, 2009). Carey and
Gallwey (2002) evaluated the effects of wrist posture, pace, and exertion on discomfort. They concluded that extreme

International Journal of Industrial Engineering, 21(6), 304-316, 2014

INTEGRATING PHYSIOLOGICAL AND PSYCHOLOGICAL TECHNIQUES
TO MEASURE AND IMPROVE USABILITY: AN EMPIRICAL STUDY ON
KINECT APPLYING OF HEALTH MANAGEMENT SPORT
Wei-Ying Cheng1, Po-Hsin Huang1, and Ming-Chuan Chiu1,*
1 Department of Industrial Engineering and Engineering Management
National Tsing Hua University
HsinChu, Taiwan, R.O.C.
* Corresponding author's e-mail: [email protected]

This research aimed to develop an approach for measuring, monitoring and auditing the usability of a motion-related health
management product. Based on an ergonomic perspective and principles, the interactions between test participants and a
motion sports device were studied using physiological data gathered from a heart rate sensor. Based on our literature
review, we customized a psychological usability questionnaire which considered effectiveness, efficiency, satisfaction,
error, learnability, sociability, and mental workload, generating a tool meant to reveal the subjective cognition of product
usability. This research analyzed the objective (physiological) and subjective (psychological) data simultaneously to gain
greater insight about the product users. In addition, heart rate data, mental workload data and the questionnaire data were
synthesized to generate a comprehensive, detailed approach for evaluating usability in order to provide suggestions for
improving the usability of an actual health care product.
Keywords: usability; physiological techniques; questionnaires; health management product
(Received on November 19, 2013; Accepted on October 20, 2014)
1. INTRODUCTION
According to the Directorate-General of Budget, Accounting and Statistics of the R.O.C., the average number of working
hours per worker in Taiwan in 2012 was 2140.8, ranking third in the world. On average, employees in Taiwan work 44.6
hours every week and almost 9 hours every day. This busy status is echoed among workers in Korea, Singapore and Hong
Kong, who are measurably among the busiest people throughout the world. To balance work, family, and quality of life, an
increasing emphasis is being placed on the concept of personal exercise since the lack of exercise has been shown to lead to
common long-range health problems such as high blood pressure, diabetes and hyperlipidemia. Despite this recognition,
many people do not know how often or how long to exercise in order to achieve maximum benefit. In response to this need,
various products have been designed and manufactured to address this problem and to help maintain personal health status.
During such product development, usability has been considered an important design issue; however, few usability
evaluation methods are well suited to assessing health management products or guiding their improvement, especially for
the infirm and the elderly. Therefore, a method to measure and assess product use and satisfaction is important and
necessary in order to distinguish the usability features of these products and to improve their usability. Thus, the purpose of
this research is to establish an evaluation method that can detect the intentions of customers so as to measure the
usability of the products.
2. LITERATURE REVIEW
New approaches to the ancient study of ergonomics continue to emerge. During the last decade, for instance, Baesler and
Sepulveda (2006) applied a genetic algorithm heuristic and a goal programming model to address ergonomics in a cancer
treatment facility. Jen et al. (2008) conducted research on an ergonomics-driven VR-based robot programming and
simulation system. Subramanian and Mital (2008) investigated the need to customize work standards for the
disabled. Nanthavanij et al. (2010) compared the optimal solutions obtained from productivity-based, safety-based, and
safety-productivity workforce scheduling models. The analysis of healthcare issues, processes, and products
continues to increase, influenced by modernized work conditions as well as by evolving government mandates.


International Journal of Industrial Engineering, 21(6), 317-326, 2014

AN OPERATION-LEVEL DYNAMIC CAPACITY SCALABILITY MODEL FOR
RECONFIGURABLE MANUFACTURING SYSTEMS
Zhou-Jing Yu1, Jeong-Hoon Shin1, and Dong-Ho Lee1,*
1 Department of Industrial Engineering
Hanyang University
Seoul, Republic of Korea
* Corresponding author's e-mail: [email protected]
This study considers the problem of determining the facility requirements for a reconfigurable manufacturing system to
satisfy fluctuating demand requirements and a minimum allowable system utilization over a given planning horizon.
Unlike existing capacity planning models for flexible manufacturing systems, the problem considered in this study has
both design and operational characteristics, since reconfigurable manufacturing systems have the capability of changing
their hardware and software components rapidly in response to market or system changes. To represent the problem
mathematically, a nonlinear integer programming model is suggested with the objective of minimizing the sum of facility
acquisition and configuration change costs, while the throughputs and utilizations are estimated using a closed queuing
network model. Then, due to the problem complexity, we suggest three heuristics: two forward-type algorithms and one
backward-type algorithm. To compare the performances of the three heuristic algorithms, computational experiments were
performed and the results are reported.
Keywords: reconfigurable manufacturing system; capacity scalability; closed queuing network; heuristics.
(Received on November 28, 2013; Accepted on October 20, 2014)
1. INTRODUCTION
Reconfigurable manufacturing system (RMS), one of recent manufacturing technologies, is a manufacturing system
designed at the outset for rapid changes in its hardware and software components in order to quickly adjust its production
capacity and functionality in response to sudden market changes or intrinsic system changes (Koren et al. 1999, Bi et al.
2008). In fact, the RMS is a new manufacturing paradigm that overcomes the concept of flexible manufacturing system
(FMS) with limited success in that it is expensive due to more functions than needed, not highly reliable, and subject to
obsolescence due to advances in technology and their fixed system software and hardware (Mehrabi et al. 2000). See Koren
et al. (1999), Mehrabi et al. (2000, 2002), ElMaraghy (2006) and Bi et al. (2008) for more details on the characteristics of
RMS.
There are various decision problems in designing and operating RMSs, which can be classified into system-level,
component-level, and ramp-up time reduction decisions (Mehrabi et al. 2000). Among them, we focus on system-level
capacity planning, called the capacity scalability problem in the literature. Capacity planning, an important system-level
decision in ordinary manufacturing systems, is especially important in the RMS since it has more expansion flexibility than
FMSs. Here, expansion flexibility is defined as the capability to expand or contract production capacity. In particular,
the RMS can utilize this expansion flexibility at the short-term operation level because of its inherent reconfigurability. See
Sethi and Sethi (1990) for more details on the importance of expansion flexibility.
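Evaluating a candidate configuration at the operation level requires estimates of throughput and utilization; the abstract above obtains these from a closed queuing network (CQN) model. A minimal exact mean value analysis (MVA) sketch for a single-class CQN follows; the station service times, visit ratios, and pallet count are hypothetical, not the paper's model.

def mva(service, visits, n_jobs):
    """Exact MVA for a single-class closed network of FCFS stations:
    returns system throughput and per-station utilizations."""
    K = len(service)
    q = [0.0] * K                     # mean queue lengths with 0 jobs
    x = 0.0
    for n in range(1, n_jobs + 1):
        r = [service[k] * (1 + q[k]) for k in range(K)]   # residence times
        x = n / sum(visits[k] * r[k] for k in range(K))   # throughput
        q = [x * visits[k] * r[k] for k in range(K)]      # updated queues
    util = [x * visits[k] * service[k] for k in range(K)]
    return x, util

# Three machining stations visited by pallets circulating in the system:
x, util = mva(service=[0.8, 1.2, 0.5], visits=[1.0, 0.6, 1.4], n_jobs=10)
print(round(x, 3), [round(u, 2) for u in util])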
From the pioneering work of Manne (1961), various models have been developed on the capacity expansion problem
in traditional manufacturing systems. See Luss (1982) for an extensive review of the classical capacity expansion problems.
Besides these, there are a number of previous studies on capacity planning or system design in FMSs. For examples, see
Vinod and Solberg (1985), Dallery and Frein (1988), Lee et al. (1991), Rajagopalan (1993), Solot and van Vliet (1994),
Tetzlaff (1994), Lim and Kim (1998) and Chen et al. (2009).
Unlike the classical problems, few studies have addressed capacity scalability in RMSs, since the paradigm has emerged
only recently. One of the earliest studies, by Son et al. (2001), suggests a homogeneous paralleling flow line in which
paralleling is done to scale and balance the capacity of transfer lines. Deif and ElMaraghy (2006a) suggest a dynamic
programming model, based on set theory and the regeneration point theorem, to find optimal capacity scalability plans that
minimize the total cost, and Deif and ElMaraghy (2006b) suggest a control-theoretic approach for the problem of
minimizing the delay in capacity scalability, i.e., the ramp-up time of new configurations. See Deif and ElMaraghy
(2007a, b) for other extensions. Also, Spicer and Carlo (2007) consider a multi-period problem that determines the system
configurations over a planning horizon and suggest two solution algorithms, an optimal dynamic

International Journal of Industrial Engineering, 21(6), 327-336, 2014

A ROBUST TECHNICAL PLATFORM PLANNING METHOD TO ASSURE
COMPETITIVE ADVANTAGE UNDER UNCERTAINTIES
Jr-Yi Chiou1 and Ming-Chuan Chiu1,*
1 Department of Industrial Engineering and Engineering Management
National Tsing-Hua University
101 Kuang-Fu Road, Hsinchu
Taiwan 30013, R.O.C.
* Corresponding author's e-mail: [email protected]

Developing a technology-based product platform (technical platform) that can deliver a variety of products has emerged as
a strategy for obtaining competitive advantage in the global marketplace. Technical platform planning can improve
customer satisfaction by integrating diversified products and technologies. Prior studies have alluded to developing a robust
framework for technical platforms and validated methodologies. We propose a multi-step approach to organize technical
platforms based on corporate strength while incorporating technological uncertainty during platform development. A
case study is presented to demonstrate its advantages, referencing a company developing 3-Dimension Integrated Circuitry
(3D-IC) for the semiconductor industry. We evaluate four alternatives to ensure compliance with market demands. This
study applies assessment attributes for technology, commercial benefits, industrial chain completeness, and risk. Using the
Simple Multi-Attribute Rating Technique Extended to Ranking (SMARTER), decision-makers can quickly determine
efficient alternatives in uncertain situations. Finally, a scenario analysis is presented to simulate possible market situations
and provide suggestions to the focal company. Results illustrate that the proposed technical platform can enhance
companies' core competencies.
Significance: The proposed method incorporates technical platform planning to help fulfill current and future market
demands. This method can also provide robust solutions for enterprises should untoward events occur. Thus the competitive
advantage of the focal company can be assured in the future.
Keywords: technical platform planning; decision analysis; technology management; fuzzy simple multi-attribute rating
technique extended to ranking (SMARTER); 3-dimension integrated circuit (3D-IC)
(Received on November 28, 2013; Accepted on October 20, 2014)
1. INTRODUCTION
In an effort to achieve customer satisfaction, many companies have adopted product family development and platform-based
methods to improve product variety, to shorten lead times, and to reduce costs. The backbone of a successful product
family is the product platform, from which individual products can be generated by adding, removing, or substituting one
or more modules. The platform may also be scaled in one or more dimensions to target specific market niches. This
burgeoning field of engineering planning has prospered for the past 10 years. However, most of the related research has
solely considered customer-oriented metrics. Other key factors, such as the core technologies of enterprises and technology
trends under uncertainty, can also affect the development of the industry. This recognition is what motivated us to conduct
this research. This paper integrates these elements within technical platform planning. Technical platform planning is
considered in tandem with technology management to achieve efficient solutions so as to maintain and enhance the strength
of the focal company. The proposed methodology can enable companies to incorporate future-bound technology in their
technology roadmaps to meet diverse customer needs in the future. It also enables enterprises to concentrate their resources
in the right directions based on scenario analysis. In previous studies, the technology management framework and
assessment methods for uncertain situations have rarely been addressed. Fuzzy SMARTER is a decision analysis method
that can solve problems under uncertainty: experts work with limited data and linguistic expressions like "good" or "bad"
to forecast future trends. In this research, fuzzy analysis was applied to resolve this set of circumstances; SMARTER
requires only the ranking of future products and technologies. Therefore, this study addresses the gap by combining
technical platform planning, technology management, and decision analysis to generate a new planning tool
for enterprises.
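The appeal of SMARTER noted above is precisely that only ranking information is needed: exact attribute weights are replaced by rank-order centroid (ROC) weights. The sketch below computes ROC weights for the paper's four assessment attributes and scores some platform alternatives; the alternative names and scores are hypothetical, not the case study's data.

def roc_weights(k):
    """Rank-order centroid weights for k attributes ranked 1..k."""
    return [sum(1.0 / j for j in range(i, k + 1)) / k for i in range(1, k + 1)]

# Attributes ranked by assumed importance: technology > commercial benefit >
# industrial-chain completeness > risk (the paper's four assessment areas).
w = roc_weights(4)                    # approx. [0.521, 0.271, 0.146, 0.062]

# Hypothetical 0-1 scores of four platform alternatives on each attribute:
alts = {"A": [0.9, 0.5, 0.6, 0.4], "B": [0.6, 0.8, 0.7, 0.6],
        "C": [0.7, 0.7, 0.5, 0.9], "D": [0.5, 0.9, 0.8, 0.5]}
scores = {a: sum(wi * si for wi, si in zip(w, s)) for a, s in alts.items()}
print(max(scores, key=scores.get), scores)

Because ROC weights fall steeply with rank, the top-ranked attribute dominates the choice; this is the sense in which decision-makers can act quickly with only sequence information.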


International Journal of Industrial Engineering, 21(6), 337-347, 2014

UNCERTAINTY ANALYSIS FOR A PERIODIC REPLACEMENT PROBLEM
WITH MINIMAL REPAIR: PARAMETRIC BOOTSTRAPPING
Y. Saito1, T. Dohi1,*, and W. Y. Yun2
1 Department of Information Engineering
Hiroshima University
1-4-1 Kagamiyama, Higashi-Hiroshima, 739-8527 Japan
* Corresponding author's e-mail: [email protected]
2 Department of Industrial Engineering
Pusan National University
30 Jangjeon-dong, Geumjeong-gu, Pusan, 609-735 Korea

In this paper we consider a statistical estimation problem for the periodic replacement problem with minimal repair, which
is one of the most fundamental maintenance models in practice, and propose two parametric bootstrap methods,
categorized into a simulation-based approach and a re-sampling-based approach. In particular, we consider two data
analysis techniques: direct analysis of the minimal repair data, which obey a non-homogeneous Poisson process, and
indirect analysis after transforming the data to a homogeneous Poisson process. Through simulation experiments, we
investigate the statistical features of the proposed parametric bootstrap methods. We also analyze real minimal repair data
to demonstrate the proposed methods in practice.
Significance: In practice, we often encounter situations where an optimal preventive maintenance policy should be
triggered. However, only a few research results on the statistical estimation of optimal preventive maintenance policies
have been reported in the literature. We carry out high-level statistical estimation of the optimal preventive maintenance
time and its associated expected cost, and derive estimators of higher moments of the optimal maintenance policy as well
as its confidence interval. Here, the parametric bootstrap methods play a significant role. The proposed approach enables
statistical decision making on preventive maintenance planning under uncertainty.
Keywords: statistical estimation; parametric bootstrap method; periodic replacement problem; minimal repair;
non-homogeneous Poisson process
(Received on November 29, 2013; Accepted on October 7, 2014)
1. INTRODUCTION
The periodic replacement problem by Barlow and Proschan (1965) is one of the simplest, but most important, preventive
maintenance scheduling problems. Extended versions of this model have been studied in various papers (Valdez-Flores
and Feldman 1989, Nakagawa 2005). Boland (1982) gave the optimal periodic replacement time for the case where the
minimal repair cost depends on the age of the component, and showed necessary and sufficient conditions for the existence
of an optimal periodic replacement time in the case where the failure rate is strictly increasing (IFR). Nakagawa
(1986) proposed generalized periodic replacement policies with minimal repair, in which the preventive maintenance is
scheduled at periodic times. If the number of preventive maintenance actions reaches a pre-specified value, the system is
replaced at the next preventive maintenance time. Nakagawa (1986) simultaneously derived both the optimal number of
preventive maintenance actions and the optimal preventive maintenance time. Recently, Okamura et al. (2014) developed a
dynamic programming algorithm to obtain the optimal periodic replacement time in the Nakagawa (1986) model
efficiently. Sheu (1990) considered a preventive maintenance problem in which the minimal repair cost varies with the
number of minimal repairs and the age of the component. Sheu (1991) also proposed another generalized periodic
replacement problem with minimal repair in which the minimal repair cost is assumed to be composed of age-dependent
random and deterministic parts: if the component fails, it is replaced randomly by a new one or repaired minimally. He
showed that the optimal block replacement time can be derived easily in numerical examples.
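As a minimal sketch of the parametric bootstrap idea for this model (under a power-law NHPP assumption; an illustration, not the authors' algorithm or data), the code below fits the intensity to simulated minimal-repair times, computes the classical cost-rate-minimizing replacement time T* = eta * (c_r / ((beta - 1) * c_m))**(1/beta) for beta > 1, and bootstraps a confidence interval for T*.

import numpy as np

rng = np.random.default_rng(3)
c_r, c_m, tau = 10.0, 1.0, 100.0      # replacement cost, minimal repair cost, horizon

def mle(times, tau):
    """Crow/AMSAA-type MLE of the power-law intensity (b/e)*(t/e)**(b-1)."""
    n = len(times)
    b = n / np.sum(np.log(tau / np.asarray(times)))
    return b, tau / n ** (1.0 / b)

def t_star(b, e):
    """Replacement time minimizing (c_r + c_m*(T/e)**b)/T; exists for b > 1."""
    return e * (c_r / ((b - 1.0) * c_m)) ** (1.0 / b)

def simulate(b, e, tau, rng):
    """One NHPP path on [0, tau]: Poisson count, then inverse time transform."""
    n = rng.poisson((tau / e) ** b)
    u = np.sort(rng.uniform(0.0, (tau / e) ** b, n))
    return e * u ** (1.0 / b)

data = simulate(2.0, 30.0, tau, rng)          # stand-in "observed" repair times
b_hat, e_hat = mle(data, tau)
boot = []
for _ in range(1000):
    sample = simulate(b_hat, e_hat, tau, rng)
    if len(sample) < 2:               # skip degenerate replicates
        continue
    b, e = mle(sample, tau)
    if b > 1.0:                       # T* exists only for an increasing intensity
        boot.append(t_star(b, e))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"T* = {t_star(b_hat, e_hat):.1f}, 95% bootstrap CI = ({lo:.1f}, {hi:.1f})")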


International Journal of Industrial Engineering, 21(6), 348-359, 2014

STRATEGIC OPENNESS IN QUALITY CONTROL: ADJUSTING NPD
STRATEGIC ORIENTATION TO OPTIMIZE PRODUCT QUALITY
Dinush Chanaka Wimalachandra1,*, Björn Frank1, Takao Enkawa1
1 Department of Industrial Engineering and Management
Tokyo Institute of Technology
Tokyo, 152-8552, Japan
* Corresponding author's e-mail: [email protected]
Many firms have shifted to an open innovation strategy by integrating external information into new product development
(NPD). This study extends the open innovation paradigm to the area of product quality control practices in NPD. Using
data collected in 10 countries, this study investigates the role of external information acquired through B2B/B2C customer,
competitor, technology, and manufacturing orientation in meeting quality and performance specifications of newly
developed products. It also illuminates the interconnected roles of B2B and B2C customer orientation in meeting these
specifications. Contrary to conventional wisdom, the results show that leveraging a variety of external information sources
(in particular, frequent and informal communication with B2B customers and coordination with the manufacturing
department) indeed helps firms improve internal product quality control practices in NPD. Information on B2C customers
is beneficial in B2B contexts only if effectively integrated by means of B2B affective information management.
Keywords: product quality; B2B customer orientation; B2C customer orientation; manufacturing orientation
(Received on November 29, 2013; Accepted on April 21, 2014)
1. INTRODUCTION
Research has identified product quality as one of the key determinants of NPD performance (Sethi, 2000). Due to growing
competition in most industries, managers thus have come to regard the quality of newly developed products as crucial for
maintaining a competitive edge in the long run (Juran, 2004). Research based on Chesbrough's (2003) open innovation
paradigm indicates that a firm's openness to its external environment can improve its ability to innovate by enabling it
to leverage outside capabilities and follow changes in the environment (Laursen and Salter, 2006), but it remains unknown
whether such openness might also help firms improve their mostly internally oriented quality management practices.
Hence, our study seeks to verify whether the open innovation paradigm can be extended to the area of product quality
control practices in NPD. Moreover, our study aims to identify the types of external information acquired through NPD
strategies (B2B/B2C customer, competitor, technology, and manufacturing orientation) that best help firms meet quality
and performance specifications of newly developed products in B2B contexts.
Our original claim is that accounting for external information during quality control can help firms to minimize the
recurrence of past quality-related problems detected by B2B customers, to minimize manufacturing problems, to improve
the effectiveness of early-stage prototype testing, and to learn from competitors' best practices in quality control. Hence, we
argue that many firms would profit from greater openness in quality management. Firms in B2B markets may benefit from
integrating external information on B2B customers and on their eco-system, which includes product technology,
manufacturing techniques, and competitor strategies. As information on B2C customers at the end of the supply chain is not
directly related to immediate concerns of internal quality control in B2B contexts, we argue that accounting for this type of
information directly may be problematic. However, firms might learn to leverage such information to improve prototype
testing in collaboration with B2B customers. Hence, even information on B2C customers may be beneficial to firms'
quality control practices in B2B contexts if such information is handled appropriately.
To examine the effectiveness of strategic openness in quality control and thus provide industrial engineers with
actionable knowledge of how to improve quality control practices, our study establishes hypotheses about the influence of
externally oriented NPD strategies on product quality. To test these hypotheses empirically, we collected data from 10
countries (Bangladesh, Cambodia, China, Hong Kong, India, Japan, Sri Lanka, Taiwan, Thailand, and Vietnam) in the
textile and apparel industry, covering firms across the supply chain starting from raw material suppliers via manufacturers
and value-adding firms (printing/dyeing/washing) to buying offices. As our study is based on statistical analyses, the
confirmed hypotheses can be generalized to the population of firms from which our sample was drawn. Thus, our study is
not simply a case study; rather, it derives generalizable insights that can be applied across different contexts.


International Journal of Industrial Engineering, 21(6), 360-375, 2014

A LAD-BASED EVOLUTIONARY SOLUTION PROCEDURE FOR BINARY
CLASSIFICATION PROBLEMS
Hwang Ho Kim1 and Jin Young Choi1,*
1 Department of Industrial Engineering
Ajou University
206, World cup-ro, Yeongtong-gu, Suwon-si, Gyeonggi-do, Korea
* Corresponding author's e-mail: [email protected]
Logical analysis of data (LAD) is a data analysis methodology used to solve the binary classification problem via
supervised learning based on optimization, combinatorics, and Boolean functions. The LAD framework consists of the
following four steps: data binarization, support set generation, pattern generation, and theory formulation. Patterns that
contain the hidden structural information calculated from the binarized training data play the most important roles in the
theory, which consists of a weighted linear combination of patterns and works as a classifier of new observations. In this
work, we develop an efficient parameterized iterative genetic algorithm (PI-GA) to generate a set of patterns with good
characteristics in terms of degree (simplicity-wise preference) and coverage (evidential preference) of patterns. The
proposed PI-GA can generate simplicity-wise preferred patterns that also have high coverage. We also show the efficiency
and accuracy of the proposed method through a numerical experiment using benchmark machine learning datasets.
Keywords: logical analysis of data; binary classification; pattern generation; genetic algorithm
(Received on November 29, 2013; Accepted on October 20, 2014)
1. INTRODUCTION
Binary classification (Lugosi, 2002) is an issue arising in the fields of data mining and machine learning and involves the
study of how to classify observations into two classes. It has been used in the medical, service, manufacturing, and various
other fields. For example, binary classification methods are used in medicine for diagnostic criteria based on information
obtained through the inspection of patients (Prather et al., 1997); in the service field, they are used for credit ratings based
on customers' applications and histories (Berry and Linoff, 1997). Binary classification problems with two data classes,
such as defective versus non-defective goods in manufacturing, are particularly important when we are looking for the
cause of defects and trying to increase productivity (Chien et al., 2007).
To solve binary classification problems, various data mining approaches such as decision trees (J48), support
vector machines (SVMs), and neural networks (NNs) have been proposed and utilized. However, one of the main
drawbacks of these learning methods is the lack of interpretability of their results. An NN is generally perceived as a black
box (Ahluwalia and Chidambaram, 2008; Yeoum and Lee, 2013), and it is extremely difficult to document how specific
classification decisions are reached. SVMs are also black-box systems that do not provide insights on the reasons for or
explanations of a classification (Mitchell, 1997). Thus, these approaches do not exhibit both high accuracy and
explanatory power for binary classification. Meanwhile, the major disadvantage of the decision tree is its computational
complexity: decision trees examine only a single field at a time, so large decision trees with many branches are
complex and time-consuming (Safavian and Landgrebe, 1991).
Logical analysis of data (LAD; Crama et al., 1998; Boros et al., 1997; Boros et al., 2000; Hammer and Bonates,
2006), proposed relatively recently, is a data analysis methodology used to solve the binary classification problem via
supervised learning based on patterns that contain hidden structural information calculated from binarized training data.
Therefore, LAD is an effective methodology that can easily explain the reasons for a classification using patterns.
Moreover, LAD can provide higher classification accuracy than other methods if the patterns used for the classification
represent all the characteristics of the data and the number of patterns is sufficient. In many medical application studies,
LAD has been applied to classification problems for diagnosis and prognosis. Such studies have shown that the accuracy of
LAD is comparable with that of the best methods used in data analysis so far, usually providing results similar to those of
other binary classification methods (Boros et al., 2000). However, the classification performance of LAD can vary
depending on certain characteristics of the patterns generated in the LAD framework. Therefore, pattern generation is the
most important issue in the LAD framework and has been studied in various ways.
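The two pattern-quality criteria at the heart of the PI-GA described in the abstract, degree (simplicity) and coverage (evidence), are easy to state concretely. In the sketch below (hypothetical binarized data and pattern), a pattern is a conjunction of attribute-value literals; its degree is the literal count and its coverage is the number of same-class observations it matches.

# Binarized observations: tuples of 0/1 attribute values.
pos = [(1, 0, 1, 1), (1, 1, 1, 0), (1, 0, 1, 0)]   # class +
neg = [(0, 0, 1, 1), (1, 1, 0, 1), (0, 1, 0, 0)]   # class -

# A pattern maps attribute index -> required value; {0: 1, 2: 1}
# reads "x0 = 1 AND x2 = 1".
def matches(pattern, obs):
    return all(obs[i] == v for i, v in pattern.items())

def degree(pattern):
    return len(pattern)                             # simplicity-wise preference

def coverage(pattern, observations):
    return sum(matches(pattern, o) for o in observations)  # evidential preference

p = {0: 1, 2: 1}
print(degree(p), coverage(p, pos), coverage(p, neg))
# degree 2; covers all 3 positive and 0 negative observations, so p is a
# pure positive pattern with high coverage and low degree - exactly the
# kind of pattern a generation procedure should prefer.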
The conventional pattern generation methods can mainly be divided into (i) enumeration-based approaches and (ii)
mathematical approaches. First, most of the early studies on pattern generation used enumeration-based techniques (Boros

International Journal of Industrial Engineering, 21(6), 376-383, 2014

A STUDY ON COLLABORATIVE PICK-UP AND DELIVERY ROUTING
PROBLEM OF LINE-HAUL VEHICLES IN EXPRESS DELIVERY SERVICES
Friska Natalia Ferdinand1, Young Jin Kim3, Hae Kyung Lee2, and Chang Seong Ko2,*
1 Department of Information System
University of Multimedia Nusantara
Kampus UMN, Scientia Garden, Boulevard Gading Serpong, Tangerang, Indonesia
2 Department of Industrial and Management Engineering
Kyungsung University
309 Suyeong-ro, Nam-gu, Busan, 608-736, South Korea
3 Department of Systems Management and Engineering
Pukyong National University
45 Yongso-ro, Nam-gu, Busan, 608-737, South Korea
* Corresponding author's e-mail: [email protected]

In the Korean express delivery service market, many companies have been striving to extend their market share. An
express delivery system is defined as a network of customers, service centers, and consolidation terminals. Some companies
operate line-haul vehicles in milk-run types of pick-up and delivery services among consolidation terminals and service
centers with locational disadvantages. The service centers with low sales are kept operating, even if they are unprofitable, to
ensure the quality of service. Recently, collaborative operation has been emerging as an alternative to reduce the operating
costs of these disadvantaged centers. This study considers a collaborative service network with pick-up and delivery visits
for line-haul vehicles for the purpose of maximizing the incremental profits of the collaborating companies. The main idea
is to operate only one shared service center among the different companies for service centers with low demands and to
change the visit schedules accordingly. A genetic algorithm-based heuristic is proposed and assessed through a numerical
example.
Keywords: express delivery services; collaborative pick-up and delivery; line-haul vehicle; milk-run; genetic algorithm
(Received on November 29, 2013; Accepted on October 20, 2014)
1. INTRODUCTION
Pick-up and delivery problems (PDPs) are aimed at designing a vehicle route starting and ending at a common depot in
order to satisfy the pick-up and delivery requests at each location. In a traditional pick-up and delivery problem, each
customer usually receives a delivery originating from a common depot and sends a pick-up quantity to the same depot.
Most express delivery service centers in Korea are directly linked to a consolidation terminal. However, service centers
located in rural areas with low utilization may not be directly linked to a consolidation terminal (Ferdinand et al., 2013).
These remote service centers with low sales are mostly kept operating, even though unprofitable, in order to ensure the
quality of service. There has thus been a growing need to develop an operational scheme that ensures a higher level of
service as well as profitability. It has been claimed that a collaborative operation among several companies may provide an
opportunity to increase profitability as well as to ensure the quality of service. Various types of collaboration exist in
express delivery services; examples include the sharing of vehicles, consolidation terminals, and other facilities. This study
considers collaboration among companies sharing service centers and line-haul vehicles. Visit schedules are also
determined accordingly to enhance the profitability of the collaborating companies. Service centers located in rural areas
with low utilization may not be profitable, and thus only one company will operate the service center and vehicles in each
such location along the route (a so-called monopoly service center). The other companies will use that service center and
its vehicles at a predetermined price. All the routes should provide pick-up and delivery services, and all the vehicles should
return to the depot at the end of each route. The objective of this study is to construct a network design for the profitable
tour problem (PTP) with collaborative pick-up and delivery visits that maximizes the incremental profit based on the
max-min criterion.
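The max-min criterion just mentioned can be stated compactly: among candidate center-sharing plans, pick the one whose worst-off company gains the most. The toy sketch below illustrates only the criterion itself; the plans and incremental profits are hypothetical, and the paper searches this space with a genetic algorithm rather than by enumeration.

# Each plan assigns low-demand centers to a single operating company and
# yields an incremental profit per collaborating company (hypothetical):
candidate_plans = {
    "plan1": {"A": 12.0, "B": 3.0, "C": 7.0},
    "plan2": {"A": 8.0, "B": 6.0, "C": 6.5},
    "plan3": {"A": 15.0, "B": 1.0, "C": 9.0},
}

def maxmin_value(plan_profits):
    return min(plan_profits.values())      # the worst-off company's gain

best = max(candidate_plans, key=lambda k: maxmin_value(candidate_plans[k]))
print(best, maxmin_value(candidate_plans[best]))   # plan2: the most balanced

The criterion deliberately trades total profit for balance: plan3 has the largest sum, but plan2 wins because no partner is left with a negligible gain, which is what keeps a collaboration stable.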


International Journal of Industrial Engineering,21(6),384-395, 2014

OPTIMIZATION OF WIND TURBINE PLACEMENT LAYOUT ON NON-FLAT
TERRAINS
Tzu-Liang (Bill) Tseng1, Carlos A. Garcia Rosales1, and Yongjin (James) Kwon2,*
1 Department of Industrial, Manufacturing and Systems Engineering
The University of Texas at El Paso
500 W. University Ave.
El Paso, TX 79968, USA
2 Department of Industrial Engineering
Ajou University
Suwon, 443-749, Republic of Korea
* Corresponding author's e-mail: [email protected]
To date, wind power has become popular due to climate change, greenhouse gases, and diminishing fossil fuels. Although
wind turbine technology for electricity generation is already mature, industry is looking to achieve the best utilization of
wind energy in order to fulfill the electrical needs of cities at a very affordable cost. In this paper, a method entitled the
Cluster Identification Algorithm (CIA) and an optimization approach called a Multi-Objective Genetic Algorithm (MOGA)
have been developed. The main objective is to maximize the power and the efficiency while minimizing the cost driven by
the size and quantity of wind turbines installed on non-flat terrains (i.e., terrain with varying heights). The fitness functions
evaluate different population sizes and generation numbers to find the best options. Necessary assumptions are made in
terms of wind directions, turbine capacities, and turbine quantities. Furthermore, this study considers how the downstream
decay model from wind energy theory describes the relationship between the wind turbines positioned upstream and the
subsequent ones. Finally, a model that relates the layout of a wind farm to an optimal combination of efficiency, power, and
cost is suggested. A case study that addresses the three-dimensional terrain optimization problem using the combination of
the CIA and MOGA algorithms is presented, which validates the proposed approach. The methodology is expected to help
solve similar problems that occur in the renewable energy sector.
Keywords: wind turbine; cluster identification algorithm (CIA); multi-objective genetic algorithm (MOGA); optimization
of wind farm layout; wind energy
(Received on November 29, 2013; Accepted on June 10, 2014)
1. INTRODUCTION
Currently, wind energy is receiving considerable attention as an emission-free, low-cost alternative to fossil fuel. It has a
wide range of applications, such as battery charging, mobile power generation, and auxiliary power sources for ships,
houses, and buildings. In terms of large, grid-connected arrays of turbines, it is becoming an increasingly important source
of commercial electricity. In this paper, an optimization methodology encompassing the Cluster Identification Algorithm
(CIA) and a Multi-Objective Genetic Algorithm (MOGA) is developed to optimize the wind farm layout on non-flat
terrain. The layout optimization is a multi-faceted problem: (1) maximizing the efficiency, which can be heavily affected by
aerodynamic losses; (2) maximizing the wind power generation; and (3) minimizing the cost of installation, which is
affected by the size and quantity of the wind turbines. At the same time, other important variables, including different
terrain heights, wind directions, wind speed over a period of one year, and terrain size, are taken into consideration. The
terrain is analyzed with the Cluster Identification Algorithm (CIA) because it makes it possible to determine clusters of
positions; a subset of the most suitable positions can then be selected from the total land area. Another important fact is
that wind turbine capacities and characteristics are not all the same: physical and performance characteristics like the rotor
area and the turbine height should be analyzed simultaneously. Based on an extensive review of closely related literature, it
is difficult to locate a methodology for optimal wind turbine placement that comprehensively considers the aforementioned
issues [Lei 2006, Kusiak and Zheng 2010, Kusiak and Song 2010], which has been the motivation of this research. In this
context, this paper presents the development of the optimization algorithms and the computational results of a real-world
case.
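For readers unfamiliar with downstream decay models of the kind referenced above, the sketch below implements the classical Jensen wake model as one common formulation; the paper's exact decay model and the parameter values here (thrust coefficient, decay constant) are not claimed to match and are purely illustrative.

import math

def jensen_wake_speed(u0, x, r0, ct=0.88, k=0.075):
    """Wind speed behind an upstream turbine under the Jensen model:
    u0 = free-stream speed, x = downstream distance, r0 = rotor radius,
    ct = thrust coefficient, k = wake decay constant."""
    if x <= 0:
        return u0
    return u0 * (1.0 - (1.0 - math.sqrt(1.0 - ct)) / (1.0 + k * x / r0) ** 2)

u0, r0 = 12.0, 20.0                    # m/s free stream, m rotor radius
for x in (100.0, 300.0, 500.0):        # downstream distances in meters
    u = jensen_wake_speed(u0, x, r0)
    # Available power scales with the cube of wind speed, so even modest
    # velocity deficits translate into large power losses downstream:
    print(x, round(u, 2), round((u / u0) ** 3, 3))

This cubic sensitivity is why layout optimizers such as the CIA/MOGA combination penalize placements that leave turbines in each other's wakes.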


International Journal of Industrial Engineering, 21(6), 396-407, 2014

AGENT-BASED PRODUCTION SIMULATION OF TFT-LCD FAB DRIVEN BY
MATERIAL HANDLING REQUESTS
Moonsoo Shin1, Taebeum Ryu1, and Kwangyeol Ryu2,*
1 Department of Industrial and Management Engineering
Hanbat National University
Daejeon, Korea
2 Department of Industrial Engineering
Pusan National University
Busan, Korea
* Corresponding author's e-mail: [email protected]
Thin film transistor-liquid crystal display (TFT-LCD) fabs are highly capital-intensive. Therefore, to ensure that a fab
remains globally competitive, production must take place at full capacity, with extensive utilization of resources, and must
employ just-in-time principles that require on-time delivery with minimum work-in-process (WIP). However, limited space
and lack of material handling capacity act as constraints that hamper on-time delivery to processing equipment. Therefore,
to build an efficient production management system, a material handling model should be incorporated into the system.
This paper proposes a simulation model applying an agent-based collaboration mechanism for a TFT-LCD fab, which is
driven by material handling requests. Every manufacturing resource, including equipment for processing and material
handling as well as WIP, is represented as an individual agent. The agent simulates operational behaviors of associated
equipment or WIP. This paper also proposes an event graph-based behavior model for the agent.
Keywords: TFT-LCD fab; production management; production simulation; material handling simulation; agent
(Received on December 1, 2013; Accepted on November 30, 2014)
1. INTRODUCTION
The thin film transistor-liquid crystal display (TFT-LCD) industry is naturally capital-intensive, with a typical fab requiring
an investment of a few billion dollars. A cutting-edge TFT-LCD fab contains highly expensive processing equipment
performing complicated manufacturing operations and large material handling equipment connecting this processing
equipment (Chang et al., 2009). Because idle equipment and more work-in-process (WIP) than necessary lead to high
operational costs, production must take place at full capacity, with extensive utilization of resources, and employ just-in-time principles that require on-time delivery with minimum WIP to ensure that the fab remains globally competitive
(Acharya, 2011). Thus, optimal management of production capacity is critical, and consequently, efficient production
planning and scheduling pose great challenges to the TFT-LCD industry.
Two alternative approaches are usually employed for production planning and scheduling in TFT-LCD fabs (Ko et
al., 2010): 1) optimization and 2) simulation. An optimization approach aims to find an optimal solution, which is
represented as a combination of resources and products within a given time frame, and typically applies linear
programming (LP) methods (Chung et al., 2006, Chung and Jang, 2009, Leu et al., 2010). It is difficult for a mathematical
model to sufficiently reflect dynamic field constraints, and it is challenging (albeit possible) to reformulate the
mathematical model in response to environmental changes. On the other hand, a simulation approach continuously searches
for an optimal solution by altering decision variables, such as the step target, equipment arrangement, and dispatching rules,
according to the given processing status (Choi and You, 2006). Thus, a simulation approach is more suited to a dynamic
environment than an optimization approach. However, existing approaches to production simulation for TFT-LCD fabs
restrictively implement material handling processes; consequently, they have certain limitations in their prediction power
(Shin et al., 2011).
This paper proposes a material handling request-driven simulation model for production management of a TFT-LCD
fab, aiming to implement dynamic material handling behavior. In particular, an agent-based collaboration mechanism is
applied to production simulation that provides a production manager with the capability of performing what-if analysis
on production management problems such as production planning and scheduling. Agent-based approaches have been
widely adopted to ensure collaborative decision-making (Lee et al., 2013). Every manufacturing resource, including process
and material handling equipment as well as WIP, is represented as an individual agent, and material handling request-driven
collaboration among these agents implements dynamic WIP routing. The remainder of this paper is organized as follows.
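To illustrate the flavor of such request-driven collaboration, the following minimal sketch lets WIP agents issue transport
requests and vehicle agents respond with estimated arrival times. The class names, the bidding rule, and the one-dimensional
layout are hypothetical illustrations, not the event graph-based behavior model proposed in the paper.

# Minimal sketch of material handling request-driven agent collaboration.
# WIP agents post transport requests; vehicle agents answer with estimated
# pickup times and the earliest bidder wins. All names are hypothetical.

class VehicleAgent:
    def __init__(self, name, position):
        self.name, self.position, self.busy_until = name, position, 0.0

    def bid(self, now, pickup):
        # Estimated pickup time: finish the current task, then travel
        # at 1 distance unit per time unit.
        return max(now, self.busy_until) + abs(self.position - pickup)

class WIPAgent:
    def __init__(self, lot_id, location, destination):
        self.lot_id, self.location, self.destination = lot_id, location, destination

    def request_transport(self, now, vehicles):
        # Collaboration step: choose the vehicle promising the earliest pickup.
        best = min(vehicles, key=lambda v: v.bid(now, self.location))
        arrival = best.bid(now, self.location) + abs(self.location - self.destination)
        best.busy_until, best.position = arrival, self.destination
        return best.name, arrival

vehicles = [VehicleAgent("OHT-1", 0.0), VehicleAgent("OHT-2", 5.0)]
for t, lot in [(0.0, WIPAgent("LOT-A", 4.0, 9.0)), (1.0, WIPAgent("LOT-B", 1.0, 2.0))]:
    print(lot.lot_id, *lot.request_transport(t, vehicles))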
International Journal of Industrial Engineering, 21(6), 408-420, 2014

LOWER AND UPPER BOUNDS FOR MILITARY DEPLOYMENT PLANNING CONSIDERING COMBAT
Ivan K. Singgih1 and Gyu M. Lee1,*
1 Department of Industrial Engineering, Pusan National University, Busan, Korea
* Corresponding author's e-mail: [email protected]
In the military deployment planning problem (DPP), troops and cargoes are transported from source nodes to destination
nodes under various constraints, such as supply availability, demand satisfaction, and the availability of the required
multimodal transportation assets. Enemies may be present at several locations to block the transportation of troops and
cargoes to the destinations. To reach the destinations, the troops must engage these enemies in combat, which causes troop
losses, so additional troops may need to be transported to satisfy the demands. The use of multiple transportation modes
leads to the introduction of subnodes and subarcs in the graphical representation. A mixed integer programming (MIP)
formulation is proposed, which is classified as a fractional program. A solution method that calculates the lower and upper
bounds is developed, and the gap between the bounds is calculated. Computational results are provided and analyzed.
Keywords: military deployment planning problem; multimodal transportation
(Received on December 1, 2013; Accepted on September 26, 2014)
1. INTRODUCTION
The military DPP deals with the transportation of troops and cargoes from sources to destinations using transportation
assets so as to satisfy the demand requirements at the destinations. Transportation assets of a single mode or of multiple
modes can be used. In practice, multimodal transportation assets are needed for geographical locations that cannot be
reached by assets of a single mode. The use of multimodal transportation assets requires unloading troops and cargoes from
the assets of one mode and loading them onto the assets of another mode. Each unit of troops or cargo must be transported
to its destination before a certain due date in order to support military activities in peace or war situations. Late
deliveries are undesirable, so penalties are charged for them. However, enemy troops exist between some nodes and block
the transportation of troops and cargoes. To move troops and cargoes between nodes where enemies exist, the troops must
engage the enemies in combat, and each combat reduces the size of both the troops and the enemies. The objective function
minimizes the costs of transportation, transfer, and inventory of troops and cargoes, procurement and inventory of
transportation assets, the number of troops lost, and the penalties for late deliveries, with each part of the objective
associated with a certain weight. Several constraints, namely the availability of supplies and the flow balance of troops,
cargoes, and transportation assets, must be satisfied. The multimodal transportation assets used to transport the troops
and cargoes are shown in Figure 1.
Some studies on the DPP using multimodal transportation assets have been conducted. A multimodal DPP for military
purposes was formulated by Singgih and Lee (2013), who introduced a graphical representation of subnodes and subarcs to
express the nodes and arcs when multimodal transportation assets are used. They formulated the problem as an MIP, obtained
solutions using LINGO, and analyzed the characteristics of the problem using sensitivity analysis. A large-scale
multicommodity, multimodal network flow problem with time windows was solved by Haghani and Oh (1996), who proposed a
heuristic that exploits an inherent network structure of the problem with a set of constraints, together with an interactive
fix-and-run heuristic, to solve a very complex problem in disaster relief management. A new large-scale analytical model
was developed by Akgun and Tansel (2007) and solved using CPLEX; the use of relaxation and restriction methods enables the
model to find a solution in a shorter time. Studies on multicommodity freight flows over a multimodal network were reviewed
by Crainic and Laporte (1997).
The Lanchester combat model is a set of differential equation models that describe the change in force levels during the
combat process (Caldwell et al., 2000). Lanchester differential equation models provide insight into the dynamics of combat
and supply information for addressing critical operational problems. Proposed by F. W. Lanchester in 1914, the Lanchester
combat models have been used in various studies. Kay (2005) explained and gave examples
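For reference, the best-known member of this family is the Lanchester square law, dx/dt = -a*y and dy/dt = -b*x, where x and
y are the opposing force levels and a and b are attrition coefficients. The following minimal numerical sketch integrates
these equations; the coefficient and force values are arbitrary illustrations.

# Euler integration of the Lanchester square law: dx/dt = -a*y, dy/dt = -b*x.
# x and y are force levels; a and b are attrition coefficients (illustrative).
def lanchester(x, y, a=0.05, b=0.08, dt=0.1):
    t = 0.0
    while x > 0 and y > 0:
        x, y = x - a * y * dt, y - b * x * dt
        t += dt
    return t, max(x, 0.0), max(y, 0.0)

t, x, y = lanchester(1000.0, 800.0)
print(f"combat ends at t={t:.1f}: x={x:.0f}, y={y:.0f}")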
International Journal of Industrial Engineering, 21(6), 421-435, 2014

ANALYSIS OF SUPPLY CHAIN NETWORK BY RULES OF ORIGIN IN FTA ENVIRONMENT
Taesang Byun1, Jisoo Oh2, and Bongju Jeong2,*
1 Sales Planning Team, Nexen Tire Inc., Bangbae-dong 796-27, Seoul, Korea
2 Department of Information and Industrial Engineering, Yonsei University, 50 Yonsei-ro 120-749, Seoul, Korea
* Corresponding author's e-mail: [email protected]

This paper presents a supply chain in a Free Trade Agreement (FTA) environment governed by rules of origin and analyzes it
using profit analysis and supply chain planning. The proposed supply chains follow the rules of origin under the wholly
obtained, substantial transformation, and third-country processing criteria, and they can be used to obtain non-tariff
benefits according to the rules of origin. To evaluate the validity of the proposed supply chains, we construct profit
models and show that optimal sales prices can maximize net profits. The profit model encompasses the structure of the
supply chain, which enables decision-makers to make strategic decisions on the evaluation and selection of efficient FTAs.
Using the output of the profit models, global supply chain planning models are built to maximize profit while satisfying
customer needs. A case study of a textile company in Korea illustrates how the proposed supply chain models work.
Keywords: FTA; rules of origin; supply chain management; profit analysis; supply chain planning
(Received on December 16, 2013; Accepted on October 20, 2014)
1. INTRODUCTION
In the recent global market, Free Trade Agreements (FTA) are rapidly increasing in order to maximize international trade
profits among countries. By joining an FTA, each country expects to explore and acquire new export markets, promote
industrial restructuring, and improve the relevant systems. Moreover, trade tariff concessions bring the economic effects
of inflows of overseas capital and technologies. Relaxed tariff barriers and the extensive application of rules of origin
expedite the adoption of FTAs and improve multinational production environments, because under the rules of origin
different tariff rates are applied and resolved with regard to the boundary of origins. Therefore, global companies have a
strong motivation to construct a framework for an FTA supply chain and then take advantage of it. In this research, we
propose supply chain networks according to the rules of origin in an FTA environment. We investigate the profit structure
of a company and find the optimal selling price in an FTA supply chain. Companies can then decide on overseas production
for profit maximization and establish supply chain planning for multinational production activities. In this paper, we
pursue the profit maximization of each company involved in the FTA supply chain and try to simplify it for further
analysis. The case study shows how a Korean textile company can take advantage of non-tariff concessions in an FTA
environment.
Although much previous literature addresses various issues of the FTA environment, mostly its economic impacts, few
studies have examined it from the viewpoint of the supply chain network. Not surprisingly, some researchers are interested
in competitiveness gains under FTAs (Weiermair and Supapol (1993), Courchene (2003), and Seyoum (2007)). Regarding the
rules of origin, many researchers have considered the benefit of using them in FTA environments to maximize the profit of
companies (Estevadeordal and Suominen (2004), Bhavish et al. (2007), Scott et al. (2007), and Drusilla et al. (2008)). In
terms of pricing policy, Zhang (2001) formulated a profit-maximization model for choosing the location of a delivery center
considering customer demand and the selling price of a product, and Manzini et al. (2006) developed mathematical
programming models for designing a multi-stage distribution system that is flexible and profit-maximizing. Hong and Lee
(2013) and Lee (2013) suggested the price and guaranteed lead time of a supplier that offers a fixed guaranteed lead time
for a product. Savaskan et al. (2004) modeled the relationship between producer, retailer, and third party in a recycling
environment and developed a profit model for each member. On the other hand, Zhou et al. (2007) tried to guarantee the
profits of all supply chain members using a profit model that considers order quantity and selling price.
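As a toy illustration of how a tariff enters such price optimization, assume a linear demand q = a - b*p, so that profit is
(p - c - tariff)(a - b*p) and the optimal price follows from the first-order condition. This is not the paper's profit
model, and all parameter values are hypothetical.

# Illustrative price optimization: profit(p) = (p - c - tariff) * (a - b*p),
# with the per-unit tariff waived when FTA rules of origin are satisfied.
def optimal_price(a, b, unit_cost, tariff=0.0):
    c = unit_cost + tariff
    p = (a / b + c) / 2.0          # first-order condition: p* = (a/b + c) / 2
    return p, (p - c) * (a - b * p)

for label, tariff in [("with tariff", 8.0), ("FTA origin, tariff-free", 0.0)]:
    p, profit = optimal_price(a=100.0, b=2.0, unit_cost=20.0, tariff=tariff)
    print(f"{label}: p*={p:.2f}, profit={profit:.2f}")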

International Journal of Industrial Engineering, 22(1), 1-10, 2015


APPLICATION OF INTEGRATED SUSTAINABILITY ASSESSMENT: CASE STUDY OF A SCREW DESIGN
Zahari Taha1, H. A. Salaam2,*, S. Y. Phoon1, T.M.Y.S. Tuan Ya3 and Mohd Razali Mohamad4
1 Faculty of Manufacturing Engineering, 2 Faculty of Mechanical Engineering, Universiti Malaysia Pahang, Pekan, Pahang 26600, Malaysia
* Corresponding author's e-mail: [email protected]
3 Department of Mechanical Engineering, Universiti Teknologi PETRONAS, Bandar Sri Iskandar, Tronoh, Perak 31750, Malaysia
4 Faculty of Manufacturing Engineering, Universiti Teknikal Malaysia Melaka, Hang Tuah Jaya, Durian Tunggal, 76100, Malaysia
Sustainability can be defined as meeting the needs of the present generation without compromising the ability of future
generations to meet their own needs. For politicians, it is an attempt to shape society, sustain the economy, and preserve
the environment for future generations. Balancing these three criteria is a difficult task since it involves different
output measurements. The aim of this paper is to present a new approach for evaluating sustainability at the product design
stage. Three criteria are involved in this study: manufacturing cost, carbon emission released into the air, and ergonomic
assessment. The analytic hierarchy process (AHP) is used to aggregate the outputs of the three criteria, which are then
ranked accordingly, and the highest-scoring alternative is selected as the best solution. A simple screw design is
presented as a case study.
Keywords: sustainable assessment; multi-criteria decision method (MCDM); analytic hierarchy process (AHP); screw.
(Received on November 30, 2013; Accepted on October 20, 2014)
1. INTRODUCTION
The United Nations Department of Economic and Social Affairs/Population Division projected that the world population will
increase from 6.1 billion in the year 2000 to 8.9 billion by the year 2050 (United Nations 2004). With such a large human
population, the need for consumer products will increase. Many consumers purchase multi-functional products according to
their individual preferences (Thummala, 2011).
Manufacturing companies can consider four ways to fulfill consumer demand for products. The first is to expand their
production lines or factory areas; they can then buy more equipment and hire more workers to increase productivity, and
they can explore new business by adding new products to the production line to increase company profits. The second is to
increase the number of workers and machines without expanding the factory building, which raises productivity at a minimal
cost compared to expanding the factory. The third is to give operators the opportunity to work overtime or to switch to a
24-hour production system with two or three shifts, which also gives the operators a chance to increase their income for a
better living. Lastly, companies can outsource the manufacturing of some components; the difficulty here lies in ensuring
the exact quality required by the customers and the capability of the third-party company to deliver those components to
the customers on time. Whichever way they choose, they need to consider manufacturing, environmental, and social costs.
Theoretically, expanding the factory will increase productivity, but the investment cost is very high, and it can lead to
serious environmental problems since more and more land must be used. On the other hand, failure to expand can lead to
serious social problems such as poverty, which can in turn lead to criminal activities. Likewise, allowing workers to work
overtime gives them less resting time, affecting their productivity and health.
To protect the environment for future generations, many countries around the world have introduced more stringent
environmental legislation. As a result, manufacturing companies, especially in Malaysia, are forced to abide by these new
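For reference, the AHP aggregation described in the abstract can be sketched with the common geometric-mean approximation
of the principal eigenvector of a pairwise comparison matrix. The comparison matrix and the design scores below are
hypothetical.

import numpy as np

# AHP priority weights via the row geometric-mean approximation of the
# principal eigenvector. The 3x3 matrix compares cost, carbon emission,
# and ergonomics pairwise (illustrative judgments).
A = np.array([[1.0,   3.0, 5.0],
              [1/3.0, 1.0, 2.0],
              [1/5.0, 0.5, 1.0]])

gm = np.prod(A, axis=1) ** (1.0 / A.shape[0])  # row geometric means
w = gm / gm.sum()                              # normalized criterion weights

scores = np.array([[0.6, 0.5, 0.7],   # design alternative 1
                   [0.4, 0.5, 0.3]])  # design alternative 2
print("weights:", w.round(3), "best design:", int(np.argmax(scores @ w)) + 1)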

International Journal of Industrial Engineering, 22(1), 11-22, 2015



GLOBAL SEARCH OF GENETIC ALGORITHM ENHANCED BY MULTI-BASIN DYNAMIC NEIGHBOR SAMPLING
Misuk Kim1 and Gyu-Sik Han2,*
1 Department of Industrial Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 151-742, Republic of Korea
2 Division of Business Administration, Chonbuk National University, 567 Baekje-daero, Deokjin-Gu, Jeonju-si, Jeollabuk-do 561-756, Republic of Korea
* Corresponding author's e-mail: [email protected]
We propose an enhanced genetic algorithm to find a global optimal solution without derivative information. A new neighbor
sampling method driven by a multi-basin dynamics framework is used to divert efficiently from one local optimum to another.
The method investigates rectangular-box regions, constructed by dividing the interval of each axis in the search domain
based on information about the constructed multi-basins, and then finds a better local optimum. This neighbor sampling and
the local search are repeated alternately throughout the entire search domain until no better neighboring local optimum can
be found. We improve the quality of solutions by applying a genetic algorithm with the resulting point as an initial
population generator. We perform two kinds of simulations, benchmark problems and a financial application, to verify the
effectiveness of our proposed approach, and compare the performance of our proposed method with that of direct search,
genetic algorithm, particle swarm optimization, and multi-starts.
Keywords: genetic algorithm; global optimal solution; multi-basin dynamic neighbor sampling; Heston model
(Received on November 27, 2013; Accepted on September 02, 2014)
1. INTRODUCTION
Many practical scientific, engineering, management, and finance problems can be cast as global optimization problems
[Armstrong (1978), Conn et al. (1997), Cont and Tankov (2004), Goldstein and Gigerenzer (2009), Lee (2005), Modrak (2012),
Shanthi and Sarah (2011)]. From the complexity point of view, global optimization problems belong to the hard problem
class, in the sense that the computational time and cost required to solve them increase exponentially with the input size
of the problem. In spite of these difficulties, various heuristic algorithms have been developed to reduce the
computational time and cost of solving them. The classical smooth methods are optimization techniques that require
objective functions that behave smoothly, because the methods use the gradient, the Hessian, or both. Mathematically, these
methods are well established, and some smooth optimization problems are solved quickly. However, derivative information is
not available in most real-world optimization problems, which are large and complicated, so more time and cost are required
to find solutions. Stochastic heuristics such as the genetic algorithm, direct search, simulated annealing, particle swarm
optimization, and clustering methods are other popular approaches that have proved to work well for many problems that are
impossible to solve using classical methods [Gilli et al. (2011), Kirkpatrick et al. (1983), Michalewicz and Fogel (2004),
Törn (1986), Wang et al. (2013)]. The performance of these heuristics depends on the problem to which they are applied and
on the initial estimate used for the optimization. One of the main drawbacks of these stochastic heuristics is that they
spend considerable computing time and cost locating a local (or improved local) optimal solution, but not a global one.
In this paper, we propose a novel enhanced genetic algorithm that incorporates and extends the basic framework of Lee
(2007), a deterministic methodology for global optimization, to reduce the disadvantages of stochastic heuristics. The
method uses multi-basin dynamic neighbor sampling to locate an adjacent local optimum by constructing rectangular-box
regions that approximate multi-basins of convergence in the search space. By alternating this neighbor sampling with the
local search, we try to improve and accelerate the search for better local optima. The resulting point is then used as an
ancestor of the initial descendant populations to enhance the global search of the genetic algorithm. We also compare the
performance of conventional heuristic global optimization algorithms with that of our proposed method.
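A minimal sketch of the seeding idea, a real-coded genetic algorithm whose initial population contains a previously found
point, is given below. It omits the multi-basin neighbor sampling that is the core of the proposed method; the test
function and parameter values are illustrative only.

import random

def f(x):  # sphere test function; global optimum at (0, 0)
    return x[0] ** 2 + x[1] ** 2

def ga(seed_point, pop_size=30, gens=100, lo=-5.0, hi=5.0):
    # Seed the population with a locally improved point plus random points.
    pop = [seed_point] + [[random.uniform(lo, hi) for _ in range(2)]
                          for _ in range(pop_size - 1)]
    for _ in range(gens):
        pop.sort(key=f)
        elite = pop[:pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            children.append([(ai + bi) / 2 + random.gauss(0, 0.1)  # crossover + mutation
                             for ai, bi in zip(a, b)])
        pop = elite + children
    return min(pop, key=f)

best = ga(seed_point=[1.0, -1.0])
print("best:", [round(v, 4) for v in best], "f:", round(f(best), 6))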
International Journal of Industrial Engineering, 22(1), 23-34, 2015

THE IMPACT OF RELATIONSHIP STRATEGIES ON SURVIVABILITY OF FIRMS INSIDE SUPPLY NETWORKS
Mohamad Sofitra1,2,*, Katsuhiko Takahashi1 and Katsumi Morikawa1
1 Graduate School of Engineering, Hiroshima University, Higashi-Hiroshima, Japan 739-8527
2 Department of Industrial Engineering, Universitas Tanjungpura, Pontianak, Indonesia
* Corresponding author's e-mail: [email protected]

A relationship strategy, which engages firms with each other, mainly serves to achieve a firm's goals, one of which is to
prolong the firm's survival in the market. At the supply network (SN) level, the interactions among firms through
engagement strategies, i.e., cooperation, defection, competition, and co-opetition, are complexly interconnected and
coevolving. Because of their complexity and dynamic nature, investigating the outcomes of the coevolution of
interconnected relationship strategies is a non-trivial task. To overcome these difficulties, this paper proposes a
cellular automata simulation framework and adopts a complex adaptive supply network perspective to model the coevolution
of interconnected relationship strategies in a supply network. We aim to determine how, and under what conditions, the
survivability of firms inside supply networks is affected by the coevolution of interconnected relationship strategies. We
constructed experiments using business environment scenarios of an SN as factors and observed how different interaction
policies of firms produce network effects that impact the lifespan of firms. We found that a cooperation policy coupled
with a co-opetition policy, in a business environment that favors cooperation, can promote the lifespan of nodes at both
the individual and the SN level.
Keywords: interconnected relationships strategy; complex adaptive supply network; cellular automata; survivability.
(Received on November 29, 2013; Accepted on October 20, 2014)
1. INTRODUCTION
Each firm situated in a network needs to build relationships with other firms. A relationship strategy, which engages firms
with each other, mainly serves to achieve a firm's goals, one of which is to prolong its survival in the market. Issues in
buyer-supplier relationship strategy and its impact at the individual or dyad level have been studied for over two decades
(Choi & Wu, 2009). At the network level, however, it is recognized that particular relationship strategies (e.g.,
cooperation, defection, competition, and co-opetition) do not exist independently of each other; they are complexly
interconnected (Ritter, 2000). None of the relationships in a network are built or operate independently of the others
(Hakansson & Ford, 2002). A small shift in a particular relationship state in a given network can affect the directly
connected relationships and, in turn, the indirectly connected ones. This domino effect can result in either a minor or a
major complication at both the individual and the SN level. Moreover, firms and their relationship strategies are very
dynamic, similar to living entities that co-evolve over time (Choi, Dooley, & Rungtusanatham, 2001; Pathak et al., 2007).
Therefore, to further our understanding of the complex nature of an SN, we must extend our analysis from the individual
firm or dyadic level to the network level. At the network level of analysis, we attempt to determine how individual
strategies (i.e., cooperation, defection, competition, and co-opetition) interconnect and coevolve inside the SN and
investigate the emergent network effects.
A cooperation relationship between firms is motivated by a common goal (e.g., to solve problems, improve products, and
streamline processes) (Choi, Wu, Ellram, & Koka, 2002) and/or by resource dependency (Ritter (2000); Lee & Leu (2010)).
This type of relationship builds on teamwork through the sharing of information and resources. Conversely, a defection
relationship between firms is provoked by short-term opportunistic behavior (e.g., being lured by better contract terms
from other firms) (Nair, Narasimhan, & Choi, 2009).
A competition relationship between firms is based on the logic of economic risks (e.g., appropriation risk, technology
diffusion risk, forward integration by suppliers, and/or backward integration by buyers) that can threaten the core
competence of a firm (Choi et al., 2002). Conversely, co-opetition is a strategy employed by firms that simultaneously
mixes competitive actions with cooperative activities (Gnyawali & Madhavan, 2001). The motivation for engaging in
co-opetition
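A minimal sketch of a cellular automaton of interacting strategies is given below: each firm on a one-dimensional lattice
plays cooperate or defect against its neighbors and imitates the better-performing neighbor. The lattice, payoff values,
and imitation rule are hypothetical and are not the simulation framework proposed in the paper.

import random

# One update sweep of a 1-D cellular automaton of firms. PAYOFF[(mine, yours)]
# gives my payoff for one interaction; the values are hypothetical.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def sweep(strategies):
    n = len(strategies)
    payoff = [sum(PAYOFF[(strategies[i], strategies[(i + d) % n])]
                  for d in (-1, 1)) for i in range(n)]
    new = list(strategies)
    for i in range(n):
        # Imitation: copy the better-scoring neighbor if it beats your payoff.
        j = max(((i - 1) % n, (i + 1) % n), key=lambda k: payoff[k])
        if payoff[j] > payoff[i]:
            new[i] = strategies[j]
    return new

firms = [random.choice("CD") for _ in range(20)]
for _ in range(5):
    firms = sweep(firms)
print("".join(firms))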

International Journal of Industrial Engineering, 22(1), 35-45, 2015

DRIVERS AND OBSTACLES OF THE REMANUFACTURING INDUSTRY IN CHINA: AN EMPIRICAL STUDY
Yacan Wang1,*, Liang Zhang1, Chunhui Zhang1 and Ananda S Jeeva2
1 Department of Economics, Beijing Jiaotong University, No.3 Shangyuan Residency, Haidian District, Beijing 100044, People's Republic of China
2 Curtin Business School, Curtin University, Perth, Australia 6845
* Corresponding author's e-mail: [email protected]

Remanufacturing is one of the prioritized sectors that push sustainability forward, and it has been vigorously promoted
through two rounds of experimental programs in China. A survey of 7 Chinese remanufacturing enterprises involving 190
respondents is used to empirically identify the current situation and explore the influential factors of the
remanufacturing industry in China. The results of a principal component factor analysis indicate that enterprise strategy
factors and policy and technical factors are the major drivers of the remanufacturing industry, with the largest
contribution rates of 21.424% and 20.486%, respectively. Policy and economic factors and industry environment factors are
the major barriers, with the largest contribution rates of 29.361% and 19.690%, respectively. This is the first empirical
study to explore the influencing factors of the remanufacturing industry in China. The results provide a preliminary
reference for government and industry in further developing mechanisms to promote remanufacturing practice in China.
Keywords: remanufacturing industry; drivers; barriers; empirical study; China
(Received on November 29, 2013; Accepted on August 10, 2014)
1. INTRODUCTION
The current challenges of scarce resources and a polluted environment in China have made the circular economy a new key to
China's economic growth (Zhu & Geng, 2009). Remanufacturing, as a pillar of the circular economy, has been pushed forward
by a series of policies of the Chinese government. In 2005, the State Council issued Several Opinions of the State Council
on Speeding up the Development of Circular Economy, which included remanufacturing as an important component of the
circular economy. In 2008, the National Development and Reform Commission (NDRC) launched experimental auto-part
remanufacturing programs in 14 selected firms. In 2009, the Ministry of Industry and Information Technology (MIIT)
launched the first block of experimental machinery and electronic product remanufacturing programs in 35 selected firms
and industry agglomeration areas. In 2011, the NDRC issued Information on Further Improving the Work on Experimental
Remanufacturing Programs, which aimed to further expand the category and coverage of remanufactured products.
These experimental programs have generated some professional remanufacturing firms. Data from the China Association of
Automobile Manufacturers show that by the end of 2010, China had already built a remanufacturing capacity of 0.63 million
pieces/sets, including engines, gear boxes, steering boosters, and dynamos, plus 12 million retreaded tires. However,
remanufacturing is still in its infancy in China, encumbered by various obstacles. The Chinese government has not
established an independent and robust legal system specific to the remanufacturing industry (Wang, 2010). Furthermore,
there is no clear direction for the growth of the remanufacturing industry (Zhang et al., 2011).
Most studies on remanufacturing in China focus on the research and development (R&D) of technology and products. Extant
literature that qualitatively analyzes the drivers and barriers of remanufacturing is limited, and empirical studies are
rare. Although Zhang et al. (2011) propose different development paths based on the features of the resources input in
different phases of automobile remanufacturing development, the current situation and influential factors have not been
tested empirically. Hammond et al. (1998) explored the influential factors of the automobile remanufacturing industry in
the USA through a series of empirical investigations. Seitz (2007) empirically examined the influencing factors of
remanufacturing by interviewing a number of Original Equipment Manufacturers (OEMs). Nevertheless, owing to the different
development levels and overall environments of remanufacturing, the influential factors of the remanufacturing industry
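For reference, the principal component extraction underlying such a factor analysis can be sketched as follows; random data
stand in for the survey responses, and the contribution rates are the eigenvalue shares of the correlation matrix.

import numpy as np

# Principal component extraction on a (respondents x items) survey matrix.
rng = np.random.default_rng(0)
X = rng.normal(size=(190, 6))                  # 190 respondents, 6 items (random stand-in)
Z = (X - X.mean(axis=0)) / X.std(axis=0)       # standardize each item
R = np.corrcoef(Z, rowvar=False)               # item correlation matrix
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]              # components by decreasing variance
contribution = eigvals[order] / eigvals.sum()  # contribution (variance-explained) rates
print("contribution rates (%):", (100 * contribution).round(2))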
International Journal of Industrial Engineering, 22(1), 46-61, 2015

AN INTEGRATED FRAMEWORK FOR DESIGNING A STRATEGIC GREEN SUPPLY CHAIN WITH AN APPLICATION TO THE AUTOMOTIVE INDUSTRY
S. Maryam Masoumi K1, Salwa Hanim Abdul-Rashid1,*, Ezutah Udoncy Olugu1, Raja Ariffin Raja Ghazilla1
1 Centre for Product Design and Manufacturing (CPDM), Department of Mechanical Engineering, Faculty of Engineering, University of Malaya, 50603 Kuala Lumpur, Malaysia
* Corresponding author's e-mail: [email protected]

In today's global business, several organizations have realized that green supply chain practices provide them with
competitive benefits. In this respect, a strategically oriented view of environmental management is critical to supply
chain managers. Given the importance of this issue, an attempt has been made to develop an integrated framework for
designing a Strategic Green Supply Chain (SGSC). First, a causal relationship model is developed from the literature; this
model presents the main factors affecting decisions on prioritizing green strategies and initiatives. Second, based on this
model, a decision-making tool using the Analytic Network Process (ANP) is provided. This tool assists companies in
prioritizing environmental strategies and related initiatives in the different operational areas of their supply chain.
Finally, to provide part of the data required by this tool, a performance measurement system is developed to evaluate the
strategic environmental performance of the supply chain.
Keywords: strategic green supply chain; green supply chain design; analytical network process; environmental strategy;
environmental performance measurement
(Received on November 30, 2013; Accepted on January 2, 2015)
1. INTRODUCTION
In recent years, increased pressure from various stakeholders, such as regulators, customers, competitors, community
groups, global communities, and non-governmental organizations (NGOs), has motivated companies to initiate environmental
management practices not only at the firm level but throughout the entire supply chain (Corbett and Klassen 2006,
Gonzalez-Benito and Gonzalez-Benito 2006). This shift from implementing green initiatives at the firm level towards the
whole supply chain requires a broader development of environmental management, from the initial sources of raw material to
the end-user customers, in both the forward and the reverse supply chain (Linton et al. 2007).
Previous studies have introduced a long list of green initiatives associated with the various operational areas of supply
chains (Thierry et al. 1995, Zsidisin and Hendrick 1998, Rao and Holt 2005, Zhu et al. 2005). The highly competitive nature
of the business environment requires companies to carefully consider the outcomes of these green initiatives, focusing only
on those that are strategic to their operational and business performance. In fact, choosing the wrong green initiatives
can waste cost and effort, and may even reduce competitive advantage (Porter and Kramer 2006). In this respect, supply
chain managers have to consider only the green supply chain initiatives (GSCIs) that are strategic to their business
performance. In other words, they need to make informed decisions when selecting practices that will potentially deliver
better value and competitiveness.
Adopting the concept of strategic social responsibility defined by Porter and Kramer (2006), the term Strategic Green
Supply Chain (SGSC) in this paper refers to a green supply chain (GSC) that strategically selects and manages green
initiatives to generate sustainable competitive advantage when implemented throughout the entire chain. The term strategic
reflects a proactive, as opposed to a responsive, approach to initiating GSCIs.
According to the theory of the Natural-Resource-Based View (NRBV) developed by Hart (1995) and Hart et al. (2003), there
are three distinct kinds of green strategy: pollution prevention, product stewardship, and clean technology. Each of these
green strategies has its own drivers, which enable it to provide organizations with a specific competitive advantage. When
deciding which green strategy is most suitable for its business, a firm has to consider several determining factors.
In this study, an integrated framework is developed to assist organizations in designing an SGSC; it provides them with a
framework for selecting the most suitable green strategy for their supply chain and for aligning all of their green
initiatives with the selected strategy. This framework provides insight into the strategic importance of green initiatives
for a company and assists managers in strategically managing their green supply chain improvement programmes. The
strategic importance of green strategies to an enterprise is determined by evaluating the role of these initiatives in
meeting the requirements of
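For reference, the core ANP computation can be sketched as raising a column-stochastic supermatrix to successive powers
until its columns converge; the stable columns then give the limiting priorities. The 3x3 supermatrix below is
hypothetical.

import numpy as np

# Limit supermatrix computation at the heart of ANP. Columns of W sum to 1.
W = np.array([[0.2, 0.5, 0.3],
              [0.5, 0.1, 0.4],
              [0.3, 0.4, 0.3]])

P = W.copy()
for _ in range(100):                  # raise W to successive powers
    nxt = P @ W
    if np.allclose(nxt, P, atol=1e-10):
        break                         # columns have stabilized
    P = nxt
print("limiting priorities:", P[:, 0].round(4))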
International Journal of Industrial Engineering, 22(1), 62-79, 2015

SELECTING AN OPTIMAL SET OF KEYWORDS FOR SEARCH ENGINE ADVERTISING
Minhoe Hur1, Songwon Han1, Hongtae Kim1, and Sungzoon Cho1,*
1 Department of Industrial Engineering, Seoul National University, Seoul, Korea
* Corresponding author's e-mail: [email protected]
Online advertisers who want their website to be shown on web search pages need to bid for relevant keywords. Selecting such
keywords is challenging because advertisers must find relevant keywords with different click volumes and costs. Recent
works have focused merely on generating a list of words using semantic or statistical methodologies; however, these studies
do not guarantee that the keywords will actually be used by customers or that they will provide large traffic volume at
lower cost. In this study, we propose a novel approach that generates relevant keywords by combining search log mining with
a proximity-based approach, and then determines an optimal set of keywords with higher click volume and minimal cost.
Experimental results show that our method generates an optimal set of keywords that are not only accurate but also attract
more click volume at less cost.
Keywords: search engine advertising; ad keywords; query logs; knapsack problem; genetic algorithm
(Received on November 30, 2013; Accepted on December 29, 2014)
1. INTRODUCTION
Search engine advertising is a widely used business model in online search engine systems (Chen et al., 2008, Shih et al.,
2013). In this model, advertisers who want their ads to be displayed on the search results page bid on keywords that are
related to the context of the ads (Chen Y. et al., 2008). The ads can be displayed when the corresponding keywords are
searched and their bid prices are higher than the minimum threshold (Chen et al., 2008). This business model has been shown
to offer a much better return on investment for advertisers, because the ads are presented to target users who have
consciously made search queries using relevant keywords (Szymanski et al., 2006). Figure 1 shows an example of search
engine advertising in which advertisements are displayed on the results page following a query.
To bid on keywords, advertisers need to choose which keywords to associate with the ads that will be displayed (Ravi et
al., 2010). In general, three widely known criteria apply to good keywords. First, advertisers need to select relevant
keywords that relate closely to their advertisement, so that many potential customers will query those keywords to find
their products or services (Kim et al., 2012); this is the most important step in reducing the gap between the keywords
selected by advertisers and their potential customers (Ortiz-Cordova and Jansen, 2012). Second, among relevant keywords, it
is more desirable to choose keywords that attract a larger volume of clicks toward the advertisements (Ravi et al., 2010).
As keywords have their own click volumes in the search engine, selecting them to increase the number of clicks on the ads
is one of the critical elements of search engine marketing. Finally, when comparing a group of keywords that are equally
relevant and popular, identifying and selecting the cheaper ones is also desirable for implementing a more efficient and
effective marketing campaign with a limited budget.
However, selecting keywords manually according to these criteria is a challenging and time-consuming task for advertisers
(Abhishek and Hosanagar, 2007). For one, it is difficult to determine which keywords are relevant to the target ads. Though
advertisers generally have a good understanding of their ads, they want keywords that not only represent their ads well but
are also used by the potential customers who would ultimately be interested in the products or services they offer.
Moreover, keyword click volumes and cost-per-click are volatile over time, influenced by user search behavior in the search
engine. It is therefore not easy to increase the influx of customers to a website while reducing costs at the same time.
To overcome these problems, many approaches have been proposed, and they can be divided into two categories: (1) generating
related keywords through automatic methods, so that advertisers can find suitable keywords more easily, and (2) selecting
an optimal set of keywords to maximize objective values, such as click volume or ad effects, under budget constraints.
Though such efforts work well in their own experiments, they have several limitations when applied widely to real problems.
First, some studies offer no guarantee that the keywords will actually be queried by users. Generated keywords should be
familiar not only to advertisers but also to potential customers, so
International Journal of Industrial Engineering, 22(1), 80-92, 2015

A STUDY ON THE EFFECT OF IRRADIATION ANGLE OF LIGHT ON DEFECT DETECTION IN VISUAL INSPECTION
Ryosuke Nakajima1,*, Keisuke Shida2, and Toshiyuki Matsumoto1
1 Department of Industrial and Systems Engineering, Aoyama Gakuin University, Kanagawa, Japan
2 Department of Administration Engineering, Keio University, Kanagawa, Japan
* Corresponding author's e-mail: [email protected]

This study focuses on the difference in the visibility of defects according to the irradiation angle of the light in visual
inspection using fluorescent light, and considers the relationship between the irradiation angle and defect detection. In
the experiment, the irradiation angle of the light is treated as the experimental factor. Defects whose visibility differs
with the irradiation angle of the light are reproduced using a tablet PC, and the effect of inspection movement on defect
detection is evaluated. The results show that inspection oversights occur depending on the irradiation angle of the light,
and that as the angle formed by the line of sight and the inspection surface deviates from perpendicular, defect detection
becomes more difficult. Based on these observations, a new inspection method is proposed to replace the conventional one.
Keywords: visual inspection, peripheral vision, irradiation angle of light, inspection movement
(Received on November 30, 2013; Accepted on October 20, 2014)
1. INTRODUCTION
To prevent defective products from being overlooked, product inspection has been given as much attention as processing and
assembly in the manufacturing industries. There are two types of inspection: functional inspection and appearance
inspection. In functional inspection, the effectiveness of the product is inspected, whereas in appearance inspection,
small visual defects such as scratches, stains, surface dents, and unevenness of the coating color are inspected.
Advancements have been made in automating functional inspection because it is easy to determine whether a product is
working (Hashimoto et al., 2009). In appearance inspection, on the other hand, it is not easy to establish standards for
determining whether a product is defective, because there are many types of defects. In addition, the categorization of a
product as non-defective or defective is affected by the size and depth of the defect. Moreover, some products have
recently become more detailed and smaller, and production has shifted to high-mix, low-volume production. Thus, it is
difficult to develop technologies that can discover small defects and to create algorithms that identify different types of
defects with high precision. Therefore, appearance inspection depends on visual inspection using human senses (Kitagawa,
2001) (Kubo et al., 2009) (Chang et al., 2009).
In visual inspection it is common for defects on defective products to be overlooked, and this problem must be solved in
the manufacturing industries. Generally, visual inspection is performed under a fluorescent light, and inspectors check for
various defects by irradiating the light onto the inspection surface. The defects that are frequently overlooked have a
common feature: their visibility differs with the irradiation angle of the light (Hirose et al., 2003) (Morita et al.,
2013). Furthermore, the irradiation angle at which a defect becomes visible differs with the condition and type of defect.
Therefore, it is necessary to change the irradiation angle of the light by moving the product in order to detect various
defects.
Moreover, the inspection movements of the inspector should change according to the irradiation angle, since the visibility
of a defect is determined by the angle between the irradiation direction of the light and the inspection surface. Although
it is clear that the light should be installed in an appropriate position, the effect on defect detection of the relation
between the irradiation angle of the light and the inspection movement has not been clarified, and no one has determined
the appropriate position for the light. Therefore, rather than being installed in a consistent position, the light is
placed at either the upper front or the upper rear of the object to be inspected.

International Journal of Industrial Engineering, 22(1), 93-101, 2015

HEURISTIC RULES BASED ON A PROBABILISTIC MODEL AND A GENETIC ALGORITHM FOR RELOCATING INBOUND CONTAINERS WITH UNCERTAIN PICKUP TIMES
Xun Tong1, Youn Ju Woo2, Dong-Won Jang2, Kap Hwan Kim2,*
1 Department of Logistics and Maritime Studies, The Hong Kong Polytechnic University, Hung Hom, Hong Kong
2 Department of Industrial Engineering, Pusan National University, Busan, Korea
* Corresponding author's e-mail: [email protected]
Because the dwell times of inbound containers are uncertain and trucks request containers in a random order, many
rehandling operations are performed for containers stacked on top of the pickup containers. By analyzing dwell time data
for various groups of inbound containers, it is possible to derive a probability distribution for each group. Assuming that
the dwell times of each group of inbound containers follow a specific probability distribution, this paper discusses how to
determine the locations of rehandled inbound containers during the pickup process. The aim of this study is to minimize the
total expected number of rehandling steps for retrieving all the inbound containers from a bay. Two heuristic rules are
suggested: one obtained from a genetic algorithm, and one considering the confirmed and potential rehandlings based on
statistical models. A simulation study was performed to compare the performance of the two heuristic rules.
Keywords: container terminal; relocation; simulation; statistics; storage location
(Received on December 01, 2013; Accepted on August 10, 2014)
1. INTRODUCTION
Efficient operation of container yards is an important issue in the operation of container terminals (Ma and Kim, 2012;
Jeong et al., 2012). One of the major operational inefficiencies in container terminals comes from rehandling operations
for inbound containers. Inbound containers may be picked up after discharging only once the required administrative
procedures, including customs clearance, are finished. The pickup time of a container from a port container terminal is
determined by the corresponding consignee or the shipping liner considering various factors, such as the delivery request
for the container from the consignee, the storage charge for the container in the terminal, and the free-of-charge period.
From the viewpoint of the terminal operator, however, the pickup time of an inbound container is uncertain.
Data on inbound containers were collected from a container terminal in Busan, which has a 1,050 m quay, 11 quay cranes, 30
rubber-tired gantry cranes (RTGCs), and a total area of 446,250 m2. The terminal handled 260,761 inbound containers during
2012. The average duration of stay of an inbound container at the terminal was 5.7 days. Figure 1 illustrates the average
dwell times of inbound containers picked up by different groups of trucking companies, obtained from the data. Ryu (1998)
reported the results of a time study of various RTGC operations. According to that study, the average cycle time of a
pickup operation, performed by an RTGC in the yard to transfer an inbound container to a road truck, was 84 seconds, and
the average cycle time of a rehandling operation by an RTGC within the same bay was 74 seconds. There are 20-40 bays in a
block. An RTGC can access all the bays in a block, or even bays in neighboring blocks; however, an RTGC holding a container
does not usually move from one bay to another, so this study focuses on rehandling operations within one bay.
Because of the uncertainty of the dwell time (Kim and Kim, 2010), which is the duration of a container's stay in the yard,
rehandling is a serious problem during the pickup of inbound containers. Thus, in studies of container terminals, it is
important to minimize the total expected number of relocations during the pickup process. Instead of assuming that the
pickup order of containers is completely unknown, it may be possible to reduce the number of relocations by analyzing some
attributes of the containers and utilizing the results. Figure 1 illustrates that the average dwell times of inbound
containers picked up by trucks from different groups of companies differ significantly from each other. The figure suggests
that, by analyzing data on pickup times, various information useful for reducing the number of rehandles can be derived.
Voyages of vessels, vessel carriers, and shippers may be attributes to be used for the
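For instance, if dwell times are assumed exponential, the probability that a container lower in the stack is requested
before a container above it has the closed form rate_lower / (rate_lower + rate_upper), which yields a rough indicator of
the rehandling burden of a stack. The rates below are hypothetical, and the estimate ignores where relocated containers are
placed.

# Crude estimate of expected rehandles in one stack under exponential dwell
# times: container u stacked above l must be moved if l is requested first,
# which happens with probability rate_l / (rate_l + rate_u).
def expected_rehandles(rates_top_to_bottom):
    total = 0.0
    for i, upper in enumerate(rates_top_to_bottom):
        for lower in rates_top_to_bottom[i + 1:]:
            total += lower / (lower + upper)  # P(lower requested before upper)
    return total

# Stack of four containers, top first; a higher rate means earlier pickup.
print(round(expected_rehandles([1 / 2.0, 1 / 8.0, 1 / 3.0, 1 / 9.0]), 2))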
International Journal of Industrial Engineering, 22(1), 102-116, 2015

ADAPTIVITY OF COMPLEX NETWORK TOPOLOGIES FOR DESIGNING RESILIENT SUPPLY CHAIN NETWORKS
Sonia Irshad Mari1, Young Hae Lee1,*, Muhammad Saad Memon1, Young Soo Park2, Minsun Kim2
1 Department of Industrial and Management Engineering, Hanyang University, Ansan, Gyeonggi-do, Korea
2 Korea National Industrial Convergence Center, Korea Institute of Industrial Technology, Korea
* Corresponding author's e-mail: [email protected]

Supply chain systems are becoming more complex and dynamic as a result of globalization and the development of information
technology. This complexity is characterized by an overwhelming number of relations and their interdependencies, resulting
in highly nonlinear and complex dynamic behaviors. Supply chain networks grow and self-organize through complex
interactions between their structure and function. The complexity of supply chain networks creates unavoidable difficulty
in prediction, making them hard to manage and control using a linearized set of models. The aim of this article is to
design a resilient supply chain network from the perspective of complex network topologies. Various resilience metrics for
supply chains are developed based on complex network theory, and a resilient supply chain growth algorithm is then
developed for designing a resilient supply chain network. An agent-based simulation analysis is carried out to test the
developed model against the resilience metrics, and the results of the proposed resilient supply chain growth algorithm are
compared with major complex network models. The simulation results show that a supply chain network can be designed based
on complex network theory, especially as a scale-free network, and that the proposed model is more suitable than general
complex network models for the design of a resilient supply chain network.
Keywords: supply chain network, resilient supply chain, disruption, complex network, agent-based simulation
(Received on December 1, 2013; Accepted on January 02, 2015)
1. INTRODUCTION
The development of information technology and increasing globalization make supply chain systems more dynamic and complex.
Today's supply chain is a complex network of interrelated entities, including many suppliers, manufacturers, retailers, and
customers. The concept of considering the supply chain as a supply network has been suggested by many researchers (Surana
et al., 2005). It has also been argued that the concepts of complex systems, particularly complex networks, should be
incorporated into the design and analysis of supply chains (Choi et al., 2001; Pathak et al., 2007). A supply chain is a
complex network with an overwhelming number of interactions and interdependencies among different entities, processes, and
resources. A supply chain network is highly nonlinear, shows complex multi-scale behavior, has a structure spanning several
scales, and evolves and self-organizes through a complex interplay of its structure and function. However, the sheer
complexity of supply chain networks, with its inevitable lack of predictability, makes them difficult to manage and control
using the assumptions underlying a linearized set of models (Surana et al., 2005). The concept of the supply chain as a
logistics system has therefore changed from a linear structure to a complex system (Wycisk et al., 2008), and this new
supply network concept is more complex than the simple supply chain concept. Supply networks embody the mess and complexity
of networks, including reverse loops, two-way exchanges, and lateral links, and they embrace a comprehensive, strategic
view of resource management, acquisition, development, and transformation. Recently, many researchers have worked on
developing resilient supply chain networks (Bhattacharya et al., 2012; Klibi et al., 2012; Kristianto et al., 2014;
Zeballosa et al., 2012).
Generally, supply networks exhibit complex dynamic behaviors and are highly nonlinear. They grow and self-organize through
complex connections between their structure and function, and because of this complexity it is very difficult to control
and manage a supply network. Supply networks therefore require robustness to cope with disruption risks, and they should
also be resilient enough to bounce back to their original state after such disruptions (Christopher et al., 2004).
Furthermore, the instability of today's business organizations and changing market environments requires a supply network
to be highly agile, dynamic, re-configurable, adaptive, and scalable, so that it can respond effectively and efficiently to
satisfy demand. Many researchers have investigated supply networks using various static approaches, such as control theory,
programming methods, and queuing theory. For example, Kristianto et al. (2014) proposed a resilient supply chain model that
optimizes inventory and transportation routes. Klibi et al. (2012)
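For reference, the scale-free growth referred to above can be sketched with standard preferential attachment, in which each
new node connects to existing nodes with probability proportional to node degree. This is the classic Barabasi-Albert
mechanism, not the resilience-aware growth algorithm proposed in the paper.

import random

def scale_free_network(n, m=2, seed=42):
    # Each new node attaches to m distinct existing nodes chosen with
    # probability proportional to degree (via a degree-weighted node list).
    random.seed(seed)
    seeds = list(range(m))            # initial nodes available for attachment
    weighted = []                     # node list, repeated once per degree
    edges = []
    for new in range(m, n):
        chosen = set()
        while len(chosen) < m:
            pool = weighted if weighted else seeds
            chosen.add(random.choice(pool))
        for t in chosen:
            edges.append((new, t))
            weighted += [new, t]      # each endpoint gains one degree
    return edges

deg = {}
for a, b in scale_free_network(50):
    deg[a] = deg.get(a, 0) + 1
    deg[b] = deg.get(b, 0) + 1
print("nodes:", len(deg), "max degree:", max(deg.values()))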
International Journal of Industrial Engineering, 22(1), 117-125, 2015

OPTIMAL MAINTENANCE OPERATIONS USING AN RFID-BASED MONITORING SYSTEM
Sangjun Park1, Ki-sung Hong2, Chulung Lee3,*
1 Graduate School of Information Management and Security, Korea University, Seoul, Korea
2 Graduate School of Management of Technology, Korea University, Seoul, Korea
3 School of Industrial Management Engineering and Graduate School of Management of Technology, Korea University, Seoul, Korea
* Corresponding author's e-mail: [email protected]

High-technology manufacturing operations require extremely low levels of raw material shortage because the cost of
recovering from a manufacturing line down is critical. It is therefore important to determine the replacement time of a raw
material against any line-down risk. We propose an RFID monitoring and investment decision system in the context of
semiconductor raw material maintenance operations. This paper provides the framework of the RFID monitoring system, a
mathematical model to calculate the optimal replenishment time, and a simulation model for the RFID investment decision
under different risk attitudes, with an aggressive new supply notion of Make to Consume. The simulation results show that
the replenishment frequency and the value of the RFID monitoring system increase as the manufacturer's risk factor, which
reflects the degree of risk aversion, is reduced.
Keywords: RFID; maintenance operation; value of information; risk-averse attitude
(Received on December 1, 2013; Accepted on January 10, 2015)
1. INTRODUCTION
Improvements in modern information technology have been applied to diverse industries (Emigh 1999). In spite of this
progress, most enterprises still focus on their daily safety stock (SS) operations management. One key purpose of keeping
an SS is to have an immediate supply for the manufacturing line, preventing any risk of sales loss or a manufacturing line
down. However, the traditional SS program based on Make to Stock (MTS) often faces shortage and overage issues in practice
for various reasons, such as fluctuating orders, incorrectly estimated demand information, and lead times that cause a
bullwhip effect (Lee et al. 1997, Kelle and Milne 1999). A high level of SS increases inventory holding cost, while a low
level of SS increases the possibility of a supply shortage and a delivery expedition cost. For this reason, diverse
sophisticated supply chain programs have been introduced to decrease the bullwhip effect and the inventory level. The
Vendor Managed Inventory (VMI) program was introduced as one such supply chain initiative (Forrester, 1958, Cachon and
Zipkin 1999). VMI reduces or even removes the customer's SS at the manufacturing site by sharing the customer's
(manufacturer's) real-time stock information with the vendor. However, it still relies heavily on the accuracy of demand
forecasts. In particular, under a VMI agreement a vendor takes on additional supply liabilities and inventory holding costs
for a certain inventory level, compared to a traditional Order to Make (OTM) model based on firm orders. This means that
vendors have to keep additional buffer stock in their warehouses for timely VMI replenishment, in addition to the VMI
volume stored at customer manufacturing sites, considering production and replenishment lead times. The customer, in turn,
takes on the liability of consuming a certain level of inventory and the risk of keeping dead stock under the VMI agreement
when customers and vendors improperly set the SS quantity using incorrect sales forecasting information. High-technology
industries, such as the semiconductor industry, are characterized by short product life cycles and fast market changes;
thus, the overage and consumption liability can be a critical burden and risk for both vendors and customers.
For this reason, a number of studies have focused on improving supply accuracy. The use of Radio Frequency Identification
(RFID) is a recent systematic approach that has contributed to the significant growth in sharing
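For reference, the textbook safety stock and reorder point calculation from which such SS programs start can be sketched as
follows; this is not the model developed in the paper, and all numbers are hypothetical.

from math import sqrt

# Safety stock under normally distributed daily demand:
#   SS  = z * sigma_d * sqrt(L)
#   ROP = d * L + SS
# where d is mean daily demand, sigma_d its standard deviation, L the lead
# time in days, and z the service-level factor (about 1.65 for 95%).
def reorder_point(mean_daily_demand, std_daily_demand, lead_time_days, z=1.65):
    ss = z * std_daily_demand * sqrt(lead_time_days)
    return mean_daily_demand * lead_time_days + ss, ss

rop, ss = reorder_point(mean_daily_demand=40.0, std_daily_demand=8.0, lead_time_days=4)
print(f"safety stock = {ss:.1f} units, reorder point = {rop:.1f} units")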
International Journal of Industrial Engineering, 22(1), 126-133, 2015

OPTIMAL NUMBER OF WEB SERVER RESOURCES FOR JOB APPLICATIONS
Xufeng Zhao1, Syouji Nakamura2,*, and Toshio Nakagawa3
1 Department of Mechanical and Industrial Engineering, Qatar University, Doha, Qatar
2 Department of Life Management, Kinjo Gakuin University, Nagoya, Japan
3 Department of Business Administration, Aichi Institute of Technology, Toyota, Japan
* Corresponding author's e-mail: [email protected]

The main purpose of this paper is to propose optimization problems of how many web servers N should be provided for network jobs with random processing times. We first consider the case in which a single job with random time S is processed, and then take up a second case in which a number n of jobs with successive processing times are processed. In practice, the number n may not be a constant value that can be predefined, so we modify the model in the second case by supposing n to be a random variable. Next, we introduce shortage and excess costs into the models to account for the costs suffered before and after failures of the server system. We obtain the total expected cost for each model and optimize it analytically. When the physical server failure time and the job processing time are exponentially distributed, the optimal numbers that minimize the expected costs are computed numerically.
Keywords: web server; random process; multi-jobs; system failure; shortage cost.
(Received on December 1, 2013; Accepted on September 15, 2014)
1. INTRODUCTION
The web server system is one kind of net service forms in which computers process jobs without considering their
physical constitution of computations. This is of great importance in net services due to the merit of efficiency and flexibility.
For example, when increased demand in data center is required, and its facilities and resources have approached to the up
limit, this web server system can assign all available computing resources by using a flexible technique. So that resources
could be shared with multi-users and accessed by authorized devices through nets.
Queuing theory (Sundarapandian, 2009) is one study of waiting lines, which is used for predicting queue lengths and
waiting times. The queuing models have been widely applied in decision-makings on resources that should be provided, e.g.,
sequencing jobs that are processed on a single machine (Sarin et al., 1991). However, using the queuing models contains too
many algorithms which are time-consuming for the load of the systems, and in general, it would be difficult to predict exactly
the process times (Chen and Nakagawa, 2012, 2013) for jobs. Further, most models have paid little attention to failures and
reliabilities (Nakagawa, 2008; Lin, 2013) of web server systems in operations.
Many studies have addressed the problem of downtime cost after system failure (Nakagawa, 2008), which may be
considered to arise from carelessly scheduled plans. By comparing failure time of provided servers with required process time,
we pay attention for another case when process times are too far in advance of failure times, which involves a waste of
resources, as more jobs might be completed. So that we will introduce shortage and excess costs into models by considering
both costs suffered before and after server failures.
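To make the shortage/excess logic tangible, the following Monte Carlo sketch assumes a redundancy structure of our own choosing (the system survives until all N servers have failed) and hypothetical cost values; the paper derives the expected costs analytically instead.

# A minimal Monte Carlo sketch of choosing the number N of servers under
# shortage and excess costs. The redundancy structure and all cost values
# are illustrative assumptions, not the paper's analytic model.
import numpy as np

rng = np.random.default_rng(1)
lam, mu = 1.0, 0.8          # server failure rate, job completion rate (assumed)
c_srv, c_short, c_exc = 1.0, 50.0, 0.5   # per-server, shortage, excess costs

n_rep = 200_000
S = rng.exponential(1.0 / mu, n_rep)     # random job process time

def expected_cost(N):
    # Assume the system survives until all N servers have failed (parallel
    # redundancy): system lifetime = max of N i.i.d. exponentials.
    T = rng.exponential(1.0 / lam, size=(n_rep, N)).max(axis=1)
    shortage = c_short * (T < S)             # system failed before job done
    excess = c_exc * np.maximum(T - S, 0.0)  # surplus lifetime is wasted
    return c_srv * N + (shortage + excess).mean()

costs = {N: expected_cost(N) for N in range(1, 11)}
best = min(costs, key=costs.get)
print({N: round(c, 2) for N, c in costs.items()})
print("optimal N:", best)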
From such viewpoints, this paper proposes optimization problems of how many web server resources should be provided for network job computations with random processing times. That is, we suppose that a web server system with N (N = 1, 2, ...) servers is available for random job processes, where N can be optimized to minimize the total expected cost
International Journal of Industrial Engineering, 21(6), 134-146, 2015
U-SHAPED ASSEMBLY LINE BALANCING WITH TEMPORARY WORKERS
Koichi Nakade1,*, Akiyasu Ito2 and Syed Mithun Ali2
1,2 Department of Civil Engineering and Systems Management
Nagoya Institute of Technology
Gokiso-cho, Showa-ku
Nagoya, JAPAN 466-8555
*Corresponding author's e-mail: [email protected]

U-shaped assembly lines are useful for the efficient allocation of workers to stations. In assembly lines, temporary workers are employed to cope with fluctuations in demand. The sets of feasible tasks for temporary workers are different from those of permanent workers, and the tasks familiar to each permanent worker also vary. For the U-shaped assembly line balancing problem under these conditions, the optimal cycle times for a given number of temporary workers and the optimal number of workers for a given cycle time are derived and compared between U-shaped line balancing and straight line balancing. We also discuss the optimal allocation for a single U-shaped line and for two U-shaped lines. In several cases, in particular when high throughput is required, it is shown numerically that the number of temporary workers in the optimal allocation for two lines is less than that in the optimal allocation for a single line.
Keywords: u-shaped line; optimal allocation; mathematical formulation; temporary workers; permanent workers
(Received on November 25, 2013; Accepted on February 26, 2015)
1. INTRODUCTION
Assembly line balancing is very important because balancing workload among workers leads to the reduction of labor costs
and increase of throughput of finished products. Therefore theory and solving method on assembly line balancing have
been developed. For example, for mixed models in a straight line, Chutima et al.(2003) have applied a fuzzy genetic
algorithm for minimizing production time and Tiacci et al. (2006) have presented a genetic algorithm for assembly line
balancing with parallel stations. In Villarreal and Alanis (2011) simulation is used to guide the improvement efforts on the
redesign of a traditional line.
In assembly line balancing, a U-shaped assembly line is effective in an allocation of workers and tasks to stations,
because more types of allocations are available compared with those in straight lines, and appropriate arrangement leads to
more throughput. Baybars (1986) has formulated a U-shaped line as a mixed integer program and proposed a heuristic
algorithm for solving. Recently, Hazir and Dolgui (2011) have proposed a decomposition algorithm. Chiang et al. (2007)
have proposed a formulation of U-shaped assembly line balancing with multiple lines, and have shown that there are the
cases that multiple lines can process with a fewer stations than a single line by numerical examples.
Temporary workers are sometimes placed in assembly lines, because the system can remain efficient by increasing or
decreasing the number of temporary workers corresponding to the fluctuation of demand. Sets of feasible tasks for
temporary workers are different from those of permanent workers. The familiar jobs among permanent workers may be also
different. In this case, it is important to allocate permanent and temporary workers to stations appropriately by considering
their abilities for different types of tasks. Corominas et al. (2008) have considered a straight line balancing with temporary
workers. Tasks which temporary workers can process is limited and time necessary for temporary workers to finish their
tasks is assumed to be longer than that for permanent workers to do those. In general, however, tasks which permanent
workers can complete in standard time are different among those workers, because their skills are different.
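As a toy illustration of allocating heterogeneous workers, the following brute-force search splits a chain of tasks into contiguous station blocks and picks the split that minimizes the cycle time. The data, the chain precedence, and the contiguity restriction are simplifying assumptions of ours, not the paper's U-shaped integer program.

# A minimal brute-force sketch of line balancing with heterogeneous workers.
# Tasks form a simple precedence chain and each worker takes a worker-specific
# time per task (None = infeasible for that worker). All data are hypothetical.
from itertools import combinations

task_time = {  # task -> {worker: processing time}
    "A": {"perm1": 3, "perm2": 4, "temp": 5},
    "B": {"perm1": 4, "perm2": 3, "temp": None},  # temp cannot do B
    "C": {"perm1": 2, "perm2": 2, "temp": 4},
    "D": {"perm1": 5, "perm2": 4, "temp": 6},
}
tasks = list(task_time)            # chain precedence: A -> B -> C -> D
workers = ["perm1", "perm2", "temp"]

def station_time(block, worker):
    times = [task_time[t][worker] for t in block]
    return None if any(x is None for x in times) else sum(times)

best = (float("inf"), None)
# Split the chain into len(workers) contiguous blocks (stations).
for cuts in combinations(range(1, len(tasks)), len(workers) - 1):
    bounds = [0, *cuts, len(tasks)]
    blocks = [tasks[bounds[i]:bounds[i + 1]] for i in range(len(workers))]
    loads = [station_time(b, w) for b, w in zip(blocks, workers)]
    if None in loads:
        continue                   # some worker cannot perform a task
    cycle = max(loads)             # cycle time = bottleneck station load
    if cycle < best[0]:
        best = (cycle, list(zip(workers, blocks)))

print("minimal cycle time:", best[0])
print("allocation:", best[1])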
In this paper, we consider a U-shaped assembly line balancing problem with a fixed number of heterogeneous permanent workers and temporary workers under precedence constraints on tasks. The model is formulated as an integer optimization program for deriving the minimal cycle time for a given number of temporary workers, or the minimal number of temporary workers under a given cycle time, and an algorithm is proposed to derive the throughput and an optimal allocation of workers and jobs to stations for all possible numbers of temporary workers. Then we compare the optimal values between U-shaped line balancing and straight line balancing in numerical examples by using the software Xpress. In addition,
International Journal of Industrial Engineering, 22(1), 147-158, 2015
A GAME THEORETIC APPROACH FOR THE OPTIMAL INVESTMENT DECISIONS OF GREEN INNOVATION IN A MANUFACTURER-RETAILER SUPPLY CHAIN
Sha Xi1 and Chulung Lee2,*
1 Graduate School of Information Management and Security
Korea University
Seoul, Republic of Korea
2 School of Industrial Management Engineering and Graduate School of Management of Technology
Korea University
Seoul, Republic of Korea
*Corresponding author's e-mail: [email protected]

With consumers' increasing awareness of eco-friendly products, manufacturers and retailers are proactive in investing in green innovation. This paper analyzes a single-manufacturer, single-retailer supply chain in which both participants are engaged in green innovation investment. Consumer demand depends on the selling price and the investment level of green innovation. We consider the effects of consumer environmental awareness, the perception difficulty of green products, and the degree of goods necessity on decision making. According to the relationship between the manufacturer and the retailer, three non-coordinated game structures (Manufacturer-Stackelberg, Retailer-Stackelberg, and Vertical Nash) and one coordinated supply chain structure are proposed. The pricing and investment level of green innovation are investigated under each of these four supply chain structures. A Retail Fixed Markup (RFM) policy is analyzed for when channel members fail to achieve supply chain coordination, and the effects of RFM on supply chain performance are evaluated. We numerically compare the optimal solutions and profits under the coordinated, Manufacturer-Stackelberg, and Retail Fixed Markup supply chain structures and provide managerial insights for practitioners.
Keywords: green supply chain management; consumer environmental awareness; product type; game theory
(Received on November 30, 2013; Accepted on February 26, 2015)
1. INTRODUCTION
As the escalating deterioration of environment in past decades, Green Supply Chain Management has attracted increasing
attention from entrepreneurs and researchers. Public pressure, such as consumer demand for eco-friendly products, first put
companies on to the thinking of greening. Nowadays, companies are proactive to invest in green innovation and regard it as a
potential competitive advantage rather than a burden. Porter (1995) explained the fundamentals of greening as a competitive
strategy for business practitioners and reported green investment may increase resource productivity and save cost.
People are more aware of environmental problems and willing to behave eco-friendly. According to the report of Cone
communications (2013), 71% of Americans take environmental factors into consideration and 45% of consumers actively
gathered environmental information about their objective products. In a meta-analysis on 83 research papers, Tully and Winer
(2013) found more than 60% consumers are willing to pay a positive premium for socially responsible products and, on
average, those consumers are willing to pay 17.3% more for these products. The increasing consumer demand of eco-friendly
products drives companies engaging in green innovation to differentiate its product (Amacher et al., 2004; Ibanez and
Grolleau, 2008, Borchardt et al., 2012). Land Rover, one of the worlds most luxurious and stylish 4x4s, has launched Ranger
Rover Evoque which is regarded as the lightest, most fuel efficient Ranger Rover to meet requirements for lower CO2
emissions and fuel economy. LG has produced a water efficient washing machine which saves 50L or more per load and uses
less detergent. Meanwhile, retailers are also engaged in investment of green innovation recently. Home Depot, an American
home improvement products retailer, conducts business in an environmentally responsible manner. Home Depot leads in
reducing greenhouse gas emissions and selecting manufacturer of eco-friendly products. For explanation of properties and
functions of eco-friendly products, Home Depot also provides leaflets, product labeling, and in-store communication, which
help consumers to know eco-friendly well.
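The Manufacturer-Stackelberg structure mentioned above can be illustrated numerically. In the sketch below, demand is assumed linear in the retail price and the green-innovation level, with a quadratic investment cost; this functional form and all parameter values are our own assumptions, not the paper's model.

# An illustrative Manufacturer-Stackelberg computation with linear demand in
# price and green-innovation level. Everything here is a hypothetical example
# of the game structure, not the paper's formulation.
import numpy as np

a, b, k = 100.0, 2.0, 1.5   # base demand, price sensitivity, green sensitivity
c, eta = 10.0, 4.0          # unit production cost, quadratic investment cost

def retailer_price(w, e):
    # Follower's best response: maximize (p - w) * (a - b*p + k*e) over p.
    return (a + k * e + b * w) / (2.0 * b)

def manufacturer_profit(w, e):
    p = retailer_price(w, e)
    demand = max(a - b * p + k * e, 0.0)
    return (w - c) * demand - 0.5 * eta * e ** 2

# Leader anticipates the follower's response; grid search over (w, e).
ws = np.linspace(c, 60.0, 501)
es = np.linspace(0.0, 20.0, 401)
W, E = np.meshgrid(ws, es)
profit = np.vectorize(manufacturer_profit)(W, E)
i, j = np.unravel_index(np.argmax(profit), profit.shape)
w_star, e_star = W[i, j], E[i, j]
print(f"wholesale price w* = {w_star:.2f}, green level e* = {e_star:.2f}")
print(f"retail price p*    = {retailer_price(w_star, e_star):.2f}")

Backward induction is the point of the exercise: the retailer's pricing rule is substituted into the manufacturer's profit before the leader optimizes.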
Most companies make optimal green innovation investment decisions without considering their manufacturers' or retailers' decisions. With the requirements of operational efficiency and environmental protection, companies have tried to improve the entire supply chain's performance rather than a single supply chain member's. Beamon (1999) discussed the
International Journal of Industrial Engineering, 22(1), 159-170, 2015
DYNAMIC PRICING WITH CUSTOMER PURCHASE POSTPONEMENT
Kimitoshi Sato*
Graduate School of Finance, Accounting & Law
Waseda University
Japan
*Corresponding author's e-mail: [email protected]
We consider a dynamic pricing model for a firm that sells perishable products to customers who may postpone their purchase decision to reduce their perceived risk. The firm has a competitor in the market and knows that the competitor adopts a static pricing strategy. We assume that customer arrivals follow a stochastic differential equation with delay and establish a continuous-time model to maximize the expected profit. When the probability distribution of the customers' reservation value is exponential and its parameter is constant in time, a closed-form optimal pricing policy is obtained. Then, we show the impact of the competitor's pricing policy on the optimal price sample path through a martingale approach. Moreover, we show that purchase postponement reduces the firm's total expected profit.
Keywords: revenue management; dynamic pricing; stochastic delay equation
(Received on November 28, 2013; Accepted on February 26, 2015)
1. INTRODUCTION
We consider a dynamic pricing policy of a firm that faces the problem of selling a fixed stock of products over a finite
horizon in a competitive market and knows that the competitor adopts a static pricing strategy. Such a situation can be
found everywhere. Examples include high-speed rail versus low-cost carriers, suite versus regular hotel rooms, national
versus store brands, department versus Internet shops, etc. Since the static pricing policy provides a simple and clear price
to customers, some companies (especially firms offering the high-quality product) place importance on this advantage.
In this paper, we investigate how the customer behavior of delayed purchases impacts on the pricing strategy of the firm.
Causes of delay in customer decision-making include the difficulty of selecting the product and perceived risk. Some nonpurchase customers will return to a shop or web site at intervals. Thus, the number of the present arrival customers is
affected by some of the previous arrival customers. Pricing without considering such behavior may affect the total revenue
of the firm.
Recently, various authors have considered a pricing policy with strategic customer behavior in the management science
literature (Levin et al. 2009, Liu and Zhang, 2013). The strategic customer behavior is that customers compare the current
purchasing opportunity to potential future opportunities and decide whether to purchase immediately or to wait. These
papers model customer's purchase timing so as to maximize their individual consumer surpluses. The strategic customers
take future price expectations into account in their purchase decisions.
Unlike previous works, we consider the number of customers to postpone purchases at an aggregate level, rather than at the
individual customer level. Proportions of customers who postpone the purchase vary depending only on the time of arrival.
In other words, the earlier the arrival, the more delay in purchasing the product. To take into account of such customer
behavior, we model the problem as a stochastic control problem that is driven by a stochastic differential equation with
delay.
Larssen and Risebro (2003) and Elsanosi et al. (2001) consider applications of the stochastic control problem with delay to a harvesting problem and to consumption and portfolio optimization problems, respectively. Bauer and Rieder (2005) provide conditions that enable the stochastic control problem with delay to be reduced to a problem that is easier to solve. By using these conditions, we show that our problem can be reduced to a model similar to that of Sato and Sawaki (2013), which does not take the delay into account. Then, we obtain a closed-form optimal pricing policy when the probability distribution of the reservation value is exponential. Xu and Hopp (2006) apply martingale theory to investigate the trend of optimal price sample paths in a dynamic pricing model for the exponential demand case. Xu and Hopp (2009) consider dynamic pricing in continuous time in which customer arrivals follow a non-homogeneous Poisson process; they show that the trend of the optimal price increases (decreases) when customers' willingness-to-pay increases (decreases) in time. We also apply martingale theory to study how the competitor's pricing strategy and customers' delay behavior affect the optimal price path when customers' willingness-to-pay is constant in time.
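The modeling device at the core of this approach, a stochastic differential equation with delay, can be simulated with a standard Euler-Maruyama scheme. The following sketch uses a generic linear SDDE with hypothetical coefficients; it is an illustration of the device, not the paper's demand dynamics.

# A generic Euler-Maruyama sketch for a stochastic differential equation with
# delay (SDDE), dX_t = (a*X_t + b*X_{t-tau}) dt + sigma*X_t dW_t. All
# coefficients are hypothetical.
import numpy as np

a, b, sigma, tau = -0.5, 0.3, 0.2, 1.0   # drift, delay feedback, vol, delay
T, dt = 10.0, 0.01
n, lag = int(T / dt), int(tau / dt)

rng = np.random.default_rng(42)
X = np.empty(n + 1)
X[0] = 1.0
history = np.ones(lag)                   # constant pre-history on [-tau, 0]

for i in range(n):
    # The delayed state is read from the pre-history until t > tau.
    x_delay = history[i - lag] if i < lag else X[i - lag]
    dW = rng.normal(0.0, np.sqrt(dt))
    X[i + 1] = X[i] + (a * X[i] + b * x_delay) * dt + sigma * X[i] * dW

print(f"X(T) = {X[-1]:.4f}, mean over path = {X.mean():.4f}")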
International Journal of Industrial Engineering, 22(1), 171-182, 2015
INTEGRATION OF SCENARIO PLANNING AND DECISION TREE ANALYSIS FOR NEW PRODUCT DEVELOPMENT: A CASE STUDY OF A SMARTPHONE PROJECT IN TAIWAN
Jei-Zheng Wu1, Kuo-Sheng Lin2,*, and Chiao-Ying Wu1
1 Department of Business Administration, Soochow University
56 Kueiyang St., Sec. 1, Taipei 100, Taiwan, R.O.C.
2 Department of Financial Management, National Defense University
70 Zhongyang N. Rd., Sec. 2, Taipei 112, Taiwan, R.O.C.
*Corresponding author's e-mail: [email protected]
Although the demand for smartphones has increased rapidly, the R&D and marketing of smartphones have encountered
severe competition in a dynamic environment. Most studies on new product development (NPD) have focused on the
traditional net present value method and real options analysis, which lack the flexibility required to model asymmetric
multistage decisions and flexible uncertain states. The aim of this study was to integrate scenario planning and decision tree
analysis for NPD evaluation. Through such integration, scenarios for modeling uncertainties can be generated
systematically. This study presents a case study of a Taiwanese original equipment manufacturing company for validating
the proposed model. Compared to the performance of realized decisions, the proposed analysis is more robust and
minimizes risk if the R&D resource allocation is appropriate. Two-way sensitivity analysis facilitates balancing the
probability of R&D success with the R&D cost of an R&D project becoming profitable.
Keywords: decision tree analysis; scenario planning; new product development project; influence diagram; discounted cash
flow
(Received on December 1, 2013; Accepted on February 26, 2015)
1. INTRODUCTION
Over the past decade, the mobile phone market has exhibited a substantial increase in demand; sales have increased from a
relatively small number of phones in the 1990s to 140 million today. The integration of communication, entertainment, and
business functions with the availability of simple and fashionable designs has contributed to the increasing use of mobile
communication products. New product development (NPD) projects for mobile phones often encounter resource or
budgetary limitations, resulting in limited choices of project investments. Moreover, NPD involves high risk and
uncertainties. When new product investments are financially evaluated, the most common questions are whether projects
are worth investing in and how all uncertainties can be factored into the evaluation, including the uncertainty in the
temporal variation of the product value after launch.
The net present value (NPV) method, also known as the discounted cash flow (DCF) method, is commonly used for capital budgeting and for evaluating investment in R&D projects. The traditional NPV method involves applying the risk-free rate and a risk-adjusted discount rate to discount future expected cash flows, including financial benefits and expenditures, to derive the NPV (Brandão and Dyer 2005). A project is considered investment worthy only if the NPV is positive. Although the NPV method is simple and intuitive, its applications are limited because of the unrealistic assumptions of (1) reversible investment and (2) nondeferrable decisions. According to the reversible investment assumption, an investment can be undone and incurred expenditure can be recovered (Dixit and Pindyck 1995). Furthermore, the nondeferrable decision assumption requires the R&D investment decision to be made immediately. Because it entails using only one scenario (the so-called now-or-never scenario) for decision-making, the NPV method evaluates one-stage decisions without considering contingencies or changes that reflect future uncertainties (Trigeorgis and Mason 1987).
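The contrast between the now-or-never NPV and a multistage evaluation is easy to see on a toy two-stage tree. In the sketch below, all cash flows, probabilities, and the discount rate are hypothetical, not the case study's figures.

# A minimal sketch contrasting the one-stage ("now or never") NPV with a
# two-stage decision-tree evaluation that allows abandoning after an R&D
# pilot. All numbers are hypothetical.

def npv(cash_flows, rate):
    # cash_flows[t] is the cash flow at the end of year t (t = 0 is today).
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

r = 0.10
pilot_cost, launch_cost = 20.0, 100.0
p_success = 0.5
payoff_success, payoff_failure = 300.0, 40.0   # year-2 market outcomes

# One-stage NPV: commit to pilot + launch immediately, bear both outcomes.
ev_market = p_success * payoff_success + (1 - p_success) * payoff_failure
npv_commit = npv([-pilot_cost - launch_cost, 0.0, ev_market], r)

# Decision tree: launch in year 1 only if the pilot succeeds; else abandon.
value_if_success = -launch_cost + payoff_success / (1.0 + r)   # at year 1
npv_tree = -pilot_cost + p_success * value_if_success / (1.0 + r)

print(f"commit-now NPV     : {npv_commit:.1f}")
print(f"decision-tree value: {npv_tree:.1f}")

Allowing abandonment after the pilot raises the project value, which is exactly the flexibility the one-stage NPV ignores.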
In practice, information on the reversibility, uncertainty, and timing of decisions is critical for managers in making
R&D investment decisions at the strategic level (Dixit and Pindyck 1995). An R&D project entails at least four stages: (1)
initialization, (2) outcome, (3) commercialization, and (4) market outcome (Faulkner 1996). In responding to future
uncertainties, managers require flexibility to adjust their actions by using real options, such as deferring decisions,
altering the operation scale, abandoning or switching the project, focusing on growth, and engaging in multiple interactions
(Trigeorgis 1993). Real option analysis is complementary to the NPV method, in that the total project value can be
formulated as the sum of the NPV, adjusted option value, and abandonment value (van Putten and MacMillan 2004).
Considering the real option of exercising the right to manage real assets without obligation to proceed with actions when
anticipating uncertainties, R&D project investment is based on a multistage, sequential decision-making process (Ford and
Sobek 2005).
International Journal of Industrial Engineering, 22(1), 183-194, 2015
TRADE-INS STRATEGY FOR A DURABLE GOODS FIRM FACING STRATEGIC CONSUMERS
Jen-Ming Chen1,* and Yu-Ting Hsu2
1,2 Department of Industrial Management
National Central University
300 Jhongda Road, Jhongli City, Taoyuan County, Taiwan, 32001
*Corresponding author's e-mail: [email protected]
A trade-in rebate from the manufacturer to consumers is a device commonly used by a durable goods firm to price discriminate between new and replacement buyers. It creates a segmentation effect by offering different prices to different groups of customers. This study deals with such an effect by considering three trade-in policies facing the firm: no trade-ins, trade-ins for replacement consumers with high-quality used goods, and trade-ins for all replacement consumers. This study determines the optimal pricing and/or trade-in rebate and examines the strategic choice among the three options facing the firm. We develop analytic models that incorporate key features of durable goods into the formulation, namely the deterioration rate and the quality variation of the used goods. Our research findings include: the strategic choice among the three options depends critically on these two features and the price of new goods, and the trade-ins-to-all policy outperforms the others when the deterioration rate is high and/or the new goods price is high.
Keywords: trade-ins; rebate; deterioration; utility assessment; stationary equilibrium
(Received on December 3, 2013; Accepted on February 26, 2015)
1. INTRODUCTION
An original equipment manufacturer often faces two distinct types of consumers in the market: replacement buyers and new
buyers. Especially in a durable good market, the replacement purchases represent a significant portion of the total sales. In
highly saturated markets like refrigerators and electric water heaters, the percentage of replacement purchases is between
60% and 80% of the annual sales in the United States (Fernandez, 2001). In the automobile industry, approximately half of
all new car sales involve a trade-in (Zhu, Chen, & Dasgupta, 2008; Kim et al., 2011). To increase sales and purchasing
frequency by the customers, the firm usually adopts a price discrimination approach by offering the replacement buyers a
special trade-in rebate that is referred to the firms decision of accepting a used good as partial payment for a new good. The
replacement customers will pay less for the new goods by redeemed rebates. In the cellphone industry, Apple offers
replacement customers a trade-in rebate up to $345 for an iPhone 4S and up to $356 for an iPhone 5 (www.apple.com).
Such a manufacturer-to-consumer rebate stimulates new goods sales.
This study deals with such a prevalent practice in durable goods markets. We propose analytic models for decisionmaking of optimal trade-in rebates facing the durable goods producer, especially when the replacement buyers act
strategically, that is, their replacement decision depends on the quality condition of the goods after a certain period of use.
We analyze and compare three benchmark scenarios, that is the no trade-ins, the trade-ins to consumers with high quality
used goods (denoted by trade-ins-to-high), and the trade-ins to all consumers with high and low quality used goods (denoted
by trade-ins-to-all). This study especially focuses on investigating the impacts of the two trade-ins policies on the behaviors
and actions the buyers may take, as well as the potential benefit the firm may gain among the three options. Our research
findings suggest that the strategic choice on trade-ins policies facing the firm depends critically on the deterioration rate (or
durability in a reversed measure), quality variation of the used goods, and the new goods price. We also show that as the
deterioration or quality variation increases, the magnitude of trade-in rebates increases.
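The segment effect can be illustrated with a stylized replacement decision. The sketch below assumes a utility linear in quality and hypothetical numbers; it is not the paper's stationary-equilibrium model.

# A small sketch of a replacement buyer's trade-in decision. Utility is
# assumed linear in product quality, and a used good retains a fraction of
# the new good's quality. All values are hypothetical illustrations.

theta = 1.0          # consumer's valuation per unit of quality
q_new = 100.0        # quality of a new good
p_new = 70.0         # price of the new good

def replaces(rebate, q_used):
    # Keep the used good: surplus = theta * q_used.
    # Replace: surplus = theta * q_new - (p_new - rebate).
    return theta * q_new - (p_new - rebate) > theta * q_used

for policy, rebate in [("no trade-ins", 0.0), ("trade-ins-to-all", 25.0)]:
    for label, frac in [("high-quality used", 0.8), ("low-quality used", 0.4)]:
        print(f"{policy:16s} | {label:17s} -> replace:",
              replaces(rebate, frac * q_new))

With these toy numbers, the rebate converts only the low-quality holders into replacement buyers, which is the segmentation effect described above.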
There are mainly two research streams dealing with trade-in rebates in durable goods markets: (i) models from the economics and marketing literature and (ii) models from the operations literature. We provide reviews of both streams. Waldman (2003) identified some critical issues facing durable goods producers, including durability choice and the information asymmetry problem. This study is related to the former but does not deal with the latter, which was one of the major research concerns in Rao, Narasimhan, and John (2009). They showed that trade-in programs mitigate the lemon problem, or equivalently the information asymmetry problem, in markets with adverse selection, and hence increase the firm's profit.
International Journal of Industrial Engineering, 22(2), 195-212, 2015
Improving In-plant Logistics: A Case Study of a Washing Machine Manufacturing Facility
Cagdas Ucar1 and Tuncay Bayrak2,*
1 Yildiz Technical University
Department of Industrial Engineering
Istanbul, Turkey
2 Department of Business Information Systems
Western New England University
Springfield, MA, 01119, USA
*Corresponding author's e-mail: [email protected]
This study presents a case study on the enhancement of in-plant logistics at a discrete manufacturing plant using lean manufacturing/logistics principles. Two independent application scenarios are presented. In the first application, we improve the operation of a supermarket (a small internal warehouse) from the ergonomics point of view by (1) placing heavy boxes on waist-level shelves, and (2) applying rolling racks/trolleys to relieve the physical load on the workers. In the second application, the logistics processes related to a new supermarket are fundamentally re-designed.
Keywords: in-plant logistics, supermarket, milkrun, ergonomics, fatigue, just-in-time production.
(Received on September 20, 2013; Accepted on September 13, 2014)
1. INTRODUCTION
Logistics activities, regardless of whether it is in manufacturing or service business, have become an important business
function as they are seen to contribute to the competitiveness of the enterprise. In such a competitive environment,
logistics activities are one of the most important factors for companies in delivering products and services in a timely
and competitive manner. In other words, the logistics service quality emerges as an important element of being able to
compete.
Logistics can be seen as in-plant logistics and out-of-plant logistics. In-plant logistics or internal logistics covers
the activities between the arrival of raw materials and the full output of the product. Out-of plant logistics or external
logistics covers the remaining activities. In recent years, the importance of in-plant logistics has increased as it is of
great importance for running production smoothly. In-plant logistics implies the co-ordination of activities within the
plant. One would agree that the elements of the in-plant logistics need to be integrated with the external logistics. For
manufacturers, managing the in-plant logistics is as important as managing the external logistics to improve the
efficiency of production activities.
Running in-plant logistics well is of great importance for businesses that have adopted just-in-time and lean manufacturing philosophies to continue functioning without problems. This study reports on the experience of redesigning the in-plant logistics operations of a washing machine manufacturing facility. How to improve in-plant logistics, within the framework of just-in-time production and lean manufacturing philosophies, is investigated from different perspectives such as ergonomics, time spent, and distance traveled. Two real-life examples are presented of how in-plant logistics activities can be improved using milkrun and supermarket approaches. The first application deals with logistics activities in terms of ergonomics. In the second application, problems with internal logistics activities are identified, and solutions are provided to minimize the time spent and distance traveled by the employees.
2. LITERATURE REVIEW
Logistics management can be defined as the part of supply chain management that plans, implements, and controls the efficient, effective forward and reverse flow and storage of goods, services, and related information between the point of origin and the point of consumption in order to meet customers' requirements (CSCMP, 2012). Kample et al. (2011) suggest that logistics is both a fundamental business activity and the underlying phenomenon that drives most other business processes.
While in-plant logistics plays a vital role in achieving the ideal balance of process efficiency and labor productivity, unoptimized in-plant logistics may present a considerable challenge for companies in all sectors of consumer goods and result in poor operations management, human error, and other problems. Thus, optimized in-plant logistics is a prerequisite for the economic operation of the factory. As pointed out by Jiang (2005), in-plant
International Journal of Industrial Engineering, 22(2), 213-222, 2015
Toll Fraud Detection of VoIP Services via an Ensemble of Novelty Detection Algorithms
Pilsung Kang1, Kyungil Kim2, and Namwook Cho2,*
1 School of Industrial Management Engineering
Korea University
Seoul, Korea
2 Department of Industrial & Information Systems Engineering
Seoul National University of Science and Technology
Seoul, Korea
*Corresponding author's e-mail: [email protected]
Communications fraud has been increasing dramatically with the development of communication technologies and the increasing use of global communications, resulting in substantial losses to the telecommunication industry. Due to the widespread deployment of voice over internet protocol (VoIP), VoIP fraud has become one of the major concerns of the communications industry. In this paper, we develop toll fraud detection systems based on an ensemble of novelty detection algorithms using call detail records (CDRs). Initially, based on actual CDRs collected from a Korean VoIP service provider over a month, candidate explanatory variables are created using historical fraud patterns. Then, a total of five novelty detection algorithms are trained on each week's data to identify toll fraud during the following week. Subsequently, fraud detection performance is improved by selecting significant explanatory variables using a genetic algorithm (GA) and by constructing an ensemble of novelty detection models. Experimental results show that the proposed framework is practically effective in that most toll fraud can be detected with high recall and precision rates. It is also found that variable selection using the GA enables us to build not only more accurate but also more efficient fraud detection models. Finally, an ensemble of novelty detection models further boosts the fraud detection ability, especially when the fraud rate is relatively low.
Keywords: toll fraud detection; novelty detection; genetic algorithm (GA); ensemble; VoIP service; call detail records (CDRs).
(Received on November 29, 2013; Accepted on July 09, 2014)
1. INTRODUCTION
Communications fraud has been dramatically increasing with the development of communication technologies and the
increasing use of global communications, resulting in substantial losses to telecommunication industry (Kou, 2004).
Moreover, due to the widespread deployment of the Voice over Internet Protocol (VoIP), the fraud of VoIP has been one of
major concerns of the communications industry. VoIP is more vulnerable to fraud attacks so its potential loss is greater than
traditional telecommunication technologies. According to the survey conducted by Communications Fraud Control
Association (CFCA, 2009), global fraud losses in 2009 are estimated to be in the range of $72 - $80 billion (USD), which is
up 34% from 2005. The top two fraud loss categories, which constitute nearly 50 percent of the total loss, can be considered
as toll fraud. Toll fraud is defined as an unauthorized use of ones telecommunications system by an unauthorized party
(Avaya, 2010), which often results in substantial additional charges for telecommunications services. Figure 1 shows a
typical toll fraud pattern. While normal traffic is activated from the normal user groups and transmitted through a VoIP
service provider and an internet telephony service provider (ITSP), toll fraud attacks result from an illegal use of
unauthorized subscriber information and/or the compromise of vulnerable telecommunication systems such as PBX and
voicemail systems.
In telecommunication industry, most fraud analysis applications have been relying on rule-based systems (Rosset,
1999). In the rule-based systems, fraud patterns are pre-defined by a set of multiple conditions, and an alert is raised
whenever any of the rules is met. Rosset et al. (1999) suggested a rule-discovery framework for fraud detection in a
traditional telecommunications environment. Ruiz-Agundez et al. (2010) proposed a fraud detection framework for VoIP
services consisting of a rule engine built over a prior knowledge base. However, relying on the knowledge of domain
experts, rule-based approaches can hardly provide an early warning effectively; they are vulnerable to unknown and
abnormal fraud patterns (Kim, 2013).
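The ensemble idea can be sketched with off-the-shelf novelty detectors trained only on normal traffic. The features below are synthetic stand-ins for CDR-derived variables, and the simple rank-averaging combiner is an assumption of ours, not necessarily the combination rule used in the paper.

# A minimal sketch of an ensemble of novelty detectors trained on normal call
# records only, in the spirit of the framework described above. The data are
# synthetic; the paper uses CDR-derived variables plus GA variable selection.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(7)
X_train = rng.normal(0, 1, size=(1000, 5))   # normal-week CDR features
X_normal = rng.normal(0, 1, size=(200, 5))   # next-week normal traffic
X_fraud = rng.normal(4, 1, size=(10, 5))     # shifted fraud pattern
X_test = np.vstack([X_normal, X_fraud])

detectors = [
    IsolationForest(random_state=0).fit(X_train),
    OneClassSVM(nu=0.05, gamma="scale").fit(X_train),
    LocalOutlierFactor(novelty=True).fit(X_train),
]

# Average rank-normalized novelty scores (higher = more anomalous).
scores = np.zeros(len(X_test))
for det in detectors:
    s = -det.score_samples(X_test)           # flip: larger means more novel
    ranks = s.argsort().argsort() / (len(s) - 1.0)
    scores += ranks / len(detectors)

flagged = np.argsort(scores)[-10:]           # top-10 most anomalous records
true_fraud = set(range(200, 210))
print("flagged indices:", sorted(flagged))
print("recall on synthetic frauds:",
      len(true_fraud & set(flagged)) / len(true_fraud))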
International Journal of Industrial Engineering, 22(2), 223-242, 2015
A Multi Depot Simultaneous Pickup and Delivery Problem with Balanced Allocation of Routes to Drivers
Morteza Koulaeian1, Hany Seidgar1, Morteza Kiani1 and Hamed Fazlollahtabar2,*
1 Department of Industrial Engineering
Mazandaran University of Science and Technology
Babol, Iran
2 Faculty of Management and Technology
Mazandaran University of Science and Technology
Babol, Iran
*Corresponding author's e-mail: [email protected]
In this paper, a new mathematical model is developed for a multi-depot vehicle routing problem with simultaneous pickup and delivery. A non-homogeneous fleet of vehicles and a number of drivers with different levels of capability are employed to service customers with pickup and delivery demands. The capabilities of drivers are considered so as to achieve a balanced distribution of travel. The objective is to minimize the total cost of routing, penalties for overworking drivers, and the fixed costs of drivers' employment. Due to the problem's NP-hard nature, two meta-heuristic approaches based on the Imperialist Competitive Algorithm (ICA) and the Genetic Algorithm (GA) are employed to solve the generated problems. Parameter tuning is conducted by the Taguchi experimental design method. The obtained results show the high performance of the proposed ICA in terms of solution quality and computational time.
Keywords: vehicle routing problem (VRP); multi-depot simultaneous pickup and delivery; imperialist competitive algorithm (ICA).
(Received on January 09, 2014; Accepted on October 19, 2014)
1. INTRODUCTION
Pickup and Delivery problem (PDP) is one of the main classes of the Vehicle Routing problem (VRP) in which a set of
routes is designed in order to meet customers pickup and delivery demands. In Simultaneous Pickup and Delivery problem
(SPDP) a fleet of vehicles originating from a distribution center should deliver some goods to customers and at the same
time collect back their excess stuff. This problem arises especially in the reverse logistics context where companies are
increasingly faced with the task of managing the reverse flow of finished goods or raw-materials (Subramanian et al.,
2010).
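The defining feasibility issue of the SPDP, that the on-board load falls with deliveries and rises with pickups, can be checked along a route as follows; the quantities are hypothetical.

# A small sketch of the core feasibility check in simultaneous pickup and
# delivery: the vehicle load along a route must never exceed capacity,
# since deliveries shrink the load while pickups grow it. Data are made up.

def route_feasible(route, delivery, pickup, capacity):
    """route: ordered customer ids; delivery/pickup: id -> quantity."""
    load = sum(delivery[c] for c in route)     # leave the depot fully loaded
    if load > capacity:
        return False
    for c in route:
        load = load - delivery[c] + pickup[c]  # drop off, then collect
        if load > capacity:
            return False
    return True

delivery = {1: 4, 2: 3, 3: 5}
pickup = {1: 2, 2: 6, 3: 1}
cap = 13

print(route_feasible([1, 2, 3], delivery, pickup, cap))  # True
print(route_feasible([2, 1, 3], delivery, pickup, cap))  # False: order matters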
Min (1989) was the first researcher to introduce the vehicle routing problem with simultaneous pickup and delivery (VRPSPD), minimizing the total travel time of the route with the vehicle capacity as the problem constraint. Dethloff (2001) and Tang and Galvão (2006) then contributed mathematical reformulations. Berbeglia et al. (2007) also introduced a general framework for modeling static pickup and delivery problems. Jin and Kachitvichyanukul (2009) generalized the three existing formulations and reformulated the VRPSPD as a direct extension of the basic VRP. On the solution technique side, Mosheiov (1998) studied the PDP with divisible demands, in which each customer can be served by more than one vehicle, and presented greedy constructive algorithms based on tour partitioning. Salhi and Nagy (1999) proposed four insertion-based heuristics, in which partial routes are constructed for some customers in basic steps and the remaining customers are then inserted into the existing routes. Dell'Amico et al. (2006) presented an exact method for solving the VRPSPD based on column generation, dynamic programming, and a branch-and-price algorithm. Bianchessi and Righini (2007) proposed a number of heuristic algorithms to solve this problem approximately in a small amount of computing time. Emmanouil et al. (2009) proposed a hybrid solution approach incorporating the rationale of two well-known meta-heuristics, namely tabu search and guided local search. Mingyong and Erbao (2010) proposed an improved differential evolution algorithm (IDE) for a general mixed integer programming model of the VRPSPD with time windows.
Hsiao-Fan Wang and Ying-Yen Chen (2012) presented a co-evolution genetic algorithm with variants of the cheapest insertion method for this kind of problem. Ran Liu et al. (2013) proposed a genetic algorithm based on a permutation chromosome, a split procedure, and local search for the VRPSPD in a home health care problem; they also proposed a tabu search method based on route assignment attributes of patients, an augmented cost function, and route re-optimization. Tao Zhang et al. (2012) developed a new scatter search and a generic genetic algorithm approach for the stochastic travel-time VRPSPD. Goksal et al. (2013) proposed a particle swarm optimization in which a local search is performed by a variable neighborhood descent algorithm for the VRPSPD. The papers reviewed so far address single-depot problems, but there are also studies considering the multi-depot vehicle routing problem (MDVRP), in which there exists more than one distribution center. Here, some
International Journal of Industrial Engineering, 22(2), 243-251, 2015
A Branch-and-Price Approach for the Team Orienteering Problem with Time Windows
Hyunchul Tae and Byung-In Kim*
Department of Industrial and Management Engineering
Pohang University of Science and Technology (POSTECH)
Pohang, Korea
*Corresponding author's e-mail: [email protected]
Given a set of vertices, each of which has its own prize and time window, the team orienteering problem with time windows (TOPTW) is the problem of finding a set of vehicle routes with the maximum total prize that satisfies the vehicle time limit and vertex time window constraints. Many heuristic algorithms have been proposed for the TOPTW; to our knowledge, however, no exact algorithm that can solve this problem optimally has yet been reported. This study proposes an exact algorithm based on the branch-and-price approach to solve the TOPTW. The algorithm finds optimal solutions for many TOPTW benchmark instances. We also apply the proposed algorithm to the team orienteering problem (TOP), which is a version of the TOPTW with the time window constraints relaxed. Unlike for the TOPTW, a couple of exact algorithms exist for the TOP; the proposed algorithm finds a larger number of optimal solutions to the TOP benchmark instances.
Keywords: team orienteering problem with time windows; branch and price; exact algorithm; column generation
(Received on October 2, 2014; Accepted on February 20, 2015)
1. INTRODUCTION
Given a weighted digraph G = (V, A), where V = \{0, 1, \ldots, n, n+1\} is a set of vertices and A is a set of arcs between the vertices, a set of customers C = \{1, \ldots, n\} may be visited by a set of identical vehicles K = \{1, \ldots, m\} that depart from the origin 0 and end at the sink n+1. A vehicle collects a prize p_i by visiting customer i. A vehicle takes a travel time t_{i,j} to traverse arc (i, j) and a service time s_i to serve i. A vehicle can visit i only within its time window [e_i, l_i] and should wait until e_i if it arrives before e_i. The total working time of each vehicle should be less than or equal to the time limit T. We assume nonnegative travel times and a complete graph for simplicity. The team orienteering problem with time windows (TOPTW) involves finding a set of vehicle routes with a maximum total prize that satisfies the vehicle time limit and vertex time window constraints. The TOPTW can be formulated as the set partitioning problem [TOPTW]. We regard a subset of customers r \subseteq C as a route if the customers in r can be visited by one vehicle. Let R be the set of all possible routes.

[TOPTW]
\max \sum_{r \in R} p_r x_r                              (1)
subject to
\sum_{r \in R} a_{i,r} x_r \le 1,  \forall i \in C        (2)
\sum_{r \in R} x_r \le m                                  (3)
x_r \in \{0, 1\},  \forall r \in R                        (4)

where a_{i,r} is 1 if route r includes customer i and 0 otherwise, and p_r represents the prize of a route r. The binary decision variable x_r is 1 if r is selected and 0 otherwise. The objective function (1) maximizes the total prize. Constraints (2) prohibit a customer from being visited more than once. Constraint (3) ensures that at most m vehicles can be used. Constraints (4) restrict x_r to be binary.
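On a tiny instance, [TOPTW] can be solved by brute force, which makes the roles of (1)-(4) visible. The data below are hypothetical, and a real branch-and-price implementation generates routes by column generation rather than enumerating R.

# A toy illustration of the set-partitioning model [TOPTW]: enumerate all
# time-window-feasible routes R for a tiny instance, then pick at most m
# routes maximizing the total prize with each customer covered at most once.
from itertools import combinations, permutations

prize = {1: 10, 2: 8, 3: 6}
t = 1.0                                   # uniform travel time (assumed)
service = {1: 1.0, 2: 1.0, 3: 1.0}
window = {1: (0, 3), 2: (2, 6), 3: (5, 9)}
T_limit, m = 10.0, 2                      # vehicle time limit, fleet size

def feasible(seq):
    clock = 0.0
    for c in seq:
        clock = max(clock + t, window[c][0])   # travel, wait for window open
        if clock > window[c][1]:
            return False
        clock += service[c]
    return clock + t <= T_limit                # return to the sink in time

# R: every customer subset visitable by one vehicle in some order.
routes = list({frozenset(seq) for k in range(1, 4)
               for seq in permutations(prize, k) if feasible(seq)})

best = (0, [])
for k in range(1, m + 1):
    for combo in combinations(routes, k):
        custs = [c for r in combo for c in r]
        if len(custs) == len(set(custs)):          # constraint (2)
            total = sum(prize[c] for c in custs)   # objective (1)
            if total > best[0]:
                best = (total, [sorted(r) for r in combo])

print("max total prize:", best[0], "routes:", best[1])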
International Journal of Industrial Engineering, 22(2), 252-266, 2015
An Inhomogeneous Multi-Attribute Decision Making Method and Application to IT/IS Outsourcing Provider Selection
Rui Qiang1 and Debiao Li2
1,2 School of Economics and Management
Fuzhou University
Fuzhou, China
Corresponding author's e-mail: [email protected]
Selecting a suitable outsourcing provider is one of the most critical activities in supply chain management. In this paper, a new fuzzy linear programming method is proposed to select outsourcing providers by formulating the selection as a fuzzy inhomogeneous multi-attribute decision making (MADM) problem with fuzzy truth degrees and incomplete weight information. In this method, the decision makers' preferences are represented as trapezoidal fuzzy numbers (TrFNs), which are obtained through pair-wise comparisons of alternatives. Based on the fuzzy positive ideal solution (FPIS) and the fuzzy negative ideal solution (FNIS), fuzzy consistency and inconsistency indices are defined by the relative closeness degrees in TrFNs. The attribute weights are estimated by solving the proposed fuzzy linear program, and the selection ranking is then determined by the comprehensive relative closeness degree of each alternative to the FPIS. An industrial IT outsourcing provider selection example is analyzed to demonstrate the implementation process of this method.
Keywords: outsourcing provider; multi-attribute decision making; production operation; fuzzy linear programming; supply chain management
(Received on August 08, 2013; Accepted on January 01, 2015)
1. INTRODUCTION
In the ever-increasing business competitiveness of today, outsourcing has become a main stream practice in global business
operations (Cai et al., 2013). Information systems outsourcing is modeled as one-period two-party non-cooperative games to
analyze the outsourcing arrangement by considering a variety of interesting characteristics, including duration, evolving
technologies, difficulty to assess, and vender fees (Elitzur and Wensley,1999, Elitzur et al., 2012). Many organizations also
attempt to enhance their competitiveness, reduce costs, increase their focus on internal resources and core activities, and
sustain competitive advantage by Information technology/ information system (IT/IS) outsourcing (Yang and Huang, 2010).
The selection of a good provider is a difficult task. Some providers that meet some selection criteria may fail in some other
criteria. Therefore, selecting the outsourcing providers may be ascribed to a multi-attribute decision making (MADM)
problems.
Currently, some integrated decision-making methods have been proposed for solving the problems of selecting
outsourcing providers. Compared to the sequential decision making based on one-dimension rules, integrated
decision-making methods yield more integrative and normative solutions based on multi-attributes (Jansen et al., 2012). For
example, Chou et al. (2006) developed a fuzzy multi-criteria decision model approach to evaluating IT/IS investments. Chen
and Wang (2009) developed the fuzzy Vlsekriterijumska Optimizacija I Kompromisno Resenje (VIKOR) method for the
strategic decision of optimizing partners choice in IT/IS outsourcing projects. Lin et al. Combining the DEMATEL, ANP,
and zero-one goal programming (ZOGP), Tsai et al. (2010) developed a MCDM approach for sourcing strategy mix decision
in IT projects. From a policy-makers perspective, Tjader et al. (2010) researched the offshore outsourcing decision-making.
(2010) proposed a novel hybrid multi-criteria decision-making (MCDM) approach for outsourcing vendor selection
combining a case study for a semiconductor company in Taiwan. Chen et al. (2011) presented the fuzzy Preference Ranking
Organization Method for Enrichment Evaluation (fuzzy PROMETHEE) to evaluate four potential suppliers using seven
criteria and four decision makers using a realistic case study. Ho et al. (2012) integrated the quality function deployment
(QFD), fuzzy set theory, and analytic hierarchy process (AHP) approach, to evaluate and select the optimal third-party
logistics service providers (3PLs). Fan et al. (2012) utilized an extended DEMATEL method to identify risk factors of IT
outsourcing using interdependent information. Buyukozkan and Cifci (2012) proposed a novel hybrid MCDM approach
based on fuzzy DEMATEL, fuzzy ANP and fuzzy Technique for Order Preference by Similarity to Ideal Solution (TOPSIS)
to evaluate green suppliers.
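The relative-closeness machinery underlying the method can be seen in its crisp TOPSIS analogue. In the sketch below, the decision matrix and weights are crisp and hypothetical; the paper instead works with trapezoidal fuzzy numbers and estimates the weights by fuzzy linear programming.

# A crisp TOPSIS sketch of the relative-closeness idea. Both the decision
# matrix and the weights below are hypothetical.
import numpy as np

# rows: outsourcing providers, columns: attributes (all benefit-type here)
X = np.array([[7.0, 8.0, 6.0],
              [9.0, 6.0, 7.0],
              [6.0, 7.0, 9.0]])
w = np.array([0.5, 0.3, 0.2])

R = X / np.linalg.norm(X, axis=0)        # vector-normalize each attribute
V = R * w                                # weighted normalized matrix
v_pos, v_neg = V.max(axis=0), V.min(axis=0)   # FPIS / FNIS analogues

d_pos = np.linalg.norm(V - v_pos, axis=1)
d_neg = np.linalg.norm(V - v_neg, axis=1)
closeness = d_neg / (d_pos + d_neg)      # relative closeness to the ideal

for i, c in enumerate(closeness, start=1):
    print(f"provider {i}: closeness = {c:.3f}")
print("ranking (best first):", list(np.argsort(-closeness) + 1))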
International Journal of Industrial Engineering, 22(2), 267-276, 2015
Multi-Criteria Model for Selection of Collection System in Reverse Logistics: A Case for End of Life Electronic Products
Md Rezaul Hasan Shumon1,*, Shamsuddin Ahmed2
1 Department of Industrial and Production Engineering
Shahjalal University of Science and Technology
Sylhet-3114, Bangladesh
*Corresponding author's e-mail: [email protected]
2 Department of Mechanical Engineering
University of Malaya, Kuala Lumpur, Malaysia
The purpose of this paper is to propose a multi-criteria model for selecting the collection system for end-of-life electronic products in a reverse supply chain. The proposed model first determines the pertinent criteria for collection system selection through a questionnaire survey and then uses the analytic hierarchy process (AHP) rating method to evaluate the priorities of the criteria and alternatives, respectively. Finally, the global weights of the criteria and the evaluation scores of the alternatives are combined to obtain the final ranking of the collection systems. The analysis demonstrates the relative importance of the criteria for evaluating the collection methods, and a real application shows the preferred collection system(s) to be selected. The newly proposed model allows decision makers to determine the most appropriate collection system(s) from the available options in the territory under consideration. Furthermore, it makes the decision process more systematic and, by using the criteria weights created in this model, reduces the considerable effort otherwise needed.
Keywords: reverse logistics; multi-criteria analysis; end-of-life electronic products; analytical hierarchy process; decision making
(Received on January 3, 2014; Accepted on October 05, 2014)
1. INTRODUCTION
Electronic waste (e-waste) management has gained a significant attention to researchers and policy makers around the
world as their through-away impact is hazardous to the physical environment. The advancing technology and
shortened product life cycle makes e-waste one of the fastest growing waste streams, creating significant risks to human
health and the environment (Yeh & Xu, 2013). Use of the reverse supply chain approach is one way of minimizing the
environmental impact of e-wastes entitled as end-of-life (EOL) electronic products (Quariguasi Frota Neto, Walther,
Bloemhof, van Nunen, & Spengler, 2009). Reverse supply chain is a process by which a manufacturer systematically
accepts the previously shipped products or parts from the point of consumption for possible reuse, remanufacturing,
recycling, or disposal (Tsai & Hung, 2009). This process provides with advantage of recycling of material resources,
development of newer technologies and creation of income-oriented job opportunities (Shumon, 2011).
Initially, the significance of this research was based on a problem confronting in Malaysia, a Southeast Asian
country, where companies and organizations are in doubt which system they should use for e-waste collection.
However, this problem is faced by other countries around the world as well. Collection of e-wastes is the first activity to
trigger the reverse supply chain as part of product recovery activities. In this regard, several approaches have been
applied by different countries like individual manufacturers buy-back program, municipalitys collection program, and
NGO and government initiatives (Chung, Lau, & Zhang, 2011; Qu, Zhu, Sarkis, Geng, & Zhong, 2013). It is
understandable that no single collection system can ensure the maximum collection of e-wastes, because it largely
depends on the geographical, social and economic conditions of the country under consideration. Some systems are well
established in developed countries but may not be economically feasible in other developing countries. Some systems
are economically feasible but are not well accepted by stakeholders. This resulted use of inappropriate methods or
systems, which ultimately lead to a lower collection rate and higher investment or operating cost. Such system(s) cannot
meet the financial objectives with respect to the investment made. Thus, there is a need for systematic approach of
selecting appropriate collection system(s) by identifying and prioritizing the pertinent criteria and evaluating the tradeoffs between strategic, economic, operational and social performance aspects. The model presented by this research
would be a useful decision making aid for the companies and organizations in any territory to rank and select the
effective and suitable method(s) for their concerned areas.
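The AHP step of such a model can be sketched as follows: derive criteria weights from a pairwise-comparison matrix via the principal eigenvector and check consistency. The matrix below is hypothetical, not the study's survey data.

# A minimal AHP sketch: priority weights from a pairwise-comparison matrix
# plus a consistency check. The comparisons are hypothetical.
import numpy as np

# Pairwise comparisons among three criteria (Saaty's 1-9 scale, reciprocal).
A = np.array([[1.0, 3.0, 5.0],
              [1/3., 1.0, 2.0],
              [1/5., 1/2., 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()                          # normalized priority weights

n = A.shape[0]
lam_max = eigvals.real[k]
CI = (lam_max - n) / (n - 1)             # consistency index
RI = 0.58                                # random index for n = 3 (Saaty)
print("weights:", np.round(w, 3))
print(f"consistency ratio CR = {CI / RI:.3f} (acceptable if < 0.10)")

Global weights then follow by multiplying each sub-criterion weight by the weight of its parent criterion, and alternatives are scored against the weighted criteria.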
Hao, Jinhui, Xuefeng, and Xiaohua (2007) investigated the collection of domestic e-waste in urban China by applying case study methods. They analyzed the four alternative collection modes that currently exist in Beijing and proposed a few other modes. The existing modes are door-to-door collection, take-back in related businesses (second-hand market), collection at recycling spots, and collection for donation; the proposed modes are (i) government to formal recycler, (ii) enterprise to formal recycler, and (iii) collectors to formal recyclers. The use of multi-criteria decision analysis
International Journal of Industrial Engineering, 22(2), 277- 291, 2015
A Fuzzy Expert System for Supporting Returned Products Strategies
H. Hosseininasab1, M. Dehghanbaghi2,*
1,2 Department of Industrial Engineering
Yazd University
Yazd, Iran
*Corresponding author's e-mail: [email protected]
A key strategic consideration in the recovery system of any product is to make proper decisions on reverse manufacturing alternatives, including both recovery and disposal options. The nature of such decisions is complex due to the uncertainty in the quality of product returns and the lack of information about the product. Consequently, the need for a correct diagnosis of recovery/disposal options for returned products necessitates the development of a comprehensive model considering all technical and non-technical parameters. Although human experts with practical experience may handle such complex problems, this procedure is time-consuming and may lead to imprecise decisions. This study presents a fuzzy rule-based system that provides a decision mechanism for ranking the recovery/disposal strategies by knowledge acquisition, within a simple reverse supply chain with a collection center, for each particular returned product. The proposed system focuses on brown goods, although it may be applied to other similar kinds of products with some changes. To achieve the objective of this study, the proposed model is used to analyze the case of a mobile phone, yielding coherent results.
Keywords: fuzzy expert system; product returns; return strategies
(Received on January 15, 2014; Accepted on December 22, 2014)

1. INTRODUCTION
In addition to the effects of ever-changing technologies, the rapid changes in the natural environment, the enforcements by
governments and the proven profitable engagement of recovery and reuse activities have influenced the way most
companies perform their business in increasing the rate of reusing returned products. The implementation of extended
producer responsibility in the light of new governmental policies, together with the growing public interest in
environmental issues, will cause Original Equipment Manufacturers (OEMs) to take care of their products after they have
been discarded by the consumer (Krikke et al., 1998). In this regard, product recovery management (PRM), proposed by
Thierry et al. (1995), serves to recover much of the economic and ecological value of products by reducing the quantity of
wastes.
There are four recovery and disposition categories for product returns including reuse/resell, product upgrade,
material recovery and waste management. Each category includes recovery/disposal alternatives. Table 1 presents the
alternatives for each category together with their explanations. Thus, we have 8 different recovery/ disposal activities when
a product is returned back to the chain: reusing, reselling, repairing, remanufacturing, refurbishing, cannibalization,
recycling and disposal. Every returned products/parts should pass one/more of these activities to be back to the second
market or to be disposed. One of the key strategic issues in product recovery management is to find a proper option for
recovery or disposal activities, as each of these activities bears its own costs.
As stated by Behret and Korugan (2009), uncertainties in the quality, quantity, and timing of the product return flow make it hard to select the best disposition alternative. Large variations in the quality of returns are a major source of uncertainty in the time, cost, and rate of the recovery process (Liu et al., 2012). Thus, it seems necessary to provide a strategic decision model for exploring the detailed quality of returned products before making recovery decisions.
This paper aims at providing a comprehensive expert system by defining the factors that most affect the ranking of the above-mentioned recovery options for product returns. The proposed model analyzes the properties of returned products to find the best recovery option(s) accurately. Although there are numerous studies in fuzzy decision making, as in Chan et al. (2003), Liu et al. (2013), Ozdaban et al. (2010), Tsai (2011) and Olugu et al. (2012), based on our findings there are only a few pieces of research in which expert and fuzzy rule-based decision systems are applied to reverse logistics issues. They focus mainly on performance measurement, the disassembly process, life cycle and recovery management (Singh et al., 2003; Meimei et al., 2004; Fernandez et al., 2008; Jayant, 2012). There are also few published studies that provide clear policies for managing and clustering returned products. Thus, we review those studies that are most relevant to the research we conduct.
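To make the flavour of such a fuzzy rule-based ranking concrete, the following minimal Python sketch scores the eight recovery/disposal options from a single fuzzified quality attribute of a return. The membership functions, rule base and rule strengths are illustrative assumptions only; the article's system acquires its rules from expert knowledge and considers many more technical and non-technical parameters.

```python
# Minimal sketch: rank recovery/disposal options by fuzzy rules over one
# normalized (0-100) quality score of a returned product. All sets, rules
# and strengths are assumptions for exposition, not the article's rule base.

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Fuzzy sets for the residual quality of the returned product.
quality_sets = {
    "low":    lambda q: tri(q, -1, 0, 50),
    "medium": lambda q: tri(q, 20, 50, 80),
    "high":   lambda q: tri(q, 50, 100, 101),
}

# Illustrative rules: each quality level favours some options with a strength.
rules = {
    "low":    {"recycling": 0.9, "disposal": 0.7, "cannibalization": 0.5},
    "medium": {"remanufacturing": 0.9, "refurbishing": 0.8, "repairing": 0.6},
    "high":   {"reusing": 1.0, "reselling": 0.9, "repairing": 0.4},
}

def rank_options(quality):
    """Aggregate rule strengths weighted by membership, then sort descending."""
    scores = {}
    for level, membership in quality_sets.items():
        weight = membership(quality)
        for option, strength in rules[level].items():
            scores[option] = scores.get(option, 0.0) + weight * strength
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    # e.g., a returned mobile phone judged to retain ~65% of its quality
    for option, score in rank_options(65):
        print(f"{option:16s} {score:.2f}")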
International Journal of Industrial Engineering, 22(2), 292-300, 2015.

Landfill Location with Expansion Possibilities in Developing Countries


Pablo Manyoma1,*, Juan P. Orejuela1, Patricia Torres2, Luis F. Marmolejo2, and Carlos J. Vidal1
1School of Industrial Engineering
Universidad del Valle
Santiago de Cali, Colombia
2School of Natural Resources and Environment Engineering
Universidad del Valle
Santiago de Cali, Colombia
*Corresponding author's e-mail: [email protected]

Municipal Solid Waste Management (MSWM) has become one of the main challenges of urban areas around the world. For developing countries, the situation is more severe due to disordered population growth, rapid industrialization, and deficient regulations, among other factors. One component of MSWM is final disposal, for which landfills are the most commonly used technology. According to a body of research, landfill location should meet the needs of all stakeholders; we therefore propose a multi-objective programming model that considers several decisions: which landfills to open, when they should be opened, and, reflecting a situation common in developing countries, the kind of capacity expansion that should be used. We present an example that reflects the conflict between two objectives: cost and environmental risk. The results show the allocation of each municipality to each landfill and the amount of municipal solid waste to be sent, among other variables.
Keywords: capacity expansion; landfill location; multi-objective programming; municipal solid waste management;
undesirable facilities.
(Received on January 7, 2014; Accepted on January 25, 2015)
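As a rough illustration of the kind of formulation the abstract describes, the following Python sketch (using the PuLP library) solves a toy weighted-sum version of the two-objective problem, with binary opening and expansion decisions and continuous waste allocations. All data, names and the weighted-sum scalarization are assumptions for exposition, not the authors' formulation, which is genuinely multi-objective.

```python
# Toy weighted-sum sketch of a landfill location model with optional capacity
# expansion; data and weights are invented for illustration only.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

municipalities = ["M1", "M2", "M3"]
sites = ["S1", "S2"]
waste = {"M1": 40, "M2": 25, "M3": 35}            # tons/period generated
base_cap = {"S1": 60, "S2": 50}                    # tons/period if opened
exp_cap = {"S1": 30, "S2": 20}                     # extra tons/period if expanded
open_cost = {"S1": 100, "S2": 80}
exp_cost = {"S1": 40, "S2": 35}
ship_cost = {("M1", "S1"): 2, ("M1", "S2"): 5, ("M2", "S1"): 4,
             ("M2", "S2"): 3, ("M3", "S1"): 6, ("M3", "S2"): 2}
env_risk = {"S1": 3, "S2": 7}                      # risk score if opened
w_cost, w_risk = 1.0, 10.0                         # assumed trade-off weights

prob = LpProblem("landfill_location", LpMinimize)
y = {s: LpVariable(f"open_{s}", cat=LpBinary) for s in sites}
e = {s: LpVariable(f"expand_{s}", cat=LpBinary) for s in sites}
x = {(m, s): LpVariable(f"x_{m}_{s}", lowBound=0)
     for m in municipalities for s in sites}

# Objective: transport + opening + expansion costs, plus weighted risk of
# the opened sites (weighted sum standing in for the two objectives).
prob += (w_cost * (lpSum(ship_cost[m, s] * x[m, s]
                         for m in municipalities for s in sites)
                   + lpSum(open_cost[s] * y[s] + exp_cost[s] * e[s]
                           for s in sites))
         + w_risk * lpSum(env_risk[s] * y[s] for s in sites))

for m in municipalities:                           # all waste must be disposed of
    prob += lpSum(x[m, s] for s in sites) == waste[m]
for s in sites:                                    # capacity, with optional expansion
    prob += lpSum(x[m, s] for m in municipalities) <= base_cap[s] * y[s] + exp_cap[s] * e[s]
    prob += e[s] <= y[s]                           # only an open landfill can expand

prob.solve()
for (m, s), var in x.items():                      # allocation of each municipality
    if var.value() and var.value() > 0:
        print(f"{m} -> {s}: {var.value():.0f} tons")
```

A full multi-objective treatment would instead trace the cost/risk Pareto frontier, for example by sweeping the weights or using an epsilon-constraint scheme.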
1. INTRODUCTION
Waste has increasingly become a major environmental concern for modern society, due to population growth, the high level of urbanization, and the mass consumption of different products (Eriksson and Bisaillon, 2011). For this reason, one of the greatest challenges in urban areas worldwide, especially in developing-country cities, is Municipal Solid Waste Management (MSWM). Even when a combination of management techniques is utilized and policies of waste reduction and reuse are applied, sanitary landfills remain necessary for any MSWM system (Moeinaddini et al., 2010).
Particularly in Latin America and the Caribbean countries, waste disposal has become a serious problem and it is
currently a critical concern. Even though some of these countries have a legal framework for waste control, very few
possess the infrastructure and human resources to enforce regulations, especially those related to recycling and disposal. In
these countries, landfills are the main alternative used to dispose of solid waste (Zamorano et al., 2009). In recent years, an important shift toward regional solutions for solid waste management has been observed: a growing number of municipalities in the region have formed associations in order to achieve significant economies of scale and better enforcement of regulatory standards (OPS-BID-AIDIS, 2010).
Nowadays, landfills are seen as engineering projects that consider the whole management cycle: planning, design,
operation, control, closure, and post-closure. There is a vital step in the first planning stage: site location. The problem of
identifying the best location must be based on many different criteria. Issues such as political stability, the existing
infrastructure in regions, and the availability of a trained workforce are critical on a macro level when making such
decisions. Once a set of feasible regions has been identified for locating a new facility, selecting the ultimate location takes place on a micro level (Gehrlein and Pasic, 2009).
Identifying and selecting a suitable site for a landfill is one of the most demanding tasks. It requires collecting and processing information on environmental, socioeconomic and operational aspects such as the distance to the site, local environmental conditions, existing patterns of land use, site access, and the potential uses of the landfill after completion, among many other features. That is why landfill location is a complex problem (O'Leary and Tchobanoglous, 2002; Geneletti, 2010).
During the past 20 years, many authors around the world have applied different approaches to address the landfill
location problem. Erkut and Moran (1991), Hokkanen and Salminen (1997), and Banias et al. (2010), among others, have
International Journal of Industrial Engineering, 22(2), 301-313, 2015

Establishing a Conceptual Model for Assessing Project Management Maturity in Industrial Companies
Seweryn Spalek
Faculty of Organization and Management
Silesian University of Technology
Gliwice, Poland
Corresponding author's e-mail: [email protected]

The number of projects undertaken by companies nowadays is significant. Therefore, there is a need to establish processes in the company that support and increase project management efficacy. In order to achieve this, companies need to know how good they are at organizational project management from several different perspectives. Knowing their strengths and weaknesses, they can improve their activities in challenging areas. Based on a critical literature review and interviews with selected companies, the article proposes a conceptual model for assessing project management maturity in industrial companies. The model is based on four assessment areas. Three of them (human resources, methods & tools, and environment) represent the traditional approach to maturity measurement, whilst the fourth, knowledge management, represents a new approach to the topic. The model was tested in over 100 companies in the machinery industry to verify its practical application and establish valid implementation results, which have not been previously explored.
Keywords: project management, model, assessment, maturity, industry, knowledge management.
(Received on November 15, 2011; Accepted on March 16, 2015)
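As a loose illustration of how scores from the four assessment areas might be aggregated into a maturity profile, consider the following Python sketch. The 1-to-5 scale, the equal default weights and the aggregation rule are assumptions for exposition only; the article's model defines its own assessment procedure.

```python
# Minimal sketch: aggregate per-area maturity scores into an overall level
# and flag the weakest area. Scale and weights are assumed, not the model's.
AREAS = ("human resources", "methods & tools", "environment",
         "knowledge management")

def maturity_profile(scores, weights=None):
    """scores: area -> level on an assumed 1 (initial) .. 5 (optimized) scale."""
    weights = weights or {a: 1.0 for a in AREAS}
    total = sum(weights[a] for a in AREAS)
    overall = sum(weights[a] * scores[a] for a in AREAS) / total
    weakest = min(AREAS, key=lambda a: scores[a])   # first candidate to improve
    return overall, weakest

if __name__ == "__main__":
    scores = {"human resources": 4, "methods & tools": 3,
              "environment": 3, "knowledge management": 2}
    overall, weakest = maturity_profile(scores)
    print(f"overall maturity: {overall:.2f}; improve first: {weakest}")
```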

1. INTRODUCTION
The need for models that could be implemented in industry is recognized by authors of publications in different areas of
expertise (Bernardo, Angel, & Eloisa, 2011; Jasemi, Kimiagari, & Memariani, 2011; Kamrani, Adat, & Azimi, 2011;
Metikurke & Shekar, 2011). The importance of new product development has been recognized from different perspectives, for example by Adams-Bigelow et al. (2006), and measured by Metikurke & Shekar (2011) and Kahn, Barczak, & Moss (2006). New product development is a laborious endeavour that must be managed properly. Therefore, industrial
companies are interested in having an efficient tool to measure how good they are when it comes to project management.
That assessment must be done in different areas, including the set of best practices as the reference.
Moreover, Kwak (2000) noticed that a company's project management maturity level influences the key performance indicators of its projects. Furthermore, Spalek (2014a, 2014b), based on his studies of industrial companies, shows that increasing the maturity level potentially reduces the costs and time of ongoing and new projects.
In fact, industrial companies are managing an increasing number of projects every year (Aubry et al., 2010). Besides
the typically project-oriented sectors such as IT and construction, companies in other industries have increasingly embraced newer project management methods (Cho & Moon, 2006; Grant &
Pennypacker, 2006; Liu, Ma, & Li, 2004; McBride, Henderson-Sellers, & Zowghi, 2004; C. T. Wang, Wang, Chu, & Chao,
2001). A good example is the machinery sector, which is very focused on the efficient development of new products that
are then used by other industries. The products of the machinery industry are divided into general-purpose products, heavy-industry machines, and their elements and components, totalling more than 200 products (ISIC, 2008). Therefore,
companies in the machinery industry are a kind of backbone of the entire economy and are located all over the world.
However, the most significant production comes from the EU (European Union), ASEAN+6 (Japan, Korea, Singapore,
Indonesia, Malaysia, Philippines, Thailand, China (including Hong Kong), Brunei, Cambodia, Laos, Burma, Vietnam,
India, Australia, New Zealand) and NAFTA & UNASUR (Canada, Mexico, USA, Argentina, Bolivia, Brazil, Chile, Colombia, Ecuador, Guyana, Paraguay, Peru, Suriname, Uruguay, Venezuela) areas (Kimura & Obashi, 2010). The main
customers of products of the machinery industry are companies from the following industries: construction, agriculture,
mining, steelworks, food and textiles.
