Abstract— The focus of this article is on designing a decision support system that optimizes maintenance and operation processes in petrochemical industries based on the Markov decision process. Since a continuous process sits at the end of the process chain, the machines in these industries form a continuous chain, and the flow through them runs without interruption from the feed to the final product. Flaws result in equipment failure, interruption of production, and even reduced product quality; in practice, any production stoppage is very costly to resume.
This research analyzes the effects that changes caused by high market demand have on equipment. The proposed formulation explicitly relates the maintenance and repair controls to the weight of lost production in case of equipment failure, under different production conditions in a continuous system, based on the Markov decision process. The model optimizes the set of maintenance controls for the different production conditions in each period, yielding the optimal state values, control values, and optimal control policy. The model has been run over periods of 12 to 72 months. After choosing the optimal control policy and determining the sample size of the surveyed statistical population, the indicators were measured for six months both before and after applying the optimal control policy; the overall system effectiveness increased from 82.7 to 87, and positive and acceptable changes were observed.

Index Terms— Decision Support System, Maintenance, Markov Decision Approach, Optimization, Overall System Effectiveness

Manuscript received December 30, 2014.
Gholam Reza Esmaeilian, Assistant Professor, Department of Industrial Engineering, Payame Noor University, Isfahan, Iran.
Abdullah Divanipour, MSc Student, Department of Industrial Engineering, Payame Noor University, Assalouyeh, Iran.
Morteza Kazemi, Assistant Professor, Department of Industrial Engineering, Shiraz University, Shiraz, Iran.
Asghar Tahan, MSc Student, Department of Industrial Engineering, Payame Noor University, Assalouyeh, Iran.

I. INTRODUCTION

In the third millennium, companies and organizations face numerous, sometimes conflicting, challenges in keeping a successful presence in the business world. Meanwhile, one of the most fundamental factors affecting the return on assets is the maintenance of equipment [1]. Equipment plays an undeniable role in product quality, and production of high quality is one of the major objectives of any system [2]. We consider a petrochemical plant with a large number of equipment items, devices, and machines that all contribute to the production process. Since planned maintenance of equipment is carried out periodically, the production process is affected and may even stop while the maintenance is conducted [3], [4]. Conversely, a sudden failure of equipment may cut production systems off. Moreover, a high production rate driven by high demand imposes extra pressure on the production systems, which itself accelerates their failure [5]. The motivation for this work comes from the need to develop optimized maintenance and operation policies for time-continuous systems, so that a correct and accurate method can be designed to support production [6]. This becomes possible by developing the concept of effectiveness using the Markov decision process, weighted by the lost production incurred when equipment fails in continuous systems [7]-[11].

Overall equipment effectiveness (OEE) in Total Productive Maintenance (TPM) can be defined, following Nakajima's definition (1998), as [12]:

OEE = availability × production rate × quality rate    (1)

Based on definition (1), and because the state of the studied equipment is hidden, the Overall System Effectiveness (OSE) integrates the availability, the production capability, and the quality measures of all equipment, calculated by (2):

OSE = …    (2)
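As a quick illustration of (1), with assumed figures that are purely illustrative rather than plant data: a unit that is available 90% of the time, runs at 95% of its nominal production rate, and delivers 97% of its output within specification has

$\text{OEE} = 0.90 \times 0.95 \times 0.97 \approx 0.829,$

that is, an overall effectiveness of about 83%, on the same scale as the OSE values of 82.7 and 87 reported in the conclusion.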
The rest of this article is organized as follows. The subject is reviewed in the first two sections. The third section presents the problem in detail and expresses it through the theoretical background of the Markov decision process, considering the lost production in case of equipment failure in continuous systems, together with a mathematical model that maximizes the Overall System Effectiveness. Section four presents the calculation method based on the process defined in section three. Section five shows a numerical example of the application, and section six concludes the article.

II. REVIEW OF SUBJECT

In this article, a support system is designed to optimize maintenance and operation processes in petrochemical industries based on the Markov decision process, considering the lost production in case of equipment failure. The final decision is then taken by calculating the operating costs over the different states, along which the operating time, the risk due to equipment age and irregular inspection, and the replacements are considered as functions of time and operating cost. In a related research, Koochaki, Javid et al. (2013) studied the influence of condition-based maintenance on workforce planning and maintenance scheduling [13]; there, the maintenance policy is used to predict a failure event
according to the condition of each part, and therefore tries to prevent both unwanted failures and unnecessary maintenance and repair activities. The two policies of condition-based maintenance (CBM) and age-based replacement (ABR) are considered for both parallel and series multi-component systems, with and without staff limitations: either with the internal staff of the unit, or with external maintenance and repair staff and their attendance times. AlDurgam, Mohammad et al. (2012) carried out a study titled joint optimal operation and maintenance policies to maximize overall equipment effectiveness [14] and applied it in one of the units of a refinery. Their model optimizes the maintenance and repair actions and the production rate for each period; the decision maker (controller) can choose the optimal policy using the state vector of the system (belief state). Zhang, Zaifang et al. (2010) carried out a research on the conceptual design of product and maintenance. They investigated the increasing importance of services such as maintenance for a manufactured product, and their role in improving customer satisfaction and promoting sustained consumption [15]. Another research was done by Murat Kurt et al. (2010) on optimally maintaining a critical Markovian system with limited imperfect repairs. They considered a periodically inspected critical system, modeled as a discrete-time Markov process with a limit on the number of repairs before replacement, to achieve the optimal maintenance [16]. After each inspection, the decision maker decides whether a replacement or repair should be performed, or whether the equipment can be used until the next inspection, and the optimal structure and policy are introduced. In 2009, a research was done by Radouane Laggoune et al. on preventive maintenance planning for a multi-component system with non-negligible replacement times; the impact of the maintenance times on the desirable policy is shown clearly by numerical results for an oil refinery [17].
In this paper, maintenance as well as production operation, and the relations between them, are modeled through the optimal values of state and control of the system. In addition to the differences in production states, the differences in the control activities of this process are considered, and the control models are categorized by objective function. The objective is to minimize the total cost of the maintenance controls over the different states of production.

III. SIGNS AND SYMBOLS OF PROBLEM

The notation used in this paper is listed in subsection A, and the problem is described in subsection B.

A. Symbols used in the paper

S: set of all states of the system facilities
st: system state at time t
P[st, st+1]: transition probability from state st to state st+1
P: transition matrix over the states S
U: set of maintenance activities
ut: control element of U, from u0 (no maintenance) to un (replacement)
P(u): probability of control u
Pu(s, s'): probability of changing state s to s' under control u
M: matrix of the lost-production costs in case of equipment failure
Mw: lost-production cost of equipment w in the event of its failure
P[ui|s]: state transition probability when control ui runs
gu(st, st+1): cost of control u when the state changes from st to st+1
π(s, u): selection of control u in state s by the policy π
gπ(s): cost of policy π when state s is chosen
Vπ(s): state value function; the expected return of policy π starting from state s
Qπ(s, u): control value function; the expected return starting from state s under control u, thereafter following policy π
V*(s): minimum of the state value function over all policies
Q*(s, u): minimum of the control value function over all policies
γ: discount factor, at most one
π*(s): best control policy u in state s

B. Problem expression

The decision support system is designed to optimize the maintenance and operation processes and to integrate them with production, taking into account the effects of equipment failure on production volume within the Markov decision process. The production and maintenance data are then collected according to the localization of the Markov decision process. Finally, the optimal values of the production states and of the maintenance controls are calculated, and the optimal policy is chosen using the created approach. Using six years of maintenance data ending in 2013, the model has been programmed for each group of equipment used in petrochemical companies, in such a way that changes in the production rate have the least impact on quality and on equipment depreciation, which in turn increases the overall system effectiveness. The Markov decision process model consists of a set of different operation states S, a set of possible maintenance actions U, the weights of the various equipment Mw, and the real cost function G(S, U) [18]-[25].
This research tries to find the real cost function G(S, U) with regard to the repair groups in the maintenance system and to production, by modeling different combinations of scenarios and operating actions. Then, using the Markov decision process, the optimal control policy is obtained based on the minimum incurred cost [26]-[41].
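To make these ingredients concrete, a minimal sketch in Python follows; the state and control names, transition figures, cost values, and weight are illustrative assumptions for demonstration, not the plant data of Section V:

```python
import numpy as np

# Production states: percentage of the nominal production rate (cf. Tables 1 and 3).
STATES = [0, 40, 75, 100, 120]

# Maintenance controls, from u0 (no maintenance) to un (replacement).
CONTROLS = ["u0_none", "u1_minor_repair", "u2_overhaul", "u3_replace"]

n_s, n_u = len(STATES), len(CONTROLS)

# P[u][s, s'] = P_u(s, s'): row-stochastic transition matrix for each control
# (uniform placeholder here; in practice estimated from observed state changes).
P = {u: np.full((n_s, n_s), 1.0 / n_s) for u in range(n_u)}

# g[u][s, s'] = g_u(s, s'): cost of applying control u during the change s -> s'
# (placeholder: heavier interventions cost more).
g = {u: np.full((n_s, n_s), float(u + 1)) for u in range(n_u)}

M_w = 1.0     # lost-production weight of this equipment group on failure
gamma = 0.9   # discount factor, 0 <= gamma <= 1
```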
IV. PROBLEM CALCULATIONS

The defined objective of the problem is to find the optimal maintenance control policy for the production system while the weights of the equipment set are variable.
In the problem, the activity in state s is defined by the function π(s), where π is the decision maker's policy for the state s ∈ S that has been reached. The objective is to choose the policy π for which the accumulated sum of the stochastic costs is minimal.
The sum of the expected discounted costs over the six-year horizon is defined as (3):

$V^{\pi} = \mathbb{E}\Big[\sum_{t=0}^{T} \gamma^{t}\, M_w\, g_{u_t}(s_t, s_{t+1})\Big], \qquad u_t = \pi(s_t)$    (3)

where γ is the discounting factor with a maximum of one (0 ≤ γ ≤ 1); Mw is the weight of the equipment; and gu(st, st+1) is the cost imposed on the system when control ut is performed and the transition from t to t+1 takes place.
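A minimal sketch of evaluating (3) by simulation under a fixed control; every number below (transitions, costs, horizon) is an illustrative assumption, not plant data:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

states = [0, 40, 75, 100, 120]        # production states (% of nominal rate)
n_s = len(states)
gamma, M_w, T = 0.9, 1.0, 72          # discount factor, equipment weight, horizon (months)

# Illustrative model under one fixed control: P[s, s'] transition probabilities,
# g[s, s'] costs that grow with the size of the production jump.
P = np.full((n_s, n_s), 1.0 / n_s)
g = np.abs(np.subtract.outer(states, states)) / 100.0

def discounted_cost(s0: int) -> float:
    """One Monte-Carlo sample of the discounted cost sum in (3), starting at index s0."""
    s, total = s0, 0.0
    for t in range(T):
        s_next = rng.choice(n_s, p=P[s])
        total += (gamma ** t) * M_w * g[s, s_next]
        s = s_next
    return total

# Average many sampled trajectories starting from the 75% state (index 2).
print(np.mean([discounted_cost(2) for _ in range(1000)]))
```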
A. Production states

The production states of the system are shown as the set S, which includes all states. The initial state in the Markov decision process is denoted s and the successor state s'. The state transition probability is then assumed as follows:

$P_{ss'} = P[s_t, s_{t+1}]$

B. Control states

The states of the system maintenance (the controls) are defined as follows, through the matrix M of lost-production costs:

M = …
D. The states of the maintenance controls

Figure (1) shows the different states of the overall system and the controls that move it between them.

[Figure 1. The different positions of the controller: five production states S0 to S4, with g(s,u) levels of 0%, 40%, 75%, 100%, and 120%, connected by the control actions that take the system from one state to another.]

E. Calculation of the probability of changing the state by the control of the policy

The state-change probability under the control of the policy π is obtained from equation (4):

$P^{\pi}_{ss'} = \sum_{u \in U} \pi(u \mid s)\, P_u(s, s')$    (4)

F. Calculation of the cost of control by the policy

The cost of the control changes under the policy π is obtained from relation (5):

$g^{\pi}(s) = \sum_{u \in U} \pi(u \mid s) \sum_{s' \in S} P_u(s, s')\, g_u(s, s')$    (5)
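A small numerical sketch of (4) and (5), assuming the policy is stored as an |S| × |U| matrix of probabilities π(u|s); all figures are illustrative:

```python
import numpy as np

n_s, n_u = 5, 3                                    # 5 production states, 3 controls
rng = np.random.default_rng(0)

P = rng.dirichlet(np.ones(n_s), size=(n_u, n_s))   # P[u, s, s'] = P_u(s, s')
g = rng.random((n_u, n_s, n_s))                    # g[u, s, s'] = g_u(s, s')
pi = np.full((n_s, n_u), 1.0 / n_u)                # pi[s, u] = pi(u | s), uniform here

# Equation (4): P_pi[s, s'] = sum_u pi(u|s) P_u(s, s')
P_pi = np.einsum("su,ust->st", pi, P)

# Equation (5): g_pi[s] = sum_u pi(u|s) sum_s' P_u(s, s') g_u(s, s')
g_pi = np.einsum("su,ust,ust->s", pi, P, g)

assert np.allclose(P_pi.sum(axis=1), 1.0)          # P_pi stays row-stochastic
```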
G. Calculation of the state value function and the control value function

The state value function of a Markov decision process (MDP) under policy π is the expected return when starting from state s, given by relation (6):

$V^{\pi}(s) = \mathbb{E}_{\pi}\Big[\sum_{k=0}^{\infty} \gamma^{k}\, g_{t+k+1} \,\Big|\, s_t = s\Big]$    (6)

The control value function of a Markov decision process is the expected return when starting from state s, applying control u, and thereafter following policy π, given by relation (7):

$Q^{\pi}(s, u) = \mathbb{E}_{\pi}\Big[\sum_{k=0}^{\infty} \gamma^{k}\, g_{t+k+1} \,\Big|\, s_t = s,\, u_t = u\Big]$    (7)

H. Bellman expectation equations

The Bellman equation for the state value function, including the expected cost of lost production due to equipment downtime, is given by relation (8):

$V^{\pi}(s) = \sum_{u \in U} \pi(u \mid s)\, Q^{\pi}(s, u)$    (8)

The Bellman equation for the control value function, including the impact of the expected cost of lost production due to equipment downtime, is given by relation (9):

$Q^{\pi}(s, u) = M_w\, g(s, u) + \gamma \sum_{s' \in S} P_u(s, s')\, V^{\pi}(s')$    (9)

[Figure 2. Backup diagram of the expected Bellman equation for the value Q(s, u) of control u, taking the system from state s to state s' under the chosen policy π.]

Equation (10) is the expected Bellman equation for the value of state s when the transition from s to s' is made under the policy π:

$V^{\pi}(s) = \sum_{u \in U} \pi(u \mid s)\Big[M_w\, g(s, u) + \gamma \sum_{s' \in S} P_u(s, s')\, V^{\pi}(s')\Big]$    (10)

Equation (11) is the expected Bellman equation for the value of control u when the transition from s to s' is made under the policy π:

$Q^{\pi}(s, u) = M_w\, g(s, u) + \gamma \sum_{s' \in S} P_u(s, s') \sum_{u' \in U} \pi(u' \mid s')\, Q^{\pi}(s', u')$    (11)

Using equation (10), the expected Bellman equation can be summarized in vector form as equation (12):

$V^{\pi} = g^{\pi} + \gamma\, P^{\pi} V^{\pi}$    (12)

After obtaining relation (12), the expected Bellman equation for the state values of the various policies is obtained in closed form as equation (13):

$V^{\pi} = (I - \gamma P^{\pi})^{-1} g^{\pi}$    (13)
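Since (13) is a linear system, policy evaluation reduces to a single solve; a minimal sketch with assumed numbers, using a linear solve instead of an explicit matrix inverse:

```python
import numpy as np

n_s, gamma = 5, 0.9
rng = np.random.default_rng(0)

P_pi = rng.dirichlet(np.ones(n_s), size=n_s)   # P^pi: row-stochastic |S| x |S| matrix
g_pi = rng.random(n_s)                         # g^pi: expected one-step cost per state

# Equation (13): V^pi = (I - gamma * P^pi)^{-1} g^pi
V_pi = np.linalg.solve(np.eye(n_s) - gamma * P_pi, g_pi)

# Sanity check against the fixed-point form (12): V^pi = g^pi + gamma * P^pi V^pi
assert np.allclose(V_pi, g_pi + gamma * P_pi @ V_pi)
```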
I. Calculation of the optimal policy

The optimal policy selects, in each production state, the control with the minimum control value, and is obtained through (14):

$\pi^{*}(s) = \arg\min_{u \in U} Q^{*}(s, u)$    (14)

The policy obtained in this way is the definite optimal policy based on the Markov decision process.
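One standard way to obtain the minimum control values Q* behind (14) is value iteration; a compact sketch under the same kind of illustrative assumptions as above:

```python
import numpy as np

n_s, n_u, gamma = 5, 3, 0.9
rng = np.random.default_rng(0)

P = rng.dirichlet(np.ones(n_s), size=(n_u, n_s))  # P[u, s, s']
g = rng.random((n_u, n_s))                        # g[u, s]: expected cost of u in s (M_w folded in)

V = np.zeros(n_s)
for _ in range(500):                              # iterate the minimum-cost Bellman operator
    Q = g + gamma * P @ V                         # Q[u, s] = g(s,u) + gamma * sum_s' P_u(s,s') V(s')
    V_new = Q.min(axis=0)                         # V*(s) = min_u Q*(s, u)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

pi_star = Q.argmin(axis=0)                        # equation (14): optimal control per state
print(pi_star)
```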
V. NUMERICAL COMPUTATION OF THE PROBLEM

A. Frequency of changes per year

The frequencies of the changes in production conditions are displayed in Table (1).

Table 1: Cumulative frequency of the total state changes

State change      No.   State change       No.   State change       No.
from 0 to 0        0    from 40 to 0        0    from 75 to 0        0
from 0 to 40       0    from 40 to 40       0    from 75 to 40       1
from 0 to 75       1    from 40 to 75       2    from 75 to 75       6
from 0 to 100      0    from 40 to 100      0    from 75 to 100      1
from 0 to 120      0    from 40 to 120      0    from 75 to 120      0
Sum                1    Sum                 2    Sum                 8
D. Changes between the different states of the system

The state-transition matrix is displayed in Table (3).

Table 3: State-transition matrix P (columns: current state; rows: next state)

           from 0   from 40   from 75   from 100   from 120
to 0          0%       0%        0%        0%         0%
to 40         0%       0%       14%        0%       100%
to 75       100%     100%       72%      100%         0%
to 100        0%       0%       14%        0%         0%
to 120        0%       0%        0%        0%         0%
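Table (3) is consistent with normalizing observed state-change counts of the kind shown in Table (1), column by column; a small sketch of that estimation step (the count matrix is an assumption patterned on Table 1, with the unobserved 100 and 120 columns filled in as placeholders):

```python
import numpy as np

states = [0, 40, 75, 100, 120]

# counts[i, j] = number of observed transitions from states[j] to states[i]
# (first three columns patterned on Table 1; the last two are placeholders).
counts = np.array([
    # from: 0   40   75  100  120
    [      0,   0,   0,   0,   0],   # to 0
    [      0,   0,   1,   0,   1],   # to 40
    [      1,   2,   6,   1,   0],   # to 75
    [      0,   0,   1,   0,   0],   # to 100
    [      0,   0,   0,   0,   0],   # to 120
], dtype=float)

col_sums = counts.sum(axis=0, keepdims=True)
P = np.divide(counts, col_sums, out=np.zeros_like(counts), where=col_sums > 0)

print(np.round(P * 100))   # percentages on the same scale as Table 3
```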
E. Change of the production states under the different controls

For example, the probability of moving from production state 75 to state 100 under control u0 is

$P_{75,100}(u_0) = P(u_0) \cdot P(75, 100) = 4.61\% \times 100\% = 4.61\%$

G. Calculation of the control value function

The control values, discounted according to equation (11), are displayed in Table (4).

VI. CONCLUSION

In this paper, a model was designed to choose the optimal control policy in a setting where the maintenance activities and the production decisions depend on each other within the Markov decision process, and where the impact of the lost production volume on the production support system is considered when the equipment fails.
The optimal control policy was chosen and implemented using the calculations of the fifth section over a specific period. Then the equipment was sampled according to the studied statistical population to measure the overall system effectiveness: the indicators of reliability, equipment failure rate, mean time between two failures, and, at the end, overall system effectiveness were calculated over a six-month period both before and after the implementation of the optimal policy. The aforementioned indicators are shown in Table 5, and positive and acceptable effects can be seen there.
REFERENCES

[1] Moubray, J. (1997). Reliability-Centered Maintenance. New York, USA: Industrial Press Inc.
[2] Dellagi, Sofiene, Rezg, Nidhale, Gharbi, Ali. (2010). Optimal maintenance/production policy for a manufacturing system subjected to random failure and calling upon several subcontractors. International Journal of Management Science and Engineering Management. 5(4): 261-267.
[3] Srinivas Kumar Pinjala, L. P. (2006). An empirical investigation on the relationship between business and maintenance strategies. International Journal of Production Economics. 104(1): 214-229.
[4] Keith Mobley, R. (2002). An Introduction to Predictive Maintenance. 2nd edition. New York, USA: Elsevier Science.
[5] Feld, W. (1998). Lean Production: Tools, Techniques and How to Use Them. Virginia, USA: St. Lucie Press.
[6] Shum, Yu-Su, Gong, Dah-Chuan. (2010). Design of an integrated maintenance management system. Journal of the Chinese Institute of Industrial Engineers. 20(4): 337-354.
[7] Guo, X. P. and Hernández-Lerma, O. (2003). Zero-sum games for continuous-time Markov chains with unbounded transition and average payoff rates. Journal of Applied Probability. 40: 327-345.
[8] Howard, R. A. (1960). Dynamic Programming and Markov Processes. London, UK: The M.I.T. Press.
[9] Kageyama, M. (2008). On optimality gaps for fuzzification in finite Markov decision processes. Journal of Interdisciplinary Mathematics. 11(1): 77-88.
[10] Mrinal, K. G. and Subhamay, S. (2013). Non-stationary semi-Markov decision processes on a finite horizon. Journal of Stochastic Analysis and Applications. 31(1): 183-190.
[11] Puterman, M. L. (2005). Markov Decision Processes: Discrete Stochastic Dynamic Programming. Hoboken, New Jersey, USA: Wiley.
[12] Faddoul, R., Wassim, R. and Chateauneuf, A. (2011). A generalised partially observable Markov decision process updated by decision trees for maintenance optimisation. Journal of Structure and Infrastructure Engineering. 7(10): 783-796.
[13] Koochaki, Javid, Bokhorst, Jos A. C., Wortmann, H., Klingenberg, Warse. (2012). The influence of condition-based maintenance on workforce planning and maintenance scheduling. International Journal of Production Research. 51(8): 2339-2351.
[14] AlDurgam, Mohammad M., Duffuaa, Salih O. (2012). Optimal joint maintenance and operation policies to maximize overall systems effectiveness. International Journal of Production Research. 51(5): 1319-1330.
[15] Zhang, Zaifang, Chu, Xuening. (2010). A new approach for conceptual design of product and maintenance. International Journal of Computer Integrated Manufacturing. 23(7): 603-618.
[16] Murat Kurt and Jeffrey P. Kharoufeh. (2010). Optimally maintaining a Markovian deteriorating system with limited imperfect repairs. European Journal of Operational Research. 205: 368-380.
[17] Radouane Laggoune, Alaa Chateauneuf and Djamil Aissani. (2009). Preventive maintenance scheduling for a multi-component system with non-negligible replacement time. International Journal of Systems Science. 41(7): 747-761.
[18] Anderson, W. J. (1991). Continuous-Time Markov Chains. Berlin, Germany: Springer.
[19] Bain, L. (1991). Statistical Analysis of Reliability and Life-Testing Models: Theory and Methods. New York, USA: Marcel Dekker.
[20] Bather, J. (1976). Optimal stationary policies for denumerable Markov chains in continuous time. Journal of Advances in Applied Probability. 8: 144-158.
[21] Bellman, R. (1957). Dynamic Programming. Princeton, USA: Princeton University Press.
[22] Bellman, R. E. (1975). Dynamic Programming (Dover paperback edition). Princeton, USA: Princeton University Press.
[23] Bertsekas, D. P. (2001). Dynamic Programming and Optimal Control, Vol. II. Belmont, USA: Athena Scientific.
[24] Beutler, F. J. and Ross, K. W. (1985). Optimal policies for controlled Markov chains with a constraint. Journal of Mathematical Analysis and Applications. 112: 236-252.
[25] Blackwell, D. (1962). Discrete dynamic programming. Annals of Mathematical Statistics. 33: 719-726.
[26] Borkar, V. S. (1991). Optimal Control of Diffusion Processes (Pitman Research Notes in Mathematics). Harlow, UK: Longman Scientific & Technical.
[27] Cao, X. R. (2000). A unified approach to Markov decision problems and performance sensitivity analysis. Automatica (Oxford). 36: 771-774.
[28] Cao, X. R. (2003). Semi-Markov decision problems and performance sensitivity analysis. IEEE Transactions on Automatic Control. 48: 758-769.
[29] Cao, X. R. (2007). Stochastic Learning and Optimization: A Sensitivity-Based Approach. Berlin, Germany: Springer.
[30] Cavazos-Cadena, R. (1991). A counterexample on the optimality equation in Markov decision chains with average cost criterion. Systems and Control Letters. 16: 387-392.
[31] Chen, M. F. (2004). From Markov Chains to Non-Equilibrium Particle Systems. Singapore: World Scientific.
[32] Feinberg, E. A. (2004). Continuous-time jump Markov decision processes: a discrete-event approach. Mathematics of Operations Research. 29: 492-524.
[33] Feinberg, E. A. and Shwartz, A. (2002). Handbook of Markov Decision Processes. Dordrecht, Netherlands: Kluwer Academic.
[34] Goldberg, D. E. (1989). Genetic Algorithms in Search, Optimization and Machine Learning. New York, USA: Addison-Wesley Publishing Company.
[35] Huang, Chun-Chen, Yuan, John. (2011). A general maintenance policy for a multi-state deterioration system with multiple choices of imperfect maintenances. Journal of the Chinese Institute of Industrial Engineers. 28(5): 336-345.
[36] Liu, Q., Tan, H. and Guo, H. (2012). Denumerable continuous-time Markov decision processes with multiconstraints on average costs. International Journal of Systems Science. 43(3): 576-585.
[37] Pearl, J. (1984). Heuristics: Intelligent Search Strategies for Computer Problem Solving. New York, USA: Addison-Wesley Publishing Company.
[38] Rabbinge, D. W. (1985). Dynamic programming and the computation of economic injury levels for crop disease control. Agricultural Systems. 18(4): 207-226.
[39] Shih, His-Kung and Dah-Chuan Gong. (2011). Efficient methodology for facility location problem solving with combination of Delphi method, AHP, and DEA. Journal of Management. 28(1): 81-96.
[40] Singh, N. and Brumer, P. (2012). Efficient computational approach to the non-Markovian second order quantum master equation: electronic energy transfer in model photosynthetic systems. Molecular Physics. 110(15): 1815-1828.
[41] White, D. J. (1993). Markov Decision Processes. Chichester, UK: John Wiley & Sons Ltd.