UNIT III
3.2 Estimating
3.3 Contingency
3.4 Milestones
3.9 Transportation Model, Assignment Models, Queuing Models: Single Channel and Multi-Channel Models
Unit 3
Planning Tools and Techniques
and segmentation of management responsibilities in these organizations. The results have been remarkable, but increasing specialization has created a new problem in meeting organizational challenges. The allocation of limited resources to various activities has gained significant importance in the competitive market. These types of problems need immediate attention, which is made possible by the application of OR techniques. The tools of operations research are not drawn from any one discipline; rather, Mathematics, Statistics, Economics, Engineering, Psychology, etc. have contributed to this newer discipline of knowledge. In recent years the application of OR techniques has achieved significance in all walks of life, be it industry or office work, for making strategic decisions more scientifically. Today, it has become a professional discipline that deals with the application of scientific methods to decision-making, and especially to the allocation of scarce resources.
Features of operations research
The significant features of operations research include the following:
(i) Decision-making. Every industrial organization faces multifaceted problems and must identify the best possible solutions to them. OR aims to help executives obtain optimal solutions with the use of OR techniques. It also helps the decision maker improve his creative and judicious capabilities, and analyze and understand the problem situation, leading to better control, better co-ordination, better systems and, finally, better decisions.
(ii) Scientific Approach. OR applies scientific methods, techniques and tools to the analysis and solution of complex problems. In this approach there is no place for guesswork or the personal bias of the decision maker.
(iii) Inter-disciplinary Team Approach. Industrial problems are basically complex in nature and therefore require a team effort to handle them. This team comprises scientists, mathematicians and technocrats, who jointly use the OR tools to obtain an optimal solution to the problem. The team tries to analyze the cause-and-effect relationships between the various parameters of the problem and evaluates the outcomes of various alternative strategies.
(iv) System Approach. The main aim of the system approach is to trace, for each proposal, all significant direct and indirect effects on every sub-system of the system, and to evaluate each action in terms of its effects on the system as a whole. The interrelationships and interactions of the sub-systems can be handled with the help of mathematical/analytical models of OR to obtain an acceptable solution.
(v) Use of Computers. The models of OR need a great deal of computation, and therefore the use of computers becomes necessary. With computers it is possible to handle complex problems requiring a large amount of calculation.
The objective of operations research models is to locate the best or optimal solution under the specified conditions. For this purpose, a measure of effectiveness must be defined, based on the goals of the organization. These measures can then be used to compare the alternative courses of action considered during the analysis.
Importance of operations research
The scope of OR is not confined to any specific agency such as the defence services; today it is widely used in all industrial organizations. It can be used to find the best solution to any problem, be it simple or complex. It is useful in every field of human activity where optimization of resources is required. Thus, it attempts to resolve the conflicts of interest among the components of an organization in a way that is best for the organization as a whole. The main fields where OR is extensively used are given below; this list is illustrative rather than exhaustive.
(i) National Planning and Budgeting - OR is used for the preparation of Five Year Plans, annual budgets, forecasting of income and expenditure, scheduling of major projects of national importance, and estimation of GNP, GDP, population, employment and agricultural yields, etc.
(ii) Defence Services - The formulation of OR originally started in the US Army, so it has wide application in areas such as development of new technology, optimization of cost and time, tender evaluation, siting and layout of defence projects, threat analysis, battle strategy, effective maintenance and replacement of equipment, inventory control, and transportation and supply depots, etc.
(iii) Industrial Establishments and Private Sector Units - OR can be effectively used in plant location and setting, finance planning, product and process planning, facility planning and construction, production planning and control, purchasing, maintenance management and personnel management, to name a few.
(iv) R & D and Engineering - Research and development being the heart of technological growth, OR has wide scope here and can be applied in technology forecasting and evaluation, technology and project management, preparation of tenders and negotiation, value engineering, work/method study and so on.
(v) Business Management and Competition - OR can help in taking business
decisions under risk and uncertainty, capital investment and returns, business strategy
formation, optimum advertisement outlay, optimum sales force and their distribution,
market survey and analysis and market research techniques etc.
(vi) Agriculture and Irrigation - In the area of agriculture and irrigation also
OR can be useful for project management, construction of major dams at minimum
cost, optimum allocation of supply and collection points for fertilizer/seeds and
agriculture outputs and optimum mix of fertilizers for better yield.
(vii) Education and Training - OR can be used for obtaining the optimum number of schools and their locations, the optimum student/teacher ratio, the optimum financial outlay, and other relevant information in training graduates to meet the national requirements.
(viii) Transportation - Transportation models of OR can be applied to real life
problems to forecast public transport requirements, optimum routing, forecasting of
income and expenses, project management for railways, railway network distribution,
etc. In the same way it can be useful in the field of communication.
(ix) Home Management and Budgeting - OR can be effectively used for control of expenses to maximize savings, time management, work study methods for all related work, investment of the surplus budget, appropriate insurance of life and property, and estimation of depreciation and optimum insurance premiums, etc.
3.2 Estimation in operations research
In general, the solution to an operations research problem proceeds in stages. We have two new terms, "optimal" and "feasible". Optimal means the best possible solution under the given conditions, and feasible means a practical solution. Therefore an optimal and feasible solution means that the suggested solution must be both practical to implement and the best one under the given conditions. All problems in operations research can be categorized as either MINIMIZING or MAXIMIZING type. We will focus on MINIMIZING costs, time and distances, while we will be interested in MAXIMIZING revenue, profits and returns. So you must be very careful in identifying the type of problem when deciding upon the choice of algorithm to solve it.
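The difference between feasible and optimal can be made concrete with a small minimizing sketch; the routes, costs and the eight-hour limit below are purely illustrative assumptions:

```python
# Choosing a delivery route: every route that satisfies the constraint is
# "feasible"; the cheapest feasible route is "optimal".
# (Route names, costs, and the 8-hour limit are illustrative assumptions.)

routes = {                  # route -> (cost in Rs., travel time in hours)
    "A": (500, 6),
    "B": (420, 9),          # cheapest overall, but violates the time limit
    "C": (450, 7),
    "D": (480, 8),
}

MAX_HOURS = 8

# Feasible = practical under the given conditions (here: within 8 hours).
feasible = {name: cost for name, (cost, hours) in routes.items() if hours <= MAX_HOURS}

# Optimal = best (minimum cost, since this is a MINIMIZING problem) among the feasible.
optimal = min(feasible, key=feasible.get)

print(feasible)   # {'A': 500, 'C': 450, 'D': 480}
print(optimal)    # C -- cheapest route that is also practical
```

Note that route B is the cheapest overall but is not feasible, so it cannot be the optimal solution; optimality is always judged among the feasible alternatives only.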
Estimating the effort, time, and resources needed to complete project activities is one
of the most challenging tasks that project managers must face. This is because of the
inherent uncertainty associated with many activities. Projects are unique; that is one of the differences between projects and processes. This uniqueness often creates uncertainty: the activity may be unique to the project, it may be accomplished by a resource that is not a practiced expert, or its interaction with other project activities may be unique to this project. All of these can create problems when estimating effort, time or resources.
Uncertainty in one aspect of an estimate leads to uncertainty on the other aspects. If
the effort needed to complete the scope is uncertain - for instance the number of hours
of work needed to complete an analysis - the time and resources needed will be
uncertain. If the timing of when an activity starts or ends is uncertain, the resource
availability and amount of effort required may change. If the resource assigned to an
activity is uncertain, the number of hours required to complete the activity and the
timing of the availability of the resource will be uncertain.
However, the good news is that not all project activities are uncertain. In many cases,
the activity is one that is well defined and the organization routinely accomplishes it.
When possible a project team is formed so that an expert is doing the work and the
availability of the expert is predictable. In those cases an accurate estimate can be
quickly generated.
We will discuss three types of activities and the type of estimating approach that should be used with each of them: Stable Activities, Dependent Activities, and Uncertain Activities. Of course there is a fourth category, which is the
unknown activity. These can't be estimated but must be accounted for in the project
reserves. A Traditional or Discovery project often will have a small reserve (or possibly none at all), at least for the portion of the project that is approved, whereas an Adaptive or Extreme project may need a large reserve. Also, as complexity
increases typically the level of reserve increases since there is a greater possibility of
unrecognized activities.
The techniques will include Analogous Estimating, Parametric Modeling, 3 Point Estimating, Expert Judgment, Published Data Estimates, Vendor Bid Analysis, Reserve Analysis, Bottom-up Analysis, and Simulation. Finally, we will consider how to estimate a project when the key boundary condition is the End Date or the Total Cost of the project and the effort is tailored to fit this constraint.
Stable Activities
Stable Activities are those that are well understood and predictable. For activities in
this category, the estimating is usually straightforward. One will typically use
analogous, expert judgment, a parametric model, or published estimating data for
these types of activities. Based upon the information available to the project team
members, use the appropriate technique and set the estimate.
Dependent Activities
Dependent Activities are those activities where the time or effort is highly dependent upon some project attribute or characteristic that is not yet known or knowable at the time the original estimate is furnished. For instance, the amount of time needed to
complete testing will depend upon whether the test is successful on the first try or
whether a retest is required. For these types of activities, an assumption is made that
will drive the estimated effort, time and resources. This assumption is a risk and
should be tracked on the Risk Register. If the assumption is incorrect, the time or
money required to do the activity may be very different from the estimate. If a
conservative estimate is used, this is a positive risk. If an aggressive estimate is used,
this is a negative risk.
Uncertain Activities
Uncertain Activities are the most difficult to estimate. There is often very little data to support a precise estimate. In addition, there are many factors that could affect the estimate, so one can't just make one assumption and track it in the risk register. An
example of an Uncertain Activity is a requirements definition task on a Complex
project. There are numerous stakeholders who have different opinions of what is
needed. Getting all of them to agree on the requirements will be an iterative process
with the number of iterations being completely unpredictable. Yet if this task is not
done well, there are likely to be major problems later in the project getting the
stakeholders to agree that the project deliverables have been met. Uncertain Activities
typically are listed in the Risk Register since the timing and cost are impossible to
estimate accurately.
Analogous Estimating
3 Point Estimating
In 3 Point Estimating, an optimistic, a most likely, and a pessimistic estimate are each created, typically using Expert Judgment or an Analogous or Parametric Model. However, because of the high degree of uncertainty due to the risk assumptions, the three estimates are used to create a boundary on expectations for the activity. A variation on this technique, the PERT analysis, uses a weighted average of these estimates to create a PERT estimate. When using this approach, the most likely estimate is normally what is put in the project plan, but the optimistic and pessimistic estimates are used during the reserve analysis. Also, an activity that has a great deal of difference between the optimistic and pessimistic estimates is an uncertain activity and should be tracked in the Risk Register. The advantage of this technique is that it provides boundaries on expectations. The disadvantages are that it takes more work - since three estimates must be created, not one - and the most likely estimate is still very much a guess: the actual could be significantly better or worse.
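The weighted average used in the PERT variation can be sketched as follows; the activity and its three time estimates are hypothetical:

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    """PERT three-point estimate: a weighted average that counts the
    most likely value four times as heavily as either extreme."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

# Hypothetical activity: hours of work needed to complete an analysis task.
o, m, p = 20, 30, 58
print(pert_estimate(o, m, p))   # (20 + 120 + 58) / 6 = 33.0

# A wide optimistic-pessimistic spread flags an uncertain activity
# that should be tracked in the Risk Register.
spread = p - o                  # 38 hours here
```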
Published Data Estimating
Published Data Estimating is an excellent technique for those activities for which
there is published data. In this technique, the activity is compared to the activities for
which data exists and the actual cost or durations of the closest comparable activity is
selected from the data and used as the estimate. The advantage of this technique is
that it is very accurate when the project conditions match the conditions under which
the published data was generated. The disadvantages are that data does not exist for
many activities and that the published data that does exist is based upon the
characteristics of the organizations that compiled and published the data - which may
not correspond with your organization's characteristics.
Vendor Bid Analysis
The Vendor Bid Analysis is a technique used when working with suppliers on
uncertain activities. The analysis considers the assumptions the vendor worked with
and does a sensitivity assessment on those assumptions. In addition, for effort that the
buying organization does not have experience with, they can contract with a
consulting firm that has experience to do a "Should Cost" analysis. This "Should
Cost" estimate is compared to the supplier's quote to identify any shortcomings. The
advantage of this technique is that it exposes supplier risk that can be accounted for in
the reserve analysis and it increases the confidence in the supplier's approach. The
disadvantages are that this can take a fair amount of time and if a consultant is used to
create a "Should Cost" it adds to the cost of the project.
Reserve Analysis
Reserve Analysis establishes a reserve of time, resources, or possibly performance that can be drawn upon to offset the unestimated issues that arise.
Bottom up Analysis
Project Simulation
Estimating Based Upon Project End Date
In some cases, the project end date is set even before the scope and deliverables are
defined. In those cases, a high-level time line is created starting from the end date and
going backward to the present time. Given the amount of time allocated for the major
activities, the project team considers the needed deliverables and available resources
during the time period. Essentially, the schedule side of the triangle is fixed and the
scope and resource sides are varied so as to create a viable project. Often this will
require an iterative estimating approach. Once the high level plan is established,
estimates for the activities are developed and then iterations are done varying
resources and scope until a viable estimate can be created. The Risk Register will be
dominated by schedule risk items. Sometimes, an estimate cannot be created. In those
cases, the project should not even be initiated, since it is doomed.
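The backward pass from a fixed end date can be sketched as follows; the end date, activity names and durations are illustrative assumptions:

```python
from datetime import date, timedelta

# Fixed project end date and a simple chain of major activities
# (names and durations in days are illustrative assumptions).
END_DATE = date(2024, 12, 20)
activities = [("Design", 15), ("Build", 30), ("Test", 10)]  # executed in this order

# Backward pass: each activity must finish when its successor starts.
schedule = []
finish = END_DATE
for name, days in reversed(activities):
    start = finish - timedelta(days=days)
    schedule.append((name, start, finish))
    finish = start

for name, start, fin in reversed(schedule):
    print(f"{name}: {start} -> {fin}")

# If the computed start of the first activity is already in the past,
# no viable estimate exists and the project should not be initiated.
```

Iterating the plan then means changing durations or scope and re-running the backward pass until the earliest start date is achievable.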
In some cases, the project total cost is set even before the scope and schedule are
defined. In those cases, a high-level allocation of the budget is created between the
likely project deliverables. Each major activity is then estimated and if the estimate is
greater than the allocated cost, the timing of resources or scope and deliverables are
varied until the project is able to meet the budget goals. This is often an iterative process that may take many iterations before it completes.
3.3 Contingency
When estimating the cost for a project, product or other item or investment, there is
always uncertainty as to the precise content of all items in the estimate, how work
will be performed, what work conditions will be like when the project is executed and
so on. These uncertainties are risks to the project. Some refer to these risks as
"known-unknowns" because the estimator is aware of them, and based on past
experience, can even estimate their probable costs. The estimated cost of the known-
unknowns is referred to by cost estimators as cost contingency. Contingency "refers
to costs that will probably occur based on past experience, but with some uncertainty
regarding the amount. The term is not used as a catchall to cover ignorance. It is poor
engineering and poor philosophy to make second-rate estimates and then try to satisfy
them by using a large contingency account. The contingency allowance is designed to
cover items of cost which are not known exactly at the time of the estimate but which
will occur on a statistical basis."
The cost contingency which is included in a cost estimate, bid, or budget may be classified as to its general purpose, that is, what it is intended to provide for. For a class
1 construction cost estimate, usually needed for a bid estimate, the contingency may
be classified as an estimating and contracting contingency. This is intended to provide
compensation for "estimating accuracy based on quantities assumed or measured,
unanticipated market conditions, scheduling delays and acceleration issues, lack of
bidding competition, subcontractor defaults, and interfacing omissions between
various work categories." Additional classifications of contingency may be included at various stages of a project's life, including design contingency (also called design definition contingency or design growth contingency) and change order contingency.
AACE International, the Association for the Advancement of Cost Engineering, has
defined contingency as "An amount added to an estimate to allow for items,
conditions, or events for which the state, occurrence, or effect is uncertain and that
experience shows will likely result, in aggregate, in additional costs. Typically
estimated using statistical analysis or judgment based on past asset or project
experience. Contingency usually excludes:
1) Major scope changes
2) Extraordinary events such as major strikes and natural disasters
3) Management reserves
4) Escalation and currency effects
Some of the items, conditions, or events for which the state, occurrence, and/or effect
is uncertain include, but are not limited to, planning and estimating errors and
omissions, minor price fluctuations, design developments and changes within the
scope, and variations in market and environmental conditions. Contingency is
generally included in most estimates, and is expected to be expended". A key phrase
above is that it is "expected to be expended". In other words, it is an item in an
estimate like any other, and should be estimated and included in every estimate and
every budget. Because management often thinks contingency money is "fat" that is
not needed if a project team does its job well, it is a controversial topic.
In general, there are four classes of methods used to estimate contingency. These include the following:
1) Expert judgment
2) Predetermined guidelines (with varying degrees of judgment and
empiricism used)
3) Simulation analysis (primarily risk analysis judgment incorporated in a
simulation such as Monte-Carlo)
4) Parametric Modeling (empirically-based algorithm, usually derived through
regression analysis, with varying degrees of judgment used).
While all are valid methods, the method chosen should be consistent with the first
principles of risk management in that the method must start with risk identification,
and only then are the probable cost of those risks quantified. In best practice, the
quantification will be probabilistic in nature (Monte-Carlo is a common method used
for quantification).
Typically, the method results in a distribution of possible cost outcomes for the
project, product, or other investment. From this distribution, a cost value can be
selected that has the desired probability of a cost underrun or overrun. Usually a value is selected with an equal chance of overrunning or underrunning. The difference
between the cost estimate without contingency and the selected cost from the
distribution is contingency. Contingency is included in budgets as a control account.
As risks occur on a project, and money is needed to pay for them, the contingency can
be transferred to the appropriate accounts that need it. The transfer and its reason are
recorded. In risk management, risks are continually reassessed during the course of a
project, as are the needs for cost contingency.
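A minimal sketch of this probabilistic approach, using a Monte-Carlo simulation over hypothetical activity cost ranges:

```python
import random

random.seed(42)

# Hypothetical activities with (low, most likely, high) cost ranges.
activities = [
    (90, 100, 130),   # e.g. civil works
    (45, 50, 70),     # e.g. equipment
    (18, 20, 30),     # e.g. commissioning
]

# Estimate without contingency: the sum of the most likely costs.
base_estimate = sum(ml for _, ml, _ in activities)   # 170

# Simulate many possible project cost outcomes using triangular distributions.
outcomes = sorted(
    sum(random.triangular(lo, hi, ml) for lo, ml, hi in activities)
    for _ in range(10_000)
)

# Select the cost with an equal chance of over- or under-running (the median).
p50 = outcomes[len(outcomes) // 2]

# Contingency = selected cost from the distribution minus the base estimate.
contingency = p50 - base_estimate
print(round(contingency, 1))
```

Because the hypothetical cost ranges are skewed toward overrun, the median outcome exceeds the base estimate and the contingency is positive, which is the usual situation in practice.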
3.5 Gantt chart
The basic purpose of a Gantt chart is to break a large project into a series of smaller
tasks in an organized way. The chart shows when each task should begin and how
long it should take. The left-most column lists each of the tasks in chronological order
according to their start time. The remaining columns show the timeline (often shown
in weeks, but use whatever units are convenient for your project). For each row, a task
is listed and a line is drawn through the timeline for the weeks during which that task
will be addressed.
Following is a simple example of what a Gantt chart looks like. In this chart, a rough outline is given of the tasks to be accomplished up to the first design review on October 9.
This is not given to show you how you should organize your own team's time so much as to provide a sample Gantt chart for illustrative purposes. Notice how some tasks take longer than others, so some weeks have more than one associated task. For example, in the time period beginning September 18th, the team will be continuing the information gathering started a week prior and begin shopping around for a product for the reverse engineering exercise on October 2.
Problem Definition: problem statement, characteristics of problem,
characteristics of solution
Divergence: brainstorming, morphological analysis, functional decomposition
Transition: Pugh chart, QFD, sketch models, user feedback
Convergence: analysis, detailed configuration, optimization, user feedback
Prototyping: build rough prototype, test rough prototype, plan the build,
collect materials, start machining, physical testing, user testing
Documentation: proposal, progress report, final report, presentations
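A chart of the kind described can be sketched in plain text; the week spans assigned to the phases below are illustrative assumptions:

```python
# Each task: name, start week, duration in weeks (illustrative values
# loosely based on the phases listed above).
tasks = [
    ("Problem Definition", 1, 2),
    ("Divergence",         2, 3),
    ("Transition",         4, 2),
    ("Convergence",        5, 3),
]

WEEKS = 8

# Header row: the timeline in weeks.
print(f"{'Task':<20}" + "".join(f"W{w:<3}" for w in range(1, WEEKS + 1)))

# One row per task, with a bar through the weeks the task is worked on.
for name, start, dur in tasks:
    bar = "".join("=== " if start <= w < start + dur else "    "
                  for w in range(1, WEEKS + 1))
    print(f"{name:<20}{bar}")
```

Reading down any week's column shows which tasks overlap in that week, which is exactly the property of the chart the text points out.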
3.6 Programme Evaluation and Review Technique (PERT)
Network Analysis
Routing is the first step in production planning. In small projects, routing is very simple: the sequence of operations is almost fixed, and the operations can be performed one after the other in the given sequence. But in a large project this is a rather difficult problem, as there may be more than one route to complete a job. The function of the production manager is to find the path which takes the least time to complete the project.
In a big project, many activities are performed simultaneously, and many activities can be started only on the completion of other activities. In such cases a thorough study is required to collect complete details about the project and then to find a new, better and quicker way to get the work done. The first step is to draw a suitable diagram showing the various activities and their positions in the project. It should also show the time taken to move from one operation to the other, and the way in which a delay in any activity can affect the entire project in terms of both money and time. Such a diagram is called a network diagram. In the words of James L. Riggs, 'A network is a picture of a project, a map of requirements tracing the work from a departure point to the final completion objective. It can be a collection of all the minute details involved or only a gross outline of general functions.'
Important Characteristics in a Network Analysis
Floats may be total, free, and independent:
(A) Total Float. Total float is the maximum amount by which the duration of an activity can be increased without increasing the total duration of the project. Total float can be calculated as follows:
(i) First, calculate the difference between the Earliest Start Time (EST) of the tail event and the Latest Finish Time (LFT) of the head event for the activity.
(ii) Then, subtract the duration of the activity from the value obtained in (i) above to get the required float for the activity.
The total float can be helpful in drawing the following conclusions:
(a) If the total float value is negative, it denotes that the resources for completing the activity are not adequate and the activity therefore cannot finish in time; extra resources are needed, or the critical path needs crashing, to reduce the negative float.
(b) If the total float value is zero, it means the resources are just sufficient to complete the activity without any delay.
(c) If the total float value is positive, it indicates that the total resources are in excess of the amount required; the surplus can be reallocated, or else the activity can be delayed by that much time without delaying the project.
(B) Free Float. Free float is that fraction of the total float of an activity which can be used for rescheduling the activity without affecting the succeeding activity. If both tail and head events are given their earliest times, i.e., EST and EFT, the free float can be calculated by deducting the head-event slack from the total float, i.e.,
Free Float = Total Float – Slack time of the head event.
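The two float calculations can be sketched as follows; the event times and duration of the single activity are hypothetical:

```python
def total_float(est_tail, lft_head, duration):
    """Maximum delay of the activity without extending the project:
    (LFT of head event - EST of tail event) - activity duration."""
    return (lft_head - est_tail) - duration

def free_float(tf, head_slack):
    """Part of the total float usable without disturbing the successor:
    total float minus the slack of the head event."""
    return tf - head_slack

# Hypothetical activity: tail event EST = 4, head event LFT = 14, duration = 6.
tf = total_float(4, 14, 6)
print(tf)                 # 4 -> positive: some delay is tolerable
print(free_float(tf, 1))  # 3 -> usable without delaying the successor
```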
One of the main features of PERT and related techniques is their use of a network or
precedence diagram to depict major project activities and their sequential
relationships. There are two slightly different conventions for constructing these
network diagrams. Under one convention, the arrows designate activities; under the
other convention, the nodes designate activities. These conventions are referred to as
activity-on-arrow (AOA) and activity-on-node (AON). Activities consume resources and/or time. The nodes in the AOA approach represent the activities' starting and finishing points, which are called events. Events are points in time; unlike activities, they consume neither resources nor time. The nodes in an AON
diagram represent activities.
In the AOA diagram, the arrows represent activities and they show the sequence in which certain activities must be performed (e.g., Interview precedes Hire and Train); in the AON diagram, the arrows show only the sequence in which certain activities must be performed, while the nodes represent the activities. Activities in AOA networks can be referred to in either of two ways: by their endpoints (e.g., activity 2-4) or by a letter assigned to an arrow (e.g., activity c). Both methods are illustrated in this chapter. Activities in AON networks are referred to by
a letter (or number) assigned to a node. Although these two approaches are slightly
different, they both show sequential relationships – something Gantt charts don’t.
Note that the AON diagram has a starting node, S, which is actually not an activity
but is added in order to have a single starting node.
Despite these differences, the two conventions are remarkably similar, so you should
not encounter much difficulty in understanding either one. In fact, there are
convincing arguments for having some familiarity with both approaches. Perhaps the
most compelling is that both approaches are widely used. However, any particular
organization would typically use only one approach, and employees would have to
work with that approach. Moreover, a contractor doing work for the organization may be using the other approach, so employees of the organization who deal with the contractor on project matters would benefit from knowledge of the other approach.
A path is a sequence of activities that leads from the starting node to the ending node.
For example, in the AOA diagram, the sequence 1-2-4-5-6 is a path. In the AON
diagram, S-1-2-6-7 is a path. Note that in both diagrams there are three paths. One
reason for the importance of paths is that they reveal sequential relationships. The
importance of sequential relationships cannot be overstated: if one activity in a sequence is delayed (i.e., late) or done incorrectly, the start of all following activities on that path will be delayed.
Another important aspect of paths is the length of a path: How long will a particular
sequence of activities take to complete? The length (of time) for any path can be
determined by summing the expected times of the activities on that path. The path
with the longest time is of particular interest because it governs project completion
time. In other words, expected project duration equals the expected time of the
longest path. Moreover, if there are any delays along the longest path, there will be
corresponding delays in project completion time. Attempts to shorten project
completion must focus on the longest sequence of activities. Because of its influence
on project completion time, the longest path is referred to as the critical path, and its
activities are referred to as critical activities.
Paths that are shorter than the critical path can experience some delays and still not
affect the overall project completion time as long as the ultimate path time does not
exceed the length of the critical path. The allowable slippage for any path is called
slack, and it reflects the difference between the length of a given path and the length
of the critical path. The critical path, then, has zero slack time.
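The comparison of path lengths can be sketched for a small hypothetical AON network (the activities and durations below are assumptions):

```python
# Small hypothetical AON network: activity -> (duration, successors).
# "S" is the single starting node and "End" the ending node; both take no time.
network = {
    "S":   (0, ["a", "b"]),
    "a":   (4, ["c"]),
    "b":   (3, ["c", "d"]),
    "c":   (5, ["End"]),
    "d":   (7, ["End"]),
    "End": (0, []),
}

def all_paths(node, path=()):
    """Enumerate every path from `node` to the ending node."""
    path = path + (node,)
    succs = network[node][1]
    if not succs:
        yield path
    for s in succs:
        yield from all_paths(s, path)

def length(path):
    """Path length = sum of the expected activity times on the path."""
    return sum(network[n][0] for n in path)

paths = list(all_paths("S"))
critical = max(paths, key=length)   # the longest path governs completion
print(critical, length(critical))   # ('S', 'b', 'd', 'End') 10

# Slack: how much each path can slip without delaying the project.
for p in paths:
    print(p, "slack =", length(critical) - length(p))
```

The critical path has zero slack, exactly as stated above; the shorter paths can each slip by their slack without affecting project completion.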
Critical path analysis is an important tool in production planning and scheduling. Gantt charts are also a scheduling tool, but they have a disadvantage that makes them unsuitable here: the sequence of operations of a project, and hence the earliest possible date for the completion of the project as a whole, cannot be ascertained from them. This problem is overcome by the method of Critical Path Analysis.
CPM is used for scheduling special projects where the relationship between the different parts of the project is more complicated than a simple chain of tasks to be completed one after the other. The method can be used at one extreme for very simple jobs and at the other extreme for the most complicated tasks.
A critical path is a route between two or more operations which minimizes (or maximizes) some measure of performance. It can also be defined as the sequence of activities which requires the greatest normal time to accomplish; that is, the sequence of activities requiring the longest duration is singled out. It is called the critical path because any delay in performing the activities on this path may delay the whole project. So, such critical activities should be taken up first.
Under CPM, the project is analyzed into different operations or activities, and their
relationships are determined and shown on the network diagram. So, first of all a
network diagram is drawn. After this, the required time or some other measure of
performance is posted above and to the left of each operation circle. These times are
then combined to develop a schedule which minimizes or maximizes the measure of
performance for each operation. Thus CPM marks the critical activities in a project and
concentrates on them. It is based on the assumption that the expected time is actually
the time taken to complete the project.
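The forward-pass calculation that CPM performs on a network diagram can be sketched in a few lines of Python. The activity names, durations and precedence relations below are invented purely for illustration:

```python
# Hypothetical activity network: name -> (duration, list of predecessors)
activities = {
    "A": (3, []),
    "B": (4, ["A"]),
    "C": (2, ["A"]),
    "D": (5, ["B", "C"]),
    "E": (1, ["C"]),
    "F": (2, ["D", "E"]),
}

def critical_path(acts):
    """Forward pass: earliest finish of each activity, then backtrack."""
    ef = {}

    def finish(name):
        if name not in ef:
            dur, preds = acts[name]
            ef[name] = dur + max((finish(p) for p in preds), default=0)
        return ef[name]

    duration = max(finish(a) for a in acts)
    # Backtrack from the activity that finishes last, always stepping to
    # the predecessor with the latest earliest-finish time.
    node = max(acts, key=lambda a: ef[a])
    path = [node]
    while acts[node][1]:
        node = max(acts[node][1], key=lambda p: ef[p])
        path.append(node)
    return duration, path[::-1]

duration, path = critical_path(activities)
print(duration, path)  # 14 ['A', 'B', 'D', 'F']
```

The longest-duration sequence A-B-D-F is singled out as the critical path; any delay on these activities delays the whole project, while C and E carry slack.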
Situations where CPM can be effectively used
Advantages of CPM
Many modern techniques have been developed recently for the planning
and control of large projects in various industries, especially the defense, chemical and
construction industries. Perhaps PERT is the best known of such techniques.
PERT is a time-event network analysis technique designed to watch how the parts of
a program fit together during the passage of time and events. This technique was
developed by the Special Projects Office of the U.S. Navy in 1958. It involves the
application of network theory to scheduling problems. In PERT we assume that the
expected time of any operation can never be determined exactly.
Major Features of PERT (Procedure and Requirements for PERT)
Here it is assumed that the time estimates follow the Beta distribution.
The next step is to compute the critical path and the slack time.
A critical path, or critical sequence of activities, is one which takes the longest time to
accomplish the work and has the least slack time.
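Under the Beta-distribution assumption mentioned above, the expected time of an activity is conventionally computed as te = (a + 4m + b) / 6, with variance ((b - a) / 6)^2, where a, m and b are the optimistic, most likely and pessimistic estimates. A minimal sketch (the sample estimates are hypothetical):

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    """Expected time and variance under the Beta-distribution assumption."""
    te = (optimistic + 4 * most_likely + pessimistic) / 6
    variance = ((pessimistic - optimistic) / 6) ** 2
    return te, variance

# Hypothetical three-point estimates (in weeks) for one activity
te, var = pert_estimate(2, 4, 6)
print(te)  # 4.0
```

Summing te along each path of the network and taking the longest total gives the critical path; summing the variances along that path gives a measure of schedule uncertainty.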
Advantages of PERT
PERT is a very important tool of managerial planning and control at the top level,
concerned with the overall responsibility of a project. PERT has the following merits.
(i) PERT forces managers and subordinate managers to make a plan for
production because time event analysis is quite impossible without
planning and seeing how the pieces fit together.
(ii) PERT encourages management control by exception. It concentrates
attention on critical elements that may need correction.
(iii) It enables forward-looking control, as a delay in one event will affect
the succeeding events and possibly the whole project. The production
manager can somehow make up the time by shortening some other
event.
(iv) The network system with its sub-systems creates a pressure for action
at the right spot and level and at the right time.
(v) PERT can be effectively used for rescheduling the activities.
Limitations in using PERT
Difference between PERT and CPM
Although these techniques (PERT and CPM) use the same principles and are based on
network analysis, they differ from each other in the following respects:
(i) PERT is appropriate where time estimates are uncertain in the duration of
activities as measured by optimistic time, most likely time, and pessimistic
time, whereas CPM (Critical Path Method) is good when time estimates
are found with certainty. CPM assumes that the duration of every activity
is constant, and therefore each activity is either critical or not.
(ii) PERT is concerned with events which are the beginning or ending points
of operation while CPM is concerned with activities.
(iii) PERT is suitable for non-repetitive projects while CPM is designed for
repetitive projects.
(iv) PERT can be analyzed statistically, whereas CPM cannot.
(v) PERT is not concerned with the relationship between time and cost,
whereas CPM establishes a relationship between time and cost and cost is
proportionate to time.
3.8 Linear programming
Formulation of LPP
Elixir Paints produces both interior and exterior paints from two raw materials, M1
and M2. The following table provides the data. A market survey restricts the maximum
daily demand of interior paint to 2 tons. Additionally, the daily demand for interior
paint cannot exceed that of exterior paint by more than 1 ton. Formulate the LPP.
Solution:
The general procedure for formulation of an LPP is as follows:
1. Identify and name the decision variables.
The total quantity of M1 used for producing x1 tons of
exterior paint and x2 tons of interior paint cannot exceed 24 tons (since that is the
maximum availability). Therefore the constraint can be written as
6x1 + 4x2 <= 24
In the same way, the constraint equation for the raw material M2 can be framed. At
this point I would suggest that you try to frame the constraint equation for raw
material M2 on your own and then look at the equation given in the text. To
encourage you to frame this equation yourself, I am not exposing the equation
now but am showing it in the consolidated solution to the problem.
Well now that you have become confident by framing the second constraint equation
correctly (I am sure you have), let us now look to frame the demand constraints for
the problem. The problem states that the daily demand for interior paints is restricted
to 2 tons. In other words a maximum of 2 tons of interior paint can be sold per day. If
not more than 2 tons of interior paint can be sold in a day, it is advisable to limit the
production of interior paints also to a maximum of 2 tons per day (I am sure you
agree with me).
Since the quantity of interior paints produced is denoted by x2, the constraint is now
written as x2 <= 2
Now let us look into the other demand constraint. The problem states that the daily
demand for interior paints cannot exceed that of exterior paints by more than 1 ton.
This constraint has to be understood and interpreted carefully. Read the statement
carefully and understand that the daily demand for interior paints can be greater than
the demand for exterior paints, but that difference cannot be more than 1 ton. Again,
we can conclude that if we produce more interior paint than exterior paint, the
difference in production cannot exceed 1 ton. By now you are familiar that the
quantities of exterior paint and interior paint produced are denoted by x1 and x2
respectively. Therefore let us frame the constraint equation as the difference in the
quantities of paints produced:
x2 - x1 <= 1
In addition to the constraints derived from the statements in the problem,
there is one more standard constraint known as the non-negativity constraint. The
rationale behind this constraint is that the quantities of exterior and interior paints
produced can never be less than zero; it is not possible to produce a negative
quantity of any commodity. Therefore x1 and x2 must take values greater than or
equal to zero. This constraint is written as x1, x2 >= 0. Thus we have
formulated (that is, written in the form of equations) the given statement problem.
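Once formulated, a two-variable LPP like this one can be solved by evaluating the objective at every feasible corner point of the constraint region. The sketch below uses only the constraints stated above (the M2 constraint is left as the exercise it is in the text), and the profit coefficients of 5 and 4 per ton are an assumption for illustration, since the data table is not reproduced here:

```python
from itertools import combinations

# Constraints in the form a1*x1 + a2*x2 <= b. Only constraints stated in
# the text are included; the M2 constraint is the reader's exercise.
constraints = [
    (6, 4, 24),    # raw material M1: 6x1 + 4x2 <= 24
    (0, 1, 2),     # demand limit:         x2 <= 2
    (-1, 1, 1),    # demand gap:     x2 - x1 <= 1
    (-1, 0, 0),    # non-negativity:       x1 >= 0
    (0, -1, 0),    # non-negativity:       x2 >= 0
]

def maximize(c1, c2):
    """Enumerate feasible corner points and keep the best objective value."""
    best = None
    for (a1, a2, b1), (d1, d2, b2) in combinations(constraints, 2):
        det = a1 * d2 - a2 * d1
        if abs(det) < 1e-9:
            continue  # parallel constraint lines: no corner point here
        x1 = (b1 * d2 - a2 * b2) / det  # Cramer's rule for the 2x2 system
        x2 = (a1 * b2 - b1 * d1) / det
        if all(p * x1 + q * x2 <= r + 1e-9 for p, q, r in constraints):
            z = c1 * x1 + c2 * x2
            if best is None or z > best[0]:
                best = (z, x1, x2)
    return best

# Hypothetical profits of 5 and 4 (thousand) per ton -- not from the text
z, x1, x2 = maximize(5, 4)
```

With these assumed profits the optimum lands at x1 = 8/3 and x2 = 2, which is exactly how the graphical corner-point method taught alongside LPP formulation works.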
Transportation model
The transportation model is a valuable tool in analyzing and modifying existing
transportation systems or in implementing new ones. In addition, the model is
effective in determining resource allocation in existing business structures.
The model requires a few key pieces of information, which include the following:
Origin of the supply
Destination of the supply
Unit cost to ship
The transportation model can also be used as a comparative tool providing business
decision makers with the information they need to properly balance cost and supply.
The use of this model for capacity planning is similar to the models used by engineers
in the planning of waterways and highways.
This model will help decide what the optimal shipping plan is by determining a
minimum cost for shipping from numerous sources to numerous destinations. This
will help for comparison when identifying alternatives in terms of their impact on the
final cost for a system. The main applications of the transportation model mentioned in
this chapter are location decisions, production planning, capacity planning and
transshipment. The major assumptions of the transportation model are the following:
Transportation costs play an important role in location decisions. The transportation
problem involves finding the lowest-cost plan for distributing stocks of goods or
supplies from multiple origins to multiple destinations that demand the goods. The
transportation model can be used to compare location alternatives in terms of their
impact on the total distribution costs for a system. It is subject to supply constraints
and to demand satisfaction at the markets. It also determines how to allocate the
supplies available from the various factories to the warehouses that stock or demand
those goods, in such a way that the total shipping cost is minimized.
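A tiny balanced instance makes the lowest-cost-plan idea concrete. All figures below are invented; with two origins and two destinations whose supplies equal demands, fixing one shipment determines the whole plan, so the optimum can be found by simple enumeration (larger instances need the transportation simplex or an LP solver):

```python
# Hypothetical balanced 2x2 transportation problem (all figures invented)
supplies = [20, 30]          # units available at origins O1, O2
demands = [25, 25]           # units required at destinations D1, D2
costs = [[2, 3],             # unit shipping cost from each origin
         [4, 1]]             # to each destination

def cheapest_plan():
    """Enumerate feasible plans; x11 determines the rest in a 2x2 problem."""
    best = None
    for x11 in range(min(supplies[0], demands[0]) + 1):
        x12 = supplies[0] - x11      # rest of O1's supply goes to D2
        x21 = demands[0] - x11       # rest of D1's demand comes from O2
        x22 = supplies[1] - x21      # whatever O2 has left goes to D2
        if min(x12, x21, x22) < 0:
            continue                 # this split is infeasible
        plan = [[x11, x12], [x21, x22]]
        cost = sum(costs[i][j] * plan[i][j]
                   for i in range(2) for j in range(2))
        if best is None or cost < best[0]:
            best = (cost, plan)
    return best

cost, plan = cheapest_plan()
print(cost, plan)  # 85 [[20, 0], [5, 25]]
```

The optimal plan routes as much as possible through the cheap O2-to-D2 lane, which is the kind of trade-off the model exposes when comparing location or capacity alternatives.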
Assignment model
The assignment problem is one of the fundamental combinatorial optimization
problems in the branch of optimization or operations research in mathematics. It
consists of finding a maximum weight matching in a weighted bipartite graph. In its
most general form, the problem is as follows:
There are a number of agents and a number of tasks. Any agent can be assigned to
perform any task, incurring some cost that may vary depending on the agent-task
assignment. It is required to perform all tasks by assigning exactly one agent to each
task and exactly one task to each agent in such a way that the total cost of the
assignment is minimized. If the numbers of agents and tasks are equal and the total
cost of the assignment for all tasks is equal to the sum of the costs for each agent (or
the sum of the costs for each task, which is the same thing in this case), then the
problem is called the linear assignment problem. Commonly, when speaking of the
assignment problem without any additional qualification, then the linear assignment
problem is meant.
The Hungarian algorithm is one of many algorithms that have been devised that solve
the linear assignment problem within time bounded by a polynomial expression of the
number of agents. The assignment problem is a special case of the transportation
problem, which is a special case of the minimum cost flow problem, which in turn is
a special case of a linear program. While it is possible to solve any of these problems
using the simplex algorithm, each specialization has more efficient algorithms
designed to take advantage of its special structure. If the cost function is
quadratic rather than linear, the problem is called the quadratic assignment problem.
Suppose that a taxi firm has three taxis (the agents) available, and three customers
(the tasks) wishing to be picked up as soon as possible. The firm prides itself on
speedy pickups, so for each taxi the "cost" of picking up a particular customer will
depend on the time taken for the taxi to reach the pickup point. The solution to the
assignment problem will be whichever combination of taxis and customers results in
the least total cost. However, the assignment problem can be made rather more
flexible than it first appears. In the above example, suppose that there are four taxis
available, but still only three customers. Then a fourth dummy task can be invented,
perhaps called "sitting still doing nothing", with a cost of 0 for the taxi assigned to it.
The assignment problem can then be solved in the usual way and still give the best
solution to the problem. Similar tricks can be played in order to allow more tasks than
agents, tasks to which multiple agents must be assigned (for instance, a group of more
customers than will fit in one taxi), or maximizing profit rather than minimizing cost.
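The taxi example, including the dummy-task trick, can be sketched in a few lines of Python. The pickup times are invented for illustration, and a brute-force search over all permutations stands in for the Hungarian algorithm, which scales far better but is longer to write:

```python
from itertools import permutations

# Hypothetical pickup times (minutes): rows = taxis, columns = customers
costs = [[10, 19, 8],
         [15, 9, 20],
         [13, 12, 7],
         [14, 8, 11]]

# Four taxis but only three customers: append a dummy "sit idle" task
# with zero cost, as described above, to make the matrix square.
padded = [row + [0] for row in costs]
n = len(padded)

# Brute force over all n! complete assignments -- fine for tiny instances;
# the Hungarian algorithm solves the same problem in polynomial time.
best_cost, best = min(
    (sum(padded[taxi][task] for taxi, task in enumerate(p)), p)
    for p in permutations(range(n))
)
print(best_cost, best)  # taxi assigned to task 3 is the idle one
```

Here the least total pickup time leaves one taxi on the dummy task, i.e. sitting still, exactly as the trick intends.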
Queuing Models
Delays and queuing problems are among the most common features not only of daily-life
situations, such as at a bank or post office, at a ticketing office, in public
transport or in a traffic jam, but also of more technical environments, such as
manufacturing, computer networking and telecommunications. They play an essential
role for business process re-engineering purposes in administrative tasks. “Queuing
models provide the analyst with a powerful tool for designing and evaluating the
performance of queuing systems.”
Whenever customers arrive at a service facility, some of them have to wait before
they receive the desired service. It means that the customer has to wait for his/her
turn, may be in a line. Customers arrive at a service facility (sales checkout zone in
ICA) with several queues, each with one server (sales checkout counter). The
customers choose a queue of a server according to some mechanism (e.g., shortest
queue or shortest workload). Sometimes inefficiencies in service also occur, for
example an undue wait caused by a new employee. Delays in service jobs
beyond their due time may result in losing future business opportunities. Queuing
theory is the study of waiting in all these various situations. It uses queuing models to
represent the various types of queuing systems that arise in practice. The models
enable finding an appropriate balance between the cost of service and the amount of
waiting.
If there are more jobs at a service node than there are servers, then jobs will queue and wait
for service. The M/M/1 queue is a simple model where a single server serves jobs that
arrive according to a Poisson process and have exponentially distributed service
requirements. In an M/G/1 queue the G stands for general and indicates an arbitrary
probability distribution. The M/G/1 model was solved by Felix Pollaczek in 1930, a
solution later recast in probabilistic terms by Aleksandr Khinchin and now known as
the Pollaczek–Khinchine formula. After World War II queueing theory became an
area of research interest to mathematicians.
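For the M/M/1 queue just described, the standard steady-state measures follow directly from the utilization rho = lambda/mu. A minimal sketch (the arrival and service rates in the example are arbitrary):

```python
def mm1_metrics(lam, mu):
    """Steady-state measures of an M/M/1 queue (valid only when lam < mu)."""
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    rho = lam / mu                    # server utilization
    return {
        "rho": rho,
        "L": rho / (1 - rho),         # mean number in the system
        "Lq": rho ** 2 / (1 - rho),   # mean number waiting in the queue
        "W": 1 / (mu - lam),          # mean time spent in the system
        "Wq": rho / (mu - lam),       # mean waiting time in the queue
    }

# Example: 2 arrivals per minute, 3 service completions per minute
m = mm1_metrics(2, 3)
print(m["L"], m["W"])  # 2.0 1.0
```

Note how sharply the waiting measures grow as rho approaches 1, which is precisely the cost-of-service versus cost-of-waiting balance queuing models are used to study.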
Work on queueing theory used in modern packet switching networks was performed
in the early 1960s by Leonard Kleinrock. It was in this period that John Little gave a
proof of the formula which now bears his name: Little's law. In 1961 John Kingman
gave a formula for the mean waiting time in a G/G/1 queue: Kingman's formula. The
matrix geometric method and matrix analytic methods have allowed queues with
phase-type distributed interarrival and service time distributions to be considered.
Problems such as performance metrics for the M/G/k queue remain open.
There are two types of multi-channel problems. The first type occurs when the system
has several service centres, the queues to each of them are isolated, and an element
cannot pass from one queue to another. The second type of waiting-queue problem is
said to be of multiple exponential channels, or of several service channels in parallel,
where an element in the queue can be served equally well by more than one station.
Queuing problems of the first type should be considered as
several problems of the single-channel type. Fig. 1 illustrates such a case; the
formation of each queue is independent of the others. Once an element has
selected a particular queue, it becomes part of a single-channel system.
The probability of the arrivals in queue A is independent of the probability of the
arrivals in queue B (and C), due to the different characteristics of the routes. On the
other hand, in the multiple-exponential-channels type, each of the stations can deliver
the same type of service and is equipped with the same type of facilities. The element
which selects one station makes this decision without any external pressure from
anywhere. Due to this fact, the queue is single. The single queue (line) usually breaks
into smaller queues in front of each station. Fig. 2 schematically shows the case of a
single line (with mean arrival rate λ) that randomly scatters itself toward four
stations (S = 4), each of which has an equal mean service rate μ.
3.10 Simulation
Under such situations, simulation is used. It should be noted that simulation does not
solve the problem by itself; it only generates the information or data
needed for the decision problem or decision-making.
Simulation modeling is the process of creating and analyzing a digital prototype of a
physical model to predict its performance in the real world. Simulation modeling is
used to help designers and engineers understand whether, under what conditions, and
in which ways a part could fail and what loads it can withstand. Simulation modeling
can also help predict fluid flow and heat transfer patterns.
Although this is a logical ordering of steps in a simulation study, much iteration at
various sub-stages may be required before the objectives of a simulation study are
achieved. Not all the steps may be possible and/or required. On the other hand,
additional steps may have to be performed. The next three sections describe these
steps in detail.
Deterministic simulation models are different from statistical models (for example,
linear regression), whose aim is to estimate the relationships between variables
empirically. A deterministic model is viewed as a useful approximation of reality that
is easier to build and interpret than a stochastic model. However, such models can be
extremely complicated, with large numbers of inputs and outputs, and are therefore
often non-invertible: a single fixed set of outputs can be generated by multiple sets of
inputs. Thus taking reliable account of parameter and model uncertainty is crucial,
perhaps even more so than for standard statistical models, yet this is an area that has
received little attention from statisticians.
Probabilistic simulation
In Monte Carlo simulation, the entire system is simulated a large number (e.g., 1000)
of times. Each simulation is equally likely, and is referred to as a realization of the
system. For each realization, all of the uncertain parameters are sampled (i.e., a single
random value is selected from the specified distribution describing each parameter).
The system is then simulated through time (given the particular set of input
parameters) such that the performance of the system can be computed. This results in
a large number of separate and independent results, each representing a possible
“future” for the system (i.e., one possible path the system may follow through time).
The results of the independent system realizations are assembled into probability
distributions of possible outcomes.
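The procedure described above can be sketched for a toy system: three sequential activities with uncertain durations. The triangular distributions and all their parameters below are invented for illustration:

```python
import random

random.seed(42)  # fix the random stream so the realizations are reproducible

def one_realization():
    """Sample each uncertain parameter once, then simulate the system."""
    # random.triangular(low, high, mode): hypothetical duration estimates
    a = random.triangular(2, 6, 4)
    b = random.triangular(1, 4, 2)
    c = random.triangular(3, 8, 5)
    return a + b + c  # the activities run in series

# 1000 equally likely realizations -> an empirical outcome distribution
realizations = [one_realization() for _ in range(1000)]
mean_time = sum(realizations) / len(realizations)
spread = (min(realizations), max(realizations))
```

Sorting `realizations` and reading off percentiles gives exactly the probability distribution of possible outcomes that the text describes, e.g. the completion time that 90% of futures stay under.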
3.11 Dynamic programming
The idea behind dynamic programming is quite simple. In general, to solve a given
problem, we need to solve different parts of the problem (subproblems), then
combine the solutions of the subproblems to reach an overall solution. Often, when
using a more naive method, many of the subproblems are generated and solved many
times. The dynamic programming approach seeks to solve each subproblem only
once, thus reducing the number of computations: once the solution to a given
subproblem has been computed, it is stored or "memoized"; the next time the same
solution is needed, it is simply looked up. This approach is especially useful when the
number of repeating subproblems grows exponentially as a function of the size of the
input.
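The solve-each-subproblem-only-once idea can be seen in miniature with the Fibonacci numbers: the naive recursion regenerates the same subproblems exponentially often, while a memoized version computes each subproblem exactly once and thereafter looks it up.

```python
from functools import lru_cache

calls = {"naive": 0, "memo": 0}

def fib_naive(n):
    """Recomputes overlapping subproblems again and again."""
    calls["naive"] += 1
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    """Each subproblem is solved once, stored, then simply looked up."""
    calls["memo"] += 1
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

assert fib_naive(20) == fib_memo(20) == 6765
print(calls)  # the naive version makes 21891 calls, the memoized one 21
```

The same table-of-stored-subproblems pattern underlies the shortest-path and matrix-chain examples mentioned below.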
Dynamic programming algorithms are used for optimization (for example, finding the
shortest path between two points, or the fastest way to multiply many matrices). A
dynamic programming algorithm will examine all possible ways to solve the problem
and will pick the best solution. Therefore, we can roughly think of dynamic
programming as an intelligent, brute-force method that enables us to go through all
possible solutions to pick the best one. If the scope of the problem is such that going
through all possible solutions is possible and fast enough, dynamic programming
guarantees finding the optimal solution. The alternatives are many, such as using a
greedy algorithm, which picks the best possible choice "at any possible branch in the
road". While a greedy algorithm does not guarantee the optimal solution, it is faster.
Fortunately, some greedy algorithms (such as those for minimum spanning trees) are
proven to lead to the optimal solution.
For example, let's say that you have to get from point A to point B as fast as possible,
in a given city, during rush hour. A dynamic programming algorithm will look into the
entire traffic report, looking into all possible combinations of roads you might take,
and will only then tell you which way is the fastest. Of course, you might have to wait
for a while until the algorithm finishes, and only then can you start driving. The path
you will take will be the fastest one (assuming that nothing changed in the external
environment).
On the other hand, a greedy algorithm will start you driving immediately and will
pick the road that looks the fastest at every intersection. As you can imagine, this
strategy might not lead to the fastest arrival time, since you might take some "easy"
streets and then find yourself hopelessly stuck in a traffic jam.