Strategic Financial Management
Introduction
One popular definition of strategic financial management, as per the official terminology of the CIMA, is
“the identification of the possible strategies capable of maximizing an organization’s net present value, the
allocation of scarce capital resources among the competing opportunities and the implementation and
monitoring of the chosen strategy, so as to achieve stated objectives”. Strategy, then, depends on the stated
objectives or targets, so let us start by identifying and formulating these objectives.
Financial Objectives
Needless to say, one of the most important objectives of a company is maximizing the wealth of its
shareholders. It should be kept in mind that a company is financed by its ordinary shareholders, preference
shareholders, loan stock holders and other long-term and short-term creditors. The surplus funds belong to the
legal owners of the company, its ordinary shareholders, and any retained profits are the undistributed wealth of
these equity shareholders. Non-financial objectives do not ignore financial objectives altogether, but they point
to the fact that the simple theory of company finance, which postulates that the primary objective of a firm is to
maximize the wealth of ordinary shareholders, is too simplistic. In essence, the financial objectives may have to
be compromised in order to satisfy non-financial objectives.
When the price of a company’s shares traded on a stock market rises, the wealth of the shareholders
increases. The price of a company’s share goes up when the company is expected to make attractive profits,
which it plans to pay as dividends to its shareholders or re-invest in the business to achieve future profit growth
and dividend growth. However, it should also be kept in mind that, in order to increase the share price, the
company should achieve its profits without taking business and financial risks which might worry its
shareholders.
Having discussed the financial objectives of the firm at length, let us now look into some of the non-financial
objectives. The non-financial objectives of a firm can be as follows:
a. General welfare of the employees, which includes providing the employees with good wage, salaries,
comfortable and safe working conditions, good training and career developments and good pensions.
b. Welfare of the management which includes providing them with the better salaries and perquisites though it
will be at the cost of the company’s expenditure.
c. Welfare of society as a whole. For example, oil companies are confronted with the problem of protecting
the environment and preserving the earth’s dwindling energy resources.
d. Fulfillment of responsibilities towards customers and suppliers.
e. Leadership in research and development.
Agency Theory
Agency theory is often described in terms of the relationships between the various interested parties in the firm.
The theory examines the duties and conflicts that occur between parties who have an agency relationship.
Agency relationships occur when one party, the principal, employs another party, called the agent, to perform a
task on their behalf. Agency theory is helpful in explaining the actions of the various interest groups in the
corporate governance debate. For example, managers can be seen as the agents of shareholders, employees
as the agents of managers, and managers and shareholders as the agents of long-term and short-term creditors, etc. In
most of these principal-agent relationships, conflicts of interest are seen to exist: the objectives of shareholders
and managers may conflict, and in a similar way the objectives of employees and managers may be in conflict.
Although the actions of all the parties are united by one mutual objective of wishing the firm to survive, the
various principals involved might make arrangements to ensure that their agents work more closely in the
principals’ interests. For example, shareholders might insist that part of management remuneration is in the
form of a profit-related bonus. The agency relationship arising from the separation of
ownership from management is sometimes characterized as the agency problem. For example, if managers
hold none or very little of the equity shares of the company they work for, what is to stop them from working
inefficiently, not bothering to look for profitable new investment opportunities, or giving themselves high salaries
and perks?
One power that shareholders possess is the right to remove the directors from office. But shareholders have to
take the initiative to do this, and in many companies, the shareholders lack the energy and organization to take
such a step. Even so, directors will want the company’s report and accounts, and the proposed final dividend, to
meet with shareholders’ approval at the AGM. Another reason why managers might do their best to improve the
financial performance of their company is that managers’ pay is often related to the size or profitability of the
company. Managers in very big companies, or in very profitable companies, will normally expect to earn higher
salaries than managers in smaller or less successful companies. Perhaps the most effective method of
ensuring that shareholder and manager objectives coincide is a long-term share option scheme. Management
audits can also be employed to monitor the actions of managers. Creditors commonly write restrictive
covenants into loan agreements to protect the safety of their funds. These arrangements involve time and
money, both in initial set-up and in subsequent monitoring; such costs are referred to as agency costs.
The actions of stakeholder groups in pursuit of their various goals can exert influence on strategy. The greater
the power of the stakeholder, the greater the influence will be. Each stakeholder group possesses different
expectations about what it wants, and the expectations of the various groups may conflict with each other. Each
group, however, tends to influence strategic decision-making. The relationship between management and
shareholders is sometimes referred to as an agency relationship, in which the managers act as agents for the
shareholders, using delegated powers to run the affairs of the company in the shareholders’ best interests.
Although most of the financial management theory is developed keeping in mind the assumed objective of
maximizing shareholder wealth, it is, at the same time, important to note that in reality, companies may be
working toward other objectives. The other parties that have interests in the organization (e.g., employees, the
community at large, creditors, customers) have objectives that differ from those of the shareholders. As the
objectives of these other parties might conflict with those of the shareholders, it may be impossible to maximize
shareholder wealth and satisfy the objectives of the other parties at the same time. In such situations, the firm faces
multiple, conflicting objectives, and seeking a satisfactory outcome for all interested parties becomes the only
practical approach for management. If this strategy is adopted, the firm seeks to earn a satisfactory return for its
shareholders while at the same time (for example) paying reasonable wages to its workforce.
Goal Congruence
Goal congruence is the term which describes the situation when the goals of different interest groups coincide.
A way of helping to achieve goal congruence between shareholders and managers is by the introduction of
carefully designed remuneration packages for managers which would motivate managers to take decisions
which were consistent with the objectives of the shareholders. Agency theory sees employees of businesses,
including managers, as individuals, each with his or her own objectives. Within a department of a business,
there are departmental objectives. If achieving these various objectives also leads to the achievement of the
objectives of the organization as a whole, there is said to be goal congruence.
Goal congruence can be achieved, and at the same time the ‘agency problem’ can be dealt with, by providing
managers with incentives which are related to profits or share price, or other factors such as:
a. Pay or bonuses related to the size of profits, termed profit-related pay.
b. Rewarding managers with shares, e.g., when a private company ‘goes public’ and managers are invited to
subscribe for shares in the company at an attractive offer price.
c. Rewarding managers with share options. In a share option scheme, selected employees are given a
number of share options, each of which gives the right (after a certain date) to subscribe for shares in the
company at a fixed price. The value of an option will increase if the company is successful and its share
price goes up. For example, an employee might be given 10,000 options to subscribe for shares at Rs.2
each; if the share price later rises to Rs.5, the employee can buy Rs.50,000 worth of shares for Rs.20,000,
a gain of Rs.30,000 (as sketched below).
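The arithmetic of such an option gain can be sketched as follows; the per-share prices of Rs.2 and Rs.5 are the hypothetical figures implied by the example above, not values from the text.

# Hypothetical share option scheme figures from the example above.
options = 10_000
exercise_price = 2.0     # Rs. per share (assumed exercise price)
market_price = 5.0       # Rs. per share after the share price has risen (assumed)

cost_of_exercise = options * exercise_price   # Rs. 20,000 paid by the employee
value_of_shares = options * market_price      # Rs. 50,000 worth of shares received
gain = value_of_shares - cost_of_exercise     # Rs. 30,000 gain on exercise
print(cost_of_exercise, value_of_shares, gain)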
Such measures might encourage management in the adoption of “creative accounting” methods which will
distort the reported performance of the company in the service of the managers’ own ends. However, creative
accounting methods such as off-balance sheet finance present a temptation to management at all times given
that they allow a more favorable picture of the state of the company to be presented than otherwise, to
shareholders, potential investors, potential lenders and others. An alternative approach is to attempt to monitor
managers’ behavior, for example, by establishing ‘management audit’ procedures, by introducing additional
reporting requirements, or by seeking assurance from managers that shareholders’ interests will be foremost in
their priorities.
Economic Influences
The rate of price inflation in the economy has an impact on:
a. Costs of production and selling prices
b. Interest rates
c. Foreign exchange rates
d. Demand in the economy (high rates of inflation seem to put a brake on real economic growth).
Interest Rates
A positive real rate of interest adds to an investor’s real wealth through the income he earns from his investments.
However, when interest rates go up or down, perhaps due to a rise or fall in the rate of inflation, there will also
be a potential capital loss or gain for the investor. In other words, the market value of interest-bearing securities
will alter. Market values will fall when interest rates go up and vice versa.
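Why market values of interest-bearing securities move inversely with interest rates can be seen from a simple present-value sketch; the bond terms and market rates below are assumptions chosen purely for illustration.

# Price a hypothetical fixed-coupon bond as the present value of its cash flows.
def bond_price(face_value, coupon_rate, market_rate, years):
    coupon = face_value * coupon_rate
    pv_coupons = sum(coupon / (1 + market_rate) ** t for t in range(1, years + 1))
    pv_redemption = face_value / (1 + market_rate) ** years
    return pv_coupons + pv_redemption

face = 1000
print(round(bond_price(face, 0.12, 0.12, 10), 2))   # priced at par when the market rate equals the coupon
print(round(bond_price(face, 0.12, 0.14, 10), 2))   # price falls when interest rates rise
print(round(bond_price(face, 0.12, 0.10, 10), 2))   # price rises when interest rates fall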
When interest rates change, the return expected by investors from shares also tends to change. For example, if
interest rates fall from 14 percent to 12 percent on government securities, and from 15 percent to 13 percent on
company debentures, the return expected from shares (dividends and capital growth) would also fall. This is
because shares and debt are alternative ways of investing money. If interest rates fall, shares become more
attractive to buy. As demand for shares increases, their prices too rise, and so the dividend return gained from
them falls in percentage terms.
Interest rates are important for companies’ financial decisions. Changes in interest rates can have the following
effects.
Interest Rates and New Capital Investments
When interest rates go up, the cost of finance to a company goes up, and the minimum return that the company
will require on its own new capital investments goes up with it. When interest rates are high, a company’s
management should give close consideration to keeping investments in assets, particularly unwanted or
inefficient fixed assets, stocks and debtors, down to a minimum, in order to reduce the company’s need to
borrow. At the same time, management also needs to bear in mind the deflationary effect of high interest rates,
which deter spending by raising the cost of borrowing.
Financial Planning
The management function of planning requires the development, definition and evaluation of the following:
a. The organization’s objectives,
b. Alternative strategies for achievement of these objectives.
The objectives of business activity are invariably concerned with money, as the universal measure of the ability
to command resources. Thus, financial awareness pervades all business activities. Nevertheless, finance
cannot be managed in isolation from other functions of the business and, therefore, financial planning will be
undertaken within the framework of a plan for the whole organization, i.e., a corporate plan.
The process of financial planning must begin at the strategic level, where the corporate strengths and
weaknesses are reviewed and long-term objectives are identified. It is to be kept in mind that business review
should enable a forecast to be made of future changes in sales, profitability and capital employed. When this
forecast is compared with the results desired by the corporate objectives, a gap may be identified which must
be made good by developing new strategies. Senior management must negotiate with middle management,
until a single strategic plan for the whole company is agreed. From this strategic plan, tactical plans must be
drawn up (e.g., pricing policies, personnel requirements, and production methods) and a medium-term plan
established. This medium-term plan can be broken down into a series of short-term financial plans at a later
point of time.
Companies are often accused of favoring short-term profitability at the expense of long-term prosperity. For
example, an investment in the latest technology in production machinery might be postponed because of fear of
increasing the depreciation charge, although longer-term profitability will be improved by the investment.
The different types of long-term strategies can be better understood with the help of the following flow chart.
Long-term strategies
  Survival strategies
  Growth strategies
    Internal growth
    Growth by acquisition
Survival Strategies
Non-growth Strategies
A non-growth strategy is a strategy under which there is no growth in earnings. This does not necessarily
mean no growth in turnover. A company might pursue a non-growth strategy if it saw its non-economic objectives
as more important than its economic objectives.
In certain cases, there could even be negative growth, by paying out dividends larger than current earnings, so
that shareholders are effectively receiving a refund of their capital investment, and there is a net fall in assets
employed. A negative growth strategy can be adopted in pursuit of an objective to increase the percentage
return to the shareholders – if the company pulls out of the least profitable areas of its operations first, it will
increase its overall return on investment, although the total investment will be less. The negative growth
strategy consists of an orderly, planned withdrawal from less profitable areas, and while the shareholder’s
dividend may eventually decline, his return can rise since the capital invested also falls. If the company simply
runs down, his return will also fall.
Corrective Strategies
A non-growth strategy certainly does not mean that the company can afford to be complacent. A considerable
amount of management time should be devoted to considering the actions needed to correct its overall strategic
structure and achieve the optimum. This involves seeking a balance between different areas of operations and
also seeking the optimum organization structure for efficient operation.
Thus although there is no overall growth (or negative growth occurs) the company will shift its product market
position, employ its resources in different fields and continue to search for new opportunities. In particular, the
company will aim to correct any weaknesses which it has discovered during its appraisal. For this reason the
term corrective strategy is also used. A non-growth strategy is bound to be a corrective strategy, but a
corrective strategy can also be used in conjunction with, or as one component of, a growth strategy.
A company faces risk because of its lack of knowledge of the future. The extent of the risk it faces can be
revealed by the use of performance-risk gap analysis, where forecasts of the outcome in n years’ time take
into account not only the likely return but also the risk involved. While on the subject of risk, it should be
remembered that although it is desirable to reduce risk, risk is inevitably involved in any business. In fact, there
are different ways of looking at risk:
• Risk which is inevitable in the nature of the business; this risk should be minimized as above.
• Risk which an organization can afford to take. In general, high return involves higher risk, and a company
which is in a strong position might be prepared to take a higher risk in the hope of achieving a high return.
• Risk which an organization cannot afford to take. A company cannot afford to commit every penny of its
resources (and perhaps an overdraft as well) to a risky project. In the event of failure it would be left in an
extremely vulnerable position and could even face winding up.
• Risk which an organization cannot afford not to take. Sometimes a company is forced to take a risk because
it knows that its competitors are going to act, and if it does not follow it could be seriously left behind.
Chapter 2
Conceptual Framework
Capital structure should be designed with the aim of maximizing the market valuation of the firm in the long run.
The important determinants in designing capital structure are:
1. Type of Asset Financed: Ideally, short-term liabilities should be used to create short-term assets
and long-term liabilities to create long-term assets. Otherwise a mismatch develops between the time
taken to extinguish the liability and the time over which the asset generates returns. This mismatch may
introduce elements of risk such as interest rate movements and market receptivity at the time of refinancing.
2. Nature of the Industry: A firm generally relies more on long-term debt and equity if its capital
intensity is high. All short-term assets need not be financed by short-term debt. In a non-seasonal and non-
cyclical business, investments in current assets assume the characteristics of fixed assets and hence need
to be financed by long-term liabilities. If the business is seasonal in nature, the funding needs at seasonal
peaks may be financed by short-term debt. The risk of financial leverage increases for businesses subject
to large cyclical variations. These businesses need capital structures that can buffer the risks associated
with such swings.
3. Degree of Competition: A business characterized by intense competition and low entry barriers
faces a greater risk of earnings fluctuations. The risks of fluctuating earnings can be partially hedged by
giving more weightage to equity financing. Reduced levels of competition and higher entry barriers
decrease the volatility of the earnings stream and present an opportunity to safely and profitably
increase the financial leverage.
4. Obsolescence: The key factors that lead to technological obsolescence should be identified and
properly assessed. Obsolescence can occur in products, manufacturing processes, material components
and even marketing. Financial maneuverability is at a premium during times of crisis triggered by
obsolescence. Excessive leverage can limit the firm's ability to respond to such a crisis. If the chances of
obsolescence are high, the capital structure should be built conservatively.
5. Product Life Cycle: At the venture stage, the risks are high. Therefore equity, being risk capital per
se, is usually the primary source of finance. The venture cannot assume additional risks associated with
financial leverage. During the growth stage, the risk of failure decreases and the emphasis shifts to
financing growth. Rapid growth generally signals significant investment needs and requires huge sums of
capital to fuel growth. This may entail large doses of debt and periodic induction of additional equity capital.
As growth slows, seasonality and cyclicality become more apparent. As the business reaches the maturity
stage, leverage is likely to decline as cash flows accelerate.
6. Financial Policy: Designing an optimum capital structure should be done in response to the overall
financial policy of the firm. The management may have evolved certain financial policies such as a
maximum debt-equity ratio, a predetermined dividend pay-out, a minimum debt service coverage level, etc.
The design of the capital structure will become subservient to such constraints and the solution provided may be
suboptimal.
7. Past and Current Capital Structure: The proposed capital structure is often determined by past
events. Prior financing decisions, acquisitions, investment decisions, etc. create conditions which may be
difficult to change in the short run. However, in the medium- to long-term, capital structure can be changed
by issuing or retiring debt, issuing equity, equity buy-backs (when permitted), securitization, altering
dividend policies, changing asset turnover, etc.
8. Corporate Control: Firms which are vulnerable to takeover are averse to further issuance of equity
as it can result in the dilution of the ownership stake. Such firms place an excessive reliance on debt and
retained earnings. Firms with 'strong' management (having controlling stake) are unlikely to have
reservations over further issue of equity.
9. Credit Rating: The market assigns a great deal of weightage to the credit rating of a firm. Hence obtaining
and maintaining a target rating has become imperative for most firms. Rating agencies maintain constant
watch to identify any signs of deterioration in the creditworthiness of the company. The market reacts
negatively to any downgrading of the rating of a firm. This may result in a denial of access to capital,
either due to the provisions of law/regulations (for example, companies below a certain rating cannot issue
commercial paper) or due to market forces (investors may not subscribe to debt with low ratings). The possibility of downgrading
of rating due to the increase in leverage should be factored in while making capital structure decisions.
ROI-ROE Analysis
The relationship between the Return on Investment (total capital employed) and the return on equity (net worth)
at different levels of financial leverage needs to be analyzed.
The relation between ROI and ROE is as follows:

ROE = [ROI + (ROI − kd) × D/E] × (1 − t)

Where, ROI is the (pre-tax) return on total capital employed, kd is the (pre-tax) cost of debt, D/E is the
debt-equity ratio and t is the tax rate.

The ROE of an unlevered firm (or a firm with a lower leverage) is higher than the ROE of a levered firm (or a
firm with a higher leverage) when the ROI is lower than the cost of debt. Conversely, the ROE of a levered firm
is higher than the ROE of an unlevered firm (or a firm with lower leverage) when the ROI is higher than the cost
of debt. The ROE will remain constant irrespective of the level of leverage if the ROI is equal to the cost of
debt.
Beta and Theta are identical firms except for their capital structure; the cost of debt is 10 percent.
We shall examine the impact on the ROE of both firms if the ROI is 5%, 10% and 20%.
It can be observed that firm Beta is better off (generates a higher ROE) when the ROI, at 5%, is less than the
cost of debt of 10%. On the other hand, firm Theta is better off when the ROI, at 20%, is higher than the cost of
debt of 10%. When the ROI is equal to the cost of debt, both firms generate an identical ROE of 6%.
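A short sketch of the ROI-ROE relation given above. The debt-equity mixes assumed for Beta and Theta and the 40 percent tax rate are illustrative assumptions (the original figures are not reproduced here), chosen so that the output reproduces the 6 percent ROE quoted when ROI equals the 10 percent cost of debt.

# ROE = [ROI + (ROI - kd) x D/E] x (1 - t)
def roe(roi, kd, debt, equity, tax_rate=0.40):   # 40% tax rate assumed
    return (roi + (roi - kd) * debt / equity) * (1 - tax_rate)

kd = 0.10                                        # cost of debt, as in the example
for roi in (0.05, 0.10, 0.20):
    beta = roe(roi, kd, debt=200, equity=800)    # assumed lower-leverage firm (Beta)
    theta = roe(roi, kd, debt=800, equity=200)   # assumed higher-leverage firm (Theta)
    print(f"ROI {roi:.0%}: ROE Beta {beta:.2%}, ROE Theta {theta:.2%}")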
DU PONT ANALYSIS
It is important to examine a firm's rate of return on assets (ROA) in terms of profit margin and asset turnover.
The profit margin measures the profit earned per rupee of gross revenue, but does not consider the amount of
assets used to generate that revenue.
ROA = Net profit margin x Average asset turnover
When analyzing a change in return on assets, the analyst could look into the above equation to see changes in
its components: net profit margin and total assets turnover.
A firm's rate of return on equity (ROE) is related to ROA through the interest-expense-to-average-assets
ratio and a leverage ratio, the assets-to-equity ratio, often termed the equity multiplier. Thus, the impact on ROE
of changes in leverage, as well as of changes in firm operations and efficiency, can be determined. Du Pont
analysis is an excellent method to determine the strengths and weaknesses of a firm. A low or declining ROE is
a signal that there may be a weakness. Using Du Pont analysis, the source of the weakness can be
determined: asset management, expense control, production efficiency or marketing could be the potential
weaknesses within the firm. Examining the individual components, rather than interpreting ROE itself, may
identify these weaknesses more readily.
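A minimal sketch of the Du Pont idea described above, using the simpler decomposition ROA = margin x turnover and ROE = ROA x equity multiplier (a simplification of the interest-expense formulation mentioned in the text); all figures are assumed.

# Du Pont decomposition: ROA = net profit margin x asset turnover;
# ROE = ROA x equity multiplier (average assets / average equity).
net_income = 120.0       # assumed figures
revenue = 1500.0
avg_assets = 1000.0
avg_equity = 400.0

margin = net_income / revenue          # profit per rupee of revenue
turnover = revenue / avg_assets        # revenue per rupee of assets
multiplier = avg_assets / avg_equity   # leverage (equity multiplier)

roa = margin * turnover                # 0.08 x 1.5 = 12%
roe = roa * multiplier                 # 12% x 2.5 = 30%
print(f"ROA = {roa:.1%}, ROE = {roe:.1%}")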
Economic Value Added (EVA) is the economic profit generated after meeting the cost of invested capital. EVA
incorporates the opportunity cost of invested capital, which is not captured by traditional accounting measures.
Numerous studies have shown EVA to have a higher correlation to stock valuation than accounting based
measures.
EVA = Net Operating Profit after Tax - (Invested Capital x Cost of Capital)
There are two steps required to convert GAAP net income to EVA. First, calculate net operating profit after tax
(NOPAT) by adjusting net income. Common adjustments include extraordinary gains and losses, securities
gains and losses, provision expenses and preferred stock dividends. Second, calculate invested capital and
apply cost of capital. Invested capital includes book value of common and preferred equity, after-tax allowance
for loan losses, and certain adjustments for cumulative non-operating gains and losses. Cost of capital equals
the minimum required rate of return for investors (e.g., 15%). Whenever EVA is positive, shareholders have
received a total economic return on their investment in excess of their required rate of return.
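The EVA formula above can be illustrated with a minimal sketch; all the figures used here are assumed.

# EVA = NOPAT - (invested capital x cost of capital)
nopat = 45.0               # net operating profit after tax (assumed)
invested_capital = 250.0   # adjusted book value of equity and debt (assumed)
cost_of_capital = 0.15     # minimum required rate of return, e.g. 15% as in the text

capital_charge = invested_capital * cost_of_capital   # 37.5
eva = nopat - capital_charge                          # 7.5, positive: value created
print(f"Capital charge = {capital_charge}, EVA = {eva}")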
CFROI (cash flow return on investment) is defined as the return on investment expected over the average life of
the firm's existing assets. CFROI is essentially another form of IRR measure. The key difference between IRR
and CFROI is that, in CFROI, cash flows and investment are stated in constant monetary units, which overcomes
deficiencies of the traditional return on investment methods.
The following section discusses a few models for maximizing shareholders' wealth. Management focused on
maximizing shareholders' wealth is referred to as value-based management. The models being discussed are
• Marakon model
• Alcar model
• McKinsey model.
MARAKON MODEL
The Marakon model was developed by Marakon Associates, a management consulting firm known for its work
in the field of value-based management. According to this model, a firm's value is measured by the ratio of its
market value to its book value. An increase in this ratio depicts an increase in the value of the firm, and a
reduction reflects a reduction in the firm's value. The model further states that a firm can maximize its value by
following these four steps:
• Understand the financial factors that determine the firm's value
• Understand the strategic forces that affect the value of the firm
• Formulate participation and competitive strategies that build value
• Create internal structures to counter the divergence between the shareholders' goals and the
management's goals.
These steps are discussed in turn below.
Financial Factors
The first step in this model is to identify the financial factors that affect the value of the firm. The model states
that a firm's market value to book value ratio, and hence, its value depends on three factors - return on equity,
cost of equity, and growth rate. This conclusion is drawn indirectly from the constant growth dividend discount
model.
P0 = D1 / (k − g)

Further, D1 = B × r × b, where B is the book value per share, r is the return on equity, b is the dividend
payout ratio, k is the cost of equity and g is the growth rate. Hence,

P0 = (B × r × b) / (k − g)

P0 / B = (r × b) / (k − g)

Since g = r (1 − b), it follows that r × b = r − g, and therefore

M / B = P0 / B = (r × b) / (k − g) = (r − g) / (k − g)
Thus, a firm's market value to book value ratio can be derived from its return on equity, its cost of equity and its
growth rate. It can be observed from the formula that
1. A firm's market value will be higher than its book value only if its return on equity is higher than its
cost of equity. This is supported by the other theories of valuation of equity.
2. When the return on equity is higher than the cost of equity, the higher a firm's growth rate, the
higher its market value to book value ratio.
Hence, a firm should have a positive spread between the return on equity and the cost of equity, and a high
growth rate, in order to create value for its shareholders.
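A small sketch of the market-to-book relation derived above, M/B = (r − g)/(k − g); the input values are assumptions chosen only to illustrate the two observations.

# M/B = r x b / (k - g) = (r - g) / (k - g), since g = r x (1 - b)
def market_to_book(r, k, g):
    # r: return on equity, k: cost of equity, g: growth rate (decimals, g < k)
    return (r - g) / (k - g)

print(round(market_to_book(r=0.18, k=0.14, g=0.08), 2))   # r > k: M/B = 1.67, above book value
print(round(market_to_book(r=0.14, k=0.14, g=0.08), 2))   # r = k: M/B = 1.00
print(round(market_to_book(r=0.10, k=0.14, g=0.08), 2))   # r < k: M/B = 0.33, below book value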
Strategic Forces
The financial factors that affect a firm's value are in turn affected by some strategic forces. The two important
strategic factors that affect a firm's value are market economics and competitive position. The market
economics determines the trend of the growth rate and the spread between the return on equity and the cost of
equity for the industry as a whole. The firm's competitive position in the industry determines its relative rate of
growth and its relative spread. The following figure illustrates the effect of the strategic factors on the firm's
value.
Market economics refers to the forces that affect the prospects of the industry as a whole. These include
• Number of suppliers
• Kinds of regulations
• Customers' influence.
Competitive Position refers to a firm's relative position within the industry. A firm's relative position is affected by
its ability to produce differentiated products and its economic cost position. A product can be referred to as a
differentiated product when the consumers perceive its quality to be better than the competitive products and
are ready to pay a premium for the same. The firm can benefit from a differentiated product in two ways. It may
either increase its market share by pricing it competitively, or can command a higher price for its product than
its competitors, and forego the higher market share. Thus, the ability to produce differentiated products
improves a firm's relative position vis-a-vis its competitors. The other factor that helps a firm enjoy a strategic
advantage over its competitors is a low per unit economic cost. Economic costs include operating costs and the
cost of capital employed. A low economic cost may result from a number of factors like
• Better management
Strategies
Once a company has identified its potential growth prospects and analyzed its strengths and weaknesses, it
needs to develop strategies that would help it utilize its strengths and underplay its weaknesses, thus achieving
the maximum possible growth and creating value. For achieving this objective two kinds of strategies are
required - participation strategy and competitive strategy.
A company, to create value for its shareholders, has either to operate in an area where the market economics
are favorable, or to produce those products in which it can enjoy a highly competitive position. The strategy
that specifies the broad product areas or businesses in which a firm is to be involved is referred to as its
participation strategy. At the level of a business unit, this strategy outlines the market areas (in terms of the
geographical areas, the high-end market or the low-end market, the level of quality and differentiation to be
offered) to be entered.
The strategy on the preferred markets is followed by the competitive strategy, which specifies the plan of action
required for achieving and maintaining a competitive advantage in those markets. It includes deciding the way
of achieving product differentiation, the method for utilizing the differentiation so created (i.e. by increasing the
price of the product or the market share) and the means of creating an economic cost advantage.
Internal Structures
The separation of ownership and management in the traditional manner results in the management bearing all
the risks associated with value-adding decisions, without their enjoying any of the benefits. This often results in
the management taking sub-optimal decisions. A firm needs internal structures which can control this tendency
of the management. These may include
• Corporate governance mechanisms that specify responsibilities and hold managers accountable for their
decisions
• Resource allocation among projects guided by the specific requirements of the projects rather than the past
allocations and capital rationing
• A mechanism for making sure that the various projects undertaken form part of a strategy, rather than being
disjointed, discrete projects
• Plans made in accordance with the long-term goals, and target performance fixed in accordance with these
plans, rather than the level of achievable targets determining the plans. Performance targets should be a
function of the plans, rather than being the base for the plans.
• Target performance, when achieved, rewarded with the promised incentives. Non-fulfillment of such
promises affects future performance.
ALCAR MODEL
The Alcar model, developed by the Alcar Group Inc., a company into management education and software
development, uses the discounted cash flow analysis to identify value adding strategies. According to this
model, there are seven 'value drivers' that affect a firm's value. These are
• Rate of sales growth
• Operating profit margin
• Income tax rate
• Incremental investment in working capital
• Incremental investment in fixed capital
• Value growth duration
• Cost of capital.
Value growth duration refers to the time period for which a strategy is expected to result in a higher than normal
growth rate for the firm. The first six factors affect the value of the strategy for the firm by determining the cash
flows generated by a strategy. The last term, i.e. the cost of capital, affects the value of the strategy by
determining the present value of these cash flows. The following figure represents the Alcar approach.
According to the model, a strategy should be implemented if it generates additional value for a firm. For
ascertaining the value generating capability of a strategy, the value of the firm's equity without the strategy is
compared to the value of the firm's equity if the strategy is implemented. The strategy is implemented if the
latter is higher than the former. The following steps are undertaken for making the comparison.
Figure 2.3: The Alcar Approach
The present value of the expected cash flows of the firm is calculated using the cost of capital. The cash flows
should take the firm's normal growth rate and its effect on operating flows and additional investment in fixed
assets and working capital into consideration. The cost of capital would be the weighted average cost of the
various sources of finance, with their market values as the weights. The value of the equity is arrived at by
deducting the market value of the firm's debt from its present value.
The firm's cash flows are calculated over the value growth duration, taking into consideration the growth rate
generated by the strategy and the required additional investments in fixed assets and current assets. These
cash flows are discounted using the post-strategy cost of capital. The post-strategy cost of capital may be
different from the pre-strategy cost of capital due to the financing pattern of the additional funds requirement, or
due to a higher cost of raising finance. The PV of the residual value of the strategy is added to the present
value of these cash flows to arrive at the value of the firm. The residual value is the value of the steady
perpetual cash flows generated by the strategy, as at the end of the value growth duration. The post-strategy
market value of debt is then deducted from the value of the firm to arrive at the post-strategy value of equity.
The value of the strategy is given by the difference between the post-strategy value and the pre-strategy value
of the firm's equity. A strategy should be accepted if it generates a positive value.
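A highly simplified sketch of the comparison described above. The cash flows, value growth duration, costs of capital and debt values are hypothetical; the point is only the mechanics of comparing pre-strategy and post-strategy equity values.

# Value of a strategy = post-strategy value of equity - pre-strategy value of equity.
def present_value(cash_flows, rate):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

# Pre-strategy: 5 years of steady cash flows plus a residual perpetuity, at the current cost of capital.
pre_rate = 0.12
pre_firm_value = present_value([100] * 5, pre_rate) + (100 / pre_rate) / (1 + pre_rate) ** 5
pre_equity = pre_firm_value - 300                      # less market value of existing debt (assumed)

# Post-strategy: growing cash flows (net of additional investment) over a 5-year value growth
# duration, a residual perpetuity thereafter, and a post-strategy cost of capital.
post_rate = 0.13
post_cash_flows = [90, 105, 120, 135, 150]
residual = (150 / post_rate) / (1 + post_rate) ** 5
post_firm_value = present_value(post_cash_flows, post_rate) + residual
post_equity = post_firm_value - 380                    # less post-strategy market value of debt (assumed)

strategy_value = post_equity - pre_equity
print(round(strategy_value, 1), "accept" if strategy_value > 0 else "reject")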
MCKINSEY MODEL
The McKinsey model, developed by leading management consultants McKinsey & Company, is a
comprehensive approach to value-based management. It focuses on the identification of key value drivers at
various levels of the organization, and places emphasis on these value drivers in all the areas, i.e. in setting up
of targets, in the various management processes, in performance measurement, etc. According to Copeland,
Koller and Murrin, value-based management is "an approach to management whereby the company's overall
aspirations, analytical techniques, and management processes are all aligned to help the company maximize its
value by focusing management decision-making on the key drivers of value". According to this model, the key
steps in maximizing the value of a firm are as follows:
A firm may have many conflicting goals like maximization of PAT, maximization of market share, achieving
consumer satisfaction, etc. The first step in maximizing the value of a firm is to make value maximization the
most important goal for the organization; it is generally reflected in maximized discounted cash flows. The other
goals that a firm may have are usually consistent with the goal of value maximization, but in case of a conflict,
value maximization should prevail over all other objectives.
The important factors that affect the value of a business are referred to as key value drivers. It is necessary to
identify these variables for value-based management. The value drivers need to be identified at various levels
of an organization, so that the personnel at all levels can ensure that their performance is in accordance with
the overall objective. The other objectives of a firm mentioned above may act as value drivers at some level of
the organization. For example, degree of innovation in products may be identified as the value driver for the
design department. The three main levels at which the key value drivers need to be identified are
• The generic level: At this level, the variables that reflect the achievement or non-achievement of the value
maximization objective most directly are identified. These may be the return on capital employed or
operating margin or the net profit margin, etc.
• The department level: At this level, the variables that guide the department towards achieving the overall
objective are identified. For example, for the sales department, the key value drivers may be achieving the
optimum product mix, maximizing market share, etc.
• The grass roots level: At the grass roots level, the variables that reflect the performance at the operational
level are identified. These may be the level of capacity utilization, cost of managing inventory, etc.
Development of Strategy
The next step is to develop strategies at all levels of the organization, which are consistent with the goal of
value maximization, and lead to the achievement of the same. The strategies should be aimed at, and give
direction for, the achievement of the desired levels of the key value drivers.
Setting of Targets
Development of strategies is followed by setting up of specific short-term and long-term targets. These should
be specified in terms of the desirable level of key value drivers. The short-term targets should be in tune with
the long-term targets. Similarly, the targets for the various levels of the organization should be in tune. They
should be set both for financial as well as non-financial variables.
Once the strategy is in place and the targets have been determined, there is a need to specify the particular
actions that are required to be undertaken to achieve the targets in a manner that is consistent with the
strategy. At this stage, the detailed action plans are laid out.
The future performance of personnel is affected, to a large extent, by the way their performance is measured.
Hence, it is essential to set up a precise and unambiguous performance measurement system. A performance
measurement system should be linked to the achievement of targets and should reflect the characteristics of
each individual department.
Chapter 3
Strategic Wage Management
Preparation and Payment of Wages and Accounting
When wages are paid on the basis of time, Clock Cards form the basis of preparation of the Payroll. On the
other hand, when payments are made on the basis of results, Piece Work Cards form the basis of preparation
of the Payroll or Wages Sheet.
It is desirable that a separate Payroll or Wages Sheet is prepared for each cost centre or department. This will
serve three purposes, viz.,
(b) The departmental labour rate can be calculated for each department, and
(c) The actual wages of a department can be compared with the budgeted wages so as to pin down
responsibility.
From the gross wages, certain deductions are made to ascertain the net amount payable to the workers. For
instance, under the Payment of Wages Act, 1936, some of the authorized heads of deductions are:
3. Income-tax.
4. Provident Fund.
When the Wages Sheets are completed, they are passed to the cashier for payment. The cashier then makes
arrangement for paying out wages.
Prevention of fraud in wage payment: One of the problems associated with wage payment is the possibility of
fraud perpetrated by workers. The following types of fraud are most commonly seen:
2. Inclusion of wrong hours when payment is done on the basis of time or overstatement of work done
when payment is made on the basis of results.
4. Inclusion of overtime, bonus, etc. not entitled or due, or overstatement of the amount due.
5. Deliberate absenteeism on the date of wage payment to claim fraudulent payment later.
To prevent fraud in payment of wages, a number of steps should be taken. These are:
1. Payment of wages in a factory is to be made preferably at the same time and in all the
departments/sections in the presence of the departmental/sectional heads. All payments should be made
only on proper identification.
2. Attendance time should be reconciled with time booked and lost time. This will help in detecting
attendance of a dummy worker fraudulently marked in the time card.
3. The rate of wages (time or piece basis) should be verified from relevant schedule of wage rates. Any
change in the rate should be incorporated in the schedule only when it is approved by a responsible officer.
4. There should be proper authorization in advance for overtime work. Actual hours worked should not
exceed those authorized. Similarly, payment for idle time, scrap and defective production should be made
only on proper authorization. Payment for incentives should be made only on the basis of a certificate
issued and initialled by the inspector.
5. Certain necessary safeguards within the wages section are to be taken. For instance, those who
check the Clock Cards should not be concerned with the preparation of the payroll and those who do that
work should not be concerned with making up the pay, or in paying out the wages. Further, all calculations
of payroll should be verified by another clerk. The payroll should be signed by the individuals on preparation
and verification.
6. Unclaimed wages should be paid on particular dates under strict supervision. All payments in this
respect should be made after proper scrutiny of the reason for not drawing wages on the payment day.
7. The exact amount of wages should be drawn from the bank and each individual should be paid his
exact amount. Before handing over the pay packet, it should be recounted by another individual.
8. Outstation workers should be paid by the staff from Head or Main Cash office.
An analysis of the wages to the main control accounts is essential for accounting purposes. For this purpose, it
is necessary to make use of a Wages Analysis Book.
The above analysis is necessary for accounting. The deduction accounts relate to the credit side for use in an
integral system of accounts.
From Figure 3.9 it will be seen that provision is made for entering the wages by departments, and extending
them to the Work-in-Progress Control Account, Factory Overhead Control Account, Administration Overhead
Control Account, and Selling and Distribution Overhead Control Account. The documents necessary for
compiling these extensions are:
(1) Payroll
Idle Facilities
Idle facilities refer to facilities of plant, machinery and other resources which are available but not being utilized.
It is not possible to work a machine for all the available time. Idle facilities may be unavoidable or avoidable.
Unavoidable idle facilities represent the difference between the maximum capacity and the budgeted or standard
capacity expected. Avoidable idle facilities represent the difference between the budgeted or standard capacity
expected and the aggregate of actual time booked and idle time.
Wages Analysis Sheet
No. ...................                                   Week Ending ...................
The sheet provides, for each job (Job No. 10, 11, 13, 15 and 16), columns for Clock No., Hrs. and Amount,
followed by a Summary section with columns for Job No., Cost Ledger Folio, Hrs. and Amount. One row is
provided for each of Job Nos. 10, 11, 13, 15 and 16, followed by a Total row.
This type of analysis is done with the help of Job Cards and Idle Time Cards. The total shown in the summary
column must agree with the direct wages on the Payroll. The total will be posted to the Work-in-Progress Account
and the Factory Overhead Control Account via the Wages Analysis Book.
Figure 3.1: Accounting of Labour Cost (diagram)
While accounting for wages in the Cost Ledger, it is important to segregate the cost into direct and indirect, the
direct labour cost being charged to prime cost while indirect labour is included in product cost as overhead
(production, administration, selling and distribution, as the case may be) on some equitable basis. (For details
of the accounting procedure see Chapter 7 on Cost Control Accounts.)
Illustration 3.1
Machine capacity: 48 hours per week
No. of working weeks in a year 50
Anticipated working hours: 80% of maximum possible hours
Actual hours worked: 1,800
Idle time (hours):
  Waiting for instructions   40
  Waiting for materials      20
  Machine breakdown          30
  Total                      90
Idle facilities may be calculated as follows:
Maximum possible capacity in the year (48 x 50)              = 2,400 hours
Budgeted or standard capacity expected for the period
  (2,400 x 80%)                                              = 1,920 hours
(i) Unavoidable idle facilities (2,400 - 1,920)              =   480 hours
(ii) Avoidable idle facilities:
    Standard capacity expected                                 1,920 hrs.
    Less: Actual hours recorded        1,800
          Add: Idle time                  90                   1,890 hrs.
    Avoidable idle facilities (1,920 - 1,890)                =    30 hours
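The calculation in Illustration 3.1 can be reproduced with a short sketch:

# Idle facilities calculation from Illustration 3.1
max_capacity = 48 * 50                       # 2,400 hours in the year
budgeted_capacity = max_capacity * 0.80      # 1,920 hours (80% of maximum)
actual_hours = 1800
idle_time = 40 + 20 + 30                     # instructions + materials + breakdown = 90 hours

unavoidable = max_capacity - budgeted_capacity                 # 480 hours
avoidable = budgeted_capacity - (actual_hours + idle_time)     # 1,920 - 1,890 = 30 hours
print(unavoidable, avoidable)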
The main point of distinction between idle facilities and idle time is that the former relates to the idleness of
plant, machinery and other facilities, while the latter relates to the idleness of labour.
Treatment in cost: The cost of idle facilities will include part of the standing or fixed charges relating to the
machine, and a share of general overhead. The labour cost of the operator may be excluded on the assumption
that the operator has worked on another machine during the hours the machine was available for work.
The cost of idle facilities for reasons such as trade depression, shortage of demand, etc. should be written off to
Costing Profit and Loss Account. The remaining portion of the cost of idle facilities should be included in the
works overhead.
Idle Time
Idle time may be defined as the time during which no production is obtained although wages are paid for that
period. In other words, it denotes payment made to a worker for a period during which he remains 'idle' and
does no work. This is represented by the difference between the time as per the attendance records and the
time booked to the various jobs or work orders. The various causes that lead workers to sit idle may be grouped
under three broad heads:
I. Productive Causes which arise in the course of production, e.g., waiting for work, tools, materials or
instructions, power failure and machine breakdown.
II. Administrative Causes which arise out of administrative decisions, e.g., when there is surplus capacity of
plant and machinery which the management decides not to work, there may be some idle time. This is
represented by idle facilities.
III. Economic Causes, e.g., stoppage of production due to non-availability of raw materials, fall in demand, etc.
Some of the causes mentioned are controllable internally while others are beyond the control of management.
Therefore, from the standpoint of controllability, idle time may be of two types:
(i) controllable, i.e., idle time due to many of the productive causes is subject to control internally.
(ii) uncontrollable, e.g., idle time arising out of economic and administrative causes.
Treatment in costs: The cost of idle time includes wages of operators for 'lost hours', proportion of machine
standing charges and general overheads. The unproductive labour element is charged to a special standing
order number, or to a series of them, in order to analyze the cost by causes. The treatment of idle time in costs
is as follows:
(a) Cost for normal and controllable idle time: The costs should be segregated under separate
standing order numbers and charged to Factory Overhead. When responsibilities can be identified with a
department, they should be included in the departmental overhead.
(b) Cost for normal but uncontrollable idle time: Such a cost may be merged with the wages of the
workers. As a result of merging the idle time cost, the wage rate of the workers gets inflated.
(c) Cost of abnormal and uncontrollable idle time: This represents the cost of idle time for such
reasons as strike, lockouts, fire, shortage of demand, etc. This should be charged directly to Costing Profit
and Loss Account. The object behind this is to keep the cost structure more or less comparable at different
times and not to allow this to be disturbed by any unforeseen contingencies.
Control of idle time: For effective control, each type of idle time should be allotted a separate Standing Order
Number and bookings should be made against each of them; for example, separate standing orders may be
raised for each of the causes discussed below.
Idle time due to productive causes is more or less subject to internal control. The procedure in this respect
may be outlined as below:
(a) Waiting for work: All the jobs in hand should be properly planned so that machines can always take
up the jobs in sequence and workers do not have to wait for them.
(b) Waiting for tools, etc.: A considerable amount is spent on idle time due to waiting for tools
and/or materials. This can be prevented by ensuring a proper stores control and tool scheduling system.
(c) Waiting for instructions: Idle time due to waiting for instructions can be prevented if the production
control department issues clear instructions to the workers as to how to handle the job in sequence. The
instructions and drawings should be clearly laid down for all jobs taken in hand.
(d) Power failure: It may be due to internal causes, such as improper inspection and maintenance of
power plant, breakdown of the transmission wires or due to external reasons like failure from the main
power supply station. Idle time due to internal power failure may be reduced by proper inspection
and maintenance of the power plant, transmission wires, etc. But power failure due to external reasons, e.g.,
load shedding, is generally uncontrollable.
(e) Machine breakdown: Machine breakdown can be prevented by keeping a proper maintenance
system. In other words, a routine check of all the machines at periodic intervals normally prevents any
major breakdown.
Although the above items can be controlled by proper planning, some amount of idle time is bound to occur due
to the time taken in changing from one job to another, setting up the tools for a different job when the previous
one is complete. Therefore, it is advisable to prepare a report showing the analysis of lost time so that action
may be taken to control idle time where necessary. The report will enable the management to locate the
persons or departments responsible for any controllable lost time and to take effective remedial actions. A
suggested specimen of such a report of an engineering firm is given in Figure 3.12.
Overtime
Generally, overtime is paid at a higher rate than the normal time. The additional amount expended on overtime
work is known as overtime premium. The normal wages paid form part of direct labour cost while there is
considerable controversy as regards treatment of overtime premium. Before dealing with the treatment of
overtime premium, it is, therefore, necessary to give consideration to the circumstances under which overtime
work is generally required. They are:
(i) To complete a work or job within a specific date as requested by the customer.
(ii) To make up time lost due to breakdown of machinery, power failure or for any other unavoidable
reason,
(iii) To work as a matter of policy due to labour shortage or for any other reason.
In case of (i), the overtime premium should be charged directly to the job concerned and treated as direct
wages. In case of (ii), the premium paid should be treated as an excess cost and, therefore, should be kept
out of prime cost. In other words, it should be treated as overhead, which would be allocated to and recovered
from jobs completed during the period. In case of (iii), the premium paid may be treated as part of labour cost
by spreading the overtime premium over the various jobs completed. This may be done with the help of an
average rate calculated by dividing the total wages payable by the total clock hours worked. The logic behind
this is that jobs should not show disproportionate labour cost only because they are produced at different times,
e.g., usual working hours, evening overtime, holiday overtime, etc.
When overtime is worked on account of abnormal conditions, such as strike, flood, etc., the premium payable
should be charged to Costing Profit and Loss Account. For overtime on capital works, e.g., installation of
machinery, the entire cost of overtime should be charged to the Capital Order.
Usually, holiday work is paid at a higher rate than a normal day's wages. Such additional payment is allocated to
overhead, like the overtime premium. However, if there is a special circumstance as visualized in the case of
overtime (i.e., to meet the requirements of the customer), the additional amount is charged directly to the job
concerned.
The contribution made by the employer to the Employees' State Insurance Corporation may be treated as
follows: in contract or process costing, it is possible to treat it as a direct charge, while in the case of a general
engineering works engaged on jobbing work, the amount contributed by the employer may be treated as
general overhead.
Learner's Wages
Generally, a worker takes more time to do a job during his training period than a trained worker. Therefore, in
order to avoid loading the job with excess labour cost, half of his wages may be charged to the job direct while
the other half allocated to overhead. However, when the wages cannot be identified with a job, they should be
treated as overhead. In many organizations, learners' wages are, as a matter of policy, treated as training cost
which forms part of overhead.
Dearness Allowance and Other Allowances
Dearness allowance is paid to the worker in addition to basic wages to cover the increased cost of living. When
a worker cannot be provided with factory quarters, house rent allowance is also paid. Sometimes, compensatory
allowance is paid to workers for natural hardship in a locality. All these payments are made with the idea of
keeping the basic pay structure of the workers unaltered. Payments made on account of dearness allowance,
etc. are treated in the accounts as follows:
1. Charge directly to the work on which a worker is engaged. In other words, if the payment made to each
worker and the work done by him are identifiable, the job or work order (in the case of direct workers) or
standing order number (in the case of indirect workers) is charged directly.
2. Charge to general overhead. (Separate standing order numbers are to be used for booking each type
of allowance.) Alternatively, it may be recovered as overhead by means of a separate percentage on basic
wages.
Fringe Benefits
These are payments for which direct efforts of the workers are not necessary. Fringe benefits, therefore,
include:
(i) Leave and sick pay;
(ii) Holiday pay;
(iii) State insurance and medical benefits;
(iv) Attendance bonus and shift allowance;
(v) Pension provision, retirement allowance, employer's contribution to provident fund; and
(vi) Other costs representing a present or future return to an employee, which are neither deducted on the
payroll nor paid for by the employee.
The cost of fringe benefits is included in the departmental overheads when department-wise identification is
possible. If not, it should form part of general overheads. Separate standing order numbers should be used for
each type of fringe benefits.
Problems and Solutions
Calculate the normal and overtime wages payable to a workman from the following data:
Overtime rate: up to 9 hours in a day at single rate and over 9 hours in a day at double rate; or up to 48 hours
in a week at single rate and over 48 hours at double rate, whichever is more beneficial to the workman.
Overtime wages:
  At double rate: 3 hours @ Rs. 2 = Rs. 6
  Total overtime wages            = Rs. 10
Total wages                       = Rs. 54
A company's basic wage rate is £0.45 per hour and its overtime rates are: evenings, time and one-third of the
basic rate; weekends, double the basic rate.
You are required to calculate the labour cost chargeable to each job in each of the following circumstances:
(a) Where overtime is worked regularly throughout the year as company policy due to labour shortage.
(b) Where overtime is worked irregularly to meet spasmodic production requirement.
(c) Where overtime is worked specifically at the customer's request to expedite delivery.
Solution
                                                     Per hour £
Basic rate                                              0.45
Evening rate: 0.45 + (1/3 x 0.45)                       0.60
Weekend rate: 2 x 0.45                                  0.90

Particulars           Hours worked    Rate per hour (£)    Wages paid (£)
Normal time              4,40,000            0.45              1,98,000
Evening overtime           40,000            0.60                24,000
Weekend overtime           20,000            0.90                18,000
Total                    5,00,000                              2,40,000

Thus, average wage rate = £2,40,000 / 5,00,000 hours = £0.48 per hour.
(a) Since overtime is worked regularly throughout the year as a matter of company policy due to labour shortage, jobs completed during overtime (evening or weekend) should not be overloaded by charging more, while those completed during normal time should not be under-loaded. This necessitates the application of the average wage rate of £0.48 per hour to all jobs.
(b) In this case, overtime is an abnormal and irregular feature and, therefore, the overtime premium should be treated as production overhead, while the jobs should be charged at the basic rate only.
(c) Since overtime is worked specifically at the customer's request to expedite delivery, the job concerned should be charged at the basic rate plus the full overtime premium.
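As a rough illustration of these treatments, the sketch below applies them to a hypothetical job; the job's hours and its overtime split are assumed, since no job-level data is given in the problem.

```python
# Labour cost chargeable to a job under the three circumstances (sketch).
# The job's hours and its overtime split are assumed for illustration only.
BASIC, EVENING, WEEKEND, AVERAGE = 0.45, 0.60, 0.90, 0.48   # £ per hour

normal_hrs, evening_ot_hrs, weekend_ot_hrs = 100, 10, 5      # hypothetical job
total_hrs = normal_hrs + evening_ot_hrs + weekend_ot_hrs

# (a) Overtime worked regularly as company policy: charge every hour at the
#     inflated average rate of £0.48.
cost_a = total_hrs * AVERAGE

# (b) Overtime worked irregularly: charge the job at basic rate only; the
#     overtime premium goes to production overhead.
cost_b = total_hrs * BASIC
premium = evening_ot_hrs * (EVENING - BASIC) + weekend_ot_hrs * (WEEKEND - BASIC)

# (c) Overtime worked at the customer's request: the job bears the basic-rate
#     cost plus the full overtime premium.
cost_c = cost_b + premium

print(round(cost_a, 2), round(cost_b, 2), round(cost_c, 2))   # 55.2 51.75 55.5
```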
Methods of Remuneration
Labour is one of the four factors of production. Remuneration for labour is wages, just as remuneration for capital is interest, for land rent, and for organization profit. Both direct and indirect labour employed in an organization
will have to be paid remuneration for the services rendered by them. Selection of a right person for the right job,
as we have seen in the previous Section, is crucial to labour cost management. The amount of remuneration or
wages payable to each of the employees depends on a number of factors. The terms of employment generally
specify the rate or scale of pay and other allowances payable to workers. In the modern industrial enterprise of
mass production, a worker's wages are based upon job evaluation, negotiated labour contracts, profit-sharing,
incentive and wages plans, etc. In this Section, we discuss methods of remuneration by grouping them under
two main headings, viz. (1) Time basis, e.g., by hour, day or week, (2) Results basis, e.g., straight piecework,
differential piecework. Besides all these, there are monetary and non-monetary incentive schemes which are
also discussed.
Low wages do not necessarily mean a low cost of production. On the other hand, high wages may ultimately
result in low cost of production. This is achieved in two ways:
1. High wages induce workers to produce more. Increase in productivity will result in lower labour cost
per unit.
2. Because of the greater number of units produced, the unit fixed cost will also tend to come down.
Further, in underdeveloped countries, one of the main needs of modern days is to raise the standard of living.
This again requires the larger output of a number of consumer goods, which by chain action needs increased
output all round. Sufficiency in production will also help to check inflation.
FACTORS TO BE CONSIDERED
The following factors must be given due consideration before selecting a system of payment.
(a) Simplicity: Unless the wage system is understood by the workers, the fullest advantage cannot be
obtained out of it. Therefore, the wage system should be simple and capable of being understood by
workers of average intelligence. Simplicity from the point of view of analysis and recording in the Cost
Accounts may also be considered.
(b) Quantity and quality of output: If quantity is more important than quality, the method of
remuneration should be such that it encourages increased production. On the other hand, when quality is
more important, wage payments should be preferably based upon time rather than on production
quantities.
(c) Incidence of overhead: In large manufacturing enterprises, heavy expenses of indirect nature
(overhead) are incurred. A major portion of overhead is again fixed, that is to say, it remains constant even when the volume of production fluctuates within a range. It should be emphasized that an increased
volume of production results in lower unit fixed cost whereas a decrease in production results in
increased cost of production per unit of output. Consequently, the factor 'incidence of overhead' is of
outstanding significance and lies at the basis of all schemes of remuneration. In this connection, two
things should be considered:
(a) Effect upon workers: High wages will attract efficient workers from outside, and retain those who are
already in employment; so the cost of labour turnover is less. The role of workers' union should also be
assessed and it should be taken into consideration in selecting the wage system.
(b) Statutory provisions: There may be legislative measures to protect the right of wage earners and to
emphasize managerial obligations in this regard. However, the government legislation, if any, generally sets
the floor, i.e., the minimum wages payable under a given situation. This aspect should not also be lost sight
of.
A good system of wage payment should satisfy the following requisites:
1. It should be based upon scientific time and motion study to ensure a fair output and a fair remuneration.
3. The wages should be related to the effort put in by the employee. It should be fair to both the
employees and employer.
4. The scheme should be flexible to permit any necessary variations which may arise.
5. There must be continuous flow of work. After completing one piece, the workmen should be
able to go over to the next without waiting.
6. After a certain stage, the increase in production should yield a bonus at a decreasing rate so as to discourage very high production which may involve heavy rejections.
7. The scheme should aim at increasing the morale of the workers and reducing labour
turnover.
8. The scheme should not be in violation of any local or national trade agreements.
9. The operating and administrative cost of the scheme should be kept at a minimum.
METHODS OF REMUNERATION
For convenience, the various methods of remuneration may be broken down into the following main heads:
1. Time Rates
2. Piece Rates
5. Group Bonuses
6. Others
The different methods included in each of the above groups can be diagrammatically shown. Various systems
may now be considered in greater detail.
The general characteristic of all the time rate systems is that the workers do not get anything beyond their time
wages, i.e., Time x Rate. It is the employer who may gain from the extra efficiency of his workers or lose due to their inefficiency. We discuss below the features of three time rate systems.
(a) Time rate at ordinary levels: Under this method, payment is made on the basis of time which may be
hour, day, week or a month. The rate of pay should not be less than that prescribed by a tribunal, wage
board award or by the Government through Payment of Minimum Wages Act. When payment is made on
the basis of hours worked by the employees, wages are to be calculated as follows: Wages = Hours worked x Rate per hour.
(b) Time rate at high wage levels: This system is similar to the previous one except that the day rates are
made high enough, so that in return a much higher standard of performance from the workers is ensured.
Henry Ford was of the opinion that time rates at high wage levels are as effective as other incentive plans. The features of a high-wages plan may be summarized below:
1. The hourly rate is higher than normal wage for the industry.
2. Standards of performance are set and there is stricter supervision to ensure the attainment of the
standards. The standards set should be capable of being accomplished by an efficient worker.
(c) Graduated time rates: Under this method, wages are paid at time rates which vary with changes in the local cost of living index.
In India, the basic wage rates normally remain fixed and it is the dearness allowance that varies with the cost of
living. Sometimes, wage rates are adjusted with changes in the selling price of the product.
There are many circumstances in which time rate systems are suitable. They are:
1. Where the work demands a high degree of skill and quantity of production is less important,
e.g., tool-making, machine manufacturing, watch-making, etc.
2. Where it is difficult to measure the work done by workers. This is applicable in case of indirect
workers such as supervisors, cleaners and sweepers, night watchmen, etc.
3. Where machine performs the job and the workers have no control over the work, e.g., in
process industries the flow of work is regulated by the speed of the conveyor belt.
5. Where work is of such a nature that efficiency can be ensured by close supervision.
6. Where worker does a work in his own interest, e.g., construction of accommodation.
Advantages
2. The workers are more or less certain about the amount they will earn so long as they remain in employment. This leads to a contented body of workers, which in turn improves the employer-employee relationship.
3. For precision work (e.g., tool-making and pattern-making) where care is more important than
speed, the time rate systems will help in maintaining quality of products.
Disadvantages
1. Since the workers are certain about their wages, they may not care to improve their efficiency
to increase production. In other words, the workers tend to adopt 'go-slow' tactics. This leads to higher
cost of production inasmuch as more time means more labour cost and consequently more overheads.
2. Efficient workers' efforts are not rewarded. This will lead to frustration of efficient workers and
consequently more labour turnover.
Systems based on work are otherwise known as piece rate systems. According to these systems, the extent or
volume of work done forms the basis for determination of the wages payable to the workers. It is paid at a
certain rate per unit produced or job performed or operation completed irrespective of the duration of time taken
by the workers. Generally, workers stand to gain or lose depending on the level of efficiency they attain relative to the standard. The slogan may be "produce more and earn more".
The advantages and disadvantages of the piece rate systems in general may be summarized as follows:
Advantages
1. Workers are paid only for the work they have done. Thus, the employer does not stand to
lose anything because of variation in the efficiency of the workers.
2. In their bid to earn more, workers will try to adopt better and more efficient methods in order
to increase production. As a result, the general dexterity and skill of the workers are enhanced.
3. Because of (2), a larger output will generally result. This will, in turn, lead to reduction in cost
and a greater margin of profit.
4. Change-over time, wasted time, etc. are not paid for, as the payment is made only for the
turnover of work and consequently idle time will be reduced to minimum.
5. Cost ascertainment becomes simplified to some extent because exact cost of labour for each
unit is available.
6. The operation of piece rate wage system provides a sound basis for standard costing and
production control, e.g., for ascertaining rates, a very careful time-study is necessary.
Disadvantages
1. The workers will always try to produce more to earn more. Where quality of the product is no less
important than the quantity, the increase in production may be achieved at the cost of quality.
2. Increased production does not necessarily mean reduced cost. For instance, if increase in production
is effected through more wastage of material, high tool cost, high cost of inspection and quality control, the
ultimate cost of production will be higher. Therefore, payment on the basis of piece rate system may
induce the workers to increase production disregarding all this which will affect costs adversely.
3. Over-strain on the part of workers will cause frequent absenteeism and bad health.
4. The fixation of piece rates on the basis of standard time requires a considerable amount of work at the outset and also during the operation of the scheme.
5. If day wages are not guaranteed, the workers stand to lose when there is no work. Thus, if the flow of work cannot be maintained, opposition from the workers is bound to come.
(a) Straight Piece Rate: Under this method, payment is made on the basis of a fixed amount per unit or per fixed number of units produced, without regard to the time taken. Thus, Earnings = Units produced x Rate per unit. The piece rate is fixed after considering:
(i) the comparable time rate for the same class of workers, and
(ii) the expected output in a given time.
The piece rate is usually fixed with the help of work study. Standard time for each job is ascertained first. The piece rate is then ascertained with reference to the hourly or daily rate of pay.
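A minimal sketch of the straight piece rate calculation is given below; the hourly rate of Rs. 5 and the expected output of 100 units per hour are assumed figures (they happen to match Illustration 3.4), not data taken from Illustration 3.3.

```python
# Straight piece rate (sketch): the rate is derived from a comparable time
# rate and the expected output; earnings depend only on units produced.
# The Rs. 5/hour and 100 units/hour figures are assumed for illustration.
def piece_rate(hourly_rate, expected_units_per_hour):
    return hourly_rate / expected_units_per_hour

def straight_piece_earnings(units_produced, rate_per_unit):
    return units_produced * rate_per_unit

rate = piece_rate(5.0, 100)                 # -> Re. 0.05 per unit
print(straight_piece_earnings(750, rate))   # 750 units -> Rs. 37.5
```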
Illustration 3.3
(b) Piece Rates with graduated time rates: Under this system, workers are paid minimum
wages on the basis of time rates. A piece rate system with graduated time rate may include any one of the
following:
(i) If earning on the basis of piece rate is less than the guaranteed minimum wages, the
workers will be paid on the basis of time rate. On the other hand, if earning according to piece rate is
more, the workers will get more.
(ii) Guaranteed wages according to time rate plus a piece rate payment for units above a
required minimum,
(iii) Piece rate with a fixed dearness allowance or cost of living bonus.
(c) Differential Piece Rates: Under this system, there is more than one piece rate to reward
efficient workers and to encourage the less efficient workers or a trainee to improve. In other words,
earnings vary at different stages in the range of output. This scheme was first introduced in the U.S.A. by F.W. Taylor, the father of scientific management, and was subsequently modified by Merrick. These are now discussed below.
(i) Taylor Differential Piece Rate System: In the original Taylor differential system, piece rates were determined
by time and motion study. Day wages were not guaranteed. There were two rates: below the standard, a very
low piece rate and above the standard, a high piece rate was fixed. Thus, the system was designed to:
1. discourage below-average workers by providing no guaranteed wages and setting low piece
rate for low level production, and
2. reward the efficient workers by setting a high piece rate for high level production.
Illustration 3.4 A factory works 8 hours a day. The standard output is 100 units per hour and normal wage
rate is Rs. 5 per hour. The factory has introduced the following differentials in the matter of wage payment:
Normal piece rate = Rs. 5 / 100 units = Re. 0.05 per unit.
(a) When output is below standard, the piece rate will be Re. 0.04, i.e., 80% of Re. 0.05;
(b) When output is at or above standard, the piece rate will be Re. 0.06, i.e., 120% of Re. 0.05.
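The differentials can be applied to assumed daily outputs as follows; the output figures in the sketch are illustrative, since the workers' actual outputs for this illustration are not reproduced in the text.

```python
# Taylor differential piece rate (sketch for Illustration 3.4).
# Standard output: 100 units/hour x 8 hours = 800 units per day.
# Piece rates: Re. 0.04 below standard, Re. 0.06 at or above standard.
# The daily outputs below are assumed for illustration only.
STANDARD_OUTPUT = 8 * 100

def taylor_earnings(units):
    rate = 0.06 if units >= STANDARD_OUTPUT else 0.04
    return units * rate

for units in (700, 800, 900):
    print(units, "units -> Rs.", round(taylor_earnings(units), 2))
# 700 -> Rs. 28.0, 800 -> Rs. 48.0, 900 -> Rs. 54.0
```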
The Taylor differential system is often criticized as "unfair" because the worker's minimum wages are not guaranteed. However, Taylor's system is suitable for industries whose products, processes and operations can be standardized.
(ii) Multiple Piece Rates or Merrick Differential System: Merrick afterwards modified the Taylor's Differential
Piece Rate. Under this plan, the punitive lower rate is not imposed for performance below standard. On the
other hand, performance above a certain level is rewarded by more than one higher differential rates. The rates
which are applied are usually: up to 83⅓% efficiency, the ordinary piece rate; from 83⅓% up to 100% efficiency, a higher rate (e.g., 110% of the ordinary piece rate); and above 100% efficiency, a still higher rate (e.g., 120% of the ordinary piece rate).
Thus, this plan rewards the efficient workers and encourages the less efficient workers to increase their output by not penalizing them for performance below 83⅓%. This method also does not guarantee day wages.
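A sketch of the Merrick calculation is given below. The slab percentages (ordinary rate up to 83⅓%, 110% up to 100%, 120% above 100%) follow the commonly quoted version of the plan, and the piece rate and outputs are assumed.

```python
# Merrick multiple piece rate (sketch). Efficiency = output / standard output.
# The slab percentages follow the commonly quoted version of the plan; the
# ordinary piece rate of Re. 0.05 and the outputs are assumed.
def merrick_earnings(units, standard_units, base_rate=0.05):
    efficiency = units / standard_units * 100
    if efficiency <= 83.33:
        rate = base_rate              # ordinary piece rate, no penalty
    elif efficiency <= 100:
        rate = base_rate * 1.10       # 110% of the ordinary rate
    else:
        rate = base_rate * 1.20       # 120% of the ordinary rate
    return units * rate

for units in (600, 700, 850):         # standard assumed to be 800 units
    print(units, "->", round(merrick_earnings(units, 800), 2))
```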
(i) Emerson's Efficiency Plan: The main features of the plan are:
(a) Day wages are guaranteed.
(b) A standard time is set for each job or operation, or a volume of output is taken as standard.
(c) Below 66⅔% efficiency, the worker is paid his hourly rate.
(d) From 66⅔% up to 100% efficiency, payments are made on the basis of step bonus rates.
(e) Above 100% efficiency, an additional bonus of 1% of the hourly rate is paid for each 1% increase in efficiency.
Percentage Efficiency = (Actual Production / Standard Production) x 100
(2) facilitate an easy transfer from time wages to payment by results scheme.
But this scheme is not meant for skilled and competent workers.
Illustration 3.5
With the foregoing basic data (standard weekly output of 600 units and weekly time wages of Rs. 320), Emerson's Plan may be illustrated below:
Clock Card   Production   Percentage        Bonus             Total     Labour cost
   No.        per week    Efficiency   Percentage   Amount    wages      per unit
                                                      Rs.       Rs.         Re.
   10            390           65           —           —      320.00       0.82
   11            400           67           1          3.20    323.20       0.81
   12            480           80           4         12.80    332.80       0.69
   17            530           88          10         32.00    352.00       0.66
   19            550           92          10         32.00    352.00       0.64
   20            570           95          10         32.00    352.00       0.62
   26            580           97          20         64.00    384.00       0.66
   28            600          100          20         64.00    384.00       0.64
   30            620          103          23         73.60    393.60       0.63
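The wage computation implied by the table can be sketched as follows; the step-bonus breakpoints are read off the table above and vary between installations, so treat them as illustrative only.

```python
# Emerson efficiency bonus (sketch). Standard output 600 units per week and
# weekly time wages of Rs. 320, as in the table; the step-bonus breakpoints
# below are inferred from that table and are illustrative only.
def emerson_wages(units, standard_units=600, time_wages=320.0):
    efficiency = round(units / standard_units * 100)
    if efficiency < 67:
        bonus_pct = 0                           # below 66 2/3%: time wages only
    elif efficiency < 80:
        bonus_pct = 1
    elif efficiency < 88:
        bonus_pct = 4
    elif efficiency < 97:
        bonus_pct = 10
    elif efficiency <= 100:
        bonus_pct = 20
    else:
        bonus_pct = 20 + (efficiency - 100)     # +1% for each 1% above 100%
    return efficiency, bonus_pct, round(time_wages * (1 + bonus_pct / 100), 2)

print(emerson_wages(620))   # -> (103, 23, 393.6)
print(emerson_wages(390))   # -> (65, 0, 320.0)
```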
(ii) Gantt Task and Bonus Scheme: This system combines time rates, high piece rates and bonus. Its main features are:
1. Day wages are guaranteed.
2. Standards are set and bonus is paid if a work is completed within the standard time allowed.
The time and bonus rates are fixed for each job, and when a job is completed the worker goes on with the next.
The pay thus earned consists of (i) day wages plus (ii) the sum of all bonuses (i.e., quantity x high piece rate).
Thus, this plan provides an incentive for efficient worker to reach a high level of performance and also protects
and encourages the less efficient workers by ensuring the payment of their minimum wages in case their
performance is below the standard level. The Gantt Task scheme may be introduced in:
(iii) Bedaux Scheme or 'Points' Scheme: This system requires a very accurate time study and work study.
Under this scheme, each minute of standard time is called the Bedaux point or "B". Thus, each operation to be
performed can be expressed as being so many "Bs" and payment is made on the basis of the number of "Bs"
standing to the credit of a worker.
Time wages are paid until 100% efficiency rate is reached. Under the original plan, the worker received only
75% of the bonus while the 25% was received by supervisors. But, according to modified scheme, the workers
nowadays receive 100% of the bonus.
The limitations of the scheme are: high cost due to additional clerical work and inspection, lack of attempt to
control material costs, etc.
The various schemes under this method combine time wages with piece rates. As a result, the gains on labour
efficiency and losses on inefficiency are shared by employer and employee. There are three chief schemes
under this heading, viz.
(a) The Halsey Scheme: The main features of this scheme are:
1. A standard time is fixed for each job or operation.
2. Time rate is guaranteed and the worker receives the guaranteed wages irrespective of whether he or she completes the work within the time allowed or takes more time to do it.
3. If the job is completed in less than the standard time, a worker is paid a bonus of 50% of the time saved at time rate in addition to his normal time wages.
Total Earnings = (Hours worked x Hourly Rate) + 1/2 x (Time allowed - Time taken) x Hourly Rate.
Illustration 3.6
Time allowed: 10 hours; Time taken: 8 hours; Normal hourly rate: Rs. 2.
Earnings = (8 x Rs. 2) + 1/2 (10 - 8) x Rs. 2 = Rs. 16 + Rs. 2 = Rs. 18.
Advantages
2. The more efficient workers will be able to increase hourly rate of earnings more rapidly with the
increase in hours saved.
3. Inefficient workers are not penalised as they get day wages for the hours worked.
4. The employer will share 50% of the bonus due to time saved by the workers. This may induce him to
introduce better equipments and methods.
Disadvantages
1. The earning per unit will come down with the increase in efficiency. This may make the workers feel
that it is the employer who gains more by their efficiency. The workers may, therefore, object to share their
bonus with the employer.
2. The incentive, as compared with other high incentive schemes, is not strong enough to induce the
more efficient workers to work harder.
(b) The Halsey-Weir Scheme: Under this scheme, a worker will get a bonus of 30% of time saved as against
50% in the case of previous scheme. In other respects, both Halsey and Halsey-Weir Schemes are similar.
Illustration 3.7 Continuing the previous illustration, the earnings under this scheme will be:
Earnings = (8 x Rs. 2) + (30/100) x (10 - 8) x Rs. 2 = Rs. 16 + Rs. 1.20 = Rs. 17.20.
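Both schemes can be expressed as one premium-bonus function in which only the worker's share of the time saved changes; the sketch below reproduces the Rs. 18 and Rs. 17.20 figures of Illustrations 3.6 and 3.7.

```python
# Halsey-type premium bonus (sketch): share = 0.5 gives the Halsey scheme,
# share = 0.3 the Halsey-Weir scheme.
def premium_bonus_earnings(time_taken, time_allowed, hourly_rate, share):
    time_wages = time_taken * hourly_rate
    bonus = share * max(time_allowed - time_taken, 0) * hourly_rate
    return time_wages + bonus

print(round(premium_bonus_earnings(8, 10, 2, 0.5), 2))   # Halsey      -> 18.0
print(round(premium_bonus_earnings(8, 10, 2, 0.3), 2))   # Halsey-Weir -> 17.2
```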
(c) Rowan Scheme: This scheme was introduced by David Rowan in Glasgow in 1901. As before, the bonus is
paid on the basis of time saved. But unlike a fixed percentage in the case of Halsey Scheme, it takes into
account a proportion as follows:
But whatever method may be followed, the final result will be the same.
Formulae:
Bonus Ratio = Time saved / Time allowed
Method (i): Earnings = Time wages + (Time wages x Bonus Ratio)
Method (ii): Earnings = Time taken x Hourly rate x (1 + Bonus Ratio)
Illustration 3.8 Continuing the same illustration:
Bonus Ratio = (10 - 8) / 10 = 2/10 = 1/5
Earnings:
Method (i): 8 hrs. x Rs. 2 + (Rs. 16 x 1/5) = Rs. 16 + Rs. 3.20 = Rs. 19.20
Method (ii): 8 hrs. x (Rs. 2 + Rs. 2 x 1/5) = 8 hrs. x Rs. 2.40 = Rs. 19.20
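The Rowan calculation can be sketched in the same way, reproducing the Rs. 19.20 of Illustration 3.8.

```python
# Rowan premium bonus (sketch), continuing the same data as Illustration 3.8.
def rowan_earnings(time_taken, time_allowed, hourly_rate):
    time_wages = time_taken * hourly_rate
    bonus_ratio = max(time_allowed - time_taken, 0) / time_allowed
    return time_wages + time_wages * bonus_ratio

print(round(rowan_earnings(8, 10, 2), 2))   # -> 19.2
```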
The advantages and disadvantages of this method are mentioned below.
Advantages
3. It provides a safeguard against loose fixation of standards. For example, even if the rate-setting department, being newly established in a factory, sets the time allowed erroneously, the workers cannot take undue advantage as only a proportion of the savings is passed on to them.
Disadvantages
2. A beginner and a more efficient worker may get the same amount of bonus. This will adversely affect
the morale of the efficient workers.
The following table may be of interest in making the comparison between these two premium bonus schemes:

Rate    Time      Time    Time    Time          Bonus            Total earnings       Earnings per hour
per     allowed   taken   saved   wages     Halsey   Rowan       Halsey    Rowan       Halsey    Rowan
hour
(1)     (2)       (3)     (4)     (5) =      (6)      (7)        (8) =     (9) =       (10) =    (11) =
                                  (1)x(3)                        (5)+(6)   (5)+(7)     (8)÷(3)   (9)÷(3)
Rs.                               Rs.        Rs.      Rs.        Rs.       Rs.         Rs.       Rs.
2       10        10      Nil     20          —        —         20.00     20.00       2.00      2.00
2       10         8       2      16         2.00     3.20       18.00     19.20       2.25      2.40
2       10         6       4      12         4.00     4.80       16.00     16.80       2.67      2.80
2       10         5       5      10         5.00     5.00       15.00     15.00       3.00      3.00
2       10         4       6       8         6.00     4.80       14.00     12.80       3.50      3.20
2       10         2       8       4         8.00     3.20       12.00      7.20       6.00      3.60
(1) Bonus earned: A comparative position of bonus at different efficiency levels under both the schemes is
shown in Figure 3.3. The main points may be summarized below:
(i) In the Halsey scheme, the bonus increases steadily with increase in efficiency. But in the Rowan scheme, the bonus increases up to a certain stage and then starts decreasing.
(ii) The Rowan scheme provides a better bonus than the Halsey scheme until the work is completed in half the standard time. Again, under the Rowan scheme a less efficient worker may get the same bonus as a more efficient one. For instance, when the work is done in 6 hours, the bonus payable is Rs. 4.80, while the same amount is payable to another worker who takes only 4 hours to do it. Although this is unfair, the Rowan scheme may provide a safeguard against loose fixation of standards.
(iii) When the work is completed in half the standard time, the bonus is the same under both
the schemes. 50% efficiency is the cut-off or break-even point for both the schemes. This can be proved as
follows:
Bonus under Halsey plan = Standard wage rate x (50/100) x Time saved ... (i)
Bonus under Rowan plan = Standard wage rate x (Time saved / Time allowed) x Time taken ... (ii)
Bonus under the Halsey Plan will be equal to the bonus under the Rowan Plan when the following condition holds good:
Standard wage rate x (50/100) x Time saved = Standard wage rate x (Time saved / Time allowed) x Time taken
or 1/2 = Time taken / Time allowed
or Time taken = 1/2 of Time allowed
Hence, when the time taken is 50% of the time allowed, the bonus under the Halsey and Rowan plans is equal.
(iv) When the work is completed in less than half the standard time, bonus under Halsey scheme is greater.
(2) Total earnings: Time wages under both the schemes remaining constant at a particular level of efficiency,
total earnings will vary depending upon the amount of bonus. Therefore, the following points will emerge:
(i) Below 50% efficiency, earnings under Rowan method will be greater than that under Halsey.
(ii) At 50% efficiency level, earnings under both the schemes will be equal.
(iii) Beyond 50% efficiency level, earnings under Rowan scheme will be lower than that of Halsey.
(iv) In the Halsey scheme, total earnings or labour costs steadily decrease with increase in efficiency.
The decrease, in the Rowan scheme, is at an accelerating pace up to saving of 50% on the time allowed;
beyond that the labour costs are less than that under the Halsey scheme.
(3) Earnings per hour: In the Halsey scheme, the earnings per hour increase at an accelerating rate. Under the Rowan scheme, they increase steadily.
A diagrammatic representation of the comparison between Halsey and Rowan schemes is shown in Fig. 4.6.
(d) Barth Scheme: Under this plan day wages are not guaranteed. Wages payable are arrived at by multiplying the hourly rate by the square root of the product of the time allowed and the time taken. In other words, Wages = Hourly rate x √(Time allowed x Time taken).
Illustration 3.9
Hourly rate: Rs. 2; Time allowed for the job: 5 hours.
Find the wages payable when the time taken is 6 hours, 5 hours and 4 hours respectively by three different workers, X, Y and Z.

Worker   Time allowed   Time taken   Wages payable = Hourly rate x √(TA x TT)     Wages per hour (Rs.)
  X           5              6       Rs. 2 x √(5 x 6) = Rs. 11 (approx.)                 1.83
  Y           5              5       Rs. 2 x √(5 x 5) = Rs. 10                           2.00
  Z           5              4       Rs. 2 x √(5 x 4) = Rs. 9 (approx.)                  2.24
It appears that when efficiency increases, the rate of increase in the total earnings falls. Another disadvantage
of this scheme is that because of complication involved in calculating wages, an average worker cannot himself
determine his own wages. But this plan is most useful for beginners and trainees and unskilled workers.
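The Barth computation can be sketched as follows, reproducing the figures of Illustration 3.9.

```python
# Barth variable sharing plan (sketch): wages = hourly rate x sqrt(TA x TT),
# reproducing the figures of Illustration 3.9.
from math import sqrt

def barth_wages(hourly_rate, time_allowed, time_taken):
    return hourly_rate * sqrt(time_allowed * time_taken)

for taken in (6, 5, 4):                    # workers X, Y and Z
    total = barth_wages(2, 5, taken)
    print(taken, "hrs:", round(total, 2), "total;",
          round(total / taken, 2), "per hour")
# 6 hrs: 10.95 (about Rs. 11); 5 hrs: 10.0; 4 hrs: 8.94 (about Rs. 9)
```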
(e) Accelerating Premium Bonus: Under this scheme, bonus increases at a faster rate. For example, one may
get 175% of basic wages for 175% efficiency. This scheme is not suitable for machine operators, in that owing
to the high incentives the workers may rush through work to earn more, disregarding quality of production. But it
is suitable for foremen and supervisors, so that they may obtain the maximum possible production from workers
under them.
There is no simple formula for this scheme. Therefore, each firm has to devise its own formula. However, by way of illustration, the curve y = 0.8x² may be taken as a general picture of the scheme (where x is percentage efficiency ÷ 100 and y is wages as a fraction of basic wages).
Thus, multiplying the values of x and y by 100, one gets percentage earnings against percentage efficiency.
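A sketch of the illustrative curve is given below; remember that y = 0.8x² is only an example and each firm devises its own formula in practice.

```python
# Accelerating premium bonus: sketch of the illustrative curve y = 0.8 * x**2,
# where x = percentage efficiency / 100 and y = earnings as a fraction of
# basic wages (each firm devises its own formula in practice).
def accelerating_premium(efficiency_pct):
    x = efficiency_pct / 100
    return 0.8 * x ** 2 * 100          # earnings as a percentage of basic wages

for eff in (100, 110, 125, 150):
    print(eff, "% efficiency ->", round(accelerating_premium(eff), 1),
          "% of basic wages")
# 100 -> 80.0, 110 -> 96.8, 125 -> 125.0, 150 -> 180.0
```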
In all the schemes discussed so far, the bonus payable has been ascertained on an individual basis. But bonus
scheme for a group of workers working together may also be introduced where:
Under these circumstances, a group bonus based on the results of the team effort may be introduced.
Illustration 3.10
Bonus—for every 25% increase in production, a bonus of Rs. 100 will be shared pro rata among the 10
members of the group.
Illustration 3.11 In a factory, a group bonus system is in use under which the bonus is calculated on the basis of earnings under time rate.
Calculate the total of bonus and wages earned by each worker.
Total piece earnings for the group = (Rs. 2.50 / 100) x 16,000 = Rs. 400
Time wages (extract):
R     80 hrs. @ Rs. 1.20  =  Rs. 96
S    100 hrs. @ Re. 0.80  =  Rs. 80
Total time wages of the group   Rs. 320
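The distribution can be sketched as follows. It assumes that the group bonus is the excess of the piece earnings (Rs. 400) over the total time wages (Rs. 320) and that it is shared in proportion to time wages, which is the basis stated in the illustration; only workers R and S are reproduced in the text.

```python
# Group bonus distribution (sketch). Assumes the group bonus is the excess of
# piece earnings (Rs. 400) over total time wages (Rs. 320), shared among the
# members in proportion to their time wages; only R and S appear in the text.
group_piece_earnings = 400.0
total_time_wages = 320.0                     # total for the whole group
time_wages = {"R": 96.0, "S": 80.0}          # remaining members not reproduced

bonus_pool = group_piece_earnings - total_time_wages      # Rs. 80

for worker, wages in time_wages.items():
    bonus = bonus_pool * wages / total_time_wages
    print(worker, "bonus:", round(bonus, 2),
          "total earnings:", round(wages + bonus, 2))
# R bonus: 24.0, total 120.0 ; S bonus: 20.0, total 100.0
```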
2. Harmonious working in a group leads to increased output and hence lower cost of production.
Disadvantages
1. The efforts of more efficient workers are not properly rewarded. Put another way, the share of inefficient workers may be the same as that received by the more efficient members of the group.
2. It is difficult to fix the amount of incentive and its principle of distribution among the members.
Sometimes the idea of group bonus may be extended to the whole factory. The various schemes which may be
introduced for this purpose may include the following:
(a) Priestman's Production Bonus: Under this system, a standard is fixed in terms of units or points. If actual
output, measured similarly, exceeds standard, the workers will receive a bonus in proportion to the increase.
Therefore, this system can operate in a factory where there is mass production of a standard product with little
or no bottlenecks.
Illustration 3.12 In a mass production factory, 1,000 workers are employed. Standard output for a week is set
at 5,00,000 points. During a week actual output is valued at 6,50,000 points.
In addition to the basic wages, the employees will, therefore, receive a bonus calculated as follows:
Increase in output = 6,50,000 - 5,00,000 = 1,50,000 points, i.e., (1,50,000 / 5,00,000) x 100 = 30%.
Each worker will, therefore, receive a bonus of 30% of his basic wages.
(b) Rucker, or "Share of Production" Plan: According to this plan, employees receive a constant proportion of the 'added value' or 'value added'. The term value added is defined in the Terminology as follows:
The increase in realizable value resulting from an alteration in form, location or availability of a product or service, excluding the cost of purchased materials and services.
The value added concept has become increasingly important in recent years. Many firms are using it both as a
measure of performance and as a labour incentive scheme. Value added measures the value added by an
enterprise to its product or the provision of a service. In other words,
VA = Sales value - Cost of purchased materials and services
or
VA = Profit before tax + Conversion costs + Other costs, where conversion costs include manufacturing labour and manufacturing overheads, and other costs include administration, selling and distribution costs (including interest, depreciation, etc.).
In introducing an incentive scheme based on value added, a ratio of labour cost to value added is set based on normal relationships. Any reduction in the ratio entitles the workers to an appropriate bonus payment. According to Rucker, labour will receive a constant proportion of the added value.
Illustration 3.13 A Ltd shows the following average pattern over the past five years:
                                                        Rs.
Labour cost                                         2,00,000
Production, Admn. and Selling & Dist. Overheads     1,40,000
Profit before tax                                     60,000
Value added                                         4,00,000

The ratio of labour cost to VA is: (2,00,000 / 4,00,000) x 100 = 50%.
Assume that a bonus is payable for any reduction in the ratio, at the rate of 1% of the added value for each 1% of reduction. In the following year, sales and added value increased. The results were as follows:
                                                        Rs.
Labour cost                                         2,20,000
Production, Admn. and Selling & Distribution
Overheads                                           1,50,000
Profit before tax                                     80,000
Value added                                         4,50,000

Labour cost to VA is: (2,20,000 / 4,50,000) x 100 = 48.9% (approx.)
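The calculation can be sketched as follows. The bonus rule used here, 1% of the added value for each one-percentage-point fall in the labour-cost-to-VA ratio, is one reading of the illustration and should be treated as an assumption rather than the definitive Rucker formula.

```python
# Value-added (Rucker-type) bonus sketch for Illustration 3.13. The bonus
# rule used here (1% of value added for each one-percentage-point fall in
# the labour-cost-to-VA ratio) is an assumption, one reading of the text.
def labour_to_va_ratio(labour_cost, value_added):
    return labour_cost / value_added * 100

base_ratio = labour_to_va_ratio(2_00_000, 4_00_000)   # 50.0% (five-year norm)
new_ratio = labour_to_va_ratio(2_20_000, 4_50_000)    # about 48.9%

reduction = base_ratio - new_ratio                    # percentage points saved
bonus_pool = reduction / 100 * 4_50_000               # 1% of VA per point

print(round(new_ratio, 1), "% ; bonus pool approx. Rs.", round(bonus_pool))
# 48.9 % ; bonus pool approx. Rs. 5000
```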
The value-added scheme appears to be a more satisfactory method than the normal profit-sharing scheme for
many reasons. But profit-sharing schemes are still relatively widely used in industry. However, this system
presupposes a great deal of consultation between management and workers so as to make the effort more
effective.
(c) Scanlon Plan: This plan is similar to the Rucker plan except that it adopts the ratio between wages and
sales value of production.
(d) Towne Gain Sharing Plan: According to this plan, 50% of "gain" (savings in cost) is paid to individual
workers pro rata in addition to their basic wages. Here bonus is calculated on the basis of reduction in
labour cost vis-a-vis the standard set. The supervisory staff may also receive a share of the bonus.
Incentive Schemes for Indirect Workers
One of the main conditions of the incentive systems is that actual output and/or time taken in relation to
standard set is determinable. In case of direct workers the measurement of performance does not involve any
problem. But in case of indirect workers, whose performance cannot be directly measured (e.g., supervisors,
machine maintenance staff, staff of stores, internal transport, packing, dispensing, canteen, etc.), introduction of
an incentive system may appear to be difficult. Still it is essential to provide for incentives to the indirect workers
for the following reasons:
1. If direct workers are rewarded for their efficiency, there is no reason why the indirect workers should
not be brought under some incentive schemes.
2. When only direct workers enjoy incentive schemes, indirect workers who work side by side with them
are dissatisfied with such discrimination. This, therefore, affects morale and hence efficiency of the indirect
workers. On the other hand, an incentive scheme for indirect workers will increase their efficiency and
promote team spirit.
3. When the work of the direct workers is related to or dependent upon that of the indirect workers, any
deficiency on the part of the latter due to lack of incentive schemes will also affect adversely the efficiency
of direct workers. As for example, if the plant and machinery is not properly and regularly maintained by the
staff concerned, the efficiency of machine operator is bound to decrease. Therefore, to attain all round
efficiency it is necessary to have incentive schemes both for direct and indirect workers.
For the purpose of incentive schemes, indirect workers may be grouped as under:
(a) Indirect workers working with direct workers, e.g., supervisors, inspectors, checkers, transport workers, etc.
In this case, bonus may be based on the output of direct workers whom the indirect workers serve.
(b) Indirect workers rendering general service, e.g., sweepers, canteen workers, dispensing staff, maintenance
staff, etc. Bonus to be paid will be determined on a wider basis, e.g., output of a department or of the whole
factory, a percentage of bonus payable to the direct workers, job evaluation, merit rating, etc.
In designing an incentive scheme for the indirect workers, the following points must be considered:
1. It should be guaranteed for a specific period, e.g., weekly, monthly, half-yearly, yearly, etc.
2. It should be so organized as to achieve all round efficiency.
3. It should be paid at regular intervals.
4. Rewards should be related to results.
(i) Bonus to foremen and supervisors: Supervisors and foremen may be paid a weekly or monthly bonus
based upon the following:
(ii) Bonus to repairs and maintenance staff: For routine and repetitive maintenance, a group bonus system can be established on the basis of a reduction in the number of complaints or a reduction in breakdowns. Alternatively, an efficiency percentage can be evaluated for the purpose of payment of bonus.
(iii) Bonus to stores staff: It may be based on value of materials handled or number of requisitions. When
standards are set, efficiency percentage may be calculated for the purpose.
Of late, employees frequently receive additional remuneration based on the prosperity of the concern. The
principal schemes under this heading include profit-sharing and co-partnership. These schemes are becoming more and more widespread and are growing in importance.
(i) Profit-sharing: Under this scheme, the employees are entitled, by virtue of an agreement, to a share of profits
at an agreed percentage in addition to their wages. Sometimes, a minimum period of service is a condition of
participation in the scheme. This type of scheme recognizes the principle that every worker contributes
something towards profits and hence he should be paid a percentage thereof.
In India, profit-sharing schemes take the form of an annual or other periodical bonus. In other words, the
"available surplus" is generally distributed amongst three parties:
There were considerable disputes as regards the quantum of bonus to be paid to the employees. The Govt. of India set up a Bonus Commission and, on the basis of its report, the Payment of Bonus Act was passed in 1965. Under this Act, the minimum and maximum bonus payable are respectively 8⅓% and 20% of salary.
(ii) Co-partnership: Under this scheme, employees are allowed to have a share in the capital of the business
and thereby to have a share of the profit. The shares held by the employees may or may not carry voting rights.
When co-partnership operates in conjunction with profit-sharing, the employees are allowed to leave their
bonus with the company as shares or as a loan carrying lucrative interest.
Advantages
1. Schemes like Profit-sharing, Co-partnership, etc. will recognize the principle that every employee,
directly or indirectly, contributes something towards profit. This will increase employee morale and thereby
reduce labour turnover.
2. More profits may lead to more bonus to the employees. This induces them to increase their
efficiency to work hard. As a result, there will be increased productivity.
3. The employees feel a greater "sense of belonging" to the enterprise and this leads to careful
handling of costly materials and plants and machinery.
Disadvantages
1. Employees are not paid bonus on the basis of output and hence the efforts of more efficient employees are not properly rewarded.
2. The employees do not have any access to the accounts of the enterprise and, therefore, they cannot
ascertain the propriety of the amount paid to them as bonus. This may, and very often does, lead to
disputes which may turn to strike, lockout, etc.
Non-monetary Incentives
These types of incentives relate more to the conditions of employment than to job functions. The objectives behind these schemes are two-fold:
1. Making the conditions of employment more and more attractive, and
2. Promoting better health amongst the employees so as to build up a happy and contented staff.
Non-monetary incentives may be entirely free or subsidized by the company. They are wide in number and may
include:
1. Canteen—free or subsidized
2. Health and safety
3. Recreational facilities
4. Housing facilities
5. Educational and training facilities
6. Pension, Provident Fund schemes, etc.
Chapter 4
Financial aspect of supply chain management
A supply chain is a network of manufacturers, suppliers, distributors, transporters, storage facilities and retailers
that perform functions like procurement and acquisition of material, processing and transformation of the
material into intermediate and finished tangible goods, and finally, the physical distribution of the finished goods
to intermediate or final customers.
Components
A supply chain may consist of a variety of components depending on the business model selected by a firm. A
typical supply chain consists of the following components:
• Customers
• Distributors
• Manufacturers
• Suppliers
Customers
The customer forms the focus of any supply chain. A customer activates the processes in a supply chain by placing an order with the retailer. The customer order is filled by the retailer, either from the existing inventories,
or by placing a fresh order with the wholesaler/manufacturer. In some cases a customer bypasses all these
supply chain components by getting in touch with the manufacturers directly. For example in the case of an
online purchase of a computer from Dell Computers, the customer places an order directly with the
manufacturer.
[Figure: A typical supply chain, linking Suppliers, Manufacturers, Distribution Centers and Markets/Customers.]
Retailers/Distributors
The retailer acts as a link between the customer and the distributor/manufacturer. He caters to the needs of the
customer by making the products available at his store. As part of this process, the retailer places orders with the manufacturer to replenish the stocks. In a typical supply chain, purchase orders originate at the retailer's end, but in some cases where there is an arrangement to share point-of-sale (POS) information with manufacturers, the manufacturer monitors the stock levels and replenishes them automatically. Wal-Mart has such an arrangement with P&G.
Manufacturers
The manufacturer plays a key role in deciding the structure of supply chain. Depending on the market situation,
the manufacturer either uses the pull or the push strategy to generate demand required for the movement of
products in the supply chain. The manufacturer then plans for a production schedule depending on the resultant
demand.
Suppliers
Suppliers facilitate the manufacturers' production process by ensuring a continuous supply of raw materials. Manufacturers place orders with suppliers on the basis of forecasted customer demand. Since it is very difficult
to forecast demand accurately, manufacturers try to integrate their processes with those of the suppliers to be
in a better position to respond to fluctuations in customer demands. Suppliers help manufacturers to decrease
their inventory levels by arranging for Just-in-time supplies.
Supply chain management involves the use of a set of approaches to integrate efficiently the activities of
suppliers, manufacturers, warehousing providers and retailers, so that goods are produced and distributed in the right quantities, to the right locations, and at the right time, in order to minimize system-wide costs while
meeting customer service expectations.
Although there are many views of supply chain management, at present, many practitioners look upon supply
chain management as the management of key business processes across the network of organizations that
form the supply chain. According to the definition given by the Global Supply Chain Forum, supply chain management is the integration of key business processes from end-user to original suppliers that provides products, services, and information that add value for customers and other stakeholders. There are eight key business processes that are carried out across the supply chain. They are: customer relationship management, customer service management, demand management, order fulfillment, manufacturing flow management, supplier relationship management (procurement), product development and commercialization, and returns management.
Each of the above processes consists of a set of activities from within various functions of the organizations
comprising the supply chain. These functions include marketing, production, finance, research and
development, logistics, etc. Figure 4.1 shows the various business processes that are performed across the supply chain.
Customer Relationship Management
Customer relationship management involves establishing a framework for building and maintaining
relationships with customers. This involves identifying the customer-groups who form the target for achieving
the firm's business objectives. Then the customer service teams design the product or service agreements
specifying the level of service that is to be offered to each of these customer groups. These teams work in close
coordination with the key account customers to reduce demand variability. Performance reports are designed in
order to measure levels of service made available to the customer and the profits resulting from serving each of
the customer groups.
Customer Service Management
Customer service management is concerned with providing the customer with up-to-date information relating to
shipping dates, product availability, product application, etc. The customer service management teams act as
an interface between the customers and the functional departments like production and logistics in
administering product and service agreements. Various aspects of customer service management are
discussed at length in Chapter 12.
Demand Management
Demand management is the key to effective supply chain management. It plays a major role in balancing the
customer's requirements with the firm's supply capabilities. Demand management involves determining
forecasting methods to gauge customer demand, synchronizing demand with the supply capabilities of the firm,
and developing contingency management systems to handle variations in demand. Steps involved in planning
demand and supply in a supply chain are discussed in Chapter 4.
Order Fulfillment
The effectiveness of a supply chain is determined by its ability to fill customer orders on time. A high order fulfillment rate with low costs requires coordination between the various organizations across the supply chain and their internal functions like manufacturing, distribution and transportation. The order fulfillment process includes activities like receiving orders, defining the requirements for order fulfillment, evaluating the logistics network, developing plans for order fulfillment, etc. This topic is discussed in detail in Chapter 13.
Manufacturing Flow Management
Manufacturing flow management is concerned with ensuring the smooth production of goods and developing
flexible production processes that can respond to the demands of the target markets. This supply chain process
includes activities like determining the degree of manufacturing flexibility required, manufacturing and material
planning, determining manufacturing capabilities, synchronizing production and demand, etc.
Procurement
Supplier relationship management guides the interactions of the firm with its suppliers. This process aims at developing long-term relationships with suppliers to ensure an uninterrupted flow of supplies for the firm's manufacturing
processes. Such relationships are essential for effective supply chain management.
Product Development and Commercialization
Reducing the time to market is one of the objectives of supply chain management. The product development
and commercialization process involves establishing cross-functional product development teams, designing
and building prototypes, developing product rollout plans, etc. This requires the integration of customers and
suppliers into the product development process to ensure speedy rollout of new products.
Returns Management
Many companies are forced to recall products to rectify defects, upgrade the products or recycle them. Thus, the returns management capability of a firm also plays a major role in providing a competitive edge to the firm. There may be many environmental issues associated with the way a firm handles its returns. Hence, managing
the products returned is also a major part of supply chain management. The returns management process is
discussed at length in Chapter 11.
OBJECTIVES OF SCM
One of the major objectives of supply chain management is to reduce the total amount of resources necessary
to provide the required level of customer service to a particular customer group. Some of the other objectives of
supply chain management are to:
Financial flow is an important flow in any supply chain, apart from the material and information flows. In the past, firms focused mainly on improving the material flow in their supply chains. But with opportunities for saving cost
and making profits arising from improving the financial flow, firms have begun to streamline the financial flow as
well. Technological advances that facilitate automation, have enabled firms to improve the financial flow.
Traditionally, backward flow of cash from customers to the product manufacturer or service provider is
considered as the financial flow. This includes payments for goods and services to the suppliers and collection of payments from the customers for providing goods and services. An efficient financial flow can help the firm in
reducing inventory, increasing cash flow, improving collaboration between the supply chain partners, and
enhancing customer satisfaction. In this chapter, we first discuss the components of a financial flow and how
they can be improved. Then, we examine the various options available for automating the financial flow in a
supply chain. Finally, we discuss the ways by which an integration of material and financial flows can be
achieved.
There are two key components that constitute the financial flow in a supply chain, viz., purchase-to-pay process
and order-to-cash process. Purchase-to-pay process consists of financial transactions with the suppliers and
order-to-cash process consists of financial transactions with the customers. Efficient management of cash flow
in these two processes can improve the profitability of the supply chain. This involves faster collection of the
accounts receivables and efficient management of accounts payable.
In this section, we discuss both the processes in detail and the ways to speed them up in order to achieve cost
savings and profitability.
Purchase-to-Pay Process
Purchase-to-pay process starts with the buyer making the requisition and ends with the payment to the supplier.
The buyer makes a purchase requisition and it is passed on to the purchasing department for approval. After
getting the approval of the purchasing manager, a purchase order is sent to the supplier. On receiving the
purchase order the supplier dispatches the shipment along with the invoice. On receiving the goods, the firm checks the shipment and the invoice to confirm whether the shipment matches the purchase order and whether the product quality and quantity are as desired. Upon confirmation, the accounts department pays the supplier.
Figure 4.2 describes the purchase-to-pay process.
[Figure 4.2: The purchase-to-pay process: requisition, approval, purchase order sent to the supplier, receipt of goods, invoice processing, and payment to the supplier.]
Some of the measures to improve efficiency of purchasing transactions are discussed below.
There are various ways of reducing processing time and costs in order to expedite the purchasing process.
Firms should allow buyers (employees who are involved in purchase activities) to order goods, up to a certain permissible limit, without approval. This reduces the time and costs involved in routing and approving the purchase orders. In cases where the purchase requisitions are approved before the order is placed with the supplier, a second approval at the time of payment to the supplier should be eliminated to reduce the delay in the purchasing process.
Use of Evaluated Receipt Settlement (ERS). In ERS, the buyer makes the payment as and when he receives
the goods, thus eliminating the need for an invoice. On receipt of the goods, the buyer compares the packing
slip and the goods with the purchase order, and the amount is calculated based on the price quoted in the
purchase order. Then, the payment is made to the supplier. Thus, the firm pays the supplier for what it receives,
and this reduces the time and costs in matching the invoices and the errors that occur due to repeated data
entry.
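A minimal sketch of the ERS matching logic is given below; the record layout is illustrative and not tied to any particular ERP system.

```python
# Evaluated Receipt Settlement (sketch): pay on receipt by matching the goods
# received against the purchase order, with no supplier invoice. The record
# layout is illustrative, not tied to any particular ERP system.
purchase_order = {"po_no": "PO-1001", "item": "bearings", "qty": 500, "unit_price": 12.50}
goods_receipt  = {"po_no": "PO-1001", "item": "bearings", "qty_received": 480}

def ers_payment(po, receipt):
    # Match on PO number and item, then pay for the quantity actually
    # received at the price quoted in the purchase order.
    if po["po_no"] != receipt["po_no"] or po["item"] != receipt["item"]:
        raise ValueError("receipt does not match the purchase order")
    qty = min(receipt["qty_received"], po["qty"])
    return qty * po["unit_price"]

print(ers_payment(purchase_order, goods_receipt))   # -> 6000.0
```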
Use of electronic invoicing. Electronic invoicing (EDI, Electronic Invoice Presentment and Payment etc.) can
reduce paperwork and processing costs. This can reduce the errors and disputes that arise due to manual
processing.
Performance management
A proper performance management process needs to be established to effectively measure the PTP (purchase-to-pay) process.
This can provide some inputs to make the PTP process more efficient. There are two types of performance
metrics in the PTP process: top down performance metrics, which measure the overall performance of the PTP
process, and bottom up performance metrics, which measure individual or team performance. Top down
metrics include percentage of payments made using checks and electronic payments, processing costs
incurred, purchase order error rates etc. Bottom up metrics include time taken for processing each payment
voucher, processing cost per invoice etc.
The metrics need to be aligned with the goals set by the firm. The metrics are measured against the goals set
to analyze the extent to which goals have been achieved. The goals need to be based on industry benchmarks.
Automation
A firm can enhance the efficiency of the PTP process by automating it. Firms can implement an E-procurement
system and streamline the purchasing process. But an E-procurement application can only improve the physical
process of purchasing and not the financial processes. Many ERP systems contain various modules of the PTP process that may help in the automation of the financial components. Another application that aids the automation of financial processes is the Electronic Invoice Presentment and Payment (EIPP) system. EIPP systems enable the
suppliers and buyers to exchange invoices, resolve disputes, and make payments electronically. Thus, these
systems enable collaboration between supply chain partners. EIPP systems need to be integrated with the
internal systems like procurement, ERP and accounts payable for effective functioning.
Outsourcing
Outsourcing some of the components of the PTP process to a third party is an option that a firm can consider,
to enhance the effectiveness of the PTP process. A firm has to identify the functions that can be outsourced so
that cost savings or faster processing can be achieved. Many financial institutions offer cash management
services like receiving invoices, check printing, dispute handling, reporting & analysis, managing international
payments, electronic funds transfer, supplier management, etc. In some cases the entire PTP process is outsourced. The benefits from outsourcing include a reduction in processing costs. Outsourcing provides the firm
flexibility and the ability to scale up the operations as and when needed. By outsourcing time consuming and
routine activities, a firm can focus more on strategic functions and the personnel can be used for productive
purposes. The risks involved in PTP operations can be reduced by sharing the operations with a third party.
Order-to-Cash Process
Order-to-cash process starts with the customer placing the order and ends with receiving the payment from the
customer. The steps involved in the order-to-cash process are explained below.
The order is placed by the customer directly through phone, fax, or the Internet. Then, the inventory is checked
for the availability of the product in the quantity required by the customer. The firm then checks the customer
credit status to decide whether or not to extend credit to the customer. For this, the customer's credit limit and
the status of receivables from the customer are checked. If the customer has placed the order within the credit
limits and has nil or permissible receivables, then the product can be delivered to the customer. If not, the firm
has to evaluate whether to fulfill the order or to reject it or put it on hold. If it is a new customer, the firm has to
establish a new credit line for the customer. If the customer is an existing one and has high credit risk, then the
order may be rejected. If the order is placed by an existing customer having low credit risk, then the order may be put on hold for further analysis.
After delivering the goods, the customer is billed and the invoice is sent to the customer. The disputes that are
raised by the customer are then examined and resolved. Finally, the collection of the payment is done either at
the convenience of the customer or as per rules and norms set by the firm.
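The credit-check step described above can be sketched as a simple decision function; the flags and risk labels are illustrative inputs rather than fields of any specific system.

```python
# Credit-check step of the order-to-cash process (sketch of the decision
# logic described above; the flags and risk labels are illustrative).
def credit_decision(within_credit_limit, receivables_ok, is_new_customer, credit_risk):
    if within_credit_limit and receivables_ok:
        return "deliver the product on credit"
    if is_new_customer:
        return "establish a new credit line, then re-evaluate the order"
    if credit_risk == "high":
        return "reject the order"
    return "put the order on hold for further analysis"    # existing, low risk

print(credit_decision(True, True, False, "low"))     # deliver the product on credit
print(credit_decision(False, False, False, "high"))  # reject the order
print(credit_decision(False, True, False, "low"))    # put the order on hold ...
```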
By expediting the order-to-cash process, cash flows can be improved. Some of the steps that can be carried out to expedite the order-to-cash process are: developing a sound credit policy, automating receivables management, developing relevant performance metrics, and developing an effective reporting system.
B2B firms generally provide goods on credit to the customers. In most of the cases, credit is interest free and
the firm has to bear the credit costs. Firms, therefore, have to carefully evaluate and set guidelines for providing
credit to the customers. Firms may have to provide credit on liberal terms. Yet, at the same time, they have to
make sure that the bad debts generated on account of those liberal policies are kept under control. While
developing the credit policy and procedures, several factors have to be considered. Credit policy needs to take
into account the industry within which the firm is operating and its size. Big retailers wield more power, thus
forcing the suppliers to make their policies towards such firms liberal. The firm should also consider customer
preferences and requirements. It has to evaluate its competitive environment and develop a credit policy that
differentiates it from its competitors. It also needs to develop credit risk analysis guidelines, which enable it to
evaluate a customer while providing the credit.
There are four key types of credit policies which a firm can adopt. The first type of credit policy sets strict credit standards and stringent measures of collection. Under such a policy, the firm only accepts those customers who have a good credit history and high credit ratings. At the same time, the firm may also have strict collection policies, such as imposing penalties and fines for late payments. Such a policy enables the firm to obtain
payments faster and reduces the risk of high bad debt. But such a policy is not customer friendly. The second
type of policy is to be liberal in providing credit but strict in collecting dues. In such a policy, the firm accepts
customers with even low credit ratings but the collection will be strict and no kind of lenience towards the
customers is allowed in the collection policy. Such a policy is customer friendly but it increases the collection
costs and the risk of bad debts. The third kind of credit policy allows only customers with high credit ratings, but has liberal collection policies. In such a policy, only customers who have high credit ratings and a good track record are allowed. But the collections are made liberally. The idea behind such a policy is that a customer with
a good track record will pay the dues promptly; therefore, making the collection process liberal will not have any
impact on the receivables. But such a policy is not advisable for firms which handle large orders. The fourth
kind of credit policy allows customers with low credit ratings and a liberal collection policy. Such a policy may
increase the risk of bad debts and the collection process may take a long time and become tedious. Such a
policy is advisable when the firm wants to increase its market share. The firm has to choose an optimal credit
policy, which while being customer friendly, should not impact the collection and quality of the receivables.
Another important step in enhancing the efficiency of receivables management is the automation of a part, or
whole of the process. By automating receivables management a firm can track and monitor the receivables and
evaluate as to how the receivables process can be improved. Automation helps in faster and more accurate risk
assessment of customers. Firms can easily distinguish between the customers with low credit profiles and
customers with high credit profiles. This enables the firm to decide upon the customers to whom credit can
safely be extended. Activities like payments and credit analysis can be automated, to reduce time and costs
and to improve the receivables collection and management.
Developing relevant performance metrics helps the firm assess the effectiveness of the receivables
management process. It can also help the firm identify opportunities to improve the process. Performance
measures are needed for all the elements of order-to-cash process, and should be developed in line with
organizational objectives. They should also be based on industry standards. By matching the measures to the
industry standards, a firm can analyze its position in relation to its competitors, and take necessary action to
improve upon those measures. Days Sales Outstanding (DSO) is the key measure that is generally used to
evaluate the order-to-cash process. But there are other metrics, related to each step in the order-to-cash
process, that can be measured for better performance analysis.
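As an indicative sketch, the standard DSO calculation (receivables divided by credit sales, scaled by the number of days in the period) can be expressed as follows; the figures used are purely hypothetical.

```python
def days_sales_outstanding(receivables, credit_sales, period_days=365):
    """Days Sales Outstanding: the average number of days taken to collect receivables."""
    return receivables / credit_sales * period_days

# Hypothetical figures, for illustration only.
receivables = 1_200_000     # accounts receivable outstanding (Rs.)
credit_sales = 9_000_000    # credit sales for the year (Rs.)
print(f"DSO = {days_sales_outstanding(receivables, credit_sales):.1f} days")
# About 48.7 days: on average the firm takes roughly 49 days to convert a credit sale into cash.
```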
Developing an effective reporting system
Information needs to be shared between different departments for efficient receivables management. Proper
information sharing enables the departments to have accurate and up to date information, which in turn helps
them to take timely action. For example, suppose a customer holds back payment due to quality or quantity
issues. This information is first received by the accounts receivables department. If this information is
communicated to the manufacturing department then it can take timely action to improve the quality of the
products. This would help the firm collect receivables and also resolve customer grievances faster. This
information needs to be shared with the other supply chain partners like logistics service providers and financial
institutions as well. As customers expect timely and accurate order delivery, any deviations can delay the
payment process. Supply chain partners help in providing the right product to the right customer at the right
time. An effective reporting system would help provide accurate information to the supply chain partners, so that
the right order can be delivered to the customer, on time.
One of the key elements which helps in efficient financial flow in a supply chain is the use of IT solutions in the
purchase-to-pay and order-to-cash processes. By automating these processes firms can minimize inefficiencies
and improve the effectiveness of the supply chain. Some of the prominent IT solutions that are used in
automating the financial flow are credit management applications, which typically support the following:
•   Assigning ratings to the customers. By providing the required credit rules to these systems, a firm can obtain the ratings of its customers as per the predefined credit rules. This helps the firm distinguish between the customers who need to be focused upon and the customers who can be given less attention, and to decide upon the customers to whom credit can safely be extended.
•   Helping sales representatives during the sales process. When customer data is entered into these systems, they provide instant credit analysis information about the customers, such as credit limits and credit terms. This helps the sales representatives take faster decisions at the point of sale, for example to sell higher-range products or to offer liberal credit terms to customers with high credit limits and a good credit history. Sales representatives can also decide upon the pricing of the product based on the credit risk: a firm can charge a higher price to customers with high credit risk and a lower price to customers with low credit risk. This helps the firm to improve its revenue as well as to reduce its credit risks.
•   Improving customer satisfaction. Faster credit processing enables the firm to process orders without much delay, thus increasing customer satisfaction. This may help in developing long-term relationships with customers.
Three prominent firms, Dun & Bradstreet, Coface and Equifax, provide such applications. With the emergence of the Internet, these firms have begun to provide these services on the World Wide Web.
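A minimal sketch of the kind of rule-based rating such credit management applications perform is given below; the customer fields, thresholds and rating labels are assumptions made purely for illustration and are not taken from any particular vendor's product.

```python
from dataclasses import dataclass

@dataclass
class Customer:
    name: str
    payment_history_score: int   # 0-100, hypothetical bureau-style score
    overdue_invoices: int

def credit_rating(customer: Customer) -> str:
    """Assign a rating from predefined (hypothetical) credit rules."""
    if customer.payment_history_score >= 80 and customer.overdue_invoices == 0:
        return "A"   # liberal credit terms can safely be offered
    if customer.payment_history_score >= 60 and customer.overdue_invoices <= 2:
        return "B"   # standard terms; monitor collections
    return "C"       # tight credit limit or cash in advance

for c in (Customer("Retailer X", 92, 0), Customer("Dealer Y", 55, 4)):
    print(c.name, "->", credit_rating(c))
```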
Firms in the past have mainly focused on improving the material flow in a supply chain using various innovative
methods like cross docking, Vendor Managed Inventory (VMI), Collaborative Planning, Forecasting and
Replenishment (CPFR) etc. Firms have also used IT solutions to automate the material flow. Today, they have
also begun to focus on improving the financial flow in the supply chain. Many firms have adopted best practices
of cash flow management to improve the financial flow. Many firms have automated some or all of the
elements of the financial flow in a supply chain through implementing ERP systems and cash flow management
solutions. However, most firms have not focused much on integrating the material and the financial flow in a
supply chain. By integrating material and financial flows, firms can remove the inefficiencies in the supply chain.
Integration of these two flows can be done in three different ways.
Linking of functional systems with financial systems. For example, by linking the procurement system with the
accounts payable system or the ERP system, the physical order information can be matched with the financial
information, thus reducing the errors arising due to improper information flow between the two systems. This
linking can also be extended to the supply chain partners thus enabling the physical order information flow to
closely match with the payment information flow. This enables increased collaboration between supply chain
partners.
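As a rough sketch of the matching that a linked procurement and accounts payable system performs, the fragment below compares a purchase order with the corresponding invoice before payment is released; the field names and tolerance are assumptions for illustration only.

```python
def match_order_to_invoice(purchase_order: dict, invoice: dict, tolerance: float = 0.01) -> list:
    """Return the discrepancies between the physical order data and the invoice;
    an empty list means the invoice can be passed for payment."""
    issues = []
    if purchase_order["po_number"] != invoice["po_number"]:
        issues.append("purchase order number mismatch")
    if purchase_order["quantity"] != invoice["quantity"]:
        issues.append("quantity mismatch")
    expected_amount = purchase_order["quantity"] * purchase_order["unit_price"]
    if abs(expected_amount - invoice["amount"]) > tolerance * expected_amount:
        issues.append("invoice amount outside tolerance")
    return issues

po = {"po_number": "PO-1001", "quantity": 500, "unit_price": 40.0}
inv = {"po_number": "PO-1001", "quantity": 500, "amount": 20_000.0}
print(match_order_to_invoice(po, inv) or "Matched: release payment")
```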
Linking supply chain partner's or customer's preferences and behavior with the financial elements. Firms can
track and analyze the behavior of supply chain partners and customers. Based upon the needs and
requirements, firms can provide financial options to the customers and supply chain partners. Suppose, a firm
orders a large consignment from a supplier. Then, the firm can provide the option of paying the amount through
traditional means like checks or through electronic means. The supplier can decide upon the payment option. If
the supplier wants a faster payment, he may opt for the electronic payment means.
Linking financial and physical flows based on business intelligence. Firms can set the pricing of the product and
payment options based on the customer's requirements and the existing market conditions. This may help the
firm in maximizing its revenue. This policy is well utilized by airline companies where flight ticket prices are
changed depending on supply and demand conditions.
In order to align financial and physical supply chains, firms need to reengineer the physical flow processes so
as to integrate them with the financial processes. Automation of financial processes is an area in which firms
have to focus. Integrating the financial flow with the material flow provides many benefits to the members of the
supply chain. Members can obtain the products as per their requirement and pay the supplier using a suitable
payment mode. With such integration, members share a common and full view of all their transactions,
increasing efficiency in the supply chain. Specific benefits for the members of the supply chain are:
• Suppliers can make accurate forecasts about working capital requirements and also product demand. Thus
inventory levels and working capital can be reduced as they have a better view about the situation. They
can resolve disputes easily as both the supplier and the customer share the same information about the
transaction. Payment processing can become faster. The processing costs due to personnel and paperwork
are reduced. Errors are minimized, thus helping the supplier to obtain correct payment.
• Buyers can benefit from perfect order delivery. This helps the buyer to forecast and plan effectively. Thus
the buyer can reduce working capital requirements to deal with the payables. With the automation of the
processes, buyers can reduce the time and costs in processing the invoices like routing for approval,
matching the invoices and payments.
•   Trade terms can be negotiated more effectively between the buyer and the supplier because of the
availability of precise information about a transaction. Buyers and suppliers have an accurate view about
the risk involved. Hence, the buyer and the seller can negotiate financing options like insurance, supplier
credit etc., more optimally.
INVESTMENT CENTRE
Cost Centre
It is a location, person or item of equipment in respect of which costs may be ascertained and related to cost
units for control purpose. Broadly speaking, a cost centre may be of two types: Personal cost centre which
consists of a person or group of persons; and Impersonal cost centre which consists of a location or item of
equipment (or group of these). From the standpoint of functions, a cost centre may be of two types: Production
cost centre, i.e., a cost centre in which production is carried on (this may embrace one specific operation, e.g.,
machining, or a continuous process, e.g., distillation), and Service cost centre, i.e., a cost centre which renders
services to the production cost centres.
When the output of an organization is a service rather than goods, it is usual to use some alternative term such
as support cost centre or utility cost centre for supporting services. If machines and/or persons carrying out similar operations are brought together, the cost centre is known as an operation cost centre. Again, when machines
and/or persons are grouped according to a specific process or a continuous sequence of operations, a cost
centre is termed as process cost centre.
Division of production, administration, selling and distribution and other functions into cost centres is necessary
for two purposes; (i) cost ascertainment, and (ii) cost control. Costs are ascertained by cost centres or cost
units or by both. For example, direct costs can be identified with the cost centres or cost units easily. Indirect
costs are allocated to the cost centres based on volume (e.g., direct labour hours, machine hours, etc.) or
activity (e.g., number of set-ups, inspections, material movements, etc.). Similarly, cost control is facilitated by
pinpointing responsibility through cost centres. In other words, different persons are allotted different cost
centres and a person is held responsible for the control of cost of the cost centre or centres running under him
only. It is in this sense that cost centres are also termed as responsibility centres.
The type, size and number of cost centres in an undertaking will depend upon the nature and size of the
business, attitude of the management towards cost ascertainment and cost control, and so on. However, it
should be noted that too many cost centres tend to be expensive while too few cost centres tend to defeat the
very purpose of accurate cost ascertainment and cost control.
Cost Unit
It is a quantitative unit of product or service in relation to which costs are ascertained. For ascertainment of
costs, it is necessary to express them in terms of physical measurement like number, weight, volume, area,
length or any other convenient unit. When a single type of unit does not serve the desired purpose, composite units
may be used for the purpose of cost measurement. For example, in transport costing, ton-miles or passenger-
miles are better measures than only tons or passengers as the latter do not take into account distance carried
or distance travelled. A few examples of cost units applicable to different industries are given below.
Illustration 4.1 Specify the methods of costing and cost units applicable to the following industries:
(i) Toy-making
(ii) Cement
(iii) Radio
(iv) Bicycle
(v) Ship-building
(vi) Hospital
Industry             Method of costing     Cost units
(i) Toy-making       Batch                 Per batch
(ii) Cement          Unit                  Per tonne or per bag
(iii) Radio          Multiple              Per radio or per batch
(iv) Bicycle         Multiple              Per bicycle
(v) Ship-building    Contract              Per ship
(vi) Hospital        Service               Per bed per day or per patient per day
Profit Centre
Profit is the difference between revenues and costs. Therefore, a profit centre represents a segment of a
business that is responsible for both revenues and costs. This may also be called a business centre, business
unit, or strategic business unit, depending upon the concept of management responsibility prevailing in the
entity concerned.
Investment Centre
An investment centre is a responsibility centre that is accountable for revenues, costs and investments. It is
defined as "a profit centre in which inputs are measured in terms of expenses and outputs are measured in
terms of revenues, and in which assets employed are also measured, the excess of revenue over expenditure
then being related to assets employed."
Thus, the relationship between cost, profit and investment centres may be stated as follows: an investment centre is the widest responsibility centre, embracing one or more profit centres, each of which in turn embraces one or more cost centres.
MATERIAL CONTROL
Material constitutes a substantial portion of the production cost in many industries. Sometimes, it may be the
major item of all the items constituting total cost. Therefore, it is natural that large amounts will be invested in it.
But there should be optimum level of investment for any asset, whether it is a plant, cash or inventories.
Inadequate inventories will disrupt production and result in loss of sales. All this calls for an effective material
control programme. The main objectives of material control will, therefore, be to ensure that materials are available:
(a) for use in production and production services, as and when required;
(b) for delivery to customers to fulfil orders for supplies from purchasers, if any.
ORGANIZATION
In order to exercise effective control on material, there should be proper co-operation and co-ordination in the
following departments:
1. Purchase
2. Receiving and Inspection
3. Stores
4. Production
5. Stock Control
6. Sales
7. Accounts
For instance, the Sales Manager will intimate as to the number or quantity of finished goods to be kept in hand
so that no customer is ever turned away or forced to wait because of lack of stock. Similarly, the Accountant or
the financial manager should ensure that only optimum stock of materials is kept in stock so that additional
funds may be channelled into more profitable investment.
The various steps involved in introducing a material control system can be broadly grouped under two heads
mentioned below:
Primary Steps
1. Classification and codification of material and fixation of stock levels in respect of each item of stock.
After making the classification in the above manner, stock levels in terms of quantities (such as the maximum level, minimum level, re-order level and danger level) are to be fixed in respect of each type of material.
2. Consulting and advising engineers about current and proposed product designs, programmes of production,
schedules of materials, tools and jigs, packaging, etc. to achieve desired quality specifications.
Standardization and simplification of material are to be done after giving due importance to substitution, if
necessary.
Operational Steps
After the primary steps are taken, it is necessary to ensure:
1. Control over:
(a) Purchasing
(b) Receiving and Inspection
(c) Storing and Issues
We have discussed earlier the role of EOQ and various stock levels under conditions of certainty and
uncertainty in controlling costs of inventory. We now discuss the other aspects of purchasing. Control over
purchasing means that the Purchase Requisition and the Purchase Order should be in writing in prescribed
forms and shall be duly authorized by the executive concerned. Further, the quality of materials should be
according to specification or design and price should relate to quality and market condition. In short, it should
ensure materials of right quality and quantity at the right price from the right source and at the right time.
Control over receiving and inspection will ensure that:
(a) Materials received are checked with the Delivery Note and the copy of the Purchase Order,
(b) Quantity as revealed by physical verification agrees with that shown in the Delivery Note, and
(c) Quality is as per specification mentioned in the Purchase Order.
On the other hand, control on storing and issues will include the following:
(a) To ensure that the goods received are in accordance with the instruction detailed in the Purchase
Order and Goods Received Note.
(b) To ensure that the materials received are placed in appropriate bins, racks, etc. and quantities are
entered in the respective Bin Cards to facilitate easy location and perpetual record of stores received.
(c) Material to be issued only against properly authorized requisitions and appropriate Bin Card should
be credited with the quantity issued.
(d) Material returned from shops should be checked with properly authorized Shop Credit Note and
appropriate Bin Card to be debited with the quantity received.
(e) Checking the Bin Card balance with that shown by Stores Ledger and physical verification.
2. Proper implementation of the Perpetual Inventory System. The perpetual inventory system is an aid to
material control. It represents a system of records which reflects the physical movement of stocks and their
current balance.
4. Establishment of standards for materials and analysis of material variance according to their originating
causes. Standards for materials should be fixed after considering factors like specification, size, quality of
materials, effect on labour cost, tools, machines, etc. A standard for scrap, waste, spoilage, etc. should also
be laid down for all major materials used in production. Detailed analysis in respect of minor materials may
not be feasible and in such a case a monetary limit on consumption may be laid down.
Material consumed should be properly recorded and compared with standards fixed in order to develop material
variances, viz., the material price variance and the material usage variance.
The analysis of variance will pin down responsibilities so that proper action can be taken where necessary.
5. Comparison of material costs of different periods with the help of different ratios. The selection of proper
ratios will depend upon their suitability to the requirements for control purposes. However, as a general
guide, the following may be mentioned:
JIT purchasing is considered to be one of the modern techniques used for management of costs associated
with inventories. JIT purchasing is the purchase of materials or goods such that delivery immediately precedes
use or demand. In an extreme case, no inventories (raw materials, work-in-progress or finished goods in case
of a manufacturing firm; goods for resale for a retailer) are held.
Very often it is difficult to estimate the relevant inventory carrying costs probably because the accounting
system does not routinely collect such costs. It is easy to overlook some spoilage, obsolescence, warehousing,
tax, insurance, and opportunity cost of capital. When management estimates these costs correctly, carrying
costs of inventories become higher than expected and consequently, EOQ declines and JIT becomes more
attractive.
The success of JIT purchasing depends on costs of quality and timely deliveries. Defective materials and late
deliveries may disrupt the operation. So companies adopting JIT purchasing must select their suppliers very
carefully and pay attention to developing long-run relationship with them. It should be emphasized that in the
evaluation of suppliers, price is only one of the components. The main benefits of JIT purchasing are as follows:
1. JIT reduces the inventory carrying costs, e.g., costs of spoilage and obsolescence, materials handling
and breakage, warehousing, tax, insurance and opportunity cost of capital.
2. Due to frequent purchase of materials or goods, the issue price is likely to be closer to the
replacement price. This facilitates pricing decision.
3. It helps to develop a long-run relationship with the suppliers. This will reduce the cost of quality and
stock-out costs.
4. In retail businesses using JIT purchasing, an attempt is made to extend daily deliveries to as many items as possible, so that the goods are stored in the warehouse or on store shelves for a minimum period before they are sold to the customers. For example, milk and bread are supplied daily to the retailers. As a result, customers get better quality products.
JIT purchasing, however, may result in stock-out costs, i.e., cost of not having materials or goods. Examples of
such costs are loss of contribution, loss of goodwill, loss of customers, etc.
STOCK TURNOVER
It has been stated earlier that, to minimize the amount of investment, raw material stocks may be classified according to the rate at which they move, i.e., into fast-moving and slow-moving items.
The Stock Turnover Ratio will facilitate such a classification and it will act as a tool for exercising control on raw
material inventories.
The turnover ratio should normally be 2. A low ratio indicates bad buying, accumulation of obsolete stock, carrying of too much stock, etc. On the other hand, a high ratio is an indicator of fast-moving stock and,
therefore, speaks of better inventory management.
Illustration 4.2 The following information is available from the books of a company for 2005:
                   Material A      Material B
                      (Rs.)           (Rs.)
Opening Stock         1,400           2,000
Purchases            23,000           3,600
Closing Stock         1,000           2,400
Calculate the material turnover ratio of the above types of materials and determine which of the two materials is
more fast-moving.
Material A: Material consumed = Rs.1,400 + Rs.23,000 − Rs.1,000 = Rs.23,400; Average stock = (Rs.1,400 + Rs.1,000)/2 = Rs.1,200; Stock turnover = 23,400/1,200 = 19.5 times p.a., i.e., the stock is held for 365/19.5 ≈ 19 days.
Material B: Material consumed = Rs.2,000 + Rs.3,600 − Rs.2,400 = Rs.3,200; Average stock = (Rs.2,000 + Rs.2,400)/2 = Rs.2,200; Stock turnover = 3,200/2,200 ≈ 1.5 times p.a., i.e., the stock is held for 365/1.5 ≈ 243 days.
A stock turnover of 19.5 times p.a. shows that an average stock is being held for less than one month (i.e., 19
days). On the other hand, a stock turnover of 1.5 times p.a. shows that an average stock is being held for 8
months (or for 243 days). Therefore, material B is very slow-moving material while material A is very fast-
moving.
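The calculation in Illustration 4.2 can be sketched in a few lines of code, assuming the definitions used above (material consumed divided by the simple average of opening and closing stock):

```python
def stock_turnover(opening, purchases, closing, days_in_year=365):
    """Return (turnover ratio, average holding period in days)."""
    consumed = opening + purchases - closing
    average_stock = (opening + closing) / 2
    turnover = consumed / average_stock
    return turnover, days_in_year / turnover

materials = {"A": (1_400, 23_000, 1_000), "B": (2_000, 3_600, 2_400)}
for name, figures in materials.items():
    turnover, days = stock_turnover(*figures)
    print(f"Material {name}: {turnover:.1f} times p.a., held about {days:.0f} days")
# Material A: 19.5 times, about 19 days. Material B: about 1.5 times, about 251 days
# (roughly 243 days if, as in the illustration, the turnover is first rounded to 1.5).
```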
An alternative method of measuring stock turnover is one involving the use of maximum and minimum stock
levels. This is measured as follows:
Stock turnover = Material consumed / [(Maximum stock level + Minimum stock level)/2]
However, the formula that uses re-order or economic order quantity is considered a more refined method of
measuring stock turnover. The formula is:
Stock turnover = Material consumed / (Minimum stock level + ½ x EOQ)
Illustration 4.3 The bin card for Material M 27 shows the following position:
In examinations, the method to be used depends on the availability of information. Information permitting, the
students may use the last-mentioned formula which uses the economic order quantity. On the other hand, if
stock control system involving stock levels is not in operation, one has to depend on the first-mentioned
formula.
PERPETUAL INVENTORY
It represents a system of records maintained by the controlling department, which reflects the physical
movement of stocks and their current balance. Under this method, stores balances are recorded after every
receipt and issue.
A perpetual inventory is usually checked by a programme of continuous stock-taking. But the two terms
'perpetual inventory' and 'continuous stock-taking' should not be considered synonymous. Perpetual inventory
means the system of records, whereas continuous stock-taking means the physical checking of those records
with actual stocks.
The perpetual inventory system is intended as an aid to material control. In other words, when the records are
maintained up-to-date, the balance shown by the Bin Card or Stores Ledger should agree with ground or
physical balance. When there are discrepancies, proper investigation will have to be made and a report will
have to be submitted (see Fig. 3.27). If the physical balance is greater than the balance shown by the Bin Card
or Stores Ledger, a Debit Note is to be prepared and stock records adjusted accordingly. Similarly, in case of
shortage of stock, a Credit Note is to be prepared. (The treatment of surplus or deficiency of stock in accounts
has already been explained elsewhere in accordance with originating causes.)
1. The stock-taking programme is divided into a number of functions such as counting, weighing, measuring,
listing, etc., and work is distributed to different members of the team.
2. Different sections of the Store are taken up by rotation. For this purpose, a list showing priority of sections
or stock items or both is prepared. Advance notice is given to the storekeeping staff concerned whenever a particular stock item is to be verified each day.
3. Stores received but awaiting inspection are not mixed up with regular stores at the time of verification.
4. The physical stock, after counting, weighing or measuring, as the case may be, is properly recorded. Any
one of the following documents may be used for this purpose:
(i) Bin Card: The balance as per physical verification together with date of verification is entered usually in
different ink (preferably red) in the line below the last entry in the balance column.
(ii) Inventory Tag (see Fig. 3.26): It consists of two portions. The top portion is fastened to bins to indicate
that the item has been verified. Any bin not having an inventory tag would indicate that the item is yet to
be verified. The lower portions of the inventory tags are detached at the end of counting and checking of inventory, and are collected together to constitute the inventory records.
Inventory Tag
(iii) Stock Verification Sheets : Separate sheets may be maintained to record the results of stock verification.
When this method of recording is followed, the sheets are maintained date-wise so as to indicate a
chronological list of items verified. The balance as per bin card is also entered in it by the stock verifier for
comparison.
The physical balance and issues and receipts during stock verification are recorded in it.
7. As a result of (6) above, the investment in stock cannot exceed the amount arranged for. Thus, it avoids the
disadvantage of carrying excessive stocks.
Figure Stores Audit Note
The usual reasons for discrepancy are breakage, pilferage, evaporation, breaking bulk, absorption of moisture,
short or overissue, etc. Sometimes, discrepancy may arise due to clerical errors, viz. wrong posting or non-
posting of entries. In case of clerical errors, corrections are made without any difficulty.
PERIODIC INVENTORY
This refers to a system where stock-taking is usually done periodically, say once or twice in a year. In case of
materials of small value, the periodic inventory system is adopted for determining the physical movement of
stock and its closing balance as on a particular date. Thus, even companies adopting ABC Analysis and the Perpetual Inventory System for some of their stock items may follow the periodic inventory system for others. Again,
when the Perpetual Inventory System becomes very costly (say, for slow-moving items of low value), periodic
inventory is the only alternative. But the oft-quoted disadvantages of the system are:
1. In the absence of a continuous check, there is possibility of greater fraud, discrepancy, etc.
2. Discrepancies and fraud, if any, are revealed only after stock-counting at the end of a certain period and,
therefore, there is little scope for taking preventive action.
3. Stock-taking will take a considerable time and this may affect production and other important work. Interim
Profit and Loss Accounts and Balance Sheets cannot also be prepared for want of stock figures.
INTRODUCTION
Inventory Management involves the control of assets being produced for the purposes of sale in the normal
course of the company's operations. Inventories include raw material inventory, work-in process inventory and
finished goods inventory. The goal of effective inventory management is to minimize the total costs - direct and
indirect - that are associated with holding inventories. However, the importance of inventory management to the
company depends upon the extent of investment in inventory. It is industry-specific.
Inventories are a component of the firm's working capital and, as such, represent a current asset. Some
characteristics that are important in the broad context of working capital management include:
1. Current Asset: It is assumed that inventories will be converted to cash in the current accounting cycle,
which is normally one year. In some cases, this is not entirely true; for example, a vintner may require that
the wine be aged in casks or bottles for many years. Or, a manufacturer of fine pianos may have a
production process that exceeds one year. In spite of these and similar problems, we will view all
inventories as being convertible into cash in a single year.
2. Level of Liquidity: Inventories are viewed as a source of near cash. For most products, this description is
accurate. At the same time, most firms hold some slow-moving items that may not be sold for a long time.
With economic slowdowns or changes in the market for goods, the prospects for sale of entire product lines
may be diminished. In these cases, the liquidity aspects of inventories become highly important to the
manager of working capital. At a minimum, the analyst must recognize that inventories are the least liquid of
current assets. For firms with highly uncertain operating environments, the analyst must discount the
liquidity value of inventories significantly.
3. Liquidity Lags: Inventories are tied to the firm's pool of working capital in a process that involves three
specific lags, namely:
a. Creation Lag: In most cases, inventories are purchased on credit, creating an account payable. When
the raw materials are processed in the factory, the cash to pay production expenses is transferred at
future times, perhaps a week, month, or more. Labor is paid on payday. The utility that provided the
electricity for manufacturing is paid after it submits its bill. Or for goods purchased for resale, the firm
may have 30 or more days to hold the goods before payment is due. Whether manufactured or
purchased, the firm will hold inventories for a certain time period before payment is made. This liquidity
lag offers a benefit to the firm.
b. Storage Lag: Once goods are available for resale, they will not be immediately converted into cash.
First, the item must be sold. Even when sales are moving briskly, a firm will hold inventory as a back-
up. Thus, the firm will usually pay suppliers, workers, and overhead expenses before the goods are
actually sold. This lag represents a cost to the firm.
c. Sale Lag: Once goods have been sold, they normally do not create cash immediately. Most sales
occur on credit and become accounts receivable. The firm must wait to collect its receivables.
This lag also represents a cost to the firm.
4. Circulating Activity: Inventories are in a rotating pattern with other current assets. They get converted into
receivables, which generate cash that is invested again in inventory to continue the operating cycle.
PURPOSE OF INVENTORIES
The purpose of holding inventories is to allow the firm to separate the processes of purchasing, manufacturing,
and marketing of its primary products. The goal is to achieve efficiencies in areas where costs are involved and
to achieve sales at competitive prices in the market place. Within this broad statement of purpose, we can
identify specific benefits that accrue from holding inventories.
1. Avoiding Lost Sales: Without goods on hand which are ready to be sold, most firms would lose business.
Some customers are willing to wait, particularly when an item must be made to order or is not widely
available from competitors. In most cases, however, a firm must be prepared to deliver goods on demand.
Shelf stock refers to items that are stored by the firm and sold with little or no modification to customers. An
automobile is an item of shelf stock. Even though customers may specify minor variations, the basic item
leaves a factory and is sold as a standard item. The same situation exists for many items of heavy
machinery, consumer products, and light industrial goods.
2. Gaining Quantity Discounts: In return for making bulk purchases, many suppliers will reduce the price
of supplies and component parts. The willingness to place large orders may allow the firm to achieve
discounts on regular prices. These discounts will reduce the cost of goods sold and increase the profits
earned on a sale.
3. Reducing Order Costs: Each time a firm places an order, it incurs certain expenses. Forms have to be
completed, approvals have to be obtained, and goods that arrive must be accepted, inspected, and
counted. Later, an invoice must be processed and payment made. Each of these costs will vary with the
number of orders placed. By placing fewer orders, the firm will pay less to process each order.
4. Achieving Efficient Production Runs: Each time a firm sets up workers and machines to produce an
item, startup costs are incurred. These are then absorbed as production begins. The longer the run, the
smaller the costs to begin production of the goods. As an example, suppose it costs Rs. 12,000 to move
machinery and begin an assembly line to produce electronic printers. If 1,200 printers are produced in a
single three-day run, the cost of absorbing the startup expenses is Rs.10 per unit (12,000/1,200). If the run
could be doubled to 2,400 units, the absorption cost would drop to Rs.5 per unit (12,000/2,400). Frequent
setups produce high startup costs; longer runs involve lower costs.
These benefits arise because inventories provide a "buffer" between purchasing, producing, and marketing
goods. Raw materials and other inventory items can be purchased at appropriate times and in proper amounts
to take advantage of economic conditions and price incentives. The manufacturing process can occur in
sufficiently long production runs and with pre-planned schedules to achieve efficiency and economies. The
sales force can respond to customer needs and demands based on existing finished products. To allow each
area to function effectively, inventory separates the three functional areas and facilitates the interaction among
them.
Figure 4.3: Firms hold inventories, which helps to separate the purchase, produce and sell functions and thereby to avoid losses of sales, gain quantity discounts, reduce order costs and achieve efficient production runs.
5. Reducing Risk of Production Shortages: Manufacturing firms frequently produce goods with hundreds or
even thousands of components. If any of these are missing, the entire production operation can be halted,
with consequent heavy expenses. To avoid starting a production run and then discovering the shortage of a
vital raw material or other component, the firm can maintain larger than needed inventories.
TYPES OF INVENTORY
1. Raw Materials Inventory: This consists of basic materials that have not yet been committed to production
in a manufacturing firm. Raw materials that are purchased from firms to be used in the firm's production
operations range from iron ore awaiting processing into steel to electronic components to be
incorporated into stereo amplifiers. The purpose of maintaining raw material inventory is to uncouple the
production function from the purchasing function so that delays in shipment of raw materials do not cause
production delays.
2. Stores and Spares: This category includes those products which are accessories to the main products
produced for the purpose of sale. Examples of stores and spares items are bolts, nuts, clamps, screws, etc.
These spare parts are usually bought from outside or sometimes they are manufactured in the company
also.
3. Work-in-Process Inventory: This category includes those materials that have been committed to the
production process but have not been completed. The more complex and lengthy the production process,
the larger will be the investment in work-in-process inventory.
Its purpose is to uncouple the various operations in the production process so that machine failures and
work stoppages in one operation will not affect the other operations.
4. Finished Goods Inventory: These are completed products awaiting sale. The purpose of a finished goods
inventory is to uncouple the productions and sales functions so that it no longer is necessary to produce the
goods before a sale can occur.
Table 4.1 provides the details of the investment in inventories in the confectionery industry.
The effective management of inventory involves a trade off between having too little and too much inventory. In
achieving this trade off, the Finance Manager should realize that costs may be closely related. To examine
inventory from the cost side, five categories of costs can be identified of which three are direct costs that are
immediately connected to buying and holding goods and the last two are indirect costs which are losses of
revenues that vary with differing inventory management decisions.
Material Costs: These are the costs of purchasing the goods including transportation and handling costs.
Ordering Costs: Any manufacturing organization has to purchase materials. In that event, the ordering costs
refer to the costs associated with the preparation of purchase requisition by the user department, preparation of
purchase order and follow-up measures taken by the purchase department, transportation of materials ordered
for, inspection and handling at the warehouse for storing. At times even demurrage charges for not lifting the
goods in time are included as part of ordering costs. Sometimes, some of the components and/or material
required for production may have facilities for manufacture internally. If it is found to be more economical to
manufacture such items internally, then ordering costs refer to the costs associated with the preparation of
requisition forms by the user department, set-up costs to be incurred by the manufacturing department and
transport, inspection and handling at the warehouse of the user department. By and large, ordering costs
remain more or less constant irrespective of the size of the order although transportation and inspection costs
may vary to a certain extent depending upon order size. But this is not going to significantly affect the behavior
of ordering costs. As ordering costs are considered invariant to the order size, the total ordering costs can be
reduced by increasing the size of the orders. Suppose the cost per order is Rs.100 and the company uses 1,200 units of a material during the year. The size of the order and the total ordering costs to be incurred by the company are given below.

Order size (units)    Number of orders    Total ordering costs (Rs.)
      100                    12                    1,200
      150                     8                      800
      200                     6                      600

From the above example, it can be easily seen that a company can reduce its total ordering costs by increasing the order size, which in turn will reduce the number of orders. However, a reduction in ordering costs is usually
followed by an increase in carrying costs to be discussed now.
Carrying Costs: These are the expenses of storing goods. Once the goods have been accepted, they become
part of the firm's inventories. These costs include insurance, rent/depreciation of warehouse, salaries of
storekeeper, his assistants and security personnel, financing cost of money locked-up in inventories,
obsolescence, spoilage and taxes. By and large, carrying costs are considered to be a given percentage of the
value of inventory held in the warehouse, despite some fixed elements of costs which comprise only a small
portion of total carrying costs. Approximately, carrying costs are considered to be around 25 percent of the
value of inventory held in storage. The greater the investment in inventory, the greater are the carrying costs. In the example considered in the case of ordering costs, let us assume that the price per unit of material is Rs.40 and that, on an average, about half of the order quantity will be held in storage. Then, the average values of inventory for order sizes of 100, 150 and 200 units, along with the carrying cost at 25 percent of the average value of inventory held in storage, are given below.

Order size (units)    Average inventory (units)    Average value of inventory (Rs.)    Carrying cost @ 25% (Rs.)
      100                        50                             2,000                            500
      150                        75                             3,000                            750
      200                       100                             4,000                          1,000

From the above calculations, it can be easily seen that as the order size increases, the carrying cost also increases in a directly proportionate manner.
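The two tables above can be reproduced with a short calculation, using the figures of the running example (annual usage 1,200 units, Rs.100 per order, Rs.40 per unit and carrying cost at 25 percent of the average inventory value):

```python
ANNUAL_USAGE = 1_200    # units per year
COST_PER_ORDER = 100    # Rs. per order
UNIT_PRICE = 40         # Rs. per unit
CARRYING_RATE = 0.25    # carrying cost as a fraction of the average inventory value

for order_size in (100, 150, 200):
    ordering_cost = (ANNUAL_USAGE / order_size) * COST_PER_ORDER
    carrying_cost = (order_size / 2) * UNIT_PRICE * CARRYING_RATE
    print(f"Order size {order_size:>3}: ordering Rs.{ordering_cost:,.0f}, "
          f"carrying Rs.{carrying_cost:,.0f}, total Rs.{ordering_cost + carrying_cost:,.0f}")
# Ordering costs fall (1,200 -> 800 -> 600) while carrying costs rise (500 -> 750 -> 1,000),
# which is the trade-off the EOQ model resolves.
```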
Cost of Funds Tied up with Inventory: Whenever a firm commits its resources to inventory, it is using funds
that otherwise might have been available for other purposes. The firm has lost the use of funds for other profit
making purposes. This is its opportunity cost. Whatever the source of funds, inventory has a cost in terms of
financial resources. Excess inventory represents unnecessary cost.
Cost of Running out of Goods: These are costs associated with the inability to provide materials to the
production department and/or inability to provide finished goods to the marketing department as the requisite
inventories are not available. In other words, the requisite items have run out of stock for want of timely
replenishment. These costs have both quantitative and qualitative dimensions. These are, in the case of raw
materials, the loss of production due to stoppage of work, the uneconomical prices associated with 'cash'
purchases and the set-up costs which can be quantified in monetary terms with a reasonable degree of
precision. As a consequence of this, the production department may not be able to reach its target in providing
finished goods for sale. Its cost has qualitative dimensions as discussed below.
When marketing personnel are unable to honour their commitment to the customers in making finished goods
available for sale, the sale may be lost. This can be quantified to a certain extent. However, the erosion of the
good customer relations and the consequent damage done to the image and goodwill of the company fall into
the qualitative dimension and elude quantification. Even if the stock-out cost cannot be fully quantified, a
reasonable measure based on the loss of sales for want of finished goods inventory can be used with the
understanding that the amount so measured cannot capture the qualitative aspects.
As explained above, while the total ordering costs can be decreased by increasing the size of order, the
carrying costs increase with the increase in order size indicating the need for a proper balancing of these two
types of costs behaving in opposite directions with changes in order size.
Again, if a company wants to avert stock-out costs it may have to maintain larger inventories of materials and
finished goods which will result in higher carrying costs. Here also proper balancing of the costs becomes
important.
Thus, the importance of effective inventory management is directly related to the size of the investment in
inventory. To manage its inventories effectively, a firm should use a systems approach to inventory
management. A systems approach considers in a single model all the factors that affect the inventory. A system for effective inventory management involves three subsystems, namely economic order quantity, reorder point and stock level.
As the lead time (i.e., the time required for procurement of material) is assumed to be zero, an order for replenishment is made when the inventory level reduces to zero. The level of inventory over time follows the pattern shown in Figure 4.4.
From figure 4.4 it can be noticed that the level of inventory will be equal to the order quantity (Q units) to start
with. It progressively declines (though in a discrete manner) to level O by the end of period 1. At that point an
order for replenishment will be made for Q units. In view of zero lead time, the inventory level jumps to Q and a
similar procedure occurs in the subsequent periods. As a result of this the average level of inventory will remain
at (Q/2) units, the simple average of the two end points Q and Zero.
From the above discussion the average level of inventory is known to be (Q/2) units.
From the previous discussion, we know that as order quantity (Q) increases, the total ordering costs will
decrease while the total carrying costs will increase.
The economic order quantity, denoted by Q*, is that value at which the total cost of both ordering and carrying
will be minimized. It should be noted that the total costs associated with inventory are given by
Total costs = Total ordering costs + Total carrying costs = (U/Q) x F + (Q/2) x C
where U is the annual usage, Q the order quantity, F the cost per order and C the carrying cost per unit per annum (which may also be expressed as a percentage of the unit price P), and where the first expression of the equation represents the total ordering costs and the second expression the total carrying costs. The behavior of ordering costs, carrying costs and total costs for different levels of order quantity (Q) is depicted in Figure 4.5.
Figure 4.5
From the figure, it can be seen that the total cost curve reaches its minimum at the point of intersection between the ordering costs curve and the carrying costs line. The value of Q corresponding to this point will be the economic order quantity Q*. The EOQ formula can be derived as follows. The order quantity Q becomes the EOQ when the total ordering costs at Q are equal to the total carrying costs. Using the notation, this amounts to stating:
(U/Q*) x F = (Q*/2) x C, which on solving for Q* gives Q* = √(2UF/C)
In the above formula, when 'U' is considered as the annual usage of material, the value of Q* indicates the size
of the order to be placed for the material which minimizes the total inventory-related costs. When 'U' is
considered as the annual demand Q* denotes the size of production run.
Suppose a firm expects a total demand for its product over the planning period to be 10,000 units, while the
ordering cost per order is Rs.100 and the carrying cost per unit is Rs.2. Substituting these values,
Q* = √((2 x 10,000 x 100)/2) = √1,000,000 = 1,000 units
Thus if the firm orders in 1,000 unit lot sizes, it will minimize its total inventory costs.
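The computation just described can be sketched as follows, using the same figures (U = 10,000 units, F = Rs.100 per order and C = Rs.2 carrying cost per unit per annum):

```python
from math import sqrt

def economic_order_quantity(annual_usage, cost_per_order, carrying_cost_per_unit):
    """EOQ = sqrt(2UF/C): the order size that minimizes ordering plus carrying costs."""
    return sqrt(2 * annual_usage * cost_per_order / carrying_cost_per_unit)

U, F, C = 10_000, 100, 2
q_star = economic_order_quantity(U, F, C)
total_cost = (U / q_star) * F + (q_star / 2) * C
print(f"EOQ = {q_star:.0f} units, total inventory cost = Rs.{total_cost:,.0f}")
# EOQ = 1,000 units; total cost = Rs.2,000 (Rs.1,000 ordering plus Rs.1,000 carrying).
```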
The major weaknesses of the EOQ model are associated with several of its assumptions, in spite of which the
model tends to yield quite good results. Where its assumptions have been dramatically violated, the EOQ
model can generally be easily modified to accommodate the situation. The model's assumptions are as follows:
1. Constant or uniform demand: Although the EOQ model assumes constant demand, demand may vary
from day-to-day. If demand is stochastic that is, not known in advance - the model must be modified
through the inclusion of a safety stock.
2. Constant unit price: The EOQ formula derived is based on the assumption that the purchase price Rs.P
per unit of material will remain unaltered irrespective of the order size. Quite often, bulk purchase discounts
or quantity discounts are offered by suppliers to induce customers for buying in larger quantities.
The inclusion of variable prices resulting from quantity discounts can be handled quite easily through a
modification of the original EOQ model, redefining total costs and solving for the optimum order quantity.
3. Constant carrying costs: Unit carrying costs may vary substantially as the size of the inventory rises,
perhaps decreasing because of economies of scale or storage efficiency or increasing as storage space
runs out and new warehouses have to be rented. This situation can be handled through a modification in
the original model similar to the one used for variable unit price.
4. Constant ordering costs: While this assumption is generally valid, its violation can be accommodated by
modifying the original EOQ model in a manner similar to the one used for variable unit price.
5. Instantaneous delivery: If delivery is not instantaneous, which is generally the case, the original EOQ
model must be modified by including of a safety stock.
6. Independent orders: If multiple orders result in cost savings by reducing paperwork and transportation
cost, the original EOQ model must be further modified. While this modification is somewhat complicated,
special EOQ models have been developed to deal with this.
These assumptions have been pointed out to illustrate the limitations of the basic EOQ model and the ways in
which it can be easily modified to compensate for them. Moreover, an understanding of the limitations and
assumptions of the EOQ model will provide the Finance Manager with a strong base for making inventory
decisions.
Inflation affects the EOQ model in two major ways. First, while the EOQ model can be modified to assume
constant price increases, many times major price increases occur only once or twice a year and are announced
ahead of time. If this is the case, the EOQ model may lose its applicability and may be replaced with
anticipatory buying - that is buying in anticipation of a price increase in order to secure the goods at a lower
cost. Of course, as with most decisions, there are trade offs associated with anticipatory buying. The costs are
the added carrying costs associated with the inventory that you would not normally be holding. The benefits of
course, come from buying the inventory at a lower price. The second way inflation affects the EOQ model is
through increased carrying costs. As inflation pushes interest rates up, the cost of carrying inventory increases.
In the EOQ model this means that C increases, which results in a decline in the optimal economic order
quantity.
Determination of Optimum Production Quantity: The EOQ Model can be extended to production runs to
determine the optimum production quantity. The two costs involved in this process are: (i) set up cost and (ii)
inventory carrying cost. The set-up cost is of the nature of fixed cost and is to be incurred at the time of
commencement of each production run. The larger the size of the production run, the lower will be the set-up
cost per unit. However, the carrying cost will increase with an increase in the size of the production run. Thus,
there is an inverse relationship between the set-up cost and inventory carrying cost. The optimum production
size is at that level where the total of the set-up cost and the inventory carrying cost is the minimum. In other
words, at this level the two costs will be equal.
The formula for EOQ can also be used for determining the optimum production quantity as given below:
E = √((2 x Annual output x Set-up cost per production run) / Carrying cost per unit per annum)
Illustration 4.4
Arvee Industries desires an annual output of 25,000 units. The set-up cost for each production run is Rs.80.
The cost of carrying inventory per unit per annum is Rs.4. The optimum production quantity per production run
(E) is:
E = √((2 x 25,000 x 80)/4) = √1,000,000 = 1,000 units
Modified EOQ to include Varying Unit Prices: Bulk purchase discount is offered when the size of the order is
at least equal to some minimum quantity specified by the supplier. The question may arise whether Q*, EOQ
calculated on the basis of a price without discount will still remain valid even after reckoning with the discount.
While no general answer can be given to such a question we can certainly say that a general approach using
the EOQ framework will prove useful in decision-making - whether to avail oneself of the discount offered and if
so what should be the optimal size of the order.
The first step under the general approach is to calculate Q*, EOQ without considering the discount. Let us
suppose Q' is the minimum order-size stipulated by the supplier for utilizing discount. After calculating Q* the
same will be compared to Q'. Only three possibilities can arise out of the comparison.
In case Q* is greater than or equal to Q', then Q* will remain valid even in the changed situation caused by the
quantity discount offered. This is so because the company can avail itself of the benefit of quantity discount with
an order-size of Q* as it is at least equal to Q', the minimum stipulated order size for utilizing discount.
Only in the case of Q* being less than Q' the need for the calculation of an optimal order size arises as the
company cannot avail itself of the discount with the order size of Q*. An incremental analysis can be carried out
to consider the financial consequences of availing oneself of discount by increasing the order-size to Q'. A
decision to increase the order-size is warranted only when the incremental benefits exceed the incremental
costs arising out of the increased order-size.
The incremental benefits will have two components. First, there is the total amount of discount available on the quantity of material to be used: if we assume a discount of Rs.D per unit of material, then the total discount on the annual usage of U units amounts to Rs. U x D.
Secondly, with an increase in order-size from Q* to Q', the number of orders will be reduced. As the ordering
cost is assumed to be Rs.F per order irrespective of the order size, there will be a reduction in the total ordering
cost. Thus, the reduction in ordering cost
= (the difference between the number of orders with order sizes Q* and Q') x (the cost per order of Rs.F)
= Rs. [(U/Q*) − (U/Q')] x F
Thus, the total incremental benefits will be the sum of the above two expressions, i.e., Rs. U x D + Rs. [(U/Q*) − (U/Q')] x F.
With an increase in the order-size, there is likely to be an increase in the average value of inventory even after
reckoning with the discount per unit of material of Rs.D which will go to reduce the price per unit for the
valuation of inventory. The increase in the average value of inventory will result in higher incidence of carrying
cost, assumed to be C percent of the average value of inventory.
Incremental carrying cost = Rs. [Q'(P − D)C]/2 − Rs. [Q*PC]/2
The net incremental benefit can be obtained by subtracting the incremental carrying cost from the total
incremental benefits. This is given by the expression.
= Rs. U x D + Rs. [(U/Q*) − (U/Q')] x F − Rs. [Q'(P − D)C − Q*PC]/2
If the net incremental benefits are positive, then the optimal order quantity becomes Q'. Otherwise Q* will
continue to remain valid even in a situation of bulk purchase discount. A numerical illustration of the procedure to be adopted in such a situation is given below.
Illustration 4.5
The annual usage of a raw material is 40,000 units for the Hy Fly Co., Ltd. The price of the raw material is
Rs.50 per unit. The ordering cost is Rs.200 per order and the carrying cost is 20 percent of the average value of
inventory. The supplier has recently introduced a discount of 4 percent on the price of material for orders of
1,500 units and above. What was the company's E.O.Q. prior to the introduction of discount? Should the
company opt to avail itself of the discount? What would be the optimal order size if the company opts to avail itself of the discount offered?
Let us first arrange the data contained in the problem in accordance with the notation familiar to us by now.
U = 40,000 units
F = Rs.200 per order
P = Rs.50 per unit
D = Rs.2 per unit
C = 0.20
Prior to the introduction of the discount, the company's EOQ is Q* = √(2UF/PC) = √((2 x 40,000 x 200)/(50 x 0.20)) = √1,600,000 ≈ 1,265 units. For utilizing the discount, the minimum order size is Q' = 1,500 units. As Q* is less than Q', we have to calculate the incremental benefits and incremental costs.
Total discount on annual usage = U x D = 40,000 x Rs.2 = Rs.80,000 .......(1)
Reduction in ordering cost = [(U/Q*) − (U/Q')] x F = (32 − 27) x Rs.200 = Rs.1,000 .......(2)
Incremental carrying cost = [Q'(P − D)C]/2 − [Q*PC]/2 = Rs.7,200 − Rs.6,325 = Rs.875 .......(3)
As the net incremental benefit, Rs.80,000 + Rs.1,000 − Rs.875 = Rs.80,125, is positive, the company should opt to avail itself of the discount offered. The optimal order size will be 1,500 units, the minimum order size required for availing of the discount.
From the illustration, it is clear that although EOQ value of 1,265 units (Q*) is not relevant in the present
situation of bulk purchase discount, the general framework of the EOQ model has provided the necessary basis
for subsequent calculations and the decision reached therefrom.
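The incremental analysis of Illustration 4.5 can be sketched as below; the number of orders is rounded to whole numbers, as in the solution above.

```python
from math import sqrt

U, F, P, C = 40_000, 200, 50.0, 0.20    # annual usage, cost per order, price per unit, carrying rate
D, Q_DISCOUNT = 2.0, 1_500              # discount per unit, minimum order size for the discount

q_star = sqrt(2 * U * F / (P * C))                                  # EOQ without discount, about 1,265 units
discount_benefit = U * D                                            # (1) Rs.80,000
ordering_saving = (round(U / q_star) - round(U / Q_DISCOUNT)) * F   # (2) (32 - 27) x Rs.200 = Rs.1,000
incremental_carrying = (Q_DISCOUNT * (P - D) * C) / 2 - (q_star * P * C) / 2   # (3) about Rs.875
net_benefit = discount_benefit + ordering_saving - incremental_carrying

print(f"Q* = {q_star:.0f} units, net incremental benefit = Rs.{net_benefit:,.0f}")
# The net benefit is positive (about Rs.80,125), so the order size is raised to 1,500 units.
```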
In the EOQ model discussed we have made the assumption that the lead time for procuring material is zero.
Consequently, the reorder point for replenishment of stock occurs when the level of inventory drops down to
zero. In view of instantaneous replenishment of stock, the level of inventory jumps to the original level from zero
level. In real life situations one never encounters a zero lead time. There is always a time lag between the date of placing an order for material and the date on which the materials are received. As a result the reorder level is
always at a level higher than zero, and if the firm places the order when the inventory reaches the reorder point,
the new goods will arrive before the firm runs out of goods to sell. The decision on how much stock to hold is
generally referred to as the order point problem, that is, how low should the inventory be depleted before it is
reordered.
The two factors that determine the appropriate order point are the procurement or delivery time stock which is
the inventory needed during the lead time (i.e., the difference between the order date and the receipt of the
inventory ordered) and the safety stock which is the minimum level of inventory that is held as a protection
against shortages.
∴ Reorder Point = Normal consumption during lead time + Safety Stock.
Several factors determine how much delivery time stock and safety stock should be held. In summary, the efficiency of the replenishment system affects how much delivery time stock is needed. Since the delivery time
stock is the expected inventory usage between ordering and receiving inventory, efficient replenishment of
inventory would reduce the need for delivery time stock. And the determination of level of safety stock involves
a basic trade-off between the risk of stock-out, resulting in possible customer dissatisfaction and lost sales, and
the increased costs associated with carrying additional inventory.
Another method of calculating reorder level involves the calculation of usage rate per day, lead time which is
the amount of time between placing an order and receiving the goods and the safety stock level expressed in
terms of several days' sales.
An order for the replenishment of materials should therefore be placed when the level of inventory, together with any safety stock held, is just adequate to meet the needs of production during the lead time:
Reorder level = Average daily usage rate x Lead time in days + Safety stock
If the average daily usage rate of a material is 50 units, the lead time is seven days and no safety stock is held, then
Reorder level = Average daily usage rate x Lead time in days = 50 units x 7 days = 350 units
When the inventory level reaches 350 units an order should be placed for the material. By the time the inventory level reaches zero, towards the end of the seventh day from placing the order, the materials will arrive and there is no cause for concern.
Safety Stock
Once again, in real-life situations one rarely comes across lead times and usage rates that are known with certainty. When the usage rate and/or lead time vary, the reorder level should naturally be high enough to cater to the production needs during the procurement period and also to provide some measure of safety for at least partially neutralizing the degree of uncertainty.
The question will naturally arise as to the magnitude of safety stock. There is no specific answer to this
question. However, it depends, inter alia, upon the degree of uncertainty surrounding the usage rate and lead
time. It is possible to a certain extent to quantify the values that usage rate and lead time can take along with
the corresponding chances of occurrence, known as probabilities. These probabilities can be ascertained based
on previous experiences and/or the judgemental ability of astute executives. Based on the above values and
estimates of stock-out costs and carrying costs of inventory, it is possible to work out the total cost associated
with different levels of safety stock.
Once we realize that the higher the quantity of safety stock, the lower will be the stock-out cost and the higher the carrying cost, the determination of the reorder level calls for a trade-off between stock-out costs and carrying costs. The reorder level will then be the one at which the total of the expected stock-out costs and the carrying costs is at its minimum. We consider below, through an illustration, the way of arriving at the reorder level in a situation where both the usage rate and the lead time are subject to variation.
Illustration 4.6
Below are presented the daily usage rate of a material and the lead time required to procure the material along
with their respective probabilities (which are independent) for Sigma Company Ltd. The probabilities and the
values of usage rate and lead time are based on optimistic, realistic and pessimistic perceptions of the
executives concerned.
The stock-out cost is estimated to be Rs.10 per unit while carrying cost for the period under consideration is
Rs.3 per unit. What should be the reorder level based on financial considerations?
From the data contained in the table we can calculate the expected usage rate and the expected lead time.
The expected usage rate is the weighted average daily usage rate, where the weights are the corresponding probability values; the expected lead time is computed in the same way and works out to 3 + 8 + 5 = 16 days.
Normal consumption during lead time can be obtained by multiplying these two values.
Since normal consumption during lead time has been obtained as 8,000 units (an expected usage rate of 500 units per day over the expected lead time of 16 days), stock-outs can occur only if the consumption during lead time is more than 8,000 units.
Let us enumerate the situations with lead time consumption of more than 8,000 units, along with their
respective probabilities of occurrence. This can be achieved by considering the possible levels of usage.
From the above table it is clear that the situations with the lead time consumption of more than 8,000 units
(normal usage) are 10,000 units with a probability of 0.1250, 9,600 units with 0.0625, 12,800 units with 0.1250
and 16,000 units with 0.0625 probability. And the levels of stock-out are 2,000 units, 1,600 units, 4,800 units
and 8,000 units respectively.
Thus, safety stock level can be maintained at any of the above levels, and the stock-out cost and carrying cost
associated with these various levels are shown in the table.
Safety stock   Stock-out     Probability   Expected stock-out   Expected stock-out   Carrying   Total
               (units)                      (units)              cost                 cost       cost
0              8,000 units    0.0625         500 units
               4,800 units    0.1250         600 units
               2,000 units    0.1250         250 units
               1,600 units    0.0625         100 units
                                            1,450 units          Rs. 14,500           0          Rs. 14,500
If the safety stock of the firm is 8,000 units, there is no chance of the firm being out of stock; the probability of a stock-out is, therefore, zero. If the safety stock of the firm is 4,800 units, there is a 0.0625 chance that the firm will be short of inventory. If the safety stock of the firm is 2,000 units, there is a stock-out of 6,000 units with a probability of 0.0625 and of 2,800 units with a probability of 0.125, corresponding to the possible lead-time usage of 16,000 units (probability 0.0625) and 12,800 units (probability 0.125). The stock-outs and the probabilities of their occurrence at the other levels are calculated in the same way.
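The total cost at each candidate level of safety stock can be worked out mechanically. The sketch below uses only the stock-out quantities and probabilities enumerated above, with a stock-out cost of Rs.10 per unit and a carrying cost of Rs.3 per unit for the period; the Rs.14,500 figure it produces for nil safety stock agrees with the table.

# Possible shortfalls over the normal lead-time consumption of 8,000 units,
# with their probabilities, as enumerated in Illustration 4.6.
shortfalls = {1_600: 0.0625, 2_000: 0.1250, 4_800: 0.1250, 8_000: 0.0625}
STOCKOUT_COST = 10    # Rs. per unit short
CARRYING_COST = 3     # Rs. per unit of safety stock for the period

def total_cost(safety_stock):
    expected_shortage = sum(max(units - safety_stock, 0) * p
                            for units, p in shortfalls.items())
    return expected_shortage * STOCKOUT_COST + safety_stock * CARRYING_COST

for level in (0, 1_600, 2_000, 4_800, 8_000):
    print(level, total_cost(level))    # safety stock 0 gives Rs. 14,500

The level with the lowest total cost is the one to be added to the normal lead-time consumption in fixing the reorder level.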
Even in the relatively simple situation considered in the illustration above, the amount of calculation involved in arriving at the reorder level is large. In real-life situations the assumption of independence in the probability distributions made in the illustration above may not be valid and the number of time periods may also be large. In such cases the approach adopted earlier becomes much more complex. That is why one can adopt a much simpler formula, which gives reasonably reliable results, for calculating the level of inventory at which a reorder has to be placed for replenishment of stock. The formula, along with its application, is given below, using the notation developed earlier.
Reorder point = S x L + F √(S x R x L)
where
S = usage in units per day,
L = lead time in days,
R = average quantity per order (in units), and
F = stock-out acceptance factor.
The stock-out acceptance factor, 'F', depends on the stock-out percentage rate specified and the probability distribution of usage (which is assumed to follow a Poisson distribution). For any specified acceptable stock-out percentage, the value of 'F' can be obtained from the figure presented below.
Illustration 4.7
For Apex Company the average daily usage of a material is 100 units, the lead time for procuring the material is 20 days and the average number of units per order is 2,000. The stock-out acceptance factor is considered to be 1.3. What is the reorder level for the company?
From the data contained in the problem we have
S = 100 units
L = 20 days
R = 2,000 units
F = 1.3
Reorder level = S x L + F √(S x R x L) = 100 x 20 + 1.3 √(100 x 2,000 x 20) = 2,000 + 2,600 = 4,600 units
Reorder for replenishment of stock should be placed when the inventory level reaches 4,600 units.
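The formula lends itself to a one-line computation; the sketch below simply reproduces the Apex Company figure of 4,600 units.

from math import sqrt

def reorder_point(S, L, R, F):
    """S: usage per day, L: lead time in days, R: units per order,
    F: stock-out acceptance factor."""
    return S * L + F * sqrt(S * R * L)

print(reorder_point(S=100, L=20, R=2_000, F=1.3))   # 4600.0 units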
Stock-level Subsystem
This stock-level subsystem keeps track of the goods held by the firm, the issue of goods and the arrival of orders, and maintains records of the current level of inventory. For any period of time, the current level is calculated by taking the beginning inventory, adding the inventory received and subtracting the goods issued or sold. Whenever this subsystem reports that an item is at or below the reorder point, the firm will place an order for the item.
TOTAL SYSTEM
The three subsystems are tied together in a single inventory management system. Figure 4.7 below ties the subsystems together and shows the three items of information needed for the decision to order additional inventory.
Inventory Planning
An important task of working-capital management is to ensure that inventories are incorporated into the firm's
planning and budgeting process. Sometimes, the level of inventory reflects the orders received by the general
manager of the plant without serious analysis as to the need for the materials or parts. This lack of planning can
be costly for the firm, either because of the carrying and financing costs of excess inventory or the lost sales
from inadequate inventory. The inventory requirements to support production and marketing should be
incorporated into the firm's planning process in an orderly fashion.
The first step in inventory planning deals with the manufacturing mix of inventory items and end products. Every
product is made up of a specified list of components. The analyst must recognize the different mix of
components in each finished product. Each item maintained in inventory will have a cost. This cost may vary
based on volume purchases, lead time for an order, historical agreements, or other factors. For the purpose of
preparing a budget, each item must be assigned a unit cost.
Once the mix of components is known and each component has been assigned a value, the analyst can calculate the materials cost for each product by weighting the unit costs of the components by the quantities that go into that product.
The second step in inventory planning involves a forecast of unit requirements during the future period. Both a
sales forecast and an estimate of the safety level to support unexpected sales opportunities are required. The
Marketing Department should also provide pricing information so that higher profit items receive more attention.
An important component of inventory planning involves access to an inventory data base. A data base is a
collection of data items arranged in files, fields and records. Essentially, we are working with a structured
framework that contains the information needed to effectively manage all items of inventory, from raw materials
to finished goods. This information includes the classification and amount of inventories, demand for the items,
cost to the firm for each item, ordering costs, carrying costs, and other data.
The first component of an inventory data base deals with the movement of individual items, and the second component involves the information needed to make decisions on reordering or replenishing the items.
In the case of a manufacturing company of reasonable size, the number of items of inventory runs into hundreds, if not more. From the point of view of monitoring information for control, it becomes extremely difficult to consider each one of these items. ABC analysis comes in handy here and enables the management to concentrate attention on, and keep a close watch over, a relatively small number of items which account for a high percentage of the value of the annual usage of all items of inventory.
A firm using the ABC system segregates its inventory into three groups - A, B and C. The A items are those in which it has the largest rupee investment. In Figure 4.7, which depicts the typical distribution of inventory items, the A group consists of about 10 percent of the inventory items and accounts for approximately 70 percent of the firm's rupee investment. These are the most costly or the slowest-turning items of inventory. The B group consists of the items accounting for the next largest investment: approximately 20 percent of the items accounting for about 20 percent of the firm's rupee investment. The C group typically consists of a large number of items accounting for a small rupee investment: approximately 70 percent of all the items of inventory but only about 10 percent of the firm's rupee investment. Items such as screws, nails and washers would be in this group.
Dividing its inventory into A, B, and C items allows the firm to determine the level and types of inventory control
procedures needed. Control of the A items should be most intensive due to the high rupee investments
involved, while the B and C items would be subject to correspondingly less sophisticated control procedures.
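A rough sketch of how such a classification might be automated is given below. The item names and values are purely illustrative, and the 70 percent and 90 percent cumulative-value cut-offs are assumptions chosen to mirror the 10/20/70 pattern described above; any firm would set its own thresholds.

def abc_classify(usage_value):            # {item: annual usage value in Rs.}
    total = sum(usage_value.values())
    classes, cumulative = {}, 0.0
    # Rank items by usage value, highest first, and cut the cumulative share.
    for item, value in sorted(usage_value.items(), key=lambda kv: -kv[1]):
        cumulative += value
        share = cumulative / total
        classes[item] = "A" if share <= 0.70 else "B" if share <= 0.90 else "C"
    return classes

print(abc_classify({"bearings": 70_000, "castings": 20_000,
                    "screws": 6_000, "washers": 4_000}))
# bearings -> A, castings -> B, screws and washers -> C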
The important principles underlying measurement of costs (outflows) and inflows (benefits) are as follows:
• All costs and benefits must be measured in terms of cash flows. This implies that all non-cash charges
(expenses) like depreciation which are considered for the purpose of determining the profit after tax must
be added back to arrive at the net cash flows for our purpose. (Illustrations 1, 2 and 3 of this chapter clarify
this aspect.)
• Since the net cash flows relevant from the firm's point of view are those that accrue to the firm after paying tax, cash flows for the purpose of appraisal must be defined in post-tax terms.
• Usually the net cash flows are defined from the point of view of the suppliers of long-term funds4 (i.e.,
suppliers of equity capital plus long-term loans).
• Interest on long-term loans must not be included for determining the net cash flows. The rationale for this
principle is as follows: Since the net cash flows are defined from the point of view of suppliers of long-term
funds, the post-tax cost of long-term funds will be used as the interest rate for discounting. The post-tax
cost of long-term funds obviously includes the post-tax cost of long-term debt. Therefore if interest on long-
term debt is considered for the purpose of determining the net cash flows, there will be an error of double-
counting.
• The cash flows must be measured in incremental terms. In other words, the increments in the present
levels of costs and benefits that occur on account of the adoption of the project are alone relevant for the
purpose of determining the net cash flows.
• If the proposed project has a beneficial or detrimental impact on say, the other product lines of the firm,
then such impact must be quantified and considered for ascertaining the net cash flows.
• Sunk costs must be ignored. For example, the cost of existing land must be ignored because money has
already been sunk in it and no additional or incremental money is spent on it for the purposes of this
project.
• Opportunity costs associated with the utilization of the resources available with the firm must be considered even though such utilization does not entail explicit cash outflows. For example, while the sunk cost of land is ignored, its opportunity cost, i.e., the income it would have generated had it been utilized for some other purpose or project, must be considered.
• The share of the existing overhead costs which is to be borne by the end product(s) of the proposed project
must be ignored.
The application of these principles in the measurement of the cash flows of a project is illustrated below:
Illustration 4.8
Anand, a chemical engineer with 15 years of experience, and Prakash, a pharmacy graduate with 18 years of
experience, are evaluating a pharmaceutical formulation. They have estimated the total outlay on the project to
be as follows:
The project has an expected life of 10 years. Plant & Machinery will be depreciated at the rate of 33 1/3 percent
per annum as per the written down value method. The expected annual sales would be Rs.80 lakh, and the
cost of sales (including depreciation but excluding interest) is expected to be Rs.50 lakh per year. The tax rate
of the company will be 50 percent. Term-loan will carry 14 percent interest and will be repayable in 5 equal
annual installments, beginning from the end of the first year. Working capital advance will carry an interest rate
of 17 percent and, thanks to the 'rollover' phenomenon, will have an indefinite maturity.
Define the cash flows for the first three years from the long-term funds point of view.
Solution
Net Cash Flows Relating to Long-term Funds
(Rs. in lakh)
                                              Year 0       1        2        3
A  Investment                                 (42.00)
B  Sales                                                  80.00    80.00    80.00
C  Operating costs (excluding depreciation)               38.00    42.00    44.67
D  Depreciation                                           12.00     8.00     5.33
E  Interest on working capital advance                     1.70     1.70     1.70
F  Profit before tax                                      28.30    28.30    28.30
G  Tax                                                    14.15    14.15    14.15
H  Profit after tax                                       14.15    14.15    14.15
I  Initial flow                               (42.00)
K  Operating flow (= H + D)                               26.15    22.15    19.48
L  Net cash flow (= I + K)                    (42.00)     26.15    22.15    19.48
Explanatory Notes
The investment outlay has to be considered from the point of view of the suppliers of long-term funds. In the
given Illustration, we find that Rs.18 lakh out of the investment of Rs.24 lakh in current assets is financed by
way of trade-credit and working capital advance. The difference of Rs.6 lakh is called the working-capital
margin i.e., the contribution of the suppliers of long-term funds towards working capital. Therefore, the
investment outlay relevant from the long-term funds point of view will be equal to investment in plant and
machinery + working capital margin = Rs.42 lakh.
Since depreciation is a non-cash charge which has to be added to the profit after tax, this charge must be
disclosed separately in the cash flow statement and not clubbed with other operating costs. Further, the
depreciation charge to be considered here will be the tax-relevant charge. In other words, the depreciation must
be computed in accordance with the method and rate(s) prescribed by the Income Tax Act, 1961.
While interest on long-term debt must be excluded for reasons discussed earlier, interest on short-term bank
borrowings must be included in the cash flow statement.
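A minimal sketch of these computations in Python is given below. The plant and machinery cost of Rs.36 lakh is an inference implied by the Rs.12 lakh first-year depreciation at 33 1/3 percent WDV, and the working capital interest of Rs.1.70 lakh is taken directly from the table; the sketch reproduces the operating flows of 26.15, 22.15 and 19.48 shown above.

TAX_RATE = 0.50
sales = 80.0                          # Rs. lakh per year
cost_of_sales_incl_dep = 50.0         # includes depreciation, excludes interest
plant_cost, wdv_rate = 36.0, 1 / 3    # assumed plant cost implied by year-1 depreciation
wc_interest = 1.70                    # interest on working capital advance (from the table)

book_value = plant_cost
for year in (1, 2, 3):
    depreciation = book_value * wdv_rate
    book_value -= depreciation
    operating_cost = cost_of_sales_incl_dep - depreciation      # cash operating cost
    pbt = sales - operating_cost - depreciation - wc_interest   # long-term interest excluded
    pat = pbt * (1 - TAX_RATE)
    operating_flow = pat + depreciation                         # add back the non-cash charge
    print(year, round(operating_flow, 2))                       # 26.15, 22.15, 19.48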
In the Illustration discussed above, we have defined the cash flows only over the first three years of the project's
life. But in practice cash flows are defined over the entire project life or over a specified time horizon (if the
project life is too long). If the cash flows are defined over the entire life of the project, then the estimated
salvage value of the investment in plant and machinery and the working capital must be considered for
determining the net cash flow in the terminal year. If the cash flows are defined over a specified time horizon, a notional salvage value is taken into account in the final year of the time horizon.
Illustration 4.9
A capital project involves the following outlays:
(Rs. in lakh)
Plant and machinery 180
Working Capital 120
The proposed means of financing are as follows:
(Rs. in lakh)
Equity 100
Long-term loans 104
Trade credit 36
Commercial banks 60
The project has a life of 10 years. Plant and machinery are depreciated at the rate of 15 percent per annum as
per the written down value method. The expected annual net sales is Rs.350 lakh. Cost of sales (including
depreciation, but excluding interest) is expected to be Rs.190 lakh a year. The tax rate of the company is 60
percent. At the end of 10 years plant and machinery will fetch a value equal to their book value and the
investment in working capital will be fully recovered. The long-term loan carries an interest of 14 percent per
annum. It is repayable in eight equal annual installments starting from the end of the third year. Short-term
advance from commercial banks will be maintained at Rs.60 lakh and will carry interest at 18 percent per
annum. It will be fully liquidated after 10 years. Trade credit will also be maintained uniformly at Rs.36 lakh and
will be fully paid back at the end of the tenth year.
Calculate the cash flow stream from the long-term funds point of view.
Solution
(Rs. in lakh)
                                           0        1       2       3       4       5       6       7       8       9      10
A. Investment                          (204.00)
B. Sales                                        350.00  350.00  350.00  350.00  350.00  350.00  350.00  350.00  350.00  350.00
C. Cost of sales (excl. depreciation)           163.00  167.05  170.49  173.42  175.91  178.02  179.82  181.34  182.64  183.75
D. Depreciation                                  27.00   22.95   19.51   16.58   14.09   11.98   10.18    8.68    7.36    6.25
E. Profit before interest and taxes             160.00  160.00  160.00  160.00  160.00  160.00  160.00  160.00  160.00  160.00
F. Interest on ST bank borrowing                 10.80   10.80   10.80   10.80   10.80   10.80   10.80   10.80   10.80   10.80
G. Profit before taxes                          149.20  149.20  149.20  149.20  149.20  149.20  149.20  149.20  149.20  149.20
H. Tax                                           89.52   89.52   89.52   89.52   89.52   89.52   89.52   89.52   89.52   89.52
I. Profit after tax                              59.68   59.68   59.68   59.68   59.68   59.68   59.68   59.68   59.68   59.68
J. Net salvage value of fixed assets                                                                                    35.44
K. Net salvage value of current assets                                                                                 120.00
L. Retirement of trade credit                                                                                          (36.00)
M. Payment of ST bank borrowing                                                                                        (60.00)
N. Net cash flow
   (= -A + I + D + J + K - L - M)      (204.00)  86.68   82.63   79.19   76.26   73.77   71.66   69.86   68.34   67.04  125.37
Explanatory Notes
• Net salvage value of fixed assets will be equal to the salvage value of fixed assets less any income tax that may be payable on the excess of the salvage value over the book value. Likewise, there will be a tax shield on the loss, if any, incurred at the time of disposing of the fixed assets. Under current tax laws, the salvage value of any individual item of plant and machinery has lost its significance (assets are depreciated as a block), and therefore, for our purposes, we will ignore the impact of tax on the salvage value. In other words, we will take only the gross salvage value into consideration.
• The depreciation rate assumed in this problem is not indicative of the current rates in force. (The depreciation rates currently applicable to plant and machinery under the Income Tax Act are 25%, 40% and 100%.)
• In working out the cash flows, deduction available for a new project under Section 80 I of the Income Tax
Act has been ignored.
• Our illustrations have so far focused on estimating the cash flows of a new project. The following illustration shows how cash flows are estimated for a replacement project.
Illustration 4.10
Sandals Inc. is considering the purchase of a new leather cutting machine to replace an existing machine that
has a book value of Rs.3,000 and can be sold for Rs.1,500. The estimated salvage value of the old machine in four years would be zero, and it is depreciated on a straight-line basis. The new machine will reduce costs (before tax) by Rs.7,000 per year, i.e., Rs.7,000 cash savings over the old machine. The new machine has a
four year life, costs Rs.14,000 and can be sold for an expected amount of Rs.2,000 at the end of the fourth
year. Assuming straight-line depreciation, and a 40% tax rate, define the cash flows associated with the
investment. Assume that the straight-line method of depreciation is used for tax purposes.
Solution
(in Rs.)
Year 0 1 2 3 4
1. Net investment in new machine (12,500)
2. Savings in costs 7,000 7,000 7,000 7,000
3. Incremental depreciation 2,250 2,250 2,250 2,250
4. Pre-tax profits 4,750 4,750 4,750 4,750
5. Taxes 1,900 1,900 1,900 1,900
6. Post-tax profits 2,850 2,850 2,850 2,850
7. Initial flow (= (1)) (12,500)
8. Operating flow (= (6) + (3)) 5,100 5,100 5,100 5,100
9. Terminal flow 2,000
10. Net cash flow ( = (7) + (8) + (9)) (12,500) 5,100 5,100 5,100 7,100
Working Notes
Computation of depreciation:
Depreciation on the new machine = (Rs.14,000 - Rs.2,000)/4 = Rs.3,000 per year
Depreciation on the old machine = Rs.3,000/4 = Rs.750 per year
Incremental depreciation = Rs.3,000 - Rs.750 = Rs.2,250 per year
Net investment in the new machine = Rs.14,000 - Rs.1,500 (sale proceeds of the old machine) = Rs.12,500
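The complete cash-flow definition can be sketched in a few lines of Python. The tax effect of the book loss on selling the old machine is ignored here, consistent with the Rs.12,500 net investment shown in the solution table; this is a sketch of the illustration's arithmetic only.

TAX = 0.40
new_cost, new_salvage, life = 14_000, 2_000, 4
old_book_value, old_sale_price = 3_000, 1_500
annual_savings = 7_000                                   # pre-tax cost savings

net_investment = new_cost - old_sale_price               # 12,500 (tax on book loss ignored)
incr_depreciation = (new_cost - new_salvage) / life - old_book_value / life   # 2,250
post_tax_profit = (annual_savings - incr_depreciation) * (1 - TAX)            # 2,850
operating_flow = post_tax_profit + incr_depreciation                          # 5,100

flows = [-net_investment] + [operating_flow] * life
flows[-1] += new_salvage                                 # terminal flow in year 4
print(flows)                                             # year 0: -12,500; years 1-3: 5,100; year 4: 7,100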
Chapter 5
Organization Profitability Analysis
Cost Accounting is a quantitative method that accumulates, classifies, summarizes and interprets financial and
non-financial information for three major purposes, viz.
Cost Accounting is one of the branches of Accounting and is predominantly meant for meeting the informational
needs of the management. Managers need cost information for informed decision-making.
Of late, the boundaries of Cost Accounting have increased tremendously. It now refers to the gathering and
providing of information for decision needs of all sorts. Modern Cost Accounting is often called Management
Accounting.1 The official terminology of the Chartered Institute of Management Accountants
(CIMA), England, defines Cost Accounting as "that part of management accounting which establishes budgets and standard costs and actual costs of operations, processes, departments or products and the analysis of variances, profitability or social use of funds".
Management Accounting helps a business to be operated more efficiently and effectively. When Cost Accounting is used as a synonym of Management Accounting, the emphasis is clearly on decision-making. Cost Accounting can provide financial and non-financial information that helps decision-making across all functions of the organization. Accounting is regarded as an information system, and Cost Accounting and Management Accounting are two sub-systems of it.
Cost is defined as the amount of expenditure (actual or notional) incurred on, or attributable to, a specified thing
or activity (CIMA).
It also represents a sacrifice, a foregoing or a release of something of value. The Committee on Cost Concepts
and Standards of the American Accounting Association (Accounting Review, vol. 31) supports the view that
business cost is a release of value for the acquisition or creation of economic resources and is measured in
terms of a monetary sacrifice involved. For example, for materials used for production the cost is measured by
the amount of money that had to be paid to procure the materials. This is, no doubt, past or historical cost.
Costs are, therefore, resources sacrificed or foregone to achieve a specific objective. The objects of Cost Accounting are always activities; we want to know the cost of doing something. A cost object is, therefore, any activity or item for which a separate measurement of costs is desired.3 A synonym is cost objective. The cost object may be an activity or operation, a product or service, a project, a department or a programme. Cost measurement must be tied to at least one cost object—the term cost by itself is meaningless.
Why does management need cost information? Some of the purposes for which managers need cost
information are:
1. Income measurement (matching costs with revenues);
2. Planning and budgeting;
3. Cost control;
4. Pricing; and
5. Day-to-day operating decisions.
Historical costs are required for item (1) mentioned above as it involves the matching of costs and revenues
on some consistent basis. For item (2), estimated costs, revenue, volume, etc. are relevant while cost
control requires the adoption of standards for comparison and involves the measurement of actual costs
against the standard costs. Pricing requires a different set of figures, viz., marginal costs plus expected
contribution or estimated total cost plus profit. Day-to-day applications of plans and policies may require almost any combination of the above and other types of costs.
Marginal Costing
Marginal Costing is a technique of ascertaining cost that can be used with any method of costing. According to this technique, variable costs are charged to cost units and the fixed cost attributable to the relevant period is written-off in full against the contribution for that period.
Under this technique, all costs are classified into two groups: fixed and variable. For this purpose, the fixed and
variable elements are also separated from semi-variable or semi-fixed costs and included in respective groups.
Since fixed costs are not included in product costs, it becomes easy to find out directly the effect on profit due to
changes in volume or type of output.
The term marginal costing is generally used in the U.K. while in the U.S.A., direct costing is the more popular
term. Although both marginal costing and direct costing may mean one and the same thing, a distinction
between the two may also be made.2 The term cost-volume-profit (CVP) analysis is also frequently used3 in this
context. CVP analysis refers to the study of the effects on future profits of changes in fixed cost, variable cost,
sales price, quantity and mix (C1MA). Break-even analysis is one of the important tools under CVP analysis or
marginal costing and is used by managers for decision-making.
Marginal costing or CVP analysis is based on certain assumptions. They are as follows:
1. Fixed costs will tend to remain constant. In other words, there will not be any change in cost factor, such as
change in property tax rate, insurance rate, salaries of staff etc., or in management policy.
2. Price of variable cost factors, i.e., wage rates, price of materials, supplies, services, etc., will remain
unchanged, so that variable costs are truly variable.
3. Semi-variable costs can be segregated into variable and fixed elements.
4. Product specifications and methods of manufacturing and selling will not undergo a change.
5. Operating efficiency will not increase or decrease.
6. There will not be any change in pricing policy due to change in volume, competition, etc.
7. The number of units of sales will coincide with the units produced, so that there is no closing or opening
stock. Alternatively, the changes in opening and closing stocks are insignificant and that they are valued at
the same prices, or at variable cost.
8. Product-mix will remain unchanged.
Marginal Cost
Economists define marginal cost as "the amount at any given volume of output by which aggregate costs are
changed if the volume of output is increased or decreased by one unit". Suppose the cost of production of
10,000 units of a product is Rs. 1,00,000 and that of 10,001 units is Rs. 1,00,008. Therefore, cost of producing
an additional unit is Rs. 8 (i.e., Rs. 1,00,008 - 1,00,000). This is the marginal cost of one unit and this marginal
cost is the direct cost, comprising the cost of materials used, the labour employed and the variable overhead
expenses that would not have been incurred but for producing this additional unit. If the production of the
additional unit involves increase in fixed cost along with variable items as above, such increase will be included
in marginal cost.
But accountants define4 marginal cost as "the variable cost of one unit of a product or a service, i.e., a cost
which would be avoided if the unit was not produced or provided". Thus, marginal cost is the aggregate of
variable costs. For example, if Rs. 40,000 for direct materials, Rs. 20,000 for direct labour, Rs. 20,000 for variable overheads and Rs. 20,000 for fixed overheads are incurred for producing 10,000 units of a product, the marginal cost is Rs. 40,000 + Rs. 20,000 + Rs. 20,000 = Rs. 80,000, i.e., Rs. 8 per unit; the Rs. 20,000 of fixed overheads is excluded.
The economist's marginal cost curve is J or U-shaped; the accountant's per unit variable cost is a constant,
which, when plotted, is a flat, horizontal line.
The theory of marginal costing is based upon the assumption that some elements of cost tend to vary directly
with variations in volume of output while others do not. That is why, only variable costs form part of product
costs. On the other hand, fixed costs are written-off to Marginal Profit and Loss Account being treated as period
costs inasmuch as costs such as supervision, rent, rates, fire insurance, depreciation, etc., are to be incurred
during a period irrespective of the volume of production. Since only variable costs are included in product costs, under stable conditions such product costs will bear a constant ratio to sales, whereas fixed costs will be a constant amount. Even if fixed cost increases due to an increase in volume, it will not affect marginal or variable cost as defined by the accountants.
In the above example, direct labour costs are included in marginal cost on the assumption that they are
'variable'. If they are not variable, they should be excluded from marginal cost. It should be noted that even fixed
costs may be directly identifiable with the cost object. While all variable costs are generally direct, all direct
costs need not be variable. The most important aspect of the direct cost is the cost object. A cost item may be
direct cost for one cost object, while it may be an indirect cost for another cost object. In India, labour costs are largely not variable but fixed; only an insignificant portion, say 5 to 10 percent, is variable (e.g., casual labour engaged to cope with additional volume, overtime premium, salesmen's commission, etc.). Accordingly, the cost of such labour should be excluded from the computation of marginal or variable cost; it should be added to other fixed costs and treated as such. However, in solving the problems in this Section as well as those in Section II that follows, we have taken labour cost as 'direct and variable' as most of the questions set in various examinations assume labour cost to be such. In reality, the decision-maker should keep in mind the limitation of such a textbook approach and analyze cost data according to the real-life situation for a correct decision.
In computing marginal costs, a distinction between variable costs and fixed costs has to be made because only
variable costs are treated as product costs and fixed costs are treated as period costs. Semi-variable costs are
segregated into variable and fixed elements and included in the respective groups. These costs are discussed
in detail in the Overheads chapter (pp. 203-205). A comparison between the two with respect to a few criteria
may, however, be summed up as follows:
Variable costs, such as direct materials, direct labour, direct expense, etc., can be ascertained without any
difficulty and these costs will tend to be a constant amount per unit. In determining these costs, past actuals will
be the basis for estimate. Where, however, budgetary control is in operation, company's detailed budget will be
a guide. Variable overheads can be ascertained from previous ledger postings or the budget. Fixed costs can
also be picked up individually without difficulty. The basic sources of data are the same, i.e., Material
Requisition Notes for direct and indirect material, Job Cards or Wages Analysis Sheets for labour booking,
Expense Analysis Sheet for expenses, etc.
Semi-variable items should be segregated into variable and fixed elements and be included in the respective
groups. Semi-variable group frequently represents a significant portion of the total costs incurred. Therefore, the
accuracy of marginal costs will depend to a large extent upon the accuracy with which semi-variable costs are
segregated into variable and fixed elements.
The following methods are generally used in segregating semi-variable costs into their variable and fixed parts:
Intelligent Estimates
In estimating fixed and variable portions of semi-variable overheads, past overhead expenses at various levels
of activity will be analyzed and a tabulation will show the pattern of overhead expenses in relation to volume.
Adjustments are to be made for anticipating changes in price, rate, etc. Although this method is not accurate, it
is simple to operate.
High and Low Points Method
This is also known as the Range Method. Segregation with the help of this method is not difficult. In this method, the levels of highest and lowest expenses are compared with one another and related to the output attained in those periods. Since the fixed portion of the costs is expected to remain the same for the two periods, it becomes clear that the change in the level of the expenses must be due to the variable portion of the overheads. From this, the variable cost per unit is easy to ascertain as follows:
Output Semi-variable
Month (units) overheads Rs.
January 2,500 12,500
February 3,000 14,000
March 3,500 15,500
April 4,700 19,100
May 3,700 16,100
June 4,400 18,200
July 4,500 18,500
August 4,200 17,600
September 4,000 17,000
October 4,300 17,900
November 3,800 16,400
December 2,700 13,100
Now, from the above table, taking highest and lowest output with relative overhead costs, one can segregate
the fixed and variable portions as follows:
Output Semi-variable
(units) overheads Rs.
Highest (April) 4,700 19,100
Lowest (January) 2,500 12,500
Change 2,200 6,600
Since only the variable portion of the cost changes with output, the variable overhead cost per unit will be:
Rs. 6,600 ÷ 2,200 units = Rs. 3 per unit
and the fixed overheads: Rs. 19,100 - (4,700 x Rs. 3) = Rs. 5,000/-
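The high-low computation can be checked with a few lines of Python using the monthly figures given above; this is only a sketch of the arithmetic just performed.

# Monthly output (units) mapped to semi-variable overheads (Rs.), from the table above.
observations = {2500: 12500, 3000: 14000, 3500: 15500, 4700: 19100,
                3700: 16100, 4400: 18200, 4500: 18500, 4200: 17600,
                4000: 17000, 4300: 17900, 3800: 16400, 2700: 13100}

hi_units, lo_units = max(observations), min(observations)
variable_per_unit = (observations[hi_units] - observations[lo_units]) / (hi_units - lo_units)
fixed_cost = observations[hi_units] - hi_units * variable_per_unit
print(variable_per_unit, fixed_cost)    # Rs. 3 per unit and Rs. 5,000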
Equation Method
y = mx + c (5.1)
where y = total semi-variable cost, c = fixed cost included in semi-variable cost, m = variable cost per unit, and x = output.
It is now possible to segregate the fixed and variable portions with the help of the equations for two periods.
Illustration 5.2 Taking the figures for January and February from the table above:
12,500 = 2,500m + c and 14,000 = 3,000m + c; subtracting, 1,500 = 500m
∴ m = Rs. 3/- per unit and c = 12,500 - (2,500 x 3) = Rs. 5,000
Method of Averages
Under this method, average of two selected groups should be taken out first and then the method of high and
low points or the equation method may be used in arriving at variable and fixed portions of semi-variable costs.
Illustration 5.3
                      Average output      Average cost
Last four months        3,700 units       Rs. 16,100
First four months       3,425 units       Rs. 15,275
Change                    275 units       Rs.    825
Variable overheads: Rs. 825 ÷ 275 = Rs. 3 per unit
Fixed overheads: Rs. 16,100 - (3,700 x Rs. 3) = Rs. 5,000
Graphical Method
Semi-variable overheads at various levels of activity will be plotted on a graph paper whose abscissa will
represent output at various levels of activity and the ordinate will represent respective semi-variable overheads.
The line of regression drawn on the graph paper will show the relation between the variable overheads and
output and the point where regression line will cut the ordinate will represent the fixed overheads. The slope of
the regression line will show the degree of variability. This method is widely used in practice.
Method of Least Squares
This method is possibly the most accurate of those discussed so far. It is based on finding a 'line of best fit' for a number of observations with the help of a statistical method.
We know the straight line equation y = mx + c. Thus, for each period we have an equation of the form:
y1 = mx1 + c
y2 = mx2 + c
...
yn = mxn + c
Adding all the equations:
∑y = m∑x + Nc (5.2)
[N = number of observations]
Multiplying each equation by its value of x and then adding:
∑xy = m∑x² + c∑x (5.3)
With the help of Eqs. (5.2) and (5.3), the values of 'm' and 'c' can be obtained and the pattern of the cost line determined accordingly.
Illustration 5.3 Taking the figures for the first three months from the table above:

Month        Output (units)    Semi-variable overheads (Rs.)
                  X                        Y
January         2,500                   12,500
February        3,000                   14,000
March           3,500                   15,500

To reduce the labour of computation, let x = X/500 and y = Y/500. Then:

   x       y       xy       x²
   5      25      125       25
   6      28      168       36
   7      31      217       49
  18      84      510      110
  ∑x      ∑y      ∑xy      ∑x²

Substituting in Eqs. (5.2) and (5.3):
84 = 18m + 3c
510 = 110m + 18c
Multiplying the first equation by 6 and subtracting it from the second: 6 = 2m
∴ m = 3
and 84 = 54 + 3c, so c = 10.
We now have: y = 3x + 10.
Putting y = Y/500 and x = X/500, the equation becomes:
Y/500 = 3X/500 + 10
or Y = 3X + 5,000
Thus, the above straight line equation shows that Rs. 5,000 is the amount of fixed overheads present in the total semi-variable overheads and the variable overheads are Rs. 3/- per unit.
Note: The equation Y = 3X + 5,000 gives the pattern of the semi-variable overheads line. Thus, if production for any month is, say, 4,000 units, the total semi-variable overheads will be: (3 x 4,000) + 5,000 = Rs. 17,000.
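The least-squares values of m and c can also be obtained directly from the normal equations. The sketch below, using the January to March figures without any scaling, gives the variable rate of Rs. 3 per unit and fixed overheads of Rs. 5,000 directly.

x = [2500, 3000, 3500]        # output (units)
y = [12500, 14000, 15500]     # semi-variable overheads (Rs.)
n = len(x)

sum_x, sum_y = sum(x), sum(y)
sum_xy = sum(xi * yi for xi, yi in zip(x, y))
sum_x2 = sum(xi * xi for xi in x)

# Solving the normal equations for the line y = mx + c.
m = (n * sum_xy - sum_x * sum_y) / (n * sum_x2 - sum_x ** 2)   # variable cost per unit
c = (sum_y - m * sum_x) / n                                     # fixed overheads
print(m, c)    # 3.0 and 5000.0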
Concept of Profit
Profit is known as 'Net Margin'. Net Margin5 is arrived at after deducting fixed costs from total contribution or
'Gross Margin'. It may be noted that contribution is the difference between sales value and the variable cost of
those sales. In short, fixed costs are not included in cost of goods sold or closing stock; they are written-off to
Marginal Profit and Loss Account of the period. The argument in favour of this procedure is that no one makes
profit per unit manufactured; but profit is made out of total activity during a period. It is generally said that
products make contribution and business makes profit. Units produced and sold will, therefore, contribute to a
'profit pool' which will pay for the fixed charges and whatever will be left thereafter will represent net profit.
Assuming that a manufacturing company manufactures four products, the diagram overleaf will demonstrate
how profit is made.
Figure 5.1
It is also clear that no part of the fixed overheads is transferred to the next period by the addition of some
arbitrary amount to the value of closing work-in-progress and finished goods.
It follows from the earlier discussion that marginal costing and absorption costing are based on different
concepts of profit. The crucial question is whether the total fixed costs incurred during a period should be
charged against sales of the period (as is done in marginal costing) or should be spread over more than one
accounting period by means of inclusion in closing stocks (as is done in absorption costing). Under marginal
costing, the entire amount of fixed costs is charged to Profit and Loss Account in the year in which the costs are
incurred, so that closing work-in-progress and finished goods are valued at marginal or variable costs only.
Consequently, under marginal costing, profits tend to vary with the volume of sales irrespective of movements in inventory. Therefore, a profit and loss statement prepared under marginal costing is more intelligible to
management. Under absorption costing, on the other hand, product costs include fixed costs and as a result, a
portion of fixed costs is carried forward to the next year by means of inclusion in closing work-in-progress and
finished goods. Thus, under absorption costing, periodic profit is affected by changes in inventory as well as in
the volume of sales and profit may be shifted from one accounting period to another by increasing or reducing
inventories. The effect upon profit under absorption and marginal costing under a number of possibilities may
be studied as follows:
Illustration 5.4
Duo Ltd makes and sells two products, Alpha and Beta. The following information is
available:
Period 1 Period 2
Production (units):
Variable production overheads 12 8
Fixed costs for the company in total were £ 1,10,000 in period 1 and £ 82,000 in period 2. Fixed costs are
recovered on direct labour hours. Requirements:
(a) Prepare profit and loss accounts for period 1 and for period 2 based on marginal cost principles.
(b) Prepare profit and loss accounts for period 1 and for period 2 based on absorption cost principles.
(c) Comment on the position shown by your statements.
(a) Profit and Loss Accounts for Periods 1 and 2 under Marginal Costing

                                     Period 1                 Period 2
                                  £           £            £           £
Sales                                     3,27,000                 2,46,750
Less: Cost of sales:
  Opening stock (W1)              —                     13,800
  Production                1,68,500                  1,25,500
                            1,68,500                  1,39,300
  Less: Closing stock (W1)     13,800                    22,800
                                          1,54,700                 1,16,500
Contribution                              1,72,300                 1,30,250
Less: Fixed costs                         1,10,000                   82,000
Profit                                      62,300                   48,250
(b) Profit and Loss Accounts for Periods 1 and 2 Based on Absorption Costing

                                     Period 1                 Period 2
                                  £           £            £           £
Sales                                     3,27,000                 2,46,750
Less: Cost of sales:
  Opening stock (W2)              —                     22,800
  Production                2,78,500                  2,07,500
                            2,78,500                  2,30,300
  Less: Closing stock (W2)     22,800                    37,800
                                          2,55,700                 1,92,500
Profit                                      71,300                   54,250
(c) Stock levels are rising over periods 1 and 2. Absorption costing, which includes a share of fixed costs in the stock valuation, therefore gives a higher reported profit than marginal costing, which charges the fixed costs against profit in the period in which they are incurred. Accordingly, the reported profit under absorption costing is £ 15,000 more over the two periods, as £ 9,000 of fixed costs are 'carried forward' from period 1 to period 2 and a net £ 6,000 from period 2 to period 3.
Workings:
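The difference between the two sets of profits can be reconciled from the stock values in the statements above: the fixed overhead carried in stock is taken here as the excess of the absorption-costing stock value over the marginal-costing stock value. A minimal sketch of that reconciliation:

periods = {
    1: {"marginal_profit": 62_300,
        "opening_fixed_in_stock": 0,
        "closing_fixed_in_stock": 22_800 - 13_800},      # £ 9,000
    2: {"marginal_profit": 48_250,
        "opening_fixed_in_stock": 22_800 - 13_800,
        "closing_fixed_in_stock": 37_800 - 22_800},      # £ 15,000
}
for p, d in periods.items():
    absorption_profit = (d["marginal_profit"]
                         + d["closing_fixed_in_stock"]
                         - d["opening_fixed_in_stock"])
    print(p, absorption_profit)    # 71,300 and 54,250, as in statement (b)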
It has been pointed out earlier that the difference between sales value and the variable cost of those sales is
known as contribution. In other words, products sold will contribute to a fund to meet first the fixed costs and the
balance represents the profits of the undertaking. Therefore, contribution is equal to fixed costs and profit (or
loss). From this concept, the following Marginal Cost Equation is developed:
S-V=F+P [S - V = C; C = F + P] (5.4)
Thus, if any three factors of the above equation are known, the fourth can easily be found out. Again, this equation is used for ascertaining the break-even point: at the break-even point there will be no profit and no loss (Total Costs = Total Sales), so that P = 0 and
S - V = F, i.e., Contribution = Fixed costs (5.5)
PROFIT/VOLUME RATIO
The Profit/Volume Ratio, popularly known as the P/V Ratio, expresses the relation of contribution to sales. This ratio is also known as the Contribution to Sales (C/S) Ratio or the Marginal Income Ratio. Symbolically,
P/V or C/S Ratio = C/S = (S - V)/S (5.6)
So long as unit selling price and unit variable cost remain constant, P/V ratio can also be found out by
expressing change in contribution in relation to change in sales. Similarly, when unit selling price, unit variable
cost and fixed cost (total) remain constant, P/V ratio can be determined by expressing change in profit or loss in
relation to change in sales. Thus,
P/V Ratio = Change in contribution ÷ Change in sales = Change in profit (or loss) ÷ Change in sales
The above ratio is generally expressed in percentage form by multiplying it by 100. The C/S or P/V ratio indicates the increase or decrease in contribution which can be expected from an increase or decrease in volume, provided there is no change in any other factor. In normal circumstances, the C/S or P/V ratio will indicate the relative profitability of different products, processes or departments, so that the development of a sales strategy is facilitated.
For example, a high C/S or P/V ratio indicates that a comparatively large amount may be spent by way of advertising and sales promotion for obtaining additional sales, inasmuch as the contribution from such sales will be adequate to recover fixed costs and contribute further towards profit. Again, for a price reduction due to acute competition, the C/S ratio may be used by the management. The effect on profit due to changes in volume may be ascertained with the help of this ratio and may be summed up as follows:
(i) If the firm is operating at or above BEP, the increase in net profit will be equal to increase in
contribution provided fixed costs remain constant.
Illustration 5.5
                    Present volume    Additional volume
                          Rs.                 Rs.
Sales                  10,000               1,000
Variable costs          6,000                 600
Fixed costs             2,000                  —
Total costs             8,000                 600
Net Profit              2,000                 400
Thus, due to the additional sales, net profit will increase by Rs. 400. The same result can be obtained by multiplying the additional sales figure by the P/V ratio (here the P/V ratio is 40%), i.e., Rs. 1,000 x 40% = Rs. 400.
However, it is to be assumed here that selling price and variable costs, the constituents of the ratio, will remain
unchanged even for the additional volume. But if they change, the P/V ratio will also change.
(ii) If the firm is operating below the BEP, the addition to contribution reduces the loss or changes the loss into
profit.
The C/S or P/V ratio is a function of sales (value and/or volume) and variable costs. Therefore, an improvement in the ratio means increasing the gap between sales and variable costs. This can be done by:
BREAK-EVEN CHART
A Break-even Chart (BEC) is a graphical representation of marginal costing or CVP analysis. It is an important
aid to profit planning. It has been defined as 'a chart which shows the profitability or otherwise of an undertaking
at various levels of activity and as a result indicates the point at which neither profit nor loss is made'. The BEC,
therefore, depicts the following information at various levels of activity:
At different activity levels, the interaction of volume, selling price, variable costs and fixed costs (the relevant variables) and their impact upon profit are considered simultaneously. Perhaps, in this context, a name for the break-even graph that more clearly describes its function would be profit planning chart.7
The most important use of the BEC is the ascertainment of the break-even point (BEP) from the chart, which is a valuable guide to the management. The BEP can be determined from a BEC or can be calculated as follows:
(i) BEP (in units) = Fixed costs ÷ Contribution per unit
Therefore, when the P/V ratio is calculated using unit contribution and unit selling price, we can write:
(ii) BEP (in sales value) = Fixed costs ÷ P/V ratio
But if the P/V ratio is calculated at a given level of activity, i.e., taking total sales and total contribution at that level, the BEP is computed as follows:
(iii) BEP (sales value) = (Fixed costs ÷ Total contribution) x Total sales
The fixed costs for the year are Rs. 80,000 and the variable cost per unit of the single product made is Rs. 4. Estimated sales (at 100% capacity) for the period are 10,000 units, which coincides with the expected volume of output. Each unit sells at Rs. 20.
BEP (units) = Rs. 80,000 ÷ (20 - 4) = 5,000 units
The same result, in value, can be obtained by using the last formula:
(Rs. 80,000 ÷ Rs. 1,60,000) x Rs. 2,00,000 = Rs. 1,00,000
Check
                              Rs.
Sales (5,000 units)        1,00,000
Variable cost @ Rs. 4        20,000
Contribution                 80,000
Fixed costs                  80,000
Profit/Loss                     Nil
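A minimal computational sketch of the same single-product break-even calculation:

fixed_costs = 80_000
selling_price, variable_cost = 20, 4

contribution_per_unit = selling_price - variable_cost
pv_ratio = contribution_per_unit / selling_price

bep_units = fixed_costs / contribution_per_unit     # 5,000 units
bep_value = fixed_costs / pv_ratio                  # Rs. 1,00,000
print(bep_units, bep_value)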
It should be noted that in the case of a multi-product firm, the P/V ratio stands for the combined P/V ratio of all the products at a particular level of activity, and by applying it the break-even sales of the firm is computed as above. In such a case, formula (i) cannot be applied.
Illustration 5.6 From the following data, calculate the break-even sales for a company producing three
products.
Product Sales Variable cost
Rs. Rs.
X 10,000 6,000
Y 5,000 2,500
Z 5,000 2,000
20,000 10,500
Break-even Sales = (Fixed Cost ÷ Total Contribution) x Total Sales
= (Rs. 5,700 ÷ Rs. 9,500) x Rs. 20,000 = Rs. 12,000
Check:
Product   Sales ratio   BE sales in the same ratio   Variable cost ratio   Variable cost   Contribution
                                   Rs.                                          Rs.             Rs.
X             50%                 6,000                      60%               3,600           2,400
Y             25%                 3,000                      50%               1,500           1,500
Z             25%                 3,000                      40%               1,200           1,800
Total        100%                12,000                                        6,300           5,700
Fixed costs 5,700
Profit or Loss Nil
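The composite-ratio calculation of Illustration 5.6 can be sketched as follows, using the product sales and variable costs given above:

products = {"X": (10_000, 6_000), "Y": (5_000, 2_500), "Z": (5_000, 2_000)}  # (sales, variable cost)
fixed_cost = 5_700

total_sales = sum(s for s, _ in products.values())
total_contribution = sum(s - v for s, v in products.values())
composite_pv = total_contribution / total_sales      # 47.5%
break_even_sales = fixed_cost / composite_pv         # Rs. 12,000
print(break_even_sales)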
Once we know the various components of break-even point, it is possible to find out missing information.
Illustration 5.7 What will be the C/S ratio and the profit when fixed costs are Rs. 20,000, break-even sales Rs. 40,000 and actual sales Rs. 1,00,000?
We know: BEP = FC ÷ (C/S ratio)
or, C/S ratio = FC ÷ BEP = Rs. 20,000 ÷ Rs. 40,000 = 0.50 or 50%
∴ Profit = Contribution - Fixed cost = (Rs. 1,00,000 x 0.50) - Rs. 20,000 = Rs. 30,000
Construction of a Break-even Chart
A Break-even Chart is drawn on a graph paper. Costs and revenue are plotted on the 'Y' axis and activity or
volume is plotted on the 'X' axis. 'X' axis may be expressed in a number of ways, e.g.,
Where, however, there are a number of products of different measuring units which require different plant
capacity, it is difficult to plot them on the basis of percentage activity or volume. Here, Standard Hours may be
the appropriate expression. In other cases, sales value is widely used because profit is not realized unless
goods are sold. However, when sufficient data are available, it is desirable that a combination of methods of
expression is used.
The following is an example of a BEC drawn from the schedule below on the basis of the data in the previous illustration.
Procedure
1. Represent fixed cost Rs. 80,000 by a line parallel to 'X' axis. Plot the variable costs for different levels of
activity over fixed cost line. Join the variable cost line to fixed cost line at zero activity level. The resultant
line will represent total cost line—variable cost having been added to fixed cost.
2. Similarly, determine sales value at various levels of activity and plot them on the graph paper and join to
zero in the graph. This line will represent sales value.
3. The sales line will cut the total cost line at a point which is known as the break-even point. The break-even
sales will be determined by dropping a perpendicular to the 'X' axis from the point of intersection and
measuring the horizontal distance from the zero point to the point at which the perpendicular is drawn.
Another perpendicular to the 'Y' axis from the point of intersection will indicate (vertically) the break-even
sales value.
Interpretation
The break-even chart will give a vivid picture of profit or loss at different levels of activity. For instance, where
the sales line is above the total cost line, there is profit; where it is below the total cost line, there is a loss; and
where total cost equals total sales, there is no profit or loss.
In Figure 5.2, fixed cost line has been plotted first. Alternatively, variable cost line may be plotted first and then
fixed cost line over the variable cost line. This type of presentation is more helpful to the management for
decision-making inasmuch as it shows clearly the contribution margin at any volume of sales. Further, it
appears from the chart that below the break-even point, it is the fixed cost which is not being covered fully.
Thus, it is in line with the concept of marginal costing. Unless, therefore, there is a contrary instruction, this form
of presentation, known as the contribution break-even chart (Fig. 14.2), should be followed.
Margin of Safety
This is represented by the excess of sales over and above the break-even point. In the preparation of a BEC, one of the assumptions made is that production will coincide with sales. Therefore, it may be said that the margin of safety is also the excess of production over the break-even point. In the chart, it is the distance between the BEP and the present sales or production. Margin of Safety (M/S) may be expressed in sales volume or value or in percentage, e.g.,
Present sales          Break-even sales        M/S
Rs. 2,00,000           Rs. 1,00,000            Rs. 1,00,000
(10,000 units)         (5,000 units)           (5,000 units)
                                               or (1,00,000/2,00,000) x 100 = 50%
The percentage form of expression is generally used. The Margin of Safety can also be calculated with the help of the formula:
M/S = Profit ÷ P/V ratio
M/S is an indicator of the strength of a business: a high margin indicates that profit will be made even if there is a substantial fall in sales or production. For instance, if sales in the illustration fall by even, say, 40%, the company will still be making profits since its margin of safety is 50%. On the other hand, if the margin is small, a small drop in sales or production will be a serious matter. In such a case, management may take many valuable decisions, such as:
In inter-firm comparison, margin of safety may be used to indicate relative position of firms. For instance,
consider the following statement:
                       Company A        Company B
                           Rs.              Rs.
Total Sales            2,00,000         1,00,000
Break-even Sales       1,00,000           80,000
M/S                    1,00,000           20,000
                        or 50%            or 20%
Therefore, it may be concluded that if the rate of profit earned above break-even sales is the same for Companies A and B, Company A is in a much stronger position than Company B.
Angle of Incidence
This is the angle between sales and total cost line (see Fig. 14.1). This angle is an indicator of profit-earning
capacity over the break-even point. Therefore, the aim of the management will be to have a large angle which
will indicate earning of high margin of profit once fixed overheads are covered. On the other hand, a small angle
will mean that even if profits are being made, they are being made at a low rate. This in turn suggests that
variable costs form a major part of cost of sales.
However, if Margin of Safety and Angle of Incidence are considered together, they will be more informative. For
example, a high margin of safety with a large angle of incidence will indicate the most favourable conditions of a
business or even the existence of monopoly position.
Profit-Volume Graph
This shows the relationship between profit and volume. The P/V graph is a simplified form of the Break-even Chart; it requires the same basic data for its construction and suffers from the same limitations (see Fig. 14.3). A P/V graph can be constructed if any two of the following data are known:
(ii) Profit at a given level of activity, and (iii) Break-even point.
Illustration 5.9 The following data relate to a company for the year ended 31st December, 2005:
Units produced: 20,000
Fixed Overheads: Rs. 50,000
Selling price per unit: Rs. 10
Prepare a P/V graph. In order to construct the graph, it is necessary to ascertain the profit at the present level of activity. Thus,
Procedure
1. The graph is divided into two areas—the vertical axis above the zero line represents profit area and the
vertical axis below the zero line represents the loss or fixed cost area.
2. A scale for sales on the horizontal (zero) axis is selected.
3. A scale for profit and fixed cost on the vertical axis is also selected.
4. Points are plotted for profits and fixed cost and they are then connected by a straight line which intersects
the sales line at the horizontal axis. The point of intersection is the break-even point.
A profit-volume graph may be used for a variety of purposes, viz., determining break-even point and showing
the impact on profits of selling at different prices for a product, forecasting costs and profits resulting from
changes in sales volume, showing the deviations of actual profit from anticipated profit, relative profitability
under conditions of high or low demand for a product, etc. Two such uses are shown below.
(a) Relative profitability under conditions of high and low demand: When two firms are comparable and operate in the same market facing the same kind of competition, their profitability under conditions of changing demand may be compared using the profit graph. Typically, a 'high-tech' company will have a higher amount of fixed cost and will hence be exposed to a greater degree of operating risk; a 'low-tech' company, having a lower amount of fixed cost, will face a lower degree of operating risk in the event of a fall in demand.
Illustration 5.10 Two businesses A. B. Ltd and C. D. Ltd sell the same type of product in the same type of
market. Their budgeted profit and loss accounts for the year 2005 are as follows:
                                                        A. B. Ltd             C. D. Ltd
P/V ratio ((S - V)/S x 100)                                20%                 33 1/3%
Break-even Sales (Fixed cost ÷ P/V ratio)        Rs. 15,000 ÷ 20%      Rs. 35,000 ÷ 33 1/3%
                                                   = Rs. 75,000           = Rs. 1,05,000
Margin of Safety (Present sales - Break-even
sales, or Profit ÷ P/V ratio)                      Rs. 75,000             Rs. 45,000
Although the total costs of both the firms are the same, the fixed costs of A. B. Ltd are lower than those of C. D. Ltd. As a result, the break-even point in the former case is reached sooner (i.e., at a lower level of activity), as will be evident from the following graph.
It appears from the graph that, once the break-even point has been reached, A. B. Ltd will earn greater profit than C. D. Ltd for sales below Rs. 1,50,000. At a volume of Rs. 1,50,000 both will earn the same profit. Since the rate of profit-earning of C. D. Ltd is greater than that of A. B. Ltd (vide the angles of incidence), for volumes above Rs. 1,50,000 C. D. Ltd will earn more profit than A. B. Ltd. Thus, in case of heavy demand C. D. Ltd will earn greater profits, while A. B. Ltd will earn greater profits in conditions of low demand for the product.
(b) Profit chart for different product prices: The effect on break-even point and profit of charging different prices
for a product can be seen from the profit chart. Since different prices are being compared, the use of units is
desirable.
A profit chart is shown in Figure 5.6. The chart is based on the following information:
When a company deals in a number of products, it is possible, and indeed desirable, to draw a break-even
chart for the company as a whole (i.e., considering all the products in one chart). In such a case, the breakeven
point is where the average contribution line cuts the fixed cost-line (Fig. 14.6), assuming proportions of sales-
mix remain unchanged.
The procedure for drawing up a multi-product break-even chart may be summarized as follows:
1. Calculate P/V ratio for each product and arrange the products in descending order on the basis of P/V
ratios.
2. 'X'-axis would represent sales value while 'Y'-axis would represent contribution and fixed cost.
3. Draw the fixed cost line parallel to the 'X'-axis.
4. Take the product having the highest P/V ratio and plot its contribution against sales; then take the product
having second highest P/V ratio and plot cumulative contribution against cumulative sales; the process will
end with plotting by the product having the lowest P/V ratio.
5. Obtain the average contribution slope by joining the origin to the end of the last line plotted. The break-even
point is the point of intersection of average contribution line and fixed cost line.
Illustration 5.11 ABC Co. Ltd produces and sells three products—Y, X and Z. From the following information
relating to these products for a period, draw up a break-even chart to determine the breakeven point:
Y X Z Total
Sales (Rs.) 25,000 40,000 35,000 1,00,000
Variable costs (Rs.) 15,000 20,000 28,000 63,000
Fixed costs (Rs.) 18,500
The P/V ratio of each product should be calculated first, and then in order of importance of P/V ratios a table for
cumulative sales and contribution should be prepared and plotted on the graph paper.
P/V ratio = (S - V)/S x 100
                             Sales                        Contribution
Product   P/V ratio   Productwise   Cumulative     Productwise   Cumulative
                          Rs.           Rs.             Rs.           Rs.
X            50%        40,000        40,000          20,000        20,000
Y            40%        25,000        65,000          10,000        30,000
Z            20%        35,000      1,00,000           7,000        37,000
Thus, the break-even sales can be read from the chart as Rs. 50,000.
Note: The break-even sales determined graphically can be verified by applying the formula:
B/E sales = (Fixed cost ÷ Total contribution) x Total sales
          = (Rs. 18,500 ÷ Rs. 37,000) x Rs. 1,00,000 = Rs. 50,000
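The working above can be verified quickly in code. The Python sketch below ranks the three products of Illustration 5.11 by P/V ratio, builds the cumulative sales and contribution columns, and checks the break-even sales with the formula just given.

# Sales and variable cost of each product in Illustration 5.11.
products = {"X": (40_000, 20_000), "Y": (25_000, 15_000), "Z": (35_000, 28_000)}
fixed_costs = 18_500

# Rank products by P/V ratio and build the cumulative columns of the chart.
rows = sorted(((name, (s - v) / s, s, s - v) for name, (s, v) in products.items()),
              key=lambda r: r[1], reverse=True)
cumulative_sales = cumulative_contribution = 0
for name, pv, sales, contribution in rows:
    cumulative_sales += sales
    cumulative_contribution += contribution
    print(f"{name}: P/V {pv:.0%}, cumulative sales {cumulative_sales:,}, "
          f"cumulative contribution {cumulative_contribution:,}")

# Verify with the formula: B/E sales = fixed cost / total contribution x total sales.
total_sales = sum(s for s, _ in products.values())
total_contribution = sum(s - v for s, v in products.values())
print(f"Break-even sales: Rs. {fixed_costs / total_contribution * total_sales:,.0f}")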
Different break-even charts may be prepared to suit different purposes. Some of the most common types of
charts (in addition to those already discussed) are:
Break-even charts with the exception of the last-mentioned one are discussed below, one by one, in a nutshell.
Determination of optimum volume and selling price through break-even chart is shown later on (vide Section II).
If the chart contains only details of appropriations of profit, it is called Profit Appropriations Breakeven Chart.
Detailed analysis, particularly of various elements of variable costs, helps management in both policy decisions
and control functions.
A detailed break-even chart is shown in Figure 5.8. The chart is based on the following information:
1. Fixed costs requiring cash outlay during the period covered by the chart (e.g., salaries, rent, rates,
insurance, etc.)
2. Fixed costs not requiring immediate cash, e.g., depreciation, deferred expenses such as research and
development, advertisement, etc.
The former (type 1) is shown at the base, i.e., parallel to X-axis like the conventional break-even chart while
fixed costs not requiring immediate cash outlay (type 2) are shown last, i.e., after variable costs. Variable costs,
which are assumed to be payable in cash during the period, are plotted as usual. If, however, credit
transactions are involved here, the portion of variable costs not requiring immediate payment should be treated
like type 2 fixed costs.
Cash break-even charts are used in cash flow analysis and are extremely useful to enterprises running short of cash. They are a valuable guide in both short-run investment and financing decisions.
A cash break-even chart is shown in the figure. The chart is based on the information given overleaf.
When Budgetary Control and Marginal Costing are combined, a break-even chart comparing budgeted and actual
costs, sales, profits and break-even points is prepared. By pinpointing deviations between budgeted/standard
and actual figures, it serves as an extremely useful tool in management control and is known as control break-
even chart. But detailed analysis of deviations or variances according to originating causes and also into
controllable and non-controllable portions is not possible graphically. Such analysis should, however,
supplement the control chart. A control break-even chart is shown in Figure 5.10.
Figure 5.10 Control Break-even chart
Advantages
1. It is simple to compile and understand. Facts presented to the management in a graphical way are
understood by them more easily than those contained in the Profit and Loss Account, Operating
Statements, Cost Schedules, etc.
2. A break-even chart is a useful tool to guide management in studying the relationships of cost, volume and
profits. The chart may depict the effect on profits of changes in
3. The strength of the business and the profit-earning capacity can be ascertained from the break-even chart
by studying the margin of safety and the angle of incidence together. Many policy decisions may be taken on the basis of the margin of safety and the angle of incidence shown in the break-even chart.
4. The effect of alternative product mixes on profits can also be shown in break-even charts. This will help in
selecting the most profitable product-mix.
Limitations
In the simple chart, we have seen that the cost line and sales line look like straight lines. This is possibly due to
a number of assumptions mentioned earlier. But, in practice, a break-even chart is unlikely to be a series of
straight lines. If that is so, there might be several break-even points at different levels of activity. Therefore, a
break-even chart can be used only if the following limitations are kept in mind.
1. The break-even chart shows a static picture and hence may become out-of-date if the assumptions or conditions prevailing change after it has been prepared. For instance, it is assumed that selling price, fixed costs
and unit variable costs will remain constant at different levels of activity. But competition, demand factor,
efficiency in production, policy decisions, etc., may bring about changes in selling price, unit variable cost
and total fixed costs. In actual practice, therefore, a break-even chart is quite unlikely to look like a straight-
line graph; it may take the form of Figure 5.11. Thus, a typical breakeven chart (i.e., where unit variable cost
and selling price do not remain constant and fixed cost rises in steps) may show a number of break-even
points. But there will be only one optimum production level where profits will be higher than at any other
level. This optimum level is that point where the gap between the sales line and the total cost line is
maximum. At this point, marginal costs equal marginal revenue.
2. Break-even analysis related to the total costs and sales of a company which manufactures a variety of products will not explain the position of any one product. The effect of various product-
mixes on profits cannot be studied from a single break-even chart. But a profit graph can overcome this
objection.
3. A break-even chart does not generally take into consideration capital employed, which is one of the vital factors in many policy decisions. Therefore, policy decisions dependent wholly on a break-even chart may not be safe and reliable. (When fact rather than theory is considered, a break-even chart is unlikely to be a series of straight lines; it would look like the curvilinear chart described above, and it would not be surprising to see several break-even points at different levels of output and sales.) But even in break-even analysis, it is possible to include, for the purpose of policy decisions, additional calculations showing the interaction of the following measures:
1. Break-even Point,
2. Margin of Safety, and
3. Profit/Volume Ratio.
Marginal costing or CVP analysis is used for studying the effect on these measures of various changes in factors other than volume, such as changes in fixed costs, variable costs, selling price and sales mixture.
A change in fixed costs will change the break-even point by an equal percentage, provided variable costs and selling price remain constant. Once the break-even point changes, the margin of safety is also affected, but in the opposite direction, i.e., if the BEP comes down, the M/S increases, and vice versa. A change in fixed costs, however, has no effect on the P/V ratio.
Illustration 5.12 (Data same as in the Illustration above.) Fixed costs increase by 10%.
P/V Ratio = (16 ÷ 20) x 100 = 80%
A change in variable cost without any corresponding change in selling price and fixed costs will change the
BEP, M/S and P/V ratio.
BEP = (Rs. 80,000 ÷ 15.60) x 20 = Rs. 1,02,564, where Rs. 15.60 = Rs. 20 - Rs. 4.40.
M/S = Rs. 2,00,000 - Rs. 1,02,564 = Rs. 97,436, or about 49%.
P/V Ratio = (15.60 ÷ 20.00) x 100 = 78%.
Similarly, a change in selling price, without a corresponding change in variable costs and fixed costs, will
change the BEP, M/S and P/V ratio.
BEP = (Rs. 80,000 ÷ Rs. 14) x 18 = Rs. 1,02,857.
Thus, a 10% decrease in selling price has a greater effect on BEP, M/S and P/V ratio than a 10% increase in variable cost.
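The three cases above can be recomputed together. The Python sketch below uses the base data implied by the worked figures (selling price Rs. 20, variable cost Rs. 4 per unit, fixed cost Rs. 80,000, present sales Rs. 2,00,000); these base figures are inferred from the calculations shown rather than stated explicitly in the illustration.

# Base data inferred from the worked figures above.
def cvp(selling_price, variable_cost, fixed_cost, sales_value):
    pv_ratio = (selling_price - variable_cost) / selling_price
    break_even = fixed_cost / pv_ratio            # break-even sales value
    margin_of_safety = sales_value - break_even
    return pv_ratio, break_even, margin_of_safety

base = dict(selling_price=20, variable_cost=4, fixed_cost=80_000, sales_value=200_000)
scenarios = {
    "base position": base,
    "fixed cost +10%": {**base, "fixed_cost": 88_000},
    "variable cost +10%": {**base, "variable_cost": 4.40},
    "selling price -10%": {**base, "selling_price": 18, "sales_value": 180_000},
}
for label, data in scenarios.items():
    pv, bep, mos = cvp(**data)
    print(f"{label:>20}: P/V {pv:.1%}, BEP Rs. {bep:,.0f}, M/S Rs. {mos:,.0f}")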
Where sales revenue is a composite figure consisting of sales of several types of products having different
individual P/V ratios, the overall P/V ratio will change with changes in the sales mixture. In such a case, even if
budgeted sales volume is met, the actual profit will be lower than that budgeted if the proportion of low-margin
products sold exceeds that anticipated. Thus, change in sales mixture will also change the BEP and hence the
M/S.
Illustration 5.15 Assuming the budgeted sales of Rs. 6,000 represent sales of four products—A, B, C and D—
which are expected to be sold in the mixture below, profit of Rs. 630, a break-even point of Rs. 4,200, a margin
of safety of Rs. 1,800 (or 30%) and a P/V ratio of 35% will result as follows;
A B C D Total
Rs. Rs. Rs. Rs. Rs.
Sales 2,000 2,500 1,000 500 6,000
Marginal Cost 1,200 1,700 800 200 3,900
Contribution 800 800 200 300 2,100
Fixed Costs 1,470
Profit 630
P/V Ratio (%)          40      32      20      60      35
% of total sales      33⅓     41⅔     16⅔      8⅓     100
If, however, sales should shift towards a larger proportion of the products carrying lower P/V ratios, the result
would be as follows:
A B C D Total
% of sales changed to     25%     36⅔%     33⅓%      5%     100%
Rs. Rs. Rs. Rs. Rs.
Sales 1,500 2,200 2,000 300 6,000
Marginal Cost 900 1,496 1,600 120 4,116
Contribution 600 704 400 180 1,884
Fixed Costs 1,470
Profit 414
P/V Ratio (%) 40 32 20 60 31.4
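The effect of the shift in sales mixture can be confirmed with a short calculation. The Python sketch below recomputes the composite P/V ratio, break-even point, margin of safety and profit for the budgeted and the shifted mixtures of Illustration 5.15.

fixed_costs = 1_470

def mix_summary(label, sales, marginal_costs):
    # Composite P/V ratio, break-even point, margin of safety and profit for a mix.
    total_sales = sum(sales.values())
    total_contribution = sum(sales[p] - marginal_costs[p] for p in sales)
    pv_ratio = total_contribution / total_sales
    break_even = fixed_costs / pv_ratio
    print(f"{label}: P/V {pv_ratio:.1%}, BEP Rs. {break_even:,.0f}, "
          f"M/S Rs. {total_sales - break_even:,.0f}, profit Rs. {total_contribution - fixed_costs:,.0f}")

mix_summary("budgeted mix",
            {"A": 2_000, "B": 2_500, "C": 1_000, "D": 500},
            {"A": 1_200, "B": 1_700, "C": 800, "D": 200})
mix_summary("shifted mix",
            {"A": 1_500, "B": 2_200, "C": 2_000, "D": 300},
            {"A": 900, "B": 1_496, "C": 1_600, "D": 120})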
Very often, management may fix up sales target for a period based on a desired amount of profit. In such a
case, the desired sales volume would be determined as follows:
Illustration 5.16
What would be the volume of sales for a desired profit (before tax) of Rs. 6,000 p.a.?
When the desired profit is taken after tax, the above formula has to be modified as follows:
Illustration 5.17 Assuming a 40% tax rate in the preceding Illustration, the required sales volume would be computed thus:
It may be mentioned that to find out sales value needed to achieve a profit (before or after tax), the above
formulae should be adjusted to divide by the P/V ratio instead of unit contribution.
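Since the formula boxes referred to above are not reproduced here, the Python sketch below uses the usual marginal-costing forms: required units are (fixed cost + desired profit) divided by unit contribution, an after-tax target is first grossed up by (1 - tax rate), and the required sales value divides by the P/V ratio instead. The example figures in it are assumptions chosen only to show the arithmetic.

def required_units(fixed_cost, desired_profit, unit_contribution, tax_rate=0.0):
    # An after-tax target is grossed up to its pre-tax equivalent first.
    pre_tax_profit = desired_profit / (1 - tax_rate)
    return (fixed_cost + pre_tax_profit) / unit_contribution

def required_sales_value(fixed_cost, desired_profit, pv_ratio, tax_rate=0.0):
    pre_tax_profit = desired_profit / (1 - tax_rate)
    return (fixed_cost + pre_tax_profit) / pv_ratio

# Assumed example figures: fixed cost Rs. 80,000, unit contribution Rs. 16
# (P/V ratio 80%), desired after-tax profit Rs. 48,000, tax rate 40%.
print(required_units(80_000, 48_000, 16, tax_rate=0.40))          # 10,000 units
print(required_sales_value(80_000, 48_000, 0.80, tax_rate=0.40))  # Rs. 2,00,000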
The management of a firm may sometimes be confronted with multiple changes in its environment. It may be by
way of reduction in unit selling price to take advantage of increased sales volume, some changes in production
methods which may again reduce unit variable cost but increase fixed cost substantially, and so on. Even in
such cases, the required sales to earn a desired amount of after-tax profit may be computed by bringing all
these changes together simultaneously by the formula:
But when desired profit is given as a percentage of total sales which are required to be determined, we have to
form an equation based on the basic principles of marginal costing and solve it to arrive at the results.
Illustration 5.18
Unit selling price Rs. 20
Unit variable cost Rs. 8
Fixed cost p.a. Rs. 33,750
Corporate tax rate 40%
Required:
(a) What will be the sales to earn a 15% return on sales before tax?
(b) What will be the sales to earn a 15% return on sales after tax?
(a) Sales = Variable expenses + Fixed expenses + Target profit. Let the desired sales be X.
Statement of Profit
Sales Rs. 96,429
Less: Variable Costs (40%) 38,572
Contribution 57,857
Less: Fixed cost 33,750
Profit before tax 24,107
Less: Tax (40%) 9,643
Profit after tax (15% of sales) 14,464
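The statement above verifies requirement (b). The Python sketch below solves both requirements algebraically; the unit variable cost, which is not printed in the data, is taken as Rs. 8 (40% of the selling price), consistent with the 40% variable-cost ratio used in the statement of profit.

variable_cost_ratio = 8 / 20          # unit variable cost Rs. 8 on a Rs. 20 selling price
fixed_cost = 33_750
tax_rate = 0.40
target_return_on_sales = 0.15

# (a) Before tax: X = 0.40X + 33,750 + 0.15X, so X(1 - 0.40 - 0.15) = 33,750.
sales_before_tax = fixed_cost / (1 - variable_cost_ratio - target_return_on_sales)

# (b) After tax: the pre-tax profit must be 0.15X / (1 - 0.40) = 0.25X.
pre_tax_ratio = target_return_on_sales / (1 - tax_rate)
sales_after_tax = fixed_cost / (1 - variable_cost_ratio - pre_tax_ratio)

print(f"(a) Required sales: Rs. {sales_before_tax:,.0f}")   # Rs. 75,000
print(f"(b) Required sales: Rs. {sales_after_tax:,.0f}")    # Rs. 96,429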
USE OF PROBABILITIES
Under conditions of risk and uncertainty, probabilities may be used to estimate the likelihood that a 'critical'
outcome might or might not happen. One of the obvious examples of this is to estimate the probability that an
organization will at least break-even with its sales.
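As a hedged sketch of this idea, the Python fragment below estimates the probability of at least breaking even on the assumption that sales are approximately normally distributed; the mean, standard deviation and break-even sales used are assumed figures, not data from the text.

from statistics import NormalDist

expected_sales = 200_000      # assumed mean of the sales estimate (Rs.)
sales_std_dev = 40_000        # assumed standard deviation of sales (Rs.)
break_even_sales = 150_000    # assumed break-even sales (Rs.)

# P(sales >= break-even) = 1 - CDF(break-even) under the normal assumption.
probability = 1 - NormalDist(expected_sales, sales_std_dev).cdf(break_even_sales)
print(f"Probability of at least breaking even: {probability:.1%}")   # about 89.4%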
The possible advantages of marginal costing, as compared to that of absorption costing, are stated below.
1. Greater control over costs is possible. This is so because fixed costs are excluded from product costs and
management can concentrate on marginal cost which is a constant ratio.
2. It is an aid to management in taking many valuable decisions. Under marginal costing, data are presented in a manner revealing marginal costs and contribution, which facilitates policy decisions in many problems, such as:
(i) introduction of a product;
(ii) quoting selling prices and tendering for contracts in times of competition;
(iii) whether to make or buy;
(iv) reduction of prices in times of competition or depression;
(v) selecting the most profitable product or sales-mix;
(vi) alternative methods to be employed in manufacturing;
(vii) limiting factors;
(viii) utilization of spare capacity;
(ix) profit planning—break-even charts, profit-volume graphs may be used in profit planning;
(x) assessment of capital projects to be undertaken; and
(xi) selection of the most profitable level of activity, etc.
3. For all practical purposes, marginal costs will be the product costs and hence there will be no vitiation of
costs due to change in level of performance as marginal costs will tend to be a constant ratio. Under
absorption costing, unit cost will vary depending upon the level of activity and this may lead to confusion.
4. Closing stocks of finished goods and work-in-progress are valued at marginal costs. Apart from simplicity in
the valuation of stocks, this will lead to greater accuracy in arriving at profits.
5. The marginal cost statements are understood by management more easily than those produced under
absorption costing. For instance, the foremen will be more interested in those costs which are variable and
which can be controlled by their actions. It is, therefore, very simple to understand and can be combined
with Standard Costing.
6. Since fixed costs are excluded, it eliminates the strenuous task of allocating, apportioning and absorbing
overheads. As a result, there will be no under- or over-recovery of fixed overheads.
The technique, however, suffers from the following limitations:
1. It is difficult to analyze overheads into fixed and variable elements because many expenses considered to be variable or fixed may not be exactly the same at various levels of activity. Moreover, in marginal costing, there is no place for semi-variable or semi-fixed overheads, which have to be segregated into fixed and variable elements. The segregation of semi-variable costs is also a difficult task.
2. There is the danger of taking policy decisions on the basis of information presented under marginal costing
technique. For example, in the long run, selling price should not be fixed simply by looking at contribution as
it may result in losses or low profits. The other important factors such as fixed costs, capital employed, etc.,
should also be taken into consideration in fixing selling prices.
3. There is also the danger of valuing finished stocks, work-in-progress, transfer from one process to another,
etc. at marginal costs only. The arguments against valuing stocks at marginal costs may be summarized as
follows:
(a) In case of loss by fire, full loss cannot be recovered from the insurance company.
(b) Profits will be lower than that shown under absorption costing and hence may be objected to by the tax
authorities.
(c) For Balance Sheet purpose, closing stocks are to be valued at lower of market price and cost. Marginal
costs may not be acceptable to the auditor as true costs for this purpose.
(d) Circulating assets will be understated in the Balance Sheet and thus the Balance Sheet will not exhibit
a 'true and fair view' of the state of affairs.
4. Cost control can also be achieved with the help of other techniques such as Standard Costing and
Budgetary Control. In Standard Costing, volume variance will show the effect of change in output on fixed
costs and hence there will be no vitiation of costs.
The following data have been extracted from the budgets and standard costs of Hewitson Ltd, a company which
manufactures and sells a single product.
£ per unit
Selling price 45.00
Direct materials cost 10.00
Direct wages cost 4.00
Variable overheads cost 2.50
Fixed production overhead costs are budgeted at £ 400,000 per annum. Normal production levels are thought
to be 320,000 units per annum.
The following pattern of sales and production is expected during the first six months of 2005:
(a) prepare profit statements for each of the two quarters, in a columnar format, using (i) marginal costing, and (ii) absorption costing;
(b) reconcile the profits for the quarter January-March 2005 in your answer to (a) above.
(c) write up the fixed production overhead control account for the quarter to 31 March, 2005, using absorption
costing principles. Assume that the fixed production overhead costs incurred amounted to £ 102,400 and
the actual production was 74,000 units.
Solution
(a) (i) Profit statements under marginal costing (£'000):
                                                  Jan-Mar 2005    Apr-Jun 2005
Production costs (@ £10 + £4 + £2.50 = £16.50)         1,155           1,650
Add: Opening stock (@ £16.50)                              -             165
                                                       1,155           1,815
Less: Closing stock (@ £16.50)                           165             330
Variable cost of sales                                   990           1,485
Variable selling and distribution costs                   90             135
                                                       1,080           1,620
Contribution (sales less variable costs)               1,620           2,430
Less: Fixed costs [(400 + 80 + 120) x 3/12]              150             150
Profit                                                 1,470           2,280
Workings:
(W1)                                                            £
Total absorption cost per unit = Direct materials            10.00
                                 + Direct wages               4.00
                                 + Variable overheads         2.50
                                 + Fixed overheads
                                   (£400,000 ÷ 320,000 units) 1.25
                                                             17.75
(b) Reconciliation of Profits Reported for the Quarter Ended 31 March, 2005
£'000
Profit as per marginal costing 1,470
Add: Fixed production overheads c/f in closing stock (written-off in marginal
costing, deferred in absorption costing) 10,000 units x £ 1.25/unit 12.5
Profit as per absorption costing 1,482.5
(c) Fixed Production Overhead Control Account (£'000)
Actual overheads incurred    102.4   Fixed overheads to work-in-progress
                                     (74,000 x £1.25)                        92.5
                                     Variance to Profit and Loss A/c          9.9
                             102.4                                          102.4
Raj Ltd manufactures three products—X, Y and Z. The unit selling prices of these products are Rs. 100, Rs.
160 and Rs. 75 respectively. The corresponding unit variable costs are Rs. 50, Rs. 80 and Rs. 30. The
proportions (quantitywise) in which these products are manufactured and sold are 20%, 30% and 50%
respectively. The total fixed costs are Rs. 14,80,000.
Calculate overall break-even quantity and the productwise break-up of such quantity.
Solution
Then the productwise break-up of overall break-even point in units would be:
From productwise unit contribution and total contribution at break-even point (= fixed cost), we can find
productwise break-up of overall break-even quantity as follows:
X Y Z
Rs. Rs. Rs.
Unit selling price 100 160 75
Less: Unit variable cost 50 80 30
Unit contribution 50 80 45
Contribution at break-even point       10Q               24Q               22.5Q
                                 (Rs. 50 x 0.2Q)   (Rs. 80 x 0.3Q)   (Rs. 45 x 0.5Q)
At BEP: Contribution = Fixed cost. Hence, 10Q + 24Q + 22.5Q = Rs. 14,80,000.
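Completing the equation above, the Python sketch below finds the overall break-even quantity Q and its product-wise break-up; with the figures as given, the answer is not a round number (about 26,195 units in total).

unit_contribution = {"X": 100 - 50, "Y": 160 - 80, "Z": 75 - 30}
mix = {"X": 0.20, "Y": 0.30, "Z": 0.50}
fixed_costs = 14_80_000

# Contribution of one composite unit Q = 10 + 24 + 22.5 = 56.5, as in the equation above.
contribution_per_q = sum(unit_contribution[p] * mix[p] for p in mix)
overall_bep = fixed_costs / contribution_per_q
print(f"Overall break-even quantity: about {overall_bep:,.0f} units")
for product, share in mix.items():
    print(f"  {product}: about {overall_bep * share:,.0f} units")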
Problem 3
The following details are obtained from XYZ Co. Ltd for a calendar year:
(a) Calculate P/V ratio, break-even point and margin of safety from the above data.
(b) Find the effect on P/V ratio, break-even point and margin of safety of changes in
each of the following:
(i) 10% increase in selling price;
Solution
Present position: P/V ratio 50%; break-even sales = Rs. 40,000 ÷ 50% = Rs. 80,000; margin of safety = Rs. 1,60,000 - Rs. 80,000 = Rs. 80,000, or 50%.
(i) Break-even sales = Rs. 40,000 ÷ 54.55% = Rs. 73,333; margin of safety = Rs. 1,76,000 - Rs. 73,333 = Rs. 1,02,667, or 58.33%.
(ii) Break-even sales = Rs. 40,000 ÷ 45% = Rs. 88,888; margin of safety = Rs. 1,60,000 - Rs. 88,888 = Rs. 71,112, or 44.44%.
(iii) Break-even sales = Rs. 36,000 ÷ 50% = Rs. 72,000; margin of safety = Rs. 1,60,000 - Rs. 72,000 = Rs. 88,000, or 55%.
(iv) No effect on the BEP (Rs. 80,000); margin of safety = Rs. (7,200 x 20) - Rs. 80,000 = Rs. 64,000, or 44.44%.
Total Surveys Limited conducts market research surveys for a variety of clients. Extracts from its records are as
follows:
2004 2005
Total costs £ 6 million £ 6.615 million
Activity in 2005 was 20% greater than in 2004 and there was general cost inflation of 5%. Activity in 2006 is
expected to be 25% greater than in 2005 and general cost inflation is expected to be 4%.
Requirements:
(a) Derive the expected variable and fixed costs for 2006.
(b) Calculate the target sales required for 2006 if Total Surveys Limited wishes to achieve a contribution to
sales ratio of 80%.
Solution
(a) Before using the 'high and low' method on the data given to estimate the variable element due to change in
activity level, the data must be adjusted onto a comparable inflation basis. Thus, the 2004 cost will initially
be inflated by 5% to convert it to '2005 £s': £6m x 1.05 = £6.3 million.
Change in total (adjusted) cost from 2004 to 2005 = £(6.615 - 6.3)m = £315,000.
This is attributable to a 20% increase in activity, i.e., the variable cost for 100% activity (2004 level) = £315,000 ÷ 0.2 = £1.575 million (in 2005 £s), giving a variable cost for 2005 of £1.575m x 1.2 = £1.89 million.
Thus, the variable cost for 2006, taking account of a further 25% increase in activity and 4% inflation, would be £1.89m x 1.25 x 1.04 = £2.457 million.
Fixed cost for 2005 = Total cost - Variable cost = £(6.615 - 1.89)m = £4.725 million. Thus, the fixed cost for 2006, taking account of a further 4% inflation, would be £4.725m x 1.04 = £4.914 million.
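The remaining arithmetic can be laid out in a few lines of Python, as below; it simply carries the workings above through to the 2006 cost estimates and the target sales for an 80% contribution-to-sales ratio.

cost_2004, cost_2005 = 6.0, 6.615                 # total costs, £ million
inflation_2005, inflation_2006 = 0.05, 0.04
activity_growth_2005, activity_growth_2006 = 0.20, 0.25

cost_2004_restated = cost_2004 * (1 + inflation_2005)                              # £6.30m in 2005 £s
variable_at_2004_level = (cost_2005 - cost_2004_restated) / activity_growth_2005   # £1.575m
variable_2005 = variable_at_2004_level * (1 + activity_growth_2005)                # £1.89m
fixed_2005 = cost_2005 - variable_2005                                             # £4.725m

variable_2006 = variable_2005 * (1 + activity_growth_2006) * (1 + inflation_2006)
fixed_2006 = fixed_2005 * (1 + inflation_2006)
print(f"(a) 2006 variable cost £{variable_2006:.3f}m, fixed cost £{fixed_2006:.3f}m")

# (b) A contribution/sales ratio of 80% means variable cost is 20% of sales.
target_sales_2006 = variable_2006 / 0.20
print(f"(b) Target sales for 2006: £{target_sales_2006:.3f}m")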
Calculate:
Solution
Sales Profit
Rs. Rs.
(a) Second year 90,000 14,000
First year 80,000 10,000
Change 10,000 4,000
Assuming that the change in fixed cost is nil, the marginal cost equation can be used as follows:
C=F+P
or, F=C-P
= Rs. 32,000 - Rs. 10,000 = Rs. 22,000. Alternatively, F = 40% of Rs. 90,000 - Rs. 14,000 = Rs. 22,000.
(c) Sales Rs. 50,000; P/V ratio 40%.
C = F + P, so P = C - F = (40% of Rs. 50,000) - Rs. 22,000 = Rs. 20,000 - Rs. 22,000, i.e., a loss of Rs. 2,000.
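The whole working can be condensed into a short Python sketch: the P/V ratio from the change in profit over the change in sales, the fixed cost from the first year's contribution, and the result at any other sales level, here Rs. 50,000 as in part (c).

sales_year1, profit_year1 = 80_000, 10_000
sales_year2, profit_year2 = 90_000, 14_000

pv_ratio = (profit_year2 - profit_year1) / (sales_year2 - sales_year1)   # 40%
fixed_cost = pv_ratio * sales_year1 - profit_year1                        # contribution - profit = Rs. 22,000
break_even_sales = fixed_cost / pv_ratio                                   # Rs. 55,000

def profit_at(sales_value):
    # P = contribution - fixed cost = P/V ratio x sales - F
    return pv_ratio * sales_value - fixed_cost

print(f"P/V ratio {pv_ratio:.0%}, fixed cost Rs. {fixed_cost:,.0f}, "
      f"break-even sales Rs. {break_even_sales:,.0f}")
print(f"Result at sales of Rs. 50,000: Rs. {profit_at(50_000):,.0f}")      # Rs. -2,000, i.e. a loss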
Find the cost break-even points between each pair of plants whose cost functions are:
Solution It is assumed that the selling price per unit is the same between each pair of plants. Then, cost
break-even points between each pair of plant are as follows:
Plants A and B:
X= 1,50,000 units
Plant A is better for the output below 1,50,000 units since its fixed cost is lower than that of Plant B.
Plants B and C:
X = 3,00,000 units
Plant B is better for the output below 3,00,000 units. Plants A and C:
Rs. 6,00,000 + Rs. 12X= Rs. 15,00,000 + Rs. 8X 12X - 8X= 15,00,000 - 6,00,000 X = 2,25,000 units
Plant A is better for the output below 2,25,000 units and Plant C is better beyond 2,25,000 units. Reconciliation
—Plants A and B:
Workings:
∴ Sales for 1,50,000 units = Rs. 6,00,000 + Rs. 12 x 1,50,000 = Rs. 24,00,000. Selling price per unit = Rs. 24,00,000 ÷ 1,50,000 = Rs. 16.
A company has the option of buying one machine. Two machines are available, Machine E and Machine F.
From the information given below, calculate (a) the break-even point for each; (b) the level of sales at which
both are equally profitable, and (c) the range of sales at which one is more profitable than the other:
Machine E Machine F
Output p.a. (units) 10,000 10,000
Fixed costs p.a. (Rs.) 30,000 16,000
Profit at full capacity (Rs.) 30,000 24,000
Both the machines will produce identical products. The annual market demand for such product is 10,000 units
@ Rs. 10 per unit.
Solution
(a) Contribution (C = F + P): Machine E = Rs. 30,000 + Rs. 30,000 = Rs. 60,000, i.e., Rs. 6 per unit; Machine F = Rs. 16,000 + Rs. 24,000 = Rs. 40,000, i.e., Rs. 4 per unit.
Break-even point (Fixed cost ÷ Contribution per unit): Machine E = 30,000 ÷ 6 = 5,000 units; Machine F = 16,000 ÷ 4 = 4,000 units.
(b) The unit selling price of the product produced by either machine being the same, both machines will be equally profitable at that level of activity where the total cost (fixed plus variable) of production by each machine is exactly equal.
Let X be the number of units at which both machines are equally profitable. In the case of Machine E, total costs would be 4X + 30,000, and in the case of Machine F, 6X + 16,000.
Since at this level of output, total cost of production by each machine will be the same,
4X + 30,000 = 6X + 16,000
X = 7,000 units.
(c) The break-even point of Machine F is 4,000 units while it is 5,000 for Machine E. At 7,000 units, both the
machines are equally profitable. Thus, Machine F is more profitable at an output range of 4,000 to 6,999. The
P/V ratio of Machine E is greater than that of F. Therefore, above 7,000 units, the rate of profit-earning by E
would be greater than that of F. Thus, E would be more profitable at an output range of 7,001 to 10,000 units.
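A compact Python sketch of the same reasoning is given below; it derives each machine's contribution per unit from C = F + P, the break-even points, and the output at which the total costs (and hence profits) of the two machines are equal.

capacity_units, selling_price = 10_000, 10
machines = {"E": {"fixed": 30_000, "profit_at_capacity": 30_000},
            "F": {"fixed": 16_000, "profit_at_capacity": 24_000}}

for name, m in machines.items():
    contribution = m["fixed"] + m["profit_at_capacity"]          # C = F + P at full capacity
    m["contribution_per_unit"] = contribution / capacity_units
    m["variable_per_unit"] = selling_price - m["contribution_per_unit"]
    print(f"Machine {name}: contribution/unit Rs. {m['contribution_per_unit']:.0f}, "
          f"BEP {m['fixed'] / m['contribution_per_unit']:,.0f} units")

# Equal profitability where total costs are equal: 4X + 30,000 = 6X + 16,000.
e, f = machines["E"], machines["F"]
indifference_units = (e["fixed"] - f["fixed"]) / (f["variable_per_unit"] - e["variable_per_unit"])
print(f"Equally profitable at {indifference_units:,.0f} units")            # 7,000 units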
SECTION II
The concepts of marginal costing have been discussed in the previous section. How the various concepts may
be applied to serve the day-to-day needs of management in taking many strategic decisions will be illustrated in
this section. The following are some of such important areas.
Diversification of Products
Sometimes, a product may be proposed to be added to the existing product or products to utilize idle
facilities, to capture a new market, or for any other purpose. A decision has, therefore, to be taken as to the
profitability of the new product.
The new product may be manufactured if it is capable of contributing something towards fixed costs and profit
after meeting its variable costs of sales. Fixed costs are not taken into consideration on the assumption that
these costs will not change or, in other words, the product can be manufactured by the existing resources,
manpower, etc. But if, for taking a decision in this matter, the cost data are presented under the total cost method, it may appear that the new product is not at all profitable; instead, the old product may appear to be more profitable owing to the arbitrary apportionment of fixed costs.
Illustration 5.19 The following data are available in respect of Product X produced by ABC Co. Ltd.
Rs.
Sales 50,000
Direct Materials 20,000
Direct Labour 10,000
Variable Overheads 5,000
Fixed Overheads 10,000
The company now proposes to introduce a new Product Z so that sales may be increased by Rs. 10,000. There will be no increase in fixed costs and the estimated variable costs of Product Z are: Materials Rs. 4,800; Labour Rs. 2,200; and Overheads Rs. 1,400. Advise whether Product Z will be profitable or not.
The above statement will show that Product X has now become more profitable (its cost having been reduced
from Rs. 45,000 to Rs. 43,333) and that the entire profit is earned by Product X while Product Z will incur a loss
of Rs. 67. But if the data are presented under marginal costing technique as follows, the position will be quite
different.
Thus, it is clear that with the introduction of Product Z there will be no change in the profitability of Product X
and that Product Z is also yielding a contribution of Rs. 1,600 towards fixed costs and profit. Therefore, Product
Z may be introduced assuming that the capacity that will be utilized for Product Z cannot otherwise be more
profitably utilized.
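The contrast between the two presentations can be reproduced in a few lines. The Python sketch below apportions the Rs. 10,000 fixed overheads on sales value (the basis implied by the Rs. 43,333 figure quoted above) and then shows the contribution of each product under marginal costing.

sales = {"X": 50_000, "Z": 10_000}
variable_costs = {"X": 20_000 + 10_000 + 5_000, "Z": 4_800 + 2_200 + 1_400}
fixed_overheads = 10_000
total_sales = sum(sales.values())

for product in sales:
    apportioned_fixed = fixed_overheads * sales[product] / total_sales   # apportioned on sales value
    total_cost = variable_costs[product] + apportioned_fixed
    contribution = sales[product] - variable_costs[product]
    print(f"{product}: total cost Rs. {total_cost:,.0f}, "
          f"'total cost' profit Rs. {sales[product] - total_cost:,.0f}, "
          f"contribution Rs. {contribution:,.0f}")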
Where, however, introduction of a product is associated with 'specific' or 'identifiable fixed costs', such costs
should be deducted from contribution of the proposed product for the purpose of taking decisions. Thus, in such
a case, fixed costs will be divided into two groups: specific, i.e., which will have bearing on the decision, and
general, i.e., which is expected to remain constant and hence has nothing to do with the proposed decision
(see Closing down or suspending activities post).
In majority of the cases, marginal costing will be of great help to the price-fixer. However, price under normal
circumstances for a long period should be preferably based upon total costs. Of course, it can also be based
upon marginal costs if a 'high margin' is added to marginal cost to contribute towards fixed costs and profits.
When marginal costing technique is used for pricing, the principle that should govern is that price should be
equal to marginal costs plus a certain amount; the amount to be added will vary depending upon demand and
supply, competition, nature and variety of products, policy of pricing, and other related factors. If the price is
equal to marginal costs, the amount of loss will be equivalent to total fixed costs. The figure for loss will be the
same, or even lower, if production is discontinued. Therefore, even for a short period, selling price should be
ordinarily higher than marginal cost. But pricing at or below marginal costs may be considered desirable for a
shorter period in certain special circumstances, such as:
Illustration 5.20 C Ltd has been working well below normal capacity due to recession. The directors of C Ltd
have been approached by a company with an enquiry for a special purpose job. The costing department
estimated the following in respect of the job:
The directors ask you to advise them on the minimum price to be charged. Assume that there are no production difficulties regarding the job.
Here the absolute minimum price is Rs. 11,250, i.e., total of marginal costs. As this will not make any
contribution, a proportion of fixed costs of Rs. 500 may be added to make the job worthwhile. The amount to be
added will depend upon the circumstances of the case.
(b) Accepting additional orders, exporting, exploring additional markets, etc.: When additional orders are quoted
below normal price, it should be ensured that they will not affect the normal market or the goodwill of the
company or the relationship with its customers. So far as foreign markets are concerned, the effect of direct and
indirect benefits8, such as prestige of exporting, import entitlements, subsidies or any other special favours from
government, should not be lost sight of in fixing the price. It may also be assumed that the capacity proposed to
be utilized for the purpose cannot be otherwise more profitably utilized.
Illustration: A factory produces 1,000 articles for home consumption at the following costs:
Rs. Rs.
Materials 40,000
Wages 36,000
Factory Overheads
Fixed 12,000
Variable 20,000 32,000
Administration Overheads (Fixed) 18,000
Selling and Distribution Overheads:
Fixed 10,000
Variable 16,000 26,000
Total 1,52,000
The home market can consume only 1,000 articles at a selling price of Rs. 155 per article: it can consume no
more articles. The foreign market for this product can however consume additional 4,000 articles if the price is
reduced to Rs. 125.
The foreign market will yield an additional contribution of Rs. 52,000 (@ Rs. 13 for 4,000 units), Since the
factory is operating above the break-even point, an increase in contribution will lead to similar increase in profit.
Hence, the export order is worth trying.
Note: It is assumed that the additional 4,000 articles can be produced without any rise in fixed costs and that
the articles will not be re-exported to home market. It is also assumed that the capacity which is utilized for
producing 4,000 additional articles cannot be otherwise more profitably utilized.
Suitable product-mix will denote the ratios in which various products are produced and/or sold. The technique of
marginal costing may be applied in the determination of most profitable product or sales-mix. In the absence of
any limiting factor, contribution under each mix will be considered and the mix that will give the highest
contribution will be the most profitable one. So long as fixed costs remain constant, the most profitable sales-
mix is determinable on the basis of contribution only. But when changes in product-mix are associated with
changes in fixed costs, relative profitability of mixes will have to be assessed on the basis of 'net profit' and not
on 'contribution basis'. Of course, the management should study various effects and problems arising out of a
change in the mix. Some of them are:
2. Expected change in the labour composition or labour training programme;
3. Change in the machine load;
4. Requirements of additional space for production, storage, etc.; and
5. Change in the sales programming.
In short, the effect on all physical and financial programmes due to change in product-mix should be
considered.
Illustration 5.21 The directors of a company are considering sales budget for the next budget period.
From the following information, you are required to show clearly to management: (i) the marginal product cost
and the contribution per unit; and (ii) the total contributions resulting from each of following sales mixtures.
Product A Product B
Rs. Rs.
Direct materials 10 9
Direct wages 3 2
Fixed expenses (total) Rs. 800
(Variable expenses are allotted to products as 100% of
direct wages)
Selling price 20 15
Sales mixture
(i)                       Product A   Product B
                              Rs.         Rs.
Direct materials              10           9
Direct wages                   3           2
Variable expenses              3           2
Marginal cost                 16          13
Selling price                 20          15
Contribution per unit          4           2
(iii) Since the P/V ratio of Product A is higher than that of B, Product A is more profitable and therefore
the mixture that takes into account the maximum number of Product A would be the most profitable one.
This is evident from the following statement:
(iv)
Product   Contribution              Sales mixtures
           per unit         (a)                   (b)                   (c)
              Rs.      Units  Contribution  Units  Contribution  Units  Contribution
                                  Rs.                  Rs.                  Rs.
A               4       100       400        150       600        200       800
B               2       200       400        150       300        100       200
Total                   300       800        300       900        300     1,000
Sales Mix (c) will yield the highest contribution. Therefore, it should be adopted.
The problem of product or sales-mix is generally linked up with the problem of limiting factor. For principles
underlying the selection of sales-mix of this nature, refer to the discussion under next heading.
Limiting factor is a factor that limits production and/or sales. This is also known as the key factor. It may
represent shortage of materials, labour, plant capacity or sales demand. (For an illustrative list of limiting
factors, see Principal Budget Factor, pp. 659-660). In such a case, a decision has to be taken on whether to
make one product or another instead. Ordinarily, when there is no limiting factor, product selection will be on
the basis of P/V ratio, i.e., one having the highest P/V ratio will be selected. But when resources are scarce,
selection of profitable product will be on the basis of contribution per unit of limiting factor. This is applicable
when there is one limiting factor. In short, the higher the contribution per unit of limiting factor, the more
profitable is the product or product line and vice versa.
When an optimum sales-mix has to be determined in the context of a limiting factor, the product preference, for
the purpose of such sales-mix, should be strictly according to relative profitability of products based on
contribution in relation to the limiting factor.
Illustration 5.22
(a) The following particulars are extracted from the records of a company;
Per unit
Product A Product B
Sales (Rs.) 100 120
Consumption of material (kg) 2 3
Material cost (Rs.) 10 15
Direct wages cost (Rs.) 15 10
Direct expenses (Rs.) 5 6
Machine hours used 3 2
Overhead expenses:
Fixed (Rs.) 5 10
Variable (Rs.) 15 20
Direct wages per hour is Rs. 5. Comment on profitability of each product (both use the same raw material)
when—
(i) Total Sales potential in units is limited;
(ii) Total Sales potential in value is limited;
(iii) Raw Material is in short supply;
(iv) Production capacity (in terms of machine hours) is the limiting factor.
(b) Assuming Raw Material as the Key factor, availability of which is 10,000 kg, and maximum sales potential of
each product being 3,500 units, find out the product mix which will yield the maximum profit.
(a)
Per unit
Product A Product B
Rs. Rs.
Direct materials 10 15
Direct wages 15 10
Direct expenses 5 6
Prime cost 30 31
Variable overhead 15 20
Marginal cost 45 51
Sales 100 120
Contribution Rs. 55 Rs. 69
P/V ratio 0.55 0.575
Contribution per kg of materials Rs. 27.50 Rs. 23
Contribution per machine hour Rs. 18.33 Rs. 34.50
Thus, profitability of each product will be determined on the basis of the principle: the higher the contribution per
unit of limiting factor, the more profitable is the product. Accordingly, a statement of profitability under different
conditions may be prepared thus:
(c) Product preference will be in the same order as (a) (iii) subject to the condition that maximum demand for
each of the two products is 3,500 units. In other words, 3,500 units of more profitable product will be
produced first. The balance of available raw materials will then have to be utilized for the production of less
profitable product. Thus, the optimum product-mix would be:
(d) Optimum product-mix:
Product    Units    Raw material per unit (kg)    Total raw material required (kg)
A          3,500               2                              7,000
B          1,000*              3                              3,000
                                                             10,000
[* (10,000 - 7,000) ÷ 3 = 1,000 units]
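The selection rule can be expressed directly in code. The Python sketch below ranks the two products by contribution per kg of material and then builds the mix within the 10,000 kg of material and the 3,500-unit demand ceiling for each product.

products = {"A": {"contribution": 55, "kg_per_unit": 2},
            "B": {"contribution": 69, "kg_per_unit": 3}}
material_available = 10_000          # kg of raw material
max_demand = 3_500                   # maximum sales potential of each product, units

# Rank by contribution per kg of the key factor, highest first.
ranking = sorted(products, key=lambda p: products[p]["contribution"] / products[p]["kg_per_unit"],
                 reverse=True)
mix, total_contribution = {}, 0
for product in ranking:
    kg = products[product]["kg_per_unit"]
    units = min(max_demand, material_available // kg)    # demand ceiling, then material ceiling
    material_available -= units * kg
    mix[product] = units
    total_contribution += units * products[product]["contribution"]

print(mix)                                                # {'A': 3500, 'B': 1000}
print(f"Total contribution: Rs. {total_contribution:,}")  # Rs. 261,500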
In addition to one limiting factor from the production side, limitation may also come from the market in the form
of demand. Here, ranking will be based on relative contribution per unit of limiting factor and product selection
will be done in that order. But the number of units of a product to be selected in the mix will be restricted to the
number as per demand for that product.
Illustration 5.23
(a) A chemical company manufactures five different products from a single raw material. There is an abundant
supply of raw material at a rate of Rs. 1.50 per kg. The labour rate is Rs. 2 per hour for all products. For a
certain budget period, the plant has an effective capacity of 21,000 labour hours. Present equipment can
produce all the products. The factory overhead rate also is Rs. 2 per hour (Rs. 1.40 fixed and Re. 0.60
variable). The selling commission is 10 per cent of the product price.
With the following data as basis, you are required to suggest a suitable sales-mix which will maximize the
company's profits. What will be the maximum profit?
(b) Suppose, in the above situation (a), overtime working up to a maximum of 3,500 hours is possible.
Overtime will add Rs. 5,000 to fixed overheads, a doubling of labour rates and a 50% increase in variable
overheads. Do you recommend overtime working?
(a) For suggesting an optimum sales-mix, product profitability is to be determined first on the basis of
contribution per hour which is the limiting factor. This is done in the following statement.
Product   Direct     Direct    Variable    Selling       Marginal     Selling   Contribution   Contribution   Rank
          Material   Labour    Fy. O.H.    Commission    Cost of      Price     per unit       per hour
                                                         Sales
            Rs.        Rs.       Rs.         Rs.           Rs.          Rs.        Rs.            Rs.
A 1.05 2.00 0.60 0.80 4.45 8.00 3.55 3.55 2
B 0.75 1.60 0.48 0.75 3.58 7.50 3.92 4.90 1
C 2.25 3.00 0.90 1.20 7.35 12.00 4.65 3.10 3
D 1.95 2.20 0.66 0.90 5.71 9.00 3.29 2.90 4
E 2.25 2.80 0.84 1.10 6.99 11.00 4.01 2.86 5
Thus, under the condition of limited plant capacity (labour hours), product preference would be in the following order: B, A, C, D and E.
In determining the sales-mix based on above preference, another limiting factor, i.e., demand, has to be given
due consideration. In other words, the product preference according to profitability analysis would be the same
subject to the number of units equivalent to market demand. The optimum sales-mix, and the amount of profit
thereof, accordingly, would be:
(* Market demand 5,000 units. Hours available 770, which can produce 770 ÷ 1.40 = 550 units only. Therefore, for E the target should be 550 units.)
(b) 3,500 additional hours can produce 2,500 units (i.e., 3,500 ÷ 1.4) of E. The financial effect of the proposal is shown below:
Rs.
Sales (2,500 units @ Rs. 11) 27,500
Less: Marginal cost of sales:
Cost per unit as per (a): Rs. 6.99
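The financial effect of the overtime proposal, which the truncated statement above begins, can be completed along the following lines. The Python sketch assumes, per the question, that labour rates double and variable overheads rise by 50% on overtime, while material cost and the 10% selling commission per unit are unchanged; on these assumptions the extra contribution falls short of the extra fixed cost, so overtime would not be recommended.

units = 2_500                      # 3,500 overtime hours ÷ 1.4 hours per unit of E
selling_price = 11.00
material, labour, variable_oh, commission = 2.25, 2.80, 0.84, 1.10

# Overtime: labour doubled, variable overheads +50%, material and 10% commission unchanged.
overtime_marginal_cost = material + labour * 2 + variable_oh * 1.5 + commission    # about Rs. 10.21
additional_contribution = units * (selling_price - overtime_marginal_cost)          # about Rs. 1,975
additional_fixed_cost = 5_000
net_effect = additional_contribution - additional_fixed_cost

print(f"Marginal cost per unit on overtime: Rs. {overtime_marginal_cost:.2f}")
print(f"Net effect of working overtime: Rs. {net_effect:,.0f}")    # about Rs. -3,025, so not recommended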
If the number of resources in limited supply increases to more than just one in a particular decision situation,
the ranking given by contribution per unit of one limiting factor may conflict with that given by the contribution
per unit of another limiting factor and consequently the decision rule stated earlier in this context becomes
ambiguous. The complexity increases with the increase in the number of limited resources. In such a case, the
measurement of the effect on alternatives must cope with the complexities introduced by the interactions
between scarcities. Mathematical techniques like Linear Programming are to be applied for handling such
problems.
Marginal costing techniques are often used in comparing the alternative methods of manufacture, i.e., whether
one machine is to be employed instead of another; number of operators to work with a machine; machine work
or hand work, etc. When fixed costs remain constant, the basis of selection will be the relative contribution
available from each method. In short, the method of manufacture that will give the largest contribution is to be
selected. Where, however, fixed costs change, the decision will be taken on the basis of relative amount of
profit. In the process of selection, limiting factor, if any, should not be lost sight of. Where, however, time taken
in production is stated, weight should be given to time factor.
Illustration 5.24: Product A can be produced either by Machine X or by Machine Y. Machine X can produce 10
units of A per hour and Y, 20 units per hour. Total machine hours available are 3,000 hours per annum. Taking
into account the following comparative costs and selling price, determine the profitable method of manufacture:
Profitability Statement
Machine X Machine Y
Machine hours p.a. 3,000 3,000
Output per hour 10 units 20 units
Per unit Rs. Rs.
Direct materials 20 20
Direct labour 10 13
Variable overheads 12 14
Marginal cost                 42            47
Selling price                 60            60
Contribution                  18            13
Contribution per hour      Rs. 180       Rs. 260
Annual contribution     Rs. 5,40,000  Rs. 7,80,000
A company may have unused capacity which may be utilized for making component parts or similar items
instead of buying them from market. Decisions about whether a firm will make or buy are also known as
insourcing versus outsourcing decisions. Outsourcing is the process of purchasing goods and services from
outside vendors/producers rather than producing the same goods or providing the same services within the
organization, which is called insourcing. In taking such 'make or buy' decisions, the marginal cost of
manufacturing the component part(s) should be compared with the price quoted by outside vendors. If the variable or marginal costs are lower than the purchase price, it will be more profitable to manufacture the component parts in the factory. Fixed costs are excluded on the assumption that, having already been incurred, they will not change and the manufacture involves only variable costs; fixed costs are not relevant here. If manufacture involves an increase in
fixed costs (avoidable), it is necessary to include them in product cost. Under such a situation, one may
ascertain the minimum volume which would justify 'making' as compared to 'buying'. At this volume, both the
alternatives are equally profitable. This volume is determined as follows:
Minimum volume = Increase in fixed costs ÷ Savings per unit*
[* Purchase price less variable cost of production.]
Nevertheless, in a 'make or buy' decision, the qualitative factors should also be taken into consideration. For example, the quality and dependability of suppliers are very important factors that need to be considered.
Illustration 5.25 A manufacturing company traditionally purchases its component part No. A-104 for its final
product. During any one year, the company will require 10,000 units that can be acquired for Rs. 30 per unit.
The company currently has underutilized capacity that can be used to manufacture the component part. Total
manufacturing costs of Rs. 32 per unit include Rs. 16 raw material, Rs. 6 direct labour, Rs. 3 variable
overheads, Rs. 3 fixed overheads (avoidable) and Rs. 4 other fixed overheads (allocated on the basis of
capacity utilized).
(i) Should the company make or buy these parts? (ii) Determine the range of production at which one is more
profitable than the other.
The company should make component part No. A-104 as cost of manufacture per unit of A-104 is lower than its
purchase price.
Rs.
(ii) (a) Bought-out price per unit 30
(b) Marginal cost per unit 25
Savings per unit (a - b) 5
On comparison of bought-out price and marginal cost of manufacture, it appears that making is, as if, more
profitable than buying and in that case there will be a saving of Rs. 5 per unit. But making will involve an
additional fixed overhead (avoidable) of Rs. 30,000, i.e., Rs 3 x 10,000. Therefore, inclusion of avoidable fixed
cost will change the profitability, i.e., buying will be more profitable until additional fixed cost is recovered. But
once the increase in fixed cost is recovered, making will be more profitable than buying.
We now determine the volume at which both the alternatives are equally profitable: Rs. 30,000 ÷ Rs. 5 = 6,000 units.
Hence, below this volume, buying will be more profitable and above it, making will be more profitable. In other
words, from 1 to 5,999 units, buying will be more profitable and from 6,001 to 10,000 units, making will be more
profitable.
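The indifference-volume working for Illustration 5.25 can be sketched in Python as below: the avoidable fixed cost of Rs. 30,000 divided by the Rs. 5 saving per unit gives 6,000 units, and a small helper compares the relevant costs of making and buying on either side of that volume.

purchase_price = 30
marginal_cost_to_make = 16 + 6 + 3           # material + labour + variable overheads, Rs. 25
avoidable_fixed_cost = 3 * 10_000            # Rs. 3 per unit of avoidable fixed overhead on 10,000 units

saving_per_unit = purchase_price - marginal_cost_to_make            # Rs. 5
indifference_volume = avoidable_fixed_cost / saving_per_unit        # 6,000 units
print(f"Indifference volume: {indifference_volume:,.0f} units")

def relevant_cost(units, make):
    # Relevant cost of obtaining the given number of units by making or buying.
    return marginal_cost_to_make * units + avoidable_fixed_cost if make else purchase_price * units

for volume in (4_000, 6_000, 10_000):
    if volume == indifference_volume:
        print(volume, "units -> either (equal cost)")
    else:
        print(volume, "units ->",
              "make" if relevant_cost(volume, True) < relevant_cost(volume, False) else "buy")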
When it is necessary to increase the capacity of the firm 'to make', the increase in fixed costs may be significant
and in such a case the minimum volume or the break-even point has to be determined as above and a decision
may accordingly be taken.
When the manufacturing resources of the firm are limited and it becomes necessary to buy out some products if
market demands are to be met, the products to be manufactured must be selected on the basis of opportunity
costs. Where there is only one resource in limited supply, the selection should be based upon the contribution
made by each product per unit of limiting factor of output. The initial comparison made for each product should
be between the purchase price and the corresponding marginal cost where fixed costs do not change. Where
the purchase price exceeds the marginal cost and there is a limiting factor of production, those products earning
the highest rates of contribution per unit of limiting factor should be retained.
Illustration 5.26 Four types of components are currently being produced using a company's own facilities.
However, the company is working at full capacity and is considering buying one or more types of component
from an outside supplier. The total fixed costs will remain unaffected for the company as a whole with the
making in or buying out of the component. Relevant data per unit of component are given below:
Components A B C D
Time per unit:
Labour hours 0.40 0.50 0.50 0.30
Machine hours 0.10 0.20 0.40 0.50
Cost per unit: Rs. Rs. Rs. Rs.
Marginal costs 10 12 15 15
Fixed costs (allocated) 2 4 5 15
Total costs 12 16 20 30
Bought-out price 9 17 22 24
Which component(s) should be bought from outside if (i) labour time is the limiting factor; (ii) machine time is the limiting factor?
A B C D
Rs. Rs. Rs. Rs.
Bought-out price 9 17 22 24
Marginal costs 10 12 15 15
Contribution per unit (-) 1 5 7 9
Contribution per:
Labour hour (Rs.) (-) 2.5 10 14 30
Machine hour (Rs.) (-) 10.0 25 17.5 18
Fixed costs have not been given any consideration as they do not change for the firm as a whole with the
making in or buying out of the components.
Component which is showing negative unit contribution should be bought under all circumstances. Hence, A
should be bought out.
If a limitation on a resource still exists after removing A, selection of a further component or components for
buying out should be made in order of lower contribution per unit of limiting factor. Thus, when labour time is the
limiting factor, the order of selection of components for buying, if necessary, would be:
B
C
D
When machine hour is the limiting factor, the same order would be changed to:
C
D
B
If fixed costs remain constant, the decision will be taken on the basis of additional contribution expected from
opening of extra shift work. When, however, fixed costs increase, the decision will be taken based on additional
profit (additional contribution less increase in fixed cost). In other words, the decision should be taken on the
basis of whether the costs of the shift are exceeded by the benefits to be obtained.
Illustration 5.27
XYZ Co. Ltd currently operates a single production shift. The operating results of the company for the year just
ended show the following
£ £
Contribution 1,20,000
Profit 30,000
The company is planning for the activity of the next year. Sales demand exists for an extra 6,000 units (at the
existing sales price) which could be made in a second shift. The labour costs in the second shift would be the
same as in the first shift plus a second-shift premium. The second shift is paid at time-and-a-quarter. Additional
fixed overheads of £ 10,000 would be incurred, but a bulk purchase discount of 5% would be obtained on all
quantities of material bought. Should the second shift be opened up?
£ £ £
Additional sales (6,000 @ £ 36) 2,16,000
Less: Additional variable costs:
Direct materials:
Current cost 1,20,000
Total cost of materials with second shift:
(16,000 x £ 12 x 0.95) 1,82,400 62,400
Direct labour:
(£ 1,00,000 x 1.25) 1,25,000
Less: Fixed portion of labour 1,00,000 25,000
Variable Overheads (6,000 X £ 2) 12,000 99,400
Additional contribution 1,16,600
Less: Additional fixed cost:
Labour 1,00,000
Overheads 10,000 1,10,000
Additional profit £ 6,600
The contribution technique may also be used in planning the level of activity.
Illustration 5.28
A company has a capacity of producing 1,00,000 units of a certain product in a month. The Sales Department
reports that the following schedule of sale prices is possible:
The variable cost of manufacture between these levels is Re. 0.15 per unit and fixed cost, Rs. 40,000. At which
volume of production will the profit be maximum?
Contribution at the 70% level of activity is maximum. Fixed cost being constant at all levels of production, profit is also maximum at this level.
Another problem which is very frequently raised is the effect on profit of a change in sales price. When
management considers an expansion programme, a price reduction may be contemplated to attract a wider market.
It is, therefore, necessary to ascertain the effect of such a proposal.
Illustration 5.29
The directors of a company are considering the results of trading during the last year. The Profit and Loss
Statement of the company appeared as follows:
Rs. Rs.
Sales 7,50,000
Direct materials 2,25,000
Direct wages 1,50,000
Variable overheads 60,000
Fixed overheads 2,20,000 6,55,000
Profit 95,000
The budgeted capacity of the company is Rs. 10,00,000, but the key factor is sales demand. The sales
manager is proposing that in order to utilize existing capacity, the selling price of the only product manufactured
by the company should be reduced by 5%.
You are requested to prepare a forecast statement which should show the effect of the proposed reduction in
selling price and to include any changes in costs expected during the coming year. The following additional
information is given:
Rs. Rs.
Sales 9,50,000
Direct materials 3,06,000
Direct wages 2,10,000
Variable overheads 84,000 6,00,000
Contribution 3,50,000
Fixed overheads 2,30,000
Profit 1,20,000
The above statement will show that although costs have increased and selling price has been reduced, the
profit forecast for the coming year is still more than that achieved last year. This is because increased volume of
sales at the reduced sales price has resulted in increased contribution more than sufficient to cover increase in
costs—variable and fixed.
Notes:
(a) Sales (after 5% price reduction)                        Rs. 9,50,000
    Add: Reduction in selling price (5/95 x Rs. 9,50,000)          50,000
    Sales before price reduction                                10,00,000
    Less: Sales last year                                        7,50,000
    Increase in sales                                        Rs. 2,50,000
Very often, management may be confronted with the problem of taking decisions as to the effect of alternative
courses of action. The problem of taking the appropriate decision in such a case can be tackled effectively if the
cost data are presented under marginal costing technique.
Illustration 5.30
The management of a concern, manufacturing two products, X and Y, have the following independent
possibilities before them:
(a) To produce and sell 16,000 additional units of Y but only if the production of X is reduced by 20,000 units.
(b) To reduce the price of X by Re. 0.20 per unit. This will result in a 25% increase in the sale of X without any
change in the activity of Y.
(c) To produce and sell 55,000 units of X and 1,05,000 units of Y.
                                       Product X       Product Y         Total
Sales (in units)                         50,000         1,00,000        1,50,000
Sales (value)                       Rs. 2,50,000    Rs. 8,50,000   Rs. 11,00,000
Cost of sales                       Rs. 1,50,000    Rs. 6,00,000   Rs.  7,50,000
Gross Margin                        Rs. 1,00,000    Rs. 2,50,000   Rs.  3,50,000
Selling and distribution expenses   Rs.    60,000   Rs. 1,50,000   Rs.  2,10,000
Net Margin                          Rs.    40,000   Rs. 1,00,000   Rs.  1,40,000
Direct costs included in total costs amount to Rs. 1,20,000 for Product X and Rs. 3,40,000 for Product Y.
Present the information to the management in a suitable form giving your recommendation.
The alternative proposals have been presented to the management in a suitable form, based on the following analysis of unit direct costs and total fixed costs of Products X and Y.
Product X Product Y Total
50,000 Per 1,00,000 Per 1,50,000
Units Unit Units Unit Units
Rs. Rs. Rs. Rs. Rs.
Sales 2,50,000 5.00 8,50,000 8.50 11,00,000
Direct costs 1,20,000 2.40 3,40,000 3.40 4,60,000
Contribution 1,30,000 2.60 5,10,000 5.10 6,40,000
Fixed costs (Costs of sales +
S. and D. costs - Direct costs) 90,000 4,10,000 5,00,000
Net Profit 40,000 1,00,000 1,40,000
Where the demand for a product is elastic, it may be contemplated to reduce the selling price more and more to
attract a greater volume of sale and thereby earn a higher total contribution. When sales volume at varying
selling prices is ascertainable, the problem arises as to the determination of the volume of sales and selling
price at which profit will be maximum. A break-even chart may be of significant help in such a case.
If the sales value and costs at different volume of sales are plotted on a graph paper, it is possible to determine
the volume and selling price at which the margin of profit appears to be the greatest. In other words, the point at
which the margin of profit is the greatest is the optimum volume and the selling price at this volume is the
optimum selling price.
Illustration 5.31
Given the information below, you are required to determine graphically at what volume of sales and selling price
a company can maximize profits.
The variable unit cost is Rs. 2.50. The fixed costs of the company amount to Rs. 12,000 but to increase output
beyond 3,000 units, additional capital expenditure would be necessary and fixed costs would therefore rise to
Rs. 16,000. In order to plot the data on graph paper, sales value and costs at the varying volumes of sales are
tabulated as follows:
Units Sales value Variable costs Fixed costs Total costs
Rs. Rs. Rs. Rs.
1,000 10,000 2,500 12,000 14,500
2,000 19,000 5,000 12,000 17,000
3,000 27,000 7,500 12,000 19,500
4,000 34,000 10,000 16,000 26,000
5,000 40,000 12,500 16,000 28,500
6,000 45,000 15,000 16,000 31,000
6,800 47,600 17,000 16,000 33,000
7,500 48,750 18,750 16,000 34,750
8,000 48,000 20,000 16,000 36,000
8,400 46,200 21,000 16,000 37,000
Figure 5.11 Break-even chart determining the optimum volume.
Thus, the margin of profit, i.e., the margin by which the sales value curve is higher than the total cost curve, is
seen to be the greatest at a sales volume of 6,800 units. Therefore, this is the optimum volume and profit will be
maximized at this volume at the given selling price.
1. Determination of break-even point and margin of safety (see p. 547 and p. 551).
2. Determination of variable cost for any volume of sales (this is done by deducting P/V ratio from 100% and
multiplying the sales by the resultant figure).
A manufacturer with an overall (interchangeable among the products) capacity of one lakh machine hours has
been so far producing a standard mix of 15,000 units of Product A and 10,000 units of Products B and C each.
On experience, the total expenditure exclusive of his fixed charges is found to be Rs. 2.09 lakhs and the cost
ratio among the products approximates 1 : 1.5 : 1.75 respectively per unit. The fixed charges come to Rs. 2.00
per unit. When the unit selling prices are Rs. 6.25 for A, Rs. 7.50 for B and Rs. 10.50 for C, he incurs a loss.
Solution
3. Determination of profit at a particular volume of sales (see p. 572).
4. Determination of sales volume for a desired amount of profit (see p. 565).
5. Fixing selling prices.
6. Selecting the most profitable line or lines of products when there is no limiting factor.
7. Determining the additional sales required to maintain the present profit level in the event of contemplated
price reduction.
8. Determination of the sales-mix to maximize profit.
Some of the applications have been shown in the previous chapter. In addition, the following two illustrations
would explain how P/V ratio serves the day-to-day needs of the management.
Illustration 5.32
A company proposes to introduce a product in the market. There is sufficient demand for the product. The sales
manager estimates that it is possible to sell 5,000 units. It is the policy of the company to maintain 30% P/V
ratio. Given the following costs, you are required to ascertain the selling price that the company should quote:
Per unit
Rs.
Direct materials 100
Direct labour 30
Variable overheads 10
140
Selling price can be found by dividing the variable cost by the variable cost ratio (i.e., 100% - P/V ratio). Thus,
Selling price = Rs. 140 ÷ (100% - 30%) = Rs. 140 ÷ 70% = Rs. 200
Therefore, to maintain a 30% P/V ratio, the selling price should be Rs. 200.
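The same calculation in a short Python sketch (function and variable names are illustrative):

# Illustration 5.32: selling price needed to hold a target P/V ratio.
# Selling price = variable cost per unit / (1 - P/V ratio).
def price_for_pv_ratio(variable_cost_per_unit, pv_ratio):
    return variable_cost_per_unit / (1 - pv_ratio)

variable_cost = 100 + 30 + 10          # direct materials + direct labour + variable overheads (Rs.)
print(price_for_pv_ratio(variable_cost, 0.30))   # -> 200.0, i.e. quote Rs. 200 per unit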
Illustration 5.33
The directors of ABC Ltd propose to reduce the selling price of Product X by 10%. By doing so, they anticipate
that sales volume may be increased and that present profit may be maintained. On the basis of the following
information, advise management as to the proposal:
                                  Rs.
Selling price                      10
Variable cost                       6
Fixed cost                     20,000
Present production and sales   8,000 units
Present Profit = 40% of sales - fixed cost = 40% of Rs. 80,000 - Rs. 20,000 = Rs. 12,000.
If the selling price is reduced by 10%, the P/V ratio will come down to 33⅓%.
To maintain the same profit, the required total sales will be:
Required sales = (Fixed cost + Present profit) ÷ P/V ratio = (Rs. 20,000 + Rs. 12,000) ÷ 33⅓% = Rs. 96,000
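The working can be verified with a short Python sketch (names are illustrative):

# Illustration 5.33: sales needed to maintain present profit after a 10% price cut.
def required_sales(fixed_cost, target_profit, pv_ratio):
    # Required sales = (fixed cost + target profit) / P/V ratio
    return (fixed_cost + target_profit) / pv_ratio

old_price, variable_cost, fixed_cost, units = 10.0, 6.0, 20000.0, 8000
present_profit = (old_price - variable_cost) * units - fixed_cost        # Rs. 12,000
new_price = old_price * 0.90                                             # Rs. 9
new_pv_ratio = (new_price - variable_cost) / new_price                   # 33 1/3 %
sales_needed = required_sales(fixed_cost, present_profit, new_pv_ratio)  # about Rs. 96,000
print(present_profit, round(new_pv_ratio, 4), round(sales_needed), round(sales_needed / new_price))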
Total Fixed Cost: 35,000 units x Rs. 2 = Rs. 70,000
                                A        B        C
                               Rs.      Rs.      Rs.
Selling Price                 6.25     7.50    10.50
Marginal or Variable cost     4.40     6.60     7.70
Contribution                  1.85     0.90     2.80
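The per-unit variable costs and contributions shown above follow from apportioning the Rs. 2.09 lakhs of variable expenditure over the standard mix in the cost ratio 1 : 1.5 : 1.75. The short Python check below (names are illustrative) reproduces them and the loss referred to in the problem.

# Apportion Rs. 2,09,000 of variable cost over the standard mix in the ratio 1 : 1.5 : 1.75.
mix = {"A": 15000, "B": 10000, "C": 10000}            # units in the standard mix
cost_ratio = {"A": 1.0, "B": 1.5, "C": 1.75}
prices = {"A": 6.25, "B": 7.50, "C": 10.50}
total_variable_cost = 209000
fixed_cost = 2.00 * sum(mix.values())                  # Rs. 2 per unit on 35,000 units = Rs. 70,000

ratio_units = sum(mix[p] * cost_ratio[p] for p in mix)           # 47,500 "cost units"
base_cost = total_variable_cost / ratio_units                    # Rs. 4.40
variable_cost = {p: base_cost * cost_ratio[p] for p in mix}      # 4.40 / 6.60 / 7.70
contribution = {p: prices[p] - variable_cost[p] for p in mix}    # 1.85 / 0.90 / 2.80
profit = sum(contribution[p] * mix[p] for p in mix) - fixed_cost
print(variable_cost, contribution, profit)             # profit = -5,250, i.e. the loss incurred

The standard mix thus yields a contribution of Rs. 64,750 against fixed charges of Rs. 70,000, giving the loss of Rs. 5,250 referred to in the problem.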
A market gardener is planning his production for next season and he asked you, as a Cost Accountant, to
recommend the optimal mix of vegetable production for the coming year. He has given you the following data
relating to the current year:
The land which is being used for the production of carrots and parsnips can be used for either crop, but not for
potatoes or turnips. The land being used for potatoes and turnips can be used for either crop, but not for carrots
or parsnips. In order to provide an adequate market service, the gardener must produce each year at least 40
tonnes each of potatoes and turnips and 36 tonnes each of parsnips and carrots.
(a) You are required to calculate: (i) the profit for the current year; and (ii) the profit for the production mix which you would recommend.
(b) Assuming that the land could be cultivated in such a way that any of the above crops could be produced
and there was no market commitment, you are required to:
(i) advise the market gardener on which crop he should concentrate his production;
(ii) calculate the profit if he were to do so; and (iii) calculate in sterling the break-even point of sales.
Solution
(a) (i) Profit Statement for the Current Year
Workings:
                                                                     Total
Contribution        £ 21,200    £ 2,450    £ 5,020    £ 48,960    £ 75,630
Fixed overheads                                                   £ 54,000
Profit from recommended mix                                       £ 21,630
(b) (i) Production should be concentrated on carrots which have the highest contribution per acre (£ 960)
                                                      £
(ii) Contribution from 100 acres of carrots      96,000
     Fixed overheads                             54,000
     Profit                                      42,000
ABC Ltd makes three products, all of which use the same machine which is available for 50,000 hours per
period.
ABC Ltd could buy in similar quality products at the following unit prices:
A £ 175
B £ 140
C £ 200.
(a) calculate the deficiency in machine hours for the next period;
(b) determine which product(s) and quantities (if any) should be bought out;
(c) calculate the profit for the next period based on your recommendations in (b).
Solution
First we have to decide whether it is worth trying to meet full demand for all products, even if some units have to
be bought out. Since all products will still have a positive contribution (selling price - variable costs) even if they
are bought out (in which case variable cost = buy-out price), it will be worth buying out as necessary to meet full
demand.
The next decision to be made is the choice of product(s) that should be made in-house and those that should
be bought out (wholly or partially). The quickest approach is to assess each product in terms of the benefit of
making-in over buying-out per-hour of machining time used—key factor analysis.
The benefit of making-in over buying-out is measured by the difference between the make-in cost (variable cost) and the buy-out price.
                                 Product A   Product B   Product C
Buy-out price per unit             £ 175       £ 140       £ 200
Variable cost per unit             £ 154       £ 112       £ 178
Saving from making-in per unit      £ 21        £ 28        £ 22
Make-in machine hours per unit         6           4           7
Saving per machine hour (£)          3.5           7         3.1
Thus, manufacturing priority should be given to Product B, then Product A and then Product C. Meeting maximum demand for Products B and A would use 10,000 + 18,000 = 28,000 hours [see (a)], leaving 22,000 hours available for Product C.
Profit for the next period: £ 142,124
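The ranking step above can be expressed as a small Python sketch. Only the make-or-buy ranking is shown, since the demand figures from part (a) are not reproduced here; the names used are illustrative.

# Key-factor (limiting factor) analysis: rank products by the saving from
# making in-house per machine hour, since machine hours are the scarce resource.
products = {
    #        variable cost, buy-out price, machine hours per unit
    "A": {"make": 154, "buy": 175, "hours": 6},
    "B": {"make": 112, "buy": 140, "hours": 4},
    "C": {"make": 178, "buy": 200, "hours": 7},
}

def saving_per_hour(p):
    return (p["buy"] - p["make"]) / p["hours"]

ranking = sorted(products, key=lambda name: saving_per_hour(products[name]), reverse=True)
for name in ranking:
    print(name, round(saving_per_hour(products[name]), 2))
# -> B 7.0, A 3.5, C 3.14: make B first, then A, and buy out whatever part of C's
#    demand cannot be made in the remaining machine hours.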
V. Ltd operating at 75% level of activity produces and sells two products A and B. The cost sheets of these two
products are as under:
Product
A B
Units produced and sold 600 400
Rs. Rs.
Direct materials 2.00 4.00
Direct labour 4.00 4.00
Factory overheads (40% fixed) 5.00 3.00
Selling and administration overheads (60% fixed) 8.00 5.00
Total cost per unit 19.00 16.00
Selling price per unit 23.00 19.00
Factory overheads are absorbed on the basis of machine hour which is the limiting (key) factor. The machine
hour rate is Rs. 2 per hour.
The company receives an offer from Canada for the purchase of Product A at a price of Rs. 17.50 per unit.
Alternatively, the company has another offer from the Middle East for the purchase of Product B at a price of
Rs. 15.50 per unit. In both the cases, a special packing charge of fifty paise per unit has to be borne by the
company.
The company can accept either of the two export orders and in either case the company can supply such
quantities as may be possible to produce by utilizing the balance of 25% of its capacity.
You are required to prepare: (i) a statement showing the economics of the two export proposals, giving your recommendation as to which proposal should be accepted; and (ii) a statement showing the overall profitability of the company after incorporating the export proposal recommended by you.
Solution
(i) Statement Showing Economics of Export Proposals
Machine hours being the limiting factor, contribution per hour should be the criterion for determining the relative
profitability of two export proposals. Thus, Product B will yield higher contribution per hour than that of Product
A. Therefore, the offer from Middle East for Product B should be accepted in preference to that from Canada for
Product A.
(ii) Statement Showing Overall Profitability
                             A          B        Total
Units                      600        867
                           Rs.        Rs.          Rs.
Sales                   13,800     14,839       28,639
Less: Marginal costs     7,320     10,464       17,784
Contribution             6,480      4,375       10,855
Less: Fixed cost                                 5,760
Profit                                       Rs. 5,095
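A short Python sketch of the contribution-per-machine-hour comparison behind this recommendation; machine hours per unit are derived from factory overheads absorbed at Rs. 2 per hour, the cost treatment follows the marginal costs used in the statement above, and the function name is illustrative.

# V. Ltd: compare the two export offers on contribution per machine hour,
# machine hours being the limiting factor.
def export_contribution_per_hour(export_price, dm, dl, foh, sa, packing=0.50):
    variable_cost = dm + dl + 0.60 * foh + 0.40 * sa + packing   # 40% of FOH and 60% of S&A are fixed
    machine_hours = foh / 2.0                                    # absorption rate Rs. 2 per machine hour
    return (export_price - variable_cost) / machine_hours

a = export_contribution_per_hour(17.50, dm=2, dl=4, foh=5, sa=8)   # Product A (Canada offer)
b = export_contribution_per_hour(15.50, dm=4, dl=4, foh=3, sa=5)   # Product B (Middle East offer)
print(round(a, 2), round(b, 2))   # about 1.92 vs 2.13 -> Product B gives the higher contribution per hour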
BSE Veterinary Services is a specialist laboratory carrying out tests on cattle to ascertain whether the cattle
have any infection. At present, the laboratory carries out 12,000 tests each period but, because of current
difficulties with the herd, demand is expected to increase to 18,000 tests a period, which would require an
additional shift to be worked.
                        £ per test
Materials                   115
Technicians' wages           30
Variable overheads           12
Fixed overheads              50
Working the additional shift would:
(i) require a shift premium of 50% to be paid to the technicians on the additional shift;
(ii) enable a quantity discount of 20% to be obtained for all materials if an order was placed to cover the full 18,000 tests; and
(iii) increase fixed costs by £ 700,000 per period.
The current fee per test is £ 300. You are required to:
(a) prepare a profit statement at the current level of 12,000 tests;
(b) prepare a profit statement if the additional shift was worked and 18,000 tests were carried out; and
(c) comment on three other factors which should be considered before any decision is taken.
Solution
(a) 12,000 tests (current level)
                                         (£ '000)   (£ '000)
Fees (12,000 x £ 300)                                  3,600
Variable costs:
  Materials (12,000 x £ 115)               1,380
  Wages (12,000 x £ 30)                      360
  Variable overheads (12,000 x £ 12)         144
                                                       1,884
Contribution                                           1,716
Fixed overheads (12,000 x £ 50)                          600
Profit                                                 1,116
(b) 18,000 tests with additional shift
                                         (£ '000)   (£ '000)
Fees (18,000 x £ 300)                                  5,400
Variable costs:
  Materials (1,380 x 18/12 x 80%)          1,656
  Wages (360 + 6 x £ 30 x 150%)              630
  Variable overheads (144 x 18/12)           216
                                                       2,502
Contribution                                           2,898
Fixed overheads (600 + 700)                            1,300
Profit                                                 1,598
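The two statements can be reproduced with a short Python sketch, assuming, as in the working above, that the extra shift adds £700,000 of fixed costs per period; the names used are illustrative.

# BSE Veterinary Services: profit at 12,000 tests vs 18,000 tests with the extra shift.
FEE = 300
MATERIALS, WAGES, VAR_OH, FIXED_OH = 115, 30, 12, 50           # £ per test at the 12,000-test level

def profit_current(tests=12000):
    contribution = tests * (FEE - MATERIALS - WAGES - VAR_OH)
    return contribution - tests * FIXED_OH                      # total fixed overheads £600,000

def profit_with_extra_shift(tests=18000, extra_fixed=700000):
    fees = tests * FEE
    materials = tests * MATERIALS * 0.80                        # 20% quantity discount on all materials
    wages = 12000 * WAGES + (tests - 12000) * WAGES * 1.50      # 50% shift premium on the additional shift
    variable_oh = tests * VAR_OH
    fixed_oh = 12000 * FIXED_OH + extra_fixed
    return fees - (materials + wages + variable_oh) - fixed_oh

print(profit_current())            # 1,116,000
print(profit_with_extra_shift())   # 1,598,000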
(c) (1) The duration of the higher level of demand: If it is expected to continue over longer periods, it may be worth employing extra staff rather than paying overtime premiums. Also, the commitment to extra fixed overheads may extend over a longer period than the extra demand.
(2) The pricing policy: The urgency of the need for extra tests may mean that fees can be increased without significantly affecting demand.
(3) The quality of the work done, materials used, etc.: Increasing activity by 50% using current resources may well lead to a drop in the quality of output (i.e., unreliable test results), which could have an adverse effect on future demand.
Chapter 6
Corporate Restructuring and Finance
Before going into the details of financial distress, let us begin with its background and consequences. We first start with the basic causes of business failure and then consider the consequences of such failure in the context of financial distress and bankruptcy.
The basic causes of business failure can be categorized under four major heads: economic factors; financial factors; factors relating to neglect, disorder and fraud; and other factors. The economic factors relate to industry weakness and the poor location of the firm. The financial factors relate to an excessive debt burden and insufficient capital. The importance of the different factors varies over time, depending on such things as the state of the economy and the level of interest rates. Apart from this, some factors sometimes combine to make the business unsustainable. Studies have provided further evidence that the causes of financial distress are a result of a series of errors, misjudgments and interrelated weaknesses that can be attributed directly or indirectly to the management of the firm. In a recent study, Dun & Bradstreet assigned percentage values to the causes of business failures.
MEANING OF BANKRUPTCY
A firm is said to be bankrupt if it is unable to meet its current obligations to the creditors. Bankruptcy may occur
because of a number of external and internal factors.
DEFINITIONS
The Sick Industrial Companies (Special Provisions) Act, 1985 or SICA defines a sick industry as "an industrial
company (being a company registered for not less than five years) which has at the end of any financial year
accumulated losses equal to or exceeding its net worth".
WEAK UNIT
A non-SSI industrial unit is defined as 'weak' if the accumulation of losses at the end of any accounting year has resulted in the erosion of fifty percent or more of its peak net worth in the immediately preceding four accounting years. It is clarified that weak units will include not only those which fall within the purview of the Sick Industrial Companies (Special Provisions) Act, 1985 (i.e., industrial companies) but also other categories such as partnership firms, proprietary concerns, etc. A weak industrial company is termed a "potentially sick" company.
A small-scale industrial (SSI) unit, as per the RBI, is classified as sick when:
a. any of its borrowal accounts has become a doubtful advance, i.e., principal or interest in respect of any of its borrowal accounts has remained overdue for a period exceeding 2½ years; and
b. there is erosion in net worth due to accumulated cash losses to the extent of 50 percent or more of its peak net worth during the preceding two accounting years.
In the case of tiny/decentralized sector units, if the requisite financial data are not available, a unit may be considered sick if any amount of the loan/advance remaining to be received has stayed past due for one year or more.
FACTORS LEADING TO BANKRUPTCY
External Factors
Internal Factors
a. Mismanagement
b. Fraudulent practices and misappropriation of funds by the management
c. Labor unrest
d. Technological obsolescence
e. Disputes among promoters.
SYMPTOMS OF BANKRUPTCY
A firm goes bankrupt gradually. Before a firm goes bankrupt, it exhibits a number of symptoms, which when
diagnosed and corrected in time can save the company from bankruptcy.
• Production
  - Declining/stagnant sales
• Finance
  - Failure to pay current liabilities, salaries, etc.
  - Failure to make statutory payments
• Others
  - Frequent changes in accounting policies to enhance profits
  - Frequent change of accounting years for undeclared reasons
PREDICTION OF BANKRUPTCY
As the incidence of sickness became more frequent, a need was felt to evolve techniques and methods to
predict failure of a firm. While symptoms listed earlier are good indicators of the financial health, they are not
the best predictors of sickness. A number of models are available to accurately predict sickness of a firm.
These models provide early warning signals, so that a potentially disastrous situation can be averted. Most of
these techniques involve financial ratio analysis. A study has revealed that financial ratios are useful in
predicting the failure of a firm for a period up to 5 years before sickness accurately. A number of Indian models
are also available. Some of the models are discussed below,
International Models
• Beaver Model
• The Wilcox Model
• Blum Marc's Failing Company Model
• Altman's Z Score Model
• Argenti Score Board
Indian Model
• L.C. Gupta Model
Beaver Model
Beaver was the first to make a conscious effort to use financial ratios as predictors of failure. He defined failure
as "inability of a firm to pay its financial obligation as they mature."
He used 30 ratios classified under 6 categories. Beaver tested these ratios to predict the failure of a company.
The ratio of cash flow to total debt was found to be the best single predictor of failure. The study further
revealed that financial ratios are useful in prediction of failure of at least five years prior to the event.
The Wilcox Model
Wilcox proposed that the net liquidation value of a firm is the best indicator of its financial health. The net liquidation value is the difference between the liquidation value of the firm's assets and the liquidation value of its liabilities. Liquidation value is the market value of the assets and liabilities if liquidated at the point of study.
Blum Marc's Failing Company Model
Blum Marc's model predicts the financial health of a firm using 12 ratios divided into 3 groups: liquidity ratios, profitability ratios and variability ratios. Using these ratios, Blum Marc sought to predict failure accurately and to draw a distinction between bankrupt and non-bankrupt firms.
Altman's Z Score Model
Altman improved upon the earlier models using ratio analysis to predict failure. Altman's model is based on the fact that various ratios, when used in combination, can have better predictive ability than when used individually. 22 ratios were considered in various combinations as predictors of failure. He used a statistical technique called Multiple Discriminant Analysis (MDA) to distinguish between bankrupt and non-bankrupt firms.
Out of these 22 ratios, a final set of 5 ratios was selected, as these were found to be the best predictors of failure. Weights were assigned to these ratios on the basis of their significance in predicting the health of the firm. He developed a discriminant score, called the Z-score, on the basis of these ratios:
Z = 1.2X1 + 1.4X2 + 3.3X3 + 0.6X4 + 1.0X5
where
Z = Discriminant score
X1 = Working capital/Total assets
X2 = Retained earnings/Total assets
X3 = EBIT/Total assets
X4 = Market value of equity/Book value of debt
X5 = Sales/Total assets.
If the Z score for a firm is less than 1.81, the firm is likely to go bankrupt. If the Z score is more than 2.99, it is regarded as a healthy company. The range between 1.81 and 2.99 is treated as an area of ignorance.
Z Score Classification
<1.81 Bankrupt firm
1.81-2.99 Area of ignorance
>2.99 Healthy firm
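A small Python sketch of this classification rule, using the coefficients of the classic Altman model quoted above; the input figures are purely hypothetical and serve only to illustrate the arithmetic.

# Altman Z-score (original model) using the five ratios X1..X5 defined above.
def altman_z(working_capital, retained_earnings, ebit,
             market_value_equity, sales, total_assets, book_value_debt):
    x1 = working_capital / total_assets
    x2 = retained_earnings / total_assets
    x3 = ebit / total_assets
    x4 = market_value_equity / book_value_debt
    x5 = sales / total_assets
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

def classify(z):
    if z < 1.81:
        return "likely bankrupt"
    if z > 2.99:
        return "healthy"
    return "area of ignorance"

# Hypothetical figures, in Rs. lakhs:
z = altman_z(working_capital=150, retained_earnings=300, ebit=180,
             market_value_equity=500, sales=1200, total_assets=1000, book_value_debt=400)
print(round(z, 2), classify(z))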
J. Argenti in his famous article 'Company Failure - Long Range Prediction is Not Enough', developed a score
board for evaluating the health of the firm. The model is based on a numerical assessment of the firm's weaknesses. The weaknesses are classified as defects (management and accounting), mistakes and
symptoms. He has delineated a list of factors to be looked into along with the respective scores. All the scores
are to be summed up. The cut-off point for a "healthy firm" is a score of 25. This model has been criticized for
being "subjective" and "arbitrary."
Box 6.1
Defects                                                                        Score
In management:
 8  The chief executive is an autocrat
 4  He is also the chairman
 2  Passive board - an autocrat will see to that
 2  Unbalanced board - too many engineers or too many finance types
 2  Weak finance director
 1  Poor management depth
15  Poor response to change, old-fashioned product, obsolete factory, old
    directors, out-of-date marketing
In accountancy:
 3  No budgets or budgetary controls (to assess variances, etc.)
 3  No cash flow plans, or not updated
 3  No costing system. Cost and contribution of each product unknown
    Total Score 43                                     Pass should be less than 10
Mistakes
15  High leverage, firm could get into trouble by stroke of bad luck
15  Overtrading. Company expanding faster than its funding. Capital base too
    small or unbalanced for the size and type of business
15  Big project gone wrong. Any obligation which the company cannot meet if
    something goes wrong
    Total Score 45                                     Pass should be less than 15
Symptoms
 4  Financial signs, such as Z-score, appear near failure
 4  Creative accounting. Chief executive is the first to see signs of failure
    and, in an attempt to hide it from creditors and the banks, accounts are
    "glossed over" by, for instance, overvaluing stocks, using lower
    depreciation, etc. Skilled observers can spot these things.
 4  Non-financial signs, such as untidy offices, frozen salaries, chief
    executive "ill", high staff turnover, low morale, rumors
    Total Score 12
    Total possible score 100                           Pass should be less than 25
L.C. Gupta's model was the first Indian model proposed to predict failure. He used 56 ratios and sought to
determine the best set of ratios to predict failure. These were categorized as profitability ratios and balance
sheet ratios. He applied these ratios to a sample of sick and non-sick companies and arrived at the best set of
ratios.
Profitability Ratios
• EBDIT/Net Sales
• OCF/Sales (Operating Cash Flow/Sales)
• EBDIT/(Total Assets + Accumulated Depreciation)
• OCF/Total Assets
• EBDIT/(Interest + 0.25 Debt)
Balance Sheet Ratios
• Net Worth/Total Debt
• All Outside Liabilities/Tangible Assets
The model was found to have a high degree of accuracy in predicting sickness two to three years before failure.
A firm begins to encounter financial distress when it finds it difficult to meet its scheduled payments, or when its cash flow projections indicate that it will soon be unable to do so. A few of the pivotal issues that arise in due course are as follows:
a. The primary cause of the firm's failure to meet its debt obligations: whether such failure is due to a temporary cash flow problem or because the asset values of the firm have fallen well below its debt obligations.
b. If the problem is found to be a temporary one, an agreement with the creditors of the firm can be worked out so that the firm has time to recover and satisfy everyone. But if the long-run asset values have truly declined, the firm is said to have incurred economic losses. In such a situation it is important to ascertain who should bear the losses and how much of the loss should be borne by each.
c. The value of the firm both on liquidation and as a going concern, and the decision, based on these valuations, on whether it is more profitable to continue the business or to liquidate it.
d. Whether the firm should file for protection under Chapter 11 of the Bankruptcy Act, or resort to informal procedures. It is to be noted here that in both reorganization and liquidation a firm can either resort to informal procedures or work under the direction of the bankruptcy court.
e. Who should control the firm while it is being liquidated or rehabilitated: should the existing management be left in charge, or should a trustee be placed in charge?
When a firm goes through a period of financial distress, it is very important for its management and creditors to decide whether the problem is a temporary one, so that the firm can continue its operations, or whether it is more serious and permanent in nature and has the potential to endanger the life of the firm. Having done this, the parties involved decide whether to solve the problem through the intervention of the bankruptcy court or through an informal process. If the firm files for formal bankruptcy under Chapter 11 of the Bankruptcy Act, it incurs certain costs. Coupled with this, there is also the possibility that when the creditors come to know that the firm has resorted to the court, it might lead to disruptions. Thus it is often preferable to go for reorganization or liquidation through informal means. Here we first start our discussion with informal reorganization and then go into the details of the procedures of formal bankruptcy.
Informal Reorganization
Creditors are generally prepared to work with companies whose economic fundamentals are basically strong, so as to help them come out of their distressed condition and re-establish themselves on a sound financial basis. Such voluntary plans rendered by the creditors, generally termed "workouts", involve restructuring of the firm's debt, because the current cash flows of the firm are insufficient to service the existing debt. The restructuring process typically consists of extension and composition. In the former case, the creditors postpone the dates of the interest payments, the principal payments, or both. In case
of the latter, the creditors voluntarily reduce their claims on the debt by accepting a lower principal amount or by reducing the interest rate on the debt. They may even accept equity in exchange for debt, or resort to a combination of these three possible ways.
The process of debt restructuring begins with a meeting between the firm's managers and its creditors to seek a proper balance. The creditors form a committee consisting of four to five representatives of the larger creditors and a few of the smaller ones, so that each side is adequately represented. The meeting is often arranged and conducted by an adjustment bureau associated with, and run by, the local credit managers' association. The first step involves drawing up a list of creditors with the amount of debt owed to each. This is followed by developing information that shows the value of the firm under different scenarios. One such scenario may be the firm going out of business, selling off its assets and distributing the proceeds to the various creditors in order of the priority of their claims, with any surplus going to the common stockholders. The firm may even take the help of an appraiser to value the firm's property, which can then be used as a basis for ascertaining the value of the firm under the different scenarios. Other scenarios may include continued operations, frequently with some improvements in capital equipment, marketing and perhaps some management changes. This information is then shared with the bankers and the creditors of the firm. It has been frequently observed that the debt capacity of the firm exceeds its liquidation
creditors of the firm. It has been frequently observed that the debt capacity of the firm exceeds its liquidation
value and it is further observed that the legal fees and the other costs that are associated with the formal
liquidation process under the bankruptcy lowers the proceeds available to the creditors. Added to this, the
process of resolving the case through formal procedure is also very time consuming, it may take a year or even
more than a year. This reduces the present value of the proceeds to much lower level. When the creditors are
supplied with this information, they might be somewhat convinced to accept something less than their full value
of the claim. In case where the management and the primary creditors agree for a resolution, then a formal plan
is drafted and is presented to all the creditors providing them the reasons why they should be willing to
compromise on their claims.
While framing the reorganization plan, creditors prefer an extension because it promises them full payment at some point of time. In certain cases, the creditors may agree not only to postpone the date of payment but also to subordinate their existing claims to those of vendors who are willing to extend new credit during the workout period. In a similar way, the creditors may be willing to accept a lower interest rate on the loans during the extension period, perhaps in exchange for a pledge of collateral. Because of the sacrifices involved, the creditors must have faith that the debtor firm will be able to solve its problems.
In a composition, by contrast, the creditors agree to reduce their claims. Typically, the creditors receive cash and new securities with a combined market value that is less than the amounts owed to them. Generally, bargaining takes place between the debtor and the creditors over the savings that result from avoiding the costs of legal bankruptcy: administrative costs, legal fees, investigative costs and so on. In addition to escaping such costs, the debtor gains relief from the stigma of bankruptcy. It is also sometimes seen that the bargaining process may lead to a restructuring that involves both extension and composition. As an example, the settlement may provide for a cash payment of 25% of the debt amount immediately, along with a new note promising six future installments of 10% each, for a total payment of 85%.
Voluntary settlements are both informal and simple. They are also relatively cheap, because the legal and administrative expenses associated with them are kept to a minimum; as a result, voluntary procedures normally result in the maximum return to the creditors. Although the creditors do not receive their payments immediately, and may sometimes have to accept an amount lower than that owed to them, they generally recover more money, and sooner, than if the firm were to file for bankruptcy. Restructuring also allows creditors to avoid recognizing a loss. So a bank that is facing pressure from its regulators over weak capital ratios may even agree to extend further loans, which are used to pay the interest on the earlier loans, in order to keep itself from having to write down the value of those earlier loans. It is to be kept in mind that informal voluntary settlements are not limited to smaller firms. Recent studies have confirmed that they are used extensively even by larger firms. The biggest problem encountered in informal reorganization is getting all the parties to agree to the voluntary plan. This problem, termed the holdout problem, is discussed later in the chapter.
Informal Liquidation
When the management of the firm realizes that the value of the firm is more when it is dead than it is alive, it
may resort to informal procedures to liquidate the firm. Assignment is an informal procedure for the purpose of
liquidating a firm. This process generally yields them a greater return that they would have received in formal
bankruptcy liquidation. However, the feasibility of the assignments finds its significance only when the firm is
small and the affairs of the firm are not that complex. Assignments enjoy certain advantages over the process
of liquidation in the American bankruptcy courts in terms of time, legal formality, and expense. The assignee has more flexibility in disposing of property than does a federal bankruptcy trustee, so action can be taken sooner, before inventory becomes obsolete or machinery rusts. At the same time, it is to be remembered that an assignment does not automatically result in a full and legal discharge of all the debtor's liabilities, nor does it protect the creditors against fraud. Formal liquidation in bankruptcy can solve both these problems.
Before going in for a detailed analysis of the causes and effects of financial distress, let us first try to answer the following questions: what are the primary reasons for a firm experiencing financial distress, and what are its effects? Let us answer these questions using a top-down approach. The discussion is dealt with in two separate sections. The first concerns macroeconomic growth and the government policies and regulations that bear upon financial distress rates. The second section deals with the ways in which financial distress can be related to industry factors. It is to be borne in mind that the concept of financial distress is nothing new in the area of corporate finance. Here we focus on the causes and effects of the financial distress encountered by firms around the globe.
Macro-level Factors Affecting Financial Distress, Liquidity and Recession
In his study, Bernanke (1981) made some key findings on the relationship among liquidity, economic growth and financial distress. He argued that the existence of bankruptcy risk plays a role in the propagation of recession for both firms and individuals. Bankruptcy entails social costs, as a result of which almost all agents try to avoid the consequences of bankruptcy costs. Consumers try to avoid them by retaining a considerable amount of liquid assets so as to meet their fixed expenses, while banks and lenders try to avoid them by being selective about their borrowers and by limiting the size of the loan. With recession creeping into the system, there is a reduction in the cash flow income available to meet current obligations. This, in turn, increases the uncertainty about future liquidity needs.
There is also a general desire to restore solvency, which results in a reduced demand for consumer and producer durables, which again may generate further income reduction. Bernanke's study focused on the critical relationship among changes in liquidity, financial distress and recession for both consumers and firms. He postulates that recession creates financial distress by narrowing the margin between cash flow and debt service. When the flow constraint binds, the fall in current income reduces expenditure on illiquid, long-lived assets. Two reasons can be attributed for this. The first is that a lower level of current income increases the short-run probability that the flow constraint will have to be satisfied through expensive means: for example, the distress sale of assets, borrowing on unfavorable terms, a severe reduction in the current standard of living or, as the last possible resort, the bankruptcy of the firm. The other reason is that a fall in the current level of income has implications for the consumer's estimate of future income flows, and thus also for the level of durables holdings consistent with maintaining solvency in the long run. It must be remembered that firms must balance their long-term spending plans against the need for cash flow to meet short-term obligations. With a low level of internal liquidity, coupled with many fixed expenses, there is the possibility of increased financial embarrassment, for at the least it raises the cost of new financing. At the same time, postponement of capital expenditures is a proper balance-sheet defence mechanism against any expected fall in current income. Bernanke has also suggested a cause of bankruptcy that is somewhat based on moral hazard. It is not possible for lenders to perceive the objective conditions on which borrowers base their portfolio decisions. If a lender does not build a reputation for pressing his claims, borrowers will have an incentive to become more illiquid so as to force an improvement in terms.
MONETARY POLICY
Bernanke's study of liquidity takes us back to the critical role of monetary policy in the overall liquidity of a nation. The overall liquidity of the US is governed by the Federal Reserve Board through its open market operations. These operations include the Fed's buying and selling of US Treasury bills out of its considerable inventory, so as to affect liquidity in either direction. When the Fed buys bills, as an expansionary mechanism, it adds legal reserves to the banking system, which the nation's banks can use to create new loans on a multiplied basis. On the other hand, selling T-bills has a contractionary effect. Short-term interest rates fall when the Fed pursues an expansionary policy and rise when a contractionary policy is followed. The primary duty of the Fed is to protect the purchasing power of the dollar while at the same time ensuring a sustainable level of real growth in the economy. The Fed operates under the assumption that inflation and real economic growth are positively correlated. But, if the
real economic growth is weak, the Fed can pursue an expansionary policy without worrying much about inflation. In situations where the economy is growing at a high and presumably unsustainable rate, the Fed steps in to make corrections: it adopts a contractionary policy to bring down the level of inflation. At the same time, it is to be remembered that, as a result of such monetary policies, interest rates also rise and entail a much tighter limit on the availability of short-term loans. These events, along with the subsequent slowdown of the economy itself, lead to an increase in the financial distress of all firms, particularly those that are relatively weak in financial terms or that are highly levered.
The effect of this reversal on financial distress can be viewed from many angles. In one instance, it was found that when the changes to corporate focus began to take shape, many of the inefficient conglomerates that were facing keener competition became financially distressed. The traditional thinking holds that the economies of scope were reversed in the 1980s. Managers today tend to focus more on the core business, and they are more likely to rationalize mergers and growth strategies, as well as divestitures and restructuring, as a reflection of a strategy of specialization. This view is a departure from the steady increase in diversification since the 1950s, and from the several theoretical justifications for diversification that have since evolved.
The industry-level causes of financial distress can be grouped under three heads: competition, industry shocks and deregulation. Let us now discuss each of these factors in detail.
Competition
For identifying the possible industry-level causes of financial distress, one can turn to Michael Porter's five forces model. The five forces included in it are:
a. Barriers to entry
b. Bargaining power of suppliers
c. Bargaining power of buyers
d. Threat of substitute products
e. Rivalry among the competing firms.
Each of the above factors bears upon the financial distress of an individual firm operating within the industry. One implication is that firms in different industries face different levels of competition as well as different profit sensitivities to changes in macroeconomic and industry conditions over time. According to the conclusions drawn from Williams' analysis, financial distress is likely to be greater for larger firms than for smaller ones. The author further states that a highly leveraged firm will commit to riskier projects as well as aggressive product market strategies so as to deter the entry of other firms.
Industry Shocks
Any negative shock to the demand for a product or to its cost, especially over a period of time, eventually forces a shakeout of firms in the industry. The weakest firms are forced into bankruptcy or must consider being taken over by a stronger firm in the industry. A study by Mitchell and Mulherin (1996) tested the proposition that industry shocks contribute to the frequency of takeover and restructuring activity. The shocks include deregulation, changes in input costs, and innovations in financial technology that bring about changes in industry structures. In a separate study, Lang and Stulz (1992) examined the effect of bankruptcy announcements by one firm on the values of other firms in the industry. They tested for two opposing effects. One is the contagion effect: the market may pull down the values of other firms within the industry because the bankruptcy announcement brings new, negative information about the status of the industry as a whole. On the other hand, the market may raise the value of other firms in the industry because one of their rivals has failed. It was found that the balance between these contrary effects depends on the financial characteristics of the firms within the industry.
Industry Deregulation
The process of deregulation in an industry can bring financial distress to many firms, mainly because deregulation changes the economic structure of the industry. Let us now look at some studies that reveal the effects of deregulation on the financial position of a firm. In a study conducted in 1986, Chen and Merville examined the forced break-up of AT&T, which was initiated by court order on January 1, 1984, and continued for almost two years. The authors concentrated on whether the break-up resulted in wealth transfers among the security claimants of AT&T and other stakeholders as well. Their findings showed that economically significant events took place during the deregulation process, which resulted in the transfer of funds from third parties to the operating companies' shareholders. At the same time, it was observed that no transfer of wealth from bondholders to stockholders took place during the deregulation process. In another study, Kole and Lehn (1999) examined the effects of the Airline Deregulation Act of 1978, along with the associated increase in competition, on airline firms' governance structures. They were able to develop several hypotheses
about the expected effects based on agency theory. They stated that deregulation may bring about a greater concentration of equity ownership. Deregulation may also increase the costs of monitoring managers, which can have a dual effect. First, an outside shareholder will engage in monitoring only if his private benefits, which are proportional to his equity stake, exceed the cost of monitoring. Second, in order to internalize the agency problems associated with higher monitoring costs, the managers themselves may own larger stakes, so that a larger proportion of their wealth is tied to their decisions. The authors also predicted an increase in the level of compensation for airline executives, along with a change in the form of the compensation provided. They further argued that before deregulation, executives' pay would be relatively more sensitive to the firm's earnings, whereas after deregulation it would be more sensitive to the stock price.
Chapter 7
Valuation
Most important business decisions require capital. For example, when Daimler-Benz decided to develop the Mercedes ML 320 sports utility vehicle and to build a plant in Alabama to produce it, Daimler had to estimate the total investment that would be required and the cost of the required capital. The expected rate of return exceeded the cost of the capital, so Daimler went ahead with the project. Microsoft had to make a similar decision with Windows 2000, Pfizer with Viagra, and Harcourt when it decided to publish this textbook.
Mergers and acquisitions often require enormous amounts of capital. For example, Vodafone Group, a large telecommunications company in the United Kingdom, spent £60 billion to acquire AirTouch Communications, a U.S. telecommunications company, in 1999. The resulting company, Vodafone AirTouch, later made a $124 billion offer for Mannesmann, a German company. In both cases, Vodafone estimated the incremental cash flows that would result from the acquisition, then discounted those cash flows at the estimated cost of capital. The resulting values were greater than the targets' market prices, so Vodafone made the offers.
As these examples illustrate, the cost of capital is a critical element in business decisions. When the decision involves a single project, it is called a "capital budgeting decision." Companies that consistently make wise capital budgeting choices create value for their investors, hence it is important for managers to understand the capital budgeting process. The cost of capital is also necessary to estimate the value of an entire company. When evaluating a potential acquisition, it is vital to have a reliable estimate of the company's value. It is also important for a company to develop a corporate valuation model for itself. Such a model provides insights into the sources of the company's value, and it can be used to guide managers when they evaluate alternative courses of action.
Recent survey evidence indicates that almost half of all large companies use compensation plans based on the concept of Economic Value Added (EVA). EVA is the difference between net operating profit after taxes (NOPAT) and a charge for capital, where the capital charge is calculated by multiplying the amount of capital by the cost of capital. Thus, the cost of capital is an increasingly important component of compensation plans.
The cost of capital is also a key factor in decisions relating to the use of debt versus equity capital. Finally, the
cost of capital is an important factor in the regulation of electric, gas, and telephone companies. These utilities
are natural monopolies in the sense that one firm can supply service at a lower cost than could two or more
firms. Because it has a monopoly, your electric or telephone company could, if it were unregulated, exploit you.
Therefore, regulators (1) determine the cost of the capital investors have provided the utility and (2) then set
rates designed to permit the company to earn its cost of capital, no more and no less.
What precisely do the terms "cost of capital" and "weighted average cost of capital" mean? To begin, note that it
is possible to finance a firm entirely with common equity. However, most firms employ several types of capital,
called capital components, with common and preferred stock, along with debt, being the three most frequently
used types. All capital components have one feature in common: The investors who provided the funds expect
to receive a return on their investment.
If a firm's only investors were common stockholders, then the cost of capital used in capital budgeting would be
the required rate of return on equity. However, most firms employ different types of capital, and, due to
differences in risk, these different securities have different required rates of return. The required rate of return
on each capital component is called its component cost, and the cost of capital used to analyze capital
budgeting decisions should be a weighted average of the various components' costs. We call this weighted
average just that, the weighted average cost of capital, or WACC.
Most firms set target percentages for the different financing sources. For example, National Computer
Corporation (NCC) plans to raise 30 percent of its required capital as debt, 10 percent as preferred stock, and
60 percent as common equity. This is its target capital structure; for now, simply accept NCC's 30/10/60 debt, preferred, and common equity percentages as given.
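The weighted-average idea behind NCC's 30/10/60 target structure can be sketched as follows; the after-tax cost of debt and the cost of preferred used here anticipate the estimates made later in the chapter, while the 13.5 percent cost of common equity is purely a placeholder.

# WACC = wd*kd*(1-T) + wps*kps + ws*ks, using NCC's 30/10/60 target weights.
# The component costs below are illustrative placeholders.
def wacc(weights, costs):
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[k] * costs[k] for k in weights)

target_weights = {"debt": 0.30, "preferred": 0.10, "common": 0.60}
component_costs = {"debt": 0.11 * (1 - 0.40),   # after-tax cost of debt (estimated later in the chapter)
                   "preferred": 0.103,          # cost of preferred stock (estimated later in the chapter)
                   "common": 0.135}             # hypothetical cost of common equity
print(round(wacc(target_weights, component_costs), 4))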
Although NCC and other firms try to stay close to their target capital structures, they frequently deviate in the
short run for several reasons. First, market conditions may be more favorable in one market than another at a
particular time. For example, if the stock market is extremely strong, a company may decide that now is a good
time to issue common stock. The second, and probably more important, reason for deviations relates to
flotation costs, which are the costs that a firm must incur to issue securities. Flotation costs are addressed in
detail later in the chapter, but note that these costs are to a large extent fixed, so they become prohibitively high
if small amounts of capital are raised. Thus, it is inefficient and expensive to issue relatively small amounts of
debt, preferred stock, and common stock. Therefore, a company might issue common stock one year, debt in
the next couple of years, and preferred the following year, thus fluctuating around its target capital structure
rather than staying right on it all the time.
This situation can cause managers to make a serious error in their capital budgeting. To illustrate, assume that
NCC is currently at its target capital structure, and it is now considering how to raise capital to finance next
year's projects. NCC could raise a combination of debt and equity, but to minimize flotation costs it will raise
either debt or equity, but not both. Let's suppose it decides to issue debt, at a cost of 8 percent. The argument
is sometimes made that the cost of capital this year is 8 percent, because only debt at 8 percent will be used.
However, this is incorrect. If NCC finances this year's projects with debt, it will move away from its target capital
structure. Then, as expansion occurs in the future, it will at some point find it necessary to raise additional
equity.
Now suppose NCC borrows heavily at 8 percent during 2002, using up its debt capacity in the process, to
finance projects that yield 10 percent. In 2003, it has new projects available that yield 13 percent, well above
the return on the 2002 projects. However, because it used up its debt capacity in 2002, it must issue equity,
which costs 15.3 percent. Therefore, the company might reject these 13 percent projects because they would
have to be financed with 15.3 percent money.
However, this entire capital budgeting process would be incorrect. Why should a company accept 10 percent
projects one year and then reject 13 percent projects the next? Note also that if NCC had reversed the order of
its financing, raising equity in 2002 and debt in 2003, it would have reversed its capital budgeting decisions,
rejecting all projects in 2002 and accepting them all in 2003. Does it make sense to accept or reject projects
just because of the more or less arbitrary sequence in which capital is raised? The answer is no. To avoid such
errors, managers should view companies as ongoing concerns, and calculate their costs of capital as weighted
averages of the various types of funds they use, regardless of the specific source of financing employed in a
particular year.
The following sections discuss each of the component costs in more detail, and then we show how to combine
them to calculate the weighted average cost of capital.
Cost of Debt, kd (1 - T)
The first step in estimating the cost of debt is to determine the rate of return debtholders require, or kd.
Although estimating kd is conceptually straightforward, some problems arise in practice. Companies use both
fixed and floating rate debt, straight and convertible debt, and debt with and without sinking funds, and each
form has a somewhat different cost.
It is unlikely that the financial manager will know at the start of a planning period the exact types and amounts
of debt that will be used during the period: The type or types used will depend on the specific assets to be
financed and on capital market conditions as they develop over time. Even so, the financial manager does know
what types of debt are typical for his or her firm. For example, NCC typically issues commercial paper to raise
short-term money to finance working capital, and it issues 30-year bonds to raise long-term debt used to
finance its capital budgeting projects. Since the WACC is used primarily in capital budgeting, NCC's treasurer
uses the cost of 30-year bonds in her WACC estimate.
Assume that it is January 2002, and NCC's treasurer is estimating the WACC for the coming year. How should
she calculate the component cost of debt? Most financial managers would begin by discussing current and
prospective interest rates with their investment bankers. Assume that NCC's bankers state that a new 30-year,
non-callable, straight bond issue would require an 11 percent coupon rate with semiannual payments, and that
it would be offered to the public at its $1,000 par value. Therefore, kd is equal to 11 percent.
Note that the 11 percent is the cost of new, or marginal, debt, and it will probably not be the same as the
average rate on NCC's previously issued debt, which is called the historical, or embedded, rate. The
embedded cost is important for some decisions but not for others. For example, the average cost of all the
capital raised in the past and still outstanding is used by regulators when they determine the rate of return a
public utility should be allowed to earn. However, in financial management the WACC is used primarily to make
investment decisions, and these decisions hinge on projects' returns versus the cost of new, or marginal,
capital. Thus, for our purposes, the relevant cost is the marginal cost of new debt To be raised during the
planning period.
Suppose NCC had issued debt in the past, and its bonds are publicly traded. The financial staff could use the
market price of the bonds to find their yield to maturity (or yield to call if the bonds sell at a premium and are
likely to be called). The YTM (or YTC) is the rate of return the existing bondholders expect to receive, and it is also a good estimate of kd, the rate of return that new bondholders would require.
If NCC had no publicly traded debt, its staff could look at yields on publicly traded debt of similar firms. This too should provide a reasonable estimate of kd.
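As an aside, the yield to maturity referred to above can be backed out numerically from a bond's market price; the sketch below uses a simple bisection search on an illustrative semiannual-coupon bond, and the price and terms are hypothetical.

# Back out the yield to maturity (a proxy for kd) from a bond's market price by bisection.
def bond_price(face, annual_coupon_rate, years, annual_yield, freq=2):
    c = face * annual_coupon_rate / freq
    i = annual_yield / freq
    n = years * freq
    return sum(c / (1 + i) ** t for t in range(1, n + 1)) + face / (1 + i) ** n

def yield_to_maturity(price, face, annual_coupon_rate, years, freq=2):
    lo, hi = 1e-6, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if bond_price(face, annual_coupon_rate, years, mid, freq) > price:
            lo = mid           # computed price too high -> the yield must be higher
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical 30-year, 11% semiannual-coupon bond trading at a small discount:
print(round(yield_to_maturity(price=960.0, face=1000.0,
                              annual_coupon_rate=0.11, years=30), 4))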
The required return to debtholders, kd, is not equal to the company's cost of debt because, since interest
payments are deductible, the government in effect pays part of the total cost. As a result, the cost of debt to the
firm is less than the rate of return required by debtholders.
The after-tax cost of debt, kd(1 - T), is used to calculate the weighted average cost of capital, and it is the interest rate on debt, kd, less the tax savings that result because interest is deductible. This is the same as kd multiplied by (1 - T), where T is the firm's marginal tax rate:
After-tax component cost of debt = kd(1 - T)
Therefore, if NCC can borrow at an interest rate of 11 percent, and if it has a marginal federal-plus-state tax rate of 40 percent, then its after-tax cost of debt is 6.6 percent:
kd(1 - T) = 11%(1 - 0.4) = 11%(0.6) = 6.6%
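A minimal sketch of the same adjustment, using NCC's figures (the function name is illustrative):

# After-tax component cost of debt: kd * (1 - T).
def after_tax_cost_of_debt(kd, tax_rate):
    return kd * (1 - tax_rate)

print(after_tax_cost_of_debt(kd=0.11, tax_rate=0.40))   # -> 0.066, i.e. 6.6 percent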
A number of firms, including NCC, use preferred stock as part of their permanent financing mix. Preferred dividends are not tax deductible. Therefore, the company bears their full cost, and no tax adjustment is used when calculating the cost of preferred stock. Note too that while some preferreds are issued without a stated maturity date, today most have a sinking fund that effectively limits their life. Finally, although it is not mandatory that preferred dividends be paid, firms generally have every intention of doing so, because otherwise (1) they cannot pay dividends on their common stock, (2) they will find it difficult to raise additional funds in the capital markets, and (3) in some cases preferred stockholders can take control of the firm.
The component cost of preferred stock used to calculate the weighted average cost of capital, kps, is the preferred dividend, Dps, divided by the net issuing price, Pn, which is the price the firm receives after deducting flotation costs:
Component cost of preferred stock = kps = Dps / Pn
Flotation costs are higher for preferred stock than for debt, hence they are incorporated into the formula for
preferred stocks' costs.
To illustrate the calculation, assume that NCC has preferred stock that pays a $10 dividend per share and sells for $100 per share. If NCC issued new shares of preferred, it would incur an underwriting (or flotation) cost of 2.5 percent, or $2.50 per share, so it would net $97.50 per share. Therefore, NCC's cost of preferred stock is 10.3 percent:
kps = $10 / $97.50 = 10.3%
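A minimal sketch of the same calculation (the function name is illustrative):

# Component cost of preferred stock: kps = Dps / Pn (net issuing price after flotation costs).
def cost_of_preferred(dividend, price, flotation_pct):
    net_price = price * (1 - flotation_pct)
    return dividend / net_price

print(round(cost_of_preferred(dividend=10.0, price=100.0, flotation_pct=0.025), 4))  # -> 0.1026, about 10.3%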
Companies can raise common equity in two ways: (1) by issuing new shares and (2) by retaining earnings. If new shares are issued, what rate of return must the company earn to satisfy the new stockholders? In Chapter 6, we saw that investors require a return of ks. However, a company must earn more than ks on new external equity to provide this rate of return to investors, because there are commissions and fees, called flotation costs, when a firm issues new equity.
Few mature firms issue new shares of common stock. In fact, less than 2 percent of all new corporate funds come from the external equity market. There are three reasons for this:
2. Investors perceive issuing equity as a negative signal with respect to the true value of the company's stock.
Investors believe that managers have superior knowledge about companies' future prospects, and that
managers are most likely to issue new stock when they think the current stock price is higher than the true
value. Therefore, if a mature company announces plans to issue additional shares, this typically causes its
stock price to decline.
3. An increase in the supply of stock will put pressure on the stock's price, forcing the company to sell the new
stock at a lower price than existed before the new issue was announced.
There are times when companies should issue stock in spite of these problems, hence we discuss stock issues
later in the chapter. However, for the most part we assume that the companies in our examples, like most, do
not plan to issue new shares.
Does new equity capital raised by retaining earnings have a cost? The answer is a resounding yes. If some of
its earnings are retained, then stockholders will incur an opportunity cost—the earnings could have been paid
out as dividends (or used to repurchase stock), in which case stockholders could then have reinvested the
money in stocks, bonds, real estate, and so on. Thus, the firm should earn on its reinvested earnings at least as
much as its stockholders themselves could earn on alternative investments of equivalent risk.
What rate of return can stockholders expect to earn on equivalent-risk investments? The answer is ks, because they expect to earn that return by simply buying the stock of the firm in question or that of a similar firm.
Therefore, ks is the cost of common equity raised internally by retaining earnings. If a company cannot earn at
least ks on reinvested earnings, then it should pass those earnings on to its stockholders and let them invest
the money themselves in assets that do provide ks.
Whereas debt and preferred stock are contractual obligations that have easily determined costs, it is more difficult to estimate ks. However, we can employ the principles described in Chapters 6 and 10 to produce
reasonably good cost of equity estimates. Three methods typically are used: (1) the Capital Asset Pricing Model
(CAPM), (2) the discounted cash flow (DCF) method, and (3) the bond-yield-plus-risk-premium approach.
These methods are not mutually exclusive—no method dominates the others, and all are subject to error when
used in practice. Therefore, when faced with the task of estimating a company's cost of equity, we generally use
all three methods and then choose among them on the basis of our confidence in the data used for each in the
specific case at hand.
To estimate the cost of common stock using the Capital Asset Pricing Model (CAPM) as discussed in Chapter 6, we proceed as follows:
Step 1. Estimate the risk-free rate, kRF.
Step 2. Estimate the current expected market risk premium, RPM.
Step 3. Estimate the stock's beta coefficient, bi, and use it as an index of the stock's risk. The i signifies the ith company's beta.
Step 4. Substitute the preceding values into the CAPM equation to estimate the required rate of return on the stock in question:

ks = kRF + (RPM)bi

Equation 11-3 shows that the CAPM estimate of ks begins with the risk-free rate, kRF, to which is added a risk premium set equal to the risk premium on the market, RPM, scaled up or down to reflect the particular stock's risk as measured by its beta coefficient. The following sections explain how to implement the four-step process.
The starting point for the CAPM cost of equity estimate is kRF, the risk-free rate. There is really no such thing as
a truly riskless asset in the U.S. economy. Treasury securities are essentially free of default risk, but
nonindexed long-term T-bonds will suffer capital losses if interest rates rise, and a portfolio of short-term T-bills
will provide a volatile earnings stream because the rate earned on T-bills varies over time.
Since we cannot in practice find a truly riskless rate upon which to base the CAPM, what rate should we use?
A recent survey of highly regarded companies shows that about two-thirds of the companies use the rate on
long-term Treasury bonds. We agree with their choice, and here are our reasons:
1. Common stocks are long-term securities, and although a particular stockholder may not have a long
investment horizon, most stockholders do invest on a long-term basis. Therefore, it is reasonable to think
that stock returns embody long-term inflation expectations similar to those reflected in bonds rather than the
short-term expectations in bills.
2. Treasury bill rates are more volatile than are Treasury bond rates and, most experts agree, more volatile
than ks.
3. In theory, the CAPM is supposed to measure the expected return over a particular holding period. When it
is used to estimate the cost of equity for a project, the theoretically correct holding period is the life of the
project. Since many projects have long lives, the holding period for the CAPM also should be long.
Therefore, the rate on a long-term T-bond is a logical choice for the risk-free rate.
In light of the preceding discussion, we believe that the cost of common equity is more closely related to
Treasury bond rates than to T-bill rates. This leads us to favor T-bonds as the base rate, or kRF, in a CAPM
cost of equity analysis. T-bond rates can be found in The Wall Street Journal or the Federal Reserve Bulletin. Generally, we use the yield on a 10-year T-bond as the proxy for the risk-free rate.
The market risk premium, RPM, is the expected market return minus the risk-free rate, kM — kRF. It can be
estimated on the basis of (1) historical data or (2) forward-looking data.
Historical Risk Premium A very complete and accurate historical risk premium study, updated annually, is
available from Ibbotson Associates, who examine market data over long periods of time to find the average
annual rates of return on stocks, T-bills, T-bonds, and a set of high-grade corporate bonds. For example, Table 7-1 summarizes some results from their 2000 study, which covers the period 1926-1999.
Note that common stocks provided the highest average return over the 74-year period, while Treasury bills
gave the lowest. T-bills barely covered inflation, while common stock provided a substantial real return. Table 7-
1 also reports the implied risk premiums, or differences, between stocks and Treasury securities. Note that the
risk premium of stocks over long-term T-bonds is about 7.8 percent when using the arithmetic average and
about 6.2 percent when using the geometric average. This leads to the question of which average to use. Keep
in mind that the logic behind using historical risk premiums to estimate the current risk premium is the basic
assumption that the future will resemble the past. If this assumption is reasonable, then the annual arithmetic
average is the theoretically correct predictor for next year's risk premium. On the other hand, the geometric
average is a better predictor of the risk premium over a longer future interval, say, the next 20 years.
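The difference between the two averages can be made concrete with a small Python sketch; the return series below is purely hypothetical and is not Ibbotson data:

# Arithmetic vs. geometric average of a series of annual risk premiums.
# The numbers are hypothetical, chosen only to show that the two averages differ.
premiums = [0.25, -0.10, 0.15, 0.05, -0.20, 0.30]

arithmetic = sum(premiums) / len(premiums)

product = 1.0
for p in premiums:
    product *= (1 + p)
geometric = product ** (1 / len(premiums)) - 1

print(f"arithmetic average: {arithmetic:.2%}")   # higher
print(f"geometric average:  {geometric:.2%}")    # lower for a volatile series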
However, it is not at all clear that the future will be like the past. For example, the choice of the beginning and ending periods can have a major effect on the calculated risk premiums. Ibbotson Associates used the longest period available to them, but had their data begun some years earlier or later, or ended earlier, their results would have been very different. In fact, using data for the past 30 or 40 years, the arithmetic average market risk premium has ranged from 5 to 6 percent, which is quite different from the 7.8 percent over the last 74 years.
Note too that using periods as short as 5 to 10 years can lead to bizarre results. Indeed, over many periods the
Ibbotson data would indicate negative risk premiums, which would lead to the conclusion that Treasury
securities have a higher required return than common stocks. That, of course, is contrary to both financial
theory and common sense. All this suggests that historical risk premiums should be approached with caution.
As one businessman muttered after listening to a professor give a lecture on the CAPM, "Beware of
academicians bearing gifts."
Forward-Looking Risk Premiums The historical approach to risk premiums used by Ibbotson Associates
assumes that investors expect future results, on average, to equal past results. However, as we noted, the
estimated risk premium varies greatly depending on the period selected, and, in any event, investors today
probably expect results in the future to be different from those achieved during the Great Depression of the
1930s, the World War II years of the 1940s, and the peaceful boom years of the 1950s, all of which are
included (and given equal weight with more recent results) in the Ibbotson data. The questionable assumption that future expectations are equal to past realizations, together with the sometimes nonsensical results obtained in historical risk premium studies, has led to a search for forward-looking, or ex ante, risk premiums.
The most common approach to forward-looking premiums is to use the discounted cash flow (DCF) model to estimate the expected market rate of return, then to calculate RPM as kM - kRF, and finally to use this estimate of RPM in the Security Market Line. This procedure recognizes that if markets are in equilibrium, the expected rate of return on the market is also its required rate of return, so when we estimate the expected market return we are also estimating kM:

kM = D1/P0 + g
Since D1 for the market as measured by the S&P 500 or some other index can be predicted quite accurately, and since the current market value of the index (used for P0) is also known, the major task is to estimate g, the
average expected long-term growth rate for the market index. Even here, however, the estimation task is
simplified because one can reasonably assume a constant long-term growth rate for a portfolio of mature stocks
such as those in the S&P 500.
Financial services companies such as Value Line publish, on a regular basis, a forecast based on DCF
methodology for the expected rate of return on the market, kM. One can subtract the current T-bond rate from
such a market forecast to obtain an estimate of the current market risk premium, RPM.
Two potential problems arise when we attempt to use data from organizations such as Value Line. First, what
we really want is the marginal investor's expectations, not those of a security analyst. However, this is probably
not a major problem, since several studies have proved beyond much doubt that investors, on average, form
their own expectations on the basis of professional analysts' forecasts. The second problem is that there are a
number of securities firms besides Value Line, and, at any given time, different analysts' forecasts of future
market returns are somewhat different. This suggests that it would be most appropriate to obtain a number of
forecasts of kM and then to use the average value to estimate RPM for use in the SML. Several services
(including Zacks and Institutional Brokers Estimate System, or IBES) publish data on the forecasts of
essentially all widely followed analysts. Therefore, one can use the Zacks or IBES aggregate growth rate
forecast, along with an aggregate dividend yield, to develop a consensus RPM forecast and thus avoid potential
bias from the use of only one organization's estimate. However, we have followed the forecasts of several of the
larger organizations over a period of several years, and we have rarely found their kM estimates to differ by
more than ±0.3 percentage point from one another. Note, though, that ex ante risk premiums are not stable:
they vary over time. Therefore, when using the CAPM to estimate the cost of equity, it is best to use a current
estimate of the ex ante RPM. In recent years, the forward-looking risk premium has been in the range of 4.5 to
6.5 percent.
Our View on the Market Risk Premium After reading the previous sections, you might well be confused about
the correct market risk premium, since the different approaches give different results. Using the historical
Ibbotson data over the last 74 years, it appears that the market risk premium is somewhere between 6.2 and
7.8 percent, depending on whether you use an arithmetic average or a geometric average. However, in the past
30 to 40 years, the historical premium has been in the range of 5 to 6 percent. Using the forward-looking
approach, it appears that the market risk premium is somewhere in the area of 4.5 to 5.5 percent. To further
muddy the waters, the previously cited survey indicates that 37 percent of responding companies use a market
risk premium of 5 to 6 percent, 15 percent use a premium provided by their financial advisors (who typically
make a recommendation of about 7 percent), and 11 percent use a premium in the range of 4 to 4.5 percent.
Moreover, the premium used has tended toward the low end of the range when interest rates were high and toward the high end when rates were low.
Here is our opinion. The risk premium is driven primarily by investors' attitudes toward risk, and there are good
reasons to believe that investors are less risk averse today than 50 years ago. The advent of pension plans,
Social Security, health insurance, and disability insurance means that people today can take more chances with
their investments, which should make them less risk averse. Also, many households have dual incomes, which
also allows investors to take more chances. Finally, the historical average return on the market as Ibbotson
measures it is probably too high due to a survivorship bias. Putting it all together, we conclude that the true risk
premium in 2001 is almost certainly lower than the long-term historical average of more than 7 percent.
But how much lower is the current premium? In our consulting, we typically use a risk premium of 5 percent, but we would have a hard time arguing with someone who used a risk premium in the range of 4.5 to 5.5 percent. The bottom line is that there is no way to prove that a particular risk premium is either right or wrong, although we are extremely doubtful that the market premium is less than 4 percent or greater than 6 percent.
Estimating Beta
Recall from Chapter 6 that beta is usually estimated as the slope coefficient in a regression, with the company's
stock returns on the y-axis and market returns on the x-axis. The resulting beta is called the historical beta,
since it is based on historical data. Although this approach is conceptually straightforward, complications quickly
arise in practice. We described these complications in detail in Chapter 7, but it is worthwhile to repeat some of
them here.
First, there is no theoretical guidance as to the correct holding period over which to measure returns. The
returns for a company can be calculated using daily, weekly, or monthly time periods, and the resulting
estimates of beta will differ. Beta is also sensitive to the number of observations used in the regression. With
too few observations, the regression loses statistical power, but with too many, the "true" beta may have
changed during the sample period. In practice, it is common to use either four to five years of monthly returns or
one to two years of weekly returns.
Second, the market return should, theoretically, reflect every asset, even the human capital being built by
students. In practice, however, it is common to use only an index of common stocks such as the S&P 500, the
NYSE Composite, or the Wilshire 5000. Even though these indexes are highly correlated with one another,
using different indexes in the regression will often result in different estimates of beta.
Third, some organizations modify the calculated historical beta in order to produce what they deem to be a
more accurate estimate of the "true" beta, where the true beta is the one that reflects the risk perceptions of the
marginal investor. One modification, called an adjusted beta, attempts to correct a possible statistical bias by
adjusting the historical beta to make it closer to the average beta of 1.0. Another modification, called a
fundamental beta, incorporates information about the company, such as changes in its product lines and capital
structure.
Fourth, even the best estimates of beta for an individual company are statistically imprecise. The average
company has an estimated beta of 1.0, but the 95 percent confidence interval ranges from about 0.6 to 1.4. For
example, if your regression produces an estimated beta of 1.0, then you can be 95 percent sure that the true
beta is in the range of 0.6 to 1.4.
So, you should always bear in mind that while the estimated beta is useful when calculating the required return
on stock, it is not absolutely correct. Therefore, managers and financial analysts must learn to live with some
uncertainty when estimating the cost of capital.
To illustrate the CAPM approach for NCC, assume that kRF = 8%, RPM = 6%, and bi = 1.1, indicating that NCC is somewhat riskier than average. Therefore, NCC's cost of equity is 14.6 percent:

ks = 8% + (6%)(1.1) = 8% + 6.6% = 14.6%.
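A minimal Python sketch of the CAPM estimate, using the NCC inputs above (the helper function is an illustrative assumption, not part of the text):

def capm_cost_of_equity(k_rf, market_risk_premium, beta):
    """CAPM: k_s = k_RF + (RP_M) * b_i."""
    return k_rf + market_risk_premium * beta

# NCC: k_RF = 8%, RP_M = 6%, beta = 1.1
k_s = capm_cost_of_equity(0.08, 0.06, 1.1)
print(f"{k_s:.1%}")  # 14.6%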
It should be noted that although the CAPM approach appears to yield an accurate, precise estimate of ks, there are actually several problems with it. First, if a firm's stockholders are not well diversified, they may be concerned with stand-alone risk in addition to market risk. In that case, the firm's true investment risk would not be measured by its beta, and the CAPM procedure would understate the correct value of ks. Further, even if the
CAPM method is valid, it is hard to know the correct estimates of the inputs required to make it operational
because (1) it is hard to estimate the beta that investors expect the company to have in the future, and (2) it is
difficult to estimate the market risk premium.
Dividend-Yield-plus-Growth-Rate, or Discounted Cash Flow (DCF), Approach
In Chapter 10, we saw that both the price and the expected rate of return on a share of common stock depend on the dividends expected on the stock:

P0 = D1/(1 + ks)^1 + D2/(1 + ks)^2 + . . . = the sum, from t = 1 to infinity, of Dt/(1 + ks)^t

Here P0 is the current price of the stock; Dt is the dividend expected to be paid at the end of Year t; and ks is the required rate of return. If dividends are expected to grow at a constant rate, then Equation 11-4 reduces to

P0 = D1/(ks - g)
We can solve for ks to obtain the required rate of return on common equity, which for the marginal investor is also equal to the expected rate of return:

ks = D1/P0 + Expected g.

Thus, investors expect to receive a dividend yield, D1/P0, plus a capital gain, g, for a total expected return of ks. In equilibrium this expected return is also equal to the required return. This method of estimating the cost of equity is called the discounted cash flow, or DCF, method. Henceforth, we will assume that equilibrium exists, so the expected and required returns are equal and the single symbol ks can be used for both.
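A short Python sketch of the constant-growth DCF estimate follows; the dividend, price and growth inputs are hypothetical and chosen only for illustration:

def dcf_cost_of_equity(d1, p0, g):
    """Constant-growth DCF: k_s = D1 / P0 + g."""
    return d1 / p0 + g

# Hypothetical inputs: next dividend $1.15, price $23, growth 9.5% -> about 14.5%
print(f"{dcf_cost_of_equity(1.15, 23.0, 0.095):.1%}")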
Three inputs are required to use the DCF approach: the current stock price, the current dividend, and the
expected growth in dividends. Of these inputs, the growth rate is by far the most difficult to estimate. The
following sections describe the most commonly used approaches for estimating the growth rate: (1) historical
growth rates, (2) the retention growth model, and (3) analysts' forecasts.
Historical Growth Rates First, if earnings and dividend growth rates have been relatively stable in the past, and
if investors expect these trends to continue, then the past realized growth rate may be used as an estimate of
the expected future growth rate.
We illustrate several different methods for estimating historical growth in the file Ch 11 Tool Kit.xls on the
textbook's CD-ROM. For NCC, these different methods produce estimates of historical growth ranging from 4.6
percent to 11.0 percent, with most estimates fairly close to 7 percent.
As the Ch 11 Tool Kit.xls shows, one can take a given set of historical data and, depending on the years and
the calculation method used, obtain a large number of quite different growth rates. Now recall our purpose in
making these calculations: We are seeking the future dividend growth rate that investors expect, and we
reasoned that, if past growth rates have been stable, then investors might base future expectations on past
trends. This is a reasonable proposition, but, unfortunately, we rarely find much historical stability. Therefore,
the use of historical growth rates in a DCF analysis must be applied with judgment, and also be used (if at all) in conjunction with other growth estimation methods as discussed next.
Retention Growth Model Another method for estimating the growth rate is to use the retention growth model:

g = b(r)
Here r is the expected future return on equity (ROE), and b is the fraction of its earnings that a firm is expected to retain (1 - Payout ratio). Equation 11-7 produces a constant growth rate, but when we use it we are, by implication, making four important assumptions: (1) We expect the payout rate, and thus the retention rate, b = 1 - Payout, to remain constant; (2) we expect the return on equity on new investment, r, to equal the firm's
current ROE, which implies that we expect the return on equity to remain constant; (3) the firm is not expected
to issue new common stock, or, if it does, we expect this new stock to be sold at a price equal to its book value;
and (4) future projects are expected to have the same degree of risk as the firm's existing assets.
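A minimal Python sketch of the retention growth model, with hypothetical ROE and payout figures:

def retention_growth(roe, payout_ratio):
    """Retention growth model: g = b * r, with b = 1 - payout ratio."""
    return (1 - payout_ratio) * roe

# Hypothetical firm: ROE of 14%, payout ratio of 50% -> g = 7%
print(f"{retention_growth(0.14, 0.50):.1%}")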
Some analysts use a subjective, ad hoc procedure to estimate a firm's cost of common equity: they simply add
a judgmental risk premium of 3 to 5 percentage points to the interest rate on the firm's own long-term debt. It is
logical to think that firms with risky, low-rated, and consequently high-interest-rate debt will also have risky,
high-cost equity, and the procedure of basing the cost of equity on a readily observable debt cost utilizes this
logic. For example, if an extremely strong firm such as BellSouth had bonds which yielded 8 percent, its cost of equity might be estimated as follows:

ks = Bond yield + Risk premium = 8% + 4% = 12%.
The bonds of NCC, a riskier company, have a yield of 10.4 percent, making its estimated cost of equity 14.4 percent:

ks = 10.4% + 4% = 14.4%.
Because the 4 percent risk premium is a judgmental estimate, the estimated value of ks is also judgmental.
Empirical work in recent years suggests that the risk premium over a firm's own bond yield has generally
ranged from 3 to 5 percentage points, so this method is not likely to produce a precise cost of equity. However,
it can get us "into the right ballpark."
We have discussed three methods for estimating the required return on common stock. For NCC, the CAPM
estimate is 14.6 percent, the DCF constant growth estimate is 14.5 percent, and the bond-yield-plus-risk-
premium is 14.4 percent. The overall average of these three methods is (14.6% + 14.5% + 14.4%)/3 = 14.5%.
These results are unusually consistent, so it would make little difference which one we used. However, if the
methods produced widely varied estimates, then a financial analyst would have to use his or her judgment as to
the relative merits of each estimate and then choose the estimate that seemed most reasonable under the
circumstances.
A 2000 research paper that reported the results of two surveys found that the CAPM approach is by far the
most widely used method. Although most firms use more than one method, almost 74 percent of respondents in
one survey, and 85 percent in the other, used the CAPM. This is in sharp contrast to a 1982 survey, which
found that only 30 percent of respondents used the CAPM. Approximately 16 percent now use the DCF
approach, down from 31 percent in 1982. The bond-yield-plus-risk-premium is used primarily by companies that
are not publicly traded.
People experienced in estimating equity capital costs recognize that both careful analysis and sound judgment
are required. It would be nice to pretend that judgment is unnecessary and to specify an easy, precise way of
determining the exact cost of equity capital. Unfortunately, this is not possible—finance is in large part a matter
of judgment, and we simply must face that fact.
As we shall see in Chapters 16 and 17, each firm has an optimal capital structure, defined as that mix of debt,
preferred, and common equity that causes its stock price to be maximized. Therefore, a value-maximizing firm
will establish a target (optimal) capital structure and then raise new capital in a manner that will keep the
actual capital structure on target over time. In this chapter, we assume that the firm has identified its optimal
capital structure, that it uses this optimum as the target, and that it finances so as to remain constantly on
target. How the target is established will be examined in Chapters 16 and 17.
The target proportions of debt, preferred stock, and common equity, along with the component costs of capital,
are used to calculate the firm's WACC. To illustrate, suppose NCC has a target capital structure calling for 30 percent debt, 10 percent preferred stock, and 60 percent common equity. Its before-tax cost of debt, kd, is 11 percent; its after-tax cost of debt is kd(1 - T) = 11%(0.6) = 6.6%; its cost of preferred stock, kps, is 10.3 percent; its cost of common equity, ks, is 14.5 percent; its marginal tax rate is 40 percent; and all of its new equity will come
from retained earnings. We can calculate NCC's weighted average cost of capital, WACC, as follows:
WACC = wdkd(1 - T) + wpskps + wceks
     = 0.3(11%)(0.6) + 0.1(10.3%) + 0.6(14.5%)
     = 1.98% + 1.03% + 8.70% = 11.7%.

Here wd, wps, and wce are the weights used for debt, preferred, and common equity, respectively.
Every dollar of new capital that NCC obtains will on average consist of 30 cents of debt with an after-tax cost of
6.6 percent, 10 cents of preferred stock with a cost of 10.3 percent, and 60 cents of common equity with a cost
of 14.5 percent. The average cost of each whole dollar, the WACC, is 11.7 percent.
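The WACC calculation for NCC can be sketched in Python as follows; the helper function is an illustrative assumption, not part of the text:

def wacc(w_d, k_d, tax_rate, w_ps, k_ps, w_ce, k_s):
    """WACC = w_d * k_d * (1 - T) + w_ps * k_ps + w_ce * k_s."""
    return w_d * k_d * (1 - tax_rate) + w_ps * k_ps + w_ce * k_s

# NCC: 30% debt at 11% pre-tax (40% tax), 10% preferred at 10.3%, 60% equity at 14.5%
print(f"{wacc(0.30, 0.11, 0.40, 0.10, 0.103, 0.60, 0.145):.1%}")  # about 11.7%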
Two points should be noted. First, the WACC is the weighted average cost of each new, or marginal, dollar of
capital—it is not the average cost of all dollars raised in the past. We are primarily interested in obtaining a cost
of capital for use in capital budgeting, and for this purpose the cost of the new money that will be invested is the
relevant cost. On average, each of these new dollars will consist of some debt, some preferred, and some
common equity.
Second, the percentages of each capital component, called weights, could be based on (1) accounting values
as shown on the balance sheet (book values), (2) current market values of the capital components, or (3)
management's target capital structure, which is presumably an estimate of the firm's optimal capital structure.
The correct weights are those based on the firm's target capital structure, since this is the best estimate of how
the firm will, on average, raise money in the future.
Valuation of firm(s) as a going concern is the base for any merger and acquisition exercise. The determination
of the right value of a business firm is crucial for the sustainable long-term success of the acquisition.
There are various approaches and methodologies for valuation of a firm. Some of the common approaches to
valuation are the discounted cash flow approach, the comparable firms approach and the adjusted book value
approach. The discounted cash flow approach to valuation relates the value of the firm to the present value of
the expected future cash flows of the firm. The comparable firms approach estimates the value of a firm in
relation to the value of other similar firms based on various parameters like earnings, sales, book value, cash
flows, etc. The adjusted book value approach to valuation involves estimation of the market value of the assets
and liabilities of the firm as a going concern. Historically the comparable firms method and the adjusted book
value method have been the more commonly used approaches to valuation of firms. However, in recent years, there has been a marked shift towards the application of the discounted cash flow method. The reasons for its increasing popularity and acceptance are its conceptual soundness and its strong endorsement by leading investment bankers and consultancy firms.
A postulate for sound investing is that an investor does not pay more for an asset than it is worth. This statement may seem logical and obvious, but it is forgotten and rediscovered at some time in every generation and every market. There are those who are disingenuous enough to argue that value lies in the eyes of the beholder and that any price can be justified if there are other investors willing to pay that price. This is patently absurd.
Perceptions may be all that matters when the asset is a painting or a sculpture, but investors do not (and should
not) buy assets or firms for aesthetic or emotional reasons. They buy them for the cash flows they expect to
receive. Consequently, the perceptions of value have to be backed up by reality, which implies that the price
paid for any asset should reflect the cash flow it is expected to generate. There are many areas in valuation on
which there is room to disagree, including the actual estimates of the true value and the time taken for the
prices to adjust to their true value. However, there is one point on which there can be no disagreement: pricing
cannot be justified by merely using the argument that there will be other buyers around willing to pay a higher
price in the future. That is equivalent to playing a very expensive game of musical chairs in which, before
playing, every investor has to answer the question "Where will I be when the music stops?"
Source: Damodaran on Valuation: Security Analysis for Investment and Corporate Finance by Aswath
Damodaran
The discounted cash flow model relates the value of the firm to the present value of its expected future cash
flows. The nature of the cash flows will depend upon the asset: dividends for an equity share, coupons and
redemption value for bonds and the post-tax cash flows for a project. This approach is based on the time value
concept where the value of any asset is the present value of its expected future cash flows.
The first step in the Discounted Cash Flow approach entails estimating the Free Cash Flow for the explicit
forecast period. The free cash flow represents the cash flow available to all the suppliers of capital to the firm.
These include the equity holders, the preference investors and the providers of debt to the firm. The free cash flow is used to service these providers of capital, through payment of interest, repayment of debt principal, payment of preference and equity dividends and buyback of shares.
The free cash flow of a firm is the sum of its free cash flow from operations and its non-operating cash flows.
The free cash flow from operations is the difference between the gross cash flow of the firm and its gross
investments.
The Gross Cash Flow can be computed as follows:
NOPLAT (Net Operating Profit Less Adjusted Taxes)
Add: Depreciation
Add: Non-Cash Charges
= Gross Cash Flow

The Gross Investment can be computed as follows:
Increase in Net Working Capital
Add: Capital Expenditure incurred
Add: Increase in Other Assets
= Gross Investments
Non-Operating Cash Flows represent the post-tax cash flows from items other than the regular operations of
the firm. For e.g. profit realized on sale of fixed assets.
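A minimal Python sketch of the free cash flow computation described above; the function signature and the sample figures are illustrative assumptions:

def free_cash_flow(noplat, depreciation, non_cash_charges,
                   delta_nwc, capex, delta_other_assets,
                   non_operating_cash_flow=0.0):
    """Free cash flow = gross cash flow - gross investment + non-operating cash flow."""
    gross_cash_flow = noplat + depreciation + non_cash_charges
    gross_investment = delta_nwc + capex + delta_other_assets
    return gross_cash_flow - gross_investment + non_operating_cash_flow

# Illustrative year (Rs. crore): NOPLAT 34, depreciation 10, no other non-cash charges,
# capex 20, no change in working capital or other assets, non-operating cash flow 10
print(free_cash_flow(34, 10, 0, 0, 20, 0, 10))  # 34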
The explicit forecast period of the firm also needs to be determined. One of the premises of this theory is that
the firm is a going concern. The implication of this assumption is that cash flows in perpetuity need to be discounted to value the firm. This is, however, impossible in practice. Hence the cash flows are explicitly
computed for a finite period of time and the continuing value of the firm at the end of such period is computed.
This finite period (say 7 years) for which the free cash flows are computed is called the explicit forecast period. Normally the explicit forecast period is coterminous with the period during which the company enjoys
competitive advantage. The firm is expected to stabilize and reach a steady state at the end of the explicit
forecast period. This implies that the ROCE, reinvestment rate and the growth rate remain constant in
perpetuity after the explicit forecast period.
It is also important to obtain a historical perspective before forecasting the expected free cash flow of the firm.
The principal drivers which affect the free cash flow are the Return on Capital Employed (ROCE) and the
Reinvestment Rate. The historical analysis involves careful perusal of past financial statements, analysis of the
historical ROCE and reinvestment rates and assessing the sustainability of these rates over the explicit
forecast period.
The second step in the DCF model involves computation of the cost of capital to the firm. The cost of capital
is the rate to be used for discounting the free cash flows to their present values.
The cost of capital is to be computed as the weighted average of the costs of all sources of capital. The weights assigned are based on the market value of each of the components of the capital. Weightages based on market value are considered to be superior to weights based on book value. This is because book values represent the financial legacy rather than a current perspective. On the other hand, weightages based
on market value are taken to represent the economic claims of the various providers of capital. Secondly the
cost of capital is to be computed on post-tax terms. This is to ensure consistency in the approach, as the free
cash flow is computed on post-tax basis.
The cost of capital is to be computed as follows:

ko = ke(S/V) + kp(P/V) + kd(1 - t)(B/V)

where,
ke, kp and kd are the costs of equity capital, preference capital and debt respectively, t is the tax rate, S, P and B are the market values of the equity capital, the preference capital and the debt respectively, and V is the sum of the market values of the equity capital, preference capital and the debt, i.e. V = S + P + B.
It is to be noted that non-interest-bearing debt like sundry creditors and bills payable is to be excluded from the
above computation. This is because the cost of extending credit would have been factored in by the seller in
pricing of the goods/services. Hence the impact of the same would have been reflected in the free cash flows
which would have been understated to that extent.
The third step in the DCF model involves computing the continuing value of the firm. Continuing value is also
referred to as the horizon value. The continuing value represents the value of the free cash flows beyond the
explicit forecast period. In many cases, the continuing value may be the dominant component of the value of the
firm. Hence the valuer should be circumspect and realistic in computing the continuing value.
There are a number of methods to compute the continuing value. The most common method is the Free Cash
Flow method. The premise of this method is that the free cash flow will grow at a constant rate after the explicit
forecast period. The continuing value of the firm may be computed as follows:

Continuing Value = FCF(T+1)/(ko - g)

where,
FCF(T+1) is the expected free cash flow in the first year after the explicit forecast period, ko is the cost of capital, and g is the constant rate at which the free cash flow is expected to grow beyond the explicit forecast period.
In addition to the above method, there are a number of non-cash flow based methods. The non-cash flow
based methods are the Book Value Method, Price Earnings Multiple (P/E) Method and the Replacement Cost
Method.
The Book Value Method values the firm at its book value at the end of the explicit forecast period. Some valuers use a variation of this method and value the firm as a multiple of its book value. The main drawback of this
method is that it does not take into account the increase in the book value due to inflation. Another drawback is
that the book values are influenced by the accounting policies.
The Price Earnings Multiple Method involves valuing the firm based on its earnings of the first year after the
explicit forecast period. The earnings are multiplied by an appropriate multiple (P/E ratio) to determine the
continuing value. The advantage of this method is its familiarity as it is extensively used to value equity. The
main drawback is that this method uses earnings, which are vulnerable to distortion due to the high degree of subjectivity involved in their computation. Secondly, the valuation process becomes inconsistent due to the use of
cash flows in valuing the firm in the explicit forecast period and the use of earnings thereafter.
The replacement cost method determines the continuing value based on the replacement cost of the firm's assets. The main drawback of this method is that only certain assets can be replaced. Some non-tangible factors like relationships with customers (e.g. Goldman Sachs' relationship with its clients), reputation of the firm for its ethical practices (e.g. Infosys Technologies), employee loyalty, etc. cannot be replaced. However, some of these
aspects are the principal factors responsible for the success of a firm. Ignoring these factors as non-replaceable
grossly understates the value of the firm. Secondly, in some instances, it may be simply uneconomical to
replace some of its assets. In such an eventuality, the replacement cost exceeds the value of the firm as a
going concern.
The last step in the DCF model involves determination of the value of the firm. The free cash flow projections
and the continuing value of the firm should be discounted by the cost of capital to arrive at the present value of
the cash flows. The value of non-operating assets like investments should be added to it. The market value of
all claims (bonds issued, loans, etc.) on the firm should be deducted to arrive at the ownership value of the firm.
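The mechanics of this last step can be sketched in Python as follows; the helper function and the sample inputs are hypothetical and are not intended to reproduce any particular illustration in this chapter:

def firm_value(free_cash_flows, cost_of_capital, terminal_growth,
               non_operating_assets=0.0, value_of_claims=0.0):
    """Discount explicit-period free cash flows and the continuing value at the
    cost of capital, add non-operating assets, and deduct outstanding claims
    to arrive at the ownership value of the firm."""
    k, g = cost_of_capital, terminal_growth
    n = len(free_cash_flows)
    # Present value of the explicit forecast period
    pv_explicit = sum(fcf / (1 + k) ** t
                      for t, fcf in enumerate(free_cash_flows, start=1))
    # Continuing (horizon) value: growing perpetuity of the first post-horizon FCF
    continuing_value = free_cash_flows[-1] * (1 + g) / (k - g)
    pv_continuing = continuing_value / (1 + k) ** n
    enterprise_value = pv_explicit + pv_continuing + non_operating_assets
    return enterprise_value - value_of_claims

# Hypothetical inputs: five years of FCF, 13% cost of capital, 5% terminal growth,
# no non-operating assets, claims (debt) of 40
print(round(firm_value([34, 49, 48, 63, 66], 0.13, 0.05, 0.0, 40.0), 1))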
Valuing a company is neither an art nor a science but an odd combination of both. There is enough science that
appraisers are not left to rely solely on experience but there is enough art that without experience and
judgments, failure is assured.
Illustration 7.1
Swagat Enterprises is engaged in the construction business. Its current financials are as follows: sales of Rs.100 crore and operating expenses of Rs.40 crore. The current level of its net fixed assets is Rs.80 crore. The corresponding level of net current assets stands at Rs.10 crore.
The sales of the firm are expected to grow at the rate of 10% per year for the next 5 years. During the same
period, the operating expenses are expected to increase at the rate of 8% per annum. Depreciation is to be
charged @ 10% of the net fixed assets at the beginning of the year. To finance this expansion, Swagat
Enterprises will be making the following investments:
Year    Investment in fixed assets (Rs. crore)
1       20
2       0
3       10
4       15
5       0
Throughout the five-year period, the net current assets will remain at 10% of the net fixed assets.
All the investments will be made at the beginning of the respective years.
The tax rate will continue to be at 40%. The post-tax non-operating cash flows will be as follows:
Year    Non-operating cash flows (Rs. crore)
1       10
3       5
4       20
The post-tax cost of debt is 8% for the firm. The cost of equity is 15%.
The market value of debt is Rs.40 crore, and the market value of equity is Rs.110 crore.
From the sixth year onwards, the free cash flow is expected to grow @ 10% per annum.
Calculate the value of Swagat Enterprises.
Solution
Step 1
Calculating the Gross Cash Flow for the explicit forecast period
Year                     1      2      3      4      5
Sales                  110    121    133    146    161
Operating Expenses      43     47     50     54     59
EBDIT                   67     74     83     92    102
Depreciation**          10      9      9     10      9
EBIT                    57     65     74     82     94
Taxes                   23     26     29     33     37
NOPLAT                  34     39     44     49     56
Gross Cash Flow         44     48     53     59     65
** Depreciation is computed as follows (Rs. crore):
Year                                           1      2      3      4      5
Net fixed assets at the end of previous year  80     90     81     82     87
Additions at the beginning of the year        20      0     10     15      0
Total                                        100     90     91     97     87
Depreciation for the year (10% of Total)      10      9      9     10      9
Step 3
Calculating the Free Cash Flow for the explicit forecast period (Rs. crore)
Year                              1      2      3      4      5
Gross cash flow                  44     48     53     59     65
Gross investment                 20     -1     10     16     -1
Free cash flow from operations   24     49     43     43     66
Non-operating cash flow          10      0      5     20      0
Free cash flow                   34     49     48     63     66
Step 4
Ascertaining the cost of capital
Cost of capital = (0.08 x 40/150) + (0.15 x 110/150) = 0.0213 + 0.1100 = 0.1313, i.e. 13.13%
Step 5
Computing the continuing value of the firm at the end of the explicit forecast period, using the free cash flow growth rate of 10% per annum from the sixth year onwards.
Step 6
Discounting the free cash flows and the continuing value at the cost of capital to arrive at the value of the firm:
Value of Swagat Enterprises = Rs.2455 cr.
The DCF approach is the ideal model to be used when a firm has positive future cash flows, the expected cash
flows can be reliably estimated and there exists a proxy for risk which is required in computation of discount
rates. However, in a real life situation, the valuer faces some practical challenges. The limitations of the DCF
approach become apparent in the following cases.
1. Asset Rich Firms: DCF valuation reflects the value of all the assets which produce cash flows. The firm
may have some assets which do not produce any cash flows. For e.g. surplus land, unutilized floor space in
factory buildings, staff quarters, etc. The value of such assets will not be reflected in the DCF valuation. The
same limitation also applies, to a lesser extent, to underutilized assets, as their values will be understated in
the DCF model.
2. Firms in Distress: Firms in financial distress may have negative current and future cash flows. The
present value of such firms will be a negative figure under the DCF method. Further such firms have a high
probability of going into bankruptcy.
This violates the basic premise of the DCF approach which views a firm as a going concern.
4. Cyclical Firms: The cash flows of cyclical firms tend to track the performance of the economy. The earnings and cash flows are high during boom periods and low during recessionary periods. The valuations can be misleading if the explicit forecast period does not cover the entire economic cycle. Forecasting cash flows over the entire cycle is, however, an onerous task, and the resulting valuation can be highly subjective, depending on the valuer's assumptions about the timing and the duration of the phases of the economic cycle.
5. Firms with Product Options: Firms often have unutilized product options which do not generate any
current cash flows. For e.g. for companies involved in oil exploration, winning the right to drill oil and gas in
a particular region represents a product option. Similarly firms may also have unutilized intellectual property
rights like patents and copyrights. If the DCF model is applied for such valuations, the firm will be grossly undervalued. Some practitioners have overcome this limitation either by obtaining the market value of such
options or by applying the option pricing model for its valuation. The resultant value of the option is added to
the value obtained from DCF valuation to arrive at the true value of the firm.
The comparable firms approach is also called the relative valuation approach. In this approach, the value of any firm is derived from the
value of comparable firms, based on a set of common variables like earnings, sales, cash flows, book value,
etc. The most common manifestation of the comparable approach is in the use of the industry average Price-
Earnings multiple (P/E ratio) for valuation of equity. Another commonly used tool for valuing equity is the Price-
Book Value multiple (P/BV ratio).
The comparable firms model is essentially a top-down approach. The valuation process applying this approach
is a four-stage exercise.
The valuer is required to make an in-depth analysis of the firm to get rich insights into the financial and
operational aspects.
The profitability of the firm may be analyzed by looking at the operating profit margins and the net profit
margins. Further analysis may be made by analyzing the return on capital employed and return on net worth.
The liquidity position may be analyzed from the current ratio and quick ratio. The interest coverage and the debt
service coverage would provide pointers to the solvency position. The efficiency of the operations can be
captured from ratios like inventory turnover, fixed assets turnover, debtors turnover, etc. The cash flows of the
firm need to be carefully studied and a sensitivity analysis may be conducted. The capital structure of the firm
also needs to be analyzed.
The qualitative analysis includes assessing the position of the firm in the industry, market share, competitive
advantage (if any), etc. For e.g. Reliance Industries plays a dominant role in the petrochemical industry in India and commands a higher valuation multiple than IPCL. The managerial evaluation is also important as the competence and integrity of the management have a significant bearing on the valuation. For e.g. one of the factors due to
which Infosys Technologies commands high valuation is the market perception of its exemplary corporate
governance. The ownership pattern plays its part as historically MNC firms have been given higher valuation
vis-a-vis domestic firms as they are considered to be better managed.
Identification of Comparable Firms
The next stage involves identification of comparable firms. This process begins with a thorough analysis of the
industry in which the firm operates. The valuer is to carefully assess the general profile of the industry,
competitive structure, demand-supply position, installed capacities, pricing system, availability of inputs,
government policies and regulatory framework, long-term trends, etc. The next step involves identification of
firms with comparable profile. The parameters used for identification of such firms include product profile, scale
of operations, markets served, cost structures, geographical location, technology, etc.
In practice, it is virtually impossible to find truly similar firms, which can match the subject firm on all or even
most of the parameters. The identification process generally involves delineating a list of firms which bear some
resemblance to the firm being valued. Once a universe of potentially comparable firms is identified, each of the
firms is analyzed based on the predetermined parameters. From the universe identified as above three to five
specific firms which bear similarity, as close as possible, to the firm being valued, are selected.
Solution
The weightages to P/S ratio, P/E ratio and the P/BV ratio are 1, 2 and 1 respectively. Thus the weighted
average value will be
= Rs.141.32 cr.
The value of Sigma Ltd., using the comparable firms approach, is Rs.141.32 cr.
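The weighted-average mechanics of the comparable firms approach can be sketched in Python; the multiples and figures below are hypothetical and are not the Sigma Ltd. data:

def comparable_firms_value(metrics, multiples, weights):
    """Weighted average of values implied by comparable-firm multiples.
    metrics, multiples and weights are parallel lists, e.g. the firm's sales,
    earnings and book value against industry-average P/S, P/E and P/BV ratios."""
    implied_values = [m * x for m, x in zip(metrics, multiples)]
    weighted = sum(v * w for v, w in zip(implied_values, weights))
    return weighted / sum(weights)

# Hypothetical data: sales 120, earnings 15, book value 60 (Rs. crore),
# against average multiples of 1.2x sales, 9x earnings and 2.0x book value,
# weighted 1 : 2 : 1 as in the solution above
print(comparable_firms_value([120, 15, 60], [1.2, 9.0, 2.0], [1, 2, 1]))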
The adjusted book value approach to valuation involves estimation of the market value of the assets and
liabilities of the firm as a going concern. It is a pointer to the liquidation value of the firm. It is, however, distinct
from the conventional book value method. The conventional approach relies on the historical book value of the
assets and liabilities as against the valuation of the assets and liabilities at their fair market value in this method.
The approach begins with valuation of all the assets of the firm. Fixed assets constitute a substantial portion of the asset side of the balance sheet in capital-intensive companies. Land is valued at its current market price. Buildings are normally valued at replacement cost; however, appropriate allowances are to be made for depreciation and deterioration in their condition. Similarly, plant & machinery, capital equipment, furniture, fixtures, etc. are to be valued at replacement cost net of depreciation and allowances for deterioration in condition.
An alternative method of valuing plant & machinery involves estimation of the prevailing market price of similar
used (second-hand) machinery and adding the cost of transportation and erection. The other major block on the
asset side of the balance sheet is current assets. The principal components of current assets are inventory,
debtors and cash. The inventory is valued depending upon its nature; the raw materials are to be valued at the
rates of the latest orders; the finished goods at the current realizable sale value after deducting provisions for
packing, transportation, selling costs, etc. The work-in-process can be valued either based on the cost i.e. cost
of materials plus processing costs incurred or based on the sales price i.e. sale price of the finished product
less cost incurred to convert the work-in-process into sales. Debtors are generally valued at their book value.
However, allowances should be made for any doubtful debts. Valuation of cash (including balances with bank)
does not need any great expertise. Miscellaneous current assets like income accrued but not due, prepaid
expenses, deposits made etc. are to be taken at their book value. Non-operating assets like investments,
surplus land, staff quarters, etc. are generally valued at their fair market value.
The valuation of intangible assets like brands, goodwill, patents, trademarks & copyrights, distribution channel,
etc. is a controversial area of valuation. Several major companies (consumer goods companies in particular) believe that brands are their most valuable assets. The idea of intangibles as financial assets emerged in the mid-eighties. As
intangibles have significant financial value, their absence from the valuation distorts the true financial position of
a company. Hence in order to ensure that the valuation of a company is reflective of its true intrinsic worth it has
become necessary for companies to determine the values of their brands.
In the late eighties, the Australian group Goodman Fielder Wattie (GFW) mounted a hostile bid on a British company, Ranks Hovis McDougall (RHM). RHM issued a defense document that mentioned that GFW's bid significantly undervalued RHM's true worth, since it did not take into account the company's strong brands. It
said "These valuable assets are not included in the balance sheet, but they have helped RHM build profits in
the past and provide a sound base for future growth". RHM engaged the services of a professional consultancy
firm to do a brand valuation. Viewing brands as assets, the consultants valued the business at 900 million
Pounds, significantly higher than GFW's bid of 600 million Pounds. Once this information was published, it was clear that GFW's bid undervalued the business, and the bid finally drifted away.
However, there is a large element of subjectivity in the process of valuation of intangibles. The two popular
methods of valuing intangibles are given below.
Earnings Valuation Method: This method of valuation is widely accepted in most markets around the world.
The value of an intangible like any other asset is equal to the present value of the future earnings attributable to
it. This is a two-stage process involving (i) projecting the future earnings attributable to the intangible asset, and (ii) applying an appropriate multiplier to capitalize those earnings.
The main drawback of this approach is that the future projections of the earnings may be optimistic. Further the
process of determining the multiplier is highly subjective. Due care has to be taken for the above factors, failing
which the intangible asset may be overvalued. Unscrupulous companies may possibly overvalue the intangibles
and use brand values as a tool for window dressing.
Cost Method: This method involves stating the value of the intangible asset at its cost to the company. This is
relatively easy when the intangible asset is acquired. The money paid to buy the brands can be directly stated.
(For e.g. Coca Cola paid Rs.170 cr to acquire the soft drinks brands of Parle). It is more difficult to value the
brand when the intangible asset has been developed in-house by the company. The methodology involves
determining the cost incurred in developing the intangible asset. The process of identification of the costs incurred is characterized by a great degree of subjectivity. This may have a significant impact on the final
valuation.
Valuation of Liabilities
The valuation of liabilities is relatively simple. It must be noted that share capital, reserves and surpluses are
not included in the valuation. Only liabilities owed to outsiders are to be considered. All long-term debt like
loans, bonds, etc. are to be valued at their present value using the standard bond valuation model. This
involves computing the present value of the debt servicing (both principal and interest payments) by applying an
appropriate discount rate. Current liabilities include amount due to creditors, short-term borrowings, provision
for taxes, accrued expenses, advance payment received, etc. Normally such current liabilities and provisions
are taken at their book value.
Valuation of the Firm
The ownership value of a firm is the difference between the value of the assets (both tangible and intangible)
and the value of the liabilities. Normally no premium is added for control as assets and liabilities are taken at
their economic values. On the other hand, a discount may be necessary to factor in the marketability element.
The market for some of the assets may be illiquid, or they may fetch a slightly lower price if the buyer does not perceive as much value in the asset for his business. Hence a discount factor may be applied.
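A minimal Python sketch of the adjusted book value computation, with hypothetical asset and liability values and an assumed marketability discount:

def adjusted_book_value(asset_values, liability_values, marketability_discount=0.0):
    """Ownership value = fair market value of assets (tangible and intangible)
    less value of outside liabilities, optionally reduced by a marketability discount."""
    gross = sum(asset_values) - sum(liability_values)
    return gross * (1 - marketability_discount)

# Hypothetical figures (Rs. crore): assets at fair value, outside liabilities,
# and a 10% discount for limited marketability of some assets
assets = [120.0, 45.0, 30.0, 15.0]      # land, plant, current assets, brands
liabilities = [60.0, 25.0]              # long-term debt, current liabilities
print(adjusted_book_value(assets, liabilities, 0.10))  # 112.5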
A significant portion of the current research in the area of valuations is devoted to the application of option
theory to value firms. This is leading to the emergence of a new model to value firms or businesses called the contingent claims model. A contingent claim (option) is an asset that pays off under certain contingencies; a call option has value if the value of the underlying variable exceeds a predetermined amount, while a put option has value if that value is less than the predetermined amount.
The contingent claims model is based on the premise that equity can be viewed as call option on the firm. The
equity in a firm is a residual claim; the equity holders can lay their claims to the cash flows (in the form of
dividends) of the firm only after all the claims of other stakeholders (creditors, debt providers, preference
shareholders, etc.) have been satisfied. Similarly, if the firm is liquidated, the equity holders receive the entire
balance portion after all the financial claims on the firm have been paid off. The principle of limited liability
provides immunity to the equity holders if the value of the firm is less than the outstanding financial claims. In
other words, the maximum loss to the equity holders cannot exceed the amount of their investment. Thus the pay-off to the equity holders in the event of liquidation is

V - C if V > C
or zero if V < C

where,
V is the value of the firm and C is the value of the outstanding claims (debt and other fixed claims) on the firm.
Thus an analogy can be drawn between equity and options wherein the equity shares are treated as call
options on the value of the underlying firm, the value of the claims can be taken as the exercise price (strike
price), the maturity of the claims measuring the life of the option and the original investment representing the
option premium. The principle of limited liability eliminates the downside risk for the equity holders. The
contingent claims model values the firm by valuing its equity using the option-pricing models.
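Under the assumptions of the Black-Scholes Model, this valuation of equity as a call option on the firm can be sketched in Python; the firm value, claims, maturity and volatility figures below are hypothetical:

from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def equity_as_call(firm_value, face_value_of_claims, maturity, risk_free_rate, sigma):
    """Value equity as a European call on the value of the firm (Black-Scholes),
    with the face value of the outstanding claims as the exercise price."""
    d1 = (log(firm_value / face_value_of_claims)
          + (risk_free_rate + 0.5 * sigma ** 2) * maturity) / (sigma * sqrt(maturity))
    d2 = d1 - sigma * sqrt(maturity)
    return (firm_value * norm_cdf(d1)
            - face_value_of_claims * exp(-risk_free_rate * maturity) * norm_cdf(d2))

# Hypothetical firm: value 100, claims of 80 maturing in 5 years,
# 8% risk-free rate, 30% volatility of firm value
print(round(equity_as_call(100.0, 80.0, 5.0, 0.08, 0.30), 2))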
While the contingent claim model stands the test of conceptual soundness, the practical application of this
model in the real world has not been very significant. Some of the issues which need to be further addressed
before this model gets acceptability in the corporate world are as follows:
• One of the basic assumptions of the Black-Scholes Model is that the variance in the price of the underlying
asset is known and remains constant over the life of the option. In case of the contingent claim model
where the underlying variable is the value of the firm, it is impractical to determine the variance in the first
place. Secondly, even if such variance were to be measured, it is unlikely that it will remain constant over
extended periods.
• Option pricing theory, as in both the Binomial Model and the Black-Scholes Model, is built on the premise
that a replicating portfolio can be created using the underlying asset and riskless borrowing and lending.
This is a reasonably defensible assumption when the underlying variable is a security, commodity or a
currency. However, the business (not the equity share) which is being valued is not traded in the market
and the probability of building a replicating portfolio appears remote. Hence no possibility of arbitrage
exists, which is the basis of the option pricing models.
• The Black-Scholes Model is built on the assumption that the underlying asset's price process is continuous.
The validity of this assumption to the value of the firm is doubtful. Experts opine that a possible solution is to
use an option pricing model that explicitly allows for price jumps (discrete instead of continuous price
process). However, the inputs required for such models are again difficult to determine. Jump process
models are based on the Poisson distribution and require inputs on the probability of the price jumps, their average magnitude and their variance.
VALUATION: SOME MISCONCEPTIONS
1. Valuation Models give an exact estimate of value: The appropriateness of the valuation depends
upon the quality of the data, correctness of the assumptions and the application of the right valuation
model. Most of the data pertaining to projected cash flows is futuristic and is thus characterized by
uncertainty. This makes valuation an inexact and imprecise exercise. The valuation process gives us at
best a value anchor. A value range may be determined based on the margin of error which in turn is a
function of the degree of uncertainty of the cash flows.
2. Valuation is a totally objective exercise: The models used in valuation may be quantitative but the inputs
leave plenty of room for subjective judgements. The opinions and the biases of the valuer get reflected in the valuation. The estimation of the future cash flows depends upon the aggressiveness or the conservatism of the assumptions made. Further, there is also a certain degree of subjectivity in the determination of discounting rates. It is generally observed that, in the case of takeovers, the value of the target firm as estimated by its investment bankers is higher than the valuation estimates made by the investment bankers of the predator company.
3. A valuation, once done, remains valid over time: Any valuation exercise is time specific and is reflective of the information available to the valuer at that
specific point of time. As time passes by, the flow of new information begins. The information may be firm
specific, industry specific or pertain to the market as a whole. Thus the valuation done in the past becomes
increasingly obsolete and the same needs to be updated to reflect the current information.
4. The value estimated is important; the process does not matter: The quality of any valuation depends on the
robustness of the valuation process. Hence the focus should not be exclusively on the outcome in the form
of a definitive value figure. The valuation process is informative and provides valuable insights about the
firm. The process reveals a great deal about the determinants of value and the user should make an effort
to understand the valuation process.
5. The market is always wrong: The benchmark for comparison of a valuation exercise is the market
valuation of the firm. When the value estimated is significantly different from the market valuation, there can
be two conclusions: the valuer is right and the market has substantially undervalued/ overvalued the firm or
that the market is right and the valuation is incorrect. It is observed that very often, the instantaneous
conclusion drawn is that the market is wrong. In such cases it is prudent to give the benefit of doubt to the
market as the collective wisdom of the market as a whole is generally superior to the judgement of the
valuer. However, the valuation should be accepted over the market's verdict only if the valuer is able to convincingly demonstrate the soundness of his analysis.
DIVERSIFICATION STRATEGY
All other things being constant, an ideal strategy is to move into a diversification program from a base or core of
existing capabilities or organizational strengths. The firm should be clear on both its strengths and weaknesses
and should clearly define the specific new capabilities it is seeking to obtain. If the firm does not possess a
sufficient breadth of capability to use as a basis for moving into other areas, an alternative strategy may be
employed.
In recent years, the nature of firms and the boundaries of industries have become much more dynamic and
flexible. This has to be kept in mind even when considering the carryover of capabilities in pure conglomerate
mergers. In this dynamic, changing world, managements must relate to missions defined in terms of customer
needs, wants, or problems to be solved. Another important dimension of the concept of industries is the range of capabilities.
The technological capabilities include all processes from the basic research, product design and development
to interrelated manufacturing methods and obtaining feedback from consumers. Managerial capabilities include
competence in the generic management functions of planning, organizing, directing, and controlling as well as
specific management functions of research, marketing, finance and personnel.
Characteristics of a Successful Diversification Strategy
Any diversification strategy should be built on the foundation of existing competencies. This facilitates entry into
new markets. A company can have multiple capabilities, but a capability qualifies as a core competence only if it
fulfills certain criteria.
The markets earmarked for expansion should be growth markets with low gestation periods. A small company
cannot afford to operate in markets where the gestation period is high. The telecom sector, for instance, was
opened up in the year 1994. The private operators in most circles are yet to make profits. On the other hand,
the software boom saw many companies diversify into the Info Tech arena with substantial rewards. The new
markets should also offer room for companies to operate in a niche.
NEW CAPABILITIES
Though the strategy is based on existing capabilities, companies should acquire new ones to augment the
existing strengths. They could make an effort to acquire new technologies or distribution channels, or to add
marketing muscle.
Implementation of the strategy will require strong and aggressive management. The owner/manager may have
to take swift, decisive measures during the diversification effort. These could be decisions related to investment
or downsizing. These decisions may be risky and face resistance from employees. Strong and visionary
leadership is required to ensure successful implementation.
A skilled and autonomous workforce is a must for the diversification strategy to succeed. Employees are more
productive if given autonomy.
Companies that can maintain a lean management structure can avoid high overhead costs. The success of
the diversification ultimately hinges upon the tenacity of the personnel to see it through.
Diversifying to new markets can be a risky proposition. The risk can be minimized if companies can identify
their strengths and evaluate market opportunities accordingly. The key for small companies is to identify
markets where their capabilities can be profitably leveraged to create customer value.
The changing environments and the new forms of competition have created new opportunities and threats for
business firms. Firms must adjust to new forces of competition from all directions. They have been forced to
adopt many forms of restructuring activity. M&As will be considered first, but it should be understood that they
represent only one set of the many adjustment and restructuring responses.
Internal growth and mergers are not mutually exclusive activities. They are mutually supportive and reinforcing.
Successful growing firms use many forms of M&As and restructuring based on opportunities and limitations.
The characteristics and competitive structure of an industry will influence the strategies employed.
Growth and diversification can be achieved both internally and externally. Internal development is more
advantageous for some activities, while for others external diversification is more beneficial.
The factors which support the external growth and diversification through mergers and acquisitions include the
following:
• Faster achievement of goals and objectives through an external acquisition.
• Greater cost of building an organization internally than the cost of an acquisition.
• Attainment of a feasible market share with less risk, in a shorter time and at a lower cost.
Internal development is favored when the above given advantages are minimal. When the firms which are
available for acquisition do not provide attractive opportunities for achieving the goals that have been set,
internal development is more feasible from an economic perspective.
GE Capital is the product of dozens of acquisitions that have been blended to form one of the world's largest
financial services organizations. GE Capital was founded in 1933 as a subsidiary of the General Electric
Company to provide consumers with credit to purchase GE appliances. Since then, the company has grown to
become a major financial services conglomerate with 27 separate businesses and more than 50,000
employees worldwide. These businesses range from private label credit card services to commercial real estate
financing to rail car and aircraft leasing. More than half of these businesses have become part of GE Capital
through acquisitions.
The acquisitions come in different forms and shapes. Sometimes, the acquisition is a portfolio or asset
purchase that adds volume to a particular business without adding people. Sometimes, it is a consolidating
acquisition in which a company is purchased and then consolidated into an existing GE Capital business, as
happened when GE Capital Vendor Financial Services bought Chase Manhattan Bank's leasing business.
Sometimes the acquisition moves into new territory and generates an entirely new GE Capital business, as when
GE Capital bought Travelers Corporation's business. Sometimes the acquisition is a hybrid, parts of which fit
into one or more existing businesses while the other parts stand alone or become joint ventures.
Growth through mergers and diversification represents a very good alternative to be taken into account in
business planning. The external growth contributes to opportunities for effective alignment to the firm's
changing environments. The primary reason for acquiring or merging with another business is to produce
improved cash flow or to reduce the risk faster or at a lower cost than achieving the same goal internally. Thus,
the goal of any acquisition is to create a strategic advantage by paying a price for the target that is lower than
the total resources required for internal development of a similar strategic position.
Another reason is the expectation on the part of the diversifying or acquiring firm that it has or will have excess
capacity of general managerial capabilities in relation to its existing product market activities. Moreover, there is
an expectation that in the process of interacting with the generic management activities, the diversifying firms
will develop industry specific managerial experience and firm specific organization capital over time.
Important changes in the management technology include issues like development in theory and practice of
planning, increased role of management functions in the firm's operations, the development and use of
formalized decision models, increased recognition of quality and continuity of the firm's management
organization as an important economic variable etc. These factors have made it beneficial to spread these
abilities over a greater number of activities. However, these management capabilities are not evenly
distributed throughout industries, giving an opportunity for firms to extend their capabilities to other firms and to
new areas in order to increase the returns on investments in both management and physical assets.
The opportunities for diversification have increased along with the demands to change. Technological expertise
is spread unequally among various business firms and industries. The prospects of economic profits
from the supply of advanced technological capabilities to industries and firms which need them provide an
increased incentive to diversify.
Large Fixed Costs and Staff Services
Fixed costs of business firms have increased due to the need to maintain an effective competitive position in the
world economy and the resultant larger management capabilities. Investments in managerial organization have
tended to yield greater economies of scale than investments in physical assets. Hence, the economies
derived from spreading the fixed costs for managerial staff functions over a wide range of activities have
increased.
The trends in the equity markets have strengthened the influence of the above mentioned factors in
encouraging external diversification. In the equity markets, stocks which had a potential for
growth in earnings and dividends were highly valued. Hence, growth stocks had higher P/E ratios. This
increased interest in growth stimulated mergers in various ways. The search for product markets with
growth opportunities intensified.
In a rapidly changing world, companies are facing unprecedented turmoil in global markets. Severe competition,
rapid technological change, and rising stock market volatility have increased the burden on managers to deliver
superior performance and value for their shareholders.
In response to these pressures, an increasing number of companies around the world are dramatically
restructuring their assets, operations, and contractual relationships with shareholders, creditors, and other
financial stakeholders. Corporate restructuring has facilitated thousands of organizations to re-establish their
competitive advantage and respond more quickly and effectively to new opportunities and unexpected
challenges. Corporate restructuring has had an equally profound impact on the many more thousands of
suppliers, customers, and competitors that do business with restructured firms.
Generally, most of the corporate growth occurs by internal expansion, when a firm's existing divisions grow
through normal capital budgeting activities. Nevertheless, if the goals are easily achieved within the firm, it may
mean that the goals are too small. Growth opportunities come in a variety of other forms, and a great deal of
energy and resources may be wasted if an entrepreneur does not wait long enough to identify the various
dynamics which are already in place. The most remarkable examples of growth, and often the largest increases
in stock prices, are a result of mergers and acquisitions. M&As offer tremendous opportunities for companies to
grow and add value to shareholders' wealth. M&A is a strategy for growth and expansion. M&As are expected
to increase value and efficiency and thereby increase shareholders' value. M&A is a generic term used to
represent different types of corporate restructuring exercises.
Business firms in their pursuit of growth, engage in a broad range of restructuring activities. Actions taken to
expand or contract a firm's basic operations or fundamentally change its asset or financial structure are referred
to as corporate restructuring activities. Corporate restructuring is a broad umbrella that covers many things.
One of them is the merger or takeover. From the viewpoint of the buyer, M&A represents expansion, and from the
perspective of the seller it represents a change in ownership that may or may not be voluntary. In addition to
mergers, takeovers, and contests for corporate control; there are other types of corporate restructuring like
divestitures, rearrangements, and ownership reformulation.
These corporate restructuring activities can be divided into two broad categories - operational and financial.
Operational restructuring refers to outright or partial purchase or sale of companies or product lines or
downsizing by closing unprofitable and non-strategic facilities. Financial restructuring refers to the actions
taken by the firm to change its total debt and equity structure.
An overview of all these restructuring activities is shown in a summarized form in Table 1. The grouping is
somewhat arbitrary but indicates the direction of the emphasis in these various practices.
Table 1: Forms of Corporate Restructuring
Expansion: Mergers and acquisitions, tender offers, asset acquisitions, joint ventures.
Contraction: Spin-offs, split-offs, divestitures, equity carve-outs, asset sales.
Corporate Control: Anti-takeover defenses, share repurchases, exchange offers, proxy contests.
Changes in Ownership Structures: Leveraged buyouts, junk bonds, going private, ESOPs and MLPs.
Each type of activity mentioned in the above Table is briefly explained below:
Expansion
Expansion is a form of restructuring, which results in an increase in the size of the firm. It can take place in the
form of a merger, acquisition, tender offer, asset acquisition or a joint venture.
MERGER
Merger is defined as a combination of two or more companies into a single company. A merger can take place
either as an amalgamation or absorption.
Amalgamation
This type of merger involves the fusion of two or more companies, after which the combining companies lose
their individual identity and a new company, hitherto not in existence, comes into being. This form is generally
applied to combinations of firms of equal size.
Example: The merger of Brooke Bond India Ltd with Lipton India Ltd resulted in the formation of a new
company Brooke Bond Lipton India Ltd.
Absorption
This type of merger involves fusion of a small company with a large company. After the merger the smaller
company ceases to exist.
Example: The merger of Global Trust Bank with Oriental Bank of Commerce. After the merger, GTB ceased to
exist while Oriental Bank of Commerce expanded and continued.
TENDER OFFER
Tender offer involves making a public offer for acquiring the shares of the target company with a view to acquire
management control in that company.
Example: (1) Flextronics International giving an open market offer at Rs.548 for 20% of paid-up capital in
Hughes Software Systems.
(2) AstraZeneca Pharmaceuticals AB, a Swedish firm, announced an open offer to acquire an 8.4% stake in
AstraZeneca Pharma India at a floor price of Rs.825 per share.
ASSET ACQUISITION
Asset acquisitions involve buying the assets of another company. These assets may be tangible assets like a
manufacturing unit or intangible assets like brands. In such acquisitions, the acquirer company can limit its
acquisitions to those parts of the firm that coincide with the acquirer's needs.
Example: The acquisition of the cement division of Tata Steel by Lafarge of France. Lafarge acquired only the
1.7 million tonne cement plant and its related assets from Tata Steel.
The asset being purchased may also be intangible in nature. For example, Coca-Cola paid Rs. 170 crore to
Parle to acquire its soft drinks brands like Thums Up, Limca, Gold Spot, etc.
The business world has changed drastically. Markets, instruments, financing and relationships have
transformed to become exceedingly complex. The economic environment has shifted dramatically and in order
to prosper or even to survive in such an environment, the strategy formulation has become very important. It is
no longer possible to take a simple, idealistic view of what should be done and how it should be done.
The pursuit of growth and the need to access new markets are driving companies all over the world to
undertake mergers and acquisitions. This phenomenon is becoming part of the strategic planning of many
corporate bodies seeking not only to exploit existing core competencies but also to build new ones for the
future. While the motives or influences leading to mergers are multiple, varied and complex, the potential for
concentration of economic power is inherent in the phenomenon of mergers.
When two businesses combine their activities, the combination may take the form of acquisition (takeover) or a
merger (amalgamation). The distinction between a merger and an acquisition is not very clear. The methods
used for mergers are often the same as the methods used to make takeovers. However, theoretically there can
be a subtle difference between the two, as can be interpreted from the following definitions:
Acquisition or Takeover: The purchase of a controlling interest by a company in the voting share capital of
another company, usually by buying the majority of the voting shares is called an acquisition or a takeover. Idea
Cellular acquiring Escotel is an example of an acquisition.
Merger: A business combination that results in the creation of a new reporting entity formed from the combining
parties, in which the shareholders of the combining entities come together in a partnership for the mutual
sharing of the risks and the benefits of the combined entity, and in which no party to the combination obtains
control over the other. An example of a merger is Daimler-Benz and Chrysler.
The main reason for any business organization to combine is to increase the shareholder wealth. This increase
usually comes from the effects of synergy. In this chapter we shall discuss in detail the various types of mergers
and the process undergone by firms to accomplish a merger or an acquisition.
TYPES OF MERGERS
Merger or acquisition depends upon the purpose for which the target company is acquired. A company will seek
to acquire another company only when it has arrived at its own developmental plan to expand its operations
after a thorough analysis of its own internal strengths. It has to aim at a suitable combination where it could have
opportunities to supplement its funds, secure additional financial facilities, eliminate competition and strengthen
its market position. Based on the reason why firms combine, mergers can be divided into three categories: (i)
Horizontal mergers (ii) Vertical mergers, and (iii) Conglomerate mergers.
Horizontal Merger
A horizontal merger involves a merger between two firms operating and competing in the same kind of business
activity. The main purpose of such mergers is to obtain economies of scale in production. The economies of
scale are obtained through the elimination of duplication of facilities and operations and broadening the product line,
reduction in investment in working capital, elimination of competition in a product, reduction in advertising costs,
increase in market share, exercise of better control on market, etc.
Horizontal mergers result in decrease in the number of firms in an industry and hence such type of mergers
make it easier for the industry members to join together for monopoly profits. Horizontal mergers also have a
potential to create monopoly power on the part of the combined firm enabling it to engage in anticompetitive
practices. Hence, in many countries, restrictive business practices legislation enforces strict regulations on the
integration of competitors. Horizontal mergers of even small enterprises may create conditions triggering
concentration of economic power and oligopoly.
The alliance between Birla, AT&T and Tata (BATATA) in Idea Cellular Ltd., is an example of a horizontal
merger.
Vertical Mergers
A vertical merger involves merger between firms that are in different stages of production or value chain. They
are combination of companies that usually have buyer-seller relationships. A company involved in a vertical
merger usually seeks to merge with another company or would like to takeover another company mainly to
expand its operations by backward or forward integration. The acquiring company through merger of another
unit attempts to reduce inventories of raw material and finished goods, implements its production plans as per
objectives and economizes on working capital investments. In other words, in vertical combination, the merging
company would be either a supplier or a buyer using its product as an intermediary material for final production.
Firms integrate vertically between various stages due to reasons like technological economies, elimination of
transaction costs, improved planning for inventory and production, reconciliation of divergent interests of parties
to a transaction, etc. Anticompetitive effects have also been observed as both the motivation and the result of
these mergers.
Examples: Nirma's bid for Gujarat Heavy Chemical (backward integration) or Hindalco bidding for Pennar
Aluminium (forward integration).
Conglomerate Mergers
Conglomerate mergers involve merger between firms engaged in unrelated types of business activity. The
basic purpose of such combination is utilization of financial resources. Such type of merger enhances the
overall stability of the acquirer company and creates balance in the company's total portfolio of diverse products
and production processes and thereby reduces the risk of instability in the firm's cash flows.
Conglomerate mergers can be distinguished into three types: product extension mergers, geographic market
extension mergers and pure conglomerate mergers.
Product extension mergers are mergers between firms in related business activities and may also be called
concentric mergers. These mergers broaden the product lines of the firms.
Geographic market extension mergers involve a merger between two firms operating in two different
geographic areas.
Pure conglomerate mergers involve merger between two firms with unrelated business activities. They do not
come under product extension or market extension mergers. Within the broader category of conglomerate
mergers two types of conglomerate firms can be distinguished.
Financial Conglomerates: Financial conglomerates provide a flow of funds to each segment of their operations,
exercise control and are the final financial risk takers. They undertake strategic planning but do not participate
in operating decisions.
Managerial Conglomerates: Managerial conglomerates transmit the attributes of financial conglomerates still
further. They not only assume financial responsibility and control, but also play a role in operating decisions and
provide staff expertise and staff services to the operating entities. By providing managerial guidance and
interactions on decisions, managerial conglomerates increase the potential for improving performance.
The acquisition process can be divided into a planning stage and an implementation stage. The planning stage
consists of the development of the business and the acquisition plans. The implementation stage consists of the
search, screening, contacting the target, negotiation, integration and the evaluation activities. In short, the
process of acquisition proceeds through the steps discussed below.
As discussed earlier, a merger or an acquisition decision is a strategic choice. The acquisition strategy should
fit the company's strategic goals of increasing net cash flows and reducing risk.
A business plan communicates a mission or vision for the firm and a strategy for achieving that mission. A well-
structured business plan consists of the following activities:
i. Determining where to compete i.e., the industry or the market in which the firm desires to compete.
ii. Determining how to compete. An external industry or the market analysis can be made to determine how
the firm can most effectively compete in its chosen market(s).
iii. Self-assessment of the firm by conducting an internal analysis of the firm's strengths and weaknesses
relative to the competition.
iv. Defining the mission statement by summarizing where and how the firm has chosen to compete and the
basic operating beliefs of the management.
v. Setting objectives by developing quantitative measures of performance.
vi. Selecting the strategy most likely to achieve the objectives within a reasonable time period subject to
constraints identified in the self-assessment.
The strategic planning process identifies the company's competitive position and sets objectives to exploit its
relative strengths while minimizing the effects of its weaknesses. The firm's Mergers and Acquisitions strategy
should complement this process, targeting only those industries and companies that improve the acquirer's
strengths or lessen the weaknesses.
After a proper analysis of the various available options if it is determined that a merger or an acquisition process
is appropriate to implement the business strategy then an acquisition plan is prepared. This plan focuses on the
tactical rather than the strategic issues. The acquisition plan defines the key management objectives for the
takeover, resource constraints, appropriate tactics for implementing the proposed transactions and the
schedule or a time table for completing the acquisition. It furnishes a proper guidance to those responsible for
successfully completing the transaction by providing valuable inputs to all the later phases of the acquisition
process.
MANAGEMENT OBJECTIVES
Management objectives are both financial and non-financial. The financial objectives include a minimum rate of
return or operating profit, revenue and cash flow targets to be achieved within a specified time period. Non-
financial objectives address the motivations for making the acquisition that support the achievement of the
financial returns predetermined in the business plan.
RESOURCE ASSESSMENT
The assessment of the resources involves the determination of the maximum amount of resources available to
assign to the merger or acquisition. This information is useful in the selection of the right candidate for the
merger or the acquisition. The resources available generally include the financial resources like the internal
cash flows in excess of the normal operating requirements plus funds from equity and the debt markets. If the
target is identified, resources should also include funds which the combined firm can raise by issuing equity or
by increasing leverage. It is the management's perception about the likely risks that it would be exposed to by
virtue of acquisition that determines the financial implications. These risks may be:
Operating Risk
It refers to the ability of the acquirer to manage the acquired company. The risk is higher in conglomerate
mergers. The limited understanding of the business operations of the newly acquired firm may negatively
impact the integration effort and the ongoing management of the combined companies.
Financial Risk
It refers to the acquirer's willingness and the ability to leverage a transaction as well as the willingness of
shareholders to accept near-term earnings per share dilution. The acquiring company tries to maintain certain
level of financial ratios such as the debt to equity and interest coverage ratio to retain a specific credit rating.
The incremental debt capacity of the firm can be estimated by comparing the relevant financial ratios to those of
comparable firms in the industry. The difference represents the amount of money that the firm can borrow
without making the current credit rating vulnerable.
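A minimal sketch of this peer-comparison idea is shown below: it estimates how much additional debt the acquirer could take on before its debt-to-equity ratio reaches the median of comparable firms. The function name, balance-sheet figures and peer ratios are all hypothetical assumptions.

```python
# Hypothetical sketch: estimating incremental debt capacity by comparing
# the acquirer's debt-to-equity ratio with the median of comparable firms.
# All balance-sheet figures and peer ratios are illustrative assumptions.
from statistics import median

def incremental_debt_capacity(debt, equity, peer_debt_to_equity):
    """Extra debt the firm could add before reaching the peer-median
    debt-to-equity ratio (zero if it is already above that level)."""
    target_ratio = median(peer_debt_to_equity)
    headroom = target_ratio * equity - debt
    return max(headroom, 0.0)

peers = [0.8, 1.1, 0.9, 1.3, 1.0]          # D/E ratios of comparable firms
print(incremental_debt_capacity(debt=400.0, equity=600.0,
                                peer_debt_to_equity=peers))
# 1.0 * 600 - 400 = 200 of additional borrowing before reaching the peer median
```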
Overpayment Risk
It refers to the possibility of dilution in the earnings per share or reduction in the growth of the firm because of
paying more than the economic value of the acquired firm.
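The hypothetical sketch below illustrates overpayment risk in its most visible form, near-term EPS dilution when the purchase is financed with newly issued shares: the higher the price paid, the lower the combined earnings per share. All earnings, share counts and prices are illustrative assumptions, not data from the text.

```python
# Hypothetical sketch of overpayment risk: EPS dilution when the acquirer
# pays for the target with newly issued shares. All figures are
# illustrative assumptions.
def post_deal_eps(acq_earnings, acq_shares, tgt_earnings,
                  price_paid, acq_share_price):
    """Combined EPS after issuing new shares worth `price_paid`."""
    new_shares = price_paid / acq_share_price
    return (acq_earnings + tgt_earnings) / (acq_shares + new_shares)

pre_deal_eps = 50.0 / 10.0                      # acquirer: earnings 50, shares 10
fair_price = post_deal_eps(50.0, 10.0, 12.0, price_paid=100.0, acq_share_price=20.0)
overpaid   = post_deal_eps(50.0, 10.0, 12.0, price_paid=160.0, acq_share_price=20.0)

print(pre_deal_eps, round(fair_price, 2), round(overpaid, 2))
# 5.0 4.13 3.44 -> the higher the price paid, the greater the EPS dilution
```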
TIME TABLE
A time table or a schedule that recognizes all the key events that should take place in the acquisition process is
the final component of a properly structured acquisition plan. It should be both realistic and aggressive to
motivate all the participants in the process to work as fast as possible to achieve the management objectives
established in the acquisition plan. The schedule should also include the names of the individuals who will be
responsible for ensuring that the set objectives are achieved.
After the firm has developed a viable business plan that requires an acquisition to realize the firm's strategic
direction and an acquisition plan the search for the right candidate for acquisition begins. The search for a
potential acquisition candidate generally takes place in two stages.
The first stage of the search process involves establishing a primary screening process. The primary criteria
based on which the search process is based include factors like the industry, size of the transaction and the
geographic location. The size of the transaction is best defined in terms of the maximum purchase price a firm
is willing to pay. It can be expressed as the maximum purchase price to earnings, book, cash flow or revenue
ratio or a maximum purchase price stated in terms of rupees.
The second stage involves developing the search strategy. Such strategies generally involve using
computerized database and directory services to identify the prospective candidates. Law, banking and
accounting firms also form valuable sources from which information can be obtained. Investment banks,
brokers, and leveraged buyout firms are also useful sources although they are likely to require an advisory fee.
The screening process starts with the reduction of the initial list of potential candidates identified by using the
primary criteria such as the size and the type of the industry. In addition to the primary criteria employed,
secondary selection criteria include a specific market segment within the industry or a specific product line
within the market segment. Other measures like the firm's profitability, degree of leverage and the market share
are also used in the screening process.
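A minimal sketch of such a two-stage screen is shown below, assuming a hypothetical candidate list: a primary filter on industry and maximum purchase price, followed by a secondary filter on profitability, leverage and market share. The candidate names, data and thresholds are purely illustrative assumptions.

```python
# Hypothetical sketch of the two-stage screening process described above:
# a primary screen on industry and deal size, then a secondary screen on
# profitability, leverage and market share. All candidate data are
# illustrative assumptions.
candidates = [
    {"name": "Alpha Ltd", "industry": "software", "price": 420, "margin": 0.18,
     "debt_to_equity": 0.6, "market_share": 0.12},
    {"name": "Beta Ltd",  "industry": "software", "price": 900, "margin": 0.22,
     "debt_to_equity": 0.4, "market_share": 0.20},
    {"name": "Gamma Ltd", "industry": "cement",   "price": 300, "margin": 0.10,
     "debt_to_equity": 1.5, "market_share": 0.05},
]

def primary_screen(c, industry, max_price):
    return c["industry"] == industry and c["price"] <= max_price

def secondary_screen(c, min_margin, max_leverage, min_share):
    return (c["margin"] >= min_margin
            and c["debt_to_equity"] <= max_leverage
            and c["market_share"] >= min_share)

shortlist = [c["name"] for c in candidates
             if primary_screen(c, industry="software", max_price=500)
             and secondary_screen(c, min_margin=0.15, max_leverage=1.0, min_share=0.10)]
print(shortlist)   # ['Alpha Ltd']
```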
First Contact
The contact phase of the process involves meeting the acquisition candidate and putting forward the proposal
of acquisition. It could run through several distinctively identifiable phases that need a little more elaboration.
The approach employed for contacting the target depends on the size of the company and whether it is publicly
or privately held. For small companies in which the buyer has no direct contacts, a letter expressing interest in a
joint venture or marketing alliance is enough. Thorough preparation before the first contact is essential, for that
alone enables the acquirer to identify the target's strengths and weaknesses and to explain the benefits of the
proposal convincingly. A face to face meeting is then arranged when the target is
willing to entertain the idea of an acquisition. Contact is made through an intermediary for a medium sized
company. The intermediaries might include members of the acquirer's board of directors, accounting firm,
lender or an investment banker. For a large sized company contact is made through an intermediary but it is
important that the contact is made with the highest level of the management of the target firm.
DISCUSSING VALUE
Valuation of the target company is the most critical part of a deal. A conservative valuation can result in
collapse of the deal while an aggressive valuation may create perpetual problems for the acquiring company.
The commonly used valuation methods are:
i. Discounted Cash Flow Method: In this method, valuation represents the present value of the expected
stream of future cash flow discounted for time and risk. This is the most valid methodology from the
theoretical standpoint. However, it is very subjective due to the need to make several assumptions during
the computations.
ii. Comparable Companies Method: This method is based on the premise that companies in the same
industry provide benchmark for valuation. In this method, the target company is valued vis-a-vis its
competitors on several parameters.
iii. Book Value Method: This method attempts to discover the worth of the target company based on its Net
Asset Value.
iv. Market Value Method: This method is used to value listed companies. The stock market quotations provide
the basis to estimate the market capitalization of the company.
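The hypothetical sketch below illustrates the first two methods side by side: a discounted cash flow value built from projected free cash flows plus a terminal value, and a comparable-companies value built from a peer-group median P/E multiple. Every cash flow, rate and multiple is an illustrative assumption, not a figure from the text.

```python
# Hypothetical sketch of two of the valuation methods listed above:
# (i) discounted cash flow and (ii) comparable companies (peer multiples).
# Every cash flow, multiple and rate below is an illustrative assumption.
from statistics import median

def dcf_valuation(free_cash_flows, discount_rate, terminal_growth):
    """Present value of projected cash flows plus a Gordon-growth terminal value."""
    pv = sum(cf / (1 + discount_rate) ** t
             for t, cf in enumerate(free_cash_flows, start=1))
    terminal = (free_cash_flows[-1] * (1 + terminal_growth)
                / (discount_rate - terminal_growth))
    return pv + terminal / (1 + discount_rate) ** len(free_cash_flows)

def comparables_valuation(target_earnings, peer_pe_multiples):
    """Value the target at the median P/E multiple of its peer group."""
    return median(peer_pe_multiples) * target_earnings

fcf = [80.0, 90.0, 100.0, 110.0]
print(round(dcf_valuation(fcf, discount_rate=0.13, terminal_growth=0.04), 1))
print(comparables_valuation(target_earnings=70.0,
                            peer_pe_multiples=[11, 14, 12, 16, 13]))   # 910.0
```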
Some of the main reasons why firms are forced to divest are: efficiency gains and refocus, information effects,
wealth transfers, and tax reasons.
While mergers and acquisitions lead to synergy, divestitures can result in reverse synergy. A particular business
may be more valuable to another party because of the cash flows it can generate for them, and that party may be
willing to pay a higher price for the business than its present value to the current owner. Divestiture is also
undertaken to enable a company to make certain strategic changes.
Efficiency Gains and Refocus
The competitive advantage that a company has may change over time due to changing market conditions, and
as a result, a company may have to divest a particular business. In some cases, the past diversification
programs of a company may have lost value, making it necessary for the company to refocus its core
competencies. A divestiture helps a company to refocus on its core competencies.
Information Effects
The information that a divestiture conveys to investors is another reason for divestiture. If the information held
by management is not known to investors, the announcement of a divestiture can be seen as a change in
investment strategy or in operating efficiency. This may be taken in a positive sense and boost the share price.
However, if the divestiture announcement is perceived as the firm's attempt to dispose of a marketable
subsidiary to deal with adversities in other businesses, it will send a wrong signal to investors. Whether the
divestiture is seen as a good or a bad signal depends on the circumstances.
Wealth Transfers
Divestiture can result in the transfer of wealth from debtholders to stockholders. This transfer takes place when a
company divests a particular division and distributes the resulting proceeds of the sale among stockholders. As
a result of this transaction, the firm's debt is less likely to be repaid in full and will therefore have a lower value.
If the total value of the firm remains unchanged, its equity value is expected to rise.
Tax Reasons
As in the case of mergers, divestitures also provide a considerable tax advantage. When a company is losing
money and is unable to use a tax-loss carry forward, it is better to divest wholly or in part to realize a tax benefit.
When there is increased leverage due to restructuring, a firm can have a tax shield advantage due to interest
payments being tax deductible.
Divestiture: Definition
A divestiture is the sale of a portion of the firm to an outside party, generally resulting in a cash infusion to the
parent. Divestitures are generally the least complex of the exit restructuring activities to understand. Most of the sell-
offs are simply divestitures. The most common form of divestiture involves sale of a division of the parent
company to another firm. The process is a form of contraction for the selling company and a means of
expansion for the purchasing corporation.
SPIN OFFS
It is a transaction in which a company distributes to its own shareholders on a pro rata basis all of the shares it
owns in a subsidiary. Hence a spin-off results in the creation of a new public company with the same
proportional equity ownership as the parent company.
Spin off has emerged as a popular form of corporate downsizing in the nineties. A new legal entity is created to
take over the operations of a particular division or unit of the company. The shares of the new unit are
distributed on a pro rata basis among the existing shareholders. In other words, the shareholding in the new
company at the time of spin-off will reflect the shareholding pattern of the parent company. The shares of the
new company are listed and traded separately on the stock exchanges, thus providing an exit route for the
investors. Spin-off does not result in cash inflow to the parent company.
Spin-offs are often tax-free to the parent company and to the shareholders receiving stock in the spin-off. In
addition, a spin-off can be an effective method for minimizing the execution risk of a divestiture, whether due to
third-party negotiations or to market conditions. Spin-offs also have smaller underwriting discounts and fees
than transactions such as carve-outs. Moreover, the shareholders of the parent company receive a direct
benefit by obtaining the stock of the spun-off subsidiary, as opposed to the less direct benefits of the parent
company receiving the proceeds of a negotiated sale.
In the US spin-offs have become increasingly popular in the last decade, with firms seeking to divest a part of
their businesses. Most of these spin-offs involve a pro rata distribution of shares in a wholly owned subsidiary to
the shareholders of the firm, in the form of a dividend. After the distribution, both the parent and the subsidiary
initially share the same shareholder base, even though the operations and management of the two entities are
now separate and independent of each other. Another important feature of a spin-off that sets it apart from
other types of corporate divestitures is that it does not provide the parent with any cash infusion.
Recently, there has been a noticeable trend towards two-step spin off transactions, where parent firms first sell
up to 20% of the shares in the subsidiary in an initial public offering, followed shortly by a distribution of the
remaining shares to its shareholders. The 20% limit is usually observed in the first step in order to preserve the
tax-free status of the transaction. Why firms choose to pursue a two-step spin-off instead of a 100% pure spin-
off is unclear. Previous research generally focuses on pure spin-offs, so this question has yet to be addressed.
A possible reason for a two-step spin-off is to avoid the dip in the stock price that the spun-off subsidiary usually
experiences in the first few months following the distribution. This initial stock price decline is usually associated
with the portfolio rebalancing activities of large institutional investors who may not wish to hold the shares of the
subsidiary given away by the parent in a spin-off transaction.
For example, the manager of an index fund may be required to sell the shares of the spun-off subsidiary if that
subsidiary does not form part of the index. In a two-step spin-off, the minority carve-out enables the parent firm
to create an orderly market for the new issue, so as to avoid flooding the market with a large number of shares,
as in the case of a pure spin-off (Lamont and Thaler, 2000). Also, since the carve-out takes the form of an IPO,
investment banks are often committed to help support and market the new issue - a feature that is also
conspicuously absent in a pure spin-off transaction. When the second step of the spin-off takes place, the
market is then better positioned to support the portfolio rebalancing activities highlighted above.
There is a wealth of research on the effects of spin-offs on both parent and subsidiary firms. Early research
efforts focused mainly on the changes in parent company share prices at the time of the spin-off
announcement. In a study of 6 major spin-offs in the 1970s, Kudla and McInish (1983) showed a positive
market reaction in the parents' stock 15 to 40 weeks before the distribution took place - an indication that the
market correctly predicted the spin-off well ahead of the actual event. This result has been supported by many
other studies for periods that date back as early as 1963 to 1981.
Cusatis, Miles and Woolridge (1993) were among the first researchers to focus on the performance of the
subsidiary post-spin-off. They examined 815 spin-offs from 1965 to 1988 and found significantly positive
abnormal returns for the spun-off subsidiary, the parent and the spin-off-parent combination for a period of up to
three years after the spin-off announcement date. They also found that the abnormal returns were attributable
to increased takeover activity, which was not fully anticipated by the market at the time of the spin-off
announcement. Hence, they concluded that earlier event studies underestimated the value created by spin-offs.
A 1997 study done by J P Morgan provided evidence that the positive stockholder wealth effects continued well
into the 1990s. Also, it was found that smaller spin-offs (with an initial market capitalization of less than $200
million) significantly outperformed their larger counterparts. J P Morgan attributed this to underpricing by the
market, which was in turn due to the lack of knowledge on the part of investors.
An interesting phenomenon reflected in the graphs showing the post-distribution stock returns of the spun-off
subsidiary, but not investigated by J P Morgan, is the initial decline in returns experienced by the spin-offs in
approximately the first 30 trading days after the distribution. Thereafter, the downward trend is reversed and
returns become positive three months after the spin-off date. This pricing anomaly, however, had already been
picked up by the press and documented by other researchers such as Brown and Brooke (1993) and
Abarbanell, Bushee and Raedy (1998). Brown and Brooke reported price declines of approximately 4% in spun-
off subsidiaries that coincided with substantial reductions in institutional holdings in these firms, and concluded
that the sudden and substantial sell-off of subsidiary shares by institutional investors as part of their portfolio
rebalancing activities explained the downward pressures on price and consequently returns.
Likewise, Abarbanell et al. found empirical evidence supporting the initial decline in the stock returns of the
spun-off subsidiary. In a study of 179 spin-offs between 1980 and 1996, they noted that the overall returns to
subsidiaries were significantly negative within 10 trading days of the distribution date, and this was consistent
with a decrease in the mean level of institutional ownership. In fact, a negative abnormal return of -4.12% was
observed for a 35-day trading period (similar to the finding by Brown and Brooke) and it took another 25 trading
days for this trend to completely reverse. However, Abarbanell et al. did not find any reliable evidence that led
them to conclude that this decline was associated with institutional sell-offs.
Tax Consideration
Spin-offs are generally structured so that they are not taxable to shareholders. To avoid ordinary income taxes,
the parent and the subsidiary must have been engaged in business for 5 years prior to the spin-off, the
subsidiary should be at least 80% owned by the parent, and the parent has to distribute the shares in the
subsidiary without a prearranged plan for these securities to be resold.
When the parent company has issued warrants and convertible securities, the conversion ratio may have to be
adjusted. The spin-off may make the common stock of the parent company less valuable if the deal is
structured so that the gain comes through the distribution of the proceeds in the form of a special dividend.
Warrant and convertible security holders may not participate in this gain. The stock price of the parent company
may fall because it becomes less likely that the price will rise high enough for the securities to be converted. If
this is the case, the conversion prices may need to be adjusted as part of the terms of the deal.
Where employees hold shares under an employee stock option plan, the number of shares obtainable may also
need to be adjusted after the spin-off. The adjustment is designed to leave the market value of the shares that
could be obtained after the spin-off at the same level. The main goal is to maintain the market value of the
shares that may be obtained through the conversion of the employee stock options.
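The hypothetical sketch below shows the kind of adjustment described above: the number of shares obtainable under an option is scaled up so that their market value immediately after the spin-off equals their value before it. The prices and share counts are illustrative assumptions, not figures from the text.

```python
# Hypothetical sketch of the adjustment described above: after a spin-off
# the number of shares under an employee stock option is scaled so that
# the market value of the shares obtainable stays the same. Prices are
# illustrative assumptions.
def adjusted_option_shares(original_shares, pre_spin_price, post_spin_price):
    """Scale the share count by the drop in the parent's share price."""
    return original_shares * pre_spin_price / post_spin_price

shares_before = 1_000
pre_price, post_price = 250.0, 200.0    # parent's price before / after the spin-off

shares_after = adjusted_option_shares(shares_before, pre_price, post_price)
print(shares_after * post_price == shares_before * pre_price)   # True: value preserved
print(shares_after)                                             # 1250.0
```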
Spin-offs have grown in popularity since 1992. Their growth was partly fueled by investors' preference for
releasing the values locked within the company's stock price.
Disadvantages of Spin-offs
• There will be considerable selling pressure from institutions and index funds immediately after the spin-off.
This will have a downward pressure on the stock price in the short-term.
• As shares are distributed primarily to existing shareholders, spin-offs lack liquidity.
• The parent does not receive any disposition proceeds and does not gain monetarily from the spin-off.
• A spin off is often perceived as a method for getting rid of a sub-par asset by the parent.
• The new company formed by the spin off has to incur expenses for issuing new shares.
• Servicing the shareholders will lead to duplication of the activities in parent and the spun off company.
EQUITY CARVE-OUTS
An Equity Carve-out (ECO) is a partial public offering of a wholly owned subsidiary. Unlike spin-offs, ECOs
generate a capital infusion because the parent offers shares in the subsidiary to the public through an IPO,
although it usually retains a controlling interest in the subsidiary. Like spin-offs, ECOs have become
increasingly popular in the last several years.
An equity carve-out involves conversion of an existing division or unit into a wholly owned subsidiary. A part of
the stake in this subsidiary is sold to outsiders. The parent company may or may not retain controlling stake in
the new entity. The shares of the subsidiary are listed and traded separately on the stock exchange. Equity
carve-outs result in a positive cash flow to the parent company. An equity carve-out is different from a spin-off
because of the induction of outsiders as new shareholders in the firm. Secondly, equity carve-outs require
higher levels of disclosure and are more expensive to implement.
"Pure play" Investment Opportunity: Pure plays have been in much demand by investors in recent years. An
ECO, especially for a subsidiary that is not involved in the parent's primary business or industry, increases the
subsidiary's visibility as well as analyst and investor awareness. This enhances its overall value. Investors also
like ECO pure plays because separating the parent and subsidiary minimizes cross-subsidies and other
potentially inefficient uses of capital.
Management Scorecard and Rewards: Management is evaluated on a daily basis through the company's
stock price. This immediate, visible scorecard can boost performance by spurring managers to make timely
strategic decisions and concentrate on the factors that contribute to better shareholder value. Correspondingly,
managers are also more likely to be rewarded for improved results.
Capital Market Access: An ECO typically improves access to capital markets for both the parent and the
subsidiary.
A typical carve-out scenario in the US begins with the parent publicly announcing its intention to offer securities
in a subsidiary or division through an ECO. Since an ECO is a type of IPO, companies must file an S-1
registration statement with the SEC. Registration requires three years of audited income statements, two years
of audited balance sheets, and five years of selected historic financial data. The ensuing process — including
the preparation of financial and registration statements, SEC review, responses, and amendments, and offering
marketing — normally takes up to six months. Once the SEC reviews and declares it effective, the parent can
sell the offering, either listing the spin-off on an exchange or providing for trading over the counter.
Either the parent or the carve-out (or both) can receive the IPO proceeds. If the subsidiary sells the shares, the
IPO represents a primary offering. Over 70 percent of the companies in the researchers' sample reported
handling the ECO in this manner. If the parent sells the shares (known as secondary shares), it must recognize
the difference between the IPO proceeds and its basis as a gain or loss for tax purposes. If the subsidiary sells
the shares in the IPO, neither the parent nor the carve-out incurs a tax liability. When the ECO sells the shares,
it often uses some of the proceeds to repay loans to the parent or pay a special dividend. A relatively small
number of ECOs are handled as joint offerings of the parent and subsidiary.
A study has found that in primary offerings, 50 percent of the ECOs used the proceeds to repay loans to
the parent, 30 percent retained the proceeds, and 20 percent paid them to creditors. In secondary offerings,
50 percent of the parents retained the proceeds, while 50 percent paid them to creditors. The research indicates that the initial
stock market reaction to an ECO announcement is more favorable if the subsidiary retains the funds.
After the IPO, all transactions between the parent and the subsidiary must be conducted on an arm's-length
basis and disclosed in the registration statement. The parent typically continues to perform certain corporate
services, such as investor relations, legal and tax services, human resources, data processing, and banking
services, on a contractual basis.
Strong potential ECO candidates have some or all of the following characteristics.
Strong Growth Prospects: If the subsidiary is in an industry with better growth prospects than the parent, it
will likely sell at a higher price/earnings multiple once it has been partially carved out of the parent.
Independent Borrowing Capacity: A subsidiary that has achieved the size, asset base, earnings and growth
potential, and identity of an independent company will be able to generate additional financing sources and
borrowing capacity after the carve-out.
Unique Corporate Culture: Subsidiaries whose corporate culture differs from that of the parent may be good
ECO candidates because the carve-out can offer management the freedom to run the company as an
independent entity. Companies that require entrepreneurial cultures for success can especially benefit from this
transaction.
Special Industry Characteristics: Subsidiaries with unusual characteristics are often better suited to
decentralized management decision-making, which may allow management to respond more quickly to
changes in technology, competition, and regulation.
Management Performance, Retention, and Rewards: Subsidiaries that compete in industries where
management retention is an issue and targeted reward systems are required can benefit from an ECO.
While analyzing a sample of ECOs, researchers found important increases in sales, operating income before
depreciation, total assets, and capital expenditures. However, they believe these improvements owe less to
newly gained efficiencies than to the carve-out's growth after going public. This is because the relative growth
rates were not positive or statistically significant.
Note that ECOs, like spin-offs, are subject to a great deal of takeover activity. In the sample, 50% of the ECOs
were acquired within three years. An analysis of returns for these companies suggests that ECOs that are taken
over perform better than average, while those that are not perform worse than average. Nonetheless, even the
latter outperform, on average, other types of firms. Overall, it is clear that ECOs earn significantly positive
abnormal stock returns for up to three years after the carve-out. Parents, on the other hand, earn negative stock
returns.
As with spin-offs, these higher-than-normal stock returns are associated with better operating performance and
corporate restructuring activity. As a restructuring device, ECOs clearly seem to lead to better operating
performance (on average) and greater increases in shareholder value.
In a study of equity carve-outs by J P Morgan, it was found that carve-out firms in which the parent firm
announced that a spin-off would follow at a later date, outperformed the market by 11% for a period of 18
months after the initial public offering, while carve-out firms without spin-off announcements underperformed
the market by 3%. Equity carve outs involve the sale of an equity interest in a subsidiary to outsiders. This sale
may not necessarily leave the parent in control of the subsidiary. Post carve-out, the partially divested
subsidiary is operated and managed as a separate firm.
The biggest disadvantage of carve-outs is the scope for conflict between the two companies. Operation-level
conflict occurs because of the creation of a new group of financial stakeholders by the managers of the carved-
out company. The requirements of these stakeholders differ from those of the original stakeholders. This
conflict can hinder the performance of both firms. The stock performance of a company that has carved out 70
to 100 percent is better than that of a company that has carved-out less than 70 percent. This indicates that
lack of separation between the two entities prevents the carved-out entity from reaching its potential.
Split-Off
In a split-off, a new company is created to take over the operations of an existing division or unit. A portion of the
shares of the parent company are exchanged for the shares of the new company. In other words, a section of
the shareholders will be allotted shares in the new company by redeeming their existing shares. The logic of
split-off is that the equity base of the parent company should be reduced reflecting the downsizing of the firm.
Hence the shareholding of the new entity does not reflect the shareholding of the parent firm. Just as in a spin-
off, a split-off does not result in any cash inflow to the parent company.
Split-Up
Split-up results in the complete break up of a company into two or more new companies. All the divisions or units
are converted into separate companies and the parent firm ceases to exist. The shares of the new companies
are distributed among the existing shareholders of the firm.
The term "split-up" is defined as the division of a company into two or more publicly traded comparatively
substantial entities through one or more transactions.
Chapter 8
Financial Engineering
ACTIVITY BASED COSTING
Applying overhead costs to each product or service based on the extent to which that product or service causes
overhead cost to be incurred is the primary objective of accounting for overhead costs. In many production
processes, overhead is applied to products using a single predetermined overhead rate based on a single
activity measure. With Activity-Based Costing (ABC), multiple activities are identified in the production process
that are associated with costs. The events within these activities that cause work (costs) are called cost drivers.
Examples of overhead cost drivers are machine set-ups, material-handling operations, and the number of steps
in a manufacturing process. Examples of cost drivers in non-manufacturing organizations are hospital beds
occupied, the number of take-offs and landings for an airline, and the number of rooms occupied in a hotel. The
cost drivers are used to apply overhead to products and services when using ABC.
The following five steps are used to apply costs to products under an ABC system.
The first step of ABC is to choose the activities that result in incurring of overhead costs. These activities do not
necessarily coincide with existing departments but rather represent a group of transactions that support the
production process. Typical activities used in ABC are designing, ordering, scheduling, moving materials,
controlling inventory, and controlling quality.
Each of these activities is composed of transactions that result in costs. More than one cost pool can be
established for each activity. A cost pool is an account to record the costs of an activity with a specific cost
driver.
Once the activities have been chosen, costs must be traced to the cost pools for different activities. To facilitate
this tracing, cost drivers are chosen to act as vehicles for distributing costs. These cost drivers are often called
resource drivers. A predetermined rate is estimated for each resource driver. Consumption of the resource
driver in combination with the predetermined rate determines the distribution of the resource costs to the
activities.
Cost drivers for activities are sometimes called activity drivers. Activity drivers represent the event that causes
costs within an activity. For example, activity drivers for the purchasing activity include negotiations with
vendors, ordering materials, scheduling their arrival, and perhaps inspection. Each of these activity drivers
represents costly procedures that are performed in the purchasing activity. An activity driver is chosen for each
cost pool. If two cost pools use the same cost driver, then the cost pools could be combined for product-costing
purposes.
Cooper has developed several criteria for choosing activity drivers. First, the data on the cost driver must be
easy to obtain. Second, the consumption of the activity implied by the activity driver should be highly correlated
with the actual consumption of the activity. The third criterion to consider is the behavioral effects induced by
the choice of the activity driver. Activity drivers determine the application of costs, which in turn can affect
individual performance measures.
The judicious use of more activity drivers increases the accuracy of product costs. Ostrenga concludes that there
is a preferred sequence for accurate product costs.
Direct costs are the most accurate in applying costs to products. The application of overhead costs through cost
drivers is the next most accurate process. Any remaining overhead costs must be allocated in a somewhat
arbitrary manner, which is less accurate.
An application rate must be estimated for each activity driver. A predetermined rate is estimated by dividing the
cost pool by the estimated level of activity of the activity driver. Alternatively, an actual rate is determined by
dividing the actual costs of the cost pool by the actual level of activity of the activity driver. Standard costs
could also be used to calculate a predetermined rate.
The application of costs to products is calculated by multiplying the application rate times the usage of the
activity driver in manufacturing a product or providing a service.
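As a minimal sketch of the mechanics described in the steps above, the following Python fragment distributes overhead to activity cost pools and then applies the pooled costs to a product through activity drivers. The cost pools, driver volumes and product usage are illustrative assumptions, not the Alpha Motors figures that follow.

```python
# Illustrative ABC sketch; all figures are assumed, not taken from the text.

# Overhead already traced to activity cost pools (Rs.) through resource drivers.
activity_cost_pools = {"designing": 400_000, "ordering": 150_000, "machining": 900_000}

# An activity driver and its budgeted volume for each cost pool.
driver_volumes = {"designing": 2_000,    # design hours
                  "ordering": 500,       # purchase orders
                  "machining": 15_000}   # machine hours

# Application rate = cost pool / budgeted volume of its activity driver.
rates = {a: activity_cost_pools[a] / driver_volumes[a] for a in activity_cost_pools}

# Apply costs to one product according to its actual usage of each driver.
product_usage = {"designing": 120, "ordering": 40, "machining": 600}
applied_overhead = sum(rates[a] * product_usage[a] for a in product_usage)

for a in rates:
    print(f"{a}: Rs.{rates[a]:.2f} per driver unit")
print(f"Overhead applied to the product: Rs.{applied_overhead:,.2f}")
```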
Alpha Motors Inc. produces electric motors. The company makes a standard electric-starter motor for a major
auto manufacturer and also produces electric motors that are specially ordered. The company has four
essential activities: design, ordering, machinery, and marketing. Alpha Motors incurs the following costs during
the month of January:
Traditional cost accounting would apply the overhead costs based on a single measure of activity. If direct labor
was used, then the overhead rate would be Rs.60,00,000/(Rs. 10,00,000 + Rs.2,00,000), or 500% of direct
labor. Hence:
Overhead applied to standard motors = 500% x Rs.10,00,000 = Rs.50,00,000
Overhead applied to special-order motors = 500% x Rs.2,00,000 = Rs.10,00,000
With ABC, activities are chosen and the overhead costs are distributed to cost pools within these activities
through resource drivers. The costs of activities are then applied to products through activity drivers. Alpha
Motors performs the following activities: designing, ordering, machining, and marketing. Each activity has one
cost pool. The overhead costs are distributed to the cost pools of the activities using the following resource
drivers:
Overhead Account Resource Driver
Indirect labor Labor hours
Depreciation of building Area of building
Depreciation of equipment Machine time
Maintenance Area of building
Utilities Amps used
The usages of the resource drivers by activity are:
The resource driver application rates are calculated by dividing overhead costs by total resource driver usage:
By multiplying the application rate times the resource usage of each activity, overhead costs can be allocated to
the different activities. For example, the cost of the indirect labor allocated to the designing activity is Rs.10 per labor
hour, i.e., Rs.10,00,000.
Once the overhead costs have been distributed to the activity cost pools, activity drivers must be chosen to
apply the costs to the products. Suppose the following activity drivers are chosen:
Alpha Motors uses actual costs and activity levels to determine the application rates shown below:
The application rates are then multiplied by the cost driver usage for each product to determine the costs
applied.
The ABC method applied a much higher amount of the overhead cost to the special-order electric motors than
when all overhead was applied on a direct-labor basis. The reason for the greater overhead application to the
special-order electric motors is their greater usage of the activities that support manufacturing during production.
Use of direct-labor hours to allocate overhead does not recognize the extra overhead requirements of the
special-order electric motors. Misapplication of overhead could lead to inappropriate product-line decisions.
The greater the diversity of the requirements that products place on overhead-related services and other
overhead costs, the greater the need for an ABC system.
ABC is valuable for planning, because the establishment of an ABC system requires a careful study of the total
manufacturing or service process of an organization. ABC highlights the causes of costs. An analysis of these
causes can identify activities that do not add to the value of the product. These activities include moving
materials and accounting for transactions. Although these activities cannot be completely eliminated, they may
be reduced. A recognition of how various activities affect costs can lead to modifications in the planning of
factory layouts and increased efforts in the design process stage to reduce future manufacturing costs.
An analysis of activities can also lead to better performance measurement. Workers on the line often
understand activities better than costs and can be evaluated accordingly. At higher management levels, the
activities can be aggregated to be in line with responsibility centers. Managers would be responsible for the
costs of the activities associated with their responsibility centers.
ABC, however, has certain limitations. First, ABC is based on historical costs. For planning decisions, future costs are generally the relevant costs.
Second, ABC does not partition variable and fixed costs. For many short run decisions, it is important to identify
variable costs. Third, ABC is only as accurate as the quality of the cost drivers. The distribution and application
of costs becomes an arbitrary allocation process when the cost drivers are not associated with the factors that
are causing costs. And finally, ABC tends to be more expensive than the more traditional methods of applying
costs to products.
Organizational Base Costing (OBC) – This is the traditional costing approach, which considers the cost of a product to be its direct
costs for materials and labor, plus some allocated portion of manufacturing overhead. In OBC, overhead is
allocated to products using plant-wide or departmental overhead rates, i.e., cost assignment follows the
organization chart.
• To decide on the number of cost drivers, the following factors can be considered:
– Purpose of the system
• The objectives of the system will determine how many cost drivers are needed
• The greater the number of cost drivers, the greater will be the cost of designing and maintaining the
system
– Resource availability
• Cost-benefit analysis can be applied: companies should always ask whether the incremental benefit is
justifiable in terms of the incremental cost incurred. The ultimate question is whether the company can
afford the best system available given its requirements.
– Company complexity
• Product as the cost object:
– Number of production processes
– Total indirect costs
– Product diversity
• Customer as the cost object:
– Number of distribution channels
– Steps in distribution system
– Variety in items
– Customer diversity
ECONOMIC VALUE ADDED (EVA)
Economic Value Added or EVA is the economic profit generated after the cost of invested capital. EVA
incorporates the opportunity cost of invested capital that is not realized by traditional accounting measures.
Numerous studies have shown EVA to have a higher correlation to stock valuation than accounting based
measures.
EVA = Net Operating Profit after Tax - (Invested Capital x Cost of Capital)
There are two steps required to convert GAAP net income to EVA. First, calculate net operating profit after tax
(NOPAT) by adjusting net income. Common adjustments include extraordinary gains and losses, securities
gains and losses, provision expenses and preferred stock dividends. Second, calculate invested capital and
apply cost of capital. Invested capital includes book value of common and preferred equity, after-tax allowance
for loan losses, and certain adjustments for cumulative non-operating gains and losses. Cost of capital equals
the minimum required rate of return for investors (e.g., 15%). Whenever EVA is positive, shareholders have
received a total economic return on their investment in excess of their required rate of return.
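A minimal sketch of this two-step computation, with assumed figures for NOPAT, invested capital and the cost of capital, could look like this:

```python
# EVA = NOPAT - (Invested Capital x Cost of Capital); all figures are assumed.
nopat = 2_500_000               # net operating profit after tax (Rs.)
invested_capital = 12_000_000   # adjusted book value of capital (Rs.)
cost_of_capital = 0.15          # investors' minimum required rate of return

capital_charge = invested_capital * cost_of_capital
eva = nopat - capital_charge
print(f"Capital charge: Rs.{capital_charge:,.0f}")
print(f"EVA: Rs.{eva:,.0f}")    # positive EVA means returns exceed the required rate
```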
CASH FLOW RETURN ON INVESTMENT (CFROI)
CFROI is defined as the return on investment expected over the average life of the firm's existing assets.
CFROI is essentially another form of the IRR measure. The key difference between the IRR and CFROI is that,
in CFROI, cash flows and investment are stated in constant monetary units, which overcomes the deficiencies
of the traditional return on investment methods.
LEVERAGE – Leverage in the general sense means influence or power, i.e., utilizing the existing resources to
attain something else. Leverage in terms of financial analysis is the influence which an independent financial
variable has over a dependent/related financial variable. When leverage is measured between two financial
variables, it explains how the dependent variable responds to a particular change in the independent variable.
To explain further, let X be an independent financial variable and Y its dependent variable; then the leverage
which Y has with X can be assessed as:
Leverage of Y with X = (% change in Y)/(% change in X) = (ΔY/Y)/(ΔX/X)
where ΔY and ΔX denote the changes in Y and X respectively.
Measures of Leverage
To better understand the importance of leverage in financial analysis, it is imperative to understand the three
measures of leverage.
• Operating Leverage
• Financial Leverage
• Combined/ Total Leverage.
These three measures of leverage depend to a large extent on the various income statement items and the
relationship that exists between them. Denoting the quantity sold by Q, the selling price per unit by S, the
variable cost per unit by V, fixed operating costs by F, interest by I, the preference dividend by Dp, the tax rate
by T and the number of equity shares by N, the relationships between the items of the Income Statement of
XYZ Company Ltd. can be written as:
EBIT = Q(S - V) - F ...(i)
PBT = EBIT - I ...(ii)
EPS = [(EBIT - I)(1 - T) - Dp]/N ...(iii)
The above three equations [(i), (ii) and (iii)], which establish the relationship between the various items of the
Income Statement, form the base for the measurement of the different leverages.
OPERATING LEVERAGE
Operating leverage examines the effect of a change in the quantity produced on the EBIT of the company and
is measured by calculating the Degree of Operating Leverage (DOL):
DOL = (% change in EBIT)/(% change in Q) = [Q(S - V)]/[Q(S - V) - F]
Illustration 8.1
Calculate the DOL for XYZ Company Ltd. given the following additional information:
It is important to know how the operating leverage is measured, but equally essential is to understand its
application and utility in financial analysis. To understand the application of DOL one has to understand the
behavior of DOL vis-a-vis the changes in the output by calculating the DOL at the various levels of Q.
Following are the different DOL for the various levels of Q for XYZ Company Ltd.:
When the value of Q is 3,000, the EBIT of the company is zero and this is the operating break-even point. Thus,
at the operating break-even point, where the EBIT is zero, the quantity produced can be calculated as follows:
Q = F/(S - V)
After measuring the DOL for a particular company at varying levels of output the following observations can be
made:
• If Q is less than the operating break-even point, then DOL will be negative (which does not imply that an
increase in Q leads to a decrease in EBIT).
• If Q is greater than the operating break-even point, then the DOL will be positive. However, the DOL will
start to decline as the level of output increases and will reach a limit of 1.
IMPLICATIONS
DOL helps in ascertaining change in operating income for a given change in output (quantity produced and
sold). If the DOL of a firm is say, 2, then a 10% increase in the level of output will increase operating income by
20%. A large DOL indicates that small fluctuations in the level of output will produce large fluctuations in the
level of operating income.
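The behavior of DOL across output levels can be sketched in Python. The contribution of Rs.300 per unit and fixed costs of Rs.9,00,000 used below are not stated explicitly in the illustration; they are working assumptions implied by the operating break-even of 3,000 units and the DOL of 2.5 at an output of 5,000 units.

```python
# DOL = Q(S - V) / [Q(S - V) - F]; S - V and F are assumed as noted above.
CONTRIBUTION_PER_UNIT = 300      # S - V (Rs.)
FIXED_COSTS = 900_000            # F (Rs.)

def dol(q):
    """Degree of operating leverage at output level q."""
    contribution = q * CONTRIBUTION_PER_UNIT
    ebit = contribution - FIXED_COSTS
    return float("inf") if ebit == 0 else contribution / ebit

for q in (1_000, 2_000, 3_000, 4_000, 5_000, 10_000):
    d = dol(q)
    print(q, "undefined (operating break-even)" if d == float("inf") else round(d, 2))
# Below 3,000 units DOL is negative; above it, DOL declines towards a limit of 1.
```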
In Table 8.1, two firms with different cost structures are compared.
Table 8.1 Cost and Profit Schedules for Bell Metal Works and Fibre Glass Ltd.
From table 8.1, we can see that Bell Metal Works has lower fixed costs and higher variable cost per unit when
compared to Fibre Glass Limited. The selling price per unit (P) of both firms is the same, viz., Rs.10. An
interesting point we notice is that at an output of 50,000 units both firms have the same profit i.e., Rs.60,000.
However, as sales fluctuate, the EBIT of Bell Metal Works fluctuates far less than the EBIT of Fibre Glass
Limited. This brings us to the conclusion that the DOL of Fibre Glass Limited is greater than the DOL of Bell
Metal Works. Let us compute the DOL of these two firms at an output of 50,000 units. For Bell Metal Works:
DOL = [50,000 (10 - 7)] / [50,000 (10 - 7) - 90,000] = 2.5
For Fibre Glass Limited, the corresponding calculation gives a DOL of 4.17, confirming that its EBIT is more
sensitive to changes in sales.
• Measurement of Business Risk: We know that the greater the DOL, the more sensitive is EBIT to a given
change in unit sales, i.e. the greater is the risk of exceptional losses if sales become depressed. DOL is
therefore a measure of the firm's business risk. Business risk refers to the uncertainty or variability of the
firm's EBIT. So, every thing else being equal, a higher DOL means higher business risk and vice-versa.
• Production Planning: DOL is also important in production planning. For instance, the firm may have the
opportunity to change its cost structure by introducing labor-saving machinery, thereby reducing
variable labor overhead while increasing the fixed costs. Such a situation will increase DOL. Any method
of production which increases DOL is justified only if it is highly probable that sales will be high enough for the
firm to enjoy the increased earnings that the higher DOL makes possible.
FINANCIAL LEVERAGE
While operating leverage measures the change in the EBIT of a company for a particular change in the output,
financial leverage measures the effect of a change in EBIT on the EPS of the company. Financial
leverage also refers to the mix of debt and equity in the capital structure of the company. The measure of
financial leverage is the Degree of Financial Leverage (DFL) and it can be calculated as follows:
DFL = (% change in EPS)/(% change in EBIT) = (ΔEPS/EPS)/(ΔEBIT/EBIT) = EBIT/[EBIT - I - Dp/(1 - T)]
Taking the example of XYZ Company Ltd., which has an EBIT of Rs.6,00,000 at 5,000 level of production, the
capital structure of the company is as follows:
Financial leverage when measured for various levels of EBIT will aid in understanding the behavior of DFL and
also explain its utility in financial decision making. Consider the case of XYZ Company Ltd. to measure DFL for
varying levels of EBIT.
EBIT (Rs.)      DFL
50,000         -0.40
1,00,000       -1.33
1,75,000       undefined
6,00,000        1.41
7,00,000        1.33
7,50,000        1.30
The DFL at EBIT level of 175000 is undefined and this point is the Financial Break-even Point. It can be defined
as:
EBIT = I + Dp/(1 - T)
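The behavior of DFL shown in the table above can be reproduced with a short sketch. Since the capital structure of XYZ Company Ltd. is not reproduced here, the interest of Rs.75,000, preference dividend of Rs.50,000 and 50% tax rate below are assumptions chosen only so that the financial break-even works out to Rs.1,75,000.

```python
# DFL = EBIT / [EBIT - I - Dp/(1 - T)]; I, Dp and T are assumed values that give
# a financial break-even of Rs.1,75,000, as in the text.
I, DP, T = 75_000, 50_000, 0.50
FIN_BREAK_EVEN = I + DP / (1 - T)          # = 1,75,000

def dfl(ebit):
    denom = ebit - FIN_BREAK_EVEN
    return float("inf") if denom == 0 else ebit / denom

for ebit in (50_000, 1_00_000, 1_75_000, 6_00_000, 7_00_000, 7_50_000):
    d = dfl(ebit)
    print(ebit, "undefined (financial break-even)" if d == float("inf") else round(d, 2))
# Negative below, undefined at, and declining towards 1 above the break-even point.
```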
The following observations can also be made from studying the behavior of DFL.
By assessing the DFL one can understand the impact of a change in EBIT on the EPS of the company. In
addition to this, it also helps in assessing the financial risk of the firm.
Impact of Financial Leverage on Investor's Rate of Return
Let us see with the help of a very simple example, how financial leverage affects return on equity. A company
needs a capital of Rs. 10,000 to operate. This money may be brought in by the shareholders of the company.
Alternatively, a part of this money may also be brought in through debt financing. If the management raises Rs.
10,000 from shareholders, the company is not financially leveraged and would have the following balance
sheet.
The use of debt in the company's capital structure has caused the net profit to decline from Rs.1,500 to Rs.1,000.
But has the return on owner's capital declined? Return on Equity now works out to 30%, as the owners
have invested only Rs.3,333, which earned them Rs.1,000. What were the factors which contributed to this
additional return? We can trace two sources of this additional return:
• though the company has to pay interest at 15% on borrowed capital, the company's operations have been
able to generate more than 15% which is being transferred to the owners.
• the reduction in PBT has brought about a reduction in the amount of tax paid, as interest is a tax-deductible
expense; the tax saved amounts to interest x tax rate, i.e., Rs.500. The greater the tax rate, the more is the tax
shield available to a company which is financially leveraged.
As was seen in the above example, a company may increase the return on equity by the use of debt i.e., the
use of financial leverage. By increasing the proportion of debt in the pattern of financing i.e., by increasing the
debt-equity ratio, the company should be able to increase the return on equity.
If increased financial leverage leads to increased return on equity, why do companies not resort to ever
increasing amounts of debt financing? Why do financial and other term lending institutions insist on norms for
Debt-Equity Ratio? The answer is that as the company becomes more financially leveraged, it becomes riskier,
i.e., increased use of debt financing will lead to increased financial risk, as the following example shows.
In the previous example, let us assume that sales decline by 10% (from Rs.10,000 to Rs.9,000), expenses
remaining the same. What happens to return on equity? The income statements for the financially unleveraged
and leveraged firms will appear as follows:
                                    Unleveraged Firm (zero debt)    Leveraged Firm (Debt-Equity Ratio 2:1)
Sales                                              9,000                         9,000
Expenses                                           7,000                         7,000
EBIT                                               2,000                         2,000
Interest Charges (6,667 x 0.15)                        -                         1,000
PBT                                                2,000                         1,000
Tax @ 50%                                          1,000                           500
Net Profit                                         1,000                           500
Net Profit at Sales of Rs.10,000                   1,500                         1,000
ROE at Sales of Rs.10,000                            15%                           30%
ROE at Sales of Rs.9,000                             10%                           15%
We see that a 10% decline in sales produces substantial declines in earnings and the rates of return on owner's
equity in both cases. But the decline is greater for the financially leveraged firm than for the financially
unleveraged firm. Why is this so? The reason can be traced to the fact that once a firm borrows capital, interest
payments become obligatory and hence fixed in nature. The same interest payment which was the cause for
increase in owner's equity when sales were Rs. 10,000 is now the cause for its more than proportional decline
with a decline in sales. Hence, the greater the use of financial leverage, the greater the potential fluctuation in
return on equity.
Firms that are highly financially leveraged are perceived by lenders of debt as risky. Creditors may refuse to
lend to a highly leveraged firm or may do so only at higher rates of interest or more stringent loan conditions. As
the interest rate increases, the return on equity decreases. However, even though the rate of return diminishes,
it might still exceed the rate of return obtained when no debt was used, in which case financial leverage would
still be favorable.
IMPLICATIONS
Let us again refer to our earlier example. In the first situation, the company was unleveraged, in the second
situation the debt-equity ratio was 2:1. The balance sheet and income statements are reproduced below:
Balance Sheets
Unleveraged Firm                                  Leveraged Firm
Liabilities              Assets                   Liabilities              Assets
Equity Capital  10,000   Cash    10,000           Equity Capital   3,333   Cash    10,000
                                                  Debt             6,667
Total           10,000   Total   10,000           Total           10,000   Total   10,000
Income Statements
Unleveraged Leveraged
Sales 10,000 10,000
Expenses 7,000 7,000
EBIT 3,000 3,000
Interest - 1,000
PBT 3,000 2,000
Tax @ 50% 1,500 1,000
Net Profit 1,500 1,000
What do these figures imply? They imply that if EBIT changes by 1%, EPS will also change by 1% when the
company uses no debt. However, EPS changes by 1.5% when the company uses debt in the ratio of 2:1 (66.67% of total
capital). This is proof of what we have stated earlier: the greater the leverage, the wider are the fluctuations in the
return on equity and the greater is the financial risk the company is exposed to. Through an EBIT-EPS analysis,
we can evaluate various financing plans or degrees of financial leverage with respect to their effect on EPS.
TOTAL LEVERAGE
A combination of the operating and financial leverages is the total or combined leverage. Thus, the degree of
total leverage (DTL) measures the effect of a change in output on the EPS of the company. DTL is the product of DOL and DFL
and can be calculated as follows:
DTL = DOL x DFL = {[Q(S - V)]/[Q(S - V) - F]} x {[Q(S - V) - F]/[Q(S - V) - F - I - Dp/(1 - T)]}
    = [Q(S - V)]/[Q(S - V) - F - I - Dp/(1 - T)]
Calculating the DTL for XYZ Co. Ltd. given the following information:
DTL = 2.5 x 1.41 = 3.53
Thus, when the output is 5,000 units, a one percent change in Q will result in a 3.53% change in EPS.
Before understanding what application the total leverage has in the financial analysis of a company, let us make
a few more observations by studying its behavior. Let us calculate the overall break-even point and the DTL for
the various levels of Q, given the following information:
F = Rs.8,00,000
I = Rs.80,000
Dp = Rs.60,000
S = Rs. 1,000
V = Rs.600
The overall break-even point is that level of output at which the DTL will be undefined and EPS is equal to zero.
This level of output can be calculated as follows (assuming, as before, a tax rate of 50 percent):
Q = [F + I + Dp/(1 - T)]/(S - V) = [8,00,000 + 80,000 + 60,000/(1 - 0.5)]/(1,000 - 600) = 2,500.
Thus, the overall break-even point is at 2500 units. Calculating DTL for various levels of output with the given
information:
Q          DTL
1,000     -0.67
2,000     -4.00
2,500     undefined
3,000      6.00
5,000      2.00
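The DTL figures in the table can be verified with a short sketch using the data given above; the 50 percent tax rate is an assumption, consistent with the overall break-even of 2,500 units.

```python
# DTL = Q(S - V) / [Q(S - V) - F - I - Dp/(1 - T)]; T = 50% is assumed.
S, V, F, I, DP, T = 1_000, 600, 8_00_000, 80_000, 60_000, 0.50
FIXED_CHARGES = F + I + DP / (1 - T)       # = 10,00,000

def dtl(q):
    contribution = q * (S - V)
    denom = contribution - FIXED_CHARGES
    return float("inf") if denom == 0 else contribution / denom

for q in (1_000, 2_000, 2_500, 3_000, 5_000):
    d = dtl(q)
    print(q, "undefined (overall break-even)" if d == float("inf") else round(d, 2))
# Prints -0.67, -4.0, undefined, 6.0 and 2.0, matching the table.
```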
Further, the DTL has the following applications in analyzing the financial performance of a company:
1. Measures changes in EPS: DTL measures the change in EPS for a percentage change in Q. Thus, the
percentage change in EPS can be easily assessed as the product of DTL and the percentage change in Q.
For example, if the DTL at a Q of 3,000 units is 6 and there is a 10% increase in Q, the effect on EPS is a 60% increase.
2. Measures Total Risk: DTL measures the total risk of the company since it captures both operating
risk and financial risk. Thus, by measuring total risk, it measures the variability of EPS for a given error in
forecasting Q.
APPRAISAL CRITERIA
Having defined the costs and the benefits associated with a project, we are now ready to examine whether the
project is financially desirable or not. A number of criteria have been evolved for evaluating the financial
desirability of a project. These criteria can be classified as follows:
Figure 8.2
Payback Period
The payback period measures the length of time required to recover the initial outlay in the project. For
example, if a project with a life of 5 years involves an initial outlay of Rs.20 lakh and is expected to generate a
constant annual inflow of Rs.8 lakh, the payback period of the project = 20/8 = 2.5 years. On the other hand if
the project is expected to generate annual inflows of, say Rs.4 lakh, Rs.6 lakh, Rs.10 lakh, Rs.12 lakh and
Rs.14 lakh over the 5 year period the payback period will be equal to 3 years because the sum of the cash
inflows over the first three years is equal to the initial outlay.
In order to use the payback period as a decision rule for accepting or rejecting the projects, the firm has to
decide upon an appropriate cut-off period. Projects with payback periods less than or equal to the cut-off period
will be accepted and others will be rejected. The payback period is a widely used investment appraisal criterion
for the following reasons:
• It helps in weeding out risky projects by favoring only those projects which generate substantial inflows in
earlier years.
The payback period criterion, however, suffers from the following serious shortcomings:
• It fails to consider the time value of money, the importance of which has already been discussed at length.
• The cut-off period is chosen rather arbitrarily and applied uniformly for evaluating projects regardless of
their life spans. Consequently the firm may accept too many short-lived projects and too few long-lived
ones.
• Since the application of the payback criterion leads to discrimination against projects which generate
substantial cash inflows in later years, the criterion cannot be considered as a measure of profitability.
To incorporate the time value of money in the calculation of payback period some firms compute what is called
the "discounted payback period". In other words, these firms discount the cash flows before they compute the
payback period. For instance, if a project involves an initial outlay of Rs.10 lakh and is expected to generate a
net annual inflow of Rs.4 lakh for the next 4 years, the discounted payback will be that value of 'n' for which
4 x PVIFA(12%, n) = 10 ...(1)
i.e., PVIFA(12%, n) = 2.5
Therefore, 'n' lies between 3 and 4 years and is approximately equal to 3.15 years. We find the discounted payback
period is longer than the undiscounted payback period, which is 2.5 years in this case.
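The simple and discounted payback computations for this example can be sketched as follows; the interpolation within the recovery year reproduces the 3.15-year figure mentioned above.

```python
# Outlay of Rs.10 lakh, annual inflows of Rs.4 lakh for 4 years, discounted at 12%.
OUTLAY, LIFE, K = 10.0, 4, 0.12            # amounts in Rs. lakh
flows = [4.0] * LIFE

def payback(outlay, inflows, rate=0.0):
    """Years for cumulative (discounted) inflows to recover the outlay."""
    cumulative = 0.0
    for year, cf in enumerate(inflows, start=1):
        pv = cf / (1 + rate) ** year
        if cumulative + pv >= outlay:
            # interpolate within the year for a fractional payback period
            return year - 1 + (outlay - cumulative) / pv
        cumulative += pv
    return None                             # not recovered within the project life

print("Simple payback:", round(payback(OUTLAY, flows), 2), "years")        # 2.5
print("Discounted payback:", round(payback(OUTLAY, flows, K), 2), "years") # about 3.15
```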
Evaluating the discounted payback period as an appraisal criterion, we find it to be only marginally better than the
undiscounted payback period. It considers the time value of money and thereby does not give equal weight
to all flows before the cut-off date. But it still suffers from the other shortcomings of the payback period: this
criterion also depends on the choice of an arbitrary cut-off date and ignores all cash flows after that date. In
practice, companies do not give much importance to the payback period as an appraisal criterion.
Accounting Rate of Return
The accounting rate of return or the book rate of return is typically defined as follows:
Accounting Rate of Return (ARR) = Average Profit After Tax/Average book value of the investment.
To use it as an appraisal criterion, the ARR of a project is compared with the ARR of the firm as a whole or
against some external yard-stick like the average rate of return for the industry as a whole. To illustrate the
computation of ARR consider a project with the following data:
(Amount in Rs.)
Year                                          0          1          2          3
Investment                               (90,000)
Sales revenue                                       1,20,000   1,00,000     80,000
Operating expenses (excluding depreciation)           60,000     50,000     40,000
Depreciation                                          30,000     30,000     30,000
Annual income                                         30,000     20,000     10,000
ARR = Average profit after tax/Average book value of investment = [(30,000 + 20,000 + 10,000)/3]/(90,000/2) = 20,000/45,000 = 44.4 percent. The firm will accept the project if its target average rate of return is lower than 44 percent.
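The ARR computation above can be sketched directly from the project data; straight-line depreciation to a zero salvage value is implied by the depreciation charge of Rs.30,000 per year on an investment of Rs.90,000.

```python
# ARR = average profit / average book value of the investment.
investment = 90_000
annual_income = [30_000, 20_000, 10_000]            # years 1 to 3, from the table

average_profit = sum(annual_income) / len(annual_income)   # 20,000
average_book_value = (investment + 0) / 2                  # 45,000 (zero salvage)
arr = average_profit / average_book_value
print(f"ARR = {arr:.1%}")                                  # about 44%
```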
• Like payback criterion, ARR is simple both in concept and application. It appeals to businessmen who find
the concept of rate of return familiar and easy to work with rather than absolute quantities.
• It considers the returns over the entire life of the project and therefore serves as a measure of profitability
(unlike the payback period which is only a measure of capital recovery).
This criterion, however, suffers from several serious defects. First, it ignores the time value of
money. Put differently, it makes no allowance for the fact that immediate receipts are more valuable than
distant flows and, as a result, it gives too much weight to the more distant flows. Second, the ARR depends on
accounting income and not on the cash flows. Since cash flows and accounting income are often different and
investment appraisal emphasizes cash flows, a profitability measure based on accounting income cannot be
used as a reliable investment appraisal criterion. Finally, the firm using ARR as an appraisal criterion must
decide on a yard-stick for judging a project and this decision is often arbitrary. Often firms use their current
book-return as the yard-stick for comparison. In such cases if the current book return of a firm tends to be
unusually high or low, then the firm can end up rejecting good projects or accepting bad projects.
Net Present Value
We have already discussed the concept of present value and the method of computing the present value in the
chapter on time value of money. The net present value is equal to the present value of future cash flows plus
any immediate cash flow (which, for a project, is an outflow and hence negative). In the case of a project, the
immediate cash flow will be the investment (cash outflow), and the net present value will therefore be equal to
the present value of future cash inflows minus the initial investment. The following illustration demonstrates this point.
Illustration 8.2
Consider a project which involves an initial investment of Rs.12,500 and the net cash flows given in the solution below. Compute the net present value of the project, if the cost of
funds to the firm is 12 percent.
Solution
The net cash flows of the project and their present values are as follows:
Year 1 2 3 4
Net cash flow (Rs.) 5100 5100 5100 7100
PVIF@k-12% 0.893 0.797 0.712 0.636
Present value (Rs.) 4554 4065 3631 4516
The sum of the present values of the net cash flows is Rs.16,766. The NPV of the project is therefore 16,766 - 12,500 = Rs.4,266.
The decision rule based on the NPV criterion is obvious. A project will be accepted if its NPV is positive and
rejected if its NPV is negative. Rarely in real life situations, we encounter a project with NPV exactly equal to
zero. If it happens, theoretically speaking, the decision-maker is supposed to be either indifferent in accepting
or rejecting the project. But in practice, NPV in the neighborhood of zero, calls for a close review of the
projections made in respect of such parameters that are critical to the viability of the project because even
minor adverse variations can mar the viability of such marginally viable projects.
The NPV is a conceptually sound criterion of investment appraisal because it takes into account the time value
of money and considers the cash flow stream in its entirety. Since net present value represents the contribution
to the wealth of the shareholders, maximizing NPV is congruent with the objective of investment decision
making viz., maximization of shareholders' wealth. The only problem in applying this criterion appears to be the
difficulty in comprehending the concept per se. Most non-financial executives and businessmen find 'Return on
Capital Employed' or 'Average Rate of Return' easy to interpret compared to absolute values like the NPV.
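As a small illustration of the mechanics, the NPV of the project in Illustration 8.2 can be computed directly; the initial outlay of Rs.12,500 is the figure used for the benefit-cost ratio that follows.

```python
# NPV = present value of future cash inflows minus the initial investment.
def npv(rate, outlay, inflows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(inflows, start=1)) - outlay

flows = [5_100, 5_100, 5_100, 7_100]
print(round(npv(0.12, 12_500, flows)))   # about Rs.4,262 (the rounded PVIF table gives 4,266)
```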
Benefit-Cost Ratio
The benefit-cost ratio (BCR) is defined as:
BCR = PVB/I
where PVB = present value of benefits (cash inflows) and I = initial investment.
A variant of the benefit-cost ratio is the net benefit-cost ratio (NBCR), which is defined as:
NBCR = NPV/I = BCR - 1
The BCR and NBCR for the project described in Illustration 8.2 will be:
BCR = 16,766/12,500 = 1.34
NBCR = 4,266/12,500 = 0.34
The decision-rules based on the BCR (or alternatively the NBCR) criterion will be as follows:
If BCR > 1 (NBCR > 0): Accept the project
If BCR < 1 (NBCR < 0): Reject the project
Since the BCR measures the present value per rupee of outlay, it is considered to be a useful criterion for
ranking a set of projects in the order of decreasingly efficient use of capital. But there are two serious limitations
inhibiting the use of this criterion. First, it provides no means for aggregating several smaller projects into a
package that can be compared with a large project. Second, when the investment outlay is spread over more
than one period, this criterion cannot be used. The following illustration demonstrates the first limitation.
Illustration 8.3
Zeta Limited has four investment proposals: A, B, C and D. The funds available for investment are limited to Rs.20 lakh and the cost of funds to the firm is 14 percent. Rank
the four projects in terms of the NPV and BCR criteria. Which project(s) will you recommend, given the limited
supply of funds?
Solution
Based on the NPV and BCR criteria, all 4 projects are acceptable because NPV is positive and BCR is greater
than one for each project. But all 4 projects cannot be taken by the firm because of the limited availability of
funds. Either Zeta has to accept project A or a package consisting of projects B, C and D, but not both. The
decision will depend upon which option maximizes the shareholders' wealth. In this sort of a decision-making
situation, the BCR becomes inapplicable because there is no way by which we can aggregate the BCRs of
projects B, C and D. On the other hand NPVs of projects B, C, and D can be aggregated and compared with
the NPV of project A to arrive at a decision.
NPV (B + C + D) = NPV (B) + NPV (C) + NPV (D) = 0.65 + 1.58 + 4.02 = 6.25, which is more than NPV (A).
Therefore the package comprising projects B, C and D must be accepted.
Internal Rate of Return
The internal rate of return (IRR) of a project is that rate of discount at which the NPV of the project equals zero.
Illustration 8.4
Solution
To determine the IRR, we have to compute the NPV of the project for different rates of interest until we find that
rate of interest at which the NPV of the project is equal to zero or sufficiently close to zero. To reduce the
number of iterations involved in this trial and error process, we can use the following short-cut procedure:
Step 1
Find the average annual net cash flow based on the given future net cash flows. In our illustration, the average
annual net cash flow will be equal to: (5 + 5 + 3.08 + 1.2)/4 = 3.57
Step 2
Divide the initial outlay by the average annual net cash flow i.e., 10/3.57 = 2.801
Step 3
From the PVIFA table, find the interest rate at which the present value of an annuity of Re.1 will be nearly equal
to 2.801 in 4 years, i.e., the duration of the project. In our case, this rate of interest will be equal to 15%.
We use 15% as the initial value for starting the trial and error process and keep trying at successively higher
rates of interest until we get an interest rate at which the NPV is marginally above zero and an interest rate at
which the NPV is marginally below zero. Now we know that the IRR lies between these two rates of interest and,
using a linear approximation, we can determine the approximate value of the IRR. In the case of our project:
NPV at r = 15% will be equal to: -10 + (5 x 0.870) + (5 x 0.756) + (3.08 x 0.658) + (1.2 x 0.572) = 0.84
NPV at r = 16% will be equal to: -10 + (5 x 0.862) + (5 x 0.743) + (3.08 x 0.641) + (1.2 x 0.552) = 0.66
NPV at r = 18% will be equal to: -10 + (5 x 0.847) + (5 x 0.718) + (3.08 x 0.609) + (1.2 x 0.516) = 0.32
NPV at r = 20% will be equal to: -10 + (5 x 0.833) + (5 x 0.694) + (3.08 x 0.579) + (1.20 x 0.482) = 0
We find that at r = 20%, the NPV is zero and therefore the IRR of the project is 20%.
To use IRR as an appraisal criterion, we require information on the cost of capital or funds employed in the
project. If we define IRR as 'r' and cost of funds employed as 'k', then the decision rule based on IRR will be:
Accept the project if 'r' is greater than k and reject the project if 'r' is less than k. (If r = k, it is a matter of
indifference).
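The trial-and-error search described above lends itself to a simple bisection sketch; the cash flows used are those of the project in the preceding computation.

```python
# IRR by bisection on the NPV function; cash flows are those used above.
def npv(rate, flows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

def irr(flows, lo=0.0, hi=1.0, tol=1e-6):
    """Bisection search; assumes NPV is positive at lo and negative at hi."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

cash_flows = [-10, 5, 5, 3.08, 1.2]
print(f"IRR = {irr(cash_flows):.2%}")     # approximately 20%
```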
IRR is a popular method of investment appraisal and has a number of merits like:
• It considers the cash flow stream over the entire investment horizon.
• Like ARR, it makes sense to businessmen who prefer to think in terms of rate of return on capital employed.
IRR is uniquely defined only for a project whose cash flow pattern is characterized by cash outflow(s) followed
by cash inflows (such projects are called simple investments). If the cash flow stream has one or more cash
outflows interspersed with cash inflows, there can be multiple internal rates of return. This point can be clarified
with the help of the following table, where four projects with different patterns of cash flows are given:
Table: 8.1
(Rs. in lakh)
Project Cash Flow Stream (Rs.)
Year 0 Year 1 Year 2 Year3 Year 4
A -20 5 10 15 15
B -10 -10 15 15 15
C -10 5 -10 20 20
D -10 15 10 -5 20
• Projects A and B are simple investments and therefore will have unique IRR values. But projects C and D
can have multiple internal rates of return because their cash inflows and outflows are interspersed. For
such projects, IRR cannot be a meaningful criterion of appraisal.
• The IRR criterion can be misleading when the decision-maker has to choose between mutually exclusive
projects that differ significantly in terms of outlays.
In spite of these defects, IRR is still the best criterion today to appraise a project financially. Financial
institutions insist that projects having substantial outlays, especially in the medium and large scale sectors, must
show the computation of IRR in the Detailed Project Report, which they appraise before sanctioning financial
assistance.
Annual Capital Charge
This appraisal criterion is used for evaluating mutually exclusive projects or alternatives which provide similar
service but have differing patterns of costs and often unequal life spans, e.g., choosing between fork-lift
transportation and conveyor-belt transportation.
The steps involved in computing the annual capital charge are as follows:
Step 1
Determine the present value of the initial investment and operating costs using the cost of capital (k) as the
discount rate.
Step 2
Divide the present value by PVIFA (k,n) where n represents the life span of the project. The quotient is defined
as the annual capital charge or the equivalent annual cost. Once the annual capital charges for the various
alternatives are determined, the alternative which has the minimum annual capital charge is selected.
Illustration 8.5
Hindustan Forge Limited is evaluating two alternative systems: A and B, for internal transportation. While the
two systems serve the same purpose, system A has a life of 7 years and system B has a life of 5 years. The
initial outlay and operating costs (in Rs.) associated with these systems are:
Year A B
0 10,00,000 8,00,000
1 1,00,000 75,000
2 1,25,000 1,00,000
3 1,50,000 1,20,000
4 1,75,000 1,40,000
5 2,00,000 1,00,000
6 2,25,000
7 2,00,000
Calculate the annual capital charge associated with these two systems, if the cost of capital is 12 percent. (You
can assume that the net salvage values of the two systems at the end of their economic lives will be zero.)
Solution
Present value of the costs of system A
= Rs.10,00,000 + (1,00,000 x 0.893) + (1,25,000 x 0.797) + (1,50,000 x 0.712) + (1,75,000 x 0.636) + (2,00,000 x 0.567) + (2,25,000 x 0.507) + (2,00,000 x 0.452) = Rs.17,24,900
Annual capital charge of system A = 17,24,900/PVIFA(12%, 7) = 17,24,900/4.564 = Rs.3,77,936 (approx.)
Present value of the costs of system B
= Rs.8,00,000 + (75,000 x 0.893) + (1,00,000 x 0.797) + (1,20,000 x 0.712) + (1,40,000 x 0.636) + (1,00,000 x 0.567) = Rs.11,77,855
Annual capital charge of system B = 11,77,855/PVIFA(12%, 5) = 11,77,855/3.605 = Rs.3,26,728
Since the annual capital charge associated with system B is lower than that of system A, system B is preferred
to system A.
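The two annual capital charges can also be computed directly, as in the sketch below; exact discount factors are used, so the results differ marginally from those obtained with the rounded PVIF values above.

```python
# Equivalent annual cost = PV of all cash outflows / PVIFA(k, n).
def pvifa(k, n):
    return (1 - (1 + k) ** -n) / k

def annual_capital_charge(outflows, k):
    """outflows[0] is the initial outlay; the rest are the yearly operating costs."""
    pv = sum(cf / (1 + k) ** t for t, cf in enumerate(outflows))
    return pv / pvifa(k, len(outflows) - 1)

system_a = [10_00_000, 1_00_000, 1_25_000, 1_50_000, 1_75_000, 2_00_000, 2_25_000, 2_00_000]
system_b = [8_00_000, 75_000, 1_00_000, 1_20_000, 1_40_000, 1_00_000]
print(round(annual_capital_charge(system_a, 0.12)))   # about Rs.3,77,900
print(round(annual_capital_charge(system_b, 0.12)))   # about Rs.3,26,700
```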
A wide variety of measures are used in practice for appraising investments. But whatever method is used, the
appraisal must be carried out in explicit, well-defined, preferably standardized terms and should be based on
sound economic logic.
Chapter 9
Financial Ethics
Business Ethics and Social Responsibility
Is the goal of maximizing stock prices consistent or inconsistent with high standards of ethical behavior and
social responsibility? It is most definitely consistent. Many socially responsible firms have created enormous
value for their owners, and many unethical firms now are bankrupt.
Business Ethics
The word ethics is defined in Webster's dictionary as "standards of conduct or moral behavior." Business
ethics can be thought of as a company's attitude and conduct toward its employees, customers, community,
and stockholders. High standards of ethical behavior demand that a firm treat each party that it deals with in a
fair and honest manner. A firm's commitment to business ethics can be measured by the tendency of the firm
and its employees to adhere to laws and regulations relating to such factors as product safety and quality, fair
employment practices, fair marketing and selling practices, the use of confidential information for personal gain,
community involvement, bribery, and illegal payments to obtain business.
There are many instances of firms engaging in unethical behavior. For example, in recent years the employees
of several prominent Wall Street investment banking houses have been sentenced to prison for illegally using
insider information on proposed mergers for their own personal gain, and E.F. Hutton, a large brokerage firm,
lost its independence through a forced merger after it was convicted of cheating its banks out of millions of
dollars in a check kiting scheme. Drexel Burnham Lambert, once the most profitable investment banking firm,
went bankrupt, and its "junk bond king," Michael Milken, who had earned $550 million in just one year, was
sentenced to ten years in prison and fined heavily for securities-law violations. Another investment
bank, Salomon Brothers, was implicated in a Treasury bond scandal that resulted in the firing of its chairman
and other top officers.
These cases received a lot of notoriety, and they made people wonder about the ethics of business in general.
However, the results of a recent study indicate that the executives of most major firms in the United States do
try to maintain high ethical standards in all of their business dealings. Furthermore, there is a positive
correlation between ethics and long-run profitability. For example, Chase Bank suggested that ethical behavior
has increased its profitability because such behavior helped it (1) avoid fines and legal expenses, (2) build
public trust, (3) attract business from customers who appreciate and support its policies, (4) attract and keep
employees of the highest caliber, and (5) support the economic viability of the communities in which it operates.
Most firms today have in place strong codes of ethical behavior, and they also conduct training programs
designed to ensure that employees understand the correct behavior in different business situations. However, it
is imperative that top management—the chairman, president, and vice-presidents—be openly committed to
ethical behavior, and that they communicate this commitment through their own personal actions as well as
through company policies, directives, and punishment/reward systems.
When conflicts arise between profits and ethics, sometimes the ethical considerations are so strong that they
clearly dominate. However, in many cases the choice between ethics and profits is not clear cut. For example,
suppose Norfolk Southern's managers know that its trains are polluting the air along its routes, but the amount
of pollution is within, legal limits and preventive actions would be costly. Axe the managers ethically bound to
reduce pollution? Similarly, suppose a medical products company's own research indicates that one of its new
products may cause problems. However, the evidence is relatively weak, other evidence regarding benefits to
patients is strong, and independent government tests show no adverse effects. Should the company make the
potential problem known to the public? If it does release the negative (but questionable) information, this will
hurt sales and profits, and possibly keep some patients who would benefit from the new product from using it.
There are no obvious answers to questions such as these, but companies must deal with them on a regular
basis, and a failure to handle the situation properly can lead to huge product liability suits and even to
bankruptcy.
Social Responsibility
Another issue that deserves consideration is social responsibility: Should businesses operate strictly in their
stockholders' best interests, or are firms also responsible for the welfare of their employees, customers, and the
communities in which they operate? Certainly firms have an ethical responsibility to provide a safe working
environment, to avoid polluting the air or water, and to produce safe products. However, socially responsible
actions have costs, and not all businesses would voluntarily incur all such costs. If some firms act in a socially
responsible manner while others do not, then the socially responsible firms will be at a disadvantage in
attracting capital. To illustrate, suppose all firms in a given industry have close to "normal" profits and rates of
return on investment, that is, close to the average for all firms and just sufficient to attract capital. If one
company attempts to exercise social responsibility, it will have to raise prices to cover the added costs. If other
firms in its industry do not follow suit their costs and prices will be lower. The socially responsible firm will not be
able to compete, and it will be forced to abandon its efforts. Thus, any voluntary socially responsible acts that
raise costs will be difficult, if not impossible, in industries that are subject to keen competition.
What about oligopolistic firms with profits above normal levels? Cannot such firms devote resources to social
projects? Undoubtedly they can, and many large, successful firms do engage in community projects, employee
benefit programs, and the like to a greater degree than would appear to be called for by pure profit or wealth
maximization goals. Furthermore, many such firms contribute large sums to charities. Still, publicly owned
firms are constrained by capital market forces.
RATIO ANALYSIS
Ratios are well-known and most widely used tools of financial analysis. A ratio gives the mathematical
relationship between one variable and another. Though the computation of a ratio involves only a simple
arithmetic operation, its interpretation is a difficult exercise. The analysis of a ratio can disclose relationships as
well as bases of comparison that reveal conditions and trends that cannot be detected by going through the
individual components of the ratio. The usefulness of ratios ultimately depends on their intelligent and skillful
interpretation.
Ratios are used by different people for various purposes. Ratio analysis mainly helps in valuing the firm in
quantitative terms. Two groups of people are interested in the valuation of the firm: creditors and
shareholders. Creditors are further divided into short-term creditors and long-term creditors.
Short-term creditors hold obligations that will soon mature and they are concerned with the firm's ability to pay
its bills promptly. In the short run, the amount of liquid assets determines the ability to clear off current liabilities.
These persons are interested in liquidity. Long-term creditors hold bonds or mortgages against the firm and are
interested in current payments of interest and eventual repayment of principal. The firm must be sufficiently
liquid in the short-term and have adequate profits for the long-term. These persons examine liquidity and
profitability.
In addition to liquidity and profitability, the owners of the firm (shareholders) are concerned about the policies of
the firm that affect the market price of the firm's stock. Without liquidity, the firm cannot pay cash dividends.
Without profits, the firm would not be able to declare dividends. With poor policies, the common stock would
trade at low prices in the market.
Considering the above categories of users, financial ratios fall into three groups:
• Liquidity ratios
• Profitability or efficiency ratios
• Ownership ratios
LIQUIDITY RATIOS
Liquidity implies a firm's ability to pay its debts in the short run. This ability can be measured by the use of
liquidity ratios. Short-term liquidity involves the relationship between current assets and current liabilities. If a
firm has sufficient net working capital (excess of current assets over current liabilities) it is assumed to have
enough liquidity. The current ratio and the quick ratio are the two ratios which directly measure liquidity. The
ratios like receivables turnover ratios and inventory turnover ratios indirectly measure the liquidity.
Current Ratio
The current ratio is defined as:
Current Ratio = Current Assets/Current Liabilities
Current assets include cash, marketable securities, debtors, inventories, loans and advances, and pre-paid
expenses. Current liabilities include loans and advances taken, trade creditors, accrued expenses, and
provisions.
From the balance sheet data given in table 6.1 for the year 5, the current ratio for the year 5 can be calculated
as:
Current ratio = 121.1/36.6 = 3.31
As the current ratio measures the ability of the enterprise to meet its current obligations, a current ratio of
3.31:1 implies that the firm has current assets which are 3.31 times the current liabilities. A current ratio of 3.31
is considered healthy when compared with the normal standard of 2:1.
In the operating cycle of the firm current assets are converted into cash to provide funds for the payment of
current liabilities. So, the higher the current ratio, the higher the short-term liquidity. But in interpreting the current ratio,
care should be taken in looking into the composition of current assets. A firm which has a large amount of cash
and accounts receivable is more liquid than a firm with a high amount of inventories in its current assets, though
both the firms may have the same current ratio. To overcome this a more stringent form of liquidity ratio referred
to as quick ratio can be calculated.
Quick Ratio
The quick ratio is a more stringent measure of liquidity because inventories, which are the least liquid of current
assets, are excluded from the ratio:
Quick Ratio = (Current Assets - Inventories)/Current Liabilities
Inventories have to go through a two-step process of first being sold and converted into receivables and then
being collected. The quick ratio is so named because it gives the ability of the firm to pay its liabilities without
relying on the sale and recovery of its inventories.
From the above figures, we can infer that, as the proportion of inventories in total current assets is 38.23%, the
liquidity measure of the firm falls from a current ratio of 3.31 to a quick ratio of 2.04. Though there is no absolute
standard with which the ratio can be compared, ratios are normally compared with industry figures in the
absence of predetermined standards. In the above case, the quick ratio for the industry (dyes and pigments) is
2.26. As the quick ratio is below the industry average, we can conclude that the liquidity position is below
average, though the current ratio gives a different picture.
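A minimal sketch of the two liquidity ratios, using the year-5 totals quoted above, is given below; the inventory figure is backed out from the stated 38.23% share of current assets rather than taken from the balance sheet itself.

```python
# Current and quick ratios from summary figures (units as in the text).
current_assets = 121.1
current_liabilities = 36.6
inventories = 0.3823 * current_assets              # about 46.3, implied by the text

current_ratio = current_assets / current_liabilities
quick_ratio = (current_assets - inventories) / current_liabilities
print(round(current_ratio, 2), round(quick_ratio, 2))   # 3.31 and about 2.04
```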
The current ratio is a static or stock concept of what resources are available at a given moment in time to meet
the obligations at that moment. The ratio has limitations in the following aspects:
1. Measuring and predicting the future fund flows.
2. Measuring the adequacy of future fund inflows in relation to outflows.
The existing pool of net funds does not have a logical or causative relationship to the future funds that will flow
through it. Yet it is the future flows that are the subject of our greatest interest in the assessment of liquidity.
These flows depend importantly on elements not included in the ratio, such as sales, cash costs and expenses,
profits, and changes in business conditions. This concept will be clear when the study of funds flow analysis is
done.
Bank Finance to Working Capital Gap Ratio
Bank Finance to Working Capital Gap Ratio = Short-term bank borrowings/Working capital gap
where the working capital gap is equal to current assets less current liabilities other than bank borrowings.
This ratio shows the degree of the firm's reliance on short-term bank finance for financing the working capital
gap.
Turnover Ratios
Receivables turnover ratios and inventory turnover ratios measure the liquidity of a firm in an indirect way. Here
the measure of liquidity is concerned with the speed with which inventory is converted into sales and accounts
receivables converted into cash. The turnover ratios thus give the speed of conversion of current assets (liquidity)
into cash.
Two ratios are used to measure the liquidity of a firm's accounts receivables. They are:
• Receivables (debtors) turnover ratio = Net credit sales/Average accounts receivable
• Average collection period = 360/Receivables turnover ratio
The average accounts receivable is obtained by adding the beginning receivables of the period and the ending
receivables, and dividing the sum by two. The sales figure in the numerator is only credit sales, because cash
sales do not give rise to receivables. As the publicly available information on the firm may not disclose the
credit sales details, an analyst in the external environment has to assume that cash sales are insignificant.
Normally the receivables ratios are useful for internal analysis.
The higher the receivables turnover ratio, the greater the liquidity of the firm. However, care should be taken to
check whether a high receivables turnover ratio merely reflects a very strict credit policy followed by the firm.
From the accounts receivables position of Rainbow-chem Industries for the two years, the average receivables
turnover ratio works out to 5.99 (approximately 6).
The turnover ratio gives how many times, on an average, the receivables are generated and collected during the
year. In our case, the average receivables turnover ratio of 6 indicates that, on an average, receivables are
turned over 6 times during the year. When we compare this with the industry average of 5.16 times, we can say
that the firm's liquidity of accounts receivables is about 16.28% better than that of the industry.
One can get a sense of the speed of collections from receivables turnover ratio and it is valuable for
comparison purposes, but we cannot directly compare it with the terms of trade usually given by the firm. For
example, the firm may be having a policy of giving certain percent of discount if the debtor pays in certain
period of time. Such comparison is best made by converting the turnover into days of sales tied up in
receivables.
The ratio that gives the above comparison is average collection period, which is defined as the number of days
it takes to collect accounts receivable. It can be obtained by dividing 360 by the average receivables turnover
ratio calculated above. That is,
For Rainbow-chem Industries, assuming the receivables consist only of sundry debtors, the average collection
period is equal to 60 days (360/6). If the firm has a credit policy of giving substantial discounts for payments
made within 30 days, the average debtor is not able to avail of the discounts. If we compare the above with the
industry figure (i.e., 360/5.16 = 69.76 days), the firm's collection period is better than the industry average.
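A sketch of the receivables turnover and average collection period calculations is given below. The credit sales and receivables figures are purely illustrative assumptions (the Rainbow-chem data is not reproduced here), chosen to give a turnover of about 6.

```python
# Receivables turnover and average collection period (figures assumed).
net_credit_sales = 600.0                            # assumed
receivables_open, receivables_close = 95.0, 105.0   # assumed

average_receivables = (receivables_open + receivables_close) / 2
turnover = net_credit_sales / average_receivables
collection_period = 360 / turnover
print(round(turnover, 2), "times;", round(collection_period, 1), "days")   # 6.0 times; 60.0 days
```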
Evaluation
Accounts receivable turnover rates or collection periods can be compared to industry averages or to the credit
terms granted by the firm to find out whether customers are paying on time. If, for example, the credit terms
specify an average collection period of 30 days and the realized average collection period is 60 days, it could
reflect the following:
The first conclusion requires remedial managerial action, while the second and third conclusions convey the
quality and liquidity of the accounts receivables.
Inventory Turnover
The liquidity of a firm's inventory may be calculated by dividing the cost of goods sold by the firm's inventory.
The inventory turnover, or stock turnover, measures how fast the inventory is moving through the firm and
generating sales. Inventory turnover can be defined as:
Inventory Turnover = Cost of goods sold/Average inventory
The higher the ratio, the greater the efficiency of inventory management. The importance of inventory turnover
can also be looked at from a different point of view, i.e., it helps the analyst measure the adequacy of goods
available to sell in comparison with the actual sales orders. Two problems can show up in this comparison:
1. Running out of stock due to low inventory (high turnover), which may indicate future shortages.
2. Excessive carrying charges because of high inventory (low turnover).
One has to strike a careful balance between running out of goods to sell and investing in excessive inventory;
an extreme in either direction will show up as a very high or very low ratio, which may be an indication of poor
management. The analyst
should keep in mind that high and low turnovers are relative in nature. The current turnover must be compared
to previous periods or to some industry norms before it is designated as high, low, or normal. The nature of the
business should also be considered in analyzing the appropriateness of the size and turnover of the inventory.
For example, a manufacturing firm which has to import its key raw materials is justified in keeping a high
inventory of raw materials if it finds that its base currency has been depreciating consistently against the
exporting country's currency. In this case, a high inventory is justified if the increase in the cost of imported raw
materials due to the depreciation is more than the cost of storage.
In the case of Rainbow-chem Industries the inventory turnover could be calculated as follows. First for getting
the cost of goods sold, we have to add all the expenses in the profit and loss account including depreciation
charges and excluding interest expenses. Average inventory can be obtained by adding the closing inventory
(Year 5) and the opening inventory (Year 4) and dividing the sum by two.
The average industry inventory turnover is 4.3. A meaningful conclusion about the inventory turnover can be
arrived at after studying its composition and its change over the years, and comparing the turnover trends with
those of the industry.
Table 9.2
From the above table, it can be noticed that Rainbow-chem's current and quick ratios are just below the industry
averages, while its receivables turnover ratios are somewhat above the industry averages. Inventory turnover is
in a better position compared to the industry, as concluded in the overall analysis of inventory turnover in the
respective section.
In conclusion, the liquidity position of Rainbow-chem Industries Ltd. can be said to be above average.
PROFITABILITY OR EFFICIENCY RATIOS
These measure the efficiency of the firm's activities and its ability to generate profits. There are two types of
profitability ratios.
1. Profits in Relation to Sales: It is important from the profit standpoint that the firm be able to generate
adequate profit on each unit of sales. If sales lack a sufficient margin of profit, it is difficult for the firm to
cover its fixed charges on debt and to earn a profit for shareholders. Two popular ratios in this category are
gross profit margin ratio, and net profit margin ratio.
2. Profits in Relation to Assets: It is also important that profit be compared to the capital invested by owners
and creditors. If the firm cannot produce a satisfactory profit on its asset base, it might be misusing its
assets. These are also referred to as rate of return ratios; some of them are the asset turnover ratio, earning
power and return on equity.
Gross Profit Margin
This ratio shows the profits relative to sales after the direct production costs are deducted. It may be used as an
indicator of the efficiency of the production operation and the relation between production costs and selling
price:
Gross Profit Margin = (Gross Profit/Net Sales) x 100
The GPM for the industry is 10.60%, which is less than the GPM of Rainbow-chem Industries.
This ratio shows the earnings left for shareholders (both equity and preference) as a percentage of net sales. It
measures the overall efficiency of production, administration, selling, financing, pricing, and tax management.
Jointly considered, the gross and net profit margin ratios provide the analyst available tool to identify the
sources of business efficiency/inefficiency.
In comparison with the industry, the net profit margin ratio is just above the average figure. Had it been below the industry average, it would have indicated some mismanagement in areas other than production (as GPM is in line with the industry), and the specific area could then have been identified by investigating each aspect in turn.
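A minimal sketch of the two margin ratios, using the conventional definitions (gross profit over net sales, and profit after tax over net sales) and purely illustrative figures:

    def gross_profit_margin(net_sales, cost_of_goods_sold):
        # Profit left after deducting direct production costs, as a percentage of sales
        return (net_sales - cost_of_goods_sold) / net_sales * 100

    def net_profit_margin(profit_after_tax, net_sales):
        # Earnings left for shareholders as a percentage of net sales
        return profit_after_tax / net_sales * 100

    # Hypothetical figures in Rs. crore
    print(gross_profit_margin(net_sales=200.0, cost_of_goods_sold=178.0))  # 11.0
    print(net_profit_margin(profit_after_tax=9.0, net_sales=200.0))        # 4.5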
Asset Turnover
It highlights the amount of assets that the firm used to generate its total sales. The ability to generate a large
volume of sales on a small asset base is an important part of the firm's profit picture. Idle or improperly used
assets increase the firm's need for costly financing and the expenses for maintenance and upkeep. By
achieving a high asset turnover, a firm reduces costs and increases the eventual profit to its owners.
Industry asset turnover is 1.15. An asset turnover ratio of 1.49 indicates that a firm with an asset base of 1 unit could produce 1.49 units of sales. This is healthy both in absolute terms and in comparison with the industry, whose turnover is only 1.15.
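A sketch of the calculation, assuming the ratio is taken as net sales over average total assets; the asset base of Rs.175.26 crore is the one appearing in the earning power calculation below, and the net sales figure is assumed only so that the ratio works out to the reported 1.49:

    def asset_turnover(net_sales, average_total_assets):
        # Units of sales generated per unit of assets employed
        return net_sales / average_total_assets

    # Net sales figure assumed for illustration (Rs. crore)
    print(round(asset_turnover(net_sales=261.0, average_total_assets=175.26), 2))  # 1.49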
Earning Power
The earning power is a measure of operating business performance which is not affected by interest charges and tax payments. As it does not consider the effects of financial structure and tax rate, it is well suited for inter-firm comparisons.
Rainbow-chem's earning power = 30.82/175.26 = 17.58%
From the table, we can conclude that Rainbow-chem tops the industry with an earning power of 17.58%, whereas the average is only 16.29%. Rainbow-chem is operationally very efficient in comparison to all the players in the
industry.
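Using the figures shown above (EBIT of Rs.30.82 crore on an average total asset base of Rs.175.26 crore), the computation can be sketched as:

    def earning_power(ebit, average_total_assets):
        # Operating return, unaffected by interest charges and tax payments
        return ebit / average_total_assets * 100

    print(earning_power(ebit=30.82, average_total_assets=175.26))  # roughly 17.6, reported as 17.58%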
Return on Equity
The return on equity (ROE) is an important profit indicator to the shareholders of the firm. It is calculated by the formula:
ROE = Net income / Average equity
Net income denotes profit after tax (PAT) and average equity is obtained by averaging the equity of year 5 and year 4. The return on equity measures the profitability of equity funds invested in the firm. It is regarded
as a very important measure because it reflects the productivity of capital employed in the firm. It is influenced
by several factors: earning power, debt-equity ratio, average cost of debt funds, and tax rate.
Return on equity for the industry is 13.18%. The firm's healthiness in this respect can also be easily seen from the difference in returns on equity: Rainbow-chem is giving a 20.68% return to its equity holders, whereas the industry is giving only 13.18%. Thus, we can conclude that Rainbow-chem has employed its resources productively.
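A minimal sketch of the ROE calculation follows; the PAT and average equity figures are hypothetical, chosen only to reproduce a return of roughly 20.68 percent:

    def return_on_equity(profit_after_tax, average_equity):
        # Average equity is the mean of the year 4 and year 5 equity (net worth) figures
        return profit_after_tax / average_equity * 100

    # Hypothetical figures in Rs. crore
    print(round(return_on_equity(profit_after_tax=13.65, average_equity=66.0), 2))  # 20.68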
Rainbow-chem's profitability ratios are summarized in the following table against the industry.
As mentioned in the beginning of this section, profitability is analyzed in two respects, that is, in relation to sales and in relation to assets. The above table conveys that Rainbow-chem Industries is able to generate profits in relation to sales only on an average scale, but in respect of the efficient application of assets it performs well above the average. This indicates that some remedial measures have to be taken on the sales front.
OWNERSHIP RATIOS
Ownership ratios help the stockholder to analyze his present and future investment in a firm. Stockholders (owners) are interested in knowing how the value of their holdings is affected by certain variables. Ownership ratios compare the investment value with factors such as debt, earnings, dividends and the stock's market price. By understanding the liquidity and profitability ratios, one can gain insights into the soundness of the firm's business activities, whereas by analyzing the ownership ratios, the analyst is able to assess the likely future market value of the investment.
Ownership ratios are divided into three main groups. They are:
1. Earnings Ratios
2. Leverage Ratios
3. Dividend Ratios.
1. Earnings Ratios
The earnings ratios are earnings per share (EPS), price-earnings ratio (P/E ratio), and capitalization ratio. From
earnings ratios we can get information on earnings of the firm and their effect on price of common stock. In the
following paragraphs we will discuss the above ratios in detail.
Shareholders are concerned about the earnings of the firm in two ways. One is the availability of funds with the firm to pay their dividends, and the other is the expansion of their interest in the firm through retained earnings. These earnings are expressed on a per share basis, which is, in short, called EPS. EPS is calculated by dividing the net income by the number of shares outstanding. Mathematically:
EPS = Net income / Number of shares outstanding
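As a quick sketch of this computation (the net income and share count below are assumed purely for illustration):

    def earnings_per_share(net_income, shares_outstanding):
        # Net income is the profit after tax available to equity shareholders
        return net_income / shares_outstanding

    # Hypothetical: Rs.13.10 crore of net income on 1 crore shares outstanding
    print(earnings_per_share(net_income=13.10e7, shares_outstanding=1e7))  # 13.1, i.e. Rs.13.10 per share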
A cross-sectional and year-to-year analysis (discussed in detail in later sections) can be very informative to the analyst. As an example, let us take two firms, Atul Products and Rainbow-chem Industries, in the Dyes & Pigm. (large) industry. Assuming the market price of each stock as Rs.50 per share, the earnings trend for the two firms is as follows:
From the above table, it can be easily understood that Rainbow-chem Industries began at a low EPS of Rs.5.24 per share but steadily progressed and nearly tripled its EPS in 5 years, whereas Atul Products started at a high EPS of Rs.12.16 per share and declined to Rs.5.97 per share over the same period. The trends of the two earnings streams appear to forecast a brighter future for Rainbow-chem Industries than for Atul Products. If we go further into the reasons behind this performance of Atul Products, we find that over the years the share capital of Atul Products has increased without a proportionate increase in its net income. We will get an even clearer picture if we compare all the players in the industry.
PRICE-EARNINGS RATIO
The price-earnings ratio (also P/E multiple) is calculated by taking the market price of the stock and dividing it
by earnings per share.
This ratio gives the relationship between the market price of the stock and its earnings by revealing how
earnings affect the market price of the firm's stock. If a stock has a low P/E multiple, for example 3/1, it may be
considered as an undervalued stock. If the ratio is 80/1, it may be viewed as overvalued. It is the most popular
financial ratio in the stock market for secondary market investors. The P/E ratio method is useful as long as the
firm is a viable business entity, and its real value is reflected in its profits.
Table 9.4
The main use of the P/E ratio is that it helps to determine the expected market value of a stock. For example, firm A may have a P/E of 5/1 and firm B a P/E of 9/1. If we assume the average industry P/E as 7/1 and the EPS of the industry and of both firms as Rs.3, we get the following results:
Market value of industry = 7 x 3 = Rs.21
Market value of firm A = 5 x 3 = Rs.15
Market value of firm B = 9 x 3 = Rs.27
The P/E ratio may also be used to calculate the rate of return investors expect before they purchase a stock. The reciprocal of the P/E ratio, i.e. (EPS/market price), gives this return. For example, if a stock has Rs.12 EPS and sells for Rs.100, the marketplace expects a return of 12/100, i.e. 12 percent. This is called the stock's capitalization rate. A 12 percent capitalization rate implies that the firm is required to earn 12 percent on the common stock value. If investors require less than a 12% return, they will pay more for the stock and the capitalization rate will drop.
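The relationships described above can be summarized in a short sketch; the multiples and EPS figures are the ones used in the example, and the capitalization rate is simply the reciprocal of the P/E multiple:

    def expected_market_value(pe_multiple, eps):
        # Expected price of the stock from a P/E multiple and earnings per share
        return pe_multiple * eps

    def capitalization_rate(market_price, eps):
        # Reciprocal of the P/E ratio: the return the market expects, in percent
        return eps / market_price * 100

    print(expected_market_value(7, 3))   # industry: Rs.21
    print(expected_market_value(5, 3))   # firm A:   Rs.15
    print(expected_market_value(9, 3))   # firm B:   Rs.27
    print(capitalization_rate(market_price=100.0, eps=12.0))  # 12.0 percent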
For Rainbow-chem Industries, the capitalization rate is very low because of the very high market price in comparison to its earnings per share.
2. Leverage Ratios
When we extend the analysis to the long-term solvency of a firm we have two types of leverage ratios. They are
structural ratios and coverage ratios. Structural ratios are based on the proportions of debt and equity in the
capital structure of the firm, whereas coverage ratios are derived from the relationships between debt servicing
commitments and sources of funds for meeting these obligations.
Debt-equity Ratio
The debt-equity ratio, which indicates the relative contributions of creditors and owners, can be defined as:
Debt-equity ratio = Debt/Equity
Depending on the type of business and the pattern of cash flows, the components of the debt-equity ratio will vary. Normally the debt component includes all liabilities, including current liabilities, while the equity component consists of net worth and preference capital, counting only those preference shares not redeemable within one year. The ratio of long-term debt (total debt less current liabilities) to equity could also be used; what is important is that consistency is maintained when comparisons are made.
In the above case the debt-equity ratio stood at 1.33, which implies that the debt portion is larger than equity. The debt-equity ratio of the dyes and pigments industry is, on average, 1.424. In the manufacturing sector a debt-equity ratio of 1.5:1 is considered healthy, so by normal standards as well as the industry's, the ratio is within limits. In heavy engineering, petroleum and infrastructure industries such as railways and airways, the ratio may even exceed 3:1, as the capital outlays required are very large.
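A minimal sketch of the ratio, using hypothetical balance sheet figures chosen only to reproduce a ratio of about 1.33:

    def debt_equity_ratio(total_liabilities, net_worth, preference_capital=0.0):
        # Debt includes all liabilities (including current liabilities); equity is
        # net worth plus preference capital not redeemable within one year.
        return total_liabilities / (net_worth + preference_capital)

    # Hypothetical figures in Rs. crore
    print(round(debt_equity_ratio(total_liabilities=100.0, net_worth=75.0), 2))  # 1.33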
In general, the lower the debt-equity ratio, the higher the degree of protection felt by the lenders. One limitation of the ratio is that it is computed on book values; it is sometimes useful to calculate it using market values instead. At the time of mergers and acquisitions or rehabilitation operations, the valuation of equity and debt will be affected by the basis of computation. For example, a sick company whose equity is valued at book value may turn out to be a healthy one if its assets, such as large land holdings carried in its books, are valued at market prices.
The debt-equity ratio indicates the relative proportions of capital contributed by creditors and shareholders. It is used as a screening device in financial analysis. While analyzing the financial condition of a firm, if the debt-equity ratio is less than 0.50, the analyst can move on to other critical areas of analysis. But if the analysis reveals that debt forms a significant part of total capitalization, further investigation is undertaken, which throws light on the firm's financial condition, results of operations and future prospects. In this way, analysis of the debt-equity ratio occupies a very important position in the financial analysis of any firm.
Debt-Asset Ratio
This ratio measures the extent to which borrowed funds support the firm's assets. It is defined as:
Debt-asset ratio = Total debt/Total assets
The denominator in the ratio is the total of all assets as indicated in the balance sheet. The type of assets an organization employs in its operations should determine, to some extent, the sources of funds used to finance them. It is usually held that fixed and other long-term assets should not be financed by means of short-term loans. In fact, the most appropriate source of funds for investment in such assets is equity capital, though a financially very sound organization may go for debt finance.
A debt-asset ratio of 0.57 implies that 57% of the total assets are financed from debt sources. When we compare this with the industry average debt-asset ratio of 0.69, we find that the firm has lower leverage than the industry.
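Continuing the same hypothetical figures used in the debt-equity sketch (Rs.100 crore of debt against total assets of about Rs.175 crore), the calculation is:

    def debt_asset_ratio(total_debt, total_assets):
        # Fraction of the asset base financed from borrowed funds
        return total_debt / total_assets

    # Hypothetical figures in Rs. crore
    print(round(debt_asset_ratio(total_debt=100.0, total_assets=175.26), 2))  # 0.57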
1. To Measure Financial Risk: One measure of the degree of risk resulting from debt financing is provided by
these ratios. If the firm has been increasing the percentage of debt in its capital structure over a period of
time, this may indicate an increase in risk for its long-term finance providers. As the debt content increases, most of the firm's income will go towards servicing the debt and net income will be reduced. This will affect the long-term earnings prospects of the company, as fewer funds are redeployed because of the increased debt servicing burden.
2. To Identify Sources of Funds: The firm finances all its requirements from either debt or equity sources. These ratios show how much of the firm's requirement is met from each source, given the different types of risk involved.
3. To Forecast Borrowing Prospects: If the firm is considering expansion and needs to raise additional
money, the capital structure ratios offer an indication of whether debt funds could be used. If the ratios are
too high, the firm may not be able to borrow.
Coverage Ratios
Coverage ratios give the relationship between the financial charges of a firm and its ability to service them.
Important coverage ratios are interest coverage ratio, fixed charges coverage ratio and debt-service coverage
ratio.
One measure of a firm's ability to handle financial burdens is the interest coverage ratio, also referred to as the
times interest-coverage ratio. This ratio tells us how many times the firm can cover or meet the interest
payments associated with debt.
The greater the interest coverage ratio, the higher the firm's ability to pay its interest expense. An interest coverage ratio of 4 means that the firm's earnings before interest and taxes are four times its interest payments.
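A sketch of the computation; the EBIT figure is the one used earlier, while the interest expense is assumed for illustration:

    def interest_coverage_ratio(ebit, interest_expense):
        # How many times earnings before interest and taxes cover the interest payments
        return ebit / interest_expense

    # Hypothetical interest expense in Rs. crore
    print(round(interest_coverage_ratio(ebit=30.82, interest_expense=7.70), 2))  # about 4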
Interest coverage ratio considers the coverage of interest of pure debt only. Fixed charges coverage ratio
measures debt servicing ability comprehensively because it considers all the interest, principal repayment
obligations, lease payments and preference dividends. This ratio shows how many times the pre-tax operating
income covers all fixed financing charges.
It is defined as:
Fixed charges that are not tax deductible must be tax adjusted. This is done by increasing them by an amount
equivalent to the sum that would be required to obtain an after-tax income sufficient to cover such fixed
charges. In the above ratio, preference-stock dividend requirement is one example of such non-tax deductible
fixed charges. To get the gross amount of preference dividends, it has to be divided by the factor (1 - tax rate).
For Rainbow-chem Industries the fixed charges coverage ratio is calculated for the year 5 as follows:
For Rainbow-chem there are no lease rental payments and preference dividend payments. The loan repayment
has been assumed to be Rs.7.37 crore. The fixed charges coverage ratio of 1.92 indicates that its pre-tax operating income is 1.92 times its total fixed financial obligations.
Normally used by term-lending financial institutions in India, the debt service coverage ratio, which is a post-tax coverage measure, is defined as:
For Rainbow-chem Industries the debt service coverage ratio for the year 1995-96 is:
A DSCR of 1.89 indicates that the firm's post-tax earnings are 1.89 times its total obligations (interest and loan repayment) to the financial institution in that particular year.
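The two coverage measures discussed above can be sketched as follows. The exact formulae used in the chapter's calculations are not reproduced here, so this sketch follows one common variant: non-tax-deductible charges (loan repayment and preference dividend) are grossed up by dividing by (1 - tax rate), and the DSCR is taken as post-tax earnings plus depreciation and interest over interest plus loan repayment. All figures are assumed for illustration only.

    def fixed_charges_coverage(ebit, lease_rentals, interest,
                               loan_repayment, preference_dividend, tax_rate):
        # Pre-tax operating income (plus lease rentals) over all fixed financing
        # charges, with non-tax-deductible charges grossed up to pre-tax terms.
        numerator = ebit + lease_rentals
        denominator = (interest + lease_rentals
                       + (loan_repayment + preference_dividend) / (1 - tax_rate))
        return numerator / denominator

    def debt_service_coverage(pat, depreciation, interest_on_term_loans, loan_repayment):
        # Post-tax coverage of the year's obligations to term-lending institutions
        return ((pat + depreciation + interest_on_term_loans)
                / (interest_on_term_loans + loan_repayment))

    # Hypothetical figures in Rs. crore
    print(round(fixed_charges_coverage(ebit=30.82, lease_rentals=0.0, interest=5.0,
                                       loan_repayment=7.37, preference_dividend=0.0,
                                       tax_rate=0.35), 2))                       # about 1.89
    print(round(debt_service_coverage(pat=13.65, depreciation=6.0,
                                      interest_on_term_loans=5.0,
                                      loan_repayment=7.37), 2))                  # about 1.99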
3. Dividend Ratios
The common stockholder is very much concerned about the firm's policy regarding the payment of cash
dividends. If the firm is not paying enough dividends the stock may not be attractive to those who are interested
in current income from their investment in the company. If the firm is paying excessive dividends, it may not be
retaining adequate funds to finance future growth. So, depending on its shareholders' aspirations, a firm must formulate its dividend policy in a balanced way.
The firm must be both liquid and profitable to pay consistent and adequate dividends. Without profits, the firm will not have sufficient resources to give dividends; without liquidity, it cannot generate the cash to pay them. In these respects, two dividend ratios are important: the dividend pay-out ratio and the dividend yield ratio.
DIVIDEND PAY-OUT RATIO
This is the ratio of dividend per share (DPS) to earnings per share (EPS). It indicates what percentage of total earnings is paid to shareholders. The percentage of the earnings that is not paid out (1 - dividend pay-out) is retained for the firm's future needs. There is no guideline as to what percentage of earnings should be declared as dividends; it varies according to the firm's fund requirements to support its operations. If the firm is in need of funds, it may cut dividends in relation to earnings; on the other hand, if the firm finds that it lacks opportunities to use the funds generated, it might increase the dividends. But in both cases, consistency of dividend payment is important to the shareholders.
DIVIDEND YIELD
This is the ratio of dividend per share (DPS) to the market price of the share.
This ratio gives the current return on the shareholder's investment and is mainly of interest to investors who desire income from their dividends. No dividend yield exists for firms which do not declare dividends.
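Both dividend ratios can be sketched together; the DPS, EPS and market price figures below are hypothetical:

    def dividend_payout_ratio(dps, eps):
        # Fraction of earnings paid out; (1 - payout) is retained for future needs
        return dps / eps

    def dividend_yield(dps, market_price):
        # Current income return, in percent, to an investor buying at the market price
        return dps / market_price * 100

    # Hypothetical figures in rupees per share
    print(round(dividend_payout_ratio(dps=4.0, eps=13.10), 2))   # 0.31
    print(dividend_yield(dps=4.0, market_price=50.0))            # 8.0 percent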
Financial Ethics
Ethical conduct lies at the core of all businesses. Cases of wrongdoing in business are not confined to particular industries and occur almost across the board. However, companies here are entrusted with an extraordinary responsibility: managing other people's money. The basic fiduciary duty this imposes is that financial managers, acting on behalf of the company, serve the interests of clients not as a by-product but as an end in itself; the satisfaction of their self-interest is obtained as a by-product of the proper discharge of that responsibility.
Ethical codes of conduct come into the picture where regulations end. Because regulations are often specific to
events and activities, gaps exist. Ethics fill in the gaps where a specific regulation may not exist. In other words,
ethical behavior seeks to achieve compliance, not just to the letter of law, but also to the spirit of law. This is
particularly important for those activities that fall under the "grey areas" of regulations. While regulations might
aim at avoiding any actual unscrupulous activity or conflicts of interests, ethical conduct ensures avoidance of
even apparent or perceived conflicts of interests.
For the managers, being perceived as trustworthy is essential because transactions involve the exchange of
significant volumes of assets, often without a face-to-face meeting or the traditional handshake. Investors base
their trust on a firm's reputation for financial performance and ethical soundness. When a firm's ethical stance comes into question, or its commitment to the honest conduct of business is in doubt, trust is extremely difficult to rebuild, damaging the firm's reputation and ability to compete. To be engaged in questionable financial dealings is not merely a breach of ethics and the law; it is poor business, and it will surely imperil many client relationships and, along with them, the future profitability of the firm.
The high-voltage, high-velocity financial environment that has emerged in recent years as a result of globalization, convergence, consolidation, and e-business applications not only poses a particular threat to investment banks that manage ethics as a strategic element rather than as a core business principle, but also challenges even the most ethically vigilant and conservative firms. Managers thus need to be ever more diligent in balancing their entrepreneurial impulses with their fiduciary responsibilities and in adhering to strong standards of professionalism that require the interests of clients to be placed ahead of self-interest at all times.
CONCEPT OF ETHICS
Business ethics can be defined as an attempt to ascertain the responsibilities and ethical obligations of business professionals. It is based on broad principles of integrity and fairness and focuses on issues that a
company or an individual can actually influence. Some of these include honesty in financial transactions,
respect for company property, avoidance of conflicts of interest, honoring of contractual obligations, and respect
for the law. The focus here is on the individuals rather than on the issues, and the primary question deals with
how individuals conduct themselves in fulfilling these ethical requirements.
Being ethical would broadly include trying to be a good corporate citizen, trying as an organization to adhere to
certain ethical values like honesty, integrity, fairness, responsible citizenship and accountability, and trying to do
the right thing. While being ethical may, in simple words, imply choosing the good over the bad, the right over the wrong and the fair over the unfair, in an increasingly competitive business the choice is not always a simple one between what is right and what is wrong. It is more often between what is right and what is less right, in other words between shades of grey. The key challenge therefore lies in making ethics a core value and
making values, not rules, the driver of a company's culture.
MANAGERIAL ETHICS
The ethical dilemma faced by the management centers on the continual conflict, or on the continual potential of
that conflict, that exists between the economic and social performance of an organization. The economic function demands that the firm operate profitably in order to survive over the long term, while the social function stresses the obligations of the firm towards its employees, customers, clients, stockholders and the
general public.
The management confronts an ethical dilemma when improvements in economic performance, namely an increase in profits or a decrease in costs, can be made only at the expense of one or more of the groups to whom the firm has some form of obligation. The question for the management therefore lies in finding a balance between social performance and economic performance when faced with an ethical dilemma. Managerial decisions have extended consequences impacting both the organization and its customers. Managers confront many ethical dilemmas while making decisions because, many times, someone to whom the firm has some form of obligation is going to be hurt or harmed in some manner while the company profits.
Managers face ethical dilemmas while taking decisions on aspects like downsizing, marketing policies, or
mergers and acquisitions.
DOWNSIZING
One of the most common ethical dilemmas faced by the management is layoffs or downsizing in the event of a
market downturn or increasing competitive pressures. A sluggish pace of activity in the capital markets, including declines in both the equity and debt markets and in stock prices, prompts investment banks to resort to layoffs in different business segments. For instance, as the wave of consolidation, mergers and acquisitions sweeps through the financial services industry, downsizing in order to avoid duplication of effort has become a common practice. Similarly, shutting down a business segment that has not been profitable also leads to job cuts.
The employees who are forced to leave their jobs feel betrayed and the organization and its leadership may be
thought to be impatient, premature and lacking in integrity. The surviving employees will often share the perceptions of those being forced to leave about the ethics of this decision. They also feel tremendous pressure
when asked to do more work or learn new tasks. The management must therefore demonstrate empathy for all
of those affected by this decision. While being fired is always traumatic, companies can cause extra pain to
employees by bumbling their way through the process. The management must try to soften the blow as much
as possible. When companies are forced to downsize because of economic considerations, it is crucial to keep
in mind the human impact. By granting generous packages to the laid-off employees, or by helping them find new jobs, firms can reduce the emotional impact of the downsizing.
MARKETING STRATEGIES
Marketing strategy is another area where increasing competitive pressures prompt corporations to generate sales at the expense of educating consumers about their products and services. While most commercial advertising is regulated, there continue to be instances where corporations stretch the truth, engage in subtle forms of deception, or make claims about their services that are exaggerations or worse. Instances of such practices have been seen in the online brokerage industry, where the advertising practices and advertising content of online brokers could color the expectations of online investors.
Archie B. Carroll, an eminent researcher in the field of corporate social responsibility, broadly divided the
managements into three types: (i) Moral management (ii) Amoral management (iii) Immoral management.
Moral management strives to follow ethical principles and precepts. Even while fighting hard for business success, it never sacrifices its sense of fairness, justice and ethics. It attaches due weight to profits and financial results, but within the purview of legal and moral compliance.
Amoral management is a middle path between moral and immoral management. Basically, it does not take a stance on issues relating to morality or ethical considerations, nor does it seem to bother about them. When the management is intentionally amoral, it thinks that general ethical standards are not applicable to business. Therefore, it excludes ethical issues from the decision-making process. It cares about the letter of the law, if not the spirit of it. An amoral management intervenes in matters only when the actions of employees lead to external pressures. Immoral management is synonymous with "unethical practices" in business. Such a management not only ignores ethical considerations in business operations, but also actively opposes ethical behavior. It supports extreme pragmatism and displays short-term tendencies.
It is generally agreed that law cannot substitute either ethical standards or corporate governance. However, law
can support the cause by specifying the basic minimum standards to be followed. Over and above the
compliance, strictly confined to the letter of the law, it is left to the firm to follow judicious and ethical practices.
In this context, it is worth noting the following: "The law states minimum standards of conduct. But it does not
and cannot embody the whole duty of man, and mere compliance with the law does not necessarily make a
good citizen or a good company."
Despite the limitations of law in enforcing high ethical standards, it still attempts to govern a few aspects of business and corporate functioning. The following areas can be brought within the purview of law with effective supervision:
i. Disclosure practices
ii. Transparency of operations
iii. Maintaining confidentiality of client information
The company law in any country can also govern the conduct and behavior of the management and its accountability to others within and outside the company. Regulations pertaining to the conduct of meetings
address the issue of management participation and also pave the way for shareholder activism. The limit on the
maximum number of directorships attempts to define a span of attention and devotion for an individual.
Misrepresentations in prospectuses are dealt with under both civil and criminal laws.
TRANSPARENCY OF OPERATIONS
There is a major difference, in terms of transparency of operations, between corporates and partnership firms. This aspect is peculiar to investment banks owing to their historical adherence to the partnership form of business organization. Visionary and ethical managements resorted to corporatization as a means of communicating their commitment to greater transparency, responsibility and accountability; in doing so, investment banks also allowed the market to value them.
The evolution of technology has provided companies with greater access to information. Information technology has enabled the exchanges to broadcast trade and quotation information to all market participants. This gives all market participants equal access to the same market information and removes, to a certain extent, the disparity of information between clients and brokers. The compensation policies in the industry are peculiar and highly subject to multiple counting of the transaction value that can be credited to each employee. It is important to design a compensation determination policy that instills a sense of confidence and fair play among employees. Ethical practice means that "justice should not only be done, but also seem to be done."
ENFORCING ETHICS
Companies face the daunting task of promoting ethical behavior among all their employees, and ethics programs seek to promote awareness of legal and ethical concerns and to encourage ethical behavior among the employees. The key challenge for every financial manager lies in making ethics the core value and making values, not rules, the drivers of a company's culture. This is because in a purely rule-driven culture people may conclude that "what is not forbidden is permitted", which is a dangerous assumption.
Though there can be no hard and fast rules on the tools that would bring in ethical business practices, there can be a set of guidelines acting as general tools and techniques to create an ethical climate. According to Theodore Purcell and James Weber, the management can institutionalize ethical practices in three steps: establishing an appropriate company policy or code of ethical conduct, appointing a formal ethics committee, and imparting education in ethics during management development programs. The tools for enforcing ethics seek to
promote awareness of legal and ethical concerns and to encourage ethical behavior among employees at work.
The tools include:
1. Top Management Commitment: Managers can prove their commitment and dedication by employing the other tools that would inject a sense of ethics into the staff.
2. Code of Ethics: This is a formal document that states an organization's primary values and the ethical rules
it expects employees to follow.
3. Ethics Committees: Codifying ethical practices alone will not suffice unless it is supplemented by an exclusive body for monitoring and steering operations. An ethics committee needs to be constituted with both internal and external directors. This way, the firm can try to institutionalize ethical behavior. The role of the committee becomes significant when the firm faces dilemmas regarding policy matters. The ethics committee organizes regular meetings to discuss ethical issues, communicates the code to all members of the organization, identifies possible violations of the code, enforces the code, rewards ethical behavior and punishes those who violate corporate ethics, reviews and updates the code of ethics, and reports the activities of the committee to the board of directors.
4. Ethics Audits: This involves scrutiny and assessment of activities and their conformance with the
predetermined ethical guidelines. Audits attempt to identify and correct any deviations from the standards
set.
5. Ethics Training: The goal of ethics training is to encourage ethical behavior. This can be conducted both
during the induction of the employee as well as during the periodic employee/management development
programs.
6. Ethics Hotline: An ethics hotline enables employees to deviate from the normal chain of command and reach the top management with their observations, experiences, problems and opinions relating to the ethical validity of any of the firm's activities. For example, a country head may resort to unethical practices by
manipulating the equities market (underlying) to suit the derivatives operations of the firm. An employee
who comes to know of it can contact the region-head or the global corporate headquarters and complain
about the matter. Such a whistle-blower might even go public with the information if he perceives that no remedial action is available inside the firm. The hotline helps in gathering the information from the
whistle-blower, thereby controlling the damage that adverse publicity can cause to the firm.
While a commitment to ethics is among the most valuable assets a firm can possess, it is also among the most
difficult of assets to acquire and maintain, as well as among the easiest to lose. Even a company with a long
tradition of being committed to ethics has no assurance that it will remain so committed. This requires the
managers to develop a strong ethical culture that imbibes and reflects the values, attitudes and beliefs which have the single greatest influence on how the investment bank works. The future growth and success of any
company depends on its commitment to these values, attitudes and beliefs and its ability to instill them in its
employees.