Journal April 2011 Edition
IJSER
Volume 2, Issue 4, April 2011
ISSN 2229-5518
Research
Publication
http://www.ijser.org
http://www.ijser.org/xplore.html
http://www.ijser.org/forum
E-mail: [email protected]
International Journal of Scientific and Engineering Research (IJSER)
Journal Information
SUBSCRIPTIONS
The International Journal of Scientific and Engineering Research (Online at www.ijser.org) is published
monthly by IJSER Publishing, Inc., France/USA/India
Subscription rates:
SERVICES
Advertisements
Advertisement Sales Department, E-mail: [email protected]
COPYRIGHT
All Rights Reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except as described below, without the permission in writing of the Publisher.
Copying of articles is not permitted except for personal and internal use, to the extent permitted by national copyright law, or under the terms of a license issued by the national Reproduction Rights Organization.
Requests for permission for other kinds of copying, such as copying for general distribution, for advertising or promotional purposes, for creating new collective works or for resale, and other enquiries should be addressed to the Publisher.
Statements and opinions expressed in the articles and communications are those of the individual contributors and not the statements and opinions of IJSER Publishing, Inc. We assume no responsibility or liability for any damage or injury to persons or property arising out of the use of any materials, instructions, methods or ideas contained herein. We expressly disclaim any implied warranties of merchantability or fitness for a particular purpose. If expert assistance is required, the services of a competent professional person should be sought.
PRODUCTION INFORMATION
For manuscripts that have been accepted for publication, please contact:
E-mail: [email protected]
Contents
22. Impact Fatigue Behaviour of Fully Dense Alumina Ceramics with Different Grain Sizes
Manoj Kumar Barai, Jagabandhu Shit, Abhijit Chanda, Manoj Kr Mitra, pp. 129-133
34. An Innovative Quality of Service (QoS) Based Service Selection for Service Orchestration in SOA
K. Vivekanandan, S. Neelavathi, pp. 211-218
41. Peripheral Interface Controller Based Display Unit of a Remote Display System
May Thwe Oo, pp. 255-263
49. An Adaptive and Efficient XML Parser Tool for Domain Specific Languages
W. Jai Singh, S. Nithya Bala, pp. 304-307
International Journal of Scientific & Engineering Research Volume 2, Issue 4, April-2011
ISSN 2229-5518
Abstract - Many research works address robotic devices that assist movement training following neurologic injuries, such as stroke, that affect the upper limbs. Conventional neurorehabilitation appears to have little impact on spontaneous biological recovery; to this end, robotic neurorehabilitation has the potential for a greater impact. Clinical evidence regarding the relative effectiveness of different types of robotic therapy controllers is limited, but there is initial evidence that some control strategies are more effective than others. This paper presents a haptic training method based on a kinesthetic guidance scheme with a nonlinear control law (proxy-based second-order sliding mode control) with the human in the loop, whose purpose is to guide a human user's movement so as to move a tool (a pen in this case) along a predetermined smooth trajectory with finite-time tracking; the task is a real maze. The path planning can compensate for the inertial dynamics of changes in direction, minimizing the consumed energy and increasing the manipulability of the haptic device with the human in the loop. The PHANToM haptic device is used as the experimental platform, and the experimental results demonstrate the feasibility of the approach.
Index Terms - Diagnosis and rehabilitation, haptic guidance, sliding mode control, path planning, haptic interface, passivity and control design.
1 INTRODUCTION
where $m_i \in \mathbb{R}$ is the mass of the $i$-th link and $v_i \in \mathbb{R}^n$ is the velocity of the $i$-th link. The potential energy is obtained by

$u = \sum_{i=1}^{n} m_i h_i g$   (4)

where $h_i \in \mathbb{R}$ is the height of the $i$-th link with respect to its center of mass and $g$ is the gravitational constant. Differentiating (2) yields the power balance $\dot{q}^T D(q)\ddot{q} + \tfrac{1}{2}\dot{q}^T \dot{D}(q)\dot{q} + \dot{q}^T G(q)$, where $D(q)$ is the inertia matrix and $G(q)$ the gravity vector.

With Lemma 1 and the dynamic properties, we can proceed to the control law design. In this paper the control technique is obtained via Lyapunov theory and the dynamical and passivity properties.

4 DESIGN OF A NONLINEAR JOINT CONTROL BASED ON PASSIVITY
Given a desired trajectory $q_d(t)$, a Lyapunov function can be considered as follows [14]:

$V(s) = \tfrac{1}{2}\, s^T D(q)\, s + \int s^T K_L \tanh(s)\, ds + c$   (10)

whose time derivative along the closed-loop trajectories takes the form

$\dot{V}(s) = s^T Y_r(q, \dot{q}, q_r, \dot{q}_r) - s^T K_d s - s^T K_L \tanh(s)$   (18)

TABLE 1
INITIAL AND CONVERGENCE TIME OF THE TASKS PRESENTED IN FIG. 4

Here $D(q)$ corresponds to the inertia matrix of the haptic device. At the waypoints P0, P1, P2, P3, P4, P5, P6, P7, P8, P9, P10, P11, P12 and P13 the inertia is minimal due to the benefit of the task planning and the design of the control law that allows stable tracking [16]. One of the techniques used to analyze stability is the behavior of the Lyapunov function (10) and its derivative (20).
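For illustration, the following minimal Python sketch evaluates a joint-space control torque built from the same ingredients that appear in the Lyapunov analysis above (a damping term $K_d s$ plus a saturated $K_L \tanh(s)$ term). The gain values, the sliding-variable construction and the function name are illustrative assumptions, not the authors' implementation of the proxy-based second-order sliding mode controller.

```python
import numpy as np

def tracking_torque(q, qdot, qd, qd_dot, Kd, KL, lam):
    """Saturated PD-style joint torque: tau = -Kd*s - KL*tanh(s).

    s = (qdot - qd_dot) + lam*(q - qd) is a sliding-like extended error;
    the -Kd*s and -KL*tanh(s) terms mirror those in the V-dot expression above.
    """
    e = q - qd                      # position error
    edot = qdot - qd_dot            # velocity error
    s = edot + lam * e              # extended error variable
    return -Kd @ s - KL @ np.tanh(s)

# Example with a 3-DOF device (e.g. a PHANToM-class haptic interface)
Kd = np.diag([2.0, 2.0, 2.0])
KL = np.diag([1.5, 1.5, 1.5])
tau = tracking_torque(np.zeros(3), np.zeros(3),
                      np.array([0.1, 0.0, -0.05]), np.zeros(3),
                      Kd, KL, lam=5.0)
print(tau)
```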
6.1 The Experiments
The rehabilitation process consists of the patient performing ten exercises, which have different characteristics under certain parameters. The characteristics of each exercise are as follows:
1. Exercise 0. The patient is asked to follow the trajectory; in this exercise the haptic device actuator torques are zero, so the patient moves the PHANToM end effector voluntarily (see Fig. 6).
Fig. 10. The high disturbances generated by the patient did not destabilize the system.
Abstract - The aim of this research is to study the prevalence of eating disorders amongst female students of Tonekabon University. The community being studied is the female students of Tonekabon University; 300 students were randomly selected and requested to complete the Eating Attitude Test-26 (EAT-26).
1 INTRODUCTION
2 METHOD
This is a descriptive study that aims to objectively describe the features of the desired community. The research took place over a period of 4 months (March 2010 - June 2010). Based on the total population of 6,000 female students, a sample of 300 was randomly selected. Subjects were students who attended classes at the university throughout the week. The team attended classes at different hours on different days and selected a few names from the participants of the class, regardless of their BMI, body shape and weight; afterwards, the subjects completed the questionnaire. The EAT-26 was used as the main method of data collection.
The Eating Attitude Test (EAT-26) has been proposed as an objective, self-report measure of the symptoms of anorexia nervosa. It has been used as a screening instrument for detecting previously undiagnosed cases of anorexia nervosa in populations at high risk of the disorder [21]. Subjects can score between 0 and 78; any score above 20 is taken to indicate an eating disorder. The EAT-26 can only show some symptoms of eating disorder; to classify the type of disorder (AN or BN) a diagnostic interview needs to take place.
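As a concrete illustration of the scoring rule just described (total score 0-78, with scores above 20 flagged), the short sketch below computes the prevalence from a list of EAT-26 totals; the variable names and example data are hypothetical, not the study's data.

```python
# Hypothetical EAT-26 totals for a small sample of respondents (0-78 each).
scores = [12, 25, 8, 31, 19, 22, 5, 44, 17, 9]

flagged = [s for s in scores if s > 20]          # cut-off used in the paper
prevalence = 100.0 * len(flagged) / len(scores)  # per cent above the cut-off
print(f"{len(flagged)} of {len(scores)} flagged ({prevalence:.2f}%)")
```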
3 RESULTS
The results of this study showed that among the 300 cases under review, 61 received a score higher than 20 on the EAT-26 test and were diagnosed with an eating disorder (Fig. 1). Among the three sub-scales, namely Oral Control, Food Habits and Esurience (desire to eat), the most points belonged to the food habits sub-scale (86.88%), oral control was second, and the rest scored high on the esurience sub-scale. The main problem and barrier throughout the research was the subjects' orientation towards the questions on the test, which was resolved by the researcher's explanation in person.

4 DISCUSSION
The purpose of this study is to measure the prevalence of eating disorder in female students of Tonekabon University. According to the results, 20.33% of subjects were identified with an eating disorder. In our country, few studies have been carried out in the field of eating disorders [22]. In a study measuring the prevalence of eating disorders amongst high school students (male and female) in Sari, north of Iran, in the academic year 2002-2003, an abnormal attitude towards eating was observed in 10.5% of the students. This suggests that the frequency of abnormal attitudes toward eating is more or less similar to that
Abstract - The present paper analyzes the impact of leverage on the investment decisions of Indian pharmaceutical companies during the period from 1998 to 2009. To measure the impact of leverage on firms' investment decisions, pooled regression, random effect and fixed effect models are used, taking leverage, sales, cash flow, return on assets, Tobin's Q, liquidity and retained earnings as independent variables and investment as the dependent variable. In addition, we demarcate between three types of firms: (i) small firms, (ii) medium firms and (iii) large firms. The results reveal a significant positive relationship between leverage and investment, while we find a negative relationship between leverage and investment for medium firms and a positive relationship between leverage and investment in large firms; our econometric results reveal an insignificant relationship between the two variables for medium and large firms.
Index Terms - Investment, Tobin's Q, Cash flow, Liquidity, ROA, Size, Retained Earnings.
industrial sectors. It requires capital for financing the firm's assets. Among the different sources of funds, debt is a cheaper source because of its lowest cost of capital. The investment decision of the firm is one of three categories of decisions that can be adopted by the firm's management, besides the financing decision and the net profit allocation decision.
As far as the hierarchy of financing sources, as it exists in the economic literature, is concerned, cash flow is the cheapest financing source, followed by debt and, in the end, by the issue of new shares. Debt can be cheaper than the issue of new shares because the loan contract can be written so as to minimize the consequences of the information problem. Given the fact that the degree of information asymmetry and the agency costs depend on the peculiarities of every firm, such firms are more sensitive to financial factors than others. The debt limit of the firms is determined accordingly. Second, several emerging economies, even until the late 1980s, suffered from financial repression, with negative real rates of interest as well as high levels of statutory pre-emption. This could have meant a restricted play of market forces for resource allocation, so that the observed interface between the capital structure of firms and finance constraints could have been largely constraint-driven and less illuminating.

THE BACKGROUND OF THE STUDY
Several authors have studied the impact of financial leverage on investment. They reached
conflicting conclusions using various approaches. One strand argues that the investment policy of a firm should be based only on those factors that would increase the profitability, cash flow or net worth of the firm. Another argues that highly levered firms are less likely to exploit valuable growth opportunities than firms with low levels of leverage, since the benefits of new investment accrue to the bondholders rather than the shareholders and reduce the net present value of investment opportunities. Firms with high debt commitments invest less, no matter what their corrective measures; ultimately, leverage is lowered if future growth opportunities are recognized sufficiently early. How does debt affect investment? The issuance of debt commits a firm to pay cash as interest and principal, and managers are forced to service such commitments. Too much debt is also not considered to be good, as it may lead to financial distress and agency problems, forcing firms to reduce investment and sell off unprofitable businesses.
Studies of leverage and the rate of growth of companies point out that for companies with fewer investment opportunities (i.e. companies with a low Tobin's Q) there is a negative correlation between the debt ratio and investment, while the estimation results from other studies do not find a negative correlation between the debt ratio and growth. They also point out that diversified companies make larger investments (net cost of capital/sales) than focused counterparts, that the debt ratio influences management decisions on investment, and that diversified companies can overcome debt ratios through the distribution of liabilities by corporate managers. A study of firms listed on the stock exchange of Mauritius for the years 1990-04 found that leverage has a significant negative effect on investment.
In the estimated model, $I_{i,t}$ represents the net investment of firm i during period t; $K_{i,t-1}$ is the net fixed assets; $CF_{i,t}$ is the cash flow of firm i at time t; $Q_{i,t-1}$ is the Tobin's Q; $LEV_{i,t-1}$ represents the leverage; $SALE_{i,t-1}$ stands for the net sales of firm i; $ROA_{i,t-1}$ is the profitability of firm i; $LIQ_{i,t-1}$ represents the liquidity of firm i; and $RETES_{i,t-1}$ is the retained earnings.
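The estimated equation itself is not preserved in the extracted text; a plausible form, inferred from the variable list above and stated here only as an assumption rather than the authors' exact specification, is

$\dfrac{I_{i,t}}{K_{i,t-1}} = \beta_0 + \beta_1 \dfrac{CF_{i,t}}{K_{i,t-1}} + \beta_2 Q_{i,t-1} + \beta_3 LEV_{i,t-1} + \beta_4 SALE_{i,t-1} + \beta_5 ROA_{i,t-1} + \beta_6 LIQ_{i,t-1} + \beta_7 RETES_{i,t-1} + u_i + \varepsilon_{i,t}$

where $u_i$ is the firm-specific (individual) effect discussed in the LM and Hausman tests below.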
TOBIN'S Q: we use Perfect and Wiles' (1994) simple Q ((market value + liabilities) / book value of assets) as a proxy for growth opportunities, defined as the market value of the total assets of the firm divided by the book value of assets. The market value of the firm is the sum of total liabilities, the value of equity shares and the estimated value of preference shares; the market value of preference shares is calculated as the preference dividend multiplied by ten. Tobin's Q measures growth opportunities: it compares the value of a company as given by the financial market with the value of the company's assets, so Tobin's Q would be 1.0 if the two coincide, and a Tobin's Q greater than 1.0 indicates that the firm is valued above its assets.
SALE: sales are measured as net sales deflated by net fixed assets, which measures the efficiency with which the net fixed assets are used. A high ratio indicates that investment in assets contributes to profitability, and we can proxy high profitability with high growth firms.
CASH FLOW: cash flow is measured as the total of earnings before extraordinary items plus depreciation and is an important determinant of growth opportunities. If firms have enough cash inflows, these can be utilized in investing activities; it also provides evidence that investment is related to the availability of internal funds. Cash flow may be termed as the amount of money in excess of that needed to finance all positive net present value projects. The purpose of allocating money to projects is to generate a cash inflow in the future significantly greater than the amount invested; that is, the objective of investment is to create shareholders' wealth. In order to eliminate any size effect, we normalize this measure by the book value of assets; this method was utilized by Lehn and Poulson (1989) and Lang et al. (1991).
LIQUIDITY: liquidity is the ability of firms to meet their current obligations. Firms should ensure that they do not suffer from a lack of liquidity, as this may result in a state of financial distress ultimately leading to bankruptcy. A lack of liquidity can lead to a struggle in terms of current obligations, which can affect a firm's credit worthiness. Bernanke and Gertler (1990) argued that both the quantity of investment spending and its expected return will
be sensitive to the credit worthiness of borrowers. That leads us to say that the investment decisions of firms are sensitive to current liquidity. However, firms with high liquidity give the signal that funds are tied up in current assets.
This section portrays the results from the regression estimation. We present results for small, medium and large sized firms, classified on the basis of size: the small-size cut-off is obtained by subtracting the standard deviation of total assets from the mean, and the large-size cut-off by adding the standard deviation to the mean value of assets; the medium sized firms are those which do not belong to either category. The econometric results for the sample firms show the pooled estimates, random effect estimates and fixed effect estimates, with t values shown in parentheses. Two statistics are used to identify which methodology is appropriate to establish the relationship between leverage and investment. First, we compare the pooled estimates and the random effect estimates: the Lagrangian Multiplier (LM) test is performed, and a large chi-square value indicates a low p-value, in which case we reject that the pooled estimate is appropriate. Second, to compare the random effect estimate with the fixed effect estimate, the Hausman test is performed: if the model is correctly specified and the individual effects are uncorrelated with the regressors, the two estimates should not be statistically different.
The results show that leverage has a positive impact on investment at the 5% significance level, and the impacts of the other variables, including retained earnings, have the expected signs. To identify which empirical methodology (pooled, random effect or fixed effect regression) is most suitable, we perform two statistical tests, the first being the LM test of the random effect model. The null hypothesis is that the individual effect ui is 0. The chi-square value is 25.74; thus the null hypothesis is rejected at the 1% level of significance. The results suggest that the rho effect is not zero and the pooled regression is not suitable in this case. The regression coefficient of leverage for small firms from the pooled regression is equal to 1.3451 and is not significant. The regression coefficients of leverage estimated from
the random effect and fixed effect models are 3.4868 and 1.8200, respectively. The regression coefficients from the pooled regression are much smaller than those estimated from the random effect and fixed effect models, suggesting that ignoring individual firm effects leads to an underestimation of the impact of leverage on investment.
We conduct the Hausman specification test to compare the fixed effect and the random effect models. If the model is correctly specified and the individual effects are uncorrelated with the independent variables, the two estimates should not differ; the statistics show that the null hypothesis is rejected at the 1% significance level. The results suggest that the fixed effect model is most appropriate in estimating the investment equation.
Leverage is statistically significant at the 1% and 5% levels of significance and is positively related to
investment. A 1 unit increase in leverage leads to an increase of 3.4568 units in investment; this implies that an increase in leverage in small firms also increases the investment of those firms, because they do not have an adequate asset cushion for financing their projects. Thus, small sized firms tend to be more dependent on debt as a source of finance for their projects.
The table also reveals that small firms are under-utilizing their fixed assets, which would affect their ability to generate sales volume; the coefficient value is -0.001 and is not statistically significant. The coefficient value of ROA is 0.0003 and is not statistically significant, but it is positively related with investment, indicating that the operating efficiency of the employed funds over investment is positive; a higher ROA also attracts funds from investors for expansion and growth. Cash flow and retained earnings are positively related with investment but not statistically significant, with coefficient values of 0.2264 and 0.0020 respectively. This implies that the issuance of debt engages the firm to pay cash as interest and principal out of available free cash flow and internal funds. High leverage could otherwise lead to a loss of credit worthiness and of creditors' confidence, but this is not the case as shown by the results in the above table. From the table it is also observed that Tobin's Q is negatively related with investment and not statistically significant.

RESULT OF MEDIUM FIRMS
Table No. 2 reveals the regression results of medium firms. The calculated F value is greater than the table value; hence the selected variables are significantly associated with investment during the period. Further, it shows that leverage has no impact on investment in medium firms, but it has a negative relationship with investment during the period of study. In order to identify which methodology (pooled, random effect or fixed effect regression) is most suitable, we perform two statistical tests, the first being the LM test of the random effect model. The null hypothesis is that the individual effect ui is 0. The chi-square value is 4.15; thus the null hypothesis is rejected at the 1% level of significance. The results suggest that the rho effect is zero and the pooled regression is suitable in this case. The regression coefficient of leverage for medium firms from the pooled regression is equal to 1.6543 and is not significant.
We conduct the Hausman specification test to compare the fixed effect and random effect models. If the model is correctly specified and the individual effects are uncorrelated with the independent variables, the fixed effect and random effect estimates should not be statistically different. Further, these statistics indicate that the fixed effect model is most appropriate in estimating the investment equation, because the R2 value of the fixed effect model is greater than that of the random effect model.
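The pooled versus random effect versus fixed effect comparison and the Hausman test described above can be sketched as follows in Python; the DataFrame name, file name, column names and index layout are illustrative assumptions, not the authors' actual code or data, and the Breusch-Pagan LM test is omitted for brevity.

```python
import numpy as np
import pandas as pd
from linearmodels.panel import PooledOLS, RandomEffects, PanelOLS

# Assumed firm-year panel with the paper's variables already constructed
# (investment scaled by lagged net fixed assets, lagged regressors, etc.).
df = pd.read_csv("pharma_panel.csv").set_index(["firm", "year"])  # hypothetical file
y = df["INV"]
X = df[["LEV", "SALE", "CF", "ROA", "TOBINQ", "LIQ", "RETE"]].assign(const=1.0)

pooled = PooledOLS(y, X).fit()
re = RandomEffects(y, X).fit()
fe = PanelOLS(y, X, entity_effects=True).fit()

def hausman(fe_res, re_res):
    """Hausman statistic comparing fixed and random effect estimates."""
    common = [c for c in fe_res.params.index if c != "const"]
    d = fe_res.params[common] - re_res.params[common]
    v = fe_res.cov.loc[common, common] - re_res.cov.loc[common, common]
    return float(d.T @ np.linalg.inv(v) @ d)   # chi-square with len(common) dof

print(pooled.params["LEV"], re.params["LEV"], fe.params["LEV"])
print("Hausman chi-square:", hausman(fe, re))
```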
Leverage is not statistically significant at the 1% and 5% levels of significance and is negatively related with investment. This implies that leverage has no impact on medium firms' investment decisions, because of inadequate cash flow and ploughing back of funds; hence medium sized firms make investment decisions based on internal financial resources. The table further reveals that medium firms are under-utilizing their fixed assets, which would affect their ability to generate sales volume; the coefficient value is -0.0016 and is not statistically significant. The coefficient value of ROA is -0.0012 and is not statistically significant, but it is negatively related with investment. Cash flow and retained earnings are positively associated with investment and are statistically significant at the 1% and 5% levels of significance, indicating that the higher the cash flow and retained funds, the higher the investment. Liquidity and Tobin's Q are not statistically significant with respect to investment; Tobin's Q also reflects firm value and hence may be affected by leverage, but these proxies do not influence investment because leverage has no impact on investment in medium firms.

RESULT OF LARGE FIRMS
Table No. 3 shows the regression results of large firms. The calculated F value is greater than the table value; hence the selected variables are significantly associated with investment during the period of study. Further, it shows that leverage has no impact on investment in large firms, but it has a positive relationship with investment during the period of study. In order to identify which methodology (pooled, random effect or fixed effect regression model) is most suitable, we perform two statistical tests, the first being the LM test of the random effect model. The null hypothesis is that the individual effect ui is 0. The chi-square value is 2.26; thus the null hypothesis is rejected at the 1% level of significance. The results suggest that the rho effect is not zero and the pooled regression is not suitable in this case. The regression coefficient of leverage for large firms from the pooled regression is equal to 23.7516 and is not significant. The regression coefficients of leverage from the random effect and fixed effect models are 9.5758 and 23.7516 respectively. The regression coefficient from the pooled regression model is greater than those estimated from the random and fixed effect models, suggesting that the individual effect of a firm affects the estimation of the impact of leverage on investment.
We conduct the Hausman specification test to compare the fixed effect and random effect models. If the model is correctly specified and the individual effects are uncorrelated with the independent variables, the fixed and random effect estimates should not be statistically different. Further, these statistics indicate that the fixed effect model is most appropriate in estimating the investment equation, because the R2 value of the fixed effect model is greater than that of the random effect model.
The table also reveals that the coefficient values of variables like sales, ROA and Tobin's Q are negatively related with investment and are not significant for the large firms. Cash flow and retained earnings are positively associated with investment in large firms and are statistically significant; this is because of the heavy
demand for their products in the national and international market. Liquidity is negatively related with investment and is not statistically significant. We conclude that leverage does not influence the investment decisions of large sized pharmaceutical firms in India.
The study covers the period from 1998 to 2009. Prior theoretical work posits that financial leverage can have either a positive or a negative impact on the value of the firm because of its influence on corporate investment decisions. The investigation is motivated by the theoretical work of Myers (1977), Jensen (1986) and Stulz (1988, 1990) and by the analytical work of McConnell and Servaes (1990). We examined whether financing considerations affect firms' investment decisions. We found that leverage is positively related to the level of investment and that this positive effect is significantly stronger for small firms, with a negative impact for medium firms and a positive impact for large firms, although these are not statistically significant. Further, we infer that the Indian pharmaceutical industry has heavy market demand for its products, so that the industry has enormous cash flow and ploughing back of funds; hence we conclude that leverage has no impact on investment in the pharmaceutical industry in India. Cash flow and retained earnings play a significant role in determining investment decisions due to changes in the monetary policy of the country. Cash flow affects investment decisions because of the imperfections of the capital market and because internal financing is cheaper than external financing; these financing sources are far more important for small and highly leveraged firms. Our results support Hite (1977), who found that leverage and investment are positively associated, given the level of financing, if an investment increase would lower financial risk and hence the cost of bond financing.
REFERENCES
1. Aivazian, V.A. and Callen, J.L., 1980, Corporate leverage and growth: the game theoretic issues, Journal of Financial Economics 8, 379-399.
2. Breusch, T. and Pagan, A., 1980, The Lagrange multiplier test and its applications to model specification in econometrics, Review of Economic Studies 47, 239-253.
Economics of Information and Uncertainty, University of Chicago Press, Chicago, pp. 107-140.
3. Cantor, Richard, 1990, Effects of leverage on corporate investment and hiring decisions, Federal Reserve Bank of New York Quarterly Review, pp. 31-41.
4. Hausman, J.A., 1978, Specification tests in econometrics, Econometrica 46, 1251-1271.
5. Himmelberg, C.P., Hubbard, R.G. and Palia, D., 1999, Understanding the determinants of managerial ownership and the link between ownership and performance, Journal of Financial Economics 53, 353-384.
6. Jensen, Michael C., 1986, Agency costs of free cash flow, corporate finance and takeovers, American Economic Review 76, 323-329.
7. Jensen, M.C., 1986, Agency cost of free cash flow, corporate finance, and take-overs, American Economic Review 76, 323-329.
8. Johnson, Shane A., 2003, Debt maturity and the effects of growth opportunities and liquidity risk on leverage, Review of Financial Studies 16, 209-236.
9. Kopcke and Howrey, 1994, A panel study of investment: sales, cash flow, the cost of capital, and leverage, New England Economic Review, Jan/Feb, pp. 9-30.
10. Lang, L.E., Ofek, E. and Stulz, R., 1996, Leverage, investment, and firm growth, Journal of Financial Economics 40, 3-29.
11. McConnell, John J. and Servaes, H., 1995, Equity ownership and the two faces of debt, Journal of Financial Economics 39, 131-157.
12. Modigliani, Franco and Miller, Merton H., 1958, The cost of capital, corporation finance, and the theory of investment, American Economic Review 48, 261-297.
13. Modigliani, Franco and Miller, Merton H., 1963, Corporate income taxes and the cost of capital: a correction, American Economic Review 53, 433-443.
14. Modigliani, F. and Miller, M.H., 1958, The cost of capital, corporation finance, and the theory of investment, American Economic Review 48, 261-297.
Nomenclature
A Surface area, m2
FR Filling ratio
I Electric current, Amp
k thermal conductivity, W/m.K
N Number of thermocouples
Q Input heat rate, W
q Heat flux, W/m2
R Total thermal resistance of heat pipe, K/W
T Temperature, K
V Applied voltage, Volt
X Horizontal coordinate parallel to the test section, mm
Greek Symbols
φ Volume fraction of nanoparticles, %
ρ Density, kg/m3
μ Dynamic viscosity, N.s/m2
Subscripts
c Condenser
e Evaporator
ef Effective
l Liquid
m Base fluid
n Nanofluid
p Particles
water Pure water
Dimensionless Numbers
RR Reduction factor in thermal resistance
Kq Dimensionless heat transfer rate, $K_q = k_{ef} L_e \Delta T / Q$
Pr Prandtl number, $Pr = \mu_{ef} Cp_{ef} / k_{ef}$
INTRODUCTION
To solve the growing problem of heat generation by electronic equipment, two-phase change devices such as heat pipe and thermosyphon cooling systems are now used in the electronics industry. Heat pipes are passive devices that transport heat from a heat source to a heat sink over relatively long distances via the latent heat of vaporization of a working fluid. The heat pipe generally consists of three sections: evaporator, adiabatic section and condenser. In the evaporator, the working fluid evaporates as it absorbs an amount of heat equivalent to the latent heat of vaporization. The working fluid vapor condenses in the condenser and then returns back to the evaporator. Nanofluids, produced by suspending nano-particles with average sizes below 100 nm in traditional heat transfer fluids such as water and ethylene glycol, provide new working fluids that can be used in heat pipes. A very small amount of guest nano-particles, when uniformly and stably suspended in host fluids, can provide dramatic improvements in working fluid thermal properties. The goal of using nanofluids is to achieve the highest possible thermal properties using the smallest possible volume fraction of nano-particles (preferably < 1% and with particle size < 50 nm) in the host fluid.
Kaya et al. [1] developed a numerical model to simulate the transient performance characteristics of a loop heat pipe. Kang et al. [2] investigated experimentally the performance of a conventional circular heat pipe provided with deep grooves using nanofluid. The nanofluid used in their study was an aqueous solution of 35 nm diameter silver nano-particles. It is reported that the thermal resistance decreased by 10-80% compared to that of pure water.
Pastukhov et al. [3] experimentally investigated the performance of a loop heat pipe in which the heat sink was an external air-cooled radiator. The study showed that the use of additional active cooling in combination with the loop heat pipe increases the value of dissipated heat up to 180 W and decreases the system thermal resistance down to 0.29 K/W.
Chang et al. [4] investigated experimentally the thermal performance of a heat pipe cooling system with a thermal resistance model. An experimental investigation of thermosyphon thermal performance considering water and dielectric heat transfer liquids as the working fluids was performed by Jouhara et al. [5]. The copper thermosyphon was 200 mm long with an inner diameter of 6 mm. Each thermosyphon was charged with 1.8 ml of working fluid and tested with an evaporator length of 40 mm and a condenser length of 60 mm. The thermal performance of the water charged thermosyphon is compared with three other working fluids (FC-84, FC-77 and FC-3283). The parameters considered were the effective thermal resistance as well as the maximum heat transport. These fluids have the advantage of being dielectric, which may be better suited for sensitive electronics cooling applications. Furthermore, they provide adequate thermal performance up to approximately 50 W, after which liquid entrainment compromises the thermosyphon performance.
Lips et al. [6] studied experimentally the performance of a flat plate heat pipe (FPHP). Temperature fields in the heat pipe were measured for different filling ratios, heat fluxes and vapor space thicknesses. Experimental results showed that the liquid distribution in the FPHP, and consequently its thermal performance, depends strongly on both the filling ratio and the vapor space thickness. A small vapor space thickness induces liquid retention and thus reduces the thermal resistance of the system. Nevertheless, the vapor space thickness influences the level of the meniscus curvature radii in the grooves and hence reduces the maximum capillary pressure; thus, it must be carefully optimized to improve the performance of the FPHP. In all cases, the optimum filling ratio obtained was in the range of one to two times the total volume of the grooves. A theoretical approach, in non-working conditions, was developed to model the distribution of the liquid inside the FPHP as a function of the filling ratio and the vapor space thickness.
Das et al. [7-8] and Lee et al. [9] found great enhancement of thermal conductivity (5-60%) over the volume fraction range of 0.1 to 5%. All these features indicate the potential of nanofluids in applications involving heat removal. Issues concerning the stability of nanofluids have to be addressed before they can be put to use. Ironically, nanofluids of oxide particles are more stable but less effective in enhancing thermal conductivity in comparison with nanofluids of metal particles.
The aim of the present work is to investigate experimentally the thermal performance of a heat pipe. The parameters affecting the thermal performance of the heat pipe are studied: the type of working fluid (pure water and Al2O3-water based nanofluid), the filling ratio of the working fluid, the volume fraction of nano-particles in the base fluid, and the heat input rate are considered as experimental parameters. An empirical correlation for the heat pipe thermal performance, taking into account the various operating parameters, is presented.
2. Experimental Setup and Procedure
A schematic layout of the experimental test rig is shown in Fig. 1. This research adopts pure water and Al2O3-water based nanofluid as working fluids. The size of the nano-particles is 40 nm. The test nanofluid is obtained by dispersing the nano-particles in pure water. The working fluid is charged through the charging line (6). In the heat pipe, heat is generated using an electric heater (12). The vapor generated in the evaporator section (8) is moved towards the condenser section (4) via an adiabatic tube (5) whose diameter and length are 20 mm and 40 mm, respectively. Both the evaporator and condenser sections have a diameter of 40 mm and a height of 60 mm. The condensate is allowed to return back to the evaporator section by capillary action ("wick structure") through the adiabatic tube. The surfaces of the evaporator section, adiabatic section, and condenser section sides are covered with 25 mm thick glass wool insulation (3). Seventeen calibrated copper-constantan thermocouples (T-type) are glued to the heat pipe surface and distributed along its length to measure the local temperatures (Fig. 2). Two thermocouples are used to
measure the ambient temperature. All thermocouples are connected to a digital temperature recorder via a multi-point switch. The non-condensable gases are evacuated by a vacuum pump; the heat pipe is evacuated down to 0.01 bar via the vacuum line (10). The power supplied to the electric heater (12) is measured by a multi-meter (13). The input voltage was adjusted using an autotransformer (2); the voltage drop across the heater was varied from 5 to 45 Volts. The A.C. voltage stabilizer (1) is used to ensure that there is no voltage fluctuation during the experiments. The pressure inside the evaporator was measured by a pressure gage with a resolution of 0.01 bar.
Thermocouples (with an uncertainty lower than 0.20 °C) are distributed along the surfaces of the heat pipe sections as follows: six thermocouples are attached to the evaporator section, two to the adiabatic section, and nine to the condenser section. The obtained data for temperatures and input heat rate are used to calculate the thermal resistance.
One can define the filling ratio, FR, as the ratio of the volume of charged fluid to the total evaporator volume. The working fluid is charged at 30 °C.
The effects of working fluid type, filling ratio, volume fraction of nano-particles in the base fluid, and heat input rate on the thermal performance of the heat pipe are investigated in the experimental work. The experimental runs are executed according to the following steps:
1. The heat pipe is evacuated and charged with a certain amount of working fluid.
2. The supplied electrical power is adjusted manually at the desired rate using the autotransformer.
3. The steady state condition is achieved after approximately one hour of running time, using the necessary adjustments to the input heat rate. After reaching the steady state condition, the readings of the thermocouples are recorded.
3. Data Reduction
Although heat pipes are very efficient heat transfer devices, they are subject to a number of heat transfer limitations. For high heat flux heat pipes operating in the low to moderate temperature range, the capillary effect and boiling limits are commonly the dominant factors. For a given capillary wick structure and working fluid combination, the pumping ability of the capillary structure to provide the circulation for a given working fluid is limited. In order to maintain the continuity of the interfacial evaporation, the capillary pressure must satisfy the following relation [2]:

$\Delta P_l + \Delta P_v + \Delta P_e \leq \Delta P_c$   (1)

The boiling limit is directly related to bubble formation in the liquid: in order that a bubble can exist and grow in the liquid, a certain amount of superheat is required. Accurately characterizing the thermal power transfer Q is a complicated task because it is difficult to accurately quantify the energy loss to the ambient surroundings; therefore, the whole surface of the heat pipe is well insulated so that the rate of heat loss can be ignored. The heat input rate can be calculated using the supplied voltage and measured current such that

$Q = I\,V$   (2)

where V and I are the applied voltage in Volt and the current in Amp, respectively.
The experimental determination of the thermal performance of the heat pipe requires accurate measurements of the evaporator and condenser surface temperatures as well as the power transferred. Calculating the evaporator and condenser temperatures is a relatively straightforward task: they are obtained by simply averaging the temperature measurements along the evaporator and condenser surfaces. Thus, the evaporator and condenser temperatures can be expressed as

$T_e = \frac{1}{N_{te}}\sum_{i=1}^{N_{te}} T_{ei}, \qquad T_c = \frac{1}{N_{tc}}\sum_{i=1}^{N_{tc}} T_{ci}$   (3), (4)

and the effective density of the nanofluid follows the mixture rule

$\rho_{ef} = (1-\varphi)\,\rho_m + \varphi\,\rho_p$   (5)

The effective dynamic viscosity of nanofluids can be calculated using different existing equations that have been obtained for two-phase mixtures. The following relation is the well-known Einstein equation for a viscous fluid containing a dilute suspension of small, rigid, spherical particles:

$\mu_{ef} = \mu_m\,(1 + 2.5\,\varphi)$   (6)
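The data reduction chain above (electrical input power, averaged surface temperatures, overall thermal resistance and the nanofluid property relations) can be sketched in Python as follows. The numerical values are illustrative assumptions, not the paper's measurements, and R = (T_e - T_c)/Q is taken as the usual definition of the overall thermal resistance implied by the nomenclature.

```python
import numpy as np

# Illustrative measurements (assumed values, not the paper's data)
V, I = 30.0, 1.5                          # applied voltage [V] and current [A]
T_evap = np.array([62.1, 61.8, 62.5, 63.0, 62.2, 61.9])                    # six evaporator TCs [C]
T_cond = np.array([41.2, 40.9, 41.5, 41.0, 40.8, 41.3, 41.1, 40.7, 41.4])  # nine condenser TCs [C]

Q = V * I                                  # Eq. (2): heat input rate [W]
Te, Tc = T_evap.mean(), T_cond.mean()      # Eqs. (3)-(4): averaged surface temperatures
R = (Te - Tc) / Q                          # overall thermal resistance [K/W]

phi = 0.01                                 # nanoparticle volume fraction (1 %)
rho_m, rho_p = 998.0, 3970.0               # water and Al2O3 densities [kg/m3]
mu_m = 1.0e-3                              # base fluid viscosity [N.s/m2]
rho_ef = (1 - phi) * rho_m + phi * rho_p   # Eq. (5): effective density
mu_ef = mu_m * (1 + 2.5 * phi)             # Eq. (6): Einstein viscosity relation

print(f"Q = {Q:.1f} W, R = {R:.3f} K/W, rho_ef = {rho_ef:.1f}, mu_ef = {mu_ef:.2e}")
```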
Figure 3. Temperature distribution along the heat pipe surface for different nanofluid concentrations.
Figure 5. Variation of thermal resistance with filling ratio at different heat input rates.
Figure 6. Variation of thermal resistance with volume fraction of nano-particles.
Figure 8. Experimental Nusselt number versus correlated thermal resistance.
Variation of thermal resistance (°C/W) with heat input (W).
Figure 10. Comparison between the present data and previous results for the variation of thermal resistance with filling ratio.
Abstract - The biometric person identification technique based on the pattern of the human iris is well suited to access control. Security systems employ biometrics for two basic purposes: to verify or to identify users. In this busy world, identification should be fast and efficient. In this paper we focus on an efficient methodology for identification and verification for iris detection using the Haar wavelet; the classifier used is the minimum Hamming distance, even when the images have obstructions, visual noise and different levels of illumination.
1. Introduction
Its function is to control the amount of light entering through the pupil. The iris is composed of elastic connective tissue such as the trabecular meshwork. The agglomeration of pigment is formed during the first year of life, and pigmentation of the stroma occurs in the first few years [7][8]. It is difficult for criminals or malicious intruders to fool the recognition system or program; the iris cannot be spoofed easily. For further implementation we will be using the Chinese Academy of Sciences Institute of Automation (CASIA) iris image database, available in the public domain [7].
3. Preprocessing
3.1 Locating the Iris
The first processing step consists in locating the inner and outer boundaries of the iris, the second step is to normalize the iris, and the third step is to enhance the original image, as in Fig. 4 [4][6]. Daugman's integro-differential operator, as in (1), is used to detect the center and diameter of the iris and pupil respectively:

$\max_{(r,\,x_0,\,y_0)} \int_{0}^{2\pi} I(r\cos\theta + x_0,\; r\sin\theta + y_0)\, d\theta$   (1)

Fig. 2: Structure of the eye
where (x0, y0) denotes the potential center of the searched circular boundary and r its radius.
The highly randomized appearance of the iris makes its use as a biometric well recognized. Its suitability as an exceptionally accurate biometric derives from [4]:
i. the difficulty of forging it and using it to impersonate another person;
ii. its intrinsic isolation and protection from the external environment;
iii. its extremely data-rich physical structure;
iv. its genetic properties: no two eyes are the same. The characteristic that is dependent on genetics is the pigmentation of the iris, which determines its color and the gross anatomy; details of development, which are unique to each case, determine the detailed morphology;
v. its stability over time, the impossibility of surgically modifying it without unacceptable risk to vision, and its physiological response to light, which provides a natural test against artifice.
After the discovery of the iris as a biometric, John G. Daugman, a professor at Cambridge University [8],[9], suggested an image-processing algorithm that can encode the iris pattern into 256 bytes based on the Gabor transform.
In general, the iris recognition system is composed of the steps depicted in Fig. 3 (image acquisition, preprocessing, feature extraction and pattern matching against a database of reference codes). According to this flow chart, preprocessing includes image enhancement.
Fig. 3: General steps of the iris recognition system
3.2 Cartesian to polar reference transform
The Cartesian to polar reference transform suggested by J. Daugman authorizes an equivalent rectangular representation of the zone of interest, as in Fig. 4. It remaps each pixel to the pair of polar coordinates (r, θ), where r and θ lie on the intervals [0, 1] and [0, 2π] respectively. The unwrapping is formulated as in (2) [1]:

$I(x(r,\theta),\, y(r,\theta)) \rightarrow I(r,\theta)$   (2)

such that

$x(r,\theta) = (1 - r)\,x_p(\theta) + r\,x_i(\theta), \qquad y(r,\theta) = (1 - r)\,y_p(\theta) + r\,y_i(\theta)$   (3)

where I(x, y), (x, y), (r, θ), (xp, yp) and (xi, yi) are the iris region, the Cartesian coordinates, the corresponding polar coordinates, the coordinates of the pupil boundary, and the coordinates of the iris boundary along the θ direction, respectively. The matching process will therefore be fast and simple. The performance of the classifier is based on the minimum Hamming distance (MHD) as in (4).
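To make the normalization and matching steps concrete, the sketch below implements the rubber-sheet remapping of equations (2)-(3) and a Hamming-distance comparison of binary iris codes with NumPy. The circle parameters, code length and array names are illustrative assumptions rather than the paper's implementation, which uses Haar-wavelet features for the codes.

```python
import numpy as np

def unwrap_iris(image, pupil, iris, n_r=64, n_theta=256):
    """Daugman rubber-sheet model: remap the iris ring to polar (r, theta).

    pupil and iris are (x0, y0, radius) tuples for the inner and outer
    boundaries; each output sample follows x = (1-r)*xp + r*xi (Eq. 3).
    """
    r = np.linspace(0.0, 1.0, n_r)[:, None]
    theta = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)[None, :]
    xp = pupil[0] + pupil[2] * np.cos(theta)
    yp = pupil[1] + pupil[2] * np.sin(theta)
    xi = iris[0] + iris[2] * np.cos(theta)
    yi = iris[1] + iris[2] * np.sin(theta)
    x = ((1 - r) * xp + r * xi).astype(int)
    y = ((1 - r) * yp + r * yi).astype(int)
    return image[np.clip(y, 0, image.shape[0] - 1), np.clip(x, 0, image.shape[1] - 1)]

def hamming_distance(code_a, code_b):
    """Fraction of disagreeing bits between two binary iris codes (cf. Eq. 4)."""
    return np.count_nonzero(code_a != code_b) / code_a.size

# Toy usage with a synthetic image and random codes
img = np.random.randint(0, 256, (280, 320), dtype=np.uint8)
strip = unwrap_iris(img, pupil=(160, 140, 30), iris=(160, 140, 90))
print(strip.shape, hamming_distance(np.random.randint(0, 2, 2048),
                                    np.random.randint(0, 2, 2048)))
```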
Fig. 5: (a) Wavelet decomposition steps diagram and (b) 4-level decomposition of a typical image with the db2 wavelet.
Index Terms -- Mn doped SnO2, Spray Pyrolysis, XRD, UV, Electrical study.
1. INTRODUCTION
2. EXPERIMENTAL
Mn doped SnO2 thin films were prepared by the spray pyrolysis method. The starting materials were SnCl4.5H2O for tin and Mn(CH3COO)2.4H2O for manganese. Concentrations of 0.5 M stannous chloride and 0.1 M manganese acetate were prepared in two different beakers with double distilled water. Then 98% of the stannous chloride solution and 2% of the manganese acetate solution were mixed together, stirred using a magnetic stirrer for 4 hours and allowed to age for ten days. The clear solution of the mixture was taken for film preparation by the spray pyrolysis method. The temperature of the substrate plays an important role in preparing nanocrystalline films; here the substrate temperature was kept at 450 °C and the solution was sprayed using atmospheric air as the carrier gas. The films were then allowed to cool down naturally. The structural properties of the as-deposited manganese doped tin oxide thin films were analyzed using an X-ray diffractometer (Shimadzu XRD-6000), and the elemental composition of the films was determined using EDAX (JSM-6390). The optical and electrical properties of the films were studied using a UV-Vis spectrometer (Jasco-570 UV/VIS/NIR) and a Hall measurement system (Ecopia HMS-3000).
The structural properties of the as-deposited Mn doped tin oxide films were analyzed by X-ray diffraction, and the plot of diffracted intensity versus 2θ is shown in Figure 1. The polycrystalline nature of the prepared samples was observed from the large number of diffraction peaks. The tetragonal structure of the samples, with the three strong peaks (1 1 0), (1 0 1) and (2 1 1) corresponding to peak positions of 2θ = 26.3609°, 33.6541° and 51.6145° respectively, was identified using standard JCPDS files. Substrate temperature is one of the main parameters which determine the structural properties of the films. The crystallite size of the as-deposited films was calculated using the Debye-Scherrer formula,

$D = \dfrac{0.9\,\lambda}{\beta \cos\theta}$   (1)

where λ is the wavelength of the X-rays used (1.54 Å), β is the full width at half maximum (FWHM) of the peak and θ is the glancing angle. The lattice constants of the spray coated tin oxide films were calculated using the relation for the tetragonal structure,

$\dfrac{1}{d^2} = \dfrac{h^2 + k^2}{a^2} + \dfrac{l^2}{c^2}$   (2)

where d is the interplanar distance, (h k l) are the Miller indices and a and c are the lattice constants. The calculated crystallite size (D) and lattice constants (a and c) of the spray coated Mn doped tin oxide are tabulated in Table 1.
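As an illustration of equations (1) and (2), the short sketch below computes the crystallite size and the tetragonal lattice constant a from one reflection; the 2θ and FWHM values are hypothetical examples, not the paper's measured data.

```python
import numpy as np

lam = 1.54          # X-ray wavelength in angstrom (Cu K-alpha)
two_theta = 26.36   # peak position in degrees (illustrative)
beta_deg = 0.45     # FWHM in degrees (illustrative)

theta = np.radians(two_theta / 2.0)
beta = np.radians(beta_deg)
D = 0.9 * lam / (beta * np.cos(theta))   # Eq. (1): crystallite size [angstrom]

d = lam / (2.0 * np.sin(theta))          # Bragg interplanar spacing for this peak
h, k, l = 1, 1, 0                        # (110) reflection: 1/d^2 = (h^2 + k^2)/a^2
a = d * np.sqrt(h**2 + k**2)             # Eq. (2) solved for a [angstrom]

print(f"D = {D/10:.1f} nm, d = {d:.3f} A, a = {a:.3f} A")
```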
Fig 2. The EDAX spectrum of as deposited Mn doped Tin Oxide thin films
The EDAX spectrum shows the compositional weight percentages of the materials used. The weight and atomic percentages of Sn were observed as 122.96% and 23.65% respectively, and the weight and atomic percentages of the doped Mn were observed as 1.09% and 0.45% respectively.
The optical properties of the Mn doped films were studied by UV-Vis spectrometry in the range of 200-900 nm. The absorption edge starting at 294 nm reveals the nanocrystalline nature of the films. Absorption peaks around 400 nm and 550 nm (indicated by arrows) are also observed in the graph shown in Fig 3.
Fig 3. Absorption and transmittance spectra of Mn doped Tin oxide thin films
--------------------------- (3)
(4)
Fig 4. Optical band gap plot of (αhν)² versus photon energy (eV)
The straight-line nature of the plot over a wide range of photon energies for the thin film indicates a direct type of transition. The optical gap obtained clearly shows that the observed band gap value is greater than the bulk band gap (2.5 eV) of tin oxide.
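Equation (4) is not preserved in the extracted text; the direct-transition Tauc relation normally used for plots of this kind, consistent with the (αhν)² axis of Fig. 4 and stated here as an assumption, is

$(\alpha h\nu)^2 = B\,(h\nu - E_g)$

where α is the absorption coefficient, hν the photon energy, $E_g$ the optical band gap and B a constant; extrapolating the linear region of the plot to $(\alpha h\nu)^2 = 0$ gives $E_g$.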
The electrical properties of the prepared films were measured using a Hall measurement system at room temperature with an input current of 1 mA. The negative sign of the Hall coefficient value of -3.666x10^-3 shows the n-type semiconducting nature of the films. The conductivity (σ) and resistivity (ρ) of the film were observed as 2.161x10^3 Ω^-1 cm^-1 and 4.628x10^-4 Ω.cm respectively. The carrier concentration of the Mn doped SnO2 has the value of -1.703x10^21 cm^-2, and the mobility of the films was found to be 7.922 cm^2/V.s. From these results it is observed that the Mn doped SnO2 films have good electrical properties.
4. CONCLUSION
Manganese doped tin oxide thin films were prepared by the spray pyrolysis method. The X-ray diffractogram shows the polycrystalline nature of the as-deposited films with tetragonal structure. The crystallite size of the films, calculated using the Debye-Scherrer formula, varies from 16-22 nm for the three strong peaks. The lattice constants calculated from the interplanar distances and peak planes are a = 4.73 Å and c = 3.17 Å. The optical studies reveal the presence of nanoparticles in the films; the signatures of the nanocrystalline effect of the as-deposited films are the absorption edge (294 nm) and the rise in the transmittance spectra. The calculated band gap of 3.25 eV is greater than the bulk band gap value of tin oxide. The n-type semiconducting nature of the films is observed from the negative sign of the Hall coefficient, and a conductivity of 2.161x10^3 Ω^-1 cm^-1 was observed for the as-deposited films.
*Corresponding Author
K. Vadivel*, V. Arivazhagan, S. Rajesh, Research Department of Physics, Karunya University, Coimbatore, Tamilnadu, India - 641 114. *Email: [email protected]
Abstract - This article describes a comprehensive system for surveillance and monitoring applications. The development of an efficient real time video motion detection system is motivated by its potential for deployment in areas where security is the main concern. The paper presents a platform for real time video motion detection and the subsequent generation of an alarm condition as set by the parameters of the control system. The prototype consists of a mobile platform mounted with an RF camera which provides continuous feedback of the environment. The received visual information is then analyzed by the user for appropriate control action, thus enabling the user to operate the system from a remote location. The system is also equipped with the ability to process the image of an object and generate control signals which are automatically transmitted to the mobile platform to track the object.
Index Terms - Graphical user interface, object tracking, monitoring, spying, surveillance, video motion detection
1 INTRODUCTION
Video Motion Detection Security Systems (VMDSs) have been available for many years. Motion detection is a feature that allows the camera to detect any movement in front of it and transmit the image of the detected motion to the user. VMDSs are based on the ability to respond to the temporal and/or spatial variations in contrast caused by movement in a video image. Several techniques for motion detection have been proposed; among them, the three widely used approaches are background subtraction, optical flow and temporal differencing. Background subtraction is the most commonly used approach in present systems. The principle of this method is to use a model of the background and compare the current image with a reference; in this way the foreground objects present in the scene are detected. Optical flow is an approximation of the local image motion and specifies how much each image pixel moves between adjacent images. It can achieve successful motion detection in the presence of camera motion or background change. According to the smoothness constraint, the corresponding points in two successive frames should not move more than a few pixels; for an uncertain environment, this means that the camera motion or background change should be relatively small. Temporal differencing, based on frame differences, attempts to detect moving regions by making use of the difference of consecutive frames (two or three) in a video sequence. This method is highly adaptive to dynamic environments; hence it is suitable for the present application with certain modifications. Presently, advanced surveillance systems are available in the market at a very high cost. This paper aims at a low cost, efficient security system with user friendly functional features which can also be controlled from a remote location. In addition, the system can also be used to track an object of a predefined color, rendering it useful for spying purposes.

2 HARDWARE SETUP
The proposed system comprises two sections. The transmitter section consists of a computer, RS232 interface, microcontroller, RF transmitter and RF video receiver. The receiver section consists of a mobile platform, RF receiver, microcontroller, RF camera, motor driver and IR LEDs. The computer at the transmitter section, which receives the visual information from the camera mounted on the mobile platform, works as the control centre. Another function of the control centre is to act as the web server that enables access to the system from a remote location using the internet. The control centre is also responsible for transmitting the necessary control signals to the mobile platform.
3 MODES OF OPERATION
The system can operate in four independent modes.

3.1 PC Controlled Mode

Sumita Mishra is currently pursuing a doctoral degree in Electronics at DRML Avadh University, India, and working as a lecturer in the Electronics and Communication Engineering Department at Amity University, India. E-mail: [email protected]
Prabhat Mishra is currently pursuing a master's degree in Electronics and Communication Engineering at Amity University, India.
This mode is an extension of the PC Controlled mode in which a client-server architecture is incorporated. It enables an authorized client computer to control the mobile platform from a remote location via the internet. The client logs onto the control centre, which provides all control tools for maneuvering the mobile platform. Instant images of the environment transmitted from the camera mounted on the mobile

3.4 Motion Detection Mode
In this mode the platform is made to focus on a particular object whose security is our concern. The mobile platform transmits the visual information of the object to the control centre for analysis. A program developed using MATLAB at the control centre is then
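The analysis step described above is implemented in MATLAB in the original system. The following is a minimal Python/OpenCV sketch, included only as an illustration, of the simplest of the three approaches named in the Introduction, temporal frame differencing. The threshold value, minimum blob area and camera index are illustrative assumptions, not values from the paper.

import cv2

THRESH = 25          # assumed intensity-difference threshold
MIN_AREA = 500       # assumed minimum blob area (pixels) to count as motion

cap = cv2.VideoCapture(0)            # camera index is an assumption
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # temporal differencing: difference of consecutive frames
    diff = cv2.absdiff(prev_gray, gray)
    _, mask = cv2.threshold(diff, THRESH, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if any(cv2.contourArea(c) > MIN_AREA for c in contours):
        print("motion detected")     # here the real system would raise the alarm
    prev_gray = gray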
Abstract: UMTS is one of the third generation mobile telecommunication technologies. It supports various multimedia applications and services at an enhanced data rate with better security. It also supports mobile users, and for that there is a process called handover, where new channels are assigned to the user when it moves from a region covered by one node to a region covered by another node. In this paper we analyse the effect of handover on the performance of the system.
Multimedia here can reach data rates up to 2 megabits per second (Mbps). UMTS also offers a consistent set of services to mobile computer and phone users, no matter where they are located in the world. It is based on the Global System for Mobile Communications (GSM) standard, i.e. it is overlaid on GSM. It is also endorsed by major standards bodies and manufacturers as the planned standard for mobile users around the world. It can ensure a better Grade of Service and Quality of Service on roaming to both mobile and computer users. Users will have access through a combination of terrestrial wireless and satellite transmissions.
Cellular telephone systems used previously [3] were mainly circuit-switched, meaning connections were always dependent on the availability of circuits. A packet-switched connection uses the Internet Protocol (IP) [4-5], which uses the concept of a virtual circuit, i.e. a virtual connection is always available to connect an endpoint to the other endpoint in the network. UMTS has made it possible to provide new services like alternative billing methods or calling plans. For instance, users can now choose pay-per-bit, pay-per-session, flat rate, or asymmetric bandwidth options. The higher bandwidth of UMTS also enabled other new services like video conferencing. It may allow the Virtual Home Environment to fully develop, where a roaming user can have the same services at home, in the office or in the field through a combination of transparent terrestrial and satellite connections.

I. OVERVIEW
The term handover [6] is also known as handoff. Whenever a user terminal moves into an area covered by a different RNC while the conversation is still going on, new channels are allocated to the user terminal, which is now under a different control node or MSC. This is carried out to ensure continuity of communication and to avoid call dropping. For this to take place properly, there is a handoff margin which needs to be optimized for proper synchronization. It is the difference between the signal strength at which handover should occur and the minimum required signal strength. If it is too low, there will be insufficient time to complete the process, and if it is too large, unnecessary handovers will occur. The most important thing is that handovers are not visible to the users.

Handover types
Handovers can be broadly classified into two types, namely Intracellular and Intercellular handover. In the Intracellular handover, the mobile or user terminal moves from one cellular system to another. In the Intercellular handover, the user terminal moves from one cell to the other. This is further classified into soft and hard handover.

Soft handover
Here we follow the make-before-break concept, where the user terminal is allocated new channels first and then the previous channels are withdrawn. The chances of losing continuity are very low, but the user terminal or mobile must be capable of tuning to two different frequencies, so the complexity at the user end increases a lot. It is quite a reliable technique, but channel capacity reduces.

Hard handover
Here we follow the break-before-make concept, where the previously allocated channels are first withdrawn from the user terminal and then new channels are allocated. The chances of call termination are higher than in soft handover. At the user terminal the complexity is less, as it need not be capable of tuning to two different frequencies. It provides an advantage over soft handover in terms of channel capacity, but it is not as reliable as soft handover.
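As a small illustration of the handoff-margin idea discussed above, the sketch below applies a simple threshold-plus-margin rule to received signal strength measurements. The numeric values (minimum required level and margin in dB) are illustrative assumptions, not parameters from the paper.

def should_handover(serving_rssi_dbm, target_rssi_dbm,
                    min_required_dbm=-100.0, margin_db=5.0):
    """Decide whether to hand over to the target cell.

    Handover is triggered when the serving signal has dropped to the
    minimum required level plus the handoff margin, and the target cell
    is stronger than the serving cell by at least the same margin
    (hysteresis, to avoid unnecessary ping-pong handovers).
    """
    serving_weak = serving_rssi_dbm <= min_required_dbm + margin_db
    target_better = target_rssi_dbm >= serving_rssi_dbm + margin_db
    return serving_weak and target_better

# Example: serving cell fading, neighbour clearly stronger -> handover
print(should_handover(-97.0, -88.0))   # True
print(should_handover(-80.0, -78.0))   # False, serving cell still strong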
(Table: Average end-to-end delay, fast UT vs. slow UT)
Nomenclature
BSEC : Brake Specific Energy Consumption
B.T.E : Brake Thermal Efficiency
B10 : Blend with 10% bio fuel
CBF : Cardnol Bio Fuel
CI : Compression Ignition
CO : Carbon Monoxide
DR-CNSL : Double Refined Cashew Nut Shell Liquid
EGT : Exhaust Gas Temperature
HC : Hydrocarbons
IC : Internal Combustion
NOx : Oxides of Nitrogen
ppm : Parts per million
Cs : Centistokes
1 INTRODUCTION
The Cashew is grown in the entire coastal region of India. While the tree is native to Central and Southern America, it is now widely distributed throughout the tropics, particularly in many parts of Africa and Asia. In India, Cashew nut cultivation now covers a total area of 0.70 million hectares of land, producing over 0.40 million metric tons of raw Cashew nuts. The Cashew (Anacardium occidentale) is a tree in the flowering plant family Anacardiaceae. The plant is native to northeastern Brazil, where it is called by its Portuguese name Caju (the fruit) or Cajueiro (the tree). It is now widely grown in tropical climates for its cashew "nuts" and cashew apples.

1.1 Specification of Cashew nut shell
The shell is about 0.3 cm thick, having a soft feathery outer skin and a thin hard inner skin. Between these skins is the honeycomb structure containing the phenolic material known as CNSL. Inside the shell is the kernel wrapped in a thin skin known as the testa.

1.2 Composition of cashew nut
The nut consists of the following: kernel 20 to 25%, kernel liquid 20 to 25%, testa 2%, the rest being the shell. The raw material for the manufacture of CNSL is the Cashew.
According to the invention [6], CNSL is subjected to fractional distillation at 200 to 240 C under reduced pressure not exceeding 5 mm mercury in the shortest possible time, which gives a distillate containing cardol and the residual tarry matter; for example, in the case of a small quantity of oil, say 200 ml, the distillation period is about 10 to 15 minutes. A semi-commercial or commercial scale distillation of CNSL may, however, take longer. It has been found that there are certain difficulties of operation with regard to the single-stage fractional distillation method, i.e. frothing of the oil, which makes the fractionation of cardol difficult, and also formation of polymerised resin. These difficulties can be overcome in two-stage distillation if care is taken not to prolong the heating; this is to avoid the undue formation of polymerised resins and the possible partial or complete destruction of the cardol or anacardol. When CNSL is distilled at a reduced pressure of about 2 to 2.5 mm mercury, the distillate containing anacardol and cardol distils first at about 200 C to 240 C. This first distillate is then subjected to a second distillation under the same conditions of temperature and pressure, when the anacardol distils over at a temperature of 205 C to 210 C and the cardol distils over at a temperature of 230 C to 235 C. In practice it has been found that preliminary decarboxylation of the oil is essential, since otherwise there will be excessive frothing, which renders the distillation procedure unproductive and uneconomical. A specific feature of this invention is that both cardol and anacardol may be obtained by a three-step process. The first step of the process is to get the decarboxylated oil by heating the oil to a temperature of 170 C to 175 C under a reduced pressure of 30-40 mm mercury. The next two steps are the same as above for the production of both cardol (or cardnol) and anacardol.

1.2.1 Cardnol
DR-CNSL is Double Refined Cashew Nut Shell Liquid. The Cashew Nut Shell Liquid (CNSL) is obtained by pyrolysis. It mainly consists of two naturally produced phenolic compounds: Anacardic acid 90%, Cardol or cardnol 10%. Cardnol obtained by pyrolysis from DR-CNSL oil was utilized for testing purposes. Cardnol is a naturally occurring phenol manufactured from CNSL. It is a monohydroxyl phenol having a long hydrocarbon chain in the meta position:
C6H4(OH)-(CH2)7-CH=CH-CH2-CH=CH-(CH2)2-CH3
Reasons for using cardnol as an alternative fuel: it is renewable; it is cost effective; it is easily and inexpensively produced in most regions of the world; it results in reduced (up to a certain extent) emissions compared with petro-diesels; it results in no detrimental effects to the engine; it is non-edible; and it is extracted from the cashew nut shell, not from the seed.

2 EXPERIMENTAL
The main objective was to study the performance and emission characteristics of the CI engine when Cardnol and pure diesel volumetric blends were used, and also to investigate which combination of fuel blend is suitable for the diesel engine at all load conditions from both performance and emission points of view. Experimentation has been conducted with cardnol bio fuel volumetric blends of 0, 10, 15, 20% and 25%, because the viscosities of higher blends (refer to Table 1 for properties of cardnol bio fuel blends) exceed the international standard limits (ASTM allows only up to 4-5 centistokes).

Table 1. Properties of cardnol bio fuel blends
Properties                         Diesel   B10     B15     B20     B25     B30
Flash point (C)                    50       53      55      56      58      61
Density (kg/m3)                    817      823     829     836     841     846
Viscosity at 40 C (centistokes)    2        2.5     3.1     3.5     4.2     5.5
Calorific value (kJ/kg)            40000    40130   40196   40261   40326   40392
2.3 Transesterification
Selection of raw materials: a cardnol oil sample; anhydrous methyl alcohol, 99% grade, laboratory reagent type; and sodium hydroxide, which was selected as the catalyst.

2.3.1 Procedure
About 4 grams of NaOH (catalyst) is dissolved in 200 ml of methanol to prepare the alkoxide, which is required to activate the alcohol. Stirring is then done vigorously in a covered container for twenty minutes until the alkali is dissolved completely. The mixture is protected from atmospheric carbon dioxide and moisture, as both destroy the catalyst. The alcohol-catalyst (NaOH) mixture is then transferred to the reactor containing 700 ml of moisture-free crude cardnol oil. Stirring of the mixture is continued for 90 minutes at a temperature between 60 and 65 degrees Celsius. The round bottom flask was connected to a reactor condenser and the mixture was heated for approximately three hours.

2.3.2 Inference and observation
The mixture was distilling and condensing within the reactor condenser, with no glycerin, because CNSL is extracted from the honeycomb structure (shell) of a cashew nut. The color of the cardnol oil changed slightly from dark brown to light brown, and an average of 95% recovery of bio fuel was possible.

Fig 1. Experimental setup
In this investigation the various performance and emission tests were conducted on a four-stroke, single-cylinder engine manufactured by M/s Kirloskar Company Limited (as shown in Fig. 1). The parameters involved in the performance analysis were measured using the engine software supplied by the manufacturer.

2.4.1 Specifications of the engine
Name of the engine: KIRLOSKAR, TV1
General details: 4 stroke, C.I., Vertical
Type of cooling: Water cooled
Number of cylinders: 1
Bore: 87.5 mm
Stroke: 110 mm
Rated power: 5.2 B.H.P at 1500 rpm
Dynamometer: Eddy current dynamometer
Compression ratio: 12:1 to 17.5:1
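The brake thermal efficiency and brake specific energy consumption reported later in the paper follow from the measured brake power, fuel flow and the calorific value of the blend. The snippet below is a minimal sketch of those standard definitions (BTE = brake power / rate of fuel energy supplied, BSEC = fuel energy input per unit of brake work); the sample numbers are illustrative, not measurements from the paper.

def brake_thermal_efficiency(brake_power_kw, fuel_flow_kg_per_h, calorific_value_kj_per_kg):
    """BTE = useful brake power / rate of fuel energy supplied."""
    fuel_energy_kw = fuel_flow_kg_per_h * calorific_value_kj_per_kg / 3600.0
    return brake_power_kw / fuel_energy_kw

def bsec(brake_power_kw, fuel_flow_kg_per_h, calorific_value_kj_per_kg):
    """Brake specific energy consumption in kJ per kWh of brake work."""
    fuel_energy_kj_per_h = fuel_flow_kg_per_h * calorific_value_kj_per_kg
    return fuel_energy_kj_per_h / brake_power_kw

# Illustrative example with the B20 calorific value from Table 1 (load values assumed):
print(brake_thermal_efficiency(3.5, 1.0, 40261))   # ~0.31, i.e. about 31%
print(bsec(3.5, 1.0, 40261))                        # ~11500 kJ/kWh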
Fig 2. Brake specific energy consumption (x10^3 kJ/kW-hr) v/s load
Fig. 2 depicts that the brake specific energy consumption decreases by 30 to 40% approximately with increase in load. This reverse trend was observed due to the lower calorific value with increase in bio fuel percentage in the blends.

Fig 4. Exhaust gas temperature v/s NOx emissions (NOx emissions in ppm against temperature in C, for diesel and the B10, B15, B20 and B25 blends)
The variations of exhaust gas temperature and NOx emissions with respect to engine loading are presented in Fig. 4. The exhaust gas temperature increases linearly from 180 C at no load to 480 C at full load. This increasing trend of EGT is mainly because of generating more power and consuming more fuel at higher loads.

(Figure: HC emissions in % against load in N-m for the B10, B15, B20 and B25 blends)
CONCLUSIONS
The CNSL and its extracts showed promising results in terms of engine performance, on par with conventional CI engine fuels. Based on the results of the study, the following conclusions were drawn.
The significant factors of cardnol bio fuel are its low cost, its abundance, and the fact that it is a byproduct of the cashew nut industries.
The brake specific energy consumption decreases by 30 to 40% approximately with increase in load. This reverse trend was observed due to the lower calorific value with increase in bio fuel percentage in the blends.
The brake thermal efficiency increases with higher loads. In all cases, it increased with increase in load. This was due to the reduction in heat losses and the increase in brake power with increase in load. The maximum thermal efficiency for B20 (31%) was higher than that of diesel.
The brake thermal efficiency obtained for B25 was less than that of diesel. This lower brake thermal efficiency could be due to the lower calorific value and the increase in fuel consumption as compared to B20.

REFERENCES
[1] Alan C. Lloyd, Thomas A. Cackette, "Diesel Engines: Environmental Impact and Control," California Air Resources Board, Sacramento, California, J. Air & Waste Manage. Assoc. 51:809-847, ISSN 1047-3289.
[2] Ayhan Demirbas, "Studies on biodiesel from vegetable oils via transesterification in supercritical methanol," Energy Conversion and Management 44 (2003) 2093-2109.
[3] Fernando Neto da Silva, Antonio Salgado Prata, Jorge Rocha Teixeira, "Technical feasibility assessment of oleic sunflower methyl ester utilization in diesel bus engines," Energy Conversion and Management 44 (2003) 2857-2878.
[4] K. Pramanik, "Properties and use of jatropha curcas oil and diesel fuel blends in CI engine," Renewable Energy 28 (2003) 239-248.
[5] N. Stalin and H. J. Prabhu, "Performance test of IC engine using Karanja bio diesel blending with diesel," ARPN Journal of Engineering and Applied Sciences, vol. 2, no. 5, October 2007.
Abstract: Mathematical Morphology in its original form is a set-theoretical approach to image analysis. It studies image transformations with a simple geometrical interpretation and their algebraic decomposition and synthesis in terms of elementary set operations. Mathematical Morphology has taken concepts and tools from different branches of Mathematics like algebra (lattice theory), topology, discrete geometry, integral geometry, geometrical probability, partial differential equations, etc. In this paper, a generalization of morphological terms is introduced. In connection with the algebraic generalization, morphological operators can easily be defined by using this structure. This can provide information about operators and other tools within the system.
If (V, ∨) and (V, ∧) are groups, then (V, ∨, ∧) forms the underlying algebraic structure. The set of all image signals is defined on the continuous or discrete image plane X and takes values in a set U. Particular cases exist corresponding to the algebra or geometry under consideration.
Proposition 3. Let (X, Wu, δ) and (Y, Wu, ε) be morphological spaces with the dilation and erosion operators on A. Then δ(X) ⊆ Y ⟺ X ⊆ ε(Y).
Proposition 4 (for lattices). Let (X, Wu, A) and (Y, Wu, A) be morphological spaces. The pair (δ_A, ε_A) is called an adjunction iff for all u, v in X there is an adjunction (l_{u,v}, m_{v,u}) on U such ...
They are based on eigenfunctions of morphological systems, which are lines parameterized by their slope. Dilation and erosion are the fundamental operators in Mathematical Morphology. These operators are defined on the lattice algebraic structure also. Based on this, slope transforms are generally divided into three. They are: 1) a single-valued slope transform for signals processed by erosion systems, 2) a single-valued slope transform for signals processed by dilation systems, and 3) a multi-valued transform that results from replacing the suprema and infima of signals with the signal values at stationary points.

2.2 Special Case: Continuous Time Signals
All three transforms stated above coincide for continuous-time signals. A concave signal equals the lower envelope of all its tangent lines; the Legendre transform [12] of x is based on this concept. The tangent at a point (t, x(t)) on the graph has slope α and intercept equal to x(t) - αt, and
$$X_L(\alpha) = x\big[(\dot{x})^{-1}(\alpha)\big] - \alpha\,(\dot{x})^{-1}(\alpha),$$
where f^{-1} denotes the inverse.
The function X_L of the tangent's intercept versus the slope is the Legendre transform of x [12], and
$$x(t) = X_L\big[(\dot{X}_L)^{-1}(t)\big] - t\,(\dot{X}_L)^{-1}(t).$$
If the signal is concave, it equals the lower envelope of its tangent lines.

2.5 Definition: Upper Slope Transform
For any signal x: R → R, its upper slope transform [12] is the function X∨: R → R with
$$X_{\vee}(\alpha) = \sup_{t \in R}\,[\,x(t) - \alpha t\,], \qquad \alpha \in R.$$
For each real α, the intercept of the line of slope α passing through the point (t, x(t)) in the signal's graph is x(t) - αt. Thus, if the signal x(t) is concave and has an invertible derivative, its upper slope transform is equal to its Legendre transform.

3 RESULTS BASED ON THE GENERALIZED STRUCTURE
3.1 Definition: Self Conjugate Operator Space
An operator space (Wu, A) is called self conjugate if it has a conjugate a* for every a such that (a ∨ b)* = a* ∧ b* and (a ∧ b)* = a* ∨ b* [11].
Example 4. A clodum V has ...
Example 5. If V is a blog [4], then it becomes self conjugate by setting a* = a^{-1} when V_inf < a < V_sup, a* = V_sup when a = V_inf, and a* = V_inf when a = V_sup [11].
A morphological space (X, Wu, A) is called a self conjugate morphological space ...

3.3 Definition: Operatable Functions
Let (X, Wu, A) be a morphological space. The collection K ...
Remark 1. Since K is an operatable space, ...
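As a purely numerical illustration of the upper slope transform defined above, the sketch below evaluates X∨(α) = sup_t [x(t) - αt] on a sampled signal. The choice of test signal and slope grid is an assumption made only for the example.

import numpy as np

def upper_slope_transform(t, x, alphas):
    """Discrete upper slope transform: X(alpha) = max_t [ x(t) - alpha*t ]."""
    # For each slope alpha, take the supremum of the affine-corrected signal.
    return np.array([np.max(x - a * t) for a in alphas])

t = np.linspace(-5.0, 5.0, 1001)
x = -t**2              # a concave test signal (assumed example)
alphas = np.linspace(-4.0, 4.0, 9)

X = upper_slope_transform(t, x, alphas)
# For x(t) = -t^2 the exact transform is alpha^2 / 4; compare numerically.
print(np.allclose(X, alphas**2 / 4, atol=1e-2))   # True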
3.5 Definition: Morphological Slope Transform System
If A = A∨ in the previous definition, then K is called a Morphological slope transform system, where A∨ is the upper slope transform.
Let (X, Wu, A) be a self conjugate morphological space. If X is a concave class, then A*(x(t)) = x(-t), where A* = A(A ...
Proposition 7 (Characterization of Slope Transforms). A slope transform is an extended real valued function A∨ (or A∧) defined on a Morphogenetic field Wu such that ...
The slope transform turns a supremal convolution (morphological dilation) into an addition. This is similar to the corresponding concept for the Fourier transform: in the Fourier transform, a linear convolution is changed into a multiplication. The difference between the Fourier transform and its morphological counterpart, the slope transform, is that the Fourier transform is invertible, whereas the slope transform is invertible only in the sub-collection. In this paper we made an attempt at generalizing the algebraic structures related to the theory of signal processing using Mathematical Morphology.

Ramkumar P.B. is working as Assistant Professor in Mathematics at Rajagiri School of Engineering & Technology, Mahatma Gandhi University, India, PH-04842432058. E-mail: [email protected]
Pramod K.V. is working as Professor at the Department of Computer Applications, Cochin University, India, PH-01123456789. E-mail: [email protected]
Abstract: It is always a good idea for a bus commuter waiting at a stop to know how far a bus is. If his route of travel happens to be common to more than one bus route number, it is even better for him to know which is the nearest bus or the earliest arriving bus. This will enable him to opt for the bus or some other mode of commuting. This becomes very useful for the physically challenged commuter, as after knowing the bus arrival in advance s/he will be ready to board the bus.
The Bus Proximity Indicator project is the best solution for the above situation and is best suited to the B.E.S.T. (Brihanmumbai Electric Supply & Transport) undertaking. In this, a wireless RF link between a certain bus and a bus stop can be used for determination of the bus proximity, which helps the commuter know how far his bus is. The system tells him the bus number, bus name and the approaching time by displaying them on the LCD at the bus stop. This project also addresses the need for automation in bus services.
Index Terms: Amplitude Shift Keying, Atmel AT89C52 Microcontroller, RF encoder/decoder IC ST12CODEC, C51 Cross Compiler, Radio frequency transmitter, Timer astable multivibrator.
1. INTRODUCTION

2 DESCRIPTION
LEGEND:
TAMV - Timer astable multivibrator
RF TX - Radio frequency transmitter

a) TAMV 555:
The 555 timer IC is used as an astable multivibrator and as an address setter for triggering the IC ST12CODEC, which is used as an encoder.
(Figure 1: Transmitter Section of Bus Proximity Indicator)

b) RF Encoder:
A logic circuit that produces coded binary outputs from encoded inputs. This uses the ST CODEC 12BT for encoding the data. The encoder encodes the data and sends it to the RF transmitter. The IC ST12CODEC is a single chip telemetry
device, which may be an encoder or a decoder. When combined with a radio transmitter/receiver it may be used to provide an encryption standard for a data communication system. The IC ST12CODEC performs all the necessary data manipulation and encryption for an optimum-range, reliable radio link.
The transmitter and receiver use the same IC ST12CODEC in RF encoder mode for serial communication. This IC is capable of transmitting 12 bits, containing 4 address bits and 8 data bits. The transmitted information is sent by RF with a 434 MHz RF transmitter. The ST12CODEC works on 5 V.

RF Transmitter:
The RF transmitter uses ASK (Amplitude Shift Keying) for modulating the data sent by the ST12CODEC. This modulated information is then transmitted at 433 MHz through the RF antenna to the receiver. It helps in transmitting the data present in the encoder via the antenna at a particular frequency.
c) Battery:
A single 9V battery is used to supply power to the transmitter section.
LEGEND:
RF RX - Radio frequency receiver
LCD - Liquid crystal display
RFDC - RF Decoder
µC - Microcontroller AT89C51

b) RF Decoder:
A logic circuit that is used to decode the coded binary word. This uses the IC ST12CODEC for decoding the data transmitted by the IC RWS434. The decoder converts the serial data which has been sent from the RF receiver to parallel form and sends it to the microcontroller. The coded data decoded by this block is given to the LCD.
IJSER 2011
http://www.ijser.org
69
International Journal of Scientific & Engineering Research Volume 2, Issue 4, April-2011
ISSN 2229-5518
d) Power Supply:
The performance of the master box depends on the proper functioning of the power supply unit. The power supply not only converts A.C. into D.C., but also provides an output of 5 V at 1 A. The essential components of the power supply are a transformer, four diodes which form a bridge rectifier, a capacitor which works as a filter, and the positive voltage regulator IC 7805. It provides 5 V to each block of the transmitter.

e) 16 x 2 LCD:
LCD modules are useful for displaying information from a system. These modules are of two types, Text LCD and Graphical LCD. In this project, a Text LCD of size 16 x 2, with a two-line by sixteen-character display, is used to show the various sequences of operations during the operation of the project. This is used for visual information purposes. The LCD will display the data coming from a normal keyboard or from the microcontroller as a visual indication.
3 SOFTWARE TOOLS
A block diagram of the complete 8051 tool set may best illustrate the development cycle (Figure 3).
The transmitter and receiver used work on 434 MHz at 2-12 volts and hence have the dual advantage of power saving as well as a range of around 500 feet. The 500 feet (150 meters) is quite a high range for the detection of the city bus arrival.
Prof. A.P. Thakare, Head of Department of Electronics & Telecommunication, Sipna's College of Engineering & Technology, Amravati 444701, Maharashtra, India. E-mail: [email protected]
Mr. Vinod H. Yadav is currently pursuing a master's degree program in Digital Electronics engineering at Sant Gadgebaba Amravati University, Amravati 444701, Maharashtra, India. E-mail: [email protected], [email protected], [email protected]
Abstract: In this paper the important fundamental steps in applying artificial neural networks to the design of intelligent control systems are discussed. Architectures of neural networks, including single-layered and multi-layered, are examined for control applications. The importance of different learning algorithms for both linear and nonlinear neural networks is developed. The problem of generalization of neural networks in control systems, together with some possible solutions, is also included.
Index Terms: Artificial neural network, adaline algorithm, Levenberg gradient, forward propagation, backward propagation, weight update algorithm.
1 INTRODUCTION
Neural network based control system design has become an important aspect of intelligent control, as it can replace mathematical models. It is a distinctive computational paradigm to learn a linear or nonlinear mapping from a priori data and knowledge. The models are developed using a computer, and the control design produces controllers that can be implemented online. The paper includes both the nonlinear multi-layer feed-forward architecture and the linear single-layer architecture of artificial neural networks for application in control system design. In the nonlinear multi-layer feed-forward case, the two major problems are the long training process and the poor generalization. To overcome these problems, a number of data analysis strategies before training and several generalization improvement techniques are used.

The linear (adaline-type) single-layer network is trained as follows:
2. Randomly choose the values of the weights in the range -1 to 1. While the stopping condition is false, follow steps 3 onwards.
3. For each bipolar training pair s:t, do steps 4-7.
4. Set the activations of the input units: x0 = 1, xi = si (i = 1, 2, ..., n).
5. Calculate the net input, y.
6. Update the bias and weights:
   w0(new) = w0(old) + alpha(t - y)
   wi(new) = wi(old) + alpha(t - y)xi.
7. If the largest weight change that occurred in step 3 is smaller than a specified value, stop; else continue.
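A minimal Python sketch of the adaline-style update steps listed above, included for illustration; the training pairs, learning rate and stopping tolerance are assumptions for the example, not values from the paper.

import numpy as np

def train_adaline(samples, targets, alpha=0.1, tol=1e-3, max_epochs=100):
    """Adaline-style training: w <- w + alpha*(t - y)*x, bias included as x0 = 1."""
    rng = np.random.default_rng(0)
    n = samples.shape[1]
    w = rng.uniform(-1.0, 1.0, size=n + 1)        # step 2: random weights in [-1, 1]
    for _ in range(max_epochs):
        largest_change = 0.0
        for x, t in zip(samples, targets):        # step 3: each bipolar pair s:t
            xb = np.concatenate(([1.0], x))       # step 4: x0 = 1, xi = si
            y = np.dot(w, xb)                     # step 5: net input
            delta = alpha * (t - y) * xb          # step 6: update bias and weights
            w += delta
            largest_change = max(largest_change, np.max(np.abs(delta)))
        if largest_change < tol:                  # step 7: stop on small change
            break
    return w

# Example: learn the bipolar AND function (illustrative data)
X = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], dtype=float)
T = np.array([1, -1, -1, -1], dtype=float)
print(train_adaline(X, T))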
A common choice of activation is the hyperbolic tangent, f(x) = tanh(x/2) = (1 - e^{-x}) / (1 + e^{-x}). Hyperbolic tangent (tan-sigmoid) and logistic (log-sigmoid) functions approximate the signum and step functions, respectively, and yet provide smooth, nonzero derivatives with respect to the input signals. These two activation functions are called sigmoid functions because their S-shaped curves exhibit smoothness and asymptotic properties. The activation function fh of the hidden units has to be differentiable. If fh is linear, one can always collapse the net to a single layer and thus lose the universal approximation/mapping capabilities. Each unit of the output layer is assumed to have the same activation function.
When the network has a group of inputs, the updating of activation values propagates forward from the input neurons, through the hidden layer of neurons, to the output neurons that provide the network response. The outputs can be mathematically represented by
$$Y_p = f\left(\sum_{m=1}^{M} K_{mp}\, f\left(\sum_{n=1}^{N} X_n W_{nm}\right)\right)$$
where Y_p is the pth output of the network, X_n is the nth input to the network, W_nm is the mth weight factor applied to the nth input to the network, K_mp is the pth weight factor applied to the mth output of the hidden layer, and f(.) is the transfer function (i.e., sigmoid, etc.).
3 BACK PROPAGATION LEARNING
Error correction learning is most commonly used in neural networks. The technique of back propagation applies error-correction learning to neural networks with hidden layers. It also determines the value of the learning rate, η; values of η are restricted such that 0 < η < 1. Back propagation requires a perceptron neural network (no interlayer or recurrent connections). Each layer must feed sequentially into the next layer. In this paper only three layers, A, B, and C, are investigated. Feeding into layer A is the input vector I. Thus layer A has L nodes, ai (i = 1 to L), one node for each input parameter. Layer B, the hidden layer, has m nodes, bj (j = 1 to m); here L = m = 3, although in practice L need not equal m. Each layer may have a different number of nodes. Layer C, the output layer, has n nodes, ck (k = 1 to n), with one node for each output parameter. The interconnecting weight between the ith node of layer A and the jth node of layer B is denoted as vij, and that between the jth node of layer B and the kth node of layer C is wjk. Each node has an internal threshold value: for layer A the threshold is TAi, for layer B, TBj, and for layer C, TCk. The back propagation neural network is shown in Figure 2.
The ANS becomes a powerful tool that can be used to solve difficult process control applications. Figure 3 depicts the design procedure of the Artificial Neural Network Controller.

4 LEARNING ALGORITHMS
The gradient-based algorithms necessary in the development of learning algorithms are presented in this section. Learning in a neural network is governed by a learning rule, in which the weights of the network are incrementally adjusted so as to improve a predefined performance measure over time. The learning process is an optimization process: it is a search in the multidimensional parameter (weight) space for a solution, which gradually optimizes an objective (cost) function.
(Figure 3: Flow chart for the general neural network algorithm)
(Figure: Training loop block diagram - training data, target, error, network input/output, cost, weight changes, training algorithm / optimization method)

6 LEVENBERG-MARQUARDT METHOD
The Levenberg-Marquardt algorithm can handle ill-conditioned matrices well, like nonquadratic objective functions. Also, if the Hessian matrix is not positive definite, the Newton direction may point towards a local maximum or a saddle point. The Hessian can be changed by adding a positive definite matrix λI to H in order to make H positive definite. Thus,
$$\omega_{next} = \omega_{now} - (H + \lambda I)^{-1} g,$$
where I is the identity matrix and H is the Hessian matrix, which is given in terms of the Jacobian matrix J as H = J^T J. Levenberg-Marquardt is the modification of the Gauss-Newton algorithm
$$\omega_{next} = \omega_{now} - (J^T J)^{-1} J^T r$$
as
$$\omega_{next} = \omega_{now} - (J^T J + \lambda I)^{-1} J^T r.$$
The Levenberg-Marquardt algorithm performs initially small but robust steps along the steepest descent direction, and switches to more efficient quadratic Gauss-
Newton steps as the minimum is approached. This method combines the speed of Gauss-Newton with the everywhere-convergence of gradient descent, and appears to be the fastest for training moderate-sized feedforward neural networks.

7 FORWARD-PROPAGATION AND BACK-PROPAGATION
During training, a forward pass takes place. The network computes an output based on its current inputs. Each node i computes a weighted sum ai of its inputs and passes it through a nonlinearity to obtain the node output yi. The error between the actual and desired network outputs is given by
$$E = \frac{1}{2}\sum_{p}\sum_{i}(d_{pi} - y_{pi})^2$$
where p indexes the patterns in the training set, i indexes the output nodes, and d_pi and y_pi are, respectively, the desired target and the actual network output. The derivative of the error with respect to the weights is the sum of the individual pattern errors and is given as
$$\frac{dE}{dw_{ij}} = \sum_{p}\frac{dE_p}{dw_{ij}} = \sum_{p,k}\frac{dE_p}{da_k}\frac{da_k}{dw_{ij}}$$
where the index k represents all output nodes. It is convenient to first calculate a value δi for each node i as
$$\delta_i = \frac{dE_p}{da_i} = \sum_{k}\frac{dE_p}{dy_k}\frac{dy_k}{da_i}$$
which measures the contribution of ai to the error on the current pattern. For simplicity, the pattern index p is omitted on yi, ai and other variables in the subsequent equations. For output nodes, dE_p/da_k is obtained directly as
$$\delta_k = -(d_{pk} - y_{pk})\, f'_k \quad \text{(for output nodes)}.$$
The first term in this equation is obtained from the error equation, and the second term, which is
$$\frac{dy_k}{da_k} = f'(a_k) = f'_k,$$
is just the slope of the node nonlinearity at its current value. For hidden nodes, δi is obtained indirectly as
$$\delta_i = \frac{dE_p}{da_i} = \sum_{k}\frac{dE_p}{da_k}\frac{da_k}{da_i} = \sum_{k}\delta_k\frac{da_k}{da_i}$$
where the second factor is obtained by noting that if node i connects directly to node k then da_k/da_i = f'_i w_{ki}, otherwise it is zero. Thus,
$$\delta_i = f'_i \sum_{k} w_{ki}\,\delta_k$$
for hidden nodes. δi is a weighted sum of the δk values of the nodes k to which it has connections w_{ki}. Because of the way the nodes are indexed, all delta values can be updated through the nodes in reverse order. In layered networks, all delta values are first evaluated at the output nodes based on the current pattern errors, the hidden values are then evaluated based on the output delta values, and so on backwards to the input layer. Having obtained the node deltas, it is an easy step to find the partial derivatives dE_p/dw_{ij} with respect to the weights. The second factor is da_k/dw_{ij}; because a_k is a linear sum, this is zero if k is not equal to i; otherwise
$$\frac{da_i}{dw_{ij}} = x_j.$$
The derivative of the pattern error E_p with respect to weight w_{ij} is then
$$\frac{dE_p}{dw_{ij}} = \delta_i x_j.$$
First the derivative of the network training error with respect to the weights is calculated; then a training algorithm is performed. This procedure is called back-propagation, since the error signals are obtained sequentially from the output layer back to the input layer.

8 WEIGHT UPDATE ALGORITHM
The reason for updating the weights is to decrease the error. The weight update relationship is
$$\Delta w_{ij} = -\eta\frac{dE_p}{dw_{ij}} = \eta\,(d_{pi} - y_{pi})\, f'_i\, x_j$$
where the learning rate η > 0 is a small positive constant. Sometimes η is also called the step size parameter.
The Delta Rule is a weight update algorithm used in the training of neural networks. The algorithm progresses sequentially layer by layer, updating the weights as it goes. The update equation is provided by the gradient descent method as
$$\Delta w_{ij} = w_{ij}(k+1) - w_{ij}(k) = -\eta\frac{dE_p}{dw_{ij}}, \qquad \frac{dE_p}{dw_{ij}} = -(d_{pi} - y_{pi})\frac{dy_{pi}}{dw_{ij}}$$
for a linear output unit, where
$$y_{pi} = \sum_{i} w_{ij}\, x_i \quad\text{and}\quad \frac{dy_{pi}}{dw_{ij}} = x_i,$$
so
$$\Delta w_{ij} = w_{ij}(k+1) - w_{ij}(k) = \eta\,(d_{pi} - y_{pi})\, x_i.$$
The adaptation of the weights which connect the input units and the ith output unit is determined by the corresponding error e_i = (1/2)(d_pi - y_pi)^2.
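The delta-rule equations above translate directly into code. The following is a minimal Python sketch of one backpropagation update for a single hidden-layer network with sigmoid activations; the network sizes, learning rate and data are illustrative assumptions, not values from the paper.

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def backprop_step(x, d, V, W, eta=0.5):
    """One forward/backward pass: x input, d target, V input->hidden, W hidden->output."""
    # Forward pass
    b = sigmoid(V @ x)                 # hidden activations
    y = sigmoid(W @ b)                 # network outputs
    # Output deltas: delta_k = (d_k - y_k) * f'(a_k)
    delta_out = (d - y) * y * (1.0 - y)
    # Hidden deltas: delta_j = f'(a_j) * sum_k w_kj * delta_k
    delta_hid = b * (1.0 - b) * (W.T @ delta_out)
    # Weight updates: delta_w = eta * delta_i * x_j
    W += eta * np.outer(delta_out, b)
    V += eta * np.outer(delta_hid, x)
    return y

rng = np.random.default_rng(0)
V = rng.uniform(-1, 1, (3, 3))   # 3 inputs -> 3 hidden nodes (L = m = 3, as in the text)
W = rng.uniform(-1, 1, (1, 3))   # 3 hidden nodes -> 1 output node
x = np.array([0.2, 0.7, 0.1])
d = np.array([1.0])
for _ in range(200):
    y = backprop_step(x, d, V, W)
print(y)   # approaches the target 1.0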
9 CONCLUSION
The fundamentals of neural network based control
system design are developed in this paper and are
applied to intelligent control of the advanced process.
Intelligent control can also be used for fast and complex
process control problems.
Abstract: The objective of this paper is to recognize the characters in a given scanned document and to study the effects of changing the models of the ANN. Today, Neural Networks are mostly used for pattern recognition tasks, and OCR is a widespread application of Neural Networks. The paper describes the behaviour of different models of Neural Network used in OCR. We have considered parameters like the number of hidden layers, the size of the hidden layers and the number of epochs. We have used a Multilayer Feed Forward network with Back propagation. In preprocessing we have applied some basic algorithms for segmentation of characters, normalization of characters and de-skewing. We have used different models of Neural Network and applied the test set on each to find the accuracy of the respective Neural Network.
Index Terms: Optical Character Recognition, Artificial Neural Network, Backpropagation Network, Skew Detection.
1 INTRODUCTION
Using linear regression we calculate
$$M = \frac{n\sum x_i y_i - \sum x_i \sum y_i}{n\sum x_i^2 - \left(\sum x_i\right)^2}.$$
This angle is equivalent to the skew angle, so rotating the image by the opposite of this angle will remove the skewness. This is a very crude way of removing skewness; there are other, highly efficient ways of removing it, but for characters that have very low skew angles this gets the job done.
Fig 6(a) Skewed image. Fig 6(b) Corrected image.
The number of output classes is 26. The number of nodes of the input layer is 100 and the number of nodes of the output layer is 26. The number of hidden layers and the size of the hidden layers vary.
Fig 7. The general model of ANN used.
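A small Python sketch of the least-squares skew estimate described above: the slope M of the regression line through baseline points gives the skew angle, and the image is rotated by its negative. The use of scipy's rotate and the sample points are assumptions made only for the illustration.

import numpy as np
from scipy.ndimage import rotate

def estimate_skew_deg(xs, ys):
    """Least-squares slope M = (n*sum(xy) - sum(x)sum(y)) / (n*sum(x^2) - sum(x)^2)."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    n = len(xs)
    m = (n * np.sum(xs * ys) - np.sum(xs) * np.sum(ys)) / \
        (n * np.sum(xs ** 2) - np.sum(xs) ** 2)
    return np.degrees(np.arctan(m))

def deskew(image, xs, ys):
    """Rotate the image by the opposite of the estimated skew angle."""
    return rotate(image, -estimate_skew_deg(xs, ys), reshape=False)

# Example: baseline points lying on a line of slope 0.1 (illustrative data)
xs = np.arange(10)
ys = 0.1 * xs + 2.0
print(round(estimate_skew_deg(xs, ys), 2))   # ~5.71 degrees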
Table 3. Model 3
Table 6. Model 6
Epochs   Number of Hidden Layers   Configuration (No. of nodes in HL)   Accuracy (%)
300      4                         26-52-78-104                         35
600      4                         26-52-78-104                         79
1000     4                         26-52-78-104                         96
300      4                         26-52-78-26                          30
600      4                         26-52-78-26                          61
1000     4                         26-52-78-26                          86
300      4                         78-26-52-104                         43
600      4                         78-26-52-104                         82
1000     4                         78-26-52-104                         98
300      4                         78-26-78-52                          31
600      4                         78-26-78-52                          88
1000     4                         78-26-78-52                          94

We have used the sigmoid transfer function in all the layers. We have used the same dataset for training all the different models, while the testing character set was changed.

8 CONCLUSION
The backpropagation neural network discussed and implemented in this paper can also be used for almost any general image recognition application, such as face detection and fingerprint detection. The implementation of the fully connected backpropagation network gave reasonable results toward recognizing characters. The most notable limitation is the fact that it cannot handle major variations in translation, rotation, or scale. While a few pre-processing steps can be implemented in order to account for these variances, as we did, in general they are difficult to solve completely.
Abstract: Atmospheric turbulence induced fading is one of the main impairments affecting the operation of free-space optical (FSO) communication systems. In this paper, the bit error rate (BER) of M-ary pulse position modulation (M-ary PPM) with direct detection based on an avalanche photodiode (APD) is analyzed. Both log-normal and negative exponential fading channels are evaluated. The investigation discusses how the BER performance is affected by the atmospheric conditions and by other parameters such as forward error correction using Reed Solomon (RS) codes and increasing the modulation level. The results strongly indicate that RS-coded M-ary PPM performs well for FSO links, as it reduces the average power required per bit to achieve a BER below 10^-9 in both turbulence channels.
Index Terms: Free Space Optics (FSO), M-ary Pulse Position Modulation (M-ary PPM), Reed Solomon (RS) codes, Log Normal Channel, Negative Exponential Channel, Avalanche Photodiode (APD).
1 INTRODUCTION
analysis of the M-ary PPM is carried out in Section 3. This is followed by the main conclusions in Section 4.

2 MODELS OF FSO CHANNELS
2.1 Log-Normal Channel
The log-normal channel is classified as weak turbulence, which is characterized by a scintillation index less than 0.75. In general, the scintillation index is a complicated function of the beam parameters, propagation distance, heights of the transmitter and receiver, and the fluctuations in the index of refraction. In fact, the main source of scintillation is fluctuations (due to temperature variations) in the index of refraction, which is commonly known as optical turbulence. The log-normal model is also valid for propagation distances less than 100 m [5].
The bit error rate (BER) of M-ary PPM in the log-normal channel is given by [6]
$$P_{bM} = \frac{M}{2}\sum_{i=-N,\,i\neq 0}^{N} w_i\, Q\!\left(\sqrt{\frac{E\{K_s\}^{2}\, e^{2(\sqrt{2}\,\sigma_k x_i + m_k)}}{F\, E\{K_s\}\, e^{\sqrt{2}\,\sigma_k x_i + m_k} + K_n}}\right) \qquad (1)$$
The scintillation index (σSI²) as a function of the variance of the log-normal channel (σk²) is given by [5]
$$\sigma_{SI}^2 = e^{\sigma_k^2} - 1 \qquad (2)$$
The average photons per PPM slot, E{Ks}, are a function of the mean (mk) and the variance of the log-normal channel and have the form [5]
$$E\{K_s\} = e^{\,m_k + \sigma_k^2/2} \qquad (3)$$
The total noise photons per PPM slot, Kn, which results from background noise and thermal noise, is [5]
$$K_n = 2FK_b + \frac{2\,\sigma_n^2}{E\{g\}^2 q^2} \qquad (4)$$
where Kb is the average background noise photons per PPM slot, E{g} is the average gain of the APD and q is the electron charge. The noise factor, F, of the APD is defined by [5]
$$F = 2 + \zeta\, E\{g\} \qquad (5)$$
where ζ is the ionization factor. The variance, σn², of the thermal noise in a PPM slot is defined by [5]
$$\sigma_n^2 = \frac{2\,K\,T\,T_{slot}}{R_L} \qquad (6)$$
where T is the effective absolute temperature of the receiver, K is Boltzmann's constant, RL is the APD load resistance and Tslot is the PPM slot duration, which is related to the data rate, Rb, by [5]
$$T_{slot} = \frac{\log_2 M}{M\,R_b} \qquad (7)$$
In case of coding, Rb must be multiplied by (n/k), where n is the codeword length and k is the message length.
The symbol error rate (Psymbol) can be calculated from the bit error rate (Pb) as [8]
$$P_{symbol} = \frac{2(M-1)}{M}\,P_b \qquad (8)$$
$$P_{ues} = \frac{1}{n}\sum_{i=t+1}^{n} i\binom{n}{i}\, P_q^{\,i}\,(1-P_q)^{\,n-i} \qquad (9)$$
where t = (n-k)/2 is the symbol error correcting capability and Pq is the q-bit RS symbol error probability. The BER after coding (Pbc) is given by [9]
$$P_{bc} \approx \frac{2^{\,n-1}}{2^{\,n}-1}\,P_{ues} \qquad (10)$$

2.2 Negative Exponential Channel
The negative exponential channel is classified as strong turbulence, which is characterized by a scintillation index greater than 1. The negative exponential model is valid for propagation distances of more than 100 m or several kilometers [5, 8]. The BER of the negative exponential channel, PbM, is given by [5]
$$P_{bM} = \frac{M}{2}\sum_{i=-N,\,i\neq 0}^{N} w_i\, x_i\, Q\!\left(\sqrt{\frac{E\{K_s\}^{2}\, x_i^{2}}{F\, E\{K_s\}\, x_i + K_n}}\right) \qquad (11)$$
To get the BER after coding (Pbc) due to the negative exponential channel using RS (255,207), we apply the same procedure as the log-normal channel coding steps, which are
mentioned in (7), (8), (9), (10), and use this in (11).

3 NUMERICAL RESULTS AND DISCUSSIONS
Based on the described model, the BER of 10^-9, which is considered a practical performance target for an FSO link [11], is calculated for the log-normal channel and the negative exponential channel, and the obtained results are displayed in Figs. 1-3. In these figures, the value of the scintillation index (σSI²) is taken as 0.3 for weak turbulence and 1 for strong turbulence. The values of the other parameters are taken as: Rb = 2.4 Gbps, Kb = 10 photons per PPM slot, RL = 50 Ω, ζ = 0.028, T = 300 K, E{g} = 150, n = 255, k = 207, t = 24 symbols.
Variations of BER with the average number of photons received per PPM bit (logarithmic), which equal log10(E{Ks}/number of bits in PPM symbol), are shown in the following figures. In the discussion, all results for average photons per PPM bit are given as numerical values.
In Fig. 1, binary PPM (BPPM) is used to compare the effect of weak turbulence and strong turbulence without coding and with RS (255,207) coding.
At a BER of 10^-9, the value of average photons per PPM bit in strong turbulence is found to be 728 without coding and 141 with coding, giving an improvement of 11.61 dB, while the value of average photons per PPM bit in weak turbulence is found to be 543 without coding and 32 with coding, which gives an improvement of 16.6 dB. It is shown that coded strong-turbulence 8-PPM outperforms weak-turbulence 8-PPM without coding at BER less than 10^-4.
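The coding-gain numbers quoted in this section follow from the RS post-decoding error chain in (8)-(10). The snippet below is a rough numerical sketch of that chain (channel bit error rate -> RS symbol error rate -> post-decoding symbol and bit error rates) using standard union-bound style expressions; the input bit error rate is an assumed example value, not a result from the paper.

from math import comb

def rs_coded_ber(p_bit, n=255, k=207, q=8):
    """Approximate post-RS-decoding bit error rate from a channel bit error rate."""
    t = (n - k) // 2                         # symbol error correcting capability
    p_q = 1.0 - (1.0 - p_bit) ** q           # q-bit RS symbol error probability
    # Probability of an uncorrected symbol error after decoding, eq. (9)-style bound
    p_ues = sum(i * comb(n, i) * p_q**i * (1.0 - p_q) ** (n - i)
                for i in range(t + 1, n + 1)) / n
    # Factor (about 1/2) relating symbol errors back to bit errors, eq. (10)-style
    return (2 ** (q - 1) / (2 ** q - 1)) * p_ues

# Assumed example: a raw channel BER of 1e-3 is pushed far below 1e-9 by RS(255,207)
print(rs_coded_ber(1e-3))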
4 CONCLUSION
In this paper, the performance of an FSO system with M-ary PPM based on an RS coding scheme has been numerically analyzed in weak and strong atmospheric turbulence. The results show that the average number of photons per bit at a BER of 10^-9 has been improved by 14.08 dB (compared with the value for BPPM without coding) in strong turbulence and by 17.67 dB in weak turbulence using RS (255,207) + 256-PPM.
The results also show that coded strong turbulence outperforms non-coded weak turbulence for BPPM, 8-PPM and 256-PPM at a BER of 10^-9, which indicates a great improvement in the system's tolerance of the intensity fluctuations induced by atmospheric turbulence. RS codes can be combined with a matched interleaver in concatenated coding to solve the problem in the low photons-per-bit range.
Abstract: The electricity market, from economic, regulatory and engineering perspectives, is a very demanding system to control. There is a requirement to provide cost efficiency and lower environmental impact, along with maintaining security of supply, through the use of competition and regulation in the electricity market. Many countries, due to the failure of their systems to adequately manage electricity companies, followed restructuring of their electricity sectors. In various countries, different restructuring models were experimented with, but in the initial phase restructuring was opposed by the parties favouring the existing vertically integrated electricity sector. In this paper, the restructuring experiences of different countries are outlined.
Index Terms: Deregulation, Wholesale Electricity Market, Forward Markets, Independent System Operator, Power Exchange.
1 INTRODUCTION
Electric utilities have been vertically integrated monopolies that combined generation, transmission and distribution facilities to serve the needs of the customers in their service territories. The price of electricity was traditionally set by a regulatory process, rather than by market forces, and was designed to recover the cost of producing and delivering electricity to customers as well as the capital cost. Due to this monopolistic service regime, customers had no choice of supplier, and suppliers were not free to pursue customers outside their designated service territories. The main reason for deregulation in developing countries has been to provide electricity to customers at lower prices, and to open the market for competition by allowing smaller players to have access to the electricity market by reducing the share of large state-owned utilities. On the other hand, high growth in demand and irrational tariff policies have been the driving forces for deregulation in developing countries. Technical and managerial inefficiencies in these countries have made it difficult to sustain generation and transmission expansions, and hence many utilities were forced by international funding agencies to restructure their power industries [1].
Electricity markets have a very important characteristic in their organizational structure, which has accommodated the most significant change in the industry. The vertically integrated industry structure (a regulated monopoly), as the traditional industry structure, was owned and operated as a single organization for distribution, transmission, and generation functions [2]. However, the vertically integrated structure, by virtue of the fact that it is a monopolistic structure, is not amenable to the introduction of competition.
The current industry structure primarily requires separating the functions of generation and distribution (or consumption) from transmission, considering the different functions associated with selling and buying electric energy. The reason behind the separation of transmission, which is the means of transporting the tradable commodity, is that the ability to influence transmission use through, for example, line ratings, line maintenance schedules and network data would give a very powerful competitive advantage to a participant. Besides this, another important function is system operation, which is traditionally viewed as a generation/transmission function. This function has evolved into the Independent System Operator (ISO) in most electricity markets presently, which is responsible for coordinating maintenance schedules and performing security assessment.
The deregulation processes started with debate, with the vertically integrated model being defended by private and state monopolies in opposition [3]. Chile was the first to start efforts, in the 1980s, to restructure its electricity sector. The most discussed deregulation was the British one, with much interest in the Norway Model and much attention to actions in the United States, especially the State of California. In South America a major transformation took place throughout the electric power industry from 1980 onwards (chronological progress shown in Fig. 1).

2 RESTRUCTURING EXPERIENCE OF DIFFERENT COUNTRIES
Electricity sector reform in many developed countries has been undertaken since the 1980s. Initially it was not clear how to increase efficiency by electricity sector reform. As a matter of fact, across the various countries there exists diversity in wholesale electricity market operation. A transparent, open marketplace would encourage competition among generators and reveal the inefficiencies of the current system to improve the efficiency of the
market) competitive entities in generation and in retail supply, and the development of regulatory arrangements appropriate to the new regime [7]. Table 1 below summarizes the deregulated structure in Australia.

Table 1. Electricity Restructuring in Australia
Generation: NSW (New South Wales) - NSW's three ex-ECNSW (Electricity Commission of NSW) generating companies and SMHEA (Snowy Mountains Hydro Electricity Authority); Victoria - five ex-SECV (State Electricity Commission of Victoria) generating companies plus SMHEA; South Australia - ETSA (Electricity Transmission System Authority) Generation corporation trading as Optima Energy; Queensland - three generating companies.
Transmission wires: NSW - transmission company TransGrid; Victoria - transmission company PowerNet Victoria; South Australia - transmission company ETSA Transmission Corp.; Queensland - transmission company.
Bulk Market: NSW - NSW region of NEM1; Victoria - Victoria region of NEM1; South Australia - participating in the Victorian region of NEM1; Queensland - state pool, separate market company.
Distribution wires: NSW - 6 distributors with ring-fenced retailers; Victoria - 5 distributors with ring-fenced retailers; South Australia - ETSA power corporation; Queensland - 7 distribution wires businesses.
Retail supply: NSW - 6 host retailers and unlimited independent supply licenses; Victoria - 5 host retailers and unlimited supply licenses; South Australia - ETSA power corp. (host retailers), unlimited supply licenses; Queensland - 3 host retailers and unlimited supply licenses.

2.5 Nordic Power Market
In Norway, the electricity reforms were initiated in 1991. In 1993, the Nordic power exchange was established as an independent company. The Swedish electricity market was unbundled in 1996. Thereafter, a common electricity exchange for Norway and Sweden was established under the name Nord Pool. In 1998, Finland effectively entered the Nordic Market [8]. Denmark joined Nord Pool subsequently. Nord Pool is owned by the Transmission System Operators (TSOs) of Norway and Sweden. Nord Pool provides freedom of choice to the large consumers. It organizes trade in standardized physical and financial power contracts. Close cooperation between system operation and market operation is the key feature of Nord Pool. The major contractual relationships among the Nordic countries are given in Figure 2.
Fig 2. Nordic Market Major Contractual Relationships

2.5.1 Nord Power (Pool Group)
(i) Nord Pool Spot:
It consists of Nord Pool Spot AS and its wholly owned subsidiary Nord Pool Finland Oy; it operates the physical day-ahead market Elspot in the whole Nordic region and the physical intra-day market Elbas in Finland, Sweden and Zealand (Eastern Denmark). Elspot and Elbas are Nord Pool Spot auction-based markets for trade in power contracts for physical delivery. On Elspot, hourly power contracts are traded daily for physical delivery in the next day's 24-hour period. On Elbas, continuous adjustment trading in hourly contracts can be performed until one hour before the delivery hour. Its function is to be the aftermarket to the Elspot market at the Nord Pool.
(ii) Nord Pool ASA - Financial Market:
The Nord Pool financial market is a regulated marketplace which trades in standardized derivative instruments like forward and future contracts going out several years, and has now started trade in options. Outside this market, there is quite a large and liquid market for over-the-counter forward and option contracts. The objective of the
financial market is to provide an efficient market, with excellent liquidity and a high level of security, and to offer a number of financial power contracts that can be used profitably by a variety of customer groups. This market is wholly owned by the Nord Pool Group.

(iii) Nord Pool Clearing ASA:
It is a licensed and regulated clearing house. It is the central counterparty for all derivative contracts traded through the exchange and OTC. It guarantees settlement for trades and anonymity for participants. It is a wholly owned subsidiary of the Nord Pool Group.

(iv) Nord Pool Consulting AS:
It is a consulting firm specializing in the development of power markets worldwide. It is also a wholly owned subsidiary of the Nord Pool Group.

(v) Nordel:
Nordel is an association for electricity cooperation that acts as a forum for market participants, Nordic system operators and the TSOs of the Nordic countries. The primary objectives of the organization are to create and maintain the necessary conditions for an effective Nordic electricity market.

2.5.2 Nord Pool Features
Nord Pool is the first multinational commodity exchange for the electricity sector in the world. It provides an open market to all Nordic countries within a common framework. The Nordic and European markets are examples of decentralized day-ahead spot markets. There are no general cross-border tariffs among the Nordic countries. Trading of electricity generated by hydropower dominates the cross-border exchanges between the Nordic countries. The balance of electricity trade between the four countries depends on rainfall conditions because of the great variation in the fuel type capacity of the Nordic countries. If the hydropower potential is good, Sweden and Norway record trade surpluses; if the hydro resource is poor, Denmark and Finland benefit from the electricity trading. There is only one Market Operator (MO), Nord Pool, and five System Operators (SOs), which are Svenska Kraftnät in Sweden, Fingrid in Finland, Statnett in Norway, Eltra in western Denmark, and Elkraft System in eastern Denmark [9]. There are separate regulatory agencies in the four countries. The MO is in principle only responsible for facilitating the trade of electricity as the commodity, but within the physical constraints set by the SO. The operation of the physical system is the sole responsibility of the SO. Further, the market participants are given the freedom and responsibility of controlling (scheduling) their resources, and have to optimize the utilization of their physical and contractual assets. Transmission system operations are organized on a national basis for the Nordic countries. The five TSOs in the Nordic area are the owners of the respective main national grids. The national Transmission System Operators (TSOs) are responsible for reliability and balance settlements.

The Elspot market is formed as a day-ahead physical-delivery power market, and the deadline for submitting bids for the following day's delivery hours is fixed as 12 am (noon). There are three types of bids available in Elspot: the hourly bid or single bid, the block bid and the flexible hourly bid. Participants can submit bids to Nord Pool Spot electronically either through EDIEL communication or through the internet application ElWeb. Nord Pool offers futures contracts for one to nine days ahead and for one to six weeks ahead in time. These futures contracts are settled daily. All these futures and forward contracts use the daily average system price as reference. There are also contracts to hedge zonal price differences, either one quarter or one year ahead. Prices for real time are determined by the marginal bid, as in the day-ahead spot market. The real-time market is also known as the Regulated Power Market in Norway. Nord Pool PX has a market share of 43% of the physical Nordic demand; the remaining 57% is traded bilaterally. Nord Pool also operates a trading platform for financial derivatives as well as a clearing house for bilateral contracts.

Congestion management is done by market splitting, i.e. resolving congestion in the day-ahead market, and counter trade, i.e. resolving congestion in real time. A Point of Connection tariff structure is followed to promote space. The UI pricing mechanism is followed for deviation from schedule. Dr. Per Christer, Senior Vice President, Nord Pool Consultancy, has given the Nord Pool Market Model (shown in fig.3).

Figure 3. Nord Pool Model as given by Dr. Per Christer

2.6 Pennsylvania New Jersey Maryland (PJM) Market
The Pennsylvania New Jersey Maryland
interconnection (PJM) has been a pool between the three founding utilities that has enabled co-ordination of trade since 1927. PJM is responsible for the management of a competitive wholesale electricity market across the control areas of its members and for safe and reliable operation of the unified transmission system. All generators defined as a capacity resource in the PJM system are obliged to submit an offer into the day-ahead PJM market. Market participants are allowed to self-schedule. Transmission system security and reliability considerations are taken into account for the total market clearing operation. A marginal pricing principle is used for market clearing. Each generator at its specific node is paid the market clearing price. All loads at their specific nodes are charged as per the market-clearing price. PJM Interconnection is a non-profit, limited liability company governed by a board of managers. There is a specific unit, the Market Monitoring Unit (MMU), within PJM to oversee the functioning of the market. States have public utility commissions (PUCs) and the Federal Energy Regulatory Commission (FERC) (shown in fig.4). PUCs regulate the generators' and distributors' intra-state utility business. The FERC regulates interstate energy transactions, including wholesale power transactions on transmission lines [10].

Figure 4. PJM Model
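As a toy illustration of the marginal pricing principle described above (a sketch only; the bid data and function below are hypothetical, and real pool clearing such as PJM's or Elspot's also enforces network security constraints omitted here), a single-period uniform-price clearing stacks generator offers in merit order and pays every cleared generator the price of the marginal offer:

```python
def clear_market(offers, demand):
    """offers: list of (generator, quantity_MW, price_per_MWh); demand in MW.
    Returns the dispatched quantities and the uniform market clearing price."""
    dispatch, remaining, price = {}, demand, 0.0
    for gen, qty, p in sorted(offers, key=lambda o: o[2]):  # merit order: cheapest first
        if remaining <= 0:
            break
        taken = min(qty, remaining)
        dispatch[gen] = taken
        remaining -= taken
        price = p  # the last (marginal) offer taken sets the price paid to all
    return dispatch, price

offers = [("hydro", 300, 12.0), ("coal", 400, 25.0), ("gas", 200, 40.0)]
print(clear_market(offers, demand=550))
# -> ({'hydro': 300, 'coal': 250}, 25.0): every cleared MW is paid 25.0 per MWh
```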
2.7 California State
Public and political pressure and higher electricity costs resulted in ending the regulated monopolies of vertically integrated utilities. Deregulation in the US proceeded with approval of the Public Utility Regulating Policies Act in 1978 and the Energy Policy Act (EPAct) in 1992. The Federal Energy Regulatory Commission (FERC) approved non-discriminatory open access to transmission services in 1995. Utilities and regulators, including American Electric Power (AEP), the California Public Utilities Commission (CPUC), the New England Electric System (NEES) and the Pennsylvania/New Jersey/Maryland (PJM) pool, have formulated several proposals for change [11].

The Comprehensive National Energy Strategy was announced in April 1998 and stressed that it relies as much as possible on free markets and competition. An Independent System Operator (ISO) and Power Exchange (PX) were established in 1998 based on the market structure following the CPUC's decision in 1995, which became the watershed on the road towards a competitive market. The proposed California Model was outlined in a filing with FERC on April 29, 1996 with the three investor owned utilities (IOUs) in California, which were Pacific Gas & Electric (PG&E), Southern California Edison and San Diego Gas & Electric. There are three significant characteristics of the California Model:
a) To simplify the transmission pricing scheme, including nodal and congestion charge assessing, a Zonal Approach is applied.
b) A Scheduling Coordinator (SC) or the PX has been introduced to manage multiple separate energy forward markets (each with a supply and demand portfolio).
c) An adjustment bid approach is adopted to perform inter-zonal congestion management.

An Independent System Operator (ISO) and a Power Exchange (PX) have been established in 1998 based on the market structure and rules governed by FERC. Multiple separate energy forward markets, each with a supply and demand portfolio managed by a Scheduling Coordinator (SC) or the PX, have been introduced. The wholesale power exchange and the market participants were totally separated from the ISO.

The Power Exchange is an independent entity for managing bids of energy for each half-hour on a day-ahead basis for ISO dispatch decisions. The ISO controls the power dispatch and the transmission system [12]. It has no financial interest in the Power Exchange or in any generation, load, transmission or distribution facilities. The ISO coordinates the information exchange in an open market and works as per North American Reliability Council (NERC) and Western Systems Coordinating Council (WSCC) reliability standards. The ISO coordinates day-ahead scheduling and balancing for all users of the transmission grid and also procures ancillary services.
Scheduling Coordinators (SCs) aggregate participants in the energy trade and are free to use protocols that may differ from pool rules. SCs run a forward market in which parties can bid to buy and sell energy, submit the preferred schedule to the ISO and work with the latter to adjust schedules when necessary (figure 5).

and a spot market for electricity. The EMCO operates the market through a bidding system and is the clearing house for market transactions. TransPower, the operator and developer of the national grid, performs various services such as provision of a reliable national grid, efficient scheduling and dispatch of generation to satisfy market demand, purchasing of ancillary services, and providing information to the grid users in an open, non-discriminatory manner in the wholesale electricity market.
program to determine the merit order of dispatching generation along with reserve capacity [17].

3.1 Indian Market
The Indian power sector is in a transition phase from a regulated sector to a competitive market. A competitive market provides the participants with the benefits of price determination by market forces, easy access to the market and transparent working; however, it also brings with it many changes that need to be taken care of by the market participants at various stages of development.

The power sector in India has seen significant developments post the enactment of the Electricity Act 2003 [18]. The policy and regulatory efforts have also been synchronized to ensure rapid development of the power markets in the country. In this direction, the Electricity Act 2003 came into force from June 2003 in India. It introduces the concept of trading bulk electricity. The Act has enabled consumers and the distribution companies to have a choice in the selection of electricity supplies. Similarly, the generator also has the choice to select among the distribution companies (shown in figure 6). The Act specifies the provisions for non-discriminatory use of transmission lines or distribution systems or associated facilities with such lines or systems by any licensee or consumer or a person engaged in generation.

Figure 6. Indian Market Structure after Act-2003

At the regional level, there are five regional load dispatch centers (NR, WR, ER, SR, NER) which are operated by Power Grid. At the state level, there are 28 states which are responsible for their generation, transmission and distribution. States purchase power from Independent State Generation Supply (ISGS). Trade between states is facilitated by trading firms like PTC, NVVL and others [19]. Distribution licensees and the Government do not need a trading license, while transmission licensees and load dispatch centers cannot trade power. About 2.5% of the total power generated in the country is being traded presently. There are 17 licensed electricity traders for inter-state trading till now. Most generation capacity (56% State, 36% CGS, 11% Private) is tied up with long term contracts; only the surplus can be traded. The present inter-regional capacity is 11500 MW, which is planned to go up to 37000 MW by 2012. India has a Pool type centralized mechanism for dispatching central generating plants, on a day-ahead basis to meet the forecast demand of the SEBs through the respective RLDCs at the regional level. But it is a mandatory, cost based (non-bid) Power Pool. The ABT mechanism facilitates a balancing market in an inherent way, but it also has some limitations. Current trading occurs between ISGS and the states' STUs/SEBs, between states through international import/export (Bhutan, Nepal), and also by state embedded generators/IPPs/loads and others. The RLDCs organize the day-ahead scheduling of the ISGS [20]. Short term bilateral contracts are taking place through traders, but they lack a formal market and real-time information. Often sellers call for separate tenders for the surplus available with them and traders compete with each other on prices to get the supply. This situation has resulted in prices of traded power moving only in one direction (higher). The root cause is one-sided competition. On the other hand, buyers are not getting an adequate response against tenders called by them. A platform for wide sellers and buyers is not available.

4 CONCLUSION
In the various countries, most of the electric power industry has been going through a process of transition and restructuring, moving away from vertically integrated monopolies and towards more competitive market models since the nineties. This has been achieved by creating competition at each level in the power industry and having a clear separation between its generation, transmission and distribution activities as well. Different countries are im-
References
[1] M. Ilic, F. Galiana and L. Fink, Power System Restructuring: Engineering and Economics, Kluwer Academic Publishers, 1998.
[2] http://class.ece.iastate.edu.
[3] A. K. Izaguirre, "Private participation in the electricity sector - recent trends," World Bank report - Private Sector, December 1998, pp. 5-12.
[4] H. Rudnick, "Chile: Pioneer in deregulation of the electric power sector," IEEE Power Engineering Review, June 1994, pp. 28-30.
[5] M. I. Dussan, "Restructuring the electric power sector in Colombia," IEEE Power Engineering Review, June 1994, pp. 21-22.
[6] C. M. Bastos, "Electric energy sector in Argentina," IEEE Power Engineering Review, June 1994, pp. 13-14.
[7] H. Outhred, "A review of electricity industry restructuring in Australia," Electric Power Systems Research, vol. 44, 1998, pp. 15-25.
[8] R. D. Christie and I. Wangesteen, "The energy market in Norway and Sweden: Introduction," IEEE Power Engineering Review, February 1998, pp. 44-45.
[9] http://www.nordpool.com.
[10] PJM Interconnection LLC, PJM Open Access Transmission Tariff: Schedule-2, Fourth Revised, Vol. 1, Issued February 2001.
[11] Z. Alaywan and J. Alen, "California electric restructuring: a broad description of the development of the California ISO," IEEE Trans. on Power Systems, Vol. 13, No. 4, November 1998, pp. 1445-1451.
[12] http://www.caiso.com.
[13] T. Alvey, D. Goodwin, X. Ma, D. Streiffert and D. Sun, "A security constrained bid-clearing system for the New Zealand wholesale electricity market," IEEE Trans. on Power Systems, Vol. 13, No. 2, May 1998, pp. 340-346.
[14] J. R. Frey, "Restructuring the electric power industry in Alberta," IEEE Power Engineering Review, February 1996, p. 8.
[15] National Grid Electricity Transmission (NGET) plc, The Connection and Use of System Code (CUSC), Issued Feb. 2006.
[16] http://www.nationalgrid.com/uk.
[17] http://www.iitk.ac.in.ime/anoops.
[18] http://www.cerc.org.
[19] http://www.ee.iitb.ac.in/wiki/faculty/sak.
[20] http://www.jbic.go.jp/en/research/report.
wide Index to Computerized Archives) provided a keyword search of most Gopher menu titles in the entire Gopher listings. Jughead (Jonzy's Universal Gopher Hierarchy Excavation And Display) was a tool for obtaining menu information from various Gopher servers.

In 1993, MIT student Matthew Gray created what is considered the first robot, called World Wide Web Wanderer. It was initially used for counting Web servers to measure the size of the Web. The Wanderer ran monthly from 1993 to 1995. Later, it was used to obtain URLs, forming the first database of Web sites, called Wandex.

In 1993, Martijn Koster created ALIWEB (Archie-Like Indexing of the Web). ALIWEB allowed users to submit their own pages to be indexed. According to Koster, "ALIWEB was a search engine based on automated meta-data collection, for the Web."

1.3. How Search Engine Works?
A search engine operates in the following order:
Web crawling
Indexing
Searching

Web search engines work by storing information about many web pages, which they retrieve from the HTML itself. These pages are retrieved by a Web crawler (sometimes also known as a spider), an automated Web browser which follows every link on the site. Exclusions can be made by the use of robots.txt. The contents of each page are then analyzed to determine how it should be indexed (for example, words are extracted from the titles, headings, or special fields called meta tags). Data about web pages are stored in an index database for use in later queries. A query can be a single word. The purpose of an index is to allow information to be found as quickly as possible. Some search engines, such as Google, store all or part of the source page (referred to as a cache) as well as information about the web pages, whereas others, such as AltaVista, store every word of every page they find. This cached page always holds the actual search text since it is the one that was actually indexed, so it can be very useful when the content of the current page has been updated and the search terms are no longer in it.

This problem might be considered to be a mild form of linkrot, and Google's handling of it increases usability by satisfying user expectations that the search terms will be on the returned webpage. This satisfies the principle of least astonishment, since the user normally expects the search terms to be on the returned pages. Increased search relevance makes these cached pages very useful, even beyond the fact that they may contain data that may no longer be available elsewhere.

When a user enters a query into a search engine (typically by using keywords), the engine examines its index and provides a listing of best-matching web pages according to its criteria, usually with a short summary containing the document's title and sometimes parts of the text. The index is built from the information stored with the data and the method by which the information is indexed. Unfortunately, there are currently no known public search engines that allow documents to be searched by date. Most search engines support the use of the Boolean operators AND, OR and NOT to further specify the search query. Boolean operators are for literal searches that allow the user to refine and extend the terms of the search. The engine looks for the words or phrases exactly as entered. Some search engines provide an advanced feature called proximity search, which allows users to define the distance between keywords. There is also concept-based searching, where the search involves using statistical analysis on pages containing the words or phrases you search for. As well, natural language queries allow the user to type a question in the same form one would ask it to a human. A site like this would be ask.com.

2. Importance of Search Engine
The usefulness of a search engine depends on the relevance of the result set it gives back. While there may be millions of web pages that include a particular word or phrase, some pages may be more relevant, popular, or authoritative than others. Most search engines employ methods to rank the results to provide the "best" results first. How a search engine decides which pages are the best matches, and what order the results should be shown in, varies widely from one engine to another.

In cyberspace, there's no place to "turn." I have only my computer screen in front of me. Somehow, I need to find a place to purchase the book I want.
There's no street on my screen so I can't drive around on the Web (I could "surf," but that's hit and miss; even then I still need to know where to start). Sometimes it's obvious: type in the name of the bookstore, add a .COM and it's a pretty good bet you're going to end up where you want to go. But what if it's a specialty bookstore and doesn't have a Web site with an obvious URL?

One solution to this problem is the search engine. In fact, it's probably one of the most widely used methods for navigating in cyberspace. Considering the amount of information that's available from a good search engine, it's similar to having the Yellow Pages, a guide book and a road map all in one.

Search engines can provide much more information than just the URL of a Web site. Typing "books" into the Google search engine returns about 9,270,000 results. If we refine the search to "books, Internet", we end up with about 6,070,000 results. If we know the book's author, let's say "E. Balaguruswamy books", the search engine now returns about 80,500 results within 0.18 seconds (of course, these results will vary).

It is the search engines that finally bring your website to the notice of prospective customers. When a topic is typed for search, nearly instantly the search engine will sift through the millions of pages it has indexed and present you with the ones that match your topic. The matches are also ranked, so that the most relevant ones come first. It is the keywords that play a more important role than any expensive online or offline advertising of your website. Surveys have found that when customers want to find a website for information or to buy a product or service, they find their site in one of the following ways:
1. The first option is that they find the site through a search engine.
2. Secondly, they find the site by clicking on a link from another website or page that relates to the topic in which they are interested.
3. Occasionally, they find a site by hearing about it from a friend or reading about it in an article.
Thus it is obvious that the most popular way to find a site is a search engine.

Table 1.1 Top Ten Search Engines
Search Engine    No. of Respondents    Percentage
Google.com       112                   57.5
Yahoo.com        39                    19.5
Bing.com         13                    6.5
Ask.com          11                    5.5
AOL.com          9                     4.5
AltaVista.com    7                     3.5
Alltheweb.com    4                     2
Lycos.com        3                     1.5
Excite.com       1                     0.5
HotBot.com       1                     0.5

(Chart: Top Ten Search Engines - No. of Respondents (%) per Search Engine)

As you can see from the statistics, Google absolutely dominates the search engine market. Its closest competitor is Yahoo.com, but they seem to be endlessly buying old search technologies that do not provide any innovative techniques. This bodes well for Google's continued dominion.

4. Role of Search Engine in Higher Education
To conduct an effective search, the researcher must understand the structure of the various search engines. Search engines do not always provide the right
information, but rather often subject the user to a deluge of disjointed, irrelevant data.

All search engines support single-word queries. The user simply types in a keyword and presses the search button. Most engines also support multiple-word queries. However, the engines differ as to whether and to what extent they support Boolean operators (such as "and" and "or") and the level of detail supported in the query. More specific queries will enhance the relevance of the user's results.
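As an illustration of the indexing and Boolean querying described above (a minimal sketch with made-up documents; real engines add crawling, ranking, stemming and much more), the snippet below builds an inverted index and answers AND/OR keyword queries against it:

```python
# Minimal inverted index with Boolean AND / OR queries (illustrative only).
docs = {
    1: "books on the internet",
    2: "specialty bookstore web site",
    3: "internet search engine basics",
}

index = {}  # word -> set of document ids
for doc_id, text in docs.items():
    for word in text.lower().split():
        index.setdefault(word, set()).add(doc_id)

def search(query, operator="AND"):
    """Return ids of documents matching all (AND) or any (OR) query words."""
    sets = [index.get(w, set()) for w in query.lower().split()]
    if not sets:
        return set()
    return set.intersection(*sets) if operator == "AND" else set.union(*sets)

print(search("internet books"))          # AND -> {1}
print(search("internet books", "OR"))    # OR  -> {1, 3}
```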
4.1. Variations on the Search Engine
A search engine is not the same as a "subject directory." A subject directory does not visit the web, at least not by using the programmed, automated tools of a search engine. Websites must be submitted to a staff of trained individuals entrusted with the task of reviewing, classifying, and ranking the sites. Content has been screened for quality, and the sites have been categorized and organized so as to provide the user with the most logical access. Their advantage is that they typically yield a smaller, but more focused, set of results.

Table 2 Search results (number of results returned per keyword)
Search Engine   JAVA          Bharati Vidyapeeth   Notes on Indian Railway
Google          56,000,000    2,580,000            272,000
Yahoo           59,600,000    17,200,000           293,000
Bing            84,100,000    1,600,000            208,000
AltaVista       92,800,000    17,300,000           64,200
AOL             3,820,000     2,210,000            95,100

Respondents spend almost 4 hours a day online (see Table 3), and the majority of that time is spent at work (see Table 4).

Table 3 Daily Time spent online
Daily Time spent online   Percent
< 1 hour                  10.4
1-2 hours                 24.8
2-3 hours                 33.1
3+ hours                  31.1

Table 4 Work vs Personal Internet Use by the Respondents
Work vs Personal Internet Use   Percent
0% personal / 100% work         7.8
50% personal / 50% work         68.6
75% personal / 25% work         21.2
100% personal / 0% work         2.4

Respondents were asked the first place they would go online to learn more about the product or service they were considering. Search was the clear winner over manufacturers' sites and information portals, with 66.3% of respondents (see Table 5).

Table 5 First place to find out educational information
Where would be the first place you would go online
to find out educational information?                Percent
Search Engine                                        66.3
Independent Web site                                 21.6
Educational Portal                                   8.3
Other                                                4.8
Search Engine   Respondents (%)
Google          90.9
Yahoo           4.7
MSN             4.3
AltaVista       0.3
Lycos           0.2
5. Conclusion
International Journal of Scientific & Engineering Research, Volume 2, Issue 4, April-2011
ISSN 2229-5518
Abstract: Cancers are generally caused by abnormalities in the genetic material of the transformed cells. Cancer has a reputation as a deadly disease; hence cancer research is an intense scientific effort to understand the disease. Classification is a machine learning technique used to predict group membership for data instances. There are several classification techniques such as decision tree induction, Bayesian classifiers, k-nearest neighbor (k-NN), case-based reasoning, support vector machines (SVM), genetic algorithms, etc. Feature selection for classification of cancer data aims to discover the gene expression profiles of diseased and healthy tissues and to use this knowledge to predict the health state of a new sample. It is usually impractical to go through all the details of the features before picking the right ones. This paper provides a model for feature selection using signal-to-noise ratio (SNR) ranking. Basically, we have proposed two approaches to feature selection. In the first approach, the genes of the microarray data are clustered by k-means clustering and then SNR ranking is applied to get the top ranked features from each cluster, which are given to two classifiers, SVM and k-NN, for validation. In the second approach, the features (genes) of the microarray data set are ranked by applying SNR ranking alone, and the top scored features are given to the classifiers and validated. We have tested the Leukemia data set with the proposed approaches, using the 10-fold cross validation method to validate the classifiers. The 10-fold validation results of the two approaches are compared with hold-out validation results, and again with the leave-one-out cross validation (LOOCV) results of different approaches in the literature. From the experimental evaluation we obtained 99.3% accuracy in the first approach for both the k-NN and SVM classifiers with five genes and the 10-fold cross validation method. This accuracy is compared with the accuracy of different methods available in the literature for the leukemia data set with LOOCV, where only the multiple-filter-multiple-wrapper approach gives 100% accuracy in LOOCV with the leukemia data set.
1 INTRODUCTION
section 1, section 2 deals with the preliminary concepts of microarrays, classification techniques, SNR ranking and k-means clustering. Section 3 deals with related work on feature selection for cancer data using the SNR approach, section 4 deals with the proposed model, section 5 contains the experimental evaluation, section 6 explains the validation and comparison of our work, and section 7 concludes the paper.

2 PRELIMINARIES
2.1 Microarray
All cells in an organism carry the same genetic information and only a subset of the genes is active (expressed). Analyzing the genes with respect to whether and to what degree they are expressed can help characterize and understand their functions. It can further be analyzed how the activation level of genes changes under different conditions, such as for specific diseases [3][4].
Microarray data are generally high dimensional data having a large number of genes in comparison to the number of samples or conditions. There are many efficient methods for the analysis of microarray data such as clustering, classification and feature selection. Feature selection is the preprocessing task for both clustering and classification. Different types of experiments can be done by microarray technology. Microarray technology measures the expression level of genes. That can be used in diagnosis, through the classification of different types of cancerous genes leading to a cancer type [5]. Basically, the genes of microarray data are treated as features; a set of features (genes) gives rise to a pattern. If we could get the correct pattern from the data set, it is easier to classify an unknown sample based on that pattern.

2.2 Classification Techniques Revisited
Our study is mainly based on feature selection and pattern classification for gene expression data related to cancer diagnosis. There are several classification techniques, such as SVM, k-NN, neural networks, naive Bayes, decision trees, random forests and top scoring pairs.

k-NN: k-NN is the simplest ML technique for classifying objects based on the closest training examples in the feature space [6]. It is instance based learning. It gathers all training data and classifies, often via a majority vote, a new data point with respect to the class of its k nearest neighbors in the given data set. k-NN obtains the neighbors for each data point by using the Euclidean or Mahalanobis distance between pairs of data items. The major advantage of k-NN is its simplicity.

Support Vector Machine (SVM): Support vector machines (SVM) are supervised learning techniques which analyze data and recognize patterns, used for statistical classification and regression analysis [7]. An SVM training algorithm builds a model that predicts whether a new sample falls into one category or the other. An SVM model is a representation of the samples as points in space, mapped so that the samples of the separate categories are divided by a clear gap that is as wide as possible. New samples are then mapped into that same space and predicted to belong to a category based on which side of the gap they fall on. A support vector machine constructs a hyperplane, or a set of hyperplanes, in a high or infinite dimensional space, which can be used for classification, regression or other tasks.

2.3 k-means Clustering Algorithm
Input: k = number of clusters
       P = a data set containing n features (n = number of genes)
1. Select the number of clusters k.
2. Randomly choose k features from the data set as the initial cluster centers.
3. Repeat until the termination criteria are fulfilled:
   3.1 Assign each feature to one of the clusters according to the similarity measure.
   3.2 Update the cluster means.
4. Until there is no change in the value of the cluster means.
In this approach we have used Euclidean distance as the distance measure.
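The clustering step above can be prototyped in a few lines. The following is a minimal Python/NumPy sketch of the k-means procedure just described, using Euclidean distance; the function and variable names are illustrative only (the paper's experiments were carried out in MATLAB).

```python
import numpy as np

def kmeans(P, k, max_iter=100, seed=0):
    """Cluster the rows of P (one row per gene/feature) into k groups
    using Euclidean distance, as in steps 1-4 of the algorithm above."""
    rng = np.random.default_rng(seed)
    # Step 2: randomly choose k features as the initial cluster centers.
    centers = P[rng.choice(len(P), size=k, replace=False)]
    labels = np.zeros(len(P), dtype=int)
    for _ in range(max_iter):
        # Step 3.1: assign each feature to the nearest center.
        dists = np.linalg.norm(P[:, None, :] - centers[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)
        # Step 3.2: update the cluster means (keep old center if a cluster empties).
        new_centers = np.array([P[new_labels == j].mean(axis=0)
                                if np.any(new_labels == j) else centers[j]
                                for j in range(k)])
        # Step 4: stop when assignments and means no longer change.
        if np.array_equal(new_labels, labels) and np.allclose(new_centers, centers):
            break
        labels, centers = new_labels, new_centers
    return labels, centers
```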
2.4 Signal-to-Noise Ratio
The signal to noise ratio (SNR) test identifies the expression patterns with a maximal difference in mean expression between two groups and minimal variation of expression within each group [8]. In this method, genes are first ranked according to their expression levels using the SNR test statistic. The SNR is defined as follows:

    SNR = (μ1 - μ2) / (σ1 + σ2)        (1)

where μ1 and μ2 denote the mean expression values for sample class 1 and class 2 respectively, and σ1 and σ2 are the standard deviations for the samples in each class.
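Equation (1) translates directly into code. The sketch below (Python/NumPy, illustrative names; it assumes a gene-by-sample expression matrix X and binary class labels y, and ranks by the magnitude of the score) computes the SNR for every gene and returns the indices of the top ranked genes.

```python
import numpy as np

def snr_ranking(X, y, top_n=5):
    """Rank genes by |SNR| = |(mu1 - mu2) / (sigma1 + sigma2)| as in Eq. (1).
    X: array of shape (n_genes, n_samples); y: array of 0/1 class labels."""
    X1, X2 = X[:, y == 0], X[:, y == 1]
    mu1, mu2 = X1.mean(axis=1), X2.mean(axis=1)
    s1, s2 = X1.std(axis=1), X2.std(axis=1)
    snr = (mu1 - mu2) / (s1 + s2 + 1e-12)   # small epsilon guards division by zero
    order = np.argsort(-np.abs(snr))        # most discriminative genes first
    return order[:top_n], snr
```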
3 RELATED WORK
Wai-Ho Au et al. [9] present an attribute clustering method which is able to group genes based on their interdependence to mine meaningful patterns from microarray data. The gene selection methods used are attribute clustering, t-value, k-means, biclustering, MRMR and RBF, and the classifiers used are C5.0, neural networks, nearest neighbor and naive Bayes. The data sets used for the experiments are Colon cancer and Leukemia. Supoj Hengpraprohm et al. [10] proposed a method which yields higher accuracy than using SNR ranking alone and higher than using all of the genes in classification: selection of informative features using k-means and SNR ranking. DLBCL, Ovarian, Colon, Prostate, Breast cancer, CNS, Leukemia and Lung cancer are the data sets used for the experiments. Hualong Yu et al. [11] demonstrated that a modified discrete PSO is a useful tool for selecting marker genes and mining high dimensional data. SNR ranking is used to select top ranked informative genes; then PSO is applied to select a few marker genes. SVM is used for evaluation of the prediction. The Colon cancer data set is used for the experiment. Yukyee Leung et al. [12] make use of multiple filters and multiple wrappers to improve the accuracy of the classifiers. Some of the MFMW selected genes have been confirmed to be biomarkers. The multiple filters are SNR, Pearson correlation and t-statistics; the multiple wrappers are SVM, WV and 3NN, and the data sets used are LEU [13], COL62 [14], BRER49 [15], LYM77 [16], PROS102 [17] and LUNG182 [18]. Shamsul Huda et al. [19] proposed a hybrid wrapper and filter feature selection algorithm by introducing the filter's feature ranking score in the wrapper stage to get a more compact feature set. They hybridized a mutual information based maximum relevance filter ranking method with an artificial neural network based wrapper approach to obtain the accuracy. Chenn-Jung Huang et al. [20] have undertaken a comprehensive study on the capability of a probabilistic neural network associated with the SNR scoring method for cancer classification. The experimental results show that the combination of PNN with the SNR method can achieve better results for the Leukemia data set.

Fig.1. Model for the comparison of accuracies of SVM and kNN in two approaches with 10 fold cross validation method (apply SNR ranking to clusters; select top ranked genes; collect top scored genes from clusters; train the classifiers with filtered genes; validation; accuracy)

5 EXPERIMENTAL EVALUATION
We have used the leukemia data set of cancer microarray data from the Biological data analysis web site [21]. The data set contains 7,129 genes and 72 samples (47 ALL, 25 AML). For our approach we have taken 50 genes and 72 samples (47 class 1, 25 class 2) of the original data set. The experiment is done in MATLAB version 7.6.0.324 (R2008a), Windows XP, on a PC with an Intel Pentium dual-core CPU. We have implemented two different approaches of feature selection used for the classification model to discover differentially expressed genes.

5.1 First Approach for Feature Selection
Step 1: First, the features of the data are clustered by applying the k-means clustering algorithm. By applying the clustering technique we can group similar types of features in the same cluster, so that the best feature from each cluster can be selected. In our approach we have tested the model with 5, 10 and 20 clusters.
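The experiments in the paper were run in MATLAB; purely as an illustration of the two approaches (k-means plus SNR versus SNR alone, each followed by SVM and k-NN with 10-fold cross validation), the following Python sketch uses scikit-learn together with the snr_ranking and kmeans sketches given earlier. The function names, parameters and the use of scikit-learn are assumptions, not the authors' code.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def first_approach(X, y, k_clusters=5, genes_per_cluster=1):
    """Approach 1: cluster genes with k-means, then pick the top SNR gene(s)
    from each cluster and evaluate SVM and k-NN with 10-fold CV."""
    labels, _ = kmeans(X, k_clusters)            # X: (n_genes, n_samples)
    selected = []
    for c in range(k_clusters):
        idx = np.where(labels == c)[0]
        top, _ = snr_ranking(X[idx], y, top_n=genes_per_cluster)
        selected.extend(idx[top])
    return evaluate(X[selected].T, y)            # samples become rows

def second_approach(X, y, top_n=5):
    """Approach 2: rank all genes by SNR alone and keep the top-scored ones."""
    top, _ = snr_ranking(X, y, top_n=top_n)
    return evaluate(X[top].T, y)

def evaluate(features, y):
    """10-fold cross validation accuracy for SVM and k-NN."""
    return {name: cross_val_score(clf, features, y, cv=10).mean()
            for name, clf in [("SVM", SVC(kernel="linear")),
                              ("kNN", KNeighborsClassifier(n_neighbors=3))]}
```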
TABLE 1
ACCURACY OF SVM AND K-NN IN FIRST METHOD WITH DIFFERENT CLUSTERS

From the above Table 1 we can see that both the SVM and kNN classifiers give the same accuracy, 99.3%, with 5 genes in the 10-fold cross validation method. The comparison of the accuracy of the two classifiers is given below in fig.2.

Method     Data set   No of genes   10fold CV accuracy (%)
SNR+SVM    Leukemia   5             97.5
SNR+SVM    Leukemia   10            96.1
SNR+SVM    Leukemia   20            91.4
SNR+k-NN   Leukemia   5             95.4
SNR+k-NN   Leukemia   10            90.0
SNR+k-NN   Leukemia   20            98.1
Method            Data set   No of clusters   Hold out validation accuracy (%)
Kmeans+SNR+SVM    Leukemia   5                100
Kmeans+SNR+SVM    Leukemia   10               96
Kmeans+SNR+SVM    Leukemia   20               96
Kmeans+SNR+kNN    Leukemia   5                96
Kmeans+SNR+kNN    Leukemia   10               83
Kmeans+SNR+kNN    Leukemia   20               87

Fig.5. Accuracy of k-NN in first approach with hold out and 10fold cross validation

TABLE 4
HOLD OUT VALIDATION ACCURACY OF SVM AND K-NN IN SECOND METHOD

Method     Data set   No of genes   Hold out validation accuracy (%)
SNR+SVM    Leukemia   5             96
SNR+SVM    Leukemia   10            96
SNR+SVM    Leukemia   20            96
SNR+kNN    Leukemia   5             96
SNR+kNN    Leukemia   10            96
SNR+kNN    Leukemia   20            96
Method                  Accuracy (%) LOOCV
MFMW [16]               100
MLP+SNR [11]            76.5
SVM(linear)+SNR [11]    58.8
kNN(Pearson)+SNR [11]   97.1
GPC+clus [11]           90.3

REFERENCES
[1] Gregory Piatetsky-Shapiro, Pablo Tamayo, "Microarray Data Mining: Facing the Challenges," SIGKDD Explorations, Volume 5, Issue 2, pp. 1-5, June 2003.
[2] Minca Mramor, Gregor Leban, Janez Demšar and Blaž Zupan, "Visualization-based cancer microarray data classification analysis," Bioinformatics, Vol. 23, No. 16, pp. 2147-2154, 2007.
[3] Wolfgang Huber, Anja von Heydebreck, Martin Vingron, "Analysis of microarray gene expression data," Handbook of Statistical Genetics, 2nd edition, Wiley, 2003.
[4] Hong-Hai Do, Toralf Kirsten, Erhard Rahm, "Comparative Evaluation of Microarray-based Gene Expression Databases," GI-Proceedings, pp. 26-34.
[5] Ana C. Lorena, Ivan G. Costa, Marcilio C. P. de Souto, "On the complexity of gene expression classification data sets," Eighth International Conference on Hybrid Intelligent Systems, pp. 825-830, 2008.
[6] V. N. Vapnik, Statistical Learning Theory, Wiley-Interscience Publications, 1998.
[7] V. N. Vapnik, The Nature of Statistical Learning Theory, Springer-Verlag, 1995.
[8] Miroslava Cuperlovic-Culf, Nabil Belacel, Rodney J. Ouellette, "Determination of tumour marker genes from gene expression data," DDT, Vol. 10, Number 6, pp. 429-437, 2005.
[9] Wai-Ho Au, Keith C. C. Chan, Andrew K. C. Wong, Yang Wang, "Attribute clustering for grouping," IEEE/ACM Transactions on Computational Biology and Bioinformatics, Vol. 2, No. 2, pp. 83-101, 2005.
[10] Supoj Hengpraprohm, Prabhas Chongstitvatana, "Selecting Informative Genes from Microarray Data for Cancer Classification with Genetic Programming Classifier using K-Means Clustering and SNR Ranking," Frontiers in the Convergence of Bioscience and Information Technologies, pp. 211-216, 2007.
[11] Hualong Yu, Guochang Gu, Haibo Liu, Jing Shen, Changming Zhu, "A Novel Discrete Particle Swarm Optimization Algorithm for Microarray Data-based Tumor Marker Gene Selection," International Conference on Computer Science and Software Engineering, pp. 1057-1060, 2008.
[12] Yukyee Leung, Yeungsam Hung, "A Multi-Filter-Multi-Wrapper Approach to Gene Selection and Microarray Data Classification," IEEE/ACM Transactions on Computational Biology and Bioinformatics, Vol. 7, No. 1, pp. 108-117, 2010.
[13] T. R. Golub, D. K. Slonim, P. Tamayo, C. Huard, M. Gaasenbeek, J. P. Mesirov, H. Coller, M. L. Loh, J. R. Downing, M. A. Caligiuri, C. D. Bloomfield, and E. S. Lander, "Molecular Classification of Cancer: Class Discovery and Class Prediction by Gene Expression Monitoring," Science, vol. 286, no. 5439, pp. 531-537, 1999.
[14] U. Alon, N. Barkai, D. A. Notterman, K. Gish, S. Ybarra, D. Mack, and A. J. Levine, "Broad Patterns of Gene Expression Revealed by Clustering Analysis of Tumor and Normal Colon Tissues Probed by Oligonucleotide Arrays," Proc. Natl Academy of Sciences USA, vol. 96, no. 12, pp. 6745-6750, 1999.
[15] M. West, C. Blanchette, H. Dressman, E. Huang, S. Ishida, R. Spang, H. Zuzan, J. A. Olson Jr., J. R. Marks, and J. R. Nevins, "Predicting the Clinical Status of Human Breast Cancer by Using Gene Expression Profiles," Proc. Natl Academy of Sciences USA, vol. 98, no. 20, pp. 11462-11467, 2001.
[16] M. A. Shipp, K. N. Ross, P. Tamayo, A. P. Weng, J. L. Kutok, R. C. T. Aguiar, M. Gaasenbeek, M. Angelo, M. Reich, G. S. Pinkus, T. S. Ray, M. A. Koval, K. W. Last, A. Norton, T. A. Lister, J. Mesirov, D. S. Neuberg, E. S. Lander, J. C. Aster, and T. R. Golub, "Diffuse Large B-Cell Lymphoma Outcome Prediction by Gene-Expression Profiling and Supervised Machine Learning," Nature Medicine, vol. 8, pp. 68-74, 2002.
[17] D. Singh, P. Febbo, K. Ross, D. Jackson, J. Manola, C. Ladd, P. Tamayo, A. Renshaw, A. D'Amico, and J. Richie, "Gene Expression Correlates of Clinical Prostate Cancer Behavior," Cancer Cell, vol. 1, no. 2, pp. 203-209, 2002.
[18] G. J. Gordon, R. V. Jensen, L. L. Hsiao, S. R. Gullans, J. E. Blumenstock, S. Ramaswamy, W. G. Richards, D. J. Sugarbaker, and R. Bueno, "Translation of Microarray Data into Clinically Relevant Cancer Diagnostic Tests Using Gene Expression Ratios in Lung Cancer and Mesothelioma," Cancer Research, vol. 62, no. 17, pp. 4963-4967, 2002.
[19] Shamsul Huda, John Yearwood, Andrew Stranieri, "Hybrid wrapper-filter approach for input feature selection using Maximum Relevance and Artificial Neural Network Input Gain Measurement Approximation," Fourth International Conference on Network and System Security, pp. 442-449, 2010.
[20] Chenn-Jung Huang, Wei-Chen Liao, "A Comparative Study of Feature Selection Methods for Probabilistic Neural Networks in Cancer Classification," Proceedings of the 15th IEEE International Conference on Tools with Artificial Intelligence (ICTAI'03), Vol. 3, pp. 1082-3409, 2003.
[21] http://sdmc.lit.org.sg/GEDatasets/
[22] Debahuti Mishra, Barnali Sahu, "A signal to noise classification model for identification of differentially expressed genes from gene expression data," 3rd International Conference on Electronics Computer Technology, 2011 (Accepted).
International Journal of Scientific & Engineering Research Volume 2, Issue 4, April-2011
ISSN 2229-5518
Abstract: To measure the overall security of a network, a crucial issue is to correctly compose the measures of individual components. Incorrect compositions may lead to misleading results. The detection of vulnerabilities and security level estimation are the main and important tasks in protecting computer networks. To obtain correct compositions of individual measures, we need to first understand the interplay between network components, for example, how vulnerabilities can be combined by attackers in advancing an intrusion. This paper considers the models and architecture of intelligent components intended for analyzing computer network vulnerabilities and assessing the security level of computer networks.
Index Terms: Security Metrics, Vulnerability Assessment, Attack Model, Security Level Assessment.
1 INTRODUCTION
puter network. Section 6 gives an overview of the experimental approach. Section 7 draws the conclusion.

2. RELATED WORK
At the design stage, SAS should operate with a model of the analyzed computer network generated from preliminary or detailed design specifications. The main approaches to vulnerability assessment and security analysis can be based on analytic calculation and imitation (simulation) experiments. Analytical approaches use as a rule different risk analysis methods [2, 11, 22, 24, 30, etc.]. Imitational approaches are based on modeling and simulation of network specifications, fault (attack) trees, graph models, etc. [9, 10, 11, 13, 17, 21, 32, 26, 27, 28, 31, etc.]. There are a lot of papers which consider different techniques of attack modeling and simulation: Colored Petri Nets [15], the state transition analysis technique [12, 14], simulating intrusions in sequential and parallelized forms [5], the cause-effect model [6], conceptual models of computer penetration [29], descriptive models of the network and the attackers [33], structured tree-based description [7, 19], modeling survivability of networked systems [18], object-oriented discrete event simulation [3], the requires/provides model for computer attacks [32], situation calculus and goal-directed procedure invocation [8], using and building attack graphs for vulnerability analysis [13, 31], etc.
As one can see from our review of relevant works, the field of imitational approaches for vulnerability assessment and security level evaluation has been delivering significant research results. [25] quantifies vulnerability by mapping known attack scenarios into trees. [16] suggests a game-theoretic method for analyzing the security of computer networks. The authors view the interactions between an attacker and the administrator as a two-player stochastic game and construct a model for the game. The approach offered in [34] is intended for performing penetration testing of formal models of networked systems for estimating security metrics. The approach consists of constructing formal state/transition models of the networked system. The authors build randomly constructed paths through the state-space of the model and estimate global security related metrics as a function of the observed paths. [31] analyzes risks to specific network assets and examines the possible consequences of a successful attack. As input, the analysis system requires a database of common attacks, specific network configuration and topology information, and an attacker profile. Using graph methods they identify the attack paths with the highest probability of success. [10] suggests global metrics which can be used to analyze and proactively manage the effects of complex network faults and attacks, and recover accordingly.
ate on the stage of exploitation. Examples are NetRecon, bv-Control for Internet Security (HackerShield), Retina, Internet Scanner, CyberCop Scanner, Nessus Security Scanner, etc. The basic lacks of existing SAS are as follows: (1) the use of a scanner does not allow answering the main question concerning policy-based systems - whether what is revealed during scanning corresponds to the security policy; (2) the quality of the obtained result essentially depends on the size and adequacy of the vulnerability bases; (3) implementation of active vulnerability analysis on a computer system functioning in a regular mode can lead to failures in running applications, therefore not all systems can be tested by active vulnerability analysis.

3. ARCHITECTURE OF SECURITY ANALYSIS SYSTEM
The architecture of the security analysis system (SAS), given in fig.1, contains the following components: (1) user interface; (2) module of malefactor's model realization; (3) module of script set (attack scenarios) generation; (4) module of scenario execution; (5) data and knowledge repository; (6) module of data and knowledge repository updating; (7) module of security level assessment; (8) report generation module; (9) network interface.

Fig.1. Generalized Architecture of SAS
The knowledge base about the analyzed system includes data about the architecture and particular parameters of the computer network (for example, the type and version of the OS, a list of opened ports, etc.) which are needed for script generation and attack execution. This data can usually be received by a malefactor using reconnaissance actions and methods of social engineering.
The knowledge base of operation (functionality) rules contains meta- and low-level rules of IF-THEN type determining SAS operation on different levels of detail. Meta-level rules define attack scenarios on higher levels. Low-level rules specify attack actions based on an external vulnerability database. The IF-part of each rule contains the (meta-)action goal and (or) condition parts. The goal is chosen in accordance with a scenario type, an attack intention and a higher level goal (specified in a meta-rule of a higher level). The condition is compared with the data from the database about the analyzed system. The THEN-part contains the name of the attack action which can be applied and (or) the link to an exploit. An example of one of the rules is: IF GOAL = Denial of service AND OS_TYPE = Windows_XP AND OS_VERSION = 4 THEN ping_of_death (PoD). Each rule is marked with an identifier which allows us to determine the achieved malefactor's goal. For example, the rule mentioned above defines a denial of service (DoS) attack, ping_of_death.
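As an illustration only (the data structures and names below are hypothetical, not the authors' implementation), such IF-THEN functionality rules can be sketched as simple condition/action records that are matched against the knowledge base about the analyzed system:

```python
# A minimal sketch of IF-THEN operation rules matched against the KB
# describing the analyzed host; rule ids, fields and actions are illustrative.
RULES = [
    {"id": "R17",
     "if": {"GOAL": "Denial of service", "OS_TYPE": "Windows_XP", "OS_VERSION": 4},
     "then": "ping_of_death"},          # the example rule quoted in the text above
    {"id": "R42",
     "if": {"GOAL": "Gaining privileges", "SERVICE": "ftp"},
     "then": "ftp_brute_force"},
]

def applicable_actions(goal, host_kb):
    """Return attack actions whose IF-part matches the current goal
    and the data gathered about the analyzed system."""
    facts = dict(host_kb, GOAL=goal)
    return [r["then"] for r in RULES
            if all(facts.get(key) == value for key, value in r["if"].items())]

# Example: a host profile obtained by reconnaissance actions.
host = {"OS_TYPE": "Windows_XP", "OS_VERSION": 4, "SERVICE": "ftp"}
print(applicable_actions("Denial of service", host))   # -> ['ping_of_death']
```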
The DB of attack tools (exploits) contains exploits and the parameters of their execution. The choice of a parameter is determined by the data in the KB about the analyzed system. For example, the program for ftp brute force password cracking needs to know the ftp server port, which can be determined by port scanning.
The module of scriptset (attack scenarios) generation selects the data about the analyzed system from the data and knowledge repository, generates the attack scriptset based on the operation (functionality) rules, monitors scriptset execution and scriptset updating at runtime, and updates the data about the analyzed system.
The module of scenario execution selects an attack action and exploits, prognoses a possible feedback from the analyzed computer network, launches the exploit and recognizes the response of the analyzed computer network. In case of interaction with a real computer network, real network traffic is generated. In case of operation with the model of the analyzed system, two levels of attack simulation are provided: (1) at the first level each low-level action is represented by its label describing the attack type and (or) the used exploit, and also the attack parameters; (2) at the second (lower) level each low-level action is specified by corresponding packets of the network, transport and application levels of the Internet protocol stack.
The network interface provides: (1) in case of operation with the model of the analyzed system, transferring identifiers and parameters of attacks (or network packets under more detailed modeling and simulation), and also receiving attack results and system reactions; (2) in case of interaction with a computer network, transferring, capturing and the preliminary analysis of network traffic. The preliminary analysis includes: (1) parsing of packets according to connections and delivery of information about packets (including data on exposed flags, payload, etc.) and connections; (2) acquisition of data about attack results and system reactions, and also the values of some statistics reflecting the actions of SAS at the level of network packets and connections.
The module of security level assessment is based on the developed taxonomy of security metrics. It is the main module, which calculates security metrics based on the results of attack actions. The module of data and knowledge repository updating downloads open vulnerability databases (for example, OSVDB - the open source vulnerability database) and translates them into the KB of operation (functionality) rules of low level.

4. GENERALIZED ATTACK MODEL
The model is defined as a hierarchical structure that consists of several levels, as shown in fig.2. The three higher levels of the attack model correspond to an attack scriptset, a script and script stages. The scriptset level defines a set of general malefactor's intentions. The second, script, level defines only one malefactor's intention. Third, the set of script stages can contain the following elements: reconnaissance, implantation (initial access to a host), gaining privileges, threat realization, covering tracks and backdoor creation. Lower levels serve for malefactor sub-goal refinement. The lowest level describes the malefactor's low-level actions directly executing different exploits.

Fig.2. Generalized Attack Model
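To make the hierarchy concrete, the sketch below (illustrative names only; one reading of the model described above, not the authors' implementation) represents a scriptset as nested data, from intentions down to the low-level actions that execute exploits:

```python
# Hypothetical encoding of the hierarchical attack model described above:
# scriptset -> scripts (one intention each) -> stages -> low-level actions.
scriptset = [
    {   # one script = one malefactor intention
        "intention": "Denial of service against the file server",
        "stages": [
            {"stage": "reconnaissance",
             "actions": ["port_scan", "os_fingerprint"]},
            {"stage": "threat realization",
             "actions": ["ping_of_death"]},      # lowest level: concrete exploits
        ],
    },
]

def low_level_actions(scriptset):
    """Flatten the hierarchy into the sequence of directly executable actions."""
    return [action
            for script in scriptset
            for stage in script["stages"]
            for action in stage["actions"]]

print(low_level_actions(scriptset))
# -> ['port_scan', 'os_fingerprint', 'ping_of_death']
```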
5. ANALYZED NETWORK MODEL
The analyzed computer network model plays a role in evaluating attack results and defining the reaction of the system. It contains the following basic components, as shown in fig.3: network interface; module of malefactor's action recognition; module of attack result evaluation; module of system response generation; database of the analyzed system; database of attack signatures.
The network interface provides: (1) receiving identifiers and parameters of attacks; (2) transferring attack results and system reactions.
The module of malefactor actions recognition is necessary for the realization of detailed attack modeling and simulation, i.e. when malefactor actions are represented as network packets. The functioning of this module is based on a signature method: the data received from the network interface are compared to signatures of attacks from the database of attack signatures. The outputs of the module are identifiers and parameters of attacks.
The knowledge base about the analyzed system is created from the specification of the analyzed system and structurally coincides with the KB about the analyzed system described in section 2. The difference between these knowledge bases consists in the stored data: the KB of the model of the analyzed system contains the results of translating the specifications of the analyzed system; the KB related to the generalized architecture of SAS is initially empty and is filled during the execution of attack scripts.

Fig.3. Model of Analyzed Computer Network

5. SECURITY LEVEL ESTIMATION MODEL
The security level evaluation is described by a multi-level hierarchy of security metrics. The taxonomy of security metrics is based on the attack model developed. This taxonomy consists of notions of attack realization actions, as well as the notions of types and categories of assets. There are four levels of the security metrics sub-taxonomy based on attack realization actions (fig. 4): (1) an integrated level; (2) a script level; (3) a level of the script stages; (4) a level of the threat realization. Each higher level contains all metrics of the lower levels (the arrow in fig.4 shows the direction of metrics calculation). Examples of security metrics for this taxonomy are as follows: number of total and successful attack scenarios; number of total and successful stages of attack scenarios; number of total and successful malefactor attacks on a certain level of the taxonomy hierarchy; number of attacks blocked by existing security facilities; number of discovered and used vulnerabilities; number of successful scenario implementation steps; number of different paths of successful scenario implementation, etc.

6. EXPERIMENTAL APPROACH
We have conducted a test based on simulation. The experiment is carried out using OPNET MODELER 14.0, which allows us to simulate and form virtual computer networks. In our test the network consists of three subnets:
Internet area including hosts Int_host and ISP_DNS with IP address 192.17.300.*
A logical sub network including one server with IP address 192.168.0.*
Local area network with IP addresses 100.0.0.*
The basic components of the network (Fig.5) are: (1) Internet host with integrated system of software; (2) Firewall 1 between the logical subnetwork and the Internet; (3) Mail
server; (4) File server ;(5) Firewall 2 between logical sub- According to the malefactors model realization SAS
network and LAN ; (6) A Local DNS server, services the creates one script consisting of the following two stages: (1)
clients from LAN; (7) An authentication, , authorization reconnaissance and (2) threat realization (denial of service).
and accounting server; (8) Workstations 1 and 2 . We now We calculate the confidentiality and criticality levels of
consider the SAS prototype for our experiment. We de- successfully attacked assets. At reconnaissance stage, the
termine the level of the file-server against attacks denial malefactor has received the information which total level of
of service taking into consideration that the malefactors confidentiality is 10 and total level of criticality is 4. For the
experience is low . To do this we need to count up the information which the malefactor tried to receive the ap-
assets and its criticality levels (CRL) and confidentiality propriate levels are (20, 20). After normalization, the losses
levels(COL) . Table.1. shows the CRL and COL of neces- of confidentiality and criticality are (0.6, 0.3). At thread rea-
sary assets. lization stage, the file-server has been successfully attacked
(0 points of confidentiality and 10 points of criticality have
been lost), therefore the appropriate losses are (0, 1). At
script level the losses of confidentiality and criticality are as
follows: ((0.6+0)/2, (0.3+1)/2) = (0.3, 0.65). The total securi-
ty metric can be calculated as difference 1 and average val-
ue of the given coefficients: 1-0.475=0.525. Let us select by
expert evaluation the following security level scale: (1)
green if security level value in an interval [1, 0.8); (2)
yellow [0.8, 0.6); (3) red [0.6, 0]. Then the value
0.525 acts as red level. As guideline on increase of securi-
ty level, the report about vulnerability elimination is gen-
erated. Procedure of security level evaluation is repeated
after eliminating detected vulnerabilities.
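The arithmetic above is simple to automate. The following minimal sketch (not part of the SAS prototype; the function name, data layout and threshold handling are illustrative assumptions) averages the per-stage confidentiality and criticality losses, derives the total security metric, and maps it onto the green/yellow/red scale used in the example.

def security_level(stage_losses):
    """Combine per-stage (confidentiality_loss, criticality_loss) pairs,
    each value in [0, 1], into a total security metric and a colour level."""
    n = len(stage_losses)
    conf = sum(c for c, _ in stage_losses) / n   # script-level confidentiality loss
    crit = sum(k for _, k in stage_losses) / n   # script-level criticality loss
    metric = 1.0 - (conf + crit) / 2.0           # total security metric
    if metric >= 0.8:                            # boundaries follow the scale in the text
        level = "green"
    elif metric >= 0.6:
        level = "yellow"
    else:
        level = "red"
    return metric, level

# Worked example from the text: reconnaissance (0.6, 0.3), threat realization (0, 1).
print(security_level([(0.6, 0.3), (0.0, 1.0)]))   # -> (0.525, 'red')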
7. CONCLUSION

The paper offered an approach to vulnerability analysis and security level assessment of computer networks, intended for implementation at various stages of the computer network life cycle. This work is based on the basic components of an intelligent SAS, the model of computer attacks and the model of security level assessment based on the developed taxonomy of security metrics. The SAS prototype was implemented, and the experiments were carried out based on simulation of attacks.
Fig. 5. Configuration of the computer network for the experiment
Abstract -- In a speaker identification system, the goal is to determine which speaker from a group of known speakers best matches an unknown voice. The field of speaker identification has recently seen significant advancement, but improvements have tended to focus on near-field speech, ignoring the more realistic setting of far-field instrumented speakers. In this paper, we use far-field speech recorded with multiple microphones for speaker identification. For this we develop a model for each speaker's speech. In developing the model, it is customary to assume that the voice of an individual speaker is characterized by a Generalized Gaussian model. The model parameters are estimated using the EM algorithm. Speaker identification is carried out by maximizing the likelihood function of the individual speakers. The efficiency of the proposed model is studied through an accuracy measure with experimentation on a database of 25 speakers. This model performs much better than the existing earlier algorithms in speaker identification.
Keywords-- Generalized Gaussian model, EM Algorithm, and Mel Frequency Cepstral Coefficients.
INTRODUCTION
Speaker recognition is the process of recognizing who is speaking on the basis of information extracted from the speech signal. It has a number of applications, such as verification of access-control permissions, corporate database search and voice mail, government lawful intercepts or forensic applications, government corrections, financial services, telecom and call centers, health care, transportation, security, distance learning, entertainment and consumer services, etc. [2].

The growing need for automation in complex work environments and the increased need for voice-operated services in many commercial areas have motivated recent efforts in reducing laboratory speech processing algorithms to practice. While many existing systems for speaker identification have demonstrated good performance and achieve high classification accuracy when close-talking microphones are used, in adverse distant-talking environments the performance is significantly degraded due to a variety of factors such as the distance between the speaker and the microphone, the location of the microphone or the noise source, the direction of the speaker, and the quality of the microphone. To deal with these problems, microphone-array based speaker recognizers have been successfully applied to improve the identification accuracy through speech enhancement [3][4][6].

In speaker identification, since there is no identity claim, the system identifies the most likely speaker of the test speech signal. Speaker identification can be further classified into closed-set identification and open-set identification. The task of identifying a speaker who is known a priori to be a member of the set of N enrolled speakers is known as closed-set speaker identification. The limitation of this system is that a test speech signal from an unknown speaker will still be identified as one among the N enrolled speakers; thus there is a risk of false identification. Therefore, the closed-set mode should be employed in applications where the system is certain to be used only by the set of enrolled speakers. On the other hand, a speaker identification system which is able to identify a speaker who may be from outside the set of N enrolled speakers is known as open-set speaker identification. In this case, the closed-set speaker identification system first identifies the speaker closest to the test speech data. Speaker identification systems are also divided into text-independent and text-dependent speaker identification. Among these two, text-independent speaker identification is more complicated in open tests. An N-best hypothesis integration method is used to re-score the hypothesis scores, measure the distance between them and combine them.

Speaker identification: Given different speech inputs X1, X2, ..., XC recorded simultaneously through C multiple microphones, the registered speaker in S = {1, 2, ..., C} who has pronounced X1, X2, ..., XC is identified by equation (1); each speaker k is modeled by a GGMM with parameter set λ_k:

k* = argmax_k p(λ_k | X1, X2, ..., XC).    (1)

*Ms P. Soundarya Mala completed her M.Tech in Digital Electronics and Communication Engineering at GIET, Jawaharlal Nehru Technological University, Kakinada, INDIA, in 2010. PH: 9493493302. E-mail: [email protected]
*Dr V. Sailaja received the Ph.D. degree in Speech Processing from Andhra University, INDIA, in 2010. PH: 9491444434. E-mail: [email protected]
*Mr Shuaib Akram is studying IV B.Tech in Electronics and Communication Engineering, Jawaharlal Nehru Technological University, Kakinada, INDIA. PH: 9703976497. E-mail: [email protected]

2. FINITE MULTIVARIATE GENERALIZED GAUSSIAN MIXTURE SPEAKER MODEL

The Mel frequency cepstral coefficients (MFCC) are used to represent the features for speaker identification. In the set-up used, the magnitude spectrum from a short frame is processed using a mel-scale filter bank. The log-energy filter outputs are then cosine transformed to produce cepstral coefficients. The process is repeated every frame, resulting in a series of feature vectors [1]. We assume that the Mel frequency cepstral coefficients of each speaker follow a finite multivariate Generalized Gaussian mixture distribution. Therefore the entire speech spectrum of each individual speaker can be characterized as an M-component finite multivariate Generalized Gaussian mixture distribution, whose component density b(x | .) is the multivariate Generalized Gaussian density (equation (8)).

The updated equations for estimating the model parameters follow from the EM algorithm; in particular, the updated equation for estimating the mean of the i-th component (equation (9)) is expressed in terms of the estimates obtained at the i-th iteration.

4. EXPERIMENTAL RESULTS

The model parameters are estimated and initialized by the EM algorithm with k-means [7]. As the number of N-best hypotheses per channel (N-best classification results) employed for identification increases, the performance of the proposed method outperforms the earlier existing methods.

Table 1: Speaker identification accuracy. The table reports accuracies for the LOCAL, 1 m, 3 m and 5 m recording conditions, each with C and D sub-columns; the surviving row for channel CH = 0 reads 94.9, 95.3, 68.3, 67.8, 75.7, 74.6.
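As a rough illustration of the modeling pipeline (not the authors' implementation), the sketch below trains one mixture model per speaker on MFCC feature vectors and identifies a test utterance by the highest average log-likelihood. It substitutes scikit-learn's ordinary Gaussian mixtures for the paper's Generalized Gaussian mixtures, and all function names and parameter values are illustrative assumptions.

import numpy as np
from sklearn.mixture import GaussianMixture

def train_speaker_models(features_by_speaker, n_components=8, seed=0):
    """Fit one mixture model per speaker.

    features_by_speaker: dict mapping speaker id -> (n_frames, n_mfcc) array
    of MFCC vectors.  A diagonal-covariance Gaussian mixture stands in for the
    paper's multivariate Generalized Gaussian mixture; parameters are estimated
    with EM and initialized by k-means."""
    models = {}
    for spk, feats in features_by_speaker.items():
        gmm = GaussianMixture(n_components=n_components, covariance_type="diag",
                              init_params="kmeans", random_state=seed)
        models[spk] = gmm.fit(feats)
    return models

def identify(models, test_features):
    """Return the speaker whose model gives the highest average log-likelihood."""
    return max(models, key=lambda spk: models[spk].score(test_features))

# Toy usage with random vectors standing in for real MFCC features.
rng = np.random.default_rng(0)
data = {spk: rng.normal(loc=spk, size=(200, 13)) for spk in range(3)}
models = train_speaker_models(data)
print(identify(models, rng.normal(loc=1, size=(50, 13))))   # expected: 1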
Abstract - The process of repairing damaged areas or removing specific regions in a video is known as video inpainting. To deal with such problems, not only a robust image inpainting algorithm is used, but also a technique of structure generation is used to fill in the missing parts of a video sequence taken from a static camera. Most of the automatic techniques of video inpainting are computationally intensive and unable to repair large holes. To overcome this problem, an inpainting method extended by incorporating the sparsity of natural image patches in the spatio-temporal domain is proposed in this paper. First, the video is converted into individual image frames. Second, the edges of the object to be removed are identified by the SOBEL edge detection method. Third, the inpainting procedure is performed separately for each time frame of the images. Next, the inpainted image frames are displayed in a sequence, so as to appear as an inpainted video. For each image frame, the confidence of a patch located at the image structure (e.g., a corner or edge) is measured by the sparseness of its nonzero similarities to the neighboring patches to calculate the patch structure sparsity. The patch with larger structure sparsity is assigned higher priority for further inpainting. The patch to be inpainted is represented by a sparse linear combination of candidate patches. Patch propagation is performed automatically by the algorithm by inwardly propagating the image patches from the source region into the interior of the target region patch by patch. Compared to other methods of inpainting, a better discrimination of texture and structure is obtained by the structure sparsity, and sharper inpainted regions are obtained by the patch sparse representation. This work can be extended to wide areas of applications, including video special effects and restoration and enhancement of damaged videos.
Index Terms Candidate patches, edge detection, inpainting, linear sparse representation, patch sparsity, patch propagation,
texture synthesis.
1 INTRODUCTION
Historically, damaged areas in video were repaired manually by restoration professionals, which was painstaking, slow and also very expensive. Therefore automatic video restoration has certainly attracted both commercial organizations (such as broadcasters and film studios) and private individuals that wish to edit and maintain the quality of their video collections.

Bertalmio et al. [1], [2] designed frame-by-frame PDE-based video inpainting, which laid the platform for all research in the field of video inpainting. Partial Differential Equation (PDE) based methods are mainly edge-continuing methods. In [2], the PDE is applied spatially, and the video inpainting is completed frame by frame. The temporal information of the video is not considered in the inpainting process.

Wexler et al. [6] proposed a method for space-time completion of large damaged areas in a video sequence. The authors performed the inpainting by sampling a set of spatial-temporal patches (a set of pixels at frame t) from other frames to fill in the missing data. Global consistency is enforced for all patches surrounding the missing data so as to ensure coherence of all surrounding space-time patches. This avoids artefacts such as multiple recovery of the same background object and the production of inconsistent object trajectories. This method provides decent results; however, it suffers from a high computational load and requires a long video sequence of similar scenes to increase the probability of correct matches. The results shown are of very low resolution videos, and the inpainted static background was different from one frame to another, creating a ghost effect. Significant over-smoothing is observed as well.

Video inpainting meant for repairing damaged video was analysed in [4], [7], which involves a gamut of different techniques that make the process very complicated. These works combine motion layer estimation and segmentation with warping and region filling-in. We seek a simpler, more fundamental approach to the problem of video inpainting.

Inpainting for stationary background and moving foreground in videos was suggested by Patwardhan et al. [5]. To inpaint the stationary background, a relatively simple spatio-temporal priority scheme was employed where undamaged pixels were copied from frames temporally close to the damaged frame, followed by a spatial filling-in step which replaces the damaged region with a best matching patch so as to maintain a consistent background throughout the sequence. Zhang et al. [7] proposed a motion layer based object removal in videos with few illustrations.

In this paper, video inpainting for a static camera with a stationary background and moving foreground is considered in the spatio-temporal domain. First, the video is converted into image frames. Second, the edges are found by using the SOBEL edge detection method. Next, the object to be removed is inpainted using a novel examplar based image inpainting using patch sparsity. The known patch values are propagated into the missing region for every time frame of the image to reproduce the original image. Last, the inpainted image frames are displayed to form the inpainted video. Here a video of short duration is considered for inpainting, and the temporal domain information of each image frame is utilized to display the inpainted image frames as a video.

In this paper, Section 2 gives an overview of image inpainting using the extended examplar-based inpainting method. In Section 3, the method of video inpainting is defined. The experiments and the results are discussed in Section 4. Finally, the conclusion and the future research of the work are discussed in Section 5.

2 IMAGE INPAINTING

The most fundamental method of image inpainting is the diffusion based image inpainting method, in which the unknown region is filled by diffusing the pixel values of the image from the known region. Another method of image inpainting is the examplar based image inpainting, in which the region is filled by propagating the patch values of the known region to the unknown region. In the previous work of this paper, an examplar based image inpainting was proposed by incorporating the sparsity of natural image patches.

Fig. 1. Patch selection. (a) shows the missing region Ω, the known (source) region Φ, and the fill-front ∂Ω of patch propagation; (b) shows two examples of surrounding patches Ψp and Ψp' which are located at an edge and in a flat texture region respectively.

The process of filling in the missing region using the image information from the known region is called image inpainting. Let I be the given image with the missing region (target region) Ω. In the examplar based image inpainting, the boundary of the missing region is also called the fill front and is denoted by ∂Ω. A patch centered at a pixel p is denoted by Ψp.

The examplar based image inpainting is based on patch propagation. It is done automatically by the algorithm by inwardly propagating the image patches from the source region into the interior of the target region patch by patch. Patch selection and patch
inpainting are the two basic steps of patch propagation, which are iterated continuously till the inpainting process is complete.

Fig. 1 shows the patch selection process, which is used to select the patch with the highest priority for further inpainting. The sparseness of the nonzero similarities of a patch to its neighboring patches is called the structure sparsity, which is used for assigning patch priority. As shown in Fig. 1(b), Ψp and Ψp' are the patches centered at pixels p and p', which lie in the edge structure and the flat texture region of the image respectively. The patch Ψp has sparser nonzero similarities than patch Ψp', so it is given larger patch priority. The patch inpainting step is then performed.

Fig. 2. (a) For the selected patch Ψp, a sparse linear combination of candidate patches {Ψq1, Ψq2, ..., ΨqN} is used to infer the missing pixels in patch Ψp. (b) The best matching patch in the candidate set has been copied into the position occupied by Ψp, thus achieving partial filling of Ω.

Fig. 2 shows the procedure of patch inpainting, which is applied to the patch selected on the boundary. The selected patch on the fill-front is represented as a sparse linear combination of the patches in the source region, regularized by a sparseness prior. In this paper, neither a single best-match examplar nor a fixed number of examplars in the known region is used to infer the missing patch. The most likely candidate matches for Ψp lie along the boundary between the two textures in the source region, e.g., Ψq' and Ψq''. The best matching patch in the candidate set is copied into the position occupied by Ψp, thus achieving partial filling of Ω. The target region has now shrunk and its front has assumed a different shape. Thus image inpainting is performed by the sparse linear combination of candidate patches weighted by coefficients, in which only very sparse nonzero elements exist.

Image regions with salient features like edges, corners, etc. are referred to as the structure of an image, and image regions with stationary feature statistics or homogeneous patterns, including flat patterns, are referred to as the texture of an image. Patch priority must be defined in such a way that it is able to differentiate the structures and textures of an image. The structure sparsity is used for assigning priority to the patches to be inpainted. It can also be regarded as a measure of the confidence that a patch is located at structure instead of texture.

The structures are sparsely distributed in the image domain. The neighboring patches of a particular patch with larger similarities may also be distributed in the same structure or texture as the patch of interest to be inpainted. To handle such cases, the confidence of the structure is modelled for a patch by measuring the sparseness of its nonzero similarities to the neighboring patches. A patch which has more sparsely distributed nonzero similarities is prone to be located at structure, due to the high sparseness of structures. The confidence of structure for the patch and the data term of the structure are illustrated in Fig. 4(c) and (d) respectively.

Fig. 4. Important terms in the object removal method

The input to the object removal algorithm is the framed images. Initially, the structure and texture values of the given image are calculated and the object to be removed is found. Edges of the object to be removed are obtained through edge detection, and the similarities of the patch with its neighboring patches in the source region are computed. Then, the patch priority is obtained by multiplying the transformed structure sparsity term with the patch confidence term; that is, the patch priority is computed as the product of the data term and the confidence term (an illustrative sketch of this computation is given at the end of this section). The patch Ψp with the highest patch priority on the fill-front is selected for further inpainting. The above process is repeated until the missing region is completely filled by the known values of the neighboring patches.

Video inpainting is performed in both the spatial and temporal domains. It is used to remove objects or restore missing regions in a video sequence, and may be considered as the combination of frame-by-frame image inpainting. The sequence of display of the individual time frames of inpainted images constitutes the video inpainting: image inpainting is performed for each image frame, and all the inpainted image frames are added to form the video. Fig. 3 illustrates the overall flow diagram of the video inpainting process (input video, conversion into framed images, inpainting of each frame until the missing region is filled, iteration over all image frames, and display of the inpainted frames in order to give the inpainted video as output).
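The sketch below is a loose, self-contained interpretation of the patch priority ingredients described above, not the authors' implementation: the Gaussian similarity kernel, the search window, and all names and parameter values are assumptions made only for illustration.

import numpy as np

def get_patch(img, y, x, r):
    """Return the (2r+1) x (2r+1) patch of img centred at (y, x)."""
    return img[y - r:y + r + 1, x - r:x + r + 1].astype(float)

def structure_sparsity(img, mask, y, x, r=4, step=4, search=12, sigma=25.0):
    """Sparseness of the similarities between the patch at (y, x) and its
    neighbouring patches lying entirely in the known region (mask == 0).
    The similarities are normalised to a distribution; sqrt of the sum of
    squares is large when the distribution is peaked (patch on structure)
    and small when it is flat (patch in texture)."""
    ref = get_patch(img, y, x, r)
    known = get_patch(mask, y, x, r) == 0            # compare only known pixels
    if not known.any():
        return 0.0
    height, width = img.shape
    sims = []
    for dy in range(-search, search + 1, step):
        for dx in range(-search, search + 1, step):
            ny, nx = y + dy, x + dx
            if (dy, dx) == (0, 0):
                continue
            if ny - r < 0 or nx - r < 0 or ny + r >= height or nx + r >= width:
                continue
            if mask[ny - r:ny + r + 1, nx - r:nx + r + 1].any():
                continue                              # neighbour overlaps the missing region
            diff = (ref - get_patch(img, ny, nx, r))[known]
            sims.append(np.exp(-np.mean(diff ** 2) / (2 * sigma ** 2)))
    if not sims:
        return 0.0
    weights = np.asarray(sims)
    weights /= weights.sum()                          # nonzero similarities as a distribution
    return float(np.sqrt(np.sum(weights ** 2)))       # sparser distribution -> larger value

def patch_priority(confidence, sparsity):
    """Patch priority = confidence term x structure-sparsity term."""
    return confidence * sparsity

In the frame-by-frame loop, the pixel on the fill-front with the highest patch_priority would be inpainted first, after which the mask and fill-front are updated and the priorities recomputed.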
The image patches are propagated inwardly from the source region into the interior of the target region patch by patch. The above procedure is repeated for each image frame, and the frames are displayed in sequence as a video.

Fig. 5 (a), (b) and (c) illustrate randomly selected image frames of a video; Fig. 5 (d), (e) and (f) show the edge-detected image frames of Fig. 5 (a), (b) and (c) respectively. The selected patch Ψp on the edge is inpainted by the corresponding pixels of the sparse linear combination of examplars used to infer the patch in the framework of sparse representation. The fill-front and the missing region are updated immediately. For each newly-apparent pixel on the fill-front, its patch similarities with the neighboring patches and its patch priority are computed, and the selected patch is used for inpainting. By iterating the above process, the filling region is inpainted successfully for all the image frames. Fig. 6 (a), (b) and (c) illustrate the inpainted image frames of the original image frames of the video shown in Fig. 5 (a), (b) and (c) respectively. The inpainted image frames are added together to display as a video sequence.

Fig. 6. Inpainted image frames of the video sequence. Fig. 6 (a), (b) and (c) illustrate the inpainted image frames for the corresponding image frames shown in Fig. 5 (a), (b) and (c) respectively for the given video sequence.

5 CONCLUSION

A novel patch propagation based inpainting algorithm for video inpainting is proposed in this paper. It is mainly focussed on object removal in the image frames. Patch priority and patch representation are the two major steps involved in the proposed examplar-based inpainting algorithm for an image. Structure sparsity is represented by the sparseness of the patch similarities in the local neighborhood. The patch at a structure with larger structure sparsity is given higher priority and is used for further inpainting. The sparsest linear combination of candidate patches under the local consistency constraint is synthesized by the patch sparse representation. A video is represented by the display of the sequence of image frames; hence the inpainted frames of each time frame are displayed as the inpainted video. Experiments showed that the proposed examplar-based patch propagation algorithm can produce sharp inpainting results consistent with the surrounding textures.

In this paper, a static camera with constant background is considered. In the future, backgrounds with multiple scales and orientations, moving cameras and high resolution videos will also be investigated. Video inpainting for long-duration videos shall also be accomplished. This work can be extended to wide areas of applications, including video special effects and restoration and enhancement of damaged videos.

REFERENCES
[1] M. Bertalmio, G. Sapiro, V. Caselles, and C. Ballester, "Image inpainting," in Proc. SIGGRAPH, 2000, pp. 417-424.
[2] M. Bertalmio, A. L. Bertozzi, and G. Sapiro, "Navier-Stokes, fluid dynamics, and image and video inpainting," in Proc. IEEE Computer Society Conf. Computer Vision and Pattern Recognition, 2001, pp. 417-424.
[3] A. Criminisi, P. Perez, and K. Toyama, "Region filling and object removal by examplar-based image inpainting," IEEE Trans. Image Process., vol. 13, pp. 1200-1212, 2004.
[4] J. Jia and C. K. Tang, "Image repairing: Robust image synthesis by adaptive and tensor voting," in Proc. IEEE Computer Society Conf. Computer Vision and Pattern Recognition, 2003, pp. 643-650.
[5] K. A. Patwardhan, G. Sapiro, and M. Bertalmio, "Video inpainting of occluding and occluded objects," in Proc. ICIP 2005, vol. II, pp. 69-72.
[6] Y. Wexler, E. Shechtman, and M. Irani, "Space-time video completion," in Proc. 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 1, 2004.
[7] Y. Zhang, J. Xiao, and M. Shah, "Motion layer based object removal in videos," in Proc. 2005 Workshop on Applications of Computer Vision, 2005.
Abstract - It is well established that the forces between nucleons are transmitted by mesons. The quantitative explanation of nuclear forces in terms of meson theory was extremely tentative and incomplete, but this theory supplies a valuable point of view. It is fairly certain now that the nucleons within nuclear matter are in a state made rather different from their free condition by the proximity of other nucleons. Charge independence of nuclear forces demands the existence of a neutral meson acting between nucleons of the same type (P-P) or (N-N). This force demands the same spin and orbital angular momentum. The exchange interaction is produced only by a neutral meson. Involving mesons without electric charge, it gives exchange forces between proton and neutron and therefore maintains the charge-independence character. It is evident from the nature of the decay products that neutral mesons decay by both the strong and the weak interaction. This means that the constituents of the neutral meson are responsible for the electromagnetic interaction. Dramatically, neutral mesons play an important role for both the electromagnetic and the nuclear force.
Index Terms - Rest mass energy, mesons, photons, protons, neutrons, velocity of light, differentiation
1. INTRODUCTION
In this picture it is often enough to think of the nucleus as a grouping of protons & neutrons interacting, with the appearance or disappearance of photons. It should be noted that this relation holds only inside the nucleus; outside the nucleus the evidence is to the contrary. It is a fact that no body (even mesons or gamma rays) can have a velocity greater than the velocity of light. From this formula we can find that the nuclear force acts between a pair of nucleons & is not influenced by the presence of neighboring nucleons. It is necessary that any one particle must bring the velocity of light. We know that the nuclear force is short ranged; outside of the range it is repulsive.

*Range of nuclear force: To show that the range of the force is related to the mass of the exchanged particle, assume that the π0-meson is contained virtually in a proton. If this virtual particle travels with the velocity of light, as might be expected for a field particle, then the greatest distance the meson could travel in the time available is also known as the range of the pion exchange force (a worked estimate is given at the end of this section).

3. It would seem that in a nucleus consisting of many nucleons the binding energy per nucleon should increase with the increase of the mass number A. In reality the evidence is to the contrary: the binding energy per nucleon decreases with increasing mass number A. The binding energies of the different nucleons placed at various depths are not identical but depend upon the state of their actual binding in the potential well. The range also depends upon the mass number A & the binding energy. We know that the atomic mass number A is approximately equal to twice the atomic number Z for the light & intermediate nuclei. It shows that light nuclei prefer to add nucleons as n-p pairs, i.e. there is a strong interaction between neutrons & protons. The range of the nuclear force depends on the mass number A & the velocity of light depends on the range, so it is obviously thought that the spin & the velocity of light depend on the mass number A of the nucleus; the spin is zero or an integer for A even & is an odd half-integer for A odd. The total rest mass energy also depends on the mass number A. For an increase of the rest energy, we must increase the mass number A; obviously, the rest mass energy must depend on the radial distance. This is purely a quantum mechanical effect. If the mass number A increases, the range decreases & the forces are stronger. This binding energy displays a saturation effect. This property of the nuclear force can be explained in terms of the exchange nature of the nuclear force. It should be noted that nucleons attract each other strongly only if they are in the same orbital state. This formula proves the Pauli hypothesis. This is usually attributed to the effect of higher-order interactions in which two or more mesons are simultaneously transmitted between the nucleons.

4. The velocity of light depends on the wavelength of its constituents. If the particles have longer wavelength, then the range decreases & therefore the force is stronger. We can find the effective range of the nuclear force in terms of the Compton wavelength of the pi-meson. We know that different (variable) constituents (color particles) have different wavelengths, so it is obviously thought that the velocity of light must be variable.
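For concreteness, the standard back-of-the-envelope estimate behind the range argument above (not spelled out in the source text) bounds the range by the distance a virtual pion, travelling at roughly the speed of light, can cover during the lifetime allowed by the energy-time uncertainty relation; the numerical values ħc ≈ 197 MeV·fm and m_π c² ≈ 140 MeV are standard:

Δt ≈ ħ / (m_π c²),
R ≈ c·Δt ≈ ħ / (m_π c) = ħc / (m_π c²) ≈ (197 MeV·fm) / (140 MeV) ≈ 1.4 fm,

which is indeed of the order of the Compton wavelength of the pi-meson mentioned in point 4.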
Evidently, if the meson interacted with nucleons strongly enough to be responsible for the nuclear forces, its mean free path within the nucleus should be about the same as that of a nucleon. Then one question must arise: how & why does the velocity of light vary in a free path or in vacuum? So far as this problem is concerned, the velocity of light is influenced by its internal matter, which must have a different value. However, it may be possible that the velocity of light is equal at all ranges within the nucleus. The forces responsible for binding the individual particles inside the nucleus must therefore be exceptionally strong. If the particles have motion, then the material body has physical significance, otherwise not. It means the force between elementary particles depends on the velocity of the body as well as the mass of the body. It should be remarked that particles travelling with the velocity of light are not conservable quantities. In quantum theory, every field must be quantized. These quanta produce a field, which is responsible for different forces.

5. The emission of a charged meson will be accompanied by a change of charge of the emitting nuclear particle. Thus a neutron can only emit a negative meson or absorb a positive meson & will thereby be transformed into a proton. When we consider the emission of one meson by a nuclear particle & its reabsorption by another, it is obvious that in this way no force will be obtained between two nuclear particles of the same kind, i.e. two neutrons or two protons. For particles of the same kind, the neutral meson is responsible for the interaction. This solution would make the interaction caused by the neutral meson alone. Since for unlike particles the charged mesons give an additional contribution, while for like particles they do not, the total interaction will not be the same for like & unlike particles in the S-first state. This will lead to forces between a neutron & a proton. When the negative & positive charged mesons come close together, they can neutralize each other and then the force between neutron & proton comes into play. So obviously we can say that only the neutral meson plays an important role in the charge-independent nuclear force. The mesons (positive & negative) can be absorbed by the nucleus of an element, or one may combine with another meson and then the sum of the masses of these mesons is converted into energy. This process is called annihilation of matter. Before this process one positive meson & one negative meson unite to make a neutral particle called the K0 meson. The process of construction & destruction has proved very helpful in considering the origin of the universe. The neutral K meson is a stable particle, but its stability lasts for a small time; its half life is of the order of microseconds. This particle is an essential constituent of the nucleons of all elements. We know that the neutral meson decays into two photons & never into three photons. It is clear that neutral pions have been produced by bombarding hydrogen & deuterium with high energy photons. If gamma rays have sufficient energy to maintain the energy of the nucleons, then the nucleons produce the neutral mesons. One can speak of the meson field associated with a proton (or a neutron) because the nature (charge) of the nuclear particle does not change by emitting or absorbing a neutral meson. A theory of nuclear forces has been developed in which neutral mesons account in this way for the equality of the forces between like & unlike nuclear particles. The theory involving charged mesons only gives no forces between two like nuclear particles. It is obvious that the negative meson & the positive meson give the symmetrical force between protons & neutrons, & they interact equally strongly with these mesons. An alternative way of explaining this equality is to assume interaction with the neutral meson only. Then the charge of the nuclear particle (whether it is a proton or a neutron) becomes entirely irrelevant & the equality of forces follows immediately. This alternative is discussed in the present paper.
6. According to the Pauli principle only two neutrons & two protons will be found in the same orbital state. Therefore it is possible to find four nucleons strongly bound, the alpha particle structure, which is also confirmed by the binding energy curve. The extraordinary stability of the alpha shows that the most stable nuclei are those in which the numbers of nucleons & photons are equal. We can find it from this formula. It is obviously thought that there is full charge independence for any system in which the number of neutrons equals the number of protons. From this conclusion we get: number of photons = number of nucleons = 2 (number of neutral mesons). The discovery of the neutral meson & the fact that charge independence is now consistent with all nuclear data confirm fully the use of the symmetric meson theory, containing positive, negative & neutral mesons described by three wave functions. With the form of the Yukawa potential for scalar mesons, it is easy to see that the pi-meson cannot be scalar. This theory proves this argument.

Change of law: Since there is no requirement for the conservation of pions, there is no conservation law for rest mass energy, even in the universe. This formula shows that there is no meaning of the word constant. There is no conservation law controlling the total number of kaons or mesons. The energy of formation of mesons comes from the binding potential (which holds the energy for the formation of mesons for a long time), but when this potential has not enough energy, the production of pions ends & the nuclear force does not exist.

Most of the neutral mesons move only a few atomic diameters before they decay (so that they influence few neighboring nucleons) & thus are not affected by the matter through which they pass, & thus the nuclear force works properly. It should also be noted that in the whole universe only mass will be conserved and energy will be destroyed; then the mass will not change into energy.

*It is enough to think that the π-mesons which form a nuclear cloud around the individual nucleons & are in a virtual state get their requisite rest mass energy from the incident particle & are released from the nuclear binding potential. The nuclear binding potential compensates the rest mass energy. It produces enough energy to maintain the rest mass energy for the production of mesons. Since the rest mass energy of the π-meson is about 275 me, the threshold energy for a gamma ray to produce the rest mass energy of these particles should be high. But if proton projectiles are used to produce mesons, a larger threshold is required, as a particle with mass retains some energy in the collision. It should be remarked that the binding potential is independent of the spin & range of the particle when it compensates the rest energy. The energy required to pull the nucleons out of the nucleus is less than half of the rest mass energy. The slow-motion neutron plays this role. Similarly, the nucleus brings (from the binding potential) sufficient energy for the existence of the nuclear force; it maintains stability. In order to approach a particle to within short range or closer, the energy of the approaching particle should be very high.
π0 → 2γ rays
γ + d → d + π0

This reaction shows that the kinetic energy as well as the potential energy of the nucleon in the nucleus will be over and above the rest mass energy. In these phenomena the total charge of the fundamental particles is conserved.

It is reasonable to assume that the nuclear force between two protons has the same characteristics as that between a neutron & a proton. The argument about short range forces involves both proton-proton & neutron-proton forces. The main difference between the proton & the neutron seems to be the electric charge, & the nuclear force apparently does not arise from charge. We assume therefore that the potential between two protons is confined within some short range as before, although the value of the range need not necessarily be the same.
Abstract: The impact fatigue behavior of fully dense alumina was studied in this work. The effect of grain size on the impact fatigue characteristics was determined using a simple impact fatigue test set-up. Fine-grained alumina was prepared using an optimized slip casting technique, whereas coarser-grained samples were made following the conventional powder metallurgy route (compaction through isostatic pressing and subsequent solid state sintering). Although the mechanical behavior (e.g. hardness, toughness) was better in the fine grained alumina, it was more susceptible to impact fatigue. Some of the grains in the coarser-grained alumina samples were slightly elongated in shape, with an aspect ratio close to 2-2.5. Fractography revealed that crack propagation was predominantly mixed mode. The elongated grains promoted bridging across the crack front and caused higher resistance to fatigue.
Index Terms - dynamic, element, factor, finite, impact, intensity, point, quarter, stress
1 INTRODUCTION
Manoj Kumar Barai is currently pursuing an integrated Ph.D. program at Jadavpur University, Kolkata-32, West Bengal, India. E-mail: [email protected]
Jagabandhu Shit is currently pursuing a Ph.D. program at Jadavpur University, Kolkata-32, West Bengal, India. E-mail: [email protected]

3 EXPERIMENTAL METHODS

Sample preparation: Two grades of high purity commercially available alumina powders with (i) average particle
Samples were prepared by two different routes: (a) isopressing followed by sintering, and (b) slip casting followed by pre-sintering and sintering. The process flow-sheets are given below.

Fig. 2. Process flow sheet for slip cast alumina
Fig. 3. Alumina samples sintered at 1275 °C following slip casting, with a grain size of 0.4 µm

For the alumina with the smaller grain size the grains are mostly equiaxed, whereas for the alumina with the larger average grain size the grains are slightly elongated in shape, which enhances grain bridging during crack propagation. A small set-up for measuring the impact fatigue behaviour has been developed, with a swinging pendulum of length 54 cm and a weight in the form of a spherical ball of diameter 36 mm. The impact load is applied to the test specimen perpendicular to its axis, which is rigidly fixed between two
supports in a configuration resembling that of a Charpy test specimen. The bob (concentrated mass) is fixed at the end of the bar (swing arm), which is supported at an almost frictionless hinge. The hinge is fixed inside the bearing, which is mounted in the bracket plate. The angular movement is given to the set (drum & bob) by the cam mounted on the low speed motor shaft. On the tensile surfaces of the beam specimens made of the two different grained aluminas, notches were created with different dimensions, as mentioned below.

From the experimental results it was observed that with the increase of crack length the number of strikes required for breaking the sample decreased for both batches of samples. But it was observed that, even where the crack dimension was larger, the number of strikes required for breaking the sample of grain size 4 µm was more than for the sample of grain size 0.45 µm. This was probably due to the shape of the grains in the coarser-grained samples, which enhanced the scope of grain bridging. In the case of the number of impacts required for breaking, there was variation in the pattern as well.

For the samples with the higher grain size, scatter was comparatively low; even the samples with a/w as high as 0.07 (bigger than the largest a/w in the previous case) could withstand around 80 impacts (average) before fracture. With another set of specimens having a/w = 0.097 (almost double the maximum a/w in the previous case), the average number of impacts sustained before fracture was around 30. Only one set, having a high a/w (almost three times the maximum a/w of the previous case), broke with a single impact.

From the SEM micrograph (Fig. 9), it is clear that the cracks started propagating from the sides of the notches and followed a tortuous path. The crack apparently followed a new plane after reaching a particular point or obstacle, as observed on the fractured surface. In a few other cases, there was apparently a crack initiation site (just at the tip of the notch) surrounded by a semi-circular arc-shaped zone.

Fig. 10. Mixed mode of fracture in a coarser-grained sample

Fig. 10 shows that there was a mixed
mode of crack propagation: both trans-granular and inter-granular. From the data available so far, it is evident that the coarser-grained alumina samples showed superior impact fatigue behaviour. The exact reason behind this is not yet very clearly understood; however, a simple analysis of the two microstructures reveals that in the case of the higher grain size the grain shape was slightly elongated, in contrast with the nearly equiaxed shape observed in the sub-micron grained alumina. This caused grain bridging, partially retarding the propagation of cracks along grain boundaries in the larger grained alumina. Furthermore, with higher grain size the chance of crack branching could be higher, which could reduce the energy available for the propagation of the main crack front.

5. CONCLUSIONS

From the study it is evident that alumina with higher grain size showed superior impact fatigue behaviour in comparison with that of sub-micron grained alumina. Even with higher non-dimensional crack length (a/w), alumina specimens with a bigger average grain size withstood a higher number of impacts prior to fracture. The superior impact fatigue behaviour was due to higher resistance to fatigue crack growth owing to grain bridging.

ACKNOWLEDGMENTS

We acknowledge the sincere help of the staff members of the Mechanical Engg. Dept. & SBSE, Jadavpur University, Kolkata. One of the authors (M.K. Barai) sincerely acknowledges constant support from the HOD, Director and Principal of Future Institute of Engg and Management.
Abstract - In this paper, a method to determine the size and location of Distributed Generations (DGs) in distribution systems based on a multi-objective performance index is provided, considering load models. We will see that load models significantly affect the location and the optimized size of Distributed Generations in distribution systems. The simulation studies are carried out based on a new multi-objective evolutionary algorithm. The proposed method has a mechanism to keep diversity in order to overcome premature convergence and other problems. A hierarchical clustering algorithm is used to provide a manageable and representative Pareto set for the decision maker. In addition, fuzzy set theory is used to extract the best solution. Comparing this method with other methods shows the superiority of the proposed method. Furthermore, this method can easily satisfy other purposes with little development and extension.
Index Terms - Distributed generation, Distribution systems, Load models, Strength Pareto Evolutionary Algorithm.
1 INTRODUCTION
The method relies on the evaluation of indices that describe the effect of the DG on the distribution system during maximum power production. These indices are:

1) Active and Reactive Power Loss Indices (ILP and ILQ):

ILP = (P_LDG / P_L) × 100,   ILQ = (Q_LDG / Q_L) × 100    (2)

where P_LDG and Q_LDG are the total active and reactive power losses of the distribution system with DG, and P_L and Q_L are the total active and reactive power losses of the system without DG in the distribution network.

2) Voltage Profile Index (IVD): One of the advantages of proper location and size of the DG is the improvement in the voltage profile:

IVD = max_{i=2..n} ( |V_1 - V_i| / V_1 ) × 100    (3)

3) MVA Capacity Index (IC): This index gives information about the system requirements for upgrading transmission lines:

IC = max ( S_ij / CS_ij )    (4)

3 PROPOSED APPROACH

Recent work has shown that evolutionary algorithms can be effective in removing the problems of older methods [8]. The main elements of the SPEA method are:
1) External set: a set of Pareto optimal solutions. These solutions are recorded externally and continuously updated; the recorded solutions finally show the Pareto optimal front.
2) Strength of a Pareto optimal solution: a real value S in [0, 1) assigned to each individual in the external set. The strength of an individual is proportional to the number of individuals covered by it.
3) Fitness of population individuals: the fitness of each individual in the population is the sum of the strengths of all external Pareto optimal solutions by which it is covered. The strength of a Pareto optimal solution is at the same time its fitness.

The algorithm proceeds in the following steps [8].
Step 1) Initialization: produce the population and create an empty external Pareto optimal set.
Step 2) Updating the external set: the external Pareto optimal set is updated as follows:
a) search the population for the nondominated individuals and copy them into the external Pareto set;
b) search the external Pareto set for the nondominated individuals and remove all dominated individuals from the set;
c) if the number of individuals externally stored in the Pareto set exceeds a prespecified maximum size, reduce the set by means of clustering.
Step 3) Fitness assignment: calculate the fitness values of the individuals in both the external Pareto set and the population as follows:
a) assign to each individual in the external set an appropriate strength; the strength is proportional to the number of individuals covered by that individual;
b) the fitness of each individual in the population is equal to the sum of the strengths of all external Pareto solutions which dominate that individual.
Step 4) Selection: combine the population and the external set individuals. Choose two individuals randomly and compare their fitness; choose the better one and copy it into a mating pool.
Step 5) Crossover and Mutation: perform crossover and mutation according to the new population production probabilities.
Step 6) Ending: check the ending criteria; if they are satisfied, finish, else substitute the old population with the new one and go to Step 2. In this paper, the search is stopped when the generation counter exceeds its maximum number.

In some cases, the Pareto optimal set is extremely big or has redundant solutions. An average-linkage based hierarchical clustering algorithm is used to reduce the Pareto set: a given set P whose size exceeds the maximum allowable size N is reduced to a set P* of size N. The algorithm is as follows [8] (a small code sketch is given after the steps).
Step 1) Initialize the cluster set C: each member of P forms a distinct cluster.
Step 2) If the number of clusters is not greater than N, go to Step 5; else go to Step 3.
Step 3) Calculate the distances of all pairs of clusters. The distance d_c of two clusters C1, C2 in C is given as the average distance between pairs of individuals across the two clusters:

d_c = (1 / (n1 · n2)) Σ_{i1 in C1, i2 in C2} d(i1, i2)    (5)

where n1 and n2 are the numbers of individuals in clusters C1 and C2, and the function d gives the Euclidean distance between i1 and i2.
Step 4) Determine the two clusters with minimum distance d_c and combine them into a larger one. Go to Step 2.
Step 5) Find the centroid of each cluster, choose the individual nearest to the centroid as the representative, and remove the other individuals from the cluster.
Step 6) Compute the reduced nondominated set P* by uniting the representatives of the clusters.
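The sketch below follows the clustering steps above directly; it is an illustrative stand-alone implementation (the function name, the O(n^3) loop structure and the use of NumPy are assumptions, not the authors' code).

import numpy as np

def reduce_pareto_set(points, max_size):
    """Average-linkage clustering reduction of a Pareto set in objective space.

    points: (n, d) array of nondominated solutions.
    Returns at most max_size representatives (the member nearest each centroid)."""
    clusters = [[i] for i in range(len(points))]           # Step 1: singleton clusters
    while len(clusters) > max_size:                        # Step 2
        best = None
        for a in range(len(clusters)):                     # Step 3: all pairwise distances
            for b in range(a + 1, len(clusters)):
                d = np.mean([np.linalg.norm(points[i] - points[j])
                             for i in clusters[a] for j in clusters[b]])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best                                     # Step 4: merge the closest pair
        clusters[a].extend(clusters[b])
        del clusters[b]
    reps = []
    for c in clusters:                                     # Step 5: member nearest centroid
        centroid = points[c].mean(axis=0)
        reps.append(min(c, key=lambda i: np.linalg.norm(points[i] - centroid)))
    return points[reps]                                    # Step 6: the reduced set P*

# Toy usage: shrink 30 random 2-D objective vectors to 5 representatives.
pareto = np.random.rand(30, 2)
print(reduce_pareto_set(pareto, 5).shape)   # -> (5, 2)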
Step 6) Compute the reduced nondominated set P* by uniting the representatives of the clusters.
As soon as the Pareto optimal set of nondominated solutions is available, the proposed approach presents one solution as the best compromise solution. Each objective function of the i-th solution is represented by a membership function $\mu_i$ defined as

$\mu_i = \begin{cases} 1 & F_i \le F_i^{min} \\ \dfrac{F_i^{max} - F_i}{F_i^{max} - F_i^{min}} & F_i^{min} < F_i < F_i^{max} \\ 0 & F_i \ge F_i^{max} \end{cases}$   (6)

For each nondominated solution k, the normalized membership function $\mu^k$ is

$\mu^k = \dfrac{\sum_{i=1}^{N_{obj}} \mu_i^k}{\sum_{k=1}^{M} \sum_{i=1}^{N_{obj}} \mu_i^k}$   (7)

where M is the number of nondominated solutions. The best solution is the one with the largest $\mu^k$.

4 IMPLEMENTATION OF THE PROPOSED APPROACH
Because of the problems of binary representation when the search space has a wide dimension, the proposed approach has been implemented using a Real Coded Genetic Algorithm (RCGA). Decision variable xi takes a real value within the limits ai and bi (xi in [ai, bi]). The RCGA crossover and mutation operators are as follows.
Crossover: A blend crossover operator (BLX-alpha) has been employed in this paper. This operator chooses one number randomly from the interval [xi - alpha(yi - xi), yi + alpha(yi - xi)], where xi and yi are the i-th parameter values of the parent solutions and xi < yi. To ensure a balance between exploitation and exploration of the search space, alpha = 0.5 is chosen.
Mutation: Non-uniform mutation was used here. In this operator, the new value x'i of parameter xi produced after mutation at generation t is

$x'_i = \begin{cases} x_i + \Delta(t,\, b_i - x_i) & \text{if } \tau = 0 \\ x_i - \Delta(t,\, x_i - a_i) & \text{if } \tau = 1 \end{cases}$   (8)

$\Delta(t, y) = y\left(1 - r^{\left(1 - t/g_{max}\right)^{\beta}}\right)$   (9)

where tau is a binary random number, r is a random number in [0,1], gmax is the maximum number of generations and beta is a positive constant; beta = 5 is selected. This operator gives a value x'i in [ai, bi] such that the probability of returning a value close to xi increases as the algorithm advances. This makes the search uniform in the initial stages, where t is small, and very local in the later stages.

5 MULTIOBJECTIVE BASED FORMULATION
The multiobjective index for evaluating distribution system operation, for the purpose of DG location and size planning with load models, considers all previously mentioned indices by strategically giving a weight to each of them. The multiobjective index operated on the basis of the SPEA algorithm is given by (10):

$IMO = \sigma_1 \cdot ILP + \sigma_2 \cdot ILQ + \sigma_3 \cdot IC + \sigma_4 \cdot IVD$   (10)

These weights are intended to give the corresponding importance to each impact index. Table 2 identifies the values used for the weights with regard to normal operation analysis [7].

TABLE 2
INDICES WEIGHTS
Index | sigma
ILP | 0.40
ILQ | 0.20
IC | 0.25
IVD | 0.15

The multiobjective function (10) can be minimized with regard to various operational constraints to satisfy the electrical requirements of the distribution network. These limitations are:
1) Power Conservation Limits: the algebraic sum of all input and output powers, such as the distribution network total losses and the power generated from DG, should be equal to zero (NOL = number of lines):

$P_{ss}(i, V) = \sum_{i=2}^{n} P_D(i, V) + \sum_{1}^{NOL} P_{loss}(V) - \sum_i P_{DG_i}$   (11)

2) Distribution Line Capacity Limits: the power flow in each line should not exceed its thermal capacity:

$S(i, j) \le S(i, j)_{max}$   (12)

3) Voltage Drop Limits: the voltage drop should be based on the voltage regulation that the DISCO gives.
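A brief sketch of the BLX-alpha crossover and the non-uniform mutation of Eqs. (8)-(9), with alpha = 0.5 and beta = 5 as selected above; the variable bounds used in the example are illustrative only, not the paper's data:

```python
import random

ALPHA, BETA = 0.5, 5.0   # blend factor and non-uniformity constant used in the paper

def blx_crossover(x_i, y_i):
    """BLX-alpha: sample a child gene from [x_i - a(y_i - x_i), y_i + a(y_i - x_i)], x_i < y_i."""
    span = y_i - x_i
    return random.uniform(x_i - ALPHA * span, y_i + ALPHA * span)

def delta(t, y, g_max):
    # Eq. (9): the mutation step shrinks as generation t approaches g_max
    r = random.random()
    return y * (1.0 - r ** ((1.0 - t / g_max) ** BETA))

def nonuniform_mutation(x_i, a_i, b_i, t, g_max):
    """Eq. (8): move x_i toward b_i or a_i; the move becomes more local in later generations."""
    if random.random() < 0.5:                  # binary random number tau = 0
        return x_i + delta(t, b_i - x_i, g_max)
    return x_i - delta(t, x_i - a_i, g_max)    # tau = 1

# Example: one crossover and one mutation on a DG-size variable bounded by [0, 0.63] p.u.
child = blx_crossover(0.30, 0.45)
mutated = nonuniform_mutation(child, 0.0, 0.63, t=100, g_max=500)
print(round(child, 4), round(mutated, 4))
```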
6 SIMULATION RESULTS
The multiobjective index based analysis is carried out on the 37-bus test system given in the Appendix [7]. The DG size is considered in a practical range (0-0.63 p.u.). It is assumed that the DG is operated at unity power factor. This assumption has two reasons:
1) Usually the DG has maximum profit at unity power factor, because the cost of active power is higher, and operation at unity power factor gives maximum capacity.
2) The models used in this paper are simple, and the attention is on the voltage dependence of the load models.
The method used is not limited by DG models and is general. The first bus was chosen as the feeder of electric power from the network and the rest of the buses are regarded as candidate DG locations. On all optimization runs, the population size and maximum number of generations were selected as 200 and 500, respectively. The Pareto optimal set maximum size includes 20 solutions. The crossover and mutation probabilities were selected as 0.9 and 0.01, respectively. For the 37-bus system, the variation of the impact indices and IMO with DG size and location is shown in Figures 3-7 for the constant, industrial, residential, commercial and mixed load models.
The value of IVD for all load models is near zero. It shows that the voltage profile improves with DG present. We can see from Figs. (1)-(5) that the indices ILP, ILQ, IC and IMO achieve values greater than zero and smaller than one, indicating the positive impact of DG placement in the system. Fig. 1 shows that the values of IC, ILP and ILQ for buses 2-4 follow IC<ILP<ILQ and for buses 6-8 follow ILQ<ILP<IC. Figure 2 shows the value of the optimum DG size, IMO and its components for all buses for the industrial load model. So the load models affect the solutions.

Fig. 2. Impact indices and IMO with DG size-location pair for industrial load

The solution obtained using constant power load models may not be feasible for industrial load. A similar and significant effect of the load models can easily be observed from Figs. (3)-(5) for the residential, commercial and mixed load models. The differences in the values of DG size, IMO and its components are significant, showing that the load model effects are important for suitable planning of size and location. Table 3 summarizes the optimal DG size-location pairs and IMO along with its components for each kind of load. From Table 3, the optimal size-location for the constant load model (0.6299 p.u., bus 14) is different from the industrial load model (0.63 p.u., bus 14), residential load model (0.4672 p.u., bus 14), commercial load model (0.4419 p.u., bus 14) and mixed load (0.5113 p.u., bus 32). Similarly, IMO and the other impact indices for the optimal DG location-size are different.
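The best compromise summarized in Table 3 is picked with the fuzzy membership of Eqs. (6)-(7); a minimal sketch of that selection, with illustrative objective values only (not the paper's results):

```python
def best_compromise(F, F_min, F_max):
    """Fuzzy selection of Eqs. (6)-(7): F[k][i] is objective i of nondominated solution k.
    Returns the index of the solution with the largest normalised membership."""
    def mu(i, f):
        if f <= F_min[i]:
            return 1.0
        if f >= F_max[i]:
            return 0.0
        return (F_max[i] - f) / (F_max[i] - F_min[i])       # Eq. (6)

    memberships = [sum(mu(i, f) for i, f in enumerate(row)) for row in F]
    total = sum(memberships)
    normalised = [m / total for m in memberships]            # Eq. (7)
    return max(range(len(F)), key=lambda k: normalised[k])

# Three nondominated solutions over two objectives (illustrative values only)
F = [[0.70, 0.10], [0.65, 0.20], [0.72, 0.05]]
print(best_compromise(F, F_min=[0.60, 0.00], F_max=[0.80, 0.30]))
```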
TABLE 4
COMPARISON OF SYSTEM POWER LOSSES AT OPTIMAL LOCATION OF DG WITH LOAD MODELS
Load model | Optimal location | PLDG (0.01 p.u.) | PL (0.01 p.u.) | QLDG (0.01 p.u.) | QL (0.01 p.u.)
Constant | 14 | 0.1499 | 0.2002 | 0.0991 | 0.1335
Fig. 4. Impact indices and IMO with DG size-location pair for commercial load

Fig. 5. Impact indices and IMO with DG size-location pair for mixture load

The probable DG location-sizes may be few (because of the constraints), but the number of candidate solutions is fairly large, which suggests the application of SPEA. The differences in the values of DG size, IMO and its components are significant across the load models, showing that the load model effects are important for suitable planning of size and location. The values of QLDG and PLDG corresponding to the optimal size-location for each kind of load model are shown in Table 4; although the values of QLDG and PLDG for the non-constant load models (industrial, residential, commercial and mixture) are not very different from each other, their difference is significant when compared to the constant load model.

TABLE 3
IMPACT INDICES COMPARISON FOR PENETRATION OF DG WITH LOAD MODELS
Indices | Constant | Industrial | Residential | Commercial | Mixture
ILP | 0.7078 | 0.6517 | 0.7459 | 0.7756 | 0.7526
ILQ | 0.7035 | 0.6449 | 0.7383 | 0.7685 | 0.7551
IC | 0.9913 | 0.9671 | 0.9570 | 0.9476 | 0.9478
IVD | 0.0687 | 0.0634 | 0.0661 | 0.0653 | 0.0696
IMO | 0.6823 | 0.6409 | 0.6952 | 0.7106 | 0.6994
Location | 14 | 14 | 14 | 14 | 32
Size | 0.6299 | 0.63 | 0.4672 | 0.4419 | 0.5113

6.3 Conclusion
A general analysis that includes load models is proposed for location-size planning of distributed generation as a multiobjective optimization in distribution systems. The multiobjective criterion depends on the system operation indices used in this work. It was seen that when load models are taken into account, the DG location and size change, and the overall value of the multiobjective index (IMO) changes as the load model changes.
Also in this paper we suggested a new method based on the Pareto evolutionary algorithm and used it for the DG location-size planning problem. This problem was formulated as a multiobjective optimization problem, and a diversity preserving mechanism for finding widely different Pareto optimal solutions was used. A hierarchical clustering technique is implemented to provide a representative and manageable Pareto optimal set without destroying the characteristics of the trade-off front, and a fuzzy based mechanism is used for finding the best compromise solution. The results show that the suggested method for the multiobjective optimization problem is useful, because multiple Pareto optimal solutions are found during simulation. Since the proposed approach does not impose any limitation on the number of objectives, its extension to include more objectives is a straightforward process.

APPENDIX
Fig. 6 shows the 37-bus test system.
REFERENCES
[1]. V. Miranda, J. V. Ranito, and L. M. Proenca, "Genetic algorithms in optimal multistage distribution network planning," IEEE Trans. Power Syst., vol. 9, no. 4, pp. 1927-1933, Nov. 1994.
[2]. C. Concordia and S. Ihara, "Load representation in power systems stability studies," IEEE Trans. Power App. Syst., vol. PAS-101, no. 4, pp. 969-977, Apr. 1982.
[3]. IEEE Task Force on Load Representation for Dynamic Performance, "Bibliography on load models for power flow and dynamic performance simulation," IEEE Trans. Power Syst., vol. 10, no. 1, pp. 523-538, Feb. 1995.
[4]. IEEE Task Force on Load Representation for Dynamic Performance, "Load representation for dynamic performance analysis," IEEE Trans. Power Syst., vol. 8, no. 2, pp. 472-482, May 1993.
[5]. IEEE Task Force on Load Representation for Dynamic Perfor-
Abstract - Tourism has gradually grown over the years into a full-fledged industry. Many countries are gaining from this welcome change. The contributions of this sector to the country's coffers are sizable for some countries, while other countries have a long way to go. This research paper attempts to study the reasons for the lack of optimal contribution of this sector in India and also forays into strategies that can be adopted to capitalize on the patterns prevalent in tourist behavior. A country like India, with a commendable historical significance and size, has not been able to garner as much tourist attention because of certain factors. India has a lot of offerings to whet the appetite of an avid tourist, but the varieties have either not been promoted, or, if promoted, the lack of associated services has not led to the desired synergies. After identifying the gaps between the two countries (India & Malaysia), the paper puts forth the tourists' patterns of behavior through the data collected. The questionnaire was administered to tourists in New Delhi and Agra (cities in India). Malaysia, on the other hand, has had a steady stream of tourists trickling in and benefiting its economy.
Key Words - ASEAN, Eco-tourism, Heritage Sites, MICE, Ministry of Tourism (India & Malaysia), World Travel.
SURVEY RESULT
I. DEMOGRAPHICS
Nationality
An estimated 100 foreign tourists coming to India from various countries were covered in the survey during March 2010. East Asia accounted for the major share of the foreign tourists at 39%, America for 7%, Europe for 12%, South

Gender
Male 62
Female 38
Total 100

Age
The tourists were classified into seven age groups, viz. up to seventeen, eighteen to twenty four, twenty five to twenty nine, thirty to thirty four, thirty five to thirty nine, forty to forty four and forty five to forty nine. Nearly 60% of the tourists belonged to the age group eighteen to thirty; the next highest group was thirty to thirty five (14%).

Education
The tourists were also classified on the basis of educational levels. The survey reveals that nearly 67% of the foreign nationals visiting India were graduates and postgraduates at higher education or university; only 3% of the tourists had lower vocational education.
Total 100%
IV. TRAVEL PATTERN
The analysis of travel pattern shows that 30% of tourists traveled alone, 20% traveled with two persons, 22% traveled in a group of 3 persons, 16% in a group of four persons, and 12% in a group of five persons or more.
1 person 30%
2 persons 20%
3 persons 22%
4 persons 16%
5 persons or more 12%
Total 100%

V. EXPENDITURE PATTERN
The analysis of tourist expenditure shows that 23% of tourists spent around 1000 USD, 19% spent around 750 USD, 14% spent around 500 USD and only 8% spent above 2250 USD.
What were the travel and lodging expenses of this trip to India per person?
Around USD 500 16%
Around USD 750 19%
Around USD 1,000 23%
Around USD 1,250 14%
Around USD 1,500 8%
Around USD 1,750 7%
Around USD 2,000 5%
USD 2,250 or more 8%
Total 100%
VII. NUMBER OF DAYS STAYED
68% of the tourists stayed in India for more than one week and up to four weeks, and only 3% of tourists stayed one week or less. Around 30% of tourists stayed more than one month and up to two months in India.
How long do you have holiday in India?
7 days or less 3%
8 - 14 days 18%
15 - 21 days 28%

VIII. ACCOMMODATION
60% of the tourists stayed in middleclass hotels; 3% spent on a luxury hotel (4 and 5 stars), and 12% of the tourists stayed with their friends, relatives or family.
At what kind of accommodation did you stay in India?
Luxury hotel (4 and 5 stars) 3%
Middleclass hotel (3 stars and less) 37%
Guesthouse 23%
Apartment / bungalow 4%
Private home / villa 3%
Friends / relatives / family 12%
Other 18%
Total 100%
Yes 53%
No 27%
Do Not Know 20%
Total 100%
What is your valuation of your stay in India?
Most satisfying 15%
Satisfying 28%
Average 31%
Dissatisfying 12%
Most dissatisfying 2%
DK/NA 12%
Total 100%

India, remaining 11% visited India for other purposes.
What would be the main purpose of your next visit to India?
Round trip 12%
Festivals 8%
Eco-tourism 4%
Nature holiday 12%
Beach holiday 7%
Cultural holiday 13%
Spiritual holiday 12%
Family visit 7%
Spa / wellness
Active holiday 4%
Honeymoon
Diving holiday
Study / placement / work 10%
Other 11%
Total 100%
Abstract - Stenosis is an abnormal narrowing of blood vessels. The presence of stenosis in arteries may cause critical flow conditions and may finally lead to stroke and heart attack. A clinical study has been done on more than 130 patients along with a computational study using a 2D axisymmetric rigid model of stenosis in the carotid artery. The assumed shapes of the deposition zone and the degree of occlusion used in the analysis were taken from clinical data. The Navier-Stokes equations for incompressible fluid flow have been considered as the governing equations and have been solved with varying flow parameters using a standard CFD software package. The radial velocity profiles at various points of the flow field, the centerline velocity plot and the centerline pressure plots have been obtained from the computational study and compared with the clinical data.
Index Terms - Arterial flow, Clinical validation, Computational Fluid Dynamics, Hemodynamics, Mathematical Modeling, Stenosis, Stenosis geometry.
1 INTRODUCTION
Fig. 2. Model of rectangular stenosis used in the study

The occurrence of the deposition shows three dominant patterns:
1. Single sided deposition
2. Axis-symmetric deposition
3. Non axis-symmetric deposition
All of them are considered with a maximum diametric constriction of 62%, which can be specified as a moderate degree of stenosis, as a constriction of less than 50% is considered mild and above 70% is considered severe in most medical literature.
Sufficient length of the artery downstream of the stenosis has been taken so that the blood coming out of the constricted region is fully developed at the outlet of the artery. The upstream length for all of the stenoses is considered at Z = 0.031.

$\nabla \cdot \mathbf{u} = 0$   (2)

$u(r) = \bar{u}\left[1 - \left(\frac{r}{R}\right)^2\right]$

where
u(r) = velocity at an arbitrary radius,
ū = mean velocity,
R = the radius of the artery,
r = the radius at which the velocity is to be obtained.
2. A zero pressure with no viscous stress condition at the outlet.
3. A no-slip condition at all the walls, i.e. u = 0.

3.3 Numerical Procedure
Standard finite element CFD based software, COMSOL 3.5a, has been used for the solution of the problems. The solver type is parametric and the solver used is Direct (PARDISO).

3.4 Mesh Details and Grid Sensitivity Test
A free mesh consisting of triangular elements has been used in the study with the maximum possible refinement. The mesh has been refined in the vicinity of the constrictions
so as to present a more accurate picture of their effects on the blood flow. For the curved constriction, 14900 elements and for the rectangular constriction 15369 elements have been used for the solution of the problems.
Grid sensitivity tests for all the simulations have been performed. For all of the stenosis geometries there have been no noticeable changes in the results when the grids have been refined above the values mentioned, so the above refinement of meshes is used in our subsequent studies.

TABLE 1
AREA OF OCCURRENCE OF PLAQUE

4.2.3 Centerline velocity field
The recirculation lengths of both the rectangular and curved stenoses plotted against the respective Reynolds numbers show an almost linear variation. From this graph (Fig. 13) it can be seen very clearly that the recirculation length of the rectangular stenosis is higher than that of a curved stenosis for the same value of Reynolds number.

indicate symmetric and asymmetric deposition.

TABLE 2
APPEARANCE OF PLAQUE

Fig. 13. Recirculation length vs. Reynolds number

The data set also reflects a good amount of large sized plaque accompanied by smaller ones or multiple depositions, where their dimension, placement, and appearance vary. Table 3 shows the distance between two adjacent
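A small sketch of the inlet velocity profile u(r) = ū[1 − (r/R)²] stated above; the radius and mean-velocity values are illustrative placeholders, not the clinical data:

```python
import numpy as np

def inlet_velocity(r, u_mean, R):
    # Profile as given above: u(r) = u_mean * (1 - (r/R)**2), zero at the wall (no slip)
    return u_mean * (1.0 - (r / R) ** 2)

R = 0.003        # assumed artery radius in metres (illustrative only)
u_mean = 0.25    # assumed mean velocity in m/s (illustrative only)
r = np.linspace(0.0, R, 6)
print(np.round(inlet_velocity(r, u_mean, R), 4))   # velocity from centreline to wall
```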
Index Terms - Active contour, Biometrics, Daugman's method, Hough Transform, Iris, Level Set method, Segmentation.
covert recognition, which can be performed by the fingerprint, face, iris and palmprint. Among these, iris must be enhanced, as it provides higher uniqueness and circumvention values.

[Bar chart: genotypic, randotypic and behavioral composition of biometric traits (palmprint, fingerprint, finger geometry, hand geometry, hand vein, signature, voice, keystroke, iris, face, retina).]

[Bar chart: uniqueness, universality, permanence, collectability, performance, acceptability and circumvention of the same biometric traits.]

during embryologic development. The collarette divides the iris into the pupillary zone, which encircles the pupil, and the ciliary zone, which extends from the collarette to the iris root. The colour of these two zones often differs [12].
The pupillary margin of the iris rests on the anterior surface of the lens and, in profile, the iris has a truncated cone shape such that the pupillary margin lies anterior to its peripheral termination, the iris root. The root, approximately 0.5 mm thick, is the thinnest part of the iris and joins the iris to the anterior aspect of the ciliary body. The iris divides the anterior and posterior chambers, and the pupil allows the aqueous humor to flow from the posterior into the anterior chamber with no resistance.
(f), from which branches extend toward the pupil, forming capillary arcades. The sector below it demonstrates the circular arrangement of the sphincter muscle (g) and the radial processes of the dilator muscle (h). The posterior surface of the iris shows the radial contraction furrows (i) and the structural folds of Schwalbe (j). Circular contraction folds also are present in the ciliary portion. The pars plicata of the ciliary body is at (k). [13]

3. IRIS SEGMENTATION
In this paper the CASIA iris image database has been used for the analysis of different segmentation algorithms. The CASIA iris image database (version 1.0) includes 756 iris images from 108 eyes, hence 108 classes.
In 1993, J. Daugman [14] presented one of the most relevant methods, constituting the basis of the majority of the functioning systems. Regarding the segmentation stage, this author introduced an integrodifferential operator to find both the iris inner and outer borders. This operator remains current and was proposed in 2004 with minor differences by Nishino and Nayar [15].
Similarly, Camus and Wildes [16] and Martin-Roche et al. [17] proposed integrodifferential operators that search the space, with the objective of maximizing the equations that identify the iris borders.
Wildes [18] proposed iris segmentation through a gradient based binary edge-map construction followed by a circular Hough transform. This is the most common method, and it has been proposed with minor variants by Cui et al. [19], Huang et al. [20], Kong and Zhang [21], and Ma et al. [22], [23] and [24].
Liam et al. [25] proposed one interesting method, essentially due to its simplicity. This method is based on thresholds and on the maximization of a simple function, in order to obtain two ring parameters that correspond to the iris inner and outer borders.
Du et al. [26] proposed an iris detection method based on prior pupil segmentation. The image is further transformed into polar coordinates and the iris outer border is detected as the largest horizontal edge resulting from Sobel filtering. However, this approach may fail in case of non-concentric iris and pupil, as well as for very dark iris textures.
Morphologic operators were applied by Mira and Mayer [27] to obtain iris borders. They detected the pupillary and scleric borders by applying thresholding, image opening and closing.
Based on the assumption that the pixel intensity of the captured image can be well represented by a mixture of three Gaussian distributions, Kim et al. [28] proposed the use of the Expectation Maximization [29] algorithm to estimate the respective distribution parameters. They expected that the Dark, Intermediate and Bright distributions contain the pixels corresponding to the pupil, iris and reflection areas.

3.1 Daugman's Method
This is by far the most cited method [14] in the iris recognition literature. The author assumes both pupil and iris with circular form and applies the following integrodifferential operator:

$\max_{(r,\,x_0,\,y_0)} \left| G_\sigma(r) * \frac{\partial}{\partial r} \oint_{r,\,x_0,\,y_0} \frac{I(x, y)}{2\pi r}\, ds \right|$   (1)

This operator searches over the image domain (x, y) for the maximum in the blurred (by a Gaussian kernel G_sigma(r)) partial derivative, with respect to increasing radius r, of the normalized contour integral of I(x, y) along a circular arc ds of radius r and center coordinates (x0, y0). In other words, this method searches the space for the circumference center and radius with the highest derivative values compared to circumferences of neighbouring radius.
At first the blurring factor is set for a coarse scale of analysis so that only the very pronounced circular transition from iris to (white) sclera is detected. Then, after this strong circular boundary is more precisely estimated, a second search begins within the confined central interior of the located iris for the fainter pupillary boundary, using a finer convolution scale and a smaller search range defining the paths of (x0, y0, r) contour integration. In the initial search for the outer bounds of the iris, the angular arc of contour integration ds is restricted in range to two opposing 90° cones centered on the horizontal meridian, since eyelids generally obscure the upper and lower limbus of the iris. Then, in the subsequent interior search for the pupillary boundary, the arc of contour integration ds in operator (1) is restricted to the upper 270° in order to avoid the corneal specular reflection that is usually superimposed in the lower 90° cone of the iris from the illuminator located below the video camera. Taking the absolute value in (1) is not required when the operator is used first to locate the outer boundary of the iris, since the sclera is always lighter than the iris and so the smoothed partial derivative with increasing radius near the limbus is always positive. However, the pupil is not always darker than the iris, as in persons with normal early cataract or significant back-scattered light from the lens and vitreous humor; applying the absolute value makes the operator a good circular edge-finder regardless of such polarity-reversing conditions. With the blurring automatically tailored to the stage of search for both the pupil and limbus, and by making it correspondingly finer in successive iterations, the operator defined in (1) has proven to be virtually infallible in locating the visible inner and outer annular boundaries of irises.
Wildes et al. [18], Kong and Zhang [21], Tisse et al. [30] and Ma et al. [22] use the Hough transform to localize irises. The localization method, similar to Daugman's method, is also based on the first derivative of the image. In the method proposed by Wildes, an edge map of the image is first obtained by thresholding the magnitude of the image intensity gradient, where a Gaussian smoothing function with a scaling parameter is used to select the proper scale of edge analysis.
The edge map is then used in a voting process to maximize the defined Hough transform for the desired contour. Considering the obtained edge points, a Hough transform can be written as:

where
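A rough sketch of the coarse search behind operator (1). This is not Daugman's implementation; the candidate-centre grid, radius range and synthetic test image below are assumptions for illustration only:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def circle_mean(img, x0, y0, r, n=64):
    # Normalised contour integral of I(x, y) over a circle of radius r
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    xs = np.clip((x0 + r * np.cos(theta)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip((y0 + r * np.sin(theta)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs].mean()

def daugman_operator(img, centers, radii, sigma=2.0):
    """Search for the circle maximising |G_sigma(r) * d/dr of the circular integral| (Eq. 1)."""
    best, best_score = None, -np.inf
    for (x0, y0) in centers:
        integrals = np.array([circle_mean(img, x0, y0, r) for r in radii])
        deriv = np.gradient(integrals, radii)          # partial derivative w.r.t. radius
        blurred = gaussian_filter1d(deriv, sigma)      # blur by the Gaussian kernel
        k = int(np.argmax(np.abs(blurred)))
        if abs(blurred[k]) > best_score:
            best_score, best = abs(blurred[k]), (x0, y0, radii[k])
    return best

# Example on a synthetic "iris": a dark disc of radius 30 centred at (64, 64)
yy, xx = np.mgrid[0:128, 0:128]
img = np.where((xx - 64) ** 2 + (yy - 64) ** 2 < 30 ** 2, 40.0, 200.0)
centers = [(x, y) for x in range(60, 69, 2) for y in range(60, 69, 2)]
print(daugman_operator(img, centers, np.arange(10, 60)))
```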
iris/sclera boundary was performed first, then the Hough transform for the iris/pupil boundary was performed within the iris region, instead of the whole eye region, since the pupil is always within the iris region.
function as below

[Figure: level set curve evolution over an iris image after 200 iterations ("Left click to get pupil points, right click to get end point").]

The external energy drives the zero level set toward the object boundaries, while the internal energy functional computes the length of
up curve evolution. Note that, when the function is constant, the energy functional is the area of the region. The energy functional can be viewed as the weighted area. The coefficient can be positive or negative, depending on the relative position of the initial contour to the object of interest. For example, if the initial contours are placed outside the object, the coefficient in the weighted area term should take a positive value, so that the contours can shrink faster. If the initial contours are placed inside the object, the coefficient should take a negative value to speed up the expansion of the contours.
By calculus of variations, the Gateaux derivative (first variation) of the functional can be written as
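The circular Hough voting mentioned at the start of this passage (iris/sclera boundary first, then iris/pupil within the detected iris) can be sketched as follows; the accumulator resolution, radius range and synthetic edge points are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def hough_circles(edge_points, radii, shape):
    """Vote for circle centres (xc, yc) and radius r from binary edge points, as in the
    circular Hough localization described above. Returns the best (xc, yc, r)."""
    acc = np.zeros((len(radii), shape[0], shape[1]), dtype=np.int32)
    theta = np.linspace(0.0, 2.0 * np.pi, 90, endpoint=False)
    for (x, y) in edge_points:
        for k, r in enumerate(radii):
            xc = np.round(x - r * np.cos(theta)).astype(int)
            yc = np.round(y - r * np.sin(theta)).astype(int)
            ok = (xc >= 0) & (xc < shape[1]) & (yc >= 0) & (yc < shape[0])
            acc[k, yc[ok], xc[ok]] += 1      # each edge point votes for possible centres
    k, yc, xc = np.unravel_index(np.argmax(acc), acc.shape)
    return xc, yc, radii[k]

# Example: edge points lying on a circle of radius 20 centred at (50, 40)
angles = np.linspace(0, 2 * np.pi, 120, endpoint=False)
edges = [(int(50 + 20 * np.cos(a)), int(40 + 20 * np.sin(a))) for a in angles]
print(hough_circles(edges, radii=list(range(10, 31)), shape=(100, 100)))
```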
The second and the third terms in the right hand side correspond to the gradient flows of the energy functionals, respectively, and are responsible for driving the zero level curve towards the object boundaries. To explain the effect of the first term, which is associated with the internal energy, we notice that the gradient flow
in false detection due to noises such as strong
detection for the above two methods. The eyelid detection system proved quite successful, and managed to isolate most occluding eyelid regions. One problem was that it would sometimes isolate too much of the iris region, which could make the recognition process less accurate, since there is less iris information. However, this is preferred over including too much of the iris region, if there is a high chance it would also include undetected eyelash and eyelid regions.

Fig. 9: Results of the active contour segmentation method based on level set evolution without re-initialization over pupils that are not perfect circles.
The eyelash detection system implemented for the CASIA database also proved to be successful in isolating most of the eyelashes occurring within the iris region, as shown in Figure 11. A slight problem was that areas where the eyelashes were light, such as at the tips, were not detected. However, these undetected areas were small when compared with the size of the iris region.

Fig. 10: Automatic segmentation of an image from the CASIA database. The black region denotes the detected eyelid.

Fig. 11: The eyelash detection technique; eyelash regions are detected using thresholding and denoted as black.

TABLE I
COMPARISON OF DIFFERENT SEGMENTATION TECHNIQUES
Method | No. of eye images | Properly segmented | Accuracy
Daugman's Method | 756 | 658 | 87%
Hough Transform | 756 | 624 | 83%
Proposed Method | 756 | 750 | Approx. 100%

The proposed method of active contour segmentation based on level set evolution without re-initialization provided perfect segmentation results for the pupil and limbus boundaries with a success rate of almost 100%. The only problem with this system was that the initial contour was to be defined for each eye image manually.

REFERENCES
[1] V. Matyas and Z. Riha, "Toward reliable user authentication through biometrics," IEEE Security and Privacy, vol. 1, no. 3, pp. 45-49, 2003.
[2] J. G. Daugman, "Phenotypic versus genotypic approaches to face recognition," Face Recognition: From Theory to Applications, pp. 108-123. Heidelberg: Springer-Verlag, 1998.
[3] S. D. Fried, "Domain access control systems and methodology," http://www.itu.dk/courses/SIAS/E2005/AU2240 01.pdf, 2004.
[4] M. Bromba, "Biometrics FAQs," http://www.bromba.com/faq/biofaqe.htm, 2010.
[5] A. K. Jain, R. Bolle, and S. Pankanti, Personal Identification in Networked Society, 2nd edition. Kluwer Academic Publisher, E.U.A., 1999.
[6] A. K. Jain, A. Ross, and S. Prabhakar, "An introduction to biometric recognition," IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, no. 1, pp. 4-19, January 2004.
[7] S. Liu and M. Silverman, "A practical guide to biometric security technology," IT Professional, vol. 3, no. 1, pp. 27-32, January 2001.
[8] "Biometrics and the courts," http://ctl.ncsc.dni.us/biomet%20web/BMIndex.html, 2010.
[9] Idesias Biometric Technologies, "Biometric comparison table," http://www.idesia-biometrics.com/technology/biometric_comparison_table.html, 2010.
[10] International Biometric Group, "Which is the best biometric technology?," http://www.biometricgroup.com/reports/public/reports/best_biometric.html, 2010.
[11] J. D. Woodward, K. W. Webb, E. M. Newton, M. A. Bradley, D. Rubenson, K. Larson, J. Lilly, K. Smythe, B. Houghton, H. A. Pincus, J. Schachter, and P. Steinberg, Army Biometric Applications - Identifying and Addressing Socio-Cultural Concerns, Rand Corporation, Santa Monica, 2001.
[12] A. K. Khurana, Comprehensive Ophthalmology, New Age International (P) Ltd., 4th edition, 2007.
[13] L. A. Remington, Clinical Anatomy of the Visual System, Elsevier Inc., 2nd edition, 2005.
[14] J. G. Daugman, "High confidence visual recognition of persons by a test of statistical independence," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 11, pp. 1148-1161, November 1993.
[15] K. Nishino and S. K. Nayar, "Eyes for relighting," ACM Trans. Graph., vol. 23, no. 3, pp. 704-711, 2004.
[16] T. A. Camus and R. Wildes, "Reliable and fast eye finding in close-up images," Proceedings of the IEEE 16th
International Conference on Pattern Recognition, pp. 389-394, Quebec, August 2002.
[17] D. Martin-Roche, C. Sanchez-Avila, and R. Sanchez-Reillo, "Iris recognition for biometric identification using dyadic wavelet transform zero-crossing," IEEE Aerospace and Electronic Systems Magazine, vol. 17, no. 10, pp. 3-6, 2002.
[18] R. P. Wildes, "Iris recognition: an emerging biometric technology," Proceedings of the IEEE, vol. 85, no. 9, pp. 1348-1363, U.S.A., September 1997.
[19] J. Cui, Y. Wang, T. Tan, L. Ma, and Z. Sun, "A fast and robust iris localization method based on texture segmentation," Proceedings of the SPIE Defense and Security Symposium, vol. 5404, pp. 401-408, August 2004.
[20] J. Huang, Y. Wang, T. Tan, and J. Cui, "A new iris segmentation method for recognition," Proceedings of the 17th International Conference on Pattern Recognition (ICPR), vol. 3, pp. 23-26, 2004.
[21] W. K. Kong and D. Zhang, "Accurate iris segmentation method based on novel reflection and eyelash detection model," Proceedings of the International Symposium on Intelligent Multimedia, Video and Speech Processing, pp. 263-266, Hong Kong, May 2001.
[22] L. Ma, Y. Wang, and T. Tan, "Iris recognition using circular symmetric filters," Proceedings of the 25th International Conference on Pattern Recognition, vol. 2, pp. 414-417, Quebec, August 2002.
[23] L. Ma, T. Tan, Y. Wang, and D. Zhang, "Personal identification based on iris texture analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 12, pp. 2519-2533, December 2003.
[24] L. Ma, Y. Wang, and D. Zhang, "Efficient iris recognition by characterizing key local variations," IEEE Transactions on Image Processing, vol. 13, no. 6, pp. 739-750, June 2004.
[25] L. Liam, A. Chekima, L. Fan, and J. Dargham, "Iris recognition using self organizing neural network," Proceedings of the IEEE Student Conference on Research and Developing Systems, pp. 169-172, Malaysia, June 2002.
[26] Y. Du, R. Ives, D. Etter, T. Welch, and C. Chang, "A new approach to iris pattern recognition," Proceedings of the SPIE European Symposium on Optics/Photonics in Defence and Security, vol. 5612, pp. 104-116, October 2004.
[27] J. Mira and J. Mayer, "Image feature extraction for application of biometric identification of iris - a morphological approach," Proceedings of the 16th Brazilian Symposium on Computer Graphics and Image Processing, pp. 391-398, Brazil, October 2003.
[28] J. Kim, S. Cho, and J. Choi, "Iris recognition using wavelet features," Kluwer Academic Publishers, Journal of VLSI Signal Processing, no. 38, pp. 147-256, November 2004.
[29] A. P. Dempster, N. Laird, and D. Rubin, "Maximum likelihood from incomplete data via the EM algorithm," Journal of the Royal Statistical Society, vol. 39, pp. 1-38, 1977.
[30] C. Tisse, L. Martin, L. Torres, and M. Robert, "Person identification technique using human iris recognition," Proceedings of the 25th International Conference on Vision Interface, pp. 294-299, Calgary, July 2002.
[31] C. Li, C. Xu, C. Gui, and M. D. Fox, "Level Set Evolution Without Re-initialization: A New Variational Formulation," IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 1, pp. 430-436, 2005.
Abstract - We have investigated and studied X-ray plasmon satellites in the rare earth compounds La2CuO4, Nd2CuO4, Gd2CuO4, PrNiSb2, NdNiSb2, Pr(OH)3, Nd(OH)3 and Sm(OH)3.
INTRODUCTION
corresponding main line should be equal to the quantum of Plasmon energy, ħωp, which is given by [10]

$\hbar\omega_p = 28.8\sqrt{\dfrac{Z\sigma}{W}}\ \text{eV}$   (1)

where Z = number of unpaired electrons, σ = specific gravity and W = molecular weight.
This equation can be derived as given below. From the classical consideration, we get the frequency of Plasmon oscillation as

$\omega_p = \left(\dfrac{4\pi n e^2}{m}\right)^{1/2}$   (2)

Hence the amount of energy given to the Plasmon becomes Ep = ħωp. In this equation we can write n = ZσL/W, where σ, Z and W are defined above and L is the Avogadro number. By putting in the numerical values of the constants, we get the Plasmon energy as

$\hbar\omega_p = 28.8\sqrt{\dfrac{Z\sigma}{W}}\ \text{eV}$   (3)

Our calculated values of Ep have been compared with Scrocco's experimental values. We have also calculated the relative intensity of the plasmon satellites, which is different in different processes. If the excitation of the plasmon occurs during the transport of the electron through the solid, it is known as the extrinsic process of plasmon excitation. The plasmon can also be excited by another method known as the intrinsic process. In this process, excitation of the plasmon takes place simultaneously with the creation of a hole. Bradshaw et al. have further divided core hole excitation into two classes:
1 - where the number of slow electrons is conserved;
2 - where the number of slow electrons is not conserved.
The author has calculated the relative intensity in both the cases with a new modification in the light of the work of Bradshaw [12] and Lengreth [13], which explains that not only the intrinsic process but also the extrinsic process and their relative contribution may contribute to the relative intensities. The combined effect of intrinsic and extrinsic plasmon excitation intensity variation was suggested by Lengreth as

(4)

The value of β is taken as β = 0.12 rs, which is purely intrinsic, rs = (47.11/ħωs)^(2/3) is a dimensionless parameter, and α = 0.47 rs^(1/2) is used in place of α = (1+l/L)^(-1) used by Pardee et al. (14). The equation contains a series of terms. The first term of the equation is purely extrinsic, while the second term is purely intrinsic. The other terms contain the relative contributions of both extrinsic and intrinsic. The specialty of this formula is that each term, alone or simultaneously with other terms, is able to give the relative intensity. This formula also includes both the categories mentioned by Bradshaw and gives better results as compared with traditional methods for the calculation of the relative intensity.
Using the values of α, β and rs in equation (4), the author has for the first time calculated the relative intensity of the rare earth compounds (La2CuO4, Nd2CuO4, Gd2CuO4, PrNiSb2, NdNiSb2, Pr(OH)3, Nd(OH)3, Sm(OH)3), and our calculated and estimated values are in agreement with the calculated values of J. C. Parlebas et al. [15] and A. Szytula, B. Penc, A. Jezierski [16].

Reference
1. Korbar H. & Mehlhorn W.A.; Phys. 191, (1966), 217.
2. Haynes S.K., Velinsky M. & Velinsky L.J.; Nucl. Phys. A99 (1967), 537.
3. Rudd M.E., Edward & Volz D.J.; Phys. Rev. 151, (1966), 28.
4. Asaad W.N. & Burhop E.H.S.; Proc. Phys. Soc., London 71, (1958), 369.
5. Listengarten M.A.; Bull. Acad. Sci. U.S.S.R., Phys. Ser. 26 (1962), 182.
6. Asaad W.N.; Nucl. Phys. 66, (1965b), 494.
7. M. Scrocco, photoemission spectra of Pb(II) halides; Phys. Rev. B 25 (1982), 1535-1540.
8. M. Scrocco, Satellites in X-ray photoelectron spectroscopy of insulators I, 32 (1985), 1301-1306.
9. M. Scrocco, Satellites in X-ray photoelectron spectroscopy of insulators II, 32 (1985), 1307-1310.
10. L. Marton, L.B. Lader and H. Mendlowitz; Adv. Electronics and Electron Physics, edited by L. Marton, Academic, New York 7 (1955), 225.
11. Surendra Poonia and S.N. Soni, Indian Journal of Pure and Applied Physics, vol. 45, Feb. 2007, pp. 119-126.
12. A. M. Bradshaw, Cederbaum S.L., Domcke W. & Krause, Jour. Phys. C: Solid State Phys. 7, 4503, 1974.
13. D. C. Lengreth, Phys. Rev. Lett., 26, 1229, 1971.
14. W. J. Pardee, G.D. Mahan, D. E. Eastman, R.A. Pollak, L. Ley, F.R. McFeely, S.P. Kowalczyk and D.A. Shirley, Phys. Rev. B, 11, 3614, 1975.
15. J.C. Parlebas et al., J. Phys. France 51, (1990), 639-650.
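A one-line helper for the plasmon-energy relation (1)/(3) as reconstructed above; the example inputs are placeholders, not values for the compounds studied:

```python
import math

def plasmon_energy_ev(Z, sigma, W):
    # Eqs. (1)/(3): h_bar * w_p = 28.8 * sqrt(Z * sigma / W)  [eV]
    # Z = number of unpaired electrons, sigma = specific gravity, W = molecular weight
    return 28.8 * math.sqrt(Z * sigma / W)

# Illustrative call only -- the inputs below are placeholders, not data from the paper
print(round(plasmon_energy_ev(Z=8, sigma=7.0, W=405.0), 2))
```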
S.No. | Compound | Es | Rs | Alpha (α) | Beta (β) | Author's calculated relative intensity | Experimental value of relative intensity, Ref. [15,16] | Assignment
1 | PrNiSb2 | 2.96 | 6.33 | 1.182 | 0.759 | 1.1011719 | 1.08 | +2/2
2 | NdNiSb2 | 2.96 | 6.33 | 1.182 | 0.759 | 0.4184298 | 0.42 | -2/2
3 | La2CuO4 | 5.91 | 3.99 | 0.938 | 0.47864 | 0.37111941 | 0.411 | -2/2
4 | Nd2CuO4 | 5.91 | 3.99 | 0.938 | 0.47864 | 0.37111941 | 0.363 | -2/2
5 | Gd2CuO4 | 5.91 | 3.99 | 0.938 | 0.47864 | 0.37111941 | 0.388 | -2/2
6 | Pr(OH)3 | 7.82 | 3.31 | 0.855 | 0.39719 | 1.73677471 | 1.7 | 3*(+0.1+2/2+3/62)
7 | Nd(OH)3 | 7.24 | 3.48 | 0.877 | 0.41813 | 1.83197714 | 2 | 3*(+0.1+2/2+3/62)
8 | Sm(OH)3 | 5.91 | 3.99 | 0.938 | 0.47864 | 2.12073651 | 2.5 | 3*(+0.1+2/2+3/62)
Abstract - This paper presents the concept of simultaneous ac-dc power transmission. Long extra high voltage (EHV) ac lines cannot be loaded to their thermal limits because of the instability this causes in the power system. With the scheme proposed in this paper, it is possible to load these lines very close to their thermal limits. The conductors are allowed to carry the usual ac along with dc superimposed on it. The advantage of parallel ac-dc transmission for the improvement of transient stability and dynamic stability and for damping out oscillations has been established. A simulation study is carried out in the MATLAB software package. The results show the stability of the power system when compared with ac-only transmission.
Index Terms - Extra high voltage (EHV) transmission, flexible ac transmission system (FACTS), HVDC, MATLAB, simultaneous ac-dc transmission, Power System Stability, Transmission Efficiency
1 INTRODUCTION
T. Vijay Muni received his Masters Degree in Power & Industrial Drives from JNT University, Kakinada, India in 2010. Presently he is working as Assistant Professor in the Electrical and Electronics Engineering Department, Sri Sarathi Institute of Engineering & Technology, Nuzvid, India. PH-09000055144. E-mail: [email protected]
T. Vinoditha is currently pursuing the masters degree program in Electrical Power Systems at JNT University, Hyderabad, India. PH-09052352600. E-mail: [email protected]
D. Kumar Swamy received his Masters Degree in EPSHV from JNT University, Kakinada, India. Presently he is working as Associate Professor & Head of the Electrical & Electronics Engineering Department, Dr. Paul Raj Engineering College, Bhadrachalem, India. PH-09866653638. E-mail: [email protected]

2 CONCEPT OF SIMULTANEOUS AC-DC TRANSMISSION
The circuit diagram in Figure 1 shows the basic scheme for simultaneous ac-dc transmission. The dc power is obtained through the rectifier bridge and in-
jected to the neutral point of the zigzag connected secondary of the sending end transformer, and again it is reconverted to ac by the inverter bridge at the receiving end. The inverter bridge is again connected to the neutral of the zigzag connected winding of the receiving end transformer. Star connected primary windings in place of delta-connected windings for the transformers may also be used for higher supply voltages. The single circuit transmission line carries both 3 phase ac and dc power. It is to be noted that a part of the total ac power at the sending end is converted into dc by the tertiary winding of the transformer connected to the rectifier bridge. The same dc power is reconverted to ac at the receiving end by the tertiary winding of the receiving end transformer connected to the inverter bridge. Each conductor of the line carries one third of the total dc current along with the ac current Ia. The return path of the dc current is through the ground. A zigzag connected winding is used at both ends to avoid saturation of the transformer due to dc current flow. A high value of reactor, Xd, is used to reduce harmonics in the dc current.
In the absence of zero sequence and third harmonics or their multiple harmonic voltages, under normal operating conditions, the ac current flow will be restricted between the zigzag connected windings and the three conductors of the transmission line. Even the presence of these components of voltages may only be able to produce negligible current through the ground, due to the high value of Xd.
Assuming the usual constant current control of the rectifier and constant extinction angle control of the inverter, the equivalent circuit of the scheme under normal steady state operating conditions is shown in Fig. 2.

The expressions for the sending end quantities, when the resistive drops in the line and transformer are neglected, can be written as:

Sending end voltage:
$V_S = A V_R + B I_R$   (1)
Sending end current:
$I_S = C V_R + D I_R$   (2)
Sending end power:
$P_S + jQ_S = -\dfrac{V_S V_R^*}{B^*} + \dfrac{D^*}{B^*} V_S^2$   (3)
Receiving end power:
$P_R + jQ_R = \dfrac{V_S^* V_R}{B^*} - \dfrac{A^*}{B^*} V_R^2$   (4)

The expressions for the dc current and the dc power, when the ac resistive drop in the line and transformer is neglected, are:
Dc current:
$I_d = \dfrac{V_{dr}\cos\alpha - V_{di}\cos\gamma}{R_{cr} + (R/3) - R_{ci}}$   (5)
Power in inverter:
$P_{di} = V_{di} \cdot I_d$   (6)
Power in rectifier:
$P_{dr} = V_{dr} \cdot I_d$   (7)

where R is the line resistance per conductor, Rcr and Rci are the commutating resistances, α and γ are the firing and extinction angles of the rectifier and inverter respectively, and Vdr and Vdi are the maximum dc voltages of the rectifier and inverter side respectively. The values of Vdr and Vdi are 1.35 times the line to line tertiary winding ac voltages of the respective sides.

Reactive powers required by the converters are:
$Q_{di} = P_{di} \tan\theta_i$   (8)
$Q_{dr} = P_{dr} \tan\theta_r$   (9)
$\cos\theta_i = \dfrac{\cos\gamma + \cos(\gamma + \mu_i)}{2}$   (10)
$\cos\theta_r = \dfrac{\cos\alpha + \cos(\alpha + \mu_r)}{2}$   (11)
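A small sketch evaluating Eqs. (5)-(11) as reconstructed above; the numerical inputs are illustrative placeholders and the overlap angles μr, μi are assumed values, not data from the paper:

```python
import math

def dc_quantities(Vdr, Vdi, alpha, gamma, mu_r, mu_i, R, Rcr, Rci):
    """Evaluate Eqs. (5)-(11): dc current, rectifier/inverter powers and their reactive demand.
    Angles are in radians; Vdr, Vdi are the maximum dc voltages of the two bridges."""
    Id = (Vdr * math.cos(alpha) - Vdi * math.cos(gamma)) / (Rcr + R / 3.0 - Rci)   # Eq. (5)
    Pdr, Pdi = Vdr * Id, Vdi * Id                                                  # Eqs. (6), (7)
    cos_ti = 0.5 * (math.cos(gamma) + math.cos(gamma + mu_i))                      # Eq. (10)
    cos_tr = 0.5 * (math.cos(alpha) + math.cos(alpha + mu_r))                      # Eq. (11)
    Qdi = Pdi * math.tan(math.acos(cos_ti))                                        # Eq. (8)
    Qdr = Pdr * math.tan(math.acos(cos_tr))                                        # Eq. (9)
    return Id, Pdr, Pdi, Qdr, Qdi

# Illustrative per-unit style placeholders, not the paper's data
print(dc_quantities(Vdr=1.0, Vdi=0.96, alpha=math.radians(15), gamma=math.radians(18),
                    mu_r=math.radians(20), mu_i=math.radians(20),
                    R=0.03, Rcr=0.01, Rci=0.005))
```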
current wave are obtained for (Id/3Ia) < 1.414.
The instantaneous value of each conductor voltage with respect to ground becomes the dc voltage Vd with a superimposed sinusoidally varying ac voltage having rms value Eph, the peak value being:

$E_{max} = V_d + 1.414\, E_{ph}$

The electric field produced by any conductor voltage possesses a dc component superimposed with a sinusoidally varying ac component. But the instantaneous electric field polarity changes its sign twice in a cycle if (Vd/Eph) < 1.414. Therefore, the higher creepage distance requirement for insulator discs used for HVDC lines does not apply.
Each conductor is to be insulated for Emax, but the line to line voltage has no dc component and ELL(max) = 2.45 Eph. Therefore, the conductor to conductor separation distance is determined only by the rated ac voltage of the line.
Assuming Vd/Eph = k,

$\dfrac{P_{dc}}{P_{ac}} = \dfrac{V_d I_d}{3 E_{ph} I_a \cos\varphi} = \dfrac{k\sqrt{1 - x^2}}{x \cos\varphi}$   (17)

Total power:

$P_t = P_{dc} + P_{ac} = \left(1 + \dfrac{k\sqrt{1 - x^2}}{x \cos\varphi}\right) P_{ac}$   (18)

Detailed analysis of the short circuit current, the ac design of the protective scheme, and the filter and instrumentation network required for the proposed scheme is beyond the scope of the present work, but the preliminary qualitative analysis presented below suggests that commonly used techniques in HVDC/ac systems may be adopted for this purpose.
In case of a fault in the transmission system, gate signals to all the SCRs are blocked and those to the bypass SCRs are released to protect the rectifier and inverter bridges. CBs are then tripped at both ends to isolate the complete system. As mentioned earlier, if (Id/3Ia) < 1.414, the CBs connected at the two ends of the transmission line interrupt the current at natural current zeroes and no special dc CB is required. To ensure proper operation of the transmission line CBs, tripping signals to these CBs may only be given after sensing the zero crossing of the current by zero crossing detectors. Else, CBs connected to the delta side of the transformers (not shown in Figure 1) may be used to isolate the fault. Saturation of the transformer core, if any, due to asymmetric fault current reduces the line side current but increases the primary current of the transformer. Delta side CBs, designed to clear transformer terminal faults and winding faults, clear these faults easily.
Proper values of ac and dc filters, as used in HVDC systems, may be connected to the delta side and zigzag neutral respectively to filter out higher harmonics from the dc and ac supplies. However, the filters may be omitted for low values of Vd and Id.
At the neutral terminals of the zigzag winding, dc current and voltages may be measured by adopting common methods used in HVDC systems. Conventional cvts as used in EHV ac lines are used to measure the ac component of the transmission line voltage. The superimposed dc voltage in the transmission line does not affect the working of the cvts. Linear couplers with high air-gap cores may be employed for measurement of the ac component of the line current, as the dc component of the line current is not able to saturate high air-gap cores.
Electric signal processing circuits may be used to generate composite line voltage and current waveforms from the signals obtained for the dc and ac components of voltage and current. Those signals are used for protection and control purposes.

3 SELECTION OF TRANSMISSION VOLTAGE
The instantaneous value of each conductor voltage with respect to ground becomes greater in the case of the simultaneous ac-dc transmission system by the amount of the dc voltage superimposed on the ac, and more discs are to be added in each string insulator to withstand this increased dc voltage. However, there is no change required in the conductor separation distance, as the line-to-line voltage remains unaltered. Therefore, the tower structure does not need any modification if the same design of conductor is used. Another possibility could be that the original ac voltage of the transmission is reduced as dc voltage is added, such that the peak voltage with respect to ground remains unchanged. Then there would be no need to modify the towers and insulator strings.

4 PROPOSED APPLICATIONS
1. Long EHV ac lines cannot be loaded to their thermal limit in order to keep a sufficient margin against transient instability and to keep voltage regulation within the allowable limit. The simultaneous power flow does not impose any extra burden on the stability of the system; rather, it improves the stability. The resistive drop due to the dc current being very small in comparison to the impedance drop due to the ac current, there is also no appreciable change in voltage regulation due to the superimposed dc current.
2. Therefore, one possible application of simultaneous ac-dc transmission is to load the line close to its thermal limit by transmitting additional dc power. Figure 3 shows the variation of Pt/Pac for changing values of k and x at unity power factor. However, it is to be noted that additional conductor insulation is to be provided due to the insertion of dc.
3. The necessity of additional dc power transmission will be experienced most during the peak load period, which is characterized by lower than rated voltage. If dc power is injected during the peak loading period only, with Vd being in the range of 5% to 10% of Eph, the same transmission line without any enhanced insulation level may be allowed to be used. For
a value of x = 0.7 and Vd = 0.05 Eph or 0.10 Eph, 5.1% or 10.2% more power may be transmitted.
4. By adding a few more discs in the insulator strings of each phase conductor, with appropriate modifications in the cross-arms of the towers, the insulation level between phase and ground may be increased to a high value, which permits a proportional increase in Emax. Therefore a higher value of Vd may be used to increase the dc and total power flow through the line. This modification in the existing ac lines is justified due to the high cost of a separate HVDC line.
5. With the very fast electronic control of the firing angle (α) and extinction angle (γ) of the converters, the fast control of dc power may also be used to improve dynamic stability and damp out oscillations in the system, similar to that of ac-dc parallel transmission lines.
6. Control of α and γ also controls the rectifier and inverter VAR requirement and therefore may be used to control the voltage profile of the transmission line during low load conditions, working as inductive shunt compensation. It may also be considered that the capacitive VAR of the transmission line supplies the whole or part of the inductive VAR requirement of the converter system. In a pure HVDC system the capacitance of the transmission line cannot be utilized to compensate inductive VAR.
7. The independent and fast control of the active and reactive power associated with dc, superimposed with the normal ac active and reactive power, may be considered to be working as another component of FACTS for power upgrading by combining ac and dc transmission.
8. Simultaneous ac-dc power transmission may find its application in some special cases of LV and MV distribution systems. When 3-phase power in addition to dc power is supplied
In aircraft, 3-phase loads are generally fed with a higher frequency supply of about 400 Hz and a separate line is used for dc loads. Skin effect restricts the optimum use
weight of distributors.

5 EXPERIMENTAL VERIFICATION
The feasibility of the basic scheme of simultaneous ac-dc transmission was verified in the laboratory. Transformers having a rating of 2 kVA, 400/230/110 V are used at each end. A supply of 3-phase, 400 V, 50 Hz is given at the sending end and a 3-phase, 400 V, 50 Hz, 1 HP induction motor in addition to a 3-phase, 400 V, 0.7 kW resistive load was connected at the receiving end. A 10 A, 110 V dc reactor (Xd) was used at each end with the 230 V zigzag connected neutral. Two identical SCR bridges were used for the rectifier and inverter. The dc voltages of the rectifier and inverter bridges were adjusted between 145 V and 135 V to vary the dc current between 0 and 3 A.
The same experiment was repeated by replacing the rectifier at the sending end and the inverter at the receiving end by a 24 V battery and a 5 A, 25 Ω rheostat respectively, between Xd and ground.
The power transmission with and without the dc component was found to be satisfactory in all the cases. To check the saturation of the zigzag connected transformer for a high value of Id, the ac loads were disconnected and the dc current was increased to 1.2 times the rated current for a short time with the input transformer kept energized from 400 V ac. But no changes in the exciting current and terminal voltage of the transformer were noticed, verifying no saturation even with a high value of Id.

6 SIMULATION RESULTS
The loadability of a Moose (commercial name), ACSR, twin-bundle conductor, 400-kV, 50-Hz, 450-km double circuit line has been computed.

Fig. 3: Simulink model of simultaneous AC-DC transmission (zigzag phase-shifting transformers, rectifier and inverter bridges, and a distributed parameters line).
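A short check of Eqs. (17)-(18), reproducing the 5.1% and 10.2% figures quoted above for x = 0.7 with Vd = 0.05 Eph and 0.10 Eph (this is a sketch under the interpretation of x given in the text, not the authors' code):

```python
import math

def power_gain(k, x, power_factor=1.0):
    """Eqs. (17)-(18): ratio of dc to ac power and of total to ac power,
    for k = Vd/Eph and x the ac loading fraction."""
    ratio_dc = k * math.sqrt(1.0 - x ** 2) / (x * power_factor)   # Pdc / Pac, Eq. (17)
    return ratio_dc, 1.0 + ratio_dc                                # Pdc/Pac and Pt/Pac, Eq. (18)

# Example quoted in the text: x = 0.7 with Vd = 0.05*Eph or 0.10*Eph
for k in (0.05, 0.10):
    dc_ratio, total_ratio = power_gain(k, x=0.7)
    print(k, round(100 * dc_ratio, 1), "% extra power, Pt/Pac =", round(total_ratio, 3))
```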
TABLE I
COMPUTED RESULTS
Power Angle 30 45 60 75
AC Current(kA) 0.416 0.612 0.80 0.98
DC Current(kA) 5.25 5.07 4.80 4.50
AC Power(MW) 290 410 502 560
DC Power(MW) 1685 1625 1545 1150
Total Power(MW) 1970 2035 2047 1710
TABLE II
SIMULATION RESULTS
Power Angle 30 45 60 75
PS (MW) 2306 2370 2380 2342
Pac (MW) 295 410 495 540
Pdc (MW) 1715 1657 1585 1498
Pac loss (MW) 12 30 54 82
Pdc loss (MW) 280 265 241 217
PR (MW) 1988 2050 2060 1995
Fig. 5: Sending and receiving end currents

7 CONCLUSION
A simple scheme of simultaneous EHV ac-dc power transmission through the same transmission line has been presented. Expressions for the active and reactive powers associated with ac and dc, the conductor voltage level and the total power have been obtained for
ACKNOWLEDGMENT
We are thankful to the Department of Electrical and Electronics Engineering of Sri Sarathi Institute of Engineering and Technology, Nuzvid, India, and Dr. Paul Raj Engineering College, Bhadrachalem, India, with whom we had useful discussions regarding HVDC and the performance of transmission lines. Any suggestions for further improvement of this topic are most welcome.
REFERENCES
[1] N. G. Hingorani, FACTSflexible A.C. transmission system,
in Proc. Inst. Elect. Eng. 5th. Int. Conf. A.C. D.C. Power Trans-
mission,
[2] Padiyar.HVDC Power Transmission System. Wiley East-
ern, New Delhi, 1993)
[3] H. Rahman and B H Khan Stability Improvement of Power
Systemby Simultaneous AC-DC Power Transmission Electric
Power System Research Journal, Elsevier, Paper Editorial ID
No. EPSRD- 06-00732, Press Article No. EPSR-2560 Digital
Object.
[4] I W Kimbark.Direct Current Transmission Vol-I.Wiley,
New York, 1971.
Abstract - In this paper a comprehensive model for Distribution Systems Planning (DSP) in the case of using Distributed Generation (DG), with regard to load models, is provided. The proposed model optimizes the size and location of the distributed generation. This model can optimize the investment cost in distributed generation better than other solutions. It minimizes the operating costs and the total cost of the system losses. This model affects the optimum location and size of the distributed generation in distribution systems significantly. A simulation study based on a new multiobjective evolutionary algorithm is carried out. It is important that, in the analysis made in this paper, DG is introduced as a key element in solving the DSP. Moreover, the proposed method can easily be extended to satisfy other goals.
Index Terms - Economic Analysis, Distributed Generation, Distribution Systems Planning, Load Models.
1 INTRODUCTION
2) Distribution Feeders Thermal Capacity: Distribution system feeders have a capacity limit for the total power flow through them:

$S_{ij} \le S_{ij}^{Max}, \quad \forall i \in TN, \; \forall j \in M$   (10)

3) Distribution Substations Capacity: The power generated by the substations shall be within the substations' capacity level:

$\sum_{j=1}^{M} SS_{ij} \le SS_{i}^{Max}, \quad \forall i \in SS$   (11)

4) Voltage Drop: The DISCO provides the predetermined maximum permissible voltage drop limit.

Two three-phase transformers with 10 MVA power capacity and a 0.2 M$ price each may also be installed to expand the main substation. The cost of upgrading the existing primary distribution feeder with another of higher capacity is 0.15 M$/km. The price of other existing equipment and feeders is considered zero. The system power factor and discount rate are 0.9 and 12.5%, respectively. During all optimization runs, the population size and maximum number of generations are 300 and 750, respectively. The maximum size of the Pareto optimal set includes 20 solutions. The probabilities of crossover and mutation are 0.9 and 0.01, respectively. Recently, studies on evolutionary algorithms have indicated that these algorithms may be effective in removing the problems of older methods. The applied optimization algorithm is SPEA [14].
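The feeder and substation constraints (10) and (11) lend themselves to a direct computational check. The sketch below is only an illustration of how a candidate plan could be screened against them; the data structures and names are assumptions, not the authors' code.

```python
def feeder_thermal_ok(S, S_max):
    """Constraint (10): apparent power flow S[i][j] on every feeder i and load
    level j must stay within the feeder's thermal rating S_max[i][j]."""
    return all(S[i][j] <= S_max[i][j] for i in S for j in S[i])

def substation_capacity_ok(SS, SS_max):
    """Constraint (11): the power supplied by substation i, summed over all load
    levels j, must not exceed the substation rating SS_max[i]."""
    return all(sum(SS[i].values()) <= SS_max[i] for i in SS)

# Tiny made-up example: one feeder, one substation, two load levels (MVA).
S      = {"f1": {1: 4.2, 2: 5.1}}
S_max  = {"f1": {1: 6.0, 2: 6.0}}
SS     = {"s1": {1: 8.0, 2: 9.5}}
SS_max = {"s1": 20.0}
print(feeder_thermal_ok(S, S_max), substation_capacity_ok(SS, SS_max))  # True True

# The reported SPEA settings (population 300, 750 generations, crossover 0.9,
# mutation 0.01) would drive the search over candidate plans screened this way.
```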
Fig. 1. DISCO's buses voltage profile for constant load model

TABLE 2
RESULTS OF SCENARIO A
Index         Constant   Residential   Industrial   Commercial   Mixture
J (M$)        31.3837    24.0529       26.7622      21.1386      25.0651
Sint (MVA)    4.7820     2.9595        2.2405       4.9024       3.3959
Si,u (MVA)    9.7132     7.6799        9.8189       4.1953       7.7625
PL (MW)       3.06       2.55          2.74         2.36         2.63
QL (MVAr)     2.14       1.78          1.92         1.65         1.84
NL            1          1             1            1            1

Fig. 2. DISCO's buses voltage profile for industrial load model

TABLE 3
RESULTS OF SCENARIO B
Index         Constant   Residential   Industrial   Commercial   Mixture
J (M$)        30.7797    23.2452       26.0381      20.8633      24.7617
Sint (MVA)    1.4927     3.0012        0.889        4.7927       1.2087
Si,u (MVA)    0          0             0            0.8567       0
PLDG (MW)     1.46       1.6           1.4          1.91         1.41
QLDG (MVAr)   1.02       1.12          0.98         1.34         0.98
SDG (MVA)     2,3,4,4    4,4           2,4,2,3      4            3,2,4,1
DG Location   5,3,9,7    9,7           3,7,5,9      9            7,3,9,5
NL            0          0             0            0            0
Fig. 7. DISCO's primary distribution feeder power flow for industrial load model
Fig. 10. DISCO's primary distribution feeder power flow for mixture load model
REFERENCES
[1] M. Ponnavaikko, K. S. Prakasa, and S. S. Venkata, "Distribution system planning through a quadratic mixed-integer programming approach," IEEE Trans. Power Del., vol. PWRD-2, no. 4, pp. 1157-1163, Oct. 1987.
[2] S. K. Khator and L. C. Leung, "Power distribution planning: A review of models and issues," IEEE Trans. Power Syst., vol. 12, no. 3, pp. 1151-1159, Aug. 1997.
[3] P. P. Barker and R. W. De Mello, "Determining the impact of distributed generation on power systems. I. Radial distribution systems," in IEEE Power Eng. Soc. Summer Meeting, vol. 3, 2000, pp. 1645-1656.
[4] X. Ding and A. A. Girgis, "Optimal load shedding strategy in power systems with distributed generation," in IEEE Winter Meeting Power Eng. Soc., vol. 2, 2001, pp. 788-793.
[5] N. Hadjsaid, J. F. Canard, and F. Dumas, "Dispersed generation impact on distribution networks," IEEE Comput. Appl. Power, vol. 12, no. 2, pp. 22-28, Apr. 1999.
[6] L. Coles and R. W. Beck, "Distributed generation can provide an appropriate customer price response to help fix wholesale price volatility," in IEEE Power Eng. Soc. Winter Meeting, vol. 1, 2001, pp. 141-143.
[7] C. Concordia and S. Ihara, "Load representation in power systems stability studies," IEEE Trans. Power App. Syst., vol. PAS-101, no. 4, Apr. 1982, pp. 969-977.
[8] IEEE Task Force on Load Representation for Dynamic Performance, "Bibliography on load models for power flow and dynamic performance simulation," IEEE Trans. Power Syst., vol. 10, no. 1, Feb. 1995, pp. 523-538.
[9] IEEE Task Force on Load Representation for Dynamic Performance, "Load representation for dynamic performance analysis," IEEE Trans. Power Syst., vol. 8, no. 2, May 1993, pp. 472-482.
[10] IEEE Task Force on Load Representation for Dynamic Performance, "Standard load models for power flow and dynamic performance simulation," IEEE Trans. Power Syst., vol. 10, no. 3, Aug. 1995, pp. 1302-1313.
Abstract This paper studies the impact of changing mobility speed on the performance of the reactive routing protocol AODV with reference to varying network load. For experimental purposes, we initially observed the performance of AODV with the network load increasing from 4 packets to 24 packets at a maximum mobility speed of 10 m/s. In another scenario we observed the performance of AODV with the network load increasing from 4 packets to 24 packets at a maximum mobility speed of 20 m/s. The performance of AODV is observed across the Packet Delivery Ratio, Loss Packet Ratio and Routing Overhead parameters. Our simulation results show that AODV performs better with higher mobility speed at higher network load.
Index Terms AODV, MANET, Mobility Speed, Routing, Overhead, Random Waypoint
1. INTRODUCTION
... Model includes pause times when a new direction and speed is selected. As soon as a mobile node arrives at the new destination, it pauses for a selected time period (pause time) before it starts traveling again. A mobile node begins by staying in one location for a certain period of time (i.e. a pause). Once this time expires, the mobile node chooses a random destination in the simulation area and a speed that is uniformly distributed between [vmin, vmax]. The mobile node then travels toward the newly chosen destination at the selected speed. Upon arrival, the mobile node pauses for a specified period of time before starting the process again. The random waypoint model is the most commonly used mobility model in the simulation of ad hoc networks. It is known that the spatial distribution of network nodes moving according to this model is non-uniform. However, a closed-form expression of this distribution and an in-depth investigation are still missing. This fact impairs the accuracy of the current simulation methodology of ad hoc networks and makes it impossible to relate simulation-based performance results to corresponding analytical results. To overcome these problems, a detailed analytical study of the spatial node distribution generated by random waypoint mobility is presented. A generalization of the model is considered in which the pause time of the mobile nodes is chosen arbitrarily in each waypoint and a fraction of nodes may remain static for the entire simulation time [3].

3. THE TRAFFIC AND SCENARIO GENERATOR

Continuous bit rate (CBR) traffic sources are used. The source-destination pairs are spread randomly over the network. The simulation uses the Random Waypoint mobility model in a 1020 m x 1020 m field with the network load varying from 4 packets to 24 packets, whereas mobility speed is kept at 10 m/s maximum. In the next simulation the network load is again varied from 4 packets to 24 packets, but this time the mobility speed is kept at 20 m/s maximum. Here, each packet starts its journey from a random location to a random destination with a randomly chosen speed. Once the destination is reached, another random destination is targeted after a pause. The pause time, which affects the relative speeds of the mobile hosts, is kept at 20 s. Simulations are run for 100 simulated seconds.

4. PERFORMANCE METRICS

The following important metrics are evaluated:
1. Packet Delivery Ratio (PDR) - Packet delivery ratio is calculated by dividing the number of packets received by the destination by the number of packets originated by the CBR source.
2. Loss Packet Ratio (LPR) - Loss packet ratio is calculated by dividing the number of packets that never reached the destination by the number of packets originated by the CBR source.
3. Routing Overhead - Routing overhead measures the ratio of the total routing packets sent to the total number of packets sent.

5. SIMULATION SETUP

In this simulation we wanted to investigate how mobility speed affects the behavior of AODV with increasing network load.

TABLE 1
EVALUATION WITH MOBILITY SPEED 10 M/S
Parameter           Value
Protocols           AODV
Simulation Time     100 s
Number of Nodes     100
Network Load        4, 8, 12, 16, 20, 24 Packets
Pause Time          20 s
Environment Size    1020 m x 1020 m
Traffic Type        Constant Bit Rate
Maximum Speed       10 m/s
Mobility Model      Random Waypoint
Network Simulator   NS 2.33

TABLE 2
EVALUATION WITH MOBILITY SPEED 20 M/S
Parameter           Value
Protocols           AODV
Simulation Time     100 s
Number of Nodes     100
Network Load        4, 8, 12, 16, 20, 24 Packets
Pause Time          20 s
Environment Size    1020 m x 1020 m
Traffic Type        Constant Bit Rate
Maximum Speed       20 m/s
Mobility Model      Random Waypoint
Network Simulator   NS 2.33

6. RESULTS AND DISCUSSIONS

During the simulation we increased the network load at a maximum mobility speed of 10 m/s and recorded the performance of AODV. We ran this simulation for 100 simulated seconds with a maximum of 8 CBR connections. Readings were taken for different network loads (4, 8, 12, 16, 20 and 24 packets). The same simulation was then performed again, but this time with a maximum speed of 20 m/s. From the results it is evident that AODV starts to perform better with a mobility speed of 20 m/s as compared to 10 m/s for the same scenario. At higher network load and a maximum speed of 20 m/s, the Packet Delivery Ratio increases, ...
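For clarity, the three metrics defined in Section 4 above can be computed from simple packet counters, for example ones parsed from an NS-2 trace file. The following sketch is illustrative only; the counter names and the example numbers are made up and are not taken from the reported simulations.

```python
def packet_delivery_ratio(received, originated):
    """PDR: packets received by destinations / packets originated by CBR sources."""
    return received / originated

def loss_packet_ratio(received, originated):
    """LPR: packets that never reached a destination / packets originated,
    taking 'lost' as originated minus received."""
    return (originated - received) / originated

def routing_overhead(routing_packets, data_packets_sent):
    """Routing overhead: total routing packets sent / total data packets sent."""
    return routing_packets / data_packets_sent

# Example with made-up counts:
print(packet_delivery_ratio(received=920, originated=1000))              # 0.92
print(loss_packet_ratio(received=920, originated=1000))                  # 0.08
print(routing_overhead(routing_packets=450, data_packets_sent=1000))     # 0.45
```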
7. PERFORMANCE EVALUATION
Observation for Mobility Speed of 10 m/s: The simulation result in Figure 1 shows that the performance of AODV in terms of Packet Delivery Ratio degrades as the network load is increased. When the network load reaches 12 packets, the PDR drops considerably, although it then starts to improve gradually from that point and reaches a much better performance around a load of 16 packets. After that the performance starts degrading again and continues to degrade further. The same holds for the routing overhead: Figure 3 shows that the Routing Overhead keeps on increasing until a network load of 12 packets, and from that point the overhead starts to decrease until the network load reaches 16 packets. After this point the routing overhead keeps on increasing and never recovers again.

Observation for Mobility Speed of 20 m/s: The simulation result in Figure 1 shows that the performance of AODV degrades as the network load is increased. A point to notice is that when the network load reaches 12 packets, the performance of AODV is much improved as compared to the performance with a mobility speed of 10 m/s. The Packet Delivery Ratio stays consistent until the network load reaches 16 packets, even though it performs more poorly than in the earlier simulation scenario. The PDR keeps on decreasing until the network load reaches 20 packets. From this point the PDR starts to improve gradually and achieves a much better performance as compared to the performance with a mobility speed of 10 m/s. Regarding routing overhead, Figure 3 shows that the Routing Overhead remains either equal to or better than the 10 m/s scenario until the network load reaches 12 packets. The routing overhead stays consistent up to the 16-packet mark and then gets worse again. From there AODV starts to improve its performance and achieves better readings compared to the readings with 10 m/s as the network load crosses the 20-packet mark.

Fig 1. Number of Packets Vs Packet Delivery Ratio
Fig 2. Number of Packets Vs Loss Packet Ratio
Fig 3. Number of Packets Vs Routing Overhead

8. CONCLUSION AND FUTURE WORK

Empirical results illustrate that the performance of AODV varies widely across different network loads, and the results from the two scenarios show that increasing the mobility speed does help to improve the performance of AODV when it comes to higher network loads. Hence we have to consider the network load of an application while selecting the mobility speed.
The future scope is to find out what factors can bring more improvement in the performance of AODV, not only when the network load is further increased but also at the loads where AODV has not performed well in the simulations presented here. Further simulations need to be carried out to evaluate performance not only with increased mobility speed but also with other related parameters varied, such as pause time and mobility models.

REFERENCES
[1] Humaira Nishat, Vamsi Krishna K, Dr. D. Srinivasa Rao, Shakeel Ahmed, "Performance Evaluation of On Demand Routing Protocols AODV and Modified AODV (R-AODV) in MANET," International Journal of Distributed and Parallel Systems (IJDPS).
ABOUT AUTHORS
Abstract In this study, we made an attempt to synthesize doped bioactive hydroxyapatite (HAp) ceramic powder using a simple chemical method and studied its physical and mechanical properties. Different quantities (2 wt% and 5 wt%) of Magnesium Chloride Hexahydrate, Zinc Oxide and Titanium Oxide were incorporated as dopants into HAp at the time of synthesis. The synthesized powder samples were analyzed for their phases using the X-ray diffraction technique and Fourier Transform Infrared Spectroscopy. The synthesized powders were uniaxially compacted and then sintered at 1250 °C for 1 hr in air. Vickers hardness testing was performed to determine the hardness of the sintered structures. The fracture toughness of the sintered samples was calculated using an Inverted Optical Microscope with Image Analysis software.
Index Terms Dopants, Fracture toughness, FTIR, Hydroxyapatite, Sintered structure, Vickers hardness, XRD.
1 INTRODUCTION
Among different forms of calcium phosphates, the bioactive hydroxyapatite (Ca10(PO4)6(OH)2) phase has been most extensively researched due to its outstanding biological responses to the physiological environment. Hydroxyapatite is brittle in nature, and the load-bearing capacity and strength of HAp are low, so it cannot be used in load-bearing implants (total bone replacement) where tensile stress develops. To overcome the above stated limitations of HAp, many researchers have tried to generate nano-grained HAp powder [1, 2] by different methods. There is a significant difference in properties between the natural apatite crystals found in bone mineral and conventional synthetic HAp. Bone crystals are formed in a biological environment through the process of biomineralization. In addition, the bone mineral also contains trace ions like Na+, Mg++ and K+, which are known to play an important role in overall performance [1]. It has also been shown that the bioactivity of conventional synthetic HAp ceramics is inferior to that of bone [1, 3-8]. During recent years, many researchers have tried to develop HAp powder doped with metallic ions by ball milling and dry and wet milling processes to increase the strength and ductility of HAp powders [1, 3]. In this study, we have used a simple chemical route process that could produce HAp powder with a fairly short synthesis time. We have introduced three metal ions, in different weight percentages, which are known to be present in the bone mineral, during synthesis of the HAp powder. This paper presents the synthesis and characterization of the physical, mechanical and crystal structure of pure and doped HAp ceramic in detail.

2 EXPERIMENTAL PROCEDURE
2.1 Materials and methods
Pure and doped HAp powders were synthesized through a water based chemical route method. In this method, Calcium Hydroxide [Ca(OH)2] (MERCK, INDIA) and Orthophosphoric Acid [H3PO4] (MERCK, INDIA) were used as raw materials to produce apatite particles:

10 Ca(OH)2 + 6 H3PO4 -> Ca10(PO4)6(OH)2 + 18 H2O

The apatite powder produced was aged for 24 hr. Then, the apatite particles in the suspension were filtered, washed with ethanol three times, and dried at 100 °C for 24 hr in air. The dried powder was ground with a mortar and pestle into fine powder and subjected to calcination at 800 °C for 2 hrs using an electrically heated furnace (NASKAR & Co., Model No. EN170QT) at a constant heating rate of 5 °C/min, followed by cooling inside the furnace. In order to synthesize HAp powder doped with Magnesium, Zinc and Titanium, measured quantities of Magnesium Chloride Hexahydrate (MgCl2.6H2O, MERCK, INDIA, 96% pure), Zinc Oxide (ZnO, MERCK, INDIA, 99% pure) and Titanium Oxide (TiO2, MERCK, INDIA, 98% pure) were separately incorporated into the Calcium Hydroxide suspension before the addition of the H3PO4 solution. The dopants were used in amounts of 2 wt% and 5 wt% to see their effects on powder morphology and the properties of the sintered ceramics.

Promita Bhattacharjee is pursuing a masters degree program in Biomedical Engineering at Jadavpur University, India. E-mail: [email protected]
Dr. Abhijit Chanda is joint director of the School of Bioscience and Engineering Department of Jadavpur University, India. E-mail: [email protected]
... volumetric strain.

TABLE II
C/A, UNIT CELL VOLUME AND PERCENTAGE OF VOLUMETRIC STRAIN
Composition                     c/a     Unit cell volume (Å³)   Volumetric strain
Pure HAp (before calcination)   0.731   529.672                 -
Pure HAp (after calcination)    0.731   531.697                 -
5% Mg doped HAp (calcined)      0.734   533.853                 0.789% increase
2% Mg doped HAp (calcined)      0.729   527.725                 0.368% decrease
5% Zn doped HAp (calcined)      0.739   535.059                 1.017% increase
2% Zn doped HAp (calcined)      0.733   531.0451                0.259% increase
5% Ti doped HAp                 0.720   526.323                 0.632%

Figure 2(a). FTIR patterns of Calcined HAp

Figure 2(b). FTIR patterns of Calcined HAp and Mg doped HAp (2wt% and 5wt%)

The Fig 2(b) shows that the FTIR of 5 wt% and 2 wt% Mg ...
Figure 2(c). FTIR patterns of Calcined HAp and Zn doped HAp (2wt% and 5wt%)

Figure 2(d). FTIR patterns of Calcined HAp and Ti doped HAp (2wt% and 5wt%)

4.2. Sintering study
Green ceramic structures prepared via uniaxial pressing were measured for their green density and were subjected to pressureless sintering. The average green density of all compositions is presented in Table III. The average diameter of all specimens before sintering is 12.15 ± 0.05 mm and the thickness 5.4 ± 0.05 mm. Each sintered specimen was measured for its density and linear, diametric and volumetric shrinkage ...

TABLE III
AVERAGE GREEN DENSITY OF ALL COMPOSITIONS
Composition                  Green density (g/cc)   Relative density (%)
Pure HAp (calcined)          1.66                   52.41%
5% Mg doped HAp (calcined)   1.69                   53.36%
2% Mg doped HAp (calcined)   1.597                  50.43%
5% Zn doped HAp (calcined)   1.64                   51.78%
2% Zn doped HAp (calcined)   1.66                   52.42%
5% Ti doped HAp (calcined)   1.65                   52.01%
2% Ti doped HAp (calcined)   1.56                   49.26%

... a sintering study at 1200 °C, which showed relatively lower densification (85.25%). Sintering studies were not done above 1300 °C, as it is already established that sintering of HAp ceramics above 1300 °C leads to significant phase change, unwanted grain growth and deterioration of their properties. An average sintered density of 2.9 g/cc was recorded in pure HAp specimens sintered at 1250 °C, which is equivalent to 93.9% of the theoretical density of HAp. In this work, the presence of Mg, Zn and Ti separately as dopants during powder synthesis again altered the sintered density of the HAp powder. It is evident from Table V that the HAp doped with [MgCl2.6H2O] and TiO2 did not show the highest sintered density. The presence of 2% ZnO in the HAp powder showed the best sintered density, 3.12 g/cc, when sintered at 1250 °C.
Composition   Sintered density (g/cc)   Linear shrinkage   Diametric shrinkage   Volumetric shrinkage
B5.0          2.85                      17.31%             18.42%                44.15%
C2.0          2.89                      18.87%             19.38%                46.94%
C5.0          2.78                      16.54%             17.22%                42.31%

TABLE V
PERCENTAGE OF CONVENTIONAL HAP DENSITY OF ALL COMPOSITIONS
Composition   % of conventional HAp density
Pure HAp      93.9%
A2.0          90.72%
A5.0          90.52%
B2.0          97.99%
B5.0          90.096%
C2.0          91.14%
C5.0          87.82%

Figure 3(a). Graphical representation of Hardness of Calcined HAp and Mg doped HAp (2wt% and 5wt%)

Figure 3(b). Graphical representation of Hardness of Calcined HAp and Zn doped HAp (2wt% and 5wt%)

4.3. MECHANICAL CHARACTERIZATION
4.3.1. VICKERS HARDNESS TESTING
The average hardness of each of these compositions was calculated and plotted as a function of the different indentation loads (0.3 kgf, 1 kgf and 3 kgf), as shown in Fig 3. It is clear from the figures that the presence of Mg, Zn and Ti dopants in crystalline HAp influences its hardness. In almost every composition, 2 wt% doping caused higher hardness values at the highest load (3 kgf). At the other loads also, the hardness for the 2 wt% dopant concentration was ...
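The Vickers hardness values plotted in Figures 3(a)-(c) follow from the standard relation between indentation load and the mean indent diagonal. The sketch below applies that textbook relation; the diagonal used in the example is hypothetical, since the paper reports only the resulting hardness versus load.

```python
import math

def vickers_hardness(load_kgf, mean_diagonal_mm):
    """HV = 2 F sin(136 deg / 2) / d^2  (approximately 1.8544 F / d^2),
    with the load F in kgf and the mean indent diagonal d in mm."""
    return 2 * load_kgf * math.sin(math.radians(136 / 2)) / mean_diagonal_mm ** 2

# 3 kgf load with an assumed 0.15 mm mean diagonal: about 247 HV.
print(round(vickers_hardness(load_kgf=3, mean_diagonal_mm=0.15), 1))
```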
Figure 3(c). Graphical representation of Hardness of Calcined HAp and Ti doped HAp (2wt% and 5wt%)

4.3.2. FRACTURE TOUGHNESS
Pure and doped crystalline HAp ceramics sintered at 1250 °C were evaluated for their fracture toughness ... more than 2.5, and this was followed here. The average crack length was 271.18 µm for pure HAp, 225.05 µm for 2% Mg doped HAp, 171.56 µm for 5% Mg doped HAp, 210.83 µm for 2% Zn doped HAp, 240.63 µm for 2% Ti doped HAp and 82 µm for 5% Ti doped HAp. We could not calculate the average crack length of 5% Zn doped HAp because the crack propagation could not be traced clearly from the site of indentation. The crack propagation path and the lateral crack growth of some specimens during indentation with a 3 kgf load are shown in Fig 4.

TABLE VII
AVERAGE FRACTURE TOUGHNESS AND STANDARD DEVIATION
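The paper derives fracture toughness from the indentation crack lengths listed above, but its exact expression is not reproduced in this excerpt. As a hedged illustration, the sketch below uses a common Anstis-type indentation relation with assumed values for the elastic modulus and hardness; it is a stand-in, not the authors' formula, and the numbers it produces are only indicative.

```python
def indentation_toughness_mpa_sqrt_m(load_N, crack_length_m, E_GPa, H_GPa):
    """Anstis-type relation K_IC = 0.016 * sqrt(E/H) * P / c^1.5, with the
    indentation load P in N and the radial crack length c (from the indent
    centre) in metres; the result is converted to MPa*sqrt(m)."""
    k_pa = 0.016 * (E_GPa / H_GPa) ** 0.5 * load_N / crack_length_m ** 1.5
    return k_pa / 1e6

# 3 kgf indentation (about 29.4 N) with the 271.18 micron average crack length
# reported for pure HAp, and assumed E = 100 GPa, H = 3 GPa: roughly 0.6 MPa*sqrt(m).
print(round(indentation_toughness_mpa_sqrt_m(29.4, 271.18e-6, 100, 3), 2))
```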
We can conclude from the above chart that in most of the cases the fracture toughness is close to 1 MPa√m or just less than that. In the case of 5% Ti doped HAp (C5.0) the fracture toughness was highly increased. Pure HAp has an inherent brittleness; we can overcome this brittleness with these compositions. The fracture toughness is increased for 2 wt% and 5 wt% of all compositions compared with pure HAp powder. It was also noticed that 5 wt% of all compositions is better than 2 wt% of all compositions in the case of fracture toughness.
From Fig 4(a), in the case of 5% Ti doped HAp we obtained a very clear impression of a rhombus with sharp edges. The impression resembles a metal-like impression with small cracks at the edges. In contrast, the pure HAp impression was not so clear, and at an identical load (3 kgf) it suffered severe chipping due to propagation and coalescence of lateral cracks. It was also noticed that the fracture toughness values of 5% Ti HAp showed large scatter: the lowest value of fracture toughness of 5% Ti doped HAp was 1.4359 MPa√m and the highest value was 9.74 MPa√m.

5. CONCLUSION
In our study, we prepared pure dense HAp and 2 wt% and 5 wt% Mg, Zn and Ti doped dense HAp powder using a chemical route method. In the XRD study, for all compositions of doped and pure HAp powder we obtained the HAp phase. For pure HAp and all compositions of doped HAp powder we obtained a uniform pattern of shrinkage, and in almost all cases we obtained densification above 90%. Hardness was increased for 2 wt% of all compositions of doped HAp powder; this may be attributed to better densification. Pure HAp is a brittle material; for all compositions of doped HAp powder the fracture toughness was increased. For 5 wt% Ti doped HAp powder the fracture toughness was highly increased and a clear impression with small cracks was obtained. In all compositions of doped HAp powder the mechanical properties were improved, the density was high and the average toughness values were increased. This is a marked improvement from a practical point of view compared with pure HAp powder.

6. Acknowledgment
We want to acknowledge the Instrument Science and Engineering Department and the Mechanical Engineering Department of Jadavpur University for their technical help.

References
F.C. Driessens, J.W. Van Dijk, J.M. Borggreven, Calcif. Tissue Res. 26 (1978) 127.
[4] R.Z. LeGeros, G. Bonel, R. Legros, Calcif. Tissue Res. 26 (1978) 111.
[5] M.A. Lopes, J.D. Santos, F.J. Monteiro, J.C. Knowles, J. Biomed. Mater. Res. 39 (1998) 244.
[6] R.A. Young, J. Dent. Res. 53 (1974) 193 (Suppl.).
[7] S.J. Kalita, D. Rokusek, S. Bose, H.L. Hosick, A. Bandyopadhyay, J. Biomed. Mater. Res. A 71 (2004) 35.
[8] G. Georgiou, J.C. Knowles, Biomaterials 22 (2001) 2811.
[9] J.D. Santos, P.L. Silva, J.C. Knowles, S. Talal, F.J. Monteiro, J. Mater. Sci., Mater. Med. 7 (1996) 187.
[10] K.C.B. Yeong, J. Wang, S.C. Ng, Biomaterials 22 (2001) 2705.
[11] M. Heughebaert, R.Z. LeGeros, M. Gineste, A. Guilhelm, G. Bonel, J. Biomed. Mater. Res. 22 (1988) 257.
Abstract Massive datasets arise naturally as a result of automated monitoring and transaction archival. Military intelligence data, stock trades, retail purchases, medical and scientific observations, weather monitoring, spacecraft sensor data and censors data are all examples of data streams continuously logged and stored in extremely large volumes, which create the need for innovative data visualization solutions. Although there is much ongoing research and development in this field, there are only a few solutions that visualize information for the general public. In this paper, I explore different methods of using the ManoStick chart to visualize information.
Index Terms data visualization, chart, graph, manostick
1 INTRODUCTION
There is a range of visualisation tools with a range of functions, such as:
- Comparison of values with bar charts, block histograms, bubble and matrix charts
- Tag clouds to view word popularity in a given text
- Data point relationships with network diagrams and scatter plots
- Parts of the whole, which can be visualized with pie charts and tree charts
- Tracking trend changes as rises and falls over time with line graphs and stack graphs
- And many more attempts to visualize large scale information with complicated algorithms, for use by researchers and analysts.

However, not all are suitable to current needs and the problems at hand. Some visualization methods, such as bar charts, pie charts and line graphs, are many decades old and were created for the lighter information demands of that time. Some modern visualization attempts [2], such as Internet visualizations, are too complicated. Psychologist George Miller [3], in a seminal paper in 1956, noted the cognitive information bottleneck that perception and attention span impose on the amount of information people are able to receive, process, and remember. In this otherwise serious essay, Miller suggested that seven independent items of information, plus or minus two, might represent the limit of what most people can grasp in one moment.

According to Friedman [1], the main goal of data visualization is its ability to visualize data, communicating information clearly and effectively. It does not mean that data visualization needs to look boring to be functional or extremely sophisticated to look beautiful. To convey ideas effectively, both aesthetic form and functionality need to go hand in hand, providing insights into a rather sparse and complex data set by communicating its key aspects in a more intuitive way. Yet designers often tend to discard the balance between design and function, creating gorgeous data visualizations which fail to serve their main purpose: communicating information.

We can summarize that solutions for data visualization should be compact, based on known concepts, easy to understand and expressive.

2 BACKGROUND
Introduction to the ManoStick chart [4]: the ManoStick chart has five microstructures, as shown in figures 1 & 2, which are
1. Upper tail
2. Upper body
3. Middle body
4. Lower body
5. Lower tail
Each microstructure represents 20% of the depth or popularity, which could be quantity when working with sales, volume when working with equities, etc. The ManoStick chart is better suited to variable data with depth.
To create a chart, we need to sort the variable data, find the total depth and divide the total depth by five, as there are five microstructures in the ManoStick.
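The construction step just described (sort the data, accumulate the depth, and cut it into five equal-depth bands, one per microstructure) can be sketched in a few lines. The code below is an interpretation of that textual description, not the author's reference implementation; the input format of (value, depth) pairs is an assumption.

```python
def manostick_bands(observations):
    """observations: list of (value, depth) pairs, e.g. (price, traded volume).
    Returns five (low_value, high_value) ranges, from lower tail to upper tail."""
    ordered = sorted(observations)                    # sort by value
    total_depth = sum(d for _, d in ordered)
    band_size = total_depth / 5                       # each microstructure = 20% of depth
    bands, cum, start = [], 0.0, ordered[0][0]
    for value, depth in ordered:
        cum += depth
        # close a band every time another 20% of the total depth is accumulated
        while len(bands) < 4 and cum >= band_size * (len(bands) + 1):
            bands.append((start, value))
            start = value
    bands.append((start, ordered[-1][0]))             # last band runs to the maximum value
    return bands  # [lower tail, lower body, middle body, upper body, upper tail]

# Toy example: prices with their traded volumes.
print(manostick_bands([(10, 5), (11, 20), (12, 50), (13, 20), (14, 5)]))
```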
Figure 2

3. IMPLEMENTATION
What sort of data visualization might help city planners to track their population? Think of a city like Paris, where thousands of people move in, move out, babies are born and old people pass away. City planners face a constant challenge to offer amenities, education, health care, transport and so on, while businesses need to supply the necessary products and services.
Here, we can create a time series graph, for example ...

REFERENCES
[1] Vitaly Friedman, "Data Visualization and Infographics," Graphics, Monday Inspiration, January 14th, 2008.
[2] http://www.webdesignerdepot.com/2009/06/50-great-examples-of-data-visualization/
[3] Miller, George A., "The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information," The Psychological Review, 1956, vol. 63, pp. 81-97.
[4] M. Siluvairajah, "ManoStick: An Ariel View of the Stock Market," 2009, pp. 84-91, ISBN 978-0956395603.
[5] Scott Orford, Daniel Dorling and Richard Harris, "Review of Visualization in the Social Sciences: A State of the Art Survey and Report," 1998.
Abstract The objective of this paper is to present a survey of recent research work of high quality that deals with reliability in different fields of engineering and the physical sciences. This paper covers several important areas of reliability in which significant research efforts are being made all over the world. The survey provides insight into past, current and future trends of reliability in different fields of Engineering, Technology and the medical sciences, with applications to specific problems.
Index Terms CCN, Coherent Systems, Distributed Computing Systems, Grid Computing, Nanotechnology, Network
Reliability, Reliability.
1 INTRODUCTION
...ing concepts of Reliability in various fields of Engineering and Sciences. Some important applications are given here:

2.1 Nano-Technology
Nano-reliability measures the ability of a nano-scaled product to perform its intended functionality. At the nano scale, the physical, chemical, and biological properties of materials differ in fundamental, valuable ways from the properties of individual atoms, molecules, or bulk matter. Conventional reliability theories need to be restudied to be applied to Nano-Engineering. Research on Nano-Reliability is extremely important due to the fact that nano-structure components account for a high proportion of costs, and serve critical roles in newly designed products. In their paper, Shuen-Lin Jeng et al. [1] introduce the concepts of reliability to nano-technology, and present work on identifying various physical failure mechanisms of nano-structured materials and devices during the fabrication process and operation. Modeling techniques for degradation, reliability functions and failure rates of nano-systems have also been discussed in this paper.
Engineers are required to help increase reliability, while maintaining effective production schedules, to produce current and future electronics at the lowest possible cost. Without effective quality control, devices dependent on nanotechnology, including transistors, will experience high manufacturing costs, which could result in a disruption of the continually steady Moore's law. Nanotechnology can potentially transform civilization. Realization of this potential needs a fundamental understanding of friction at the atomic scale. Furthermore, the tribological considerations of these systems are expected to be an integral aspect of the system design and will depend on the training of both existing and future scientists and engineers at the nano scale. As nanotechnology is gradually being integrated in new product design, it is important to understand the mechanical and material properties for the sake of both scientific interest and engineering usefulness. The development of nanotechnology will lead to the introduction of new products to the public. In the modern large-scale manufacturing era, reliability issues have to be studied, and the results incorporated into the design and manufacturing phases of new products. Measurement and evaluation of the reliability of nano-devices is an important subject, and new technology is being developed to support the achievement of this task. As noted by Keller, et al. [2], with ongoing miniaturization from MEMS towards NEMS, there is a need for new reliability concepts making use of meso-type (micro to nano) or fully nano-mechanical approaches. Experimental verification will be the major method for validating theoretical models and simulation tools. Therefore, there is a need for developing measurement techniques which have the capability of evaluating strain fields with very local (nano-scale) resolution.

2.2 Computer Communication Network
Network analysis is also an important approach to model real-world systems. System reliability and system unreliability are two related performance indices useful to measure the quality level of a supply-demand system. For a binary-state network without flow, the system unreliability is the probability that the system cannot connect the source and the sink. Extending to a limited-flow network in the single-commodity case, the arc capacity is stochastic and the system capacity (i.e. the maximum flow) is not a fixed number. The system unreliability, i.e. the probability that the upper bound of the system capacity equals a given demand level, can be computed in terms of upper boundary points. An upper boundary point is the maximal system state such that the system fulfills the demand. In his paper Yi-Kuei Lin [3] discusses the multicommodity limited-flow network (MLFN), in which multiple commodities are transmitted through unreliable nodes and arcs. Nevertheless, the system capacity is not suitable to be treated as the maximal sum of the commodities, because each commodity consumes the capacity differently. In this paper, Yi-Kuei Lin defines the system capacity as a demand vector if the system fulfils at most such a demand vector. The main problem of the paper is to measure the quality level of an MLFN. For this he proposes a new performance index, the probability that the upper bound of the system capacity equals the demand vector subject to the budget constraint, to evaluate the quality level of an MLFN. A branch-and-bound algorithm based on minimal cuts is also presented to generate all upper boundary points in order to compute the performance index.
In a computer network there are several reliability problems. The probabilistic events of interest are:
* Terminal-pair connectivity
* Tree (broadcast) connectivity
* Multi-terminal connectivity
These reliability problems depend on the network topology, the distribution of resources, the operating environment, and the probability of failures of computing nodes and communication links. The computation of the reliability measures for these events requires the enumeration of all simple paths between the chosen set of nodes. The complexity of these problems, therefore, increases very rapidly with network size and topological connectivity. The reliability analysis of computer communication networks is generally based on Boolean algebra and probability theory. Raghavendra, et al. [4] discuss various reliability problems of computer networks, including terminal-pair connectivity, tree connectivity, and multi-terminal connectivity. In their paper they also study dynamic computer network reliability by deriving time-dependent expressions for reliability measures assuming Markov behavior for failures and repairs. This allows computation of task and mission related measures such as the mean time to first failure (MTFF) and the mean time between failures (MTBF).
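As a concrete illustration of the terminal-pair connectivity problem described above, the following sketch computes the exact two-terminal reliability of a small network by enumerating every up/down combination of its links. This brute-force approach is exponential in the number of links and is meant only to make the definition tangible; it is not one of the surveyed algorithms.

```python
from itertools import product

def connected(nodes, up_edges, s, t):
    """Simple reachability check over the surviving links."""
    adj = {n: set() for n in nodes}
    for u, v in up_edges:
        adj[u].add(v); adj[v].add(u)
    seen, stack = {s}, [s]
    while stack:
        n = stack.pop()
        for m in adj[n]:
            if m not in seen:
                seen.add(m); stack.append(m)
    return t in seen

def terminal_pair_reliability(nodes, links, source, terminal):
    """links: dict {(u, v): p_up}. Nodes are assumed perfectly reliable here."""
    edges = list(links)
    rel = 0.0
    for state in product([0, 1], repeat=len(edges)):
        prob, up_edges = 1.0, []
        for edge, up in zip(edges, state):
            p = links[edge]
            prob *= p if up else (1 - p)
            if up:
                up_edges.append(edge)
        if connected(nodes, up_edges, source, terminal):
            rel += prob
    return rel

# A 4-node bridge network with 0.9-reliable links: about 0.978.
links = {("s", "a"): 0.9, ("s", "b"): 0.9, ("a", "b"): 0.9, ("a", "t"): 0.9, ("b", "t"): 0.9}
print(round(terminal_pair_reliability(["s", "a", "b", "t"], links, "s", "t"), 4))
```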
A computer communication network (CCN) can be represented by a set of centers and a set of communication links connecting the centers which are up. The network can be represented mathematically by a graph, with nodes representing centers and edges representing links. If links are simplex, then the graph is directed. Various methods have been derived to evaluate the system reliability and terminal reliability of a CCN. The methods usually involve the enumeration, directly or indirectly, of all the events that lead to successful communication between the computer centers under consideration. However, little has been done in optimizing networks except for terminal reliability. This is because, as the number of links increases, the number of possible assignments to links of the system grows faster than exponentially. Kiu Sun-wah et al. [5] present both mathematical and heuristic rules for optimizing the system reliability of a CCN with a fixed topology when a set of reliabilities is given. The techniques can be used to predict the system reliability for alternative topologies if the network topology is not fixed. The evaluation of the terminal reliability of a given computer communication network is an NP-hard problem. Hence, the problem of assigning reliabilities to links of a fixed computer communication network topology to optimize the system reliability is also NP-hard. The author develops a heuristic method to assign links to a given topology so that the system reliability of the network is near optimal. His method provides a way to assign reliability measures to the links of a network to increase overall reliability.

2.3 Grid Computing System
GRID computing is a newly developed technology for complex systems with large-scale resource sharing, wide-area communication, and multi-institutional collaboration. This technology attracts much attention. Many experts believe that grid technologies will offer a second chance to fulfill the promises of the internet. The real, specific problem that underlies the Grid concept is coordinated resource sharing, and problem solving in dynamic, multi-institutional virtual organizations. Grid technology is a newly developed method for large-scale distributed systems. This technology allows effective distribution of computational tasks among the different resources present in the grid.
Yuan-Shun Dai [6] describes a grid computing system in which the resource management system (RMS) can divide service tasks into execution blocks (EB) and send these blocks to different resources. To provide a desired level of service reliability, the RMS can assign the same EB to several independent resources for parallel (redundant) execution. According to the optimal schedule for service task partition and distribution among resources, one can achieve the greatest possible expected service performance (i.e. least execution time), or reliability. For solving this optimization problem, the author suggests an algorithm that is based on graph theory, a Bayesian approach and the evolutionary optimization approach. A virtual tree-structure model is constructed in which failure correlation in a common communication channel is taken into account. In this reliability optimization problem, the assessment of the grid service reliability & performance is a critical component in obtaining the objective function. However, due to the size & complexity of grid service systems, the existing models for distributed systems cannot be directly applied. Thus, a virtual tree structure model was developed as the basis of the optimization model, and a genetic algorithm was adapted to solve this type of optimization problem for grid task partition and distribution. In this paper the author studied a case considering different numbers of resources. The Genetic Algorithm proved to be effective in accommodating various conditions, including limited or insufficient resources.
Gregory Levitin et al. [7] in their paper described grid computing systems with star architectures in which the resource management system (RMS) divides service tasks into subtasks and sends the subtasks to different specialized resources for execution. To provide the desired level of service reliability, the resource management system (RMS) can assign the same subtasks to several independent resources for parallel execution. Some subtasks cannot be executed until they have received input data, which can be the result of other subtasks. This imposes precedence constraints on the order of subtask execution. Service reliability and performance indices are also introduced, and a fast numerical algorithm for their evaluation given any subtask distribution is suggested. The sharing that we are concerned with is not primarily file exchange but rather direct access to computers, software, data, and other resources. This is required by a range of collaborative problem-solving and resource-brokering strategies emerging in industry, science, and engineering. This sharing is controlled by the Resource Management System (RMS). The Open Grid Services Architecture enables the integration of services and resources across distributed, heterogeneous, dynamic virtual organizations, and also provides users a platform to easily request grid services. A grid service is desired to execute a certain task under the control of the RMS. To provide the desired level of service reliability, the RMS can assign the same subtasks to several independent resources of the same type. To evaluate the quality of service, its reliability and performance indices should be defined. The author considers the indices of service reliability (the probability that the service task is accomplished within a specified time) and conditional expected system time, and presents a numerical algorithm for their evaluation for arbitrary subtask distribution in a given grid with a star architecture, taking into account precedence constraints on the sequence of subtask execution.
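The redundancy idea used by the grid models above (assigning the same subtask to several independent resources) can be illustrated with a simple independence-based calculation. The sketch below ignores the precedence constraints and execution-time distributions that the surveyed papers do model, so it is only a toy approximation; the probabilities are made up.

```python
def subtask_reliability(copy_reliabilities):
    """Probability that at least one redundant copy of a subtask completes."""
    fail = 1.0
    for r in copy_reliabilities:
        fail *= (1.0 - r)
    return 1.0 - fail

def service_reliability(assignment):
    """assignment: list of per-subtask lists of copy reliabilities; the service
    succeeds only if every subtask succeeds (independence assumed)."""
    rel = 1.0
    for copies in assignment:
        rel *= subtask_reliability(copies)
    return rel

# Two subtasks: the first replicated on two resources, the second on three
# (about 0.962 with these made-up reliabilities).
print(round(service_reliability([[0.9, 0.8], [0.85, 0.7, 0.6]]), 4))
```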
Some of the very helpful practical applications are:
- Comparison of different resource management alternatives (subtask assignment to different resources),
- Making decisions aimed at service performance improvement based on comparison of different grid structure alternatives, and
- Estimating the effect of reliability and performance variation of grid elements on service reliability and performance.

2.4 Statistical Moments / Bayes Approach
In many practical engineering circumstances, systems reliability analysis is complicated by the fact that the failure time distributions of the constituent subsystems cannot be accurately modeled by standard distributions. Gerard L. Reijns et al. [8], in their paper, discuss a low-cost, compositional approach based on the use of the first four statistical moments to characterize the failure time distributions of the constituent components, subsystems and top-level system. The approach is based on the use of Pearson distributions as an intermediate analytical vehicle, in terms of which the constituent failure time distributions are approximated. The analysis technique is presented for k-out-of-n systems with identical subsystems, series systems with different subsystems and systems exploiting standby redundancy. The technique consistently exhibits very good accuracy (on average, much less than 1 percent error) at very modest computing cost, independent of system size. In addition, numeric implementation details have been outlined and a number of example applications from the aerospace domain have been presented in their paper.
Pandey et al. [9] provide a Bayes approach to drawing inference about the reliability of a 1-component system whose failure mechanism is simple stress-strength. The Bayes estimator of system reliability is obtained from data consisting of random samples from the stress and strength distributions, assuming each one is Weibull. The Bayes estimators of the four unknown shape and scale parameters of the stress and strength distributions are also considered, and these estimators are used in estimating the system reliability. The priors of the parameters of the stress and strength distributions are assumed to be independent. The Bayes credibility interval of the scale and shape parameters is derived using the joint posterior of the parameters.

2.5 Genetic Algorithm
Distributed Systems (DS) have become increasingly popular in recent years. The advent of VLSI technology and low-cost microprocessors has made distributed computing economically practical. Distributed systems can provide appreciable advantages including high performance, high reliability, resource sharing and extensibility. The potential reliability improvement of a distributed system is possible because of program and data-file redundancies. To evaluate the reliability of a distributed system, including a given distribution of programs and data-files, it is important to obtain a global reliability measure that describes the degree of system reliability. Distributed program reliability (DPR) is the probability that a given program can be run successfully and will be able to access all of the files it requires from remote sites in spite of faults occurring among the processing elements & communication links. The second measure, distributed system reliability (DSR), is defined as the probability that all of the programs in the system can be run successfully. A distributed system is a collection of processor-memory pairs connected by communication links. The reliability of a distributed system can be expressed using distributed program reliability and distributed system reliability analysis. Computing the reliability of a distributed system is an NP-hard problem. The distribution of programs and data-files can affect the system reliability. The reliability-oriented task assignment problem, which is NP-hard, is to find a task distribution such that the program reliability or system reliability is maximized. For example, efficient allocation of channels to the different cells can greatly improve the overall network throughput, in terms of the number of calls successfully supported. Chin-Ching Chiu et al. [10] present a genetic algorithm-based reliability-oriented task assignment methodology (GAROTA) for computing the DTA reliability problem. The proposed algorithm uses a genetic algorithm to select a program and file assignment set that is maximal, or nearly maximal, with respect to system reliability. Their numerical results show that the proposed algorithm may obtain the exact solution in most cases, and the computation time seems to be significantly shorter than that needed for the exhaustive method. The technique presented in the paper would help readers to understand the correlation between task assignment reliability and distributed system topology. A distributed system is defined as a system involving cooperation among several loosely coupled computers (processing elements). The system communicates (by links) over a network. A distributed program is defined as a program of some distributed system which requires one or more files. For a successful distributed program, the local host must possess the program, the processing elements must possess the required files and the interconnecting links must be operational.
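As background to the k-out-of-n systems mentioned in Section 2.4, the closed-form reliability of a k-out-of-n system with identical, independent subsystems can be written directly from the binomial distribution. The sketch below shows that textbook formula; it is not the moment-based approximation developed in the surveyed paper.

```python
from math import comb

def k_out_of_n_reliability(k, n, p):
    """System works when at least k of the n identical, independent
    subsystems (each up with probability p) are working."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(round(k_out_of_n_reliability(k=2, n=3, p=0.9), 4))  # 2-out-of-3: 0.972
```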
2.6 Designed Experiments
Design of experiments is a useful tool for improving the quality and reliability of products. Designed experiments are widely used in industries for quality improvement. A designed experiment can also be used to efficiently search over a large factor space affecting the product's performance, and to identify the optimal settings in order to improve reliability. Several case studies are available in the literature.
V. Roshan Joseph et al. [11] present the development of an integrated methodology for quality and reliability improvement when degradation data are available as the response in the experiments. The noise factors affecting the product are classified into two groups, which leads to a Brownian motion model for the degradation characteristic. A simple optimization procedure for finding the best control factor setting is developed using an integrated loss function. In general, reliability improvement experiments are more difficult to conduct than quality improvement experiments. This is mainly due to the difficulty of obtaining the data. Reliability can be defined as quality over time, and therefore in reliability improvement experiments we need to study the performance of the product over time as opposed to just measuring the quality at a fixed point in time. Two types of data are usually gathered in reliability experiments: lifetime data and degradation data. Lifetime data give information about the time-to-failure of the product. In degradation data, a degradation characteristic is monitored throughout the life of the product. Thus, they provide the complete history of the product's performance, in contrast to the single value reported in the lifetime data. Therefore, degradation data contain more information than lifetime data. There are some similarities between reliability improvement and quality improvement. Generally speaking, improving the quality will also improve the reliability, but this may not always be true. For example, suppose that in a printed circuit board (PCB) manufacturing industry, tin plating is a more stable process than gold plating. Therefore, in terms of improving quality, the industry should prefer tin plating compared to gold plating, because a better plated thickness can be achieved using tin plating. On the other hand, during customer usage, the tin will wear out faster than gold and therefore the gold-plated PCB will have higher reliability. Therefore, gold plating should be preferred for improving reliability. Thus, the choice that is good for quality need not always be good for reliability. For this reason, the authors should find the procedure for the optimal setting of the factors considering both quality and reliability and also the interaction between them.

2.7 Distributed and Coherent Systems
A design engineer often tries to improve the system reliability of a basic design to the largest extent possible subject to several constraints such as cost, weight and volume. The system reliability can be improved either by using more reliable components or by providing redundant components. If, for each stage of the system, several components of different reliabilities and costs are available in the market, or redundancy is allowed, then the designer faces a decision exercise which can be formulated as a nonlinear integer programming problem. V. Rajendra Prasad et al. [12] in their paper deal with a search method based on a lexicographic order and an upper bound on the objective function for solving redundancy allocation problems in coherent systems. Such problems generally belong to the class of nonlinear integer programming problems with separable constraints and non-decreasing functions. For illustration, 3 types of problems are solved using this method. A majority of problems concerning system reliability optimization are nonlinear programming problems involving integer variables. The solution methods for such problems can be categorized into:
i) Exact methods based on dynamic programming, implicit enumeration and branch-and-bound techniques,
ii) Approximate methods based on linear and nonlinear programming techniques,
iii) Heuristic methods which yield reasonably good solutions with little computation.
Each category has both advantages and disadvantages. Due to the tremendous increase in the available computing power, the exact solution deserves attention from researchers. To derive an exact solution for a reliability optimization problem, dynamic programming can be used only for some particular structures of the objective function and constraints. It is not useful for reliability optimization of a general system, and its utility decreases with the number of constraints.
The reliability of a Distributed Computing System is the probability that a distributed program which runs on multiple processing elements and needs to communicate with other processing elements for remote data files will be executed successfully. This reliability varies according to (1) the topology of the distributed computing system, (2) the reliability of the communication links, (3) the data files and program distribution among processing elements, and (4) the data files required to execute a program. Thus, the problem of analyzing the reliability of a distributed computing system is more complicated than the K-terminal reliability problem. In their paper, Lin et al. [13] describe several reduction methods for computing the reliability of distributed computing systems. These reduction methods can dramatically reduce the size of a distributed computing system, and therefore speed up the reliability computation. The reliability of a distributed computing system depends on the reliability of its communication links and nodes and on the distribution of its resources, such as programs and data files. Many algorithms have been proposed for computing the reliability of distributed computing systems, but they have been applied mostly to distributed computing systems with perfect nodes. However, in real problems, nodes as well as links may fail. Min-Sheng Lin et al. [14, 15] propose in their paper two new algorithms for computing the reliability of a distributed computing system with imperfect nodes. Algorithm I is based on a symbolic approach that includes two passes of computation. Algorithm II employs a general factoring technique on both nodes and edges. They also show comparisons between both algorithms, which demonstrate the usefulness of the proposed algorithms for computing the reliability of large distributed computing ...
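The redundancy allocation problem described in Section 2.7 can be made concrete with a small brute-force search: choose how many parallel copies of each stage to install so that the series-system reliability is maximized without exceeding a cost budget. The component reliabilities, costs and budget below are made-up illustrative numbers, and exhaustive enumeration is used only because the example is tiny.

```python
from itertools import product

def system_reliability(redundancy, component_rel):
    rel = 1.0
    for n, r in zip(redundancy, component_rel):
        rel *= 1.0 - (1.0 - r) ** n     # a stage works if any of its n copies works
    return rel

def best_allocation(component_rel, component_cost, budget, max_copies=4):
    """Exhaustively search integer redundancy levels for each stage."""
    best = None
    for alloc in product(range(1, max_copies + 1), repeat=len(component_rel)):
        cost = sum(n * c for n, c in zip(alloc, component_cost))
        if cost <= budget:
            rel = system_reliability(alloc, component_rel)
            if best is None or rel > best[1]:
                best = (alloc, rel, cost)
    return best  # (redundancy levels, system reliability, total cost)

print(best_allocation(component_rel=[0.9, 0.8, 0.95], component_cost=[2, 3, 1], budget=12))
```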
In their paper, Fulya Altiparmak et al. [19] propose a new method, based on an artificial neural network (ANN), to estimate the reliability of networks with identical link reliability. There are two significant advantages to this. The first is that a single ANN model can be used for multiple network sizes and topologies. The second is that the input information to the ANN is compact, which makes the method tractable even for large sized networks. They use the approach for networks of widely varying reliability, and then consider only highly reliable networks.

2.10 Network Reliability Optimization
Evaluation of the reliability of a network is a fundamental problem. It has applications in many practical fields such as communication, digital systems, and transportation systems. The physical network is represented by a graph composed of nodes connected by directed and undirected arcs. Associated with each arc and with each node of the graph is a failure probability. Debany et al. [23] consider a graph with known failure probabilities of its elements (arcs and nodes); the objective is to find the probability that at least one complete simple path (no node is visited more than once) exists between a source and a terminal node.

Developments and improvements in information and communication technologies in recent years have resulted in increased capacities and a higher concentration of traffic in telecommunication networks. Operating failures in such high capacity networks can affect the quality of service of a large number of consumers. Consequently, the careful planning of a network's infrastructure and the detailed analysis of its reliability become increasingly important toward ensuring that consumers obtain the best service possible. One of the most basic, useful approaches to network reliability analysis is to represent the network as an undirected graph with unreliable links. The reliability of the network is usually defined as the probability that certain nodes in the graph are connected by functioning links. Dirk et al. [24] discuss this network reliability optimization together with network planning, where the objective is to maximize the network's reliability subject to a fixed budget. They develop a number of simulation techniques to address the network reliability estimation problem.

3 CONCLUSION
The behaviour of reliability is much more sensitive to change in different fields of engineering and sciences. Around twenty five papers from leading journals in different fields of reliability have been covered in this paper. This paper has reviewed various aspects of reliability research in the fields of Nano-Technology, Computer Communication Networks, Grid Computing Systems, Statistical Moments / Bayes Approaches, Genetic Algorithms etc. We have broken down our survey into the different topics such as Distributed and Coherent Systems, Network Reliability Optimization, Tele-Communication / Neural Networks etc.

4 PROPOSED FUTURE WORK
The topic of reliability remains an interesting challenge for future research. One possible direction is to investigate whether reliability ranking improves the performance of the system algorithm. Also, to find a better reliability system in the sense of a distributed computing system or network system, it is necessary to improve the time complexity of such systems. In the sense of nanotechnology, much work is needed in the particular field of nano-reliability to ensure product reliability and safety under various use conditions. Some other meta-heuristic approaches besides the above may also be applicable, such as Tabu Search, Hybrid Optimization Techniques and Ant Colony Optimization. Future aspects also exist in various communication schemes in wireless systems (CORBA: Common Object Request Broker Architecture), which is also easily extensible to generic wireless network systems. From the point of view of quality management, treating the system reliability as a performance index and conducting a sensitivity analysis to improve the most important component (e.g. transmission line, switch or server) will increase the system reliability most significantly. Future research can extend the problem from the single-commodity case to the multicommodity case. Besides, transmission time reduction is a very important issue for an information system; therefore researchers can extend the work to include a time attribute for each component. Reliability can also be extended to hybrid fault-tolerant embedded system architectures in the form of hybrid recovery blocks (RB). Future research can also improve the automated controller abilities and the human machine interface, in order to increase the efficiency of human reasoning assistance and to decrease the human response time.

ACKNOWLEDGMENT
The author wishes to thank Prof. P. N. Tondon for the support and encouragement extended during this research work.

REFERENCES
[1] Shuen-Lin Jeng, Jye-Chyi Lu, and Kaibo Wang, "A Review of Reliability Research on Nanotechnology", IEEE Transactions on Reliability, Vol. 56, No. 3, Pg. 401-410, September 2007.
[2] J. Keller, A. Gollhardt, D. Vogel, and B. Michel, "Nanoscale Deformation Measurements for Reliability Analysis of Sensors", Proceedings of the SPIE - The International Society for Optical Engineering, 2005.
[3] Yi-Kuei Lin, "System Reliability of a Limited-Flow Network in Multicommodity Case", IEEE Transactions on Reliability, Vol. 56, No. 1, Pg. 17-25, March 2007.
[4] Raghavendra, C. S., Kumar, V. K. P. and Hariri, S., "Reliability Analysis in Distributed Systems", IEEE Transactions on Computers, Volume 37, Issue 3, Pg. 352-358, March 1988.
[5] Kin-Sun-Wah and McAlister, D. F., "Reliability Optimization of Computer Communication Networks", IEEE Transactions on Reliability, Vol. 37, No. 2, Pg. 275-287, December 1998.
[6] Yuan-Shun Dai, "Optimal Resource Allocation for Maximizing Performance and Reliability in Tree-Structured Grid Services", IEEE Transactions on Reliability, Vol. 56, No. 3, Pg. 444-453, September 2007.
[7] Gregory Levitin, Yuan-Shun Dai, and Hanoch Ben-Haim, "Reliability and Performance of Star Topology Grid Service With Precedence Constraints on Subtask Execution", IEEE Transactions on Reliability, Vol. 55, No. 3, Pg. 507-515, September 2006.
[8] Gerard L. Reijns and Arjan J. C. Van Gemund, "Reliability Analysis of Hierarchical Systems Using Statistical Moments", IEEE Transactions on Reliability, Vol. 56, No. 3, Pg. 525-533, September 2007.
[9] Pandey, M. and Upadhayay, S. K., "Reliability Estimation in Stress-Strength Models: A Bayes Approach", IEEE Transactions on Reliability, December 1985.
[10] Chin-Ching Chiu, Chung-Hsien Hsu, and Yi-Shiung Yeh, "A Genetic Algorithm for Reliability-Oriented Task Assignment With k Duplications in Distributed Systems", IEEE Transactions on Reliability, Vol. 55, No. 1, Pg. 105-117, March 2006.
[11] V. Roshan Joseph and I-Tang Yu, "Reliability Improvement Experiments With Degradation Data", IEEE Transactions on Reliability, Vol. 55, No. 1, Pg. 149-157, March 2006.
[12] V. Rajendra Prasad and Way Kuo, "Reliability Optimization of Coherent Systems", IEEE Transactions on Reliability, Vol. 49, No. 3, Pg. 323-330, September 2000.
[13] M.-S. Lin and D.-J. Chen, "General Reduction Methods for the Reliability Analysis of Distributed Computing Systems", The Computer Journal, Volume 36, Issue 7, Pages 631-644, 1993.
[14] Min-Sheng Lin, Deng-Jyi Chen and Maw-Sheng Horng, "The Reliability Analysis of Distributed Computing Systems with Imperfect Nodes", The Computer Journal, Volume 42, Issue 2, Pages 129-141, 1999.
[15] Min-Sheng Lin and Deng-Jyi Chen, "General Reduction Methods for the Reliability Analysis of Distributed Computing Systems", The Computer Journal, ISSN 0010-4620, CODEN CMPJA6, Vol. 36, No. 7, Pg. 631-644, 1993.
[16] Raghavendra, C. S. and Makam, S. V., "Reliability Modeling and Analysis of Computer Networks", IEEE Transactions on Reliability, Vol. R-35, No. 2, Pg. 156-160, June 1986.
[17] Deng-Jyi Chen and Min-Sheng Lin, "On Distributed Computing Systems Reliability Analysis Under Program Execution Constraints", IEEE Transactions on Computers, Volume 43, Issue 1, Pages 87-97, 1994.
[18] Chiu, Chin Ching, Yeh, Yi Shiung and Chou, Jue Sam, "An Effective Algorithm for Optimal K-terminal Reliability of Distributed Systems", Malaysian Journal of Library & Information Science, 6 (2), Pg. 101-118, 2001.
[19] Ruey-Shun Chen, Deng-Jyi Chen and Y. S. Yeh, "A New Heuristic Approach for Reliability Optimization of Distributed Computing Systems Subject to Capacity Constraints", Journal of Computers & Mathematics with Applications, Volume 29, Issue 3, Pages 37-47, February 1995.
[20] Ruey-Shun Chen, Deng-Jyi Chen and Y. S. Yeh, "Reliability Optimization on the Design of Distributed Computing Systems", Proceedings of the International Conference on Computing and Information, Pages 422-437, 1994.
[21] Ruey-Shun Chen, Deng-Jyi Chen and Y. S. Yeh, "Reliability Optimization of Distributed Computing Systems Subject to Capacity Constraints", Computers & Mathematics with Applications, Volume 29, Issue 4, Pages 93-99, February 1995.
[22] Fulya Altiparmak, Berna Dengiz, and Alice E. Smith, "A General Neural Network Model for Estimating Telecommunications Network Reliability", IEEE Transactions on Reliability, Vol. 58, No. 1, Pg. 2-9, March 2009.
[23] Debany, W. H. and Varshney, P. K., "Network Reliability Evaluation Using Probability Expressions", IEEE Transactions on Reliability, Vol. R-35, No. 2, Pg. 161-166, June 1986.
[24] Dirk P. Kroese, Kin-Ping Hui, and Sho Nariai, "Network Reliability Optimization via the Cross-Entropy Method", IEEE Transactions on Reliability, Vol. 56, No. 2, Pg. 275-287, June 2007.

About the Author: Dr. Anju Khandelwal is an assistant professor and HOD-Dean Academics of S.R.M.S. Women's College of Engg. & Tech., Bareilly (U.P.), affiliated to G. B. T. U. Lucknow, India. She received her Bachelor's degrees in Mathematics and Physics from Bundelkhand University, Jhansi (U.P.), India, in 1996, and a Master's degree in Mathematics with Computer Application from Bundelkhand University, Jhansi (U.P.), India, in 1998. She then joined Bundelkhand University, Jhansi, as a lecturer under the self-finance scheme. She completed her PhD degree in Operations Research from Gurukula Kangri University, Hardwar (Uttaranchal), in 2006. She received a Master's degree in a technical field, M.Tech (Software Engineering), in 2010 from U.P.T.U. Lucknow (U.P.). Her areas of interest include parallel and distributed systems, optimization techniques, CCN and reliability analysis. She is a life member of the IAPS.
Abstract: Service Oriented Architecture (SOA) has become a new software development paradigm because it provides a flexible framework that can help reduce development cost and time. SOA promises loosely coupled, interoperable and composable services. Service selection in business processes is the use of techniques for selecting and providing quality of service (QoS) to consumers in a dynamic environment. A single business process model consists of multiple service invocations forming a service orchestration, and it can represent multiple execution paths, called modeled flexibility. In certain cases, modeled flexibility can cause conflicts in service selection optimization, making it impossible to simultaneously optimize all execution paths. This paper presents an innovative approach to service selection for service orchestration that addresses this type of conflict by combining status identification based availability estimation with multiple QoS constraints along with an effective quality assessment model. This model captures the users' expectations on the multiple quality attributes of a service and returns ratings as feedback on the service usage. The updated rating in the service list can then be used by new users. The proposed method provides optimal services to users consistently and efficiently, thereby resulting in more meaningful and reliable selection of services for service orchestration in SOA.

Index Terms: Service Oriented Architecture, Service Selection, Service Orchestration, Meta-metrics, Modeled Flexibility, Rating, Multiple QoS level, local selection, global selection.
1 INTRODUCTION
Service oriented architecture (SOA) is a new paradigm for software development that promises loosely coupled, interoperable and composable components called services. Service orchestration is the execution of a single transaction that impacts one or more services in an organization; it is called a business process. Business processes are implemented by orchestrating the services of the different activities involved in them. Multiple QoS-based service selection results in selecting an optimal service for a single activity from a set of candidate services, thereby maximizing the QoS of the entire business process. A single business process model can represent multiple execution paths, known as modeled flexibility. Modeled flexibility can cause conflicts in service selection if different optimal services are selected for an activity common to both execution paths, thereby making it impossible to optimize all execution paths. The proposed approach to service selection addresses this type of conflict through a set of meta-metrics (probability of execution).

Status identification based availability estimation for service selection is used along with multiple QoS constraints. It is extended with an effective quality assessment model that is used to match the expectations of the user with the ratings of the services held in the service list. The service list is divided into four groups, with all the services having the triple factor of quality rating for all the multiple QoS constraints. The feedback from the user after service usage is used to update the service list. This proposed approach of service selection with multiple QoS factors results in more meaningful and reliable selection of the services used in service orchestration in service oriented architecture. Along with this, it also resolves the conflicts with modeled flexibility in business processes by meta-metrics, thereby ensuring selection of optimal services for all the service invocations.

K. Vivekanandan is Professor in the Department of Computer Science and Engineering at Pondicherry Engineering College, Puducherry. Mobile: 9443777795. Fax: 2655101. Mail id: [email protected].
S. Neelavathi is Assistant Professor in the Department of Information Technology, PKIET, Karaikal. Mobile: 9443632464. Mail id: [email protected].

2 SERVICE ORIENTED ARCHITECTURE
Service orientation and Service Oriented Architecture (SOA) are not new or revolutionary concepts; SOA is the next stage of evolution in distributed computing [2]. SOA is not a technology but an architectural approach. SOA is defined as "a paradigm for organizing and utilizing distributed capabilities that may be under the control of different ownership domains" [2]. SOA includes the previously proven and successful elements from past distributed
paradigms. These elements are combined with design approaches to leverage recent technology in distributed computing [1].

In SOA, loosely coupled systems do computing in terms of services. SOA separates functions into distinct units, or services, which developers make accessible over a network in order to allow users to combine and reuse them in the production of applications. These services communicate with each other by passing data from one service to another [3], or by coordinating an activity between two or more services. They use well established standards [4].

This approach is based on the design principles of loose coupling, the principle by which the consumer and service are insulated from changes in underlying technology and behavior; interoperability [2], the principle which provides the ability to support consumers and service providers that use different programming languages, on different operating systems, with different communication capabilities; encapsulation, which allows the potential consumer to be insulated from the internal technology and even the details of the behavior of the service; and discoverability, which is used to realize the benefit of reuse [5]. Seamless integration of various systems allows data access from anywhere at any time, thereby providing services to customers and partners inside and outside the enterprise. It provides a simple, scalable paradigm for organizing large networks of systems that require interoperability, develops systems that are scalable, evolvable and manageable, and establishes a solid foundation for business agility and adaptability.

2.1 Services in SOA
The most fundamental unit of service oriented solution logic is the service. Services in SOA comprise the following eight distinct design principles.

Standardized Service Contract
Services express their purpose and capabilities via a service contract. It is the most fundamental part of service-orientation.

Service Loose Coupling
Coupling refers to the number of dependencies between modules. Loosely coupled modules have a few known dependencies, whereas tightly coupled modules have many unknown dependencies. SOA promotes loose coupling between service consumers and providers.

Service Abstraction
This emphasizes the need to hide as much of the underlying details of a service as possible to preserve the loosely coupled relationship.

Service Reusability
The agnostic nature of services enables them to be recombined and reused in different forms.

Service Autonomy
For services to carry out their functionalities consistently and reliably, they need to have a significant degree of control over their environment and resources.

Service Statelessness
Services are designed to remain stateful only when required. If they are stateful, then the management of excessive state information can compromise the availability of the service.

Service Discoverability
Service discovery is the process of discovering a service, and interpretation is the process of understanding its purpose and capabilities.

Service Composability
This is related to the modular structure of services, which are composed in three ways: an Application is an assembly of services, components and application logic that binds functions together; a Service Federation is a collection of services managed together in a large service domain; and Service Orchestration is the execution of a single transaction that impacts one or more services in an organization.

2.2 Elements of SOA
The overall architectural model in Fig. 1 shows the elements of SOA. A service provider describes its service using the Web Service Description Language (WSDL). The WSDL definition is divided into two parts: the abstract description that defines the service interface, and the concrete description that establishes the transport and location information. This WSDL definition is published to the Universal Description, Discovery and Integration (UDDI) service registry. SOAP (Simple Object Access Protocol) is the universally accepted standard transport protocol for messages and represents a standardized format for transporting messages. SOAP message contents are presented in the message body, which consists of XML formatted data. A service requestor issues one or more queries to the UDDI registry to locate a service and determine how to communicate with that service. Part of the WSDL provided by the service provider is passed to the service requestor; this tells the service consumer what the requests and responses are for the service provider. The service consumer uses the WSDL to send a request to the service provider, and the service provider returns the expected response to the service consumer.

Figure 2: A typical scenario of service selection
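As a rough, hedged illustration of the requestor-side flow just described (look a service up, then call it with a SOAP message), the following Python sketch posts a hand-built SOAP envelope using only the standard library. The endpoint URL, namespace, operation and element names are hypothetical placeholders and are not taken from the paper.

```python
import urllib.request

# Hypothetical endpoint and action that would be discovered via UDDI/WSDL lookup.
ENDPOINT = "http://example.com/weatherService"      # placeholder URL
SOAP_ACTION = "http://example.com/GetTemperature"   # placeholder SOAPAction

# A minimal SOAP 1.1 envelope; the body element names are illustrative only.
envelope = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetTemperature xmlns="http://example.com/">
      <City>Puducherry</City>
    </GetTemperature>
  </soap:Body>
</soap:Envelope>"""

request = urllib.request.Request(
    ENDPOINT,
    data=envelope.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": SOAP_ACTION},
)

# The provider returns an XML response whose structure is dictated by its WSDL contract.
with urllib.request.urlopen(request) as response:
    print(response.read().decode("utf-8"))
```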
TABLE II
REFERENCES OF DYNAMIC SERVICE SELECTION

Based on QoS (this selection process selects the optimal service out of a group of functionality-similar services optimized for a certain property of QoS):
- Roland Ukor, Andy Carpenter, "Flexible Service Selection Optimization Using Meta-metrics", Congress on Services-I, IEEE, 2009.
- Under no global constraint:
  V. Deora, J. Shao, W. A. Gray, "Supporting QoS based selection in service oriented architecture", Proceedings of the International Conference on Next Generation Web Services Practices, IEEE, 2006.
  Y. Wang, J. Yang, "Relation Based Service Networks for reliable service selection", Proceedings of the Conference on Commerce and Enterprise Computing, IEEE, 2009.
  Canfora, G., Di Penta, M., Esposito, R., and Villani, M. L., "An Approach for QoS-aware Service Composition based on Genetic Algorithms", Proc. of the 2005 Conf. on Genetic and Evolutionary Computation, ACM Press, New York, 2005.
- Under single global constraint:
  D. Liu, Z. Shao, C. Yu, "A heuristic QoS-aware service selection approach to web service composition", International Conference on Computer and Information Science, IEEE, 2009.
  Lingshuang S., Lu Z., et al., "Dynamic Availability Estimation For Service Selection Based On Status Identification", IEEE International Conference on Web Services, 2008.
  Bang Y., Chi-Hung, et al., "Service selection model based on QoS reference vector", Congress on Services, IEEE, 2007.
  D. A. Menasce et al., "On optimal service selection in SOA", Performance Evaluation 67 (2010) 659-675.
- Under multiple global constraint:
  V. Diamadopoulou et al., "Techniques to support Web Service selection", Journal of Network and Computer Applications (2008).
  D. Liu, Z. Shao, C. Yu, "A heuristic QoS-aware service selection approach to web service composition", International Conference on Computer and Information Science, IEEE, 2009.

Based on Semantic web (achieves the similarity comparison by calculating the semantic distance by QoS and context):
- Z. Guoping, Z. Huijuan, Wang Z., "An Approach to QoS-aware service selection in Dynamic Web service composition", IEEE, 2007.
- V. X. Tran et al., "QoS ontology and its QoS-based ranking algorithm for Web services", Simulation Modelling Practice and Theory 17 (2009).

Based on improving standard protocol or language (adds new actions to UDDI to achieve a dynamic UDDI process, or designs a selecting language like SQL to select Web services by setting restrictive conditions):
- Balke W. T., Wagner M., Kim S. M., et al. (2004).
- B. Jeong et al., "On the functional quality of service (FQoS) to discover and compose interoperable web services", Expert Systems with Applications (2009).

Based on user preference algorithm (through users' scores on web services, achieves dynamic updating of the Web services selection system, thus forming a dynamic selection process with a self-evaluation function):
- O. Minhyuk, B. Jongmoon, et al., "An efficient approach for QoS-aware service selection based on a tree-based algorithm", Seventh IEEE International Conference on Computer and Information Science, 2008.
- "TQoS for automatic web service composition", IEEE Transactions on Services Computing (2010).
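To make the "functionality-similar services optimized for a certain property of QoS" idea concrete, here is a small, hedged sketch of local selection by a weighted multi-QoS score. The attribute names, weights and candidate values are invented for illustration and do not reproduce any of the algorithms cited in Table II.

```python
# Candidate services for one activity, with assumed (illustrative) normalized QoS values.
candidates = {
    "svcA": {"availability": 0.99, "cost": 0.20, "response_time": 0.30},
    "svcB": {"availability": 0.95, "cost": 0.05, "response_time": 0.60},
    "svcC": {"availability": 0.97, "cost": 0.10, "response_time": 0.40},
}
# Higher is better for availability; lower is better for cost and response time.
weights = {"availability": 0.5, "cost": 0.2, "response_time": 0.3}

def score(qos):
    return (weights["availability"] * qos["availability"]
            + weights["cost"] * (1.0 - qos["cost"])
            + weights["response_time"] * (1.0 - qos["response_time"]))

best = max(candidates, key=lambda name: score(candidates[name]))
print(best, round(score(candidates[best]), 3))
```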
4 PROPOSED QOS BASED SERVICE SELECTION FOR SERVICE ORCHESTRATION
Service orchestration is the execution of a single transaction that impacts one or more services in an organization, called a business process. In order to maximize the benefits of SOA, service selection is important, especially in terms of providing quality of service (QoS) to consumers in a dynamic environment. The proposed work is intended to develop a more effective, meaningful and robust service selection methodology in service orchestration, resolving conflicts using meta-metrics. It aims at selecting reliable and optimal services by using more than one relevant QoS category along with an effective quality assessment model.

... possible to obtain a solution that is simultaneously optimal for both paths, resulting in problems with service selection. This raises conflicts with modeled flexibility. In order to resolve these conflicts with multiple execution paths in a business process, a set of process metrics called meta-metrics is used. They exist solely for the purpose of biasing the evaluation of QoS metrics for candidate services based on priorities. With probability of execution as the meta-metric, activities with a higher probability of being executed are given priority to resolve the selection conflicts between the optimal solutions for two or more execution paths.

4.2 STATUS IDENTIFICATION BASED AVAILABILITY ESTIMATION FOR SERVICE SELECTION (SIBE)
... recent user. The new service user receives changes from the service list and uses this updated rating for reselecting their optimal services. As the inclusion of the ratings of many new users will subsequently incur more memory space, provision is made to remove the older entries, thereby providing an efficient method of data storage.

Figure 5 shows the quality of service assessment model used in SIBE. It captures the rating from the users. The user, apart from mentioning the functionalities of the required service, is also prompted to specify a quality rating for all the QoS criteria of the intended service. Availability is one of the key QoS attributes; other QoS attributes considered are cost, performance, reliability, reputation and fidelity. Users specify the quality rating in the form of a triple factor comprising the expectation from the service, the value perceived by the user, and the actual rating offered by the user after using the service. The triple factor for cost is denoted as E(c), P(c) and R(c); similarly it is E(per), P(per), R(per) for performance, E(r), P(r), R(r) for reliability, E(rep), P(rep), R(rep) for reputation and E(F), P(F), R(F) for fidelity.

The services held in the four lists consist of the triple factor for all the QoS criteria, comprising expectation, perceived value and the actual rating. Among the expectation ratings provided by the user for all the QoS attributes, the highest rating of a particular attribute is taken. That attribute is then mapped against the attribute of the services in the service list with the same expectation. If the expectation matches, then the appropriate service from the service list is selected and offered to the user. Meeting the expectation for a single attribute does not mean satisfying the other attributes to a significant extent, so the expectation matching can be extended to a maximum number of other attributes, thereby providing a more meaningful and reliable service as the output. The actual rating value, i.e. the feedback provided by the user after the service usage, is added into the service list. This updated rating can be used by a user with the same intentions. Further, to maintain a fixed set of entries in the service list, the older entries are always removed, paving the way for new entries of ratings.

Figure 5: Quality of service assessment model used in SIBE

4.3 PROPOSED ARCHITECTURE FOR SERVICE SELECTION METHOD

Figure 6: Proposed architecture model

Figure 6 shows the proposed architecture model for service selection in the service orchestration of a business process model. The client in the figure is anonymous with respect to the business process. Every business process model represents multiple execution paths, a condition known as modeled flexibility. The activity analyzer analyzes each execution path for the total number of ac-
tivities. For all the activities analyzed, service selection optimization is carried out dynamically on an instance-by-instance basis by SIBE (Status Identification based Availability Estimation for Service Selection), as explained above.

Each execution path represents a set of activities in a service oriented business process. Each activity in the business process involves a service invocation, resulting in service orchestration for the entire transaction. Service selection in a transaction can be carried out once for all activities of a process, or may be carried out on an activity-by-activity basis; here service selection on the latter grounds is used. The optimal service selected for each activity in all execution paths is held in the candidate service list. The service orchestration is finally derived by aggregating the services stored in the candidate service list.

The meta-metrics assigner checks the solution set in the candidate service list, comprising the activities along with their optimal services. If path 1 and path 2 of the business process select the same optimal service for the common activity, then there is no problem. If there is a mismatch between the services in the execution paths for the common activity, then a conflict of modeled flexibility arises. Here the priority of execution of activities in the execution paths is used as the meta-metric to resolve the conflicts. The comparison of priority among the activities in different execution paths is carried out by analyzing the actual working environment. Then the activity with a higher probability of execution is assigned the optimal service, and the same optimal service is also allotted to the other common activities in different execution paths by the service adjustor, thereby overcoming the conflicts with modeled flexibility. Thus the proposed work represents an innovative approach to service selection for service orchestration by addressing the conflicts with modeled flexibility based on meta-metrics.

5 EVALUATION METHOD
Evaluation Environment:
1. Test bed.
2. Actual services (Google, Yahoo etc.).

Evaluation Metrics:
1. Number of services vs. the time needed to select the optimal service.
2. Number of QoS constraints vs. score value of a service.
3. Meta-metrics value vs. the number of conflicts arising from selecting different services for the same activities in different execution paths.
4. Rating vs. three QoS levels, namely low, moderate and high.
5. Selection and invocation of services vs. proxy servers stored.

6 CONCLUSION
This paper presents an innovative approach to multiple quality of service (QoS) based service selection, SIBE (Status Identification Based Availability Estimation for Service Selection), for service orchestration in service oriented architecture. SIBE results in more meaningful and reliable selection of the services used in service orchestration. The execution of a single transaction that impacts one or more services in an organization is called a business process. Modeled flexibility in business processes can cause conflicts between optimal service selections for activities that are common to multiple execution paths. The approach presented in this paper addresses these conflicts by using a set of meta-metrics along with meaningful service selection.

REFERENCES
[1] Thomas Erl, "Service-Oriented Architecture: Concepts, Technology, and Design", Prentice Hall PTR, 2005.
[2] Judith Hurwitz, Robin Bloor, C. Baroudi and K. Marcia, "Service Oriented Architecture for Dummies", Wiley Publishing, Inc.
[3] "SOA Principles of Service Design".
[4] Canfora, G., Di Penta, M., Esposito, R., and Villani, M. L., "An Approach for QoS-aware Service Composition based on Genetic Algorithms", Proc. of the 2005 Conf. on Genetic and Evolutionary Computation, ACM Press, New York, 2005.
[5] D. T. Tsesmetzis, I. G. Roussaki, I. V. Papaioannou, et al., "QoS awareness support in Web-Service semantics", in Proceedings of AICT/ICIW 2006, 2006.
[6] Yu and K. Lin, "Service selection algorithms for Web services with end-to-end QoS constraints", in Proceedings of CEC'04, pp. 129-136, 2004.
[7] R. Ukor and A. Carpenter, "On modelled flexibility and service selection optimisation", in 9th Workshop on Business Process Modeling, Development and Support, vol. 335, CEUR-WS, 2008.
[8] D. Ardagna and B. Pernici, "Global and local QoS guarantee in web service selection", in Business Process Management Workshops, 2005, pp. 32-46.
[9] O. Minhyuk, B. Jongmoon, et al., "An efficient approach for QoS-aware service selection based on a Tree-based Algorithm", Seventh IEEE International Conference on Computer and Information Science, 2008.
[10] Web Services Business Process Execution Language (WS-BPEL), Version 2.0, OASIS Committee Draft, 17th May 2006.
[11] Lingshuang S., Lu Z., et al., "Dynamic Availability Estimation For ...
Abstract: This paper presents the design and implementation of a power converter for an autonomous wind induction generator (IG) feeding an isolated load through a PWM-based novel soft-switching interleaved boost converter. The output voltage and frequency of the wind IG are inherently variable due to random wind-speed fluctuation. The interleaved boost converter is composed of two shunted elementary boost conversion units and an auxiliary inductor. This converter is able to turn on both active power switches at zero voltage to reduce their switching losses and evidently raise the conversion efficiency. Since the two parallel-operated elementary boost units are identical, operation analysis and design for the converter module become quite simple. A three-phase induction machine model and a three-phase rectifier-inverter model based on the a-b-c reference frame are used to simulate the performance of the generation system. It can be concluded from the simulated results that the designed power converters with an adequate control scheme can effectively improve the performance of the output voltage and frequency of the IG feeding an isolated load.

Index Terms: wind power generator, rectifier-inverter circuit, pulse width modulation (PWM), interleaved boost converter, soft switching.
The proposed control scheme is employed to design a PWM controller for the boost converter and inverter. The implementation and design of a power converter for an autonomous wind induction generator (IG) feeding an isolated load through the PWM-based boost-inverter circuit is simulated here.

2 METHODOLOGY
The generation system is designed with an IG. The stator winding terminals of the IG are connected to the load through the rectifier, DC link, boost circuit and inverter. The closed-loop PWM controller generates proper PWM signals to switch the two power electronic devices of the Interleaved Boost Converter (IBC). The wind turbine rotates the IG, and the IG generates power when the speed of the turbine is above the rated speed. The power generated by the IG is converted to DC with a diode bridge rectifier. The obtained DC voltage is not a pure DC signal, so a filter circuit is used to filter out the ripple current and a pure DC voltage is obtained. This DC voltage is then boosted to the required DC level and converted to a three-phase AC signal with IGBTs driven by PWM signals. To regulate the AC output voltage, the IBC is controlled by closed-loop PWM signals. A load is connected at the output of the inverter. The voltage-current equations of the studied IG in matrix form are listed below.

The symbols used in the turbine model are:
- Pm: mechanical output power of the turbine (W),
- Cp: performance coefficient of the turbine,
- rho: air density (kg/m3),
- A: turbine swept area (m2),
- V: wind speed (m/s).

Pulse-width modulation (PWM) is a very efficient way of providing intermediate amounts of electrical power between fully on and fully off. A simple power switch with a typical power source provides full power only when switched on. The term duty cycle describes the proportion of on time to the regular interval or period of time; a low duty cycle corresponds to low power, because the power is off for most of the time. Pulse-width modulation uses a rectangular pulse wave whose pulse width is modulated, resulting in the variation of the average value of the waveform. If we consider a pulse waveform f(t) with a low value ymin, a high value ymax and a duty cycle D, the average value of the waveform is given by

\bar{y} = \frac{1}{T}\int_{0}^{T} f(t)\,dt.

As f(t) is a pulse wave, its value is ymax for 0 < t < D.T and ymin for D.T < t < T. The above expression then becomes

\bar{y} = \frac{1}{T}\left(D\,T\,y_{max} + (1-D)\,T\,y_{min}\right) = D\,y_{max} + (1-D)\,y_{min}.

[Figure: closed-loop PWM gating-signal driver block diagram (block labels: Ce, PWM, PWM Gating Signal, Driver, DSPIC).]
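The turbine symbols listed in the methodology above correspond to the standard expression for the mechanical power extracted by a wind turbine. The relation itself did not survive extraction, so the widely used form is restated here as an assumption based on the listed symbols:

P_m = \frac{1}{2}\,\rho\,A\,C_p\,V^{3}

where the cubic dependence on wind speed V is what makes the generated power vary strongly with wind conditions, motivating the boost and regulation stages described above.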
2) The forward voltage drops on MOSFETs S1 and S2 and diodes D1 and D2 are neglected.
3) Inductors L1 and L2 have large inductance, and their currents are identical constants, i.e., IL1 = IL2 = IL.
4) The output capacitances of switches Cs1 and Cs2 have the same value, i.e., Cs1 = Cs2 = Cs.

The two active switches S1 and S2 are operated with pulse-width-modulation (PWM) control signals. They are gated with identical frequencies and duty ratios, and the rising edges of the two gating signals are separated by half a switching cycle. The operation of the converter can be divided into eight modes, and the equivalent circuits and theoretical waveforms are illustrated in Fig. 4.

The project module is simulated using MATLAB 7.7.0 (R2008b). The simulation is executed with the ode23tb (stiff/TR-BDF2) solver, which is used to speed up execution, and zero-crossing control is disabled. The solver method is set to fast. The voltage is measured at different points in the simulation circuit, and the simulated output is shown below. The system is tested with different loads and wind speeds. The designed system generates AC power with an asynchronous generator (215 HP, 400 V, 50 Hz). The simulation utilizes a three-phase asynchronous machine model and a three-phase rectifier-inverter model based on the a-b-c reference frame to simulate the performance of the generation system.

The IG design is made with the calculated values of resistance, the flux linkages of the stator and rotor windings, the torque equation and the number of poles. The generated power is rectified, and it varies with the wind speed. The power converter converts the three-phase AC to DC, and a filter circuit is then used to obtain a smooth DC voltage. This DC voltage is boosted with the interleaved boost converter (IBC); theoretically the IBC can boost up to 200%. The DC voltage is fed to the inverter, before which it is regulated to the desired voltage in a closed loop with the PWM technique; thus the output of the IBC is always a constant rated voltage. This DC voltage is then converted to regulated three-phase AC with IGBTs. An IGBT inverter is designed with an open-loop PWM technique and feeds the AC load. The PWM signals are designed in closed-loop and open-loop systems with PID and PI controllers respectively. The PWM signal is generated to regulate the DC voltage, which in turn gives a fixed AC supply from the inverter. Thus a regulated three-phase AC supply is transmitted to the load.
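For orientation, the boost ratio mentioned above can be related to the PWM duty cycle through the ideal (lossless, continuous-conduction) boost-converter relation Vout = Vin / (1 - D). The short sketch below applies it to the 160 V to 220 V operating point reported in the results, purely as an illustrative calculation rather than the paper's controller design.

```python
def ideal_boost_duty_cycle(v_in, v_out):
    """Duty cycle D of an ideal boost converter, from Vout = Vin / (1 - D)."""
    if v_out <= v_in:
        raise ValueError("a boost converter can only step the voltage up")
    return 1.0 - v_in / v_out

# Operating point reported in the simulation results: 160 V DC boosted to 220 V DC.
D = ideal_boost_duty_cycle(160.0, 220.0)
print(f"required duty cycle ~ {D:.2f}")   # roughly 0.27; losses would raise this slightly
```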
Fig. 13: THD for output voltage 220 V AC

The input voltage and current generated at the wind induction generator are shown in Figs. 5 and 6. The generated voltage varies with the wind speed; it is noted for different wind velocities, and at each voltage level the output voltage is noted. The examined results show that the output voltage is maintained at 220 V even when the generated voltage is below the desired voltage level. The simulated result shown here is for a generated voltage of 160 V and a current of 9 A. The generated voltage is converted to 160 V DC by the rectifier, shown in Fig. 7. This DC voltage is boosted in the closed-loop circuit with the IBC (Interleaved Boost Converter) to 220 V DC.

5 CONCLUSION
From the above simulated results we can conclude that even at low wind speed and low power generation the required voltage can be obtained at the output. The main advantage of the system is that the minimum wind speed required for power generation can be reduced and the generator power ratings can be reduced. The gear mechanism between the turbine and the shaft can be reduced. The displayed output is taken when the wind speed is 7 m/s. At this speed the generator generates 160 V AC, and this voltage is boosted in the IBC to 220 V DC, which is the required voltage level. Steady-state results under various loads show that the designed control system for the IG can maintain the output at the desired levels. The simulated results of the output voltage validate the required performance of the proposed control scheme. Thus the voltage obtained at the output is regulated with the IBC and inverter block.
Abstract: Fingerprint recognition is a method of biometric authentication that uses pattern recognition techniques based on high-resolution fingerprint images of the individual. Fingerprints have been used in forensic as well as commercial applications for identification and verification. Singular point detection is the most important task of the fingerprint image classification operation. Two types of singular points, called core and delta points, are claimed to be enough to classify fingerprints. The classification can act as an important indexing mechanism for large fingerprint databases, which can reduce the query time and the computational complexity. Usually fingerprint images have a noisy background, and the local orientation field also changes very rapidly in the singular point area, so it is difficult to locate the singular point precisely. There already exist many singular point detection algorithms; most of them can efficiently detect the core point when the image quality is fine, but when the image quality is poor, the efficiency of the algorithm degrades rapidly. In the present work, a new method for the detection and localization of core points in a fingerprint image is proposed.

Index Terms: Core Point, Delta Point, Smoothening, Orientation Field, Fingerprint Classes
1. INTRODUCTION
Fingerprints have been used as a method of identifying individuals due to favorable characteristics such as unchangeability and uniqueness over an individual's lifetime. In recent years, as the importance of information security is highly demanded, fingerprints are utilized for applications related to user identification and authentication. Most Automatic Fingerprint Identification Systems are based on local ridge features, ridge endings and ridge bifurcations, known as minutiae. The first scientific study of the fingerprint was made by Galton, who divided fingerprints into three major classes: arches, loops, and whorls. Henry later refined Galton's classification by increasing the number of classes. Henry's classification is well-known and widely accepted; Henry's classes consist of: arch, tented arch, left loop, right loop and whorl.

At a global level the fingerprint pattern exhibits areas where the ridge lines assume distinctive shapes. Such an area or region with a unique pattern of curvature, bifurcation or termination is known as a singular region and is classified into core points and delta points. The singular points can be viewed as the points where the orientation field is discontinuous.

Navrit Kaur is pursuing M.Tech from Guru Nanak Dev Engineering College, Ludhiana. E-mail: [email protected]
Amit Kamra is with Guru Nanak Dev Engineering College, Ludhiana. E-mail: [email protected]

Core points are the points where the innermost ridge loops are at their steepest. Delta points are the points from which the three patterns, i.e. loop, delta and whorl, deviate. Definitions may vary in different literature, but this definition of singular points is the most popular one. Figure 1 below represents the core and delta points.

Fig 1. The Core and Delta Points on a fingerprint image (labels: Core Point, Delta Points)

This paper is organized as follows. In Section 2, the different types of fingerprints are discussed. In Section 3, the drawbacks of the existing techniques of core point detection are explained. Section 4 focuses on the problem solution. In Section 5, the core point is extracted using the proposed algorithm. The experimental results performed on a variety of fingerprint images are discussed in Section 6, and the conclusion and future scope are discussed in Section 7.
2 FINGERPRINT CLASSES
The positions of cores and deltas are claimed to be enough to classify fingerprints into six categories: arch, tented arch, left-loop, right-loop, whorl, and twin-loop.

Loops constitute between 60 and 70 per cent of the patterns encountered. In a loop pattern, one or more of the ridges enters on either side of the impression, recurves, touches or crosses the line of the glass running from the delta to the core, and terminates or tends to terminate on or in the direction of the side where the ridge or ridges entered. There is exactly one delta in a loop. Loops that have ridges that enter and leave from the left side are called Left Loops, and loops that have ridges that enter and leave from the right side are called Right Loops. In twin loops the ridges containing the core points have their exits on different sides.

In a whorl, some of the ridges make a turn through at least one circuit. Any fingerprint pattern which contains two or more deltas will be a whorl pattern.

In arch patterns, the ridges run from one side of the pattern to the other, making no backward turn. Arches come in two types, plain or tented. While the plain arch tends to flow rather easily through the pattern with no significant changes, the tented arch does make a significant change and does not have the same easy flow that the plain arch does.

Fig 2. Classes of fingerprint: (a) Arch, (b) Tented Arch, (c) Right Loop, (d) Left Loop, (e) Whorl and (f) Double Loop (the double loop type is sometimes counted as a whorl)

Fingerprint friction ridge details are generally described in a hierarchical order at three levels, namely Level 1 (pattern), Level 2 (minutiae points) and Level 3 (pores and ridge shape). Automated fingerprint identification systems (AFISs) employ only Level 1 and Level 2 features. No two fingerprints are alike, but the pattern of a fingerprint is inherited from close relatives and people in the immediate family; this is considered "Level 1 detail". The detail of the actual finger and palm print is not inherited; this is considered "Level 2 and Level 3 detail" and is used to identify fingerprints from person to person. The following figure briefly explains the three levels of detail in a fingerprint.

3 PROBLEM FORMULATION
The existing techniques used for the detection of the core point do not produce good results for noisy images. Moreover, they
may sometimes detect spurious core points due to their inability to work efficiently for noisy images. Also, techniques like the Poincare Index fail for the arch type of image. The aim of the proposed algorithm is to formulate a more accurate core point determination algorithm which can produce better localization of core points, avoiding any spurious detected points and producing robust results for all the types of fingerprints that have been discussed in this paper.

4 PROPOSED SOLUTION

Fig 4. The proposed methodology for Core Point Detection (flow: Original Image, Smoothed Image, Fine Tuning of Orientation Field)

4.1 SEGMENTATION
The first step of the fingerprint enhancement algorithm is image segmentation. Segmentation is the process of separating the foreground regions in the image from the background regions. The foreground regions correspond to the clear fingerprint area containing the ridges and valleys, which is the area of interest. The background corresponds to the regions outside the borders of the fingerprint area, which do not contain any valid fingerprint information. Cutting or cropping out the region that does not contain valid information minimizes the number of operations on the fingerprint image. The background regions of a fingerprint image generally exhibit a very low grey-scale variance value, whereas the foreground regions have a very high variance. Hence, a method based on a variance threshold can be used to perform the segmentation. The steps for mean and variance based segmentation are as follows:

a) Firstly, the image I(i,j) is divided into non-overlapping blocks of size w x w.

b) The mean value M(I) is then calculated for each block using the following equation:

M(I) = \frac{1}{w^2} \sum_{i=-w/2}^{w/2} \sum_{j=-w/2}^{w/2} I(i,j)    (1)

c) The mean value calculated above is then used to find the variance using the following equation:

V(I) = \frac{1}{w^2} \sum_{i=-w/2}^{w/2} \sum_{j=-w/2}^{w/2} \left(I(i,j) - M(I)\right)^2    (2)

4.2 NORMALIZATION
In image processing, normalization is a process that changes the range of pixel intensity values. Normalization is sometimes called contrast stretching. In more general fields of data processing, such as digital signal processing, it is referred to as dynamic range expansion. The purpose of dynamic range expansion in the various applications is usually to bring the image, or other type of signal, into a range that is more familiar or normal to the senses, hence the term normalization. Normalization is a linear process. If the intensity range of the image is 50 to 180 and the desired range is 0 to 255, the process entails subtracting 50 from each pixel intensity, making the range 0 to 130; then each pixel intensity is multiplied by 255/130, making the range 0 to 255. Let I(i,j) denote the gray-level value at pixel (i,j), M0 and V0 denote the estimated mean and variance of I, respectively, and N(i,j) denote the normalized gray-level value at pixel (i,j). The normalized image is defined as follows:
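Equation (3) itself did not survive extraction. The formulation commonly used in fingerprint enhancement, and consistent with the symbols M0, V0, M(I) and V(I) defined above, is reproduced here as an assumption of what equation (3) contains:

N(i,j) =
  M_0 + \sqrt{\dfrac{V_0\,\left(I(i,j) - M(I)\right)^2}{V(I)}},  if I(i,j) > M(I)
  M_0 - \sqrt{\dfrac{V_0\,\left(I(i,j) - M(I)\right)^2}{V(I)}},  otherwise    (3)

Under this form, M0 and V0 act as the target mean and variance of the normalized block, so all blocks are brought to a common intensity scale before orientation estimation.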
The orientation flow is then estimated using the least squares method with the following equations, after dividing the input image I into non-overlapping blocks of size w x w and computing the gradients dx and dy at each pixel:

V_x(i,j) = \sum_{u=i-w/2}^{i+w/2} \sum_{v=j-w/2}^{j+w/2} 2\,\partial_x(u,v)\,\partial_y(u,v)    (4)

V_y(i,j) = \sum_{u=i-w/2}^{i+w/2} \sum_{v=j-w/2}^{j+w/2} \left(\partial_x(u,v)^2 - \partial_y(u,v)^2\right)    (5)

where \partial_x(u,v) and \partial_y(u,v) represent the gradient magnitudes at each pixel in the x and y directions respectively. The direction of the block centered at pixel (i,j) is then computed using the following equation (a small numerical sketch of this computation is given after the algorithm steps below):

\theta(i,j) = \frac{1}{2}\tan^{-1}\!\left(\frac{V_y(i,j)}{V_x(i,j)}\right)    (6)

Due to the presence of noise, corrupted ridge and valley structures, minutiae etc. in the input image, the estimated local ridge orientation \theta(i,j) may not always be correct. A low-pass filter is hence used to modify the incorrect local ridge orientation.

4.4 SMOOTHING AND FINE TUNING
As singular points are the points where the orientation field is discontinuous, orientation plays a crucial role in estimating the core point on a fingerprint image. Hence, we need another mechanism to fine tune the orientation field so as to avoid any spurious core points and the irregularities that have occurred because of noise. The orientation field for the coarse core point is then fine tuned by adjusting the orientation using

\theta = \frac{1}{2}\tan^{-1}(B/A)    (8)

when B(i,j) is non-negative; otherwise \theta is shifted by \pi/2, with a further correction of \pm\pi/2 applied according to the signs of A and of \theta. Hence we calculate the value \theta, which is the orientation value of the image.

5 PROPOSED ALGORITHM
1. The original fingerprint image is first segmented and normalized using equations (1), (2) and (3).
2. Determine the x and y magnitudes of the gradients Gx and Gy at each pixel.
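As a compact illustration of equations (4)-(6), the following NumPy sketch estimates the block-wise orientation field from pixel gradients using the least-squares formulation. The block size, the gradient operator and the test image are choices made for the example and are not prescribed by the paper.

```python
import numpy as np

def block_orientation_field(image, w=16):
    """Estimate ridge orientation per w x w block via eqs. (4)-(6)."""
    # Pixel gradients; simple central differences stand in for any gradient operator.
    gy, gx = np.gradient(image.astype(float))
    h, wid = image.shape
    theta = np.zeros((h // w, wid // w))
    for bi in range(h // w):
        for bj in range(wid // w):
            sx = gx[bi*w:(bi+1)*w, bj*w:(bj+1)*w]
            sy = gy[bi*w:(bi+1)*w, bj*w:(bj+1)*w]
            vx = np.sum(2.0 * sx * sy)                 # eq. (4)
            vy = np.sum(sx**2 - sy**2)                 # eq. (5)
            theta[bi, bj] = 0.5 * np.arctan2(vy, vx)   # eq. (6), atan2 handles quadrants
    return theta

# Example on a random "image"; a real fingerprint image would be loaded instead.
rng = np.random.default_rng(0)
orientations = block_orientation_field(rng.random((128, 128)), w=16)
print(orientations.shape)   # (8, 8) block-wise orientation estimates
```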
Index Terms: singular systems, impulsive behavior, internal model control, model based control, robust control, tracking problem, impulse elimination.
...ler in the conventional context. The paper is organized as follows. In the next section, the background is discussed, the obstacles in the control of singular systems are presented, and some major limitations of the direct extension of IMC are explained. In the third section the proposed method is studied and the filter design procedure is illustrated. In the fourth section several examples and simulations are given to examine the algorithm both in terms of robustness properties and closed loop performance. Finally, the concluding remarks are given in the last section.

CONTROL OBJECTIVES IN SINGULAR CONTROL

E\dot{x} = Ax + Bu, \qquad y = Cx    (1)

Definition 1: System (1) is impulse free if and only if:

\deg|sE - A| = \operatorname{rank} E    (2)

The nullity index of E is called the singularity index of a singular system (1) in this paper.

Remark 1: Note that the following general inequality always holds:

\deg|sE - A| \le \operatorname{rank}(E)    (3)

Corollary 1: A singular system is called impulse free if and only if it does not exhibit impulses in its impulse response.

Definition 2: A singular system is called minimal if it is observable and controllable. The minimality of the plant is presumed throughout this paper.

Definition 3: A transfer function is strictly proper, bi-proper or improper if its limit as s goes to infinity is zero, a finite nonzero value, or infinite, respectively. Strictly proper and bi-proper systems may be generally named proper.

Because CP is supposed to be strictly proper, the largest term in its expansion has a negative power; therefore the denominator 1 + \sum_{j\ge 1} b_j s^{-j} has a greater degree than the numerator, and thus the closed loop system is strictly proper.

Figure 1: Feedback structure
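For illustration of Definition 3, three simple transfer functions (chosen arbitrarily, not taken from the paper) show the three cases of the limit as s \to \infty:

\lim_{s\to\infty}\frac{s+1}{s^2+2s+3} = 0 \quad \text{(strictly proper)}, \qquad
\lim_{s\to\infty}\frac{2s+1}{s+4} = 2 \quad \text{(bi-proper)}, \qquad
\lim_{s\to\infty}\frac{s^2+1}{s+1} = \infty \quad \text{(improper)}.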
Remark 3: Note that Lemma 2 provides a sufficient condition; the necessary and sufficient condition is derived later. Lemma 2 shows why the objective of properness was not considered before the introduction of descriptor systems. Assuming strictly proper functions for the plant and compensator, it is trivial that the closed loop system is strictly proper. Also, for a strictly proper plant and a bi-proper compensator the closed loop will be bi-proper.

Lemma 3: For a bi-proper plant and a bi-proper compensator, the closed loop will be improper if and only if:

\lim_{s\to\infty} CP = -1

Lemma 4: In order to have a strictly proper closed loop system with unit feedback, if the plant is improper the compensator should be strictly proper with a sufficiently large relative degree.

Proof: For the closed loop to be strictly proper, the compensator/plant product should be strictly proper according to Lemma 2; therefore the compensator should be strictly proper.

Robust Internal Model Control of Singular Systems
In the IMC structure the parallel model, or internal model, is inevitably proper or strictly proper. Therefore there always exists a mismatch between the plant and the model. For a continuous output, especially in the case of initial jumps of the input, it is required that the plant and the internal model have the same infinite gain and that the compensator is strictly proper. This issue can be treated by a smoothing pre-filter for the reference signal, but that method is not robust against model uncertainties. The IMC filter is conventionally used to enhance robustness properties by sacrificing closed loop response and making the compensator implementable (proper). Moreover, it accounts for online adaptation of the control system by adjusting the filter time constant. In this paper we extend this approach by using a second IMC filter which assures that the closed loop is strictly proper and has a smooth response by compensating the singular plant's impulsive behavior. The singular internal model control filter, or SIMC filter, is designed to yield a continuous smooth response and a robust IMC design for singular systems. In fact, by using a parallel strictly proper model in IMC, the uncertainty becomes unbounded and robust control is no longer feasible. Therefore the SIMC filter has another role of bounding the uncertainty profile and making the robust control problem feasible. The disk-type uncertainty profile usually assumed in robust control schemes is described by the following relation:

l_m(\omega) = \left|\frac{p(j\omega) - \tilde{p}(j\omega)}{\tilde{p}(j\omega)}\right| \le \bar{l}_m(\omega)    (4)

This uncertainty description allows us to incorporate several singular systems in the design, while state-space uncertainty descriptions are limited to representing only singular systems with a pre-specified singularity index. If one augments the improper plant by high frequency stable poles, a strictly proper model can be obtained which has a very close behavior to the plant, at least over a low enough frequency range. Larger poles result in a closer response to that of the plant over wider bandwidths. However, in this way the uncertainty becomes unbounded. In particular, assume a polynomial of stable real poles with a unit steady state gain, namely D; then one can write:

\tilde{p}(s) = \frac{p(s)}{D(s)}    (5)

The above description of the model is the most natural selection for a strictly proper model whose behavior is as close as possible to that of the plant. However, in this situation the mismatch between plant and model is not included in a disk shaped region; in other words the uncertainty bound will be infinite. Now we can take different approaches: choose another internal model which yields bounded uncertainty; develop new theory for this kind of uncertainty; or modify the plant input in order to bound the uncertainty as well as removing impulses from the response. The following lemmas are introductory material for the theorems developed later in this paper.

Lemma 5: A control system is robustly stable if and only if the complementary sensitivity function fulfills the following inequality [12]:

\sup_{\omega} \left|\eta(j\omega)\right|\,\bar{l}_m(\omega) < 1    (6)

Remark 4: For the IMC structure the complementary sensitivity function and the uncertainty can be computed as follows:

\eta(s) = \frac{qp}{1 + q(p - \tilde{p})} = \frac{qp}{1 + qp(1 - 1/D)}    (7)

l_m(s) = |D - 1|    (8)

Therefore, in this case condition (6) cannot be satisfied. Thus we need to modify the IMC structure or algorithm in order to obtain a more tractable uncertainty profile. In the following section the SIMC filter is introduced and the proposed method is studied.

THE SIMC FILTER
The idea of augmenting the IMC compensator by an IMC filter can be extended to singular systems in a different manner. According to the previous discussions, one way to overcome the obstacles in IMC of singular systems is to augment the compensator by an additional IMC filter, which we call SIMC. This filter has the same structure as the conventional IMC filter for step reference signals, and therefore the IMC problem of singular systems consists of find-
Remark 10: It should be noticed that there exists no constraint on the SIMC filter time constant and any positive time constant can be chosen. However, when smoothness of the response is also a requirement, a large time constant for the SIMC filter is required, and when a fast response is desired, it is better to choose the time constant as small as possible. Note that if the SIMC filter time constant is larger than those of the IMC filter and the plant dominant time constant, it will determine the closed loop time constant. In fact the closed loop time constant is the largest time constant among the plant, IMC filter and SIMC filter time constants. Because of robustness considerations the SIMC filter time constant may be smaller than the IMC filter time constant, and it does not restrict the closed loop performance. It is not possible to decrease the SIMC filter time constant as much as desired, since input noises may be amplified.

Remark 11: Note that (13) means that the steady state gains of the plant and the model should be of the same sign. A small mismatch between plant and model steady state gains may cause instability if their signs were different. This is a common drawback of robust control systems for plants with zeros near the origin: by a slight change of the zero location the closed loop may become unstable if the zero is near the origin.

Lemma 9: Irregularity of the closed loop occurs if and only if:

cp = -1 for all s

Proof: From the definition of regularity, a singular system is irregular if and only if:

|sE - A| = 0   (14)

In the frequency domain context of output feedback control systems, the above determinant is the characteristic polynomial of the system, i.e. the denominator of the complementary sensitivity function. Write the closed loop transfer function as:

eta(s) = cp / (1 + cp) = (N/M) / (1 + N/M) = N / (N + M)

According to (14) and (15) the closed loop system is irregular if and only if:

N = -M

This can be rewritten as:

cp = -1 for all s   (15)

The last equality also means an unsolvable algebraic loop in the simulation.

Corollary 2: For a strictly proper plant/compensator, (15) does not occur because:

cp(s) = a1 s^-1 + a2 s^-2 + ...   (16)

As a result, for a strictly proper compensator/plant combination the regularity issue is not of concern. This corollary depicts why the regularity control objective is introduced only for singular systems and not for standard strictly proper ones.

In the following theorem we introduce the main characteristics of the proposed algorithm.

Theorem 2: The closed loop system with an appropriate IMC filter designed according to (12) is robustly strictly proper and robustly regular against all uncertainties described by (4).

Proof: Note that from Theorem 1, closed loop robust stability and zero frequency performance are assured. The family of plants described by (4) all have a singularity index smaller than or equal to that of the nominal plant. This can be shown as follows: assume that there is a plant in the family (4) that has a larger singularity index than the nominal plant. Then the uncertainty profile can be written as:

l_m = |(p - p~) / p~|

From the above assumption the uncertainty will increase with frequency because it is an improper transfer function. Therefore, (4) cannot be satisfied as the uncertainty is unbounded. Moreover, for any plant in the family (4) the relative degree of the SIMC filter is greater than or equal to the plant singularity index and thus the closed loop system is robustly strictly proper according to Lemma 2. Also note that regularity of the plant is guaranteed by Lemma 8 because of the strict properness of the plant/compensator combination.

The following design procedure can be followed for robust internal model control of a singular plant.

Design Procedure:

1. Choose the polynomial D and set f2 as its inverse. The polynomial time constant should be smaller than the dominant time constant of the plant. According to the nominal singular plant, choose m such that strict properness of the closed loop is guaranteed.

2. For the nominal plant, check the feasibility of robust control with the uncertainty profile (4) according to (12); if satisfied, design the IMC filter for good performance in the nominal case.

3. Redesign the SIMC filter for better performance if required.
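As a rough illustration of step 2, the feasibility check can be carried out numerically on a frequency grid. The sketch below is assumption-laden rather than the paper's procedure: it evaluates the robust stability condition (6) for the plant family used later in Example 2, and it assumes that the nominal complementary sensitivity reduces to the product of the two filters f1 and f2; the filter time constants are illustrative choices, not values fixed by the paper.

import numpy as np

# Hedged numerical sketch of the feasibility check, not the paper's exact test.
# It evaluates sup_w |eta(jw)| * l_m(w) from (4) and (6) over a frequency grid,
# assuming eta = f1 * f2 for the matched nominal model (an assumption).
w = np.logspace(-2, 3, 2000)          # frequency grid (rad/s)
s = 1j * w

def tf(num, den, s):
    """Evaluate num(s)/den(s) with highest-order-first coefficients."""
    return np.polyval(num, s) / np.polyval(den, s)

# Improper plants of the Example 2 family and the strictly proper nominal model
plants = {"p1": [1, 3, 1], "p2": [1, 4, 1], "p3": [1, 0, 1], "p4": [1, 1, 1]}
D = np.convolve([0.1, 1], np.convolve([0.1, 1], [0.1, 1]))   # (0.1s + 1)^3
p_model = tf([1, 2, 1], D, s)                                # p~ = (s^2+2s+1)/D

# IMC filter f1 and SIMC filter f2 = 1/D (illustrative time constants)
f1 = tf([1], [0.5, 1], s)
f2 = tf([1], D, s)
eta = f1 * f2

for name, num in plants.items():
    p = np.polyval(num, s)            # improper plant p_i(s) is a polynomial in s
    l_m = np.abs((p - p_model) / p_model)     # multiplicative uncertainty, eq. (4)
    print(f"{name}: sup |eta|*l_m = {np.max(np.abs(eta) * l_m):.3f}")

If the printed supremum is below one for every member of the family, condition (6) holds and the chosen filters are a feasible starting point.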
SIMULATION RESULTS

Simulating an improper system is not possible with the existing numerical methods, since simulation needs future data to compute the present state vector. This is why many papers in the field of singular systems do not include any simulation examples, or simulate only causal singular systems. However, if the closed loop system is proper, any simulation software can easily implement the closed loop system regardless of the inner unsolvable loops, which form singular systems in the inner parts of the closed loop system. In this paper, some simple illustrative examples are chosen in order to show the effectiveness of the proposed algorithm.
dx1/dt = -6 x1 + 2 x2 + u
0 = x2 + u
y = x1 + 2 x2                                   (17)

p~(s) = (2s + 13) / ((s + 6)(0.1s + 1)),   p(s) = (2s + 13) / (s + 6)
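As a quick cross-check of the descriptor realization (17) as reconstructed above, the algebraic state can be eliminated symbolically. This is a hedged sketch: the sign conventions in the source are not fully recoverable, so only the (2s + 13)/(s + 6) structure of p(s) is confirmed.

import sympy as sp

# Eliminate the algebraic state x2 from the reconstructed realization (17) and
# inspect the resulting input-output transfer function.
s, u = sp.symbols("s u")
x2 = -u                                    # algebraic constraint: 0 = x2 + u
x1 = (2 * x2 + u) / (s + 6)                # from s*x1 = -6*x1 + 2*x2 + u
y = x1 + 2 * x2                            # output equation
print(sp.simplify(y / u))                  # -> -(2*s + 13)/(s + 6)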
The filters and compensator used for this example are

f2(s) = 1/D(s),   q~(s) = (0.1s + 1)(s + 6) / (2s + 13),   f1(s) = 1 / (0.5s + 1)

Figure 1: System response to initial condition
Figure 2: Set point and disturbance response (disturbance occurred and rejected)
Figure 3: Set point response with initial condition
Figure 4: Phase portrait of the closed loop near the origin

After a change in the set point, the initial condition response vanishes and the set point signal is tracked without any offset. The closed loop system is stable, as the phase portrait shows, and since it is strictly proper a smooth response is attained. The closed loop system is able to follow any piecewise constant reference signal, i.e. regardless of uncertainty in the plant model.

Example 2: Consider a group of linear singular systems described by the following set of transfer functions:

p4(s) = s^2 + s + 1
p1(s) = s^2 + 3s + 1
p2(s) = s^2 + 4s + 1
p3(s) = s^2 + 1                                 (19)

The nominal plant is assumed to be p(s) = s^2 + 2s + 1, and the nominal (strictly proper) model is

p~(s) = (s^2 + 2s + 1) / (0.1s + 1)^3

with the compensator

q(s) = f1(s) (0.1s + 1)^3 / (s^2 + 2s + 1)

It is aimed to design a single robust controller for all plants described above. The uncertainty norm is bounded for all of the models described in (19); however, its infinity norm is near unity for case p4. The following figures depict the closed
loop behavior in tracking a step set point. Set point tracking is almost perfect even in the presence of uncertainties. This characteristic also exists in conventional unit feedback control, e.g. PID controllers; however, state space methods like [5] do not include this feature. In contrast to the aggressive nature of singular systems, the closed loop response is smooth enough to prevent any damage to the instruments.

Figure 5: Step response for p1
Figure 6: Step response for p2

CONCLUSION

In this paper a new, effective and simple control scheme is proposed for robust internal model control of singular linear systems. The method has many advantages over the existing state space methods, including robust strict properness of the closed loop, avoidance of algebraic loops, robust tracking of specific signals, and the ability to robustly stabilize a larger group of singular systems compared with other methods. Two simulation examples are included to depict the algorithm's performance.

References

[1] Mertzios, B.G. and F. Lewis, "Fundamental matrix of discrete singular systems," Journal of Circuits, Systems and Signal Processing, vol. 8, no. 3, 1989.
[2] Dai, L., Singular Control Systems, Springer-Verlag, 1989.
[3] Campbell, S.L., R. Nikoukhah and B.C. Levy, "Kalman filtering for general discrete time linear systems," IEEE Transactions on Automatic Control, vol. 44, no. 10, pp. 1829-1839, 1999.
[4] Luenberger, D., "Dynamic equations in descriptor form," IEEE Transactions on Automatic Control, vol. AC-22, no. 3, June 1977.
[5] Syrmos, V.L. and F.L. Lewis, "Robust eigenvalue assignment in generalized systems," Proceedings of the 30th Conference on Decision and Control, England, 1991.
[6] Fang, C.H., L. Lee and F.R. Chang, "Robust control analysis and design for discrete time singular systems," Automatica, vol. 30, no. 11, pp. 1741-1750, 1994.
[7] Xu, S., C. Yang, Y. Neu and J. Lam, "Robust stabilization for uncertain discrete singular systems," Automatica, vol. 37, pp. 769-774, 2001.
[8] Mukundan, R. and W. Dayawansa, "Feedback control of singular systems: proportional and derivative feedback of the state," International Journal of Systems Science, vol. 14, no. 6, pp. 615-632, 1983.
[9] Chu, D.L., H.C. Chan and D.W. Ho, "Regularization of singular systems by derivative and proportional output feedback," SIAM Journal of Matrix Analysis and Applications, vol. 19, no. 1, pp. 21-38, January 1998.
[10] Xu, S. and J. Lam, "Robust stability and stabilization of discrete singular systems: An equivalent characterization," IEEE Transactions on Automatic Control, vol. 49, no. 4, April 2004.
[11] Hou, M., "Controllability and elimination of impulsive modes in descriptor systems," IEEE Transactions on Automatic Control, vol. 49, no. 10, 2004.
[12] Morari, M. and E. Zafiriou, Robust Process Control, Prentice Hall, 1989.
Abstract - The demand considered in most classical inventory models is constant, while in most practical cases the demand changes with time. In this article, an inventory model is developed with a time dependent two-parameter Weibull demand rate and a deterioration rate that increases with time. Each cycle has shortages, which have been partially backlogged to suit present day competition in the market. The effect of permissible delay in payments is also incorporated in this study. The total cost, consisting of ordering cost, inventory holding cost, shortage/backordering cost, lost sale cost and deterioration cost, is formulated as an optimal control problem using a trade credit policy. The optimal solution for the model is derived and the effects of trade credit on the optimal replenishment policy are studied with the help of numerical examples.
Index Terms - Inventory, Shortages, partial backlogging, Weibull demand, trade credit, variable deterioration, replenishment
1. INTRODUCTION

In the classical inventory economic order quantity (EOQ) model, it was tacitly assumed that the customer must pay for the items as soon as the items are received. However, in practice, or when the economy turns sour, the supplier frequently offers its customers a permissible delay in payments to attract new customers. In today's business transactions, it is frequently observed that a customer is allowed some grace period before settling the accounts with the supplier or the producer. The customer does not have to pay any interest during this fixed period, but if the payment is delayed beyond the period, interest will be charged by the supplier. This arrangement turns out to be very advantageous to the customer, as he may delay the payment till the end of the permissible delay period. During this period he may sell the goods, accumulate revenue on the sales and earn interest on that revenue. Thus it makes economic sense for the customer to delay the payment of the replenishment account up to the last day of the settlement period allowed by the supplier or the producer. Similarly, for the supplier, it helps to attract new customers as it can be considered a sort of loan. Furthermore, it helps in the bulk sale of goods, and the existence of a credit period serves to reduce the cost of holding stock for the user, because it reduces the amount of capital invested in stock for the duration of the credit period. So, the concept of permissible delay in payments is beneficial both for the supplier and the customer.

In some real life situations there is a part of demand which cannot be satisfied from the current inventory, leaving the system in stock out. In these systems two situations are mainly considered: all customers wait until the arrival of the next order (complete backorder case), or all customers leave the system (lost sales case). However, in practice, some customers are able to wait for the next order to satisfy their demands during the stock out period, while others do not wish to or cannot wait and have to fill their demands from other sources. This situation is modeled by considering partial backordering in the formulation of the mathematical model. Wee (1999) develops a deterministic inventory model with quantity discount, pricing and partial backlogging when the product in stock deteriorates with time according to a Weibull distribution. Teng (2002) presents an EOQ model under the conditions of permissible delay. Chen et al. (2003) establish an inventory model having Weibull deterioration and time varying demand. Wu et al. (2003) considered an inventory model where the deteriorating rate and demand rate follow Chen's model (2003) and where shortages are permitted. Papachristos and Skouri (2003) present a production inventory model with production rate, product demand rate and deteriorating rate all considered as functions of time. Their model allows shortages and the partial backlogging rate is a hyperbolic function of the time up to the order point. They propose an algorithm for finding the solution of the problem. Abad (2003) considers the problem of determining the optimal price and lot size for a reseller in which unsatisfied demand is partially backordered. There are several interesting papers related to partial backlogging and trade credits, viz. Park (1982), Jamal et al. (1997), Lin et al. (2000), Dye et al. (2007) and their references.

-----------------------------------------------
1. P.G. Department of Statistics, Utkal University, Bhubaneswar-751004, India, [email protected]
2. Dept. of Mathematics, Orissa Engineering College, Bhubaneswar-751007, India, [email protected]
The shortage cost during the cycle is

SC = C2 ( ... )   (8)

The amount of lost sale cost during the period (0, T) is given by

LC = C3 ( ... )   (9)
The interest charged is

IC1 = p Ic Integral_M^T1 I(t) dt = p Ic ( ... )   (11)
(Fig-2)

The total inventory cost per unit time is given by

TIC1 = (1/T) [ A + HC + SC + LC + DC + IC1 - IE1 ] = ...   (12)

Case II: T > M > T1

In this situation the interest charged is IC2 = 0 and the interest earned per unit time is

IE2 = p Ie ( ... )   (15)
(Fig-3)

Then the total inventory cost per unit time is given by

TIC2 = (1/T) [ A + HC + SC + LC + DC + IC2 - IE2 ] = ...   (16)

The solutions for the optimal values of T1 and T can be found by solving the following equations simultaneously.
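In practice the two simultaneous first-order conditions can also be handled numerically. The sketch below is a hedged stand-in: the full closed-form TIC2(T1, T) of (16) is garbled in the source, so total_cost uses the same argument names and parameter symbols but only a placeholder cost structure built on the Weibull demand rate; it should be replaced by the paper's expression.

import numpy as np
from scipy.optimize import minimize

# Hedged sketch only: `total_cost` is a structural stand-in for TIC2(T1, T)
# (ordering + holding + shortage - interest earned, per unit time), built on
# the assumed Weibull demand rate D(t) = alpha*beta*t**(beta - 1).
A, h, C2, C3, p, Ie, M = 50.0, 2.0, 0.8, 2.0, 0.5, 0.1, 0.5
alpha, beta = 10.0, 0.5                      # illustrative demand parameters

def demand(t):
    return alpha * beta * t ** (beta - 1.0)

def total_cost(x):
    T1, T = x
    if not (0.0 < T1 < T):
        return 1e9                           # penalise infeasible points
    t = np.linspace(1e-6, T1, 400)
    holding = h * np.trapz(demand(t) * (T1 - t), t)   # placeholder holding term
    shortage = C2 * (T - T1) ** 2 / 2.0               # placeholder shortage term
    earned = p * Ie * M * demand(M)                   # placeholder interest earned
    return (A + holding + shortage - earned) / T

res = minimize(total_cost, x0=[0.4, 1.0], method="Nelder-Mead")
print("T1*, T* =", res.x, "  TIC =", res.fun)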
Case II: We take A = 50, h = 2, c1 = 2, C2 = 0.8, C3 = 2.0, 0.0002, v = 0.001, 0.5, p = 0.5, Ie = 0.1, M = 0.5 and obtain:

        T1            T          Q          TIC2
0.8     0.4998753     1.257346   0.000115   39.766547
1.0     0.41972384    1.066637   0.000084   46.876526
1.5     0.398675      1.051521   0.00005    46.962943
2.0     0.37985845    1.047747   0.000029   47.721766
2.5     0.359786885   1.007243   0.000016   49.640806
3.0     0.34985674    0.990081   0.000009   50.501263
3.5     0.29985764    0.85216    0.000003   58.674548
4.0     0.2789685     0.797059   0.000001   62.730769
4.5     0.2756759     0.791841   0.000001   63.144167
5.0     0.2679865     0.772816   0.000000   64.698610

We have

dD(t)/dt = alpha*beta*(beta - 1) t^(beta - 2)    (19)

and

d^2 D(t)/dt^2 = alpha*beta*(beta - 1)*(beta - 2) t^(beta - 3)    (20)

i) For 0 < beta < 1, dD(t)/dt <= 0 and d^2 D(t)/dt^2 >= 0, so the demand rate decreases with time at an increasing rate.
ii) For beta = 1, the demand rate becomes steady over time.
iii) For 1 < beta <= 2, dD(t)/dt >= 0 and d^2 D(t)/dt^2 <= 0, so the demand rate increases with time at a decreasing rate.
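As a quick symbolic cross-check of (19) and (20), assuming (as in the reconstruction above) the two-parameter Weibull demand rate D(t) = alpha*beta*t^(beta - 1):

import sympy as sp

# Symbolic check of the first and second derivatives of the Weibull demand rate.
t, alpha, beta = sp.symbols("t alpha beta", positive=True)
D = alpha * beta * t ** (beta - 1)
print(sp.simplify(sp.diff(D, t)))      # alpha*beta*(beta - 1)*t**(beta - 2)
print(sp.simplify(sp.diff(D, t, 2)))   # alpha*beta*(beta - 1)*(beta - 2)*t**(beta - 3)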
Abstract - Security has always been a key issue with all wireless networks since they have no physical boundaries. There are many existing and evolving threats which must be considered to ensure that the countermeasures are able to meet the security requirements of the environments in which they are expected to be deployed.
1 Wireless Technologies

Wireless Broadband is a fairly new technology that provides high-speed wireless internet and data ...

... connection to the wired network. For wireless devices in a WLAN to communicate with each other, they must all be configured with the same SSID. Wireless devices have a default SSID that is set at the factory. Some wireless devices refer to the SSID as the network name.
... aim of saving battery life for their own communications are considered to be selfish. Harmful or malicious nodes can change the routing information by modifying, fabricating or impersonating nodes and thereby disrupting the correct functioning of a routing protocol [6].

Once an attacker has gained enough information from a passive attack, the hacker can then launch an active attack against the network. There is a large number of active attacks that a hacker can launch against a wireless network. Some examples are spoofing attacks, unauthorized access and flooding attacks. Spoofing occurs when a malicious node misrepresents its identity by altering its MAC or IP address in order to mislead the information about the network topology. It can also create routing loops.

3 General Physical layer Attacks

3.1 Jamming Attacks on Wireless Networks
Jamming is an attack specific to wireless networks. Jamming occurs when spurious RF frequencies interfere with the operation of the wireless network. In some cases the jamming may be unintentional and is caused by the presence of other devices, such as cordless phones, that operate in the same frequency band as the wireless network. Intentional and malicious jamming occurs when an attacker analyzes the spectrum being used by wireless networks and then transmits a powerful signal to interfere with communication on the frequencies discovered.

3.2 Scrambling
Scrambling is a type of jamming attack applied for short intervals to disorder targeted frames (mostly management messages), which ultimately leads to network failure.

3.3 Water Torture Attack
In this attack the attacker pushes a Subscriber Station (SS) to drain its battery or consume computing resources by sending bogus frames [7].

3.4 Man in the Middle Attack
In this attack an attacker intercepts valid frames and then intentionally resends the frames to the target system. A wireless-specific variation of the man-in-the-middle attack is placing a rogue access point within range of wireless stations. If the attacker knows the SSID in use by the network, he can gain information such as key information, authentication requests and so on. It may involve spoofing an IP address, changing a MAC address to emulate another host, or some other type of modification.

3.5 Denial of Service Attack
In a denial-of-service attack an attacker may try to prevent an authentic user from using a service. This may be done by flooding a network, disrupting connections between two machines, etc. The common method is to flood a network with degenerate or faulty packets and hence deny access to the legitimate traffic.

3.6 Physical Tampering
In this attack the physical device may be tampered with and sensitive information can be extracted from it.

4 General Link layer Attacks

4.1 Unencrypted Management Communication
Almost all IEEE 802.16 management messages are still sent unencrypted. The IEEE 802.16-2004 standard does not provide any capability to encrypt management messages.

4.2 Masquerading threat
By intercepting the management messages an attacker can use the hardware address of another registered device. Once this is successful, an attacker can turn a Base Station (BS) into a Rogue Base Station [7]. A rogue WiMAX base station pretends to be a valid base station and then drops or eliminates all or some of the packets.

4.3 Threat due to Initial Network Entry
In IEEE 802.16j-2009 no integrity protection for management messages can be provided in the case of multicast transmissions and in the case of initial network entry by a new candidate node [8].

5 General Network layer Attacks
The main network-layer operations in MANETs are ad hoc routing and data packet forwarding, which interact with each other and fulfill the functionality of delivering packets from the source to the destination. Ad hoc routing protocols exchange routing messages between nodes and maintain routing states at each node accordingly. Data packets are forwarded by intermediate nodes along an established route to the destination based on the routing states. Both routing and packet forwarding operations are vulnerable to malicious attacks, leading to various types of malfunction in the network layer. Network-layer vulnerabilities generally fall into one of two categories: routing attacks and packet forwarding attacks, based on the target operation of the attack [9].
In the context of DSR [11], the attacker may modify the source route listed in the RREQ or RREP packets by deleting a node from the list, switching the order of nodes in the list, or appending a new node to the list [12]. If AODV [10] is used, the attacker may advertise a route with a smaller distance metric than its actual distance to the destination, or advertise routing updates with a large sequence number and invalidate all the routing updates from other nodes [13]. By attacking the routing protocols, the attackers can pull traffic toward certain destinations into the nodes under their control, and cause the packets to be forwarded along a route that is not optimal or that may even be nonexistent. The attackers can also create routing loops in the network [9].

Attacks may also be launched against the packet forwarding operation in addition to routing attacks. These attacks do not disrupt the routing protocol or poison the routing states at each node; instead, they cause the data packets to be delivered in a way that is intentionally inconsistent with the routing states. The attacker may drop or modify the contents of the packets, or duplicate packets it has already forwarded. An attacker may also inject a large amount of bogus packets into the network, which wastes a significant portion of the network resources and introduces severe wireless channel contention and network congestion in the MANET [9].

Routing protocols for ad-hoc networks are based on the assumption that intermediate nodes do not maliciously change the protocol fields of messages passed between nodes. This assumed trust permits malicious nodes to easily generate traffic subversion and denial of service (DoS) attacks. Attacks using modification are generally targeted against the integrity of routing computations, and so by modifying routing information an attacker can cause network traffic to be dropped, redirected to a different destination, or sent along a longer route to the destination, increasing communication delays. Forged routing packets may be sent to other nodes to divert traffic to the attacker or to some other node. The intention is to create a black hole by routing all packets to the attacker and then discarding them. As an extension of the black hole, an attacker could build a grey hole, in which it intentionally drops some packets but not others, for example forwarding routing packets but not data packets. A more subtle type of modification attack is the creation of a tunnel (or wormhole) in the network between two colluding malicious nodes linked through a private network connection [6]. A brief description of these attacks follows.

5.1 Blackhole Attack
In networking, black holes refer to places in the network where incoming traffic is silently discarded (or "dropped"), hiding from the source the information that the data did not reach its intended recipient. The black holes themselves are invisible, and can only be detected by monitoring the lost traffic. An attacker creates forged packets to impersonate a valid mesh node and subsequently drops packets [14].

Fig 2. Blackhole Attack (S: source, N1 and N2: nodes, BH: blackhole, D: destination). Source: [22]

The properties of the Blackhole attack are that, firstly, the Blackhole node exploits the ad hoc routing protocol, such as AODV, to advertise itself as having a valid route to a destination node, even though the route is false, with the intention of intercepting packets. Secondly, the packets are consumed by the Blackhole node. Thirdly, Blackhole nodes can conduct coordinated attacks.

5.2 Greyhole Attack
A Grey Hole is a node that can change from behaving correctly to behaving like a black hole so as to avoid detection. Some researchers have discussed and proposed a solution to the black hole attack by disabling the ability of intermediate nodes to reply with a Route Reply (RREP), allowing only the destination to reply [15].

5.3 Wormhole Attack
In a wormhole attack, an attacker forwards packets through an out-of-band high quality link and replays those packets at another location in the network [16].
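To make the route-selection rule that the blackhole and modification attacks above exploit more concrete, the sketch below is an illustrative, simplified model (not taken from the paper or from any AODV implementation): it assumes the usual AODV preference for the highest destination sequence number with hop count as a tie-breaker, which is what a forged advertisement abuses.

from dataclasses import dataclass
from typing import Optional

# Hedged illustration: a simplified AODV-style route update rule.  A blackhole
# node abuses it by claiming a forged, very large sequence number and a short
# route, so all traffic to the destination is attracted to it.
@dataclass
class RouteAd:
    next_hop: str
    dest_seq: int          # destination sequence number claimed by the sender
    hop_count: int

def better(new: RouteAd, cur: Optional[RouteAd]) -> bool:
    """True if `new` should replace the current route under the simplified rule."""
    if cur is None:
        return True
    if new.dest_seq != cur.dest_seq:
        return new.dest_seq > cur.dest_seq
    return new.hop_count < cur.hop_count

route: Optional[RouteAd] = None
for ad in (RouteAd("N1", dest_seq=12, hop_count=3),     # honest advertisement
           RouteAd("BH", dest_seq=9999, hop_count=1)):  # forged blackhole RREP
    if better(ad, route):
        route = ad

print("traffic to D is forwarded via", route.next_hop)   # -> via BH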
Abstract - Production and enhancement of hydrogen on a large scale is a goal towards the revolution of green and cheap energy. Utilization of hydrogen energy has many attractive features, including energy renewability, flexibility, and zero greenhouse gas emissions. In this research the production and enhancement of hydrogen from NaHCO3 mixed water have been investigated under the action of a diode pumped solid state laser with second harmonic of wavelength 532 nm. The efficiency of the hydrogen and oxygen yields was found to be greater than the normal faradaic efficiency. The parametric dependence of the yields as a function of laser irradiation time, laser focusing effect and other parameters of the electrolysis fundamentals was carefully studied.
Index Terms - Photo catalysis, Electrolysis of water, Hydrogen, Laser interaction, Electrical signals, Oxygen.
1. INTRODUCTION

... scale for the production of ammonia, for refining petroleum and also for refining different metals such as uranium, copper, zinc, tungsten and lead. The main source of energy on earth is fossil fuels, which cause severe pollution and cannot last for long-term use. Nuclear energy is very expensive and has disposal problems. Other sources such as tidal and wind schemes are not sufficient. The solar, thermal and hydro energy sources are feasible but require a lot of capital. An alternative source is water, which is a cheap, clean and everlasting source of global energy.

Hydrogen gas can easily be obtained by electrolysis. However, direct decomposition of water is very difficult under normal conditions; the pyrolysis reaction occurs at high temperatures above 3700 C.1) Anomalous hydrogen generation during plasma electrolysis was already reported.2-5) Access hydrogen generation by laser induced plasma electrolysis was reported recently.6-9)

Water in the liquid state has an extremely high absorption coefficient at a wavelength of 2.9 um.10) The effect of generation of an electric signal, when IR-laser radiation having a power density below the plasma formation threshold interacts with a water surface, was discovered in.11) The electrical signals induced by lasers were already reported.12,13) A lot of research has been done on photocatalytic hydrogen production. The photocatalytic splitting of water using semiconductors has been widely studied. Many scientists produce hydrogen from water by using different photocatalysts in water and have reported hydrogen generation by the interaction of lasers.14-18) In addition, photolysis of water has been studied using UV light.19) Solar energy has been used to obtain hydrogen from water by photocatalysis.

Our work on lasers has revealed the important parameters which play a critical role in the enhancement of hydrogen from water by laser. Most of the research work based on photocatalysis has been carried out with flash lamps; very little work has been done with lasers.21) Since laser light has special properties, being monochromatic, coherent, intense and polarized, it was of great interest to use laser beams as an excitation source in water. The second point is that most of the work has been done on light water, distilled water and heavy water; we have used drinking water for the production of hydrogen, with a NaHCO3 electrolyte. The diode pumped solid state laser having green light of wavelength 532 nm was used as the irradiation source. We investigated the different parameters of the laser by monitoring the rate of the evolved gases, i.e. hydrogen and oxygen. We inspected the dependence of the hydrogen and oxygen yields on laser exposure time, the effect of laser beam power and the laser focusing effect.

2. Experimental Setup

A schematic diagram of the hydrogen reactor is shown in Figure 2. The reactor contained a glass-made hydrogen fuel cell having dimensions 10 inch x 8 inch. The fuel cell contained a window for irradiation by the laser, an inlet for water and electrolyte, two outlets for hydrogen and oxygen gases, an inlet for a temperature probe and a D.C. power supply model ED-345B. Two electrodes, steel and aluminum, were fitted in the fuel cell. A CCD camera and a computer triggered with the fuel cell for frame grabbing, a multimeter and a gas flow meter were arranged with the fuel cell. The diode pumped solid state laser with second harmonic, DPSS LYDPG-1 model DPG-2000, having green light of wavelength 532 nm, was placed near the fuel cell for irradiation during electrolysis.
Figure 2: Schematic diagram of the hydrogen reactor (fuel cell with laser window, O2 and H2 outlets, CCD camera, power meter, DC power supply and PC).

Reaction Mechanism

H2O + hv --(electrolyte)--> H2 + 1/2 O2                      (1)

The energy deposited to the water is

E = VIt + hv                                                  (2)

The criterion for splitting water is E >= Ed, with

E - Ed = K_H + K_O                                            (3)
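As a rough illustration of the faradaic baseline against which the measured yields are later compared, the sketch below computes the hydrogen volume expected from the deposited charge alone. It is a hedged example: the current, run time and measured yield used here are illustrative values, not data from the paper; only the V, I, t symbols follow equation (2).

# Hedged sketch: theoretical (faradaic) hydrogen volume for a given charge,
# the baseline used when quoting efficiencies greater than the faradaic one.
F = 96485.0          # Faraday constant, C/mol
Vm = 24.0e3          # approximate molar gas volume near room temperature, cm^3/mol

def faradaic_h2_cc(current_A, time_s):
    """H2 volume (cm^3) expected from electrolysis alone: 2 electrons per H2."""
    charge = current_A * time_s              # total charge in coulombs
    mol_h2 = charge / (2 * F)                # moles of H2 from Faraday's law
    return mol_h2 * Vm

measured_cc = 0.30                            # illustrative measured yield
expected_cc = faradaic_h2_cc(current_A=0.05, time_s=600)
print(f"faradaic baseline = {expected_cc:.3f} cc, "
      f"apparent efficiency = {100 * measured_cc / expected_cc:.0f}%")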
3.3 Effect of Temperature

The other important factor which affected the ... and the efficiency of the hydrogen and oxygen yields. It was observed that the efficiency of the yields increased rapidly after one minute of the run and reached 95%. After that the efficiency slightly decreased to 90% and maintained this value throughout the run of the experiment. This efficiency was found to be greater than the normal faradaic efficiency.

Figure 8: A graph of laser exposure time versus efficiency of yields (H2 and O2 yields in cc against time in min).

3 CONCLUSION

The experimental results revealed that the diode pumped solid state laser with second harmonic, having green light of wavelength 532 nm, is highly efficient in photo-splitting water into hydrogen and oxygen during plasma electrolysis of NaHCO3 water. The laser power, the focusing effect and the temperature of the water have a significant role in the enhancement of hydrogen production by photolysis of water.

REFERENCES
[1] T. Mizuno, T. Ohmori and A. Akimoto, "Generation of Heat and Products during Plasma Electrolysis," Proc. ICCF-10, 2003.
[2] T. Ohmori and T. Mizuno, "Strong Excess Energy Evolution, New Elements Production, and Electromagnetic Wave and/or Neutron Emission in the Light Water Electrolysis with a Tungsten Cathode," Proc. ICCF-7, Vancouver, Canada, 279, 2000.
[3] Tadahiko Mizuno, Tadayoshi Ohmori, Tadashi Akimoto and Akito Takahashi, "Production of Heat during Plasma Electrolysis in Liquid," J. Appl. Phys., vol. 39, no. 10, 2000.
[4] N.N. Il'ichev, L.A. Kulevsky and P.P. Pashinin, "Photovoltaic effect in water induced by a 2.92-um Cr3+:Yb3+:Ho3+:YSGG laser," Quantum Electronics, 35(10), 959-961, 2005.
[5] R. Mills, M. Nansteel and P. Ray, "Water bath calorimetric study of excess heat generation in resonant transfer plasmas," Plasma Physics, 69, 131, 2003.
[6] Muhammad Shahid, Noriah Bidin and Yacoob Mat, "Access Hydrogen production by photolysis of K2CO3 mixed water," Proc. FSPGS2010.
[11] "... ," Quantum Electronics, 39(2), 179-184, 2009.
[12] Daniela Bertuccelli and Hector F. Ranea-Sandoval, "Perturbations of Conduction in Liquids by Pulsed Laser-Generated Plasma," IEEE Journal of Quantum Electronics, vol. 37, no. 7, July 2001.
[13] Miyuki Ikeda, "Photocatalytic hydrogen production enhanced by laser ablation in water-methanol mixture containing titanium(IV) oxide and graphite silica," Catalysis Communications, 9, 1329-1333, 2008.
[14] Byeong Sub Kwak, "Enhanced hydrogen production from methanol/water photo-splitting in TiO2 including Pd component," Bull. Korean Chem. Soc., vol. 30, no. 5, 2009.
[15] M.A. Gondal, "Production of hydrogen and oxygen by water splitting using laser induced photo-catalysis over Fe2O3," Applied Catalysis A: General, 286, 159-167, 2004.
[16] C.A. Sacchi, "Laser-induced electric breakdown in water," J. Opt. Soc. Amer. B, vol. 8, pp. 337-345, 1991.
[17] A. De Giacomo et al., "Spectroscopic investigation of laser-water interaction beyond the breakdown threshold energy," CNR-IMIP Sec. Bari, via Amendola 122/D, 70126 Bari, Italy, 2007.
[18] F. Andrew Frame, Elizabeth C. Carroll, Delmar S. Larsen, Michael Sarahan, Nigel D. Browning and Frank E. Osterloh, "First demonstration of CdSe as a photocatalyst for hydrogen evolution from water under UV and visible light," DOI: 10.1039/b718796c, Berkeley, CA, USA, 2008.
[19] Kestutis Juodkazis, Jurga Juodkazyte, "Photoelectrolysis of water: Solar hydrogen achievements and perspectives," Optical Society of America, 2010.
[20] S. Sino, T.A. Yamamoto and R. Fujimoto, "Hydrogen evolution from water dispersing nanoparticles irradiated with gamma rays: size effect and dose rate effect," Scripta Mater., 44, 1709-1712, 2001.
[21] Kai Zeng and Dongke Zhang, "Recent progress in alkaline water electrolysis for hydrogen production and applications," Progress in Energy and Combustion Science, 2009.
Abstract - This paper describes how to construct the peripheral interface controller (PIC) based display unit of a remote display system. The remote display system can be used to display a token number in order to inform people, and it is intended for use in clinics, hospitals, banks, etc. In this research, the peripheral interface controller based remote display system is used for displaying numbers and characters. The remote display system consists of two portions: the display unit and the console unit. The display unit of the remote display system contains the display controller, three seven-segment light emitting diodes (LEDs), a diode matrix, category display LEDs and a DSUB9 connector. The display controller is built around the microcontroller PIC16F873. It controls the display of the token numbers, and it also controls the diode matrix to display three kinds of characters: A, B and C. The three seven-segment LEDs display token numbers from one to 999. The diode matrix drives the category display LEDs, which display one kind of character at a time; in this research work the category display LEDs can display only three kinds of characters. For the display unit, the DSUB9 connector accepts the data that comes from the console unit of the remote display system. In this research work the display unit works as the receiver and the console unit works as the transmitter in the remote display system. This paper explains the design, construction, testing and results of the remote display system.
Index Terms Diode Matrix, Display Unit, Light Emitting Diode, Peripheral Interface Controller, Remote Display System.
1 INTRODUCTION

... are two sets of displays, a small one kept inside the cabin of the person controlling it, and a large one kept outside for visitors. There is a keyboard for the teller to enter token numbers. Large-size plasma devices and TTL chips are used in constructing bank token displays [2]. In a digital bank token number display system, the display device circuit is used to control both common cathode LED and common anode plasma displays. A well regulated transistorized circuit provides power to the TTL chips as well as to the high voltage plasma displays.
... the area which stores a category display scan position. ddisp-p stores a seven-segment LED display scan position. rcv-p is the area which counts the received data. And r-category is the work area used for the conversion of the received category data. When the PIC is powered on, instruction execution starts from address zero of the program memory. When there is interruption processing, processing begins from address 4, and each processing routine is then reached with a GOTO instruction. The initialization processing is done after turning on. In this processing, all ports of PORTA are set to output mode. All ports of PORTB are also set to output mode and are used for the segment control of the seven-segment LED. RC0, RC1, RC2 and RC3 are set to output mode for the scan of the category display. The RC7 pin of the PIC16F873 is the only one in input mode, and it receives the data transferred from the console unit.

For the USART, most of the registers concerning the setting of the USART are in bank 1; therefore it is necessary to be careful with the bank designation. Asynchronous serial communication with a transmission speed of 9600 bps is configured. The receiving interruption occurs when data is received in the receiving buffer. When the initialization of the USART ends, the receiving operation is immediately enabled. When the initialization of the interruption starts, the Global Interrupt Enable bit (GIE), the Peripheral Interrupt Enable bit (PEIE) and the Timer0 Overflow Interrupt Flag bit (T0IF) are set. To use the interruption of transmission complete, PEIE must be set. The initialization processing is then ended, and the program waits for the interruption only.

As the main processing, the program repeats the execution of the same address. In the interruption process subroutine, two kinds of interruptions are used: the interruption on the time-out of TMR0 and the interruption when data is received. The interruption of TMR0 is identified by the T0IF bit of the INTCON register and the data receive interruption is identified by the RCIF bit of the PIR1 register. The RETFIE instruction is executed at the end of the interruption processing, which restores the interrupt-enabled condition.

3.3 Software Flowchart and Processing of LED Control

The interruption occurs every two milliseconds with TMR0. The interruption flag T0IF should be cleared first; if the T0IF bit is not cleared, the interruption occurs without waiting for the desired time. Therefore, setting the timer value of TMR0 is needed. For the control of the category display, one row is handled every time it interrupts in the two milliseconds. The specification of the row is done by PORTC. As for the seven-segment LED control, a three digit figure is displayed by the seven-segment LEDs, and one digit is controlled every two milliseconds. The digit specification of the seven-segments is done by RA0, RA1 and RA2, and RA3, RA4, RA5 of the PIC16F873 are used for the category display specification. For displaying the seven-segment LEDs, RA0, RA1, RA2 must be rewritten while saving this value.

The following flowchart shows the LED control process.

Fig. 4. Software flowchart for the subroutine of the LED control process.

Therefore, the data is made 0 except for the category specification by using the AND: with the AND, the result becomes 1 only when both bits are 1, so if a bit of the fixed value in the instruction is 1 the contents do not change, and when a bit of the fixed value is 0 the result always becomes 0. The digit specification data for the seven-segment LED is then set on RA0-2. For this, the OR calculation is used: when either of the bits is 1, the result becomes 1, so if a bit of the fixed value in the instruction is 0 the contents do not change, and when a bit of the fixed value is 1 the result always becomes 1. When the digit specification is finished, the segment data to display is written to PORTB. The routine reads the content of the received data area of each of the digits and writes it to PORTB. Making the writing processing to PORTB common reduces the number of processing steps.

3.4 Software Flowchart for Data Receive Process

The following flowchart is the software flowchart for the subroutine of the data receive process.
Fig. 5. Software flowchart for the subroutine of the data receive process: (a) checking the overrun error and frame error; (b) checking the category and LED data.

The data receive process handles the data received from the console unit. This process is started by the data receive interruption through the Receive Interrupt Flag (RCIF) bit of the PIR1 register. Unlike the other interruption flag bits, it is not necessary to clear the RCIF bit by software; it is cleared by the hardware when the data received in RCREG is read. When data is received, the overrun error must be checked first. If an overrun condition occurs, the Continuous Receive Enable (CREN) bit of the Receive Status and Control Register (RCSTA) must be cleared. Then the CREN bit is set again and the received data is cleared. When an overrun occurs, it stops the receiver and the receiver must be started once again; this process must always be done when the overrun occurs. After that, all display data is cleared so that it can be seen that the overrun occurred.

After the overrun error is checked, the frame error is checked. The frame runs from the start bit to the stop bit; when a stop bit is not detected after detecting a start bit, a frame error occurs. If a normal frame is received, the error passes away. When the frame error occurs, only the display is cleared. If the frame error occurs, the Framing Error (FERR) bit of the Receive Status and Control Register (RCSTA) is set. Then the category data, 100th data, 10th data and 1st data are cleared, the received position is also cleared, and the interrupt is ended.

After the frame error is checked, the start data is checked. If the frame error does not occur, the start process may take place. The category, 100th, 10th and 1st data are sent sequentially from the console unit to the display unit. Information showing the kind of data isn't included in each data byte; the kind of data is decided by the order in which it is received. The console unit sends the start data first and then transmits the category, 100th, 10th and 1st data continuously in that order. In the receiving process, the display unit waits for the start data first. If the start data is received, it is read and the received position is set. Then the received position is incremented and the interrupt is ended. When data other than the start data arrives in the start data receiving position, the data is canceled and the receiving position is not incremented.

The category data is received after the start data. After the start data is checked, the category data is checked. The category data sent from the console unit is sent in the form used in the console unit; the bit configuration and the bit contents of the data sent from the console unit and used by the display unit are different. In the received data, bits four to six show the category: bit six indicates A, bit five indicates B, and bit four indicates C, with 0 indicating lighting-up and 1 indicating going-out.

If the category data is received, it is checked to be A first. In the display unit, bits three to five indicate the category: bit three corresponds to A, bit four corresponds to B and bit five corresponds to C, and 0 indicates going-out while 1 indicates lighting-up. The difference is due to the difference in the hardware of each unit. In the number data receive process, the 100th data, the 10th data and the 1st data are received in that order following the category data. The form used by the console unit and the form used by the display unit are different for these data as well: the bit positions are the same, but the meaning of 0 and 1 is opposite. In the console unit, 0 is lighting-up and 1 is going-out; in the display unit, 0 is going-out and 1 is lighting-up.
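The bit-level translation just described can be summarized in a small software model. The sketch below is illustrative only and is not the PIC firmware: it assumes the receive order (start, category, 100th, 10th, 1st) described in the text, inverts the number data with an exclusive OR against 0xFF, and remaps the category bits from console positions 4-6 to display positions 3-5 with the polarity flipped.

# Illustrative model of the display unit's receive translation (not the PIC code).
# Number data: console 0 = lit, display 1 = lit, so bytes are inverted via XOR 0xFF.
# Category data: console bits 6/5/4 mark A/B/C with 0 = lit; the display expects
# bits 3/4/5 with 1 = lit.
CONSOLE_CATEGORY_BITS = {"A": 6, "B": 5, "C": 4}
DISPLAY_CATEGORY_BITS = {"A": 3, "B": 4, "C": 5}

def translate_number(byte: int) -> int:
    """Invert the 0/1 meaning of a 100th/10th/1st data byte."""
    return byte ^ 0xFF

def translate_category(byte: int) -> int:
    """Remap lit categories from console bit positions to display bit positions."""
    out = 0
    for name, cbit in CONSOLE_CATEGORY_BITS.items():
        lit = ((byte >> cbit) & 1) == 0            # console: 0 means lighting-up
        if lit:
            out |= 1 << DISPLAY_CATEGORY_BITS[name]   # display: 1 means lighting-up
    return out

# Frame order assumed from the text: start, category, 100th, 10th, 1st
frame = {"category": 0b1011111, "100th": 0b01001111, "10th": 0xFF, "1st": 0xF0}
print(bin(translate_number(frame["100th"])))   # -> 0b10110000
print(bin(translate_category(frame["category"])))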
The Exclusive OR is used to reverse 0 and 1. With the Exclusive OR, 0 is output when the two values are the same and 1 is output when the values are different; the result is therefore reversed when calculating the Exclusive OR with 1. When the received data is 01001111, the result of the Exclusive OR with 11111111 is 10110000, which is the value with 0 and 1 of the received data reversed. The translated value is written in the storage area which corresponds to each of the digits.

... frame would be received, being normal. The receiving processing has the possibility of making a mistake in the kind of data. When the 100th data is being received ... it receives the start data as the 10th data and the category data as the 1st data. Because it enters the start data waiting condition when the 1st data is received, the 100th data, 10th data and 1st data are canceled, and so the normal receiving is restored.

... console unit of the remote display system. The PIC16F873 controls the three seven-segment LEDs and the diode matrix.

... the PIC16F873. Fig. 6 shows the circuit diagram of the display unit of the remote display system. The design of the display unit of the remote display system includes the following circuits: the seven-segment LED control circuit, the category display control circuit, the RS232C control circuit, the PIC oscillator circuit and the power circuit.

Fig. 6. Circuit diagram of the display unit of the remote display system (PIC16F873, 74HC154 decoder, ADM232AAN RS232C driver, 78L05 regulator, transistor drivers and LED1-LED3).
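Before the circuit details that follow, a small software model may make the multiplexed drive concrete. The sketch below is illustrative only, not the firmware: it mimics the 2 ms TMR0 tick described in section 3.3, selecting one digit per tick on RA0-RA2 and writing that digit's segment pattern to PORTB; the gfedcba segment table is a standard common-cathode encoding assumed for illustration.

import itertools

# Illustrative model of the 2 ms multiplexed scan (not the PIC firmware).
SEGMENTS = {"0": 0x3F, "1": 0x06, "2": 0x5B, "3": 0x4F, "4": 0x66,
            "5": 0x6D, "6": 0x7D, "7": 0x07, "8": 0x7F, "9": 0x6F}

def scan(token: str, ticks: int = 6):
    """Yield (elapsed ms, PORTA digit-select bits, PORTB segment bits) per tick."""
    digits = token.rjust(3, "0")                   # e.g. "42" -> "042"
    for tick, pos in zip(range(ticks), itertools.cycle(range(3))):
        porta = 1 << pos                           # RA0, RA1, RA2 select one digit
        portb = SEGMENTS[digits[pos]]              # segment data for that digit
        yield tick * 2, porta, portb

for ms, porta, portb in scan("42"):
    print(f"t={ms:2d} ms  PORTA=0b{porta:03b}  PORTB=0x{portb:02X}")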
... the terminal which applies the + voltage is common to all the segments. Because the LED selecting circuit is placed on the + voltage side, a PNP type transistor is used for the control. If an NPN-type transistor were used for this circuit, the emitter of the transistor would be connected to the anode terminal of the LED, and in that case the control of the base current becomes difficult.

Fig. 7. Schematic diagram of the 7-segment LED control circuit.

TR27 is used for the output of the PIC, because the voltage which can be applied to the I/O port of the PIC is limited to +5V. To turn TR26 ON, its base voltage is made less than +12V (about 11V), and the base current flows. To turn TR26 OFF, the base voltage is made +12V, so about +12V is applied to the base of TR26. The base of TR26 cannot be driven directly by the PIC. When RA0 of the PIC is made 0V (0 condition), TR27 becomes OFF; therefore the base current of TR26 does not flow and TR26 becomes OFF too, so LED1 does not light up. When RA0 of the PIC is made +5V (1 condition), the base current flows through TR27 and TR27 becomes ON. When TR27 becomes ON, current flows through the base of TR26 and TR26 turns ON. By this, +12V is applied to the anode of the LED and the LED is put in the condition in which it can light up.

The segment selecting circuit drives a lit segment. The segment selecting circuit is placed between the LED and ground, so the control transistor can be driven directly by the PIC. In the figure, the circuit for the "a" segment is drawn; the other segment control circuits are similar. When RB6 of the PIC is made 0V (0 condition), TR32 becomes OFF; therefore no current flows through the "a" segment of the LED and the "a" segment does not light up. The base current flows through the base of TR32 when RB6 of the PIC is +5V (1 condition); with this, current flows through the "a" segment and the "a" segment lights up. In this research a common cathode seven-segment LED is used, so the LED selecting circuit uses only one NPN transistor for each seven-segment LED. For the segment selecting circuit, one PNP transistor and one NPN transistor are used. In the LED selecting circuit, that transistor is directly controlled by PORTA of the PIC: if the PORTA pin is 0V (0 condition), the transistor is off and the LED does not light up; when the PORTA pin is 5V (1 condition), the base current flows through the transistor, current flows through the LED and the LED lights up.

In the segment selecting circuit, the NPN transistor turns off when the PORTB pin of the PIC is at 0V (0 condition), so the base current of the PNP transistor does not flow and the PNP transistor is off; no current flows through the segment of the seven-segment LED and the segment does not light up. If the PORTB pin of the PIC is at 5V (1 condition), the NPN transistor turns on, current flows through the base of the PNP transistor and that transistor turns on; current then flows through the segment and the segment lights up. The PNP transistor is not driven directly by the PIC.

4.2.2 Category Display Control Circuit
In the display unit, the category is displayed with the diode matrix. Fig. 8 shows the category display control circuit, which consists of the row selecting circuit and the LED selecting circuit. A circuit like that of the seven segments is used for the drive circuit of the category display. A category character is a single character but it is displayed by the LED matrix. The LED matrix is composed of 11 lines in the horizontal direction (rows) and 13 lines in the vertical direction (columns). Lighting-up control is done row by row. The row selecting circuit passes a current to the LEDs of the row specified by the PIC. Because there are 11 rows, direct control by the PIC is difficult as it would consume too many I/O ports; a control signal is expanded from a four-bit signal of the PIC to 11 signals by the decoder IC. The operation of the row selecting circuit is similar to the case of the seven segments.

Fig. 8. Schematic diagram of the category display control circuit.

The LED selecting circuit drives a lit LED in the selected row. Three kinds of characters are displayed with this circuit, and the kind of character is controlled by RA3, RA4 and RA5 of the PIC. The LED which ...
Fig. 13. Testing the receiving data at both the console unit and the display unit of the remote display system.

Fig. 14 shows the display portion of the remote display system.

Fig. 14. The display unit with the data receiving process.

The supply voltage of the display unit is supplied from the console unit of the remote display system. Fig. 15 shows the testing of the received voltage level at the display unit.

4 CONCLUSION

This research describes the display unit of the remote display system. The peripheral interface controller based remote display system can be used to inform people. In this project, the display unit is constructed with a simple, versatile and useful LED array to view alphabetic and numeric characters. In this research, three kinds of characters and three-digit numbers are displayed, with the PIC16F873 controlling the numbers and the characters. The display unit of the peripheral interface controller based remote display system is designed and constructed using devices which can be bought easily in the market, are easy to use and are familiar to a person interested in the electronics field. The system is designed for English capital letters and numbers. This thesis is not only applicable as an educational aid for beginners to learn the design, programming and development of applications which use the PIC, but also applicable for displaying numbers in many fields such as clinics, banks and financial institutions, and interviewing committees calling candidates.

As a further extension, the remote display system in this research used a 15 m cable length between the console unit and the display unit; for longer distances, other kinds of RS232 interface can be used. For a large display area, the largest size of seven-segment LED can be used so that the resolution of the display is smooth. An LCD could be used instead of LEDs. In this research, the digit display part of the display unit lights up one digit at a time to limit the current; the digits could instead be lit at the same time by holding the display data in a latch register (74LS273) placed between the peripheral interface controller and the segment selecting circuit. The remote display system can be widely used in many fields that need to give information to users or partners. To develop the display technique and devices, people should ...
ACKNOWLEDGMENT
Firstly the author would like to thank her parents for their best wishes on joining the Ph.D. research. The author would like to express her heartfelt gratitude to Daw Atar Mon for her leadership and advice, to Daw Khin Sandar Tun for guidance in her research, and to U Tun Tun Win for suggestions on how to design the remote token display system. The author also greatly expresses her thanks to all persons concerned for their support in preparing this paper and her research.
Abstract An algorithm is a set of instruction patterns given in an analytical process of any program/function-ale to achieve desired results. It is a model-programmed action leading to a desired reaction. A neural network is a self-learning mining-model algorithm, which aligns/learns relative to the logic applied in the initiation of the primary codes of the network. Neural network models are among the most suitable models in any management system, be it business forecasting or weather forecasting. The paper emphasizes not only the design and functioning of neural network models but also the prediction errors in the network associated with every step in the design and function-ale process.
[Figure: model of a single artificial neuron. Input signals x1, x2, ..., xn with weights w1, w2, ..., wn feed a summing junction with a threshold; the net input passes through an activation function to give the output, Out = 1/(1 + e^-Net) for the sigmoid function.]
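As a minimal sketch of the neuron in the figure above, the following Python code computes the weighted net input and passes it through the sigmoid activation; the weights, threshold and inputs are illustrative values, not taken from the paper.

```python
import math

def sigmoid(net: float) -> float:
    # Out = 1 / (1 + e^(-Net)), the bounded activation shown in the figure
    return 1.0 / (1.0 + math.exp(-net))

def neuron_output(inputs, weights, threshold):
    # Summing junction: Net = sum(w_i * x_i) - threshold
    net = sum(w * x for w, x in zip(weights, inputs)) - threshold
    return sigmoid(net)

# Example with illustrative values
print(neuron_output(inputs=[0.5, 0.2, 0.9],
                    weights=[0.4, -0.7, 0.3],
                    threshold=0.1))
```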
The activation function is thus the logical foundation of a neural network; otherwise the network would have been just a plain mathematical algorithm without logical application. For feedback or feed-forward learning of the network, the activation function should be differentiable, as this helps in most of the learning curves for training a neural network, such as the bounded sigmoid function, the logistic tanh function with positive and negative values, or the Gaussian function. Almost all of these nonlinear activation function conditions assure better numerical conditioning and induced learning.
Networks with threshold limits but without activation functions are difficult to train, as they are programmed for a step-wise constant rise or fall of weight, whereas a sigmoid activation function with threshold limits makes a small change in the input weight produce a change in the output, and also makes it possible to predict whether the change in the input weight is good or bad.
In activation function training, numerical condition is one of the most fundamental and important concepts of the algorithm; it is very important that the activation function of a network algorithm is a predefined numeric condition. Numerical condition affects the speed and accuracy of most numerical algorithms, and it is especially important in the study of neural networks because ill-conditioning is a common cause of slow and inaccurate results from many network algorithms. Numerical condition is mostly decided by the condition number of the input value, which for a neural network is the ratio of the largest and smallest eigenvalues of the Hessian matrix. The eigenvalues of the inputs are the squares of the singular values of the primary input, and the Hessian matrix is the matrix of second-order partial derivatives of the error function with respect to the weights and biases.

4 MODEL NEURAL NETWORK FOR A SAMPLE SIZE OF 20 INDIVIDUALS TO TEST RISK TO DIABETES
Type 2 diabetes is a non-neonatal kind of diabetes that develops in the later stages of life due to varied reasons, and is far more severe and chronic than type 1 (neonatal) diabetes in its destructive metabolic effects on the body. An associated gland in our body called the pancreas synthesizes a hormone called insulin, required to metabolize, break down and capture glucose for every cell of the body to synthesize energy in the form of the energy-rich molecule ATP (adenosine triphosphate). Metabolism of glucose is an indispensable process for the body, as this metabolism generates energy for the cell at the micro level and for the body at the macro level. There are varied factors responsible for the onset of this metabolic disorder called diabetes; control of some of these factors could induce synthesis of insulin in the correct amount and at the required times, so that the body learns to metabolize efficiently even with an inefficient or weak pancreas. Ageing is one of the prime factors for all metabolic disorders, and so it goes for diabetes: our pancreas ages right along with us and does not pump adequate levels of insulin as efficiently as it did when we were younger. Also, as our cells age, they become more resistant to insulin (the carrier of glucose for the cells) as well. The modern sedentary lifestyle is damaging our health and is a prime responsible factor for growing obesity problems; being obese or overweight is one of the prime factors for an increasing level of glucose in the blood, causing diabetes. Obesity increases fat cells in the body, and fat cells lack insulin or glucose receptors compared to muscle cells; thus increasing fat in the cells is an indirect call to diabetes, and exercising and reducing fat in the cells can act as an alert to stay away from this disorder. Eating less fat and enough fibre and complex carbohydrates, compared to simple carbohydrates, could contribute to reducing the risk of diabetes. A survey of 20 individuals for risk of diabetes was conducted using only 4 non-clinical parameters. A supervised network was built and trained to give an output equivalent to the associated risk of diabetes, with a predefined threshold limit to reach a safe non-diabetic level, with the maximum iterations possible at the activation function to reach the output, which of course was not fixed but was predefined to reach zero risk of diabetes.

Sample data collected

No  Age  Stress level / Kind of work  BMI  Obesity
1   30   Stressful labour             30   Over weight
2   24   Sedentary work               29   Over weight
3   23   Sedentary work               32   Obese
4   35   Sedentary work               40   Highly obese
5   40   Minimum work                 33   Obese
6   45   Minimum work                 34   Obese
7   43   Minimum work                 22   Fit
8   50   Minimum work                 26   Slightly over weight
9   55   Maximum work                 20   Fit
10  58   Maximum work                 18   Fit
11  37   Maximum work                 19   Fit
12  38   Minimum work                 30   Obese
13  27   Maximum work                 25   Fit
14  29   Stressful labour             17   Fit
15  30   Stressful labour             18   Fit
16  34   Sedentary work               20   Fit
17  38   Sedentary work               22   Fit
18  32   Sedentary work               21   Fit
19  42   Maximum work                 20   Fit
20  43   Maximum work                 18   Fit
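A minimal sketch of the kind of supervised network described above is given below in Python. The numeric encoding of the work/stress category and the obesity class, the risk labels and the single-layer architecture are assumptions for illustration only; they are not the paper's actual model or data labels.

```python
import numpy as np

# Illustrative encoding of the non-clinical parameters (assumed, not from the paper):
# work/stress: Sedentary=0, Minimum=1, Stressful labour=2, Maximum=3
# obesity:     Fit=0, Slightly over weight=1, Over weight=2, Obese=3, Highly obese=4
X = np.array([[30, 2, 30, 2],   # rows follow the sample table above
              [24, 0, 29, 2],
              [35, 0, 40, 4],
              [55, 3, 20, 0],
              [29, 2, 17, 0],
              [38, 1, 30, 3]], dtype=float)
y = np.array([1, 1, 1, 0, 0, 1], dtype=float)   # assumed risk labels (1 = at risk)

# Standardize inputs (helps the numerical conditioning discussed earlier)
X = (X - X.mean(axis=0)) / X.std(axis=0)

rng = np.random.default_rng(0)
w, b = rng.normal(scale=0.1, size=X.shape[1]), 0.0
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):                      # simple gradient-descent training loop
    out = sigmoid(X @ w + b)               # network output = risk estimate in (0, 1)
    grad = out - y                         # gradient of cross-entropy w.r.t. the net input
    w -= 0.1 * X.T @ grad / len(y)
    b -= 0.1 * grad.mean()

print(np.round(sigmoid(X @ w + b), 2))     # predicted risk, thresholded e.g. at 0.5
```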
great difference in the numeric functions; such an issue can probably be minimized by subtracting the mean of each input variable from that variable.
2. If the variances among the input variables differ measurably, the problem can be cured by dividing each input variable by its standard deviation.
3. High correlations among input variables. This problem can be cured by ortho-normalizing the input variables using Gram-Schmidt, SVD or principal components. The orthogonal components in this case should be standardized before being taken as input values. Single orthogonal components which fail standardization should be removed from the network, as they cannot be trained or accepted in the network; training them over a long process may cause network failure. (A sketch of these preprocessing steps is given after this list.)
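The following is a minimal Python sketch of the preprocessing cures listed above, under the assumption of a generic input matrix X (one row per sample); the condition-number check via eigenvalues mirrors the ill-conditioning discussion earlier in the paper.

```python
import numpy as np

def precondition_inputs(X: np.ndarray):
    """Center, scale and ortho-normalize the input variables (columns of X)."""
    X = X - X.mean(axis=0)                 # 1. subtract the mean of each input variable
    X = X / X.std(axis=0)                  # 2. divide each variable by its standard deviation
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    Z = U * np.sqrt(len(X))                # 3. orthonormal (decorrelated) components, re-standardized
    return Z, s

def condition_number(X: np.ndarray) -> float:
    """Ratio of the largest to the smallest eigenvalue of X^T X (squares of singular values)."""
    eig = np.linalg.eigvalsh(X.T @ X)
    return eig.max() / eig.min()

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    raw = rng.normal(size=(20, 4)) * [1, 100, 0.01, 10] + [0, 50, 5, -3]  # badly scaled inputs
    print("before:", round(condition_number(raw), 1))
    cleaned, _ = precondition_inputs(raw)
    print("after: ", round(condition_number(cleaned), 1))
```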
Abstract Data aggregation is a very crucial technique in wireless sensor networks, because with the help of data aggregation we reduce the energy consumption by eliminating redundancy. Wireless sensor networks are often deployed in remote areas or hostile environments, and the most challenging issue in a wireless sensor network is its lifetime; with the help of data aggregation we can enhance the lifetime of the network. In this paper we discuss data aggregation approaches based on the routing protocols and algorithms in wireless sensor networks, and also discuss the advantages, disadvantages and various performance measures of data aggregation in the network.
Index Terms: Wireless sensor network, data aggregation, architecture, network lifetime, routing, tree, cluster, base station.
1 INTRODUCTION
Kiran Maraiya is currently pursuing the master's degree program in Computer Science and Engineering at NIT Hamirpur, India, PH +91 9318583266. E-mail: [email protected]
Kamal Kant is a Lecturer at ASET, Amity University, Noida (U.P.), India, PH +91 9718281158. E-mail: [email protected]
Nitin Gupta is an Assistant Professor at NIT Hamirpur, India, PH +91 1972254416. E-mail: [email protected]
2. CLUSTERING IN WSN
Sensor nodes are densely deployed in a wireless sensor network, which means that the physical environment produces very similar data in nearby sensor nodes, and transmitting such data is more or less redundant. All these facts encourage using some kind of grouping of sensor nodes, such that a group of sensor nodes can combine or compress data together and transmit only compact data. This can reduce localized traffic in an individual group and also reduce global data. This grouping process of sensor nodes in a densely deployed large-scale sensor network is
known as clustering. The way of combining and compressing the data belonging to a single cluster is called data fusion (aggregation).
Issues of clustering in a wireless sensor network:
1. How many sensor nodes should be taken in a single cluster, and the selection procedure of the cluster head in an individual cluster.
2. Heterogeneity in a network: the user can put some powerful nodes, in terms of energy, into the network which can behave as cluster heads, while the simple nodes in a cluster work as cluster members only.
Many protocols and algorithms have been proposed which deal with each individual issue.
3. DATA AGGREGATION
In typical wireless sensor networks, sensor nodes are usually resource-constrained and battery-limited. In order to save resources and energy, data must be aggregated to avoid overwhelming amounts of traffic in the network, and there has been extensive work on data aggregation schemes in sensor networks. The aim of data aggregation is to eliminate redundant data transmission and enhance the energy lifetime of the wireless sensor network. Data aggregation is the process in which one or several sensors collect the detection results from other sensors; the collected data must be processed by the sensor to reduce the transmission burden before it is transmitted to the base station or sink. The wireless sensor network consists of three types of nodes: simple regular sensor nodes, aggregator nodes and the querier. Regular sensor nodes sense data packets from the environment and send them to the aggregator nodes; these aggregator nodes collect data from multiple sensor nodes of the network, aggregate the data packets using some aggregation function like sum, average, count, max or min, and then send the aggregated result to an upper aggregator node or to the querier node who generated the query.
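As a minimal sketch of the aggregator-node behaviour just described, the Python code below applies a chosen aggregation function to the readings received from child sensor nodes before forwarding a single value; the structure and function names are illustrative assumptions, not a protocol from the paper.

```python
from statistics import mean

# Aggregation functions an aggregator node might apply to its children's readings
AGGREGATES = {
    "sum": sum,
    "avg": mean,
    "count": len,
    "max": max,
    "min": min,
}

def aggregate_and_forward(child_readings: list[float], func_name: str) -> float:
    """Collapse many readings into one value, so only a single packet goes upstream."""
    return AGGREGATES[func_name](child_readings)

if __name__ == "__main__":
    readings = [21.4, 21.6, 21.5, 21.5]          # correlated temperature readings
    for name in ("sum", "avg", "count", "max", "min"):
        print(name, aggregate_and_forward(readings, name))
```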
The aggregated result is then sent towards the base station. For example, in some circumstances a node receives two data packets which contain correlated data; in this condition it is useless to send both data packets, so we apply a function like MAX, AVG or MIN and again send a single data packet to the base station. With the help of this approach we reduce the number of bits transmitted in the network and also save a lot of energy. In-network aggregation without size reduction is defined as the process of merging data packets received from different neighbours into a single data packet, but without processing the value of the data. This process also reduces energy consumption and increases the lifetime of the network.

3.1 Advantage and Disadvantage of Data aggregation in wireless sensor network
Advantage: With the help of the data aggregation process we can enhance the robustness and accuracy of the information obtained by the entire network; certain redundancy exists in the data collected from sensor nodes, so data fusion processing is needed to reduce the redundant information. Another advantage is that it reduces the traffic load and conserves the energy of the sensors.
Disadvantage: The cluster heads, i.e. the data aggregator nodes, send the fused data to the base station, and this cluster head or aggregator node may be attacked by a malicious attacker. If a cluster head is compromised, then the base station (sink) cannot ensure the correctness of the aggregate data that has been sent to it. Another drawback is that in existing systems several copies of the aggregate result may be sent to the base station (sink) by uncompromised nodes, which increases the power consumed at these nodes.

3.2 Performance measure of data aggregation
There are several important performance measures of data fusion algorithms, and these measures are highly dependent on the desired application.
Energy efficiency: With a data-aggregation scheme we can increase the functionality of the wireless sensor network, in which every sensor node should spend the same amount of energy in every data-gathering round. A data-aggregation scheme is energy efficient if it maximizes the functionality of the network. Network lifetime, data accuracy and latency are some of the significant performance measures of data-aggregation algorithms; the definitions of these measures are highly dependent on the desired application.
Network lifetime: The network lifetime is defined as the number of data fusion rounds until a specified percentage of the total nodes dies, where the percentage depends on the application. For applications in which the simultaneous working of all the sensor nodes is crucial, the lifetime of the network is the number of rounds until the first node dies; improving the energy efficiency of the nodes therefore enhances the lifetime of the whole network.
Latency: Latency is a measure of the time delay experienced by the system, i.e. between data being sent by the sensor nodes and received by the base station (sink); it basically covers the delay involved in data transmission, routing and data aggregation.
Communication overhead: It evaluates the communication complexity of the in-network fusion algorithm.
Data accuracy: It is a measure of the ratio of the total number of readings received at the base station (sink) to the total number generated. There are different types of data-aggregation protocols, such as network-architecture-based data-aggregation protocols, network-flow-based data-aggregation protocols, and quality-of-service (QoS)-aware data-aggregation protocols designed to guarantee QoS metrics. Here the network-architecture-based protocols are described in detail.

3.3 Impact of data aggregation in wireless sensor network
In this paper we discuss the two main factors that affect the performance of data aggregation methods in a wireless sensor network, namely energy saving and delay. Data aggregation is the process in which the data packets coming from different sources are aggregated, so the number of transmissions is reduced; with the help of this process we can save energy in the network. Delay is the latency associated with aggregation: data from closer sources may have to be held back at intermediate nodes in order to combine it with data from sources that are farther away. Basically the aggregation method depends on the position of the sources in the network, the number of sources and the network topology. To examine these factors, we consider two models of source placement: the event radius (ER) model and the random source model [14]. The modelling tells us that where the sources are clustered near each other or located randomly, significant energy gains are possible with data aggregation. These gains are greatest when the number of sources is large, and when the sources are located relatively close to each other and far from the base station. The modelling, though, also seems to suggest that the aggregation latency could be non-negligible.

4. DATA AGGREGATION APPROACHES IN WIRELESS SENSOR NETWORK
The data aggregation process is performed by a specific routing protocol. Our aim in aggregating data is to minimize the energy consumption, so sensor nodes should route packets based on the data packet content and choose the next hop in order to promote in-network aggregation. Basically a routing protocol is determined by the network structure, which is why the routing protocols are grouped according to the considered approaches.
[Table: data-aggregation protocols/algorithms classified by approach — Tree, Cluster, Multipath or Hybrid; TAG, for example, is a tree-based approach.]
Data representation is the effective way to represent the data. A wireless sensor network consists of a large number of small sensor nodes. These are resource constrained, and due to the limited resources each node needs to decide whether to store, compress, discard or transmit
data. All these requirements call for a suitable way to represent the information, with a structure common to all sensor nodes in the network [14].

7. SECURITY ISSUES IN DATA AGGREGATION FOR WIRELESS SENSOR NETWORK
Two types of security are required for data aggregation in a wireless sensor network: confidentiality and integrity. The basic security issue is data confidentiality, which protects sensitive data in transmission from passive attacks such as eavesdropping; in a hostile environment data confidentiality is especially important because the wireless channel is vulnerable to eavesdropping, and it is provided by cryptographic methods, which involve complicated encryption and decryption operations such as modular multiplication. The second security issue is data integrity: with the help of integrity we prevent compromised sensor source nodes or aggregator nodes from significantly altering the final aggregation value. A sensor node in a sensor network is easily compromised, and compromised nodes have the capability to modify or discard messages.
Methods of secure data aggregation: there are two types of methods for securing data, hop-by-hop encryption and end-to-end encryption; both methods follow some steps (a sketch of the hop-by-hop flow is given after this list).
1. The encryption process has to be done by the sensing nodes in the wireless sensor network.
2. The decryption process has to be done by the aggregator nodes.
3. After that, the aggregator nodes aggregate the result and then encrypt the result again.
4. The sink node gets the final aggregated result and decrypts it again.
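Below is a minimal Python sketch of the four hop-by-hop steps above; the XOR-based encrypt/decrypt helpers are deliberately toy stand-ins for a real cipher, and the keys and node roles are illustrative assumptions only.

```python
# Toy hop-by-hop secure aggregation flow (illustrative only; XOR is NOT a real cipher).
def encrypt(value: int, key: int) -> int:
    return value ^ key

def decrypt(cipher: int, key: int) -> int:
    return cipher ^ key

sensing_key, sink_key = 0x5A, 0x3C            # assumed pairwise keys

# 1. Sensing nodes encrypt their readings before sending them to the aggregator.
readings = [21, 22, 21, 23]
ciphertexts = [encrypt(r, sensing_key) for r in readings]

# 2. The aggregator decrypts the incoming packets ...
plain = [decrypt(c, sensing_key) for c in ciphertexts]

# 3. ... aggregates the result (here: average) and re-encrypts it for the next hop.
aggregate = sum(plain) // len(plain)
forwarded = encrypt(aggregate, sink_key)

# 4. The sink decrypts the final aggregated result.
print("sink receives:", decrypt(forwarded, sink_key))
```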
8. CONCLUSION
In this paper we have presented the wireless sensor network as consisting of a large number of sensor nodes, and these nodes are resource constrained; that is why the lifetime of the network is limited, and various approaches and protocols have been proposed for increasing the lifetime of the wireless sensor network. In this paper we discussed data aggregation as one of the important techniques for enhancing the lifetime of the network, discussed the various approaches to data aggregation, and also discussed their advantages and disadvantages and various performance measures of data aggregation.
ACKNOWLEDGMENT
This work was supported in part by a grant from NIT Hamirpur (Himachal Pradesh).
Abstract The Slantlet Transform (SLT) is a recently developed multiresolution technique especially well-suited for piecewise linear data. The Slantlet transform is an orthogonal Discrete Wavelet Transform (DWT) with two zero moments and with improved time localization. It also retains the basic characteristics of the usual filterbank, such as the octave-band characteristic and a scale-dilation factor of two. However, the Slantlet transform is based on the principle of designing different filters for different scales, unlike iterated filterbank approaches to the DWT. In the proposed system, the Slantlet transform is implemented and used for compression and de-noising of various input images. The performance of the Slantlet Transform in terms of Compression Ratio (CR), Reconstruction Ratio (RR) and Peak Signal-to-Noise Ratio (PSNR) of the reconstructed images is evaluated. Simulation results are discussed to demonstrate the effectiveness of the proposed method.
Index Terms: Discrete Wavelet Transform, compression ratio, data compression, peak signal-to-noise ratio (PSNR), coding, inter-pixel, Slantlet coefficients, choppy images.
1 INTRODUCTION
Lei Zhang, P. Bao and Xiaolin Wu [8] proposed a wavelet-based multiscale linear minimum mean-square-error estimation (LMMSE) scheme for image denoising and discussed the determination of the optimal wavelet basis with respect to the proposed scheme. The overcomplete wavelet expansion (OWE), which is more effective than the orthogonal wavelet transform (OWT) in noise reduction, is used.

3. The Slantlet Transform
The Slantlet filterbank is an orthogonal filter bank for the discrete wavelet transform, where the filters are of shorter support than those of the iterated D2 filterbank while retaining the basic characteristics of the usual DWT filterbank. The Slantlet filter bank, shown in Fig 3.1, is generalized as follows. The l-scale filter bank has 2^l channels; the low-pass channel is followed by downsampling by 2^l. Each filter gi(n) appears together with its time reverse, while hi(n) does not appear with its time reverse; it always appears paired with the filter fi(n). In addition, note that the l-scale and (l + 1)-scale filterbanks have in common the filters gi(n) for i = 1, ..., l-1 and their time-reversed versions. The Slantlet filterbank analyzes scale i with the filter gi(n) of length 2^(i+1). The characteristics of the Slantlet filterbank are:
1. Each filterbank is orthogonal. The filters in the synthesis filter bank are obtained by time reversal of the analysis filters.
2. The scale-dilation factor is 2 for each filterbank.
3. Each filterbank provides a multiresolution decomposition.
4. The time localization is improved, with a degradation of frequency selectivity.
5. The Slantlet filters are piecewise linear.
[Fig 3.1. The two-scale Slantlet filterbank: channels F2(z), G1(z) and z^-3 G1(1/z), each followed by downsampling by 4.]
In the first stage of the encoding process the mapper transforms the input data into a format designed to reduce the interpixel redundancy of the input data. The decoder contains only two components: a symbol decoder and an inverse mapper. A percentage threshold (out of 100) is used: if a pixel in the image has intensity less than the threshold, it is set to 0. The PSNR may be given as PSNR = 20 log10(255 / rms(original data - de-noised data)).
1.2. Separate the odd and even moment vectors along the length of the input signal.
1.3. Since the filters are piecewise linear, each filter can be represented as the sum of a DC term and a linear term.
1.4. The DC and linear moments at scale i can be computed from the DC and linear moments at the next finer scale (i-1).
3.3. Using the DC and linear Slantlet coefficients, compute the DC and linear terms at scale l, with m = 2^l:
DC term: s(1)/(m) * s(2)*(3*(m-1)/(m*(m+1)));
linear term: s(2) * (-2*(3/(m*(m^2 - 1))))
3.4. Then compute the DC and linear terms for decreasing values of i by updating them using the Slantlet coefficients.
The image obtained after the 2-D Slantlet transform can be shown as in Figure 5.3, with subbands LL3, LH3, HL3 and HH3 at the coarsest scale, LH2, HL2 and HH2 at the next scale, and LH1, HL1 and HH1 at the finest scale.
Figure 5.3. Decomposed image after 2-D Slantlet Transform.
6.2. Algorithm for signal De-noising
The input image is added with noise. For this noisy image, apply the Slantlet transform. Steps 1 and 3 of Section 6.1 remain unchanged, but in step 2 a soft threshold is applied for de-noising. Soft thresholding is an extension of hard thresholding: first the elements whose absolute values are lower than the threshold are set to zero, and then the nonzero coefficients are shrunk towards 0. The output of step 3 of Section 6.1 is the de-noised version of the original input image.
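As a minimal Python sketch of the soft-thresholding step described in Section 6.2, the helper below zeroes coefficients below the threshold and shrinks the rest towards zero, and a PSNR helper is included for the evaluation described earlier; the coefficients and images used here are stand-ins, since the paper's Slantlet implementation itself is not reproduced.

```python
import numpy as np

def soft_threshold(coeffs: np.ndarray, thr: float) -> np.ndarray:
    """Set |c| < thr to zero, then shrink the remaining coefficients towards 0."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - thr, 0.0)

def psnr(original: np.ndarray, reconstructed: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two images of equal size."""
    rmse = np.sqrt(np.mean((original.astype(float) - reconstructed.astype(float)) ** 2))
    return 20.0 * np.log10(peak / rmse)

if __name__ == "__main__":
    # Stand-in transform coefficients: a few large ones plus small noisy ones.
    coeffs = np.array([80.0, -45.0, 12.0, 0.6, -0.9, 0.3, 1.1, -0.4])
    print(soft_threshold(coeffs, thr=2.0))   # small coefficients are zeroed, large ones shrink by 2

    original = np.full((8, 8), 128.0)
    noisy = original + np.random.default_rng(0).normal(scale=5.0, size=(8, 8))
    print(f"PSNR of the noisy image: {psnr(original, noisy):.1f} dB")
```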
The Compression Ratio (CR) and Reconstruction Ratio (RR) for the different input images are tabulated in Table 6.1 and Table 6.2, and the graphs are plotted against the various threshold values and the ratios. For the various threshold values, the percentage signal-to-noise ratio for the different images is tabulated in Tables 6.3 and 6.4, and the graphs are plotted against the various threshold values and the PSNR.
[Tables 6.1/6.2: % CR and % RR for the Cameraman, MRI and Testpart images at threshold values of 20, 500 and 5000.]
Figure 7.1. Graph for Cameraman (% CR and % RR versus threshold).
7.2. Simulation results of de-noising
[Graph: PSNR for the MRI and Testpart images versus threshold values 0.05, 0.5, 5, 10, 20 and 50.]
8. Snapshots
Figure 8.6. (a) Input image (Testpart) (b) Noisy image (c) De-noised image.
9. Conclusion
The Slantlet transform orthogonal filterbank uses filters of shorter length compared to the iterated 2-scale DWT filterbank; the number of filters increases according to the order. The Slantlet transform gives a better compression result for piecewise linear data. The Slantlet transform has been applied for compression and de-noising of various images. The MATLAB programs for the Slantlet transform applied to compression and de-noising were developed, and the computer simulation was carried out for some test images. It is observed that as the threshold level increases, a better compression ratio and PSNR can be achieved for the test data.
Acknowledgements
Bibliography
1. ... , May 1999.
2. David L. Donoho, Smooth wavelet decompositions with blocky coefficient kernels, May 1993.
3. P. P. Vaidyanathan, Multirate Systems and Filter Banks, Prentice Hall Inc., Englewood Cliffs, New Jersey, 1993.
4. M. Lang, H. Guo, et al., Noise reduction using an undecimated discrete wavelet transform, IEEE Signal Processing, vol. 3, no. 1, Jan. 1996.
5. Ali M. Reza, From Fourier Transform to Wavelet Transform, white paper, October 27, 1999.
6. Paolo Zatelli, Andrea Antonello, New GRASS modules for multiresolution analysis with wavelets, Proceedings of the Open Source GIS - GRASS Users Conference, September 2002.
7. Kenneth Lee, Sinhye Park, et al., Wavelet-based split.
Abstract Many commercial applications provide customer services over the web, such as flight tracking, emergency notification and order inquiry. VoiceXML is an enabling technology for creating streamlined speech-based interfaces for such web-based information services. In computing, aspect-oriented programming (AOP) is a programming paradigm which aims to increase modularity; AOP includes programming methods and tools that support the modularization of concerns at the level of the source code. The aim of this paper is to integrate AOP with VoiceXML. Aspect-Oriented Programming encapsulates common low-level scattered code within reusable components called aspects. There are certain tags in VoiceXML, like <nomatch>, <noinput> and <error>, which appear commonly in every VoiceXML document. These tags can be considered as concerns and can be put inside an aspect. This eliminates the need to programmatically write these tags in every VoiceXML document and modularizes the crosscutting concerns.
1 INTRODUCTION
accessible via both the Public Switched Telephone Network and the Internet. Users may access wiki content via fixed or mobile phones or via a PC using a web browser or a Voice over IP service. Silog is a biometric authentication system that extends the conventional PC logon process using voice verification [2]. SeeCCT is a prototype of a multimodal social networking system designed for sharing geographical bookmarks [3]. In [4], a VoiceXML portal is developed that allows people with a mobile or ordinary phone to get informed about cultural activities by dialing a phone number and interacting with a computer via voice. Domain-specific dialogs are created in native languages, viz. Slavic languages, using VoiceXML [5]. HearSay is a non-visual Web browser developed for visually impaired users [6].
AspectJ is one of the oldest and best-known aspect languages, and it helped to bring AOP to the mainstream. AspectJ is an extension of the Java language which defines a special syntax for declaring aspects. The first versions of AspectJ featured compile-time source code weaving and bytecode weaving; it later merged with AspectWerkz, which brought load-time weaving as well as AspectWerkz's annotation style to the language. AOP is used in a variety of fields. Aspect-oriented software development has played an important role in the design and implementation of PUMA, a framework for the development of applications that analyze and transform C or C++ source code [7]. A framework for middleware design has been invented which is based on the Concurrent Event-based Aspect-Oriented paradigm [8]. A fully dynamic and reconfigurable monitoring system has been designed based on the concept of Adaptable Aspect-Oriented Programming (AAOP), in which a set of AOP aspects is used to run an application in a manner specified by the adaptability strategy [9]; the model can be used to implement systems that are able to monitor an application and its execution environment and perform actions such as changing the current set of resource management constraints applied to an application if the application/environment conditions change. An aspect-oriented approach is advocated as an improvement to the object-oriented approach in dealing with the issues of code tangling and scattering in the case of multilevel security [10].

Sukhada Bhingarkar is with the Computer Engineering Dept., MIT College of Engineering, Kothrud, Pune, India 411038. E-mail: [email protected]

3 VOICEXML
VoiceXML is the HTML of the Voice Web. It is the W3C's standard XML format for specifying interactive voice dialogues between a human and a computer. VoiceXML is a markup language for creating voice user interfaces. It uses Automatic Speech Recognition (ASR) and/or touchtone (DTMF keypad) for input, and prerecorded audio and text-to-speech (TTS) synthesis for output. Numerous commercial vendors such as IBM, TellMe and BeVocal provide voice browsers that can be used to play VoiceXML documents. Current voice interfaces to the web are of two types: voice interface to a screen display, and voice-only interface. The dynamic voice interface presented in this paper is a voice-only interface that uses a telephone as the input and output device. Fig. 1 shows the core architecture of VoiceXML applications.
Fig. 1. Architectural Model of VoiceXML
A document server processes requests from a client application, the VoiceXML Interpreter, through the VoiceXML interpreter context. The server produces VoiceXML documents in reply, which are processed by the VoiceXML Interpreter. The VoiceXML interpreter context may monitor user inputs in parallel with the VoiceXML interpreter. The implementation platform is controlled by the VoiceXML interpreter context and by the VoiceXML interpreter.

4 ASPECT ORIENTED PROGRAMMING (AOP)
The main idea of AOP is to isolate the cross-cutting concerns from the application code, thereby modularizing them as a separate entity. A cross-cutting concern is behavior, and often data, that is used across the scope of a piece of software. It may be a constraint that is a characteristic of your software, or simply behavior that every class must perform. The most common example of a cross-cutting concern is logging: logging is a cross-cutting concern because it affects many areas across the software system and it intrudes on the business logic. Logging is potentially applied across many classes, and it is this form of horizontal application of the logging aspect that gives cross-cutting its name. Some central AOP concepts are as follows:
Aspects: A modularization of a concern that cuts across multiple objects. Transaction management is a good example of a crosscutting concern in J2EE applications.
Figure 3 shows a simple business logic class along with an aspect applied to this class, and Fig. 4 shows a VoiceXML example. The commonly occurring tags in VoiceXML can be considered as crosscutting concerns and are encapsulated in a special class, i.e. an aspect. Figure 5 shows the Servlet that is used to construct VoiceXML.
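The idea of keeping the common <nomatch>, <noinput> and <error> tags in one reusable place can be illustrated outside of Java as well. The Python sketch below is only an analogy to the aspect/Servlet combination of Figures 3-5: a helper injects the shared error-handling markup into any dialog body, so individual documents never repeat it. The function names and the dialog content are assumptions for illustration, not the paper's code.

```python
# Illustrative analogy (not the paper's Servlet/AspectJ code): keep the common
# VoiceXML error-handling tags in one reusable "aspect-like" helper.
COMMON_HANDLERS = """\
  <nomatch>Sorry, I did not understand that.</nomatch>
  <noinput>I did not hear anything. Please try again.</noinput>
  <error>An error has occurred. Goodbye.</error>"""

def weave_vxml(form_body: str) -> str:
    """Return a complete VoiceXML document with the shared handlers woven in."""
    return (
        '<?xml version="1.0"?>\n'
        '<vxml version="2.0">\n'
        f"{COMMON_HANDLERS}\n"
        "  <form>\n"
        f"{form_body}\n"
        "  </form>\n"
        "</vxml>"
    )

if __name__ == "__main__":
    body = '    <block><prompt>Welcome to the order inquiry service.</prompt></block>'
    print(weave_vxml(body))
```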
6 CONCLUSION
VoiceXML is a language to create voice-user interfaces
while AOP allows us to dynamically modify our static
model to include the code required to fulfill the second-
ary requirements without having to modify the original
static model. Integration of AOP with VoiceXML better
separates the concerns of Voice based applications, there-
by providing modularization. This helps developers to
concentrate on core logic and promotes code reuse.
REFERENCES
[1] Constantinos Kolias et al, Design and implementation of a
VoiceXML-driven wiki application for assistive environments
on the web, Personal and Ubiquitous Computing, 2010,
Volume 14, Number 6, 527-539, Springer-Verlag London Li-
mited 2010
[2] Sergio Grau, Tony Allen, Nasser Sherkat, Silog: Speech input
logon, Knowledge-Based Systems, Volume 22, Issue 7, October
2009, Pages 535-539
Abstract This paper proposes a method for computing the error of approximation involved in the evaluation of integrals of a single variable. The error associated with a quadrature rule provides information about the quality of the approximation. In numerical integration, any function is approximated by a polynomial of suitable degree, but the two differ when integrated over some interval; this difference is the error of approximation. Sometimes it is difficult to evaluate an integral by analytical methods, and numerical integration (numerical quadrature) can be an alternative approach to solve such problems. As with other numerical techniques, it often results in an approximate solution. The integration can be performed on a continuous function or on a set of data.
Index Terms: Quadrature rule, Simpson's rule, Chebyshev polynomials, approximation, interpolation, error.
1 INTRODUCTION
In numerical integration the definite integral

$\int_a^b f(x)\,dx$    (1)

is approximated by a finite sum of weighted function values $f(x_i)$, $i = 0, 1, \ldots, n$. In the methodology for computing the antiderivative at a given point, the polynomial $p(x)$ approximating the function $f(x)$ generally oscillates about the function. This means that if $y = p(x)$ overestimates the function in one interval, then it will underestimate it in the next interval [5]. As a result, while the area is overestimated in one interval, it may be underestimated in the next, so that the overall effect of the error in the two intervals will not be equal to the sum of their moduli; instead, the effect of the error in one interval will be neutralized to some extent by the error in the next interval. Therefore the estimated error in an integration formula may be unrealistically high. In view of the above facts, the paper reveals the types of approximation following the condition of best approximation.

Rajesh Kumar Sinha is with the Department of Mathematics, NIT Patna, India. E-mail: [email protected]
Satya Narayan Mahto is with the Department of Mathematics, M. G. College, LNMU, Darbhanga, India.
Dhananjay Sharan is a research scholar in the Department of Mathematics, NIT Patna, India.

2 PROPOSED METHOD
2.1 Reflection on Approximation
This section covers the types of approximation following the condition of best approximation for a given function, concentrating mainly on polynomial approximation. For this approximation a polynomial of first degree, $y = a + bx$, is considered as a good approximation to a given continuous function on the interval (0, 1). Under this assumption the two following statements may be considered:
The Taylor polynomial at $x = 0$ (assuming $f'(0)$ exists):

$y = f(0) + x f'(0)$    (2)

The interpolating polynomial constructed at $x = 0$ and $x = 1$:

$y = f(0) + x[f(1) - f(0)]$    (3)

A justification may be made that a Taylor or interpolating polynomial constructed at some other point would be more suitable. However, these approximations are designed to imitate the behavior of f at only one or two points.
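A short Python check of equations (2) and (3) on the interval (0, 1) is given below: it compares the Taylor and interpolating linear approximations of a sample function and the resulting estimates of the integral. The choice f(x) = e^x is an illustrative assumption, not an example from the paper.

```python
import math

f = math.exp                                     # illustrative test function on (0, 1)
exact = math.exp(1) - math.exp(0)                # exact value of the integral of f over (0, 1)

# Linear approximations y = a + b*x from the text; the integral over (0, 1) is a + b/2.
taylor_a, taylor_b = f(0), f(0)                  # eq. (2): f(0) + x f'(0), with f'(0) = e^0
interp_a, interp_b = f(0), f(1) - f(0)           # eq. (3): f(0) + x [f(1) - f(0)]

taylor_est = taylor_a + taylor_b / 2
interp_est = interp_a + interp_b / 2
print("exact integral      :", round(exact, 6))
print("Taylor (2) estimate :", round(taylor_est, 6), "error:", round(abs(exact - taylor_est), 6))
print("interp (3) estimate :", round(interp_est, 6), "error:", round(abs(exact - interp_est), 6))
```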
Thus equation (11) is known as the error that exists in the following statement: if (a, b) is any interval which contains the (n + 1) points $x, x_0, x_1, \ldots, x_n$, and it is further assumed that $f', f'', \ldots, f^{(n)}$ exist and are continuous, then the second term on the right of equation (12) becomes zero.
Differentiating both sides with respect to x,

$\frac{d}{dx} p_n(x) = \frac{ds}{dx}\,\frac{d}{ds}\, p_n(x_0 + sh)$    (19)

By putting $x = x_r$, the second term on the right of equation (22) becomes zero.

$\int_a^b (f(x) + g(x))\,dx = \int_a^b f(x)\,dx + \int_a^b g(x)\,dx$    (27)

$\sum_{i=0}^{n} (f(x_i) + g(x_i)) = \sum_{i=0}^{n} f(x_i) + \sum_{i=0}^{n} g(x_i)$    (28)

for each pair of integrable functions f and g and each pair of real constants. This implies that the degree of precision of a quadrature formula is n if and only if the error $E(p(x)) = 0$ for all polynomials $p(x)$ of degree $k = 0, 1, \ldots, n$, but $E(p(x)) \neq 0$ for some polynomial $p(x)$ of degree $n + 1$.

3 CONCLUSION
Increasing the degree of the approximating polynomial does not guarantee better accuracy. In a higher-degree polynomial the coefficients also get bigger, which may magnify the errors. Similarly, reducing the size of the sub-intervals by increasing their number may also lead to an accumulation of rounding errors. Therefore a balance should be kept between the two, i.e. degree of
polynomial and total number of intervals. These are
the primary motivations for studying the techniques of
numerical integration/quadrature [6]-[9]. In the case of Simpson's rule, the technique is applied individually to the subintervals $[a, (a+b)/2]$ and $[(a+b)/2, b]$; an error-estimation procedure is used to determine whether the approximation to the integral on each subinterval is within a tolerance of $\varepsilon/2$. If so, the approximations are summed to produce an approximation of the integral of $f(x)$ over the interval $[a, b]$ within the tolerance $\varepsilon$. If the approximation on one of the subintervals fails to be within the tolerance $\varepsilon/2$, then that subinterval is itself subdivided, and the procedure is reapplied to the two subintervals to determine whether the approximation on each subinterval is accurate to within $\varepsilon/4$. This halving procedure is continued until each portion is within the required tolerance.
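The halving procedure just described corresponds to the classical adaptive Simpson scheme; the Python sketch below is a generic implementation of that idea under the stated tolerance-halving rule, not code from the paper.

```python
def simpson(f, a: float, b: float) -> float:
    """Basic Simpson's rule on [a, b]."""
    m = (a + b) / 2.0
    return (b - a) / 6.0 * (f(a) + 4.0 * f(m) + f(b))

def adaptive_simpson(f, a: float, b: float, eps: float) -> float:
    """Recursively split [a, b]; accept a half once its estimate meets its share of eps."""
    m = (a + b) / 2.0
    whole = simpson(f, a, b)
    left, right = simpson(f, a, m), simpson(f, m, b)
    # Classical error estimate: the refined sum is accepted if it changed little.
    if abs(left + right - whole) <= 15.0 * eps:
        return left + right
    # Otherwise subdivide, asking each half for accuracy eps/2 (then eps/4, ...).
    return adaptive_simpson(f, a, m, eps / 2.0) + adaptive_simpson(f, m, b, eps / 2.0)

if __name__ == "__main__":
    import math
    approx = adaptive_simpson(math.sin, 0.0, math.pi, eps=1e-8)
    print(approx, "error:", abs(approx - 2.0))
```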
Thus, numerical analysis is the study of algorithms that use numerical approximation for problems involving continuous functions [10]-[12]. Numerical analysis continues this long tradition of practical mathematical calculation. Much like the Babylonian approximation, modern numerical analysis does not seek exact answers, since exact answers are often impossible to obtain in practice. Instead, much of numerical analysis is concerned with obtaining approximate solutions while maintaining reasonable bounds on the errors. It finds applications in all fields of engineering and the physical sciences.

REFERENCES
[1] K. E. Atkinson, An Introduction to Numerical Analysis, Wiley, New York, 1993.
[2] R. E. Beard, "Some notes on approximate product integration," J. Inst. Actur., vol. 73, pp. 356-416, 1947.
[3] C. T. H. Baker, "On the nature of certain quadrature formulas and their errors," SIAM J. Numer. Anal., vol. 5, pp. 783-804, 1968.
[4] P. J. Daniell, "Remainders in interpolation and quadrature formulae," Math. Gaz., vol. 24, pp. 238-244, 1940.
[5] R. K. Sinha, "Estimating error involved in case of evaluation of integrals of single variable," Int. J. Comp. Tech. Appl., vol. 2, no. 2, pp. 345-348, 2011.
[6] T. J. Akai, Applied Numerical Methods for Engineers, Wiley, New York, 1993.
[7] L. M. Delves, "The numerical evaluation of principal value integrals," Computer Journal, vol. 10, p. 389, 1968.
[8] Brian Bradie, A Friendly Introduction to Numerical Analysis, pp. 441-532, Pearson Education, 2009.
[9] R. K. Sinha, "Numerical method for evaluating the integrable function on a finite interval," Int. J. of Engineering Science and Technology, vol. 2, no. 6, pp. 2200-2206, 2010.
[10] C. E. Froberg, Introduction to Numerical Analysis, Addison-Wesley Pub. Co. Inc.
[11] Ibid., The Numerical Evaluation of Class Integrals, Proc. Camb. Phil. Soc. 52.
[12] P. J. Davis and P. Rabinowitz, Methods of Numerical Integration, 2nd edition, Academic Press, New York, 1984.
Abstract Accounting Information Systems (AIS), computer-based systems that process financial information and support decision tasks, have been implemented in most organizations, but they still encounter a lack of intelligence in their decision-making processes. Models and methods to evaluate and assess the intelligence level of Accounting Information Systems can be useful in deploying suitable business intelligence (BI) services. This paper discusses BI assessment criteria and the fundamental structure and factors used in the assessment model. The factors and the proposed model can assess the intelligence of Accounting Information Systems to achieve enhanced decision support in organizations. The statistical analysis identified five factors of the assessment model. This model helps organizations to design, buy and implement Accounting Information Systems for better decision support. The study also provides criteria to help organizations and software vendors implement AIS from a decision-support perspective.
Index Terms: Business intelligence; decision support; accounting information systems; assessment model.
1 INTRODUCTION
ling and assessment model. Finally, Section 6 concludes the research work, its findings and proposed future research.

2 ACCOUNTING INFORMATION SYSTEMS
An Accounting Information System (AIS) is defined as a computer-based system that processes financial information and supports decision tasks in the context of coordination and control of organizational activities. Extant accounting information systems research has evolved from the source disciplines of computer science, organizational theory and cognitive psychology. An advantage of this evolution is a diverse and rich literature with the potential for exploring many different interrelationships among technical, organizational and individual aspects of judgment and decision performance. AIS research also spans from the macro to the micro aspects of the information system (Birnberg & Shields, 1989; Gelinas et al., 2005).
The comparative advantage of accounting researchers within the study of IT lies in their institutional accounting knowledge. Systems researchers can contribute insights into the development of systems utilizing technology, and the other sub-areas can contribute insights into the task characteristics in the environment. For instance, systems researchers have extensively investigated group decision support systems (GDSS), but these have only recently been considered in auditing. On the other hand, auditing research has extensively investigated the role of knowledge and expertise. The merging of the two sets of findings may be relevant to AIS design, training, and use.
As a comparable term, management accounting systems (MAS) also are formal systems that provide information from the internal and external environment to managers (Bouwens & Abernethy, 2000). They include reports, performance measurement systems, computerized information systems such as executive information systems or management information systems, and also the planning, budgeting, and forecasting processes required to prepare and review management accounting information.
Research on management accounting and integrated information systems (IIS) has evolved across a number of different research streams. Some research streams put heavier emphasis on the management accounting side, while other research streams put emphasis on the information systems side. Likewise, different research streams approach the topic from different perspectives (Anders Rom & Carsten Rohde, 2007).
A major stream of research within AIS research deals with the modeling of accounting information systems. Several modeling techniques exist within the information systems literature (e.g. entity-relationship diagrams, flowcharts and data flow diagrams). Whereas these modeling techniques can be used when modeling accounting information systems (Gelinas et al., 2005), the REA modeling technique is particular to the AIS domain. The REA model, which maps resources, events and agents, was first described by McCarthy (1979, 1982) and later developed by an exclusive group of researchers (David et al., 2002). Extensions to resources, events and agents include locations (Denna et al., 1993), tasks and commitments (Geerts and McCarthy, 2002).
An unshakable stream of research exists within the AIS literature that investigates behavioral issues in relation to accounting information systems (Sutton and Arnold, 2002). This stream of research investigates the impact of IT on individuals, organizations and society. An example of behavioral AIS research is a study carried out by Arnold et al. (2004) on the use and effect of intelligent decision aids. The authors find that smart machines must be operated by smart people: if users are inexperienced, they will be negatively impacted by the system, and furthermore they will not learn by experience. Abernethy and Vagnoni (2004) found that top management uses the newly implemented system for monitoring. Use of AIS is found to have a positive effect on cost consciousness, but the cost consciousness is hampered if people have informal power; in this context, power is an explanatory variable of AIS use.

3 BUSINESS INTELLIGENCE
Business Intelligence, or BI, is a grand umbrella term introduced by Howard Dresner of the Gartner Group in 1989 to describe a set of concepts and methods to improve business decision-making by using fact-based computerized support systems (Nylund, 1999). The first scientific definition, by Ghoshal and Kim (1986), referred to BI as a management philosophy and tool that helps organizations to manage and refine business information for the purpose of making effective decisions.
BI was considered to be an instrument of analysis, providing automated decision-making about business conditions, sales, customer demand, product preference and so on. It uses huge-database (data warehouse) analysis, as well as mathematical, statistical, artificial intelligence, data mining and on-line analytical processing (OLAP) techniques (Berson and Smith, 1997). Eckerson (2005) understood that BI must be able to provide the following tools: production reporting tools, end-user query and reporting tools, OLAP, dashboard/screen tools, data mining tools, and planning and modelling tools.
BI includes a set of concepts, methods, and processes to improve business decisions, which use information from multiple sources and apply past experience to develop an exact understanding of business dynamics (Maria, 2005). It integrates the analysis of data with decision-analysis tools to provide the right information to the right persons throughout the organization, with the purpose of improving strategic and tactical decisions. A BI system is a data-driven DSS that primarily supports the querying of an historical database and the production of periodic summary reports (Power, 2008).
Lönnqvist and Pirttimäki (2006) stated that the term,
measure their attitude, based on the BI Assessment Criteria listed in Table 2. The selected response was evaluated by a Likert scale (Likert, 1974), and the responses could be: very strongly disagree, strongly disagree, disagree, no opinion, agree, strongly agree or very strongly agree. In other words, the second part of the questionnaire measures their opinions on the importance of each BI specification in terms of the intelligence assessment criteria of AIS. The main targets of the sampling were accounting managers who are involved in systems efforts and decision-making.

5.1 Reliability analysis
With reliability analysis, you can get an overall index of the repeatability or internal consistency of the measurement scale as a whole, and you can identify problem items that should be excluded from the scale. Cronbach's alpha is a model of internal consistency based on the average inter-item correlation. The Cronbach's alpha (Likert, 1974) calculated from the 34 variables of this research was 0.941 (94 per cent), which showed high reliability for the designed measurement scale.

5.2 Data collection
The research targets were accounting managers who were involved in systems efforts and decision-making in organizations. The number of questionnaires sent out was 210 and the number returned was 176, which showed a return rate of 83 per cent.

5.3 Demographic profiles of interviewees
The demographic profile of the interviewees who participated in the survey is summarized in Table 2. The results show that most of the members (87.5 per cent) are male. Most of the interviewees (88.7 per cent) have a Bachelor of Science (BS) or a higher degree, as shown in Table 2. On the subject of decision type, the majority of interviewees make semi-structured and unstructured decisions in their work. Table 2 also shows the seniority of the participants: as can be seen, 20.4 per cent have over 15 years of seniority, 36.4 per cent have 10-15 years, and 43.2 per cent have less than 10 years of seniority.
TABLE 2: DEMOGRAPHIC PROFILES OF INTERVIEWEES

5.4 Factor Extraction and Labeling
Factor analysis can be utilized to examine the underlying patterns or relationships for a large number of variables and to determine whether the information can be condensed or summarized in a smaller set of factors or components (Hair et al., 1998). An important tool in interpreting factors is factor rotation. The term rotation means exactly what it implies: specifically, the reference axes of the factors are turned about the origin until some other position has been reached. The un-rotated factor solutions extract factors in the order of their importance. The first factor tends to be a general factor with almost every variable loading significantly, and it accounts for the largest amount of variance; the second and subsequent factors are then based on the residual amount of variance. The ultimate effect of rotating the factor matrix is to redistribute the variance from earlier factors to later ones to achieve a simpler, theoretically more meaningful factor pattern. The simplest case of rotation is an orthogonal rotation, in which the axes are maintained at 90 degrees (Hair et al., 1998).
In order to determine whether the partial correlation of the variables is small, the Kaiser-Meyer-Olkin measure of sampling adequacy (Kaiser, 1958) and Bartlett's test of sphericity (Bartlett, 1950) were used before starting the factor analysis. The result was a KMO of 0.963 and a Bartlett test p-value of less than 0.05, which showed good correlation. The factor analysis method in this research is principal component analysis, which was developed by Hotelling (1935). The condition for selecting factors was based on the principle proposed by Kaiser (1958).
TABLE 3: ROTATED FACTOR ANALYSIS RESULTS
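As a minimal Python sketch of the reliability and factor-selection checks described above, the snippet below computes Cronbach's alpha from an item-response matrix and applies Kaiser's eigenvalue-greater-than-one rule to the correlation matrix; the survey data here are randomly generated stand-ins, not the study's responses.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x questionnaire-items matrix of Likert scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

def kaiser_factors(items: np.ndarray) -> int:
    """Number of factors whose correlation-matrix eigenvalues exceed 1 (Kaiser's rule)."""
    eigvals = np.linalg.eigvalsh(np.corrcoef(items, rowvar=False))
    return int((eigvals > 1.0).sum())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    latent = rng.normal(size=(176, 1))                      # one common trait, 176 respondents
    responses = latent + 0.7 * rng.normal(size=(176, 34))   # 34 Likert-style items (stand-ins)
    print("Cronbach's alpha:", round(cronbach_alpha(responses), 3))
    print("factors retained:", kaiser_factors(responses))
```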
The factors were rotated according to the Varimax rotation method. To indicate the meaning of the factors, they have been given short labels reflecting their content: Analytical Decision-support, Providing Integration with Environmental and Experimental Information, Optimization Model, Reasoning and, finally, Enhanced Decision-making Tools are the names assigned to the extracted factors.

6 CONCLUSION
Enterprise systems such as Accounting Information Systems (AIS) convert and store data, so it is important to integrate decision support into the environment of these systems. Business intelligence (BI) can be embedded in these enterprise systems to obtain competitive advantage.
This research confirmed the necessity of assessing the intelligence of Accounting Information Systems and demonstrated that this assessment can advance a decision-support environment. From a wide-ranging literature review, 23 criteria for BI assessment were gathered and embedded in the second part of the research. The interviewees selected the more important criteria from these 23 variables by assigning ranks to them. The research then applied factor analysis to extract five factors for evaluation: Analytical Decision-support, Providing Integration with Environmental and Experimental Information, Optimization Model, Reasoning and, finally, Enhanced Decision-making Tools. The authors believe that, with these results, organizations can make better decisions when designing, selecting, evaluating and buying Accounting Information Systems, using criteria that help them create a better decision-support environment in their work systems. Of course, further research is needed. One area is the design of expert systems (tools) to compare vendors' products. Another is the application of these criteria and factors in a Multi-Criteria Decision Making framework to select and rank AIS, financial and banking systems based on BI specifications. The complex relationship between the decision-making satisfaction of managers and these factors should also be addressed in future research.

REFERENCES
[1] Abernethy, M.A. and Vagnoni, E., 2004. Power, organization design and managerial behaviour. Accounting, Organizations and Society, 29(3/4), 207-225.
[2] Alter, S., 2004. A work system view of DSS in its fourth decade. Decision Support Systems, 38(3), 319-327.
[3] Rom, A. and Rohde, C., 2007. Management accounting and integrated information systems: A literature review. International Journal of Accounting Information Systems, 8, 40-68.
[4] Arnold, V., Collier, P.A., Leech, S.A. and Sutton, S.G., 2004. Impact of intelligent decision aids on expert and novice decision-makers' judgments. Accounting & Finance, 44(1), 1-26.
[5] Azadivar, F., Truong, T. and Jiao, Y., 2009. A decision support system for fisheries management using operations research and systems science approach. Expert Systems with Applications, 36, 2971-2978.
[6] Bartlett, M.S., 1950. Test of significance in factor analysis. British Journal of Psychology, 3, 77-85.
[7] Berson, A. and Smith, S., 1997. Data Warehousing, Data Mining, and OLAP. McGraw-Hill, New York, NY, USA.
[8] Birnberg, J.G. and Shields, J.F., 1989. Three decades of behavioral accounting research: A search for order. Behavioral Research in Accounting, 1, 23-74.
[9] Bolloju, N., Khalifa, M. and Turban, E., 2002. Integrating knowledge management into enterprise environments for the next generation decision support. Decision Support Systems, 33, 163-176.
[10] Bose, R., 2009. Advanced analytics: opportunities and challenges. Industrial Management & Data Systems, 109(2), 155-172.
[11] Bouwens, J. and Abernethy, M.A., 2000. The consequences of customization on management accounting system design. Accounting, Organizations and Society, 25(3), 221-241.
[12] Courtney, J.F., 2001. Decision making and knowledge management in inquiring organizations: toward a new decision-making paradigm for DSS. Decision Support Systems, 31, 17-38.
[13] Damart, S., Dias, L. and Mousseau, V., 2007. Supporting groups in sorting decisions: Methodology and use of a multi-criteria aggregation/disaggregation DSS. Decision Support Systems, 43, 1464-1475.
[14] David, J.S., Gerard, G.J. and McCarthy, W.E., 2002. Design science: an REA perspective on the future of AIS. In: Arnold, V. and Sutton, S.G. (Eds.), Researching Accounting as an Information Systems Discipline. American Accounting Association, Sarasota, FL, USA.
[15] Delorme, X., Gandibleux, X. and Rodríguez, J., 2009. Stability evaluation of a railway timetable at station level. European Journal of Operational Research, 195, 780-790.
[16] Denna, E.L., Cherrington, J.O., Andros, D.P. and Hollander, A.S., 1993. Event-Driven Business Solutions. Business One Irwin, Homewood, IL, USA.
[17] Eckerson, W., 2005. Performance Dashboards: Measuring, Monitoring, and Managing Your Business. Wiley Press.
[18] Elbashir, M., Collier, P. and Davern, M., 2008. Measuring the effects of business intelligence systems: The relationship between business process and organizational performance. International Journal of Accounting Information Systems, 9(3), 135-153.
[19] Eom, S., 1999. Decision support systems research: current state and trends. Industrial Management & Data Systems, 99(5), 213-220.
[20] Evers, M., 2008. An analysis of the requirements for DSS on integrated river basin management. Management of Environmental Quality: An International Journal, 19(1), 37-53.
[21] Fazlollahi, B. and Vahidov, R., 2001. Extending the effectiveness of simulation-based DSS through genetic algorithms. Information & Management, 39, 53-65.
[22] Feng, Y., Teng, T. and Tan, A., 2009. Modelling situation awareness for context-aware decision support. Expert Systems with Applications, 36, 455-463.
[23] Galasso, F. and Thierry, C., 2008. Design of cooperative processes in a customer-supplier relationship: An approach based on simulation and decision theory. Engineering Applications of Artificial Intelligence.
[24] Gao, S. and Xu, D., 2009. Conceptual modeling and development of an intelligent agent-assisted decision support system for anti-money laundering. Expert Systems with Applications, 36, 1493-1504.
[25] Geerts, G.L. and McCarthy, W.E., 2002. An ontological analysis of the economic primitives of the extended REA enterprise information architecture. International Journal of Accounting Information Systems, 3(1), 1-16.
[26] Gelinas, U.J., Sutton, S.G. and Hunton, J.E., 2005. Accounting Information Systems, 6th edition. South-Western, Thomson, OH, USA.
[28] Ghoshal, S. and Kim, S.K., 1986. Building effective intelligence systems for competitive advantage. Sloan Management Review, 28(1), 49-58.
[29] Gonnet, S., Henning, G. and Leone, H., 2007. A model for capturing and representing the engineering design process. Expert Systems with Applications, 33, 881-902.
[30] González, J.R., Pelta, D.A. and Masegosa, A.D., 2008. A framework for developing optimization-based decision support systems. Expert Systems with Applications.
[31] Gottschalk, P., 2006. Expert systems at stage IV of the knowledge management technology stage model: The case of police investigations. Expert Systems with Applications, 31, 617-628.
[32] Goul, M. and Corral, K., 2007. Enterprise model management and next generation decision support. Decision Support Systems, 43, 915-932.
[33] Güngör Şen, C., Baraçlı, H., Şen, S. and Başlıgil, H., 2008. An integrated decision support system dealing with qualitative and quantitative objectives for enterprise software selection. Expert Systems with Applications.
[34] Hair, J.F., Anderson, R.E., Tatham, R.L. and Black, W.C., 1998. Multivariate Data Analysis. Prentice-Hall, Upper Saddle River, NJ, 7-232.
[35] Hedgebeth, D., 2007. Data-driven decision making for the enterprise: an overview of business intelligence applications. The Journal of Information and Knowledge Management Systems, 37(4), 414-420.
[36] Hemsley-Brown, J., 2005. Using research to support management decision making within the field of education. Management Decision, 43(5), 691-705.
[37] Hewett, C., Quinn, P., Heathwaite, A.L., Doyle, A., Burke, S., Whitehead, P. and Lerner, D., 2009. A multi-scale framework for strategic management of diffuse pollution. Environmental Modelling & Software, 24, 74-85.
[38] Hotelling, H., 1935. The most predictable criterion. Journal of Educational Psychology, 26, 139-142.
[39] Kaiser, H., 1958. The varimax criterion for analytic rotation in factor analysis. Psychometrika, 23(3), 187-200.
[40] Koo, L.Y., Adhitya, A., Srinivasan, R. and Karimi, I.A., 2008. Decision support for integrated refinery supply chains. Part 2: Design and operation. Computers and Chemical Engineering, 32, 2787-2800.
[41] Koutsoukis, N., Dominguez-Ballesteros, B., Lucas, C.A. and Mitra, G., 2000. A prototype decision support system for strategic planning under uncertainty. International Journal of Physical Distribution & Logistics Management, 30(7/8), 640-660.
[42] Kwon, O., Kim, K. and Lee, K.C., 2007. MM-DSS: Integrating multimedia and decision-making knowledge in decision support systems. Expert Systems with Applications, 32, 441-457.
[43] Lamptey, G., Labi, S. and Li, Z., 2008. Decision support for optimal scheduling of highway pavement preventive maintenance within resurfacing cycle. Decision Support Systems, 46, 376-387.
[44] Lee, J. and Park, S., 2005. Intelligent profitable customers segmentation system based on business intelligence tools. Expert Systems with Applications, 29, 145-152.
[45] Li, D., Lin, Y. and Huang, Y., 2009. Constructing marketing decision support systems using data diffusion technology: A case study of gas station diversification. Expert Systems with Applications, 36, 2525-2533.
[46] Li, S., Shue, L. and Lee, S., 2008. Business intelligence approach to supporting strategy-making of ISP service management. Expert Systems with Applications, 35, 739-754.
[47] Likert, R., 1974. The method of constructing an attitude scale. In: Maranell, G.M. (Ed.), Scaling: A Sourcebook for Behavioral Scientists. Aldine Publishing Company, Chicago, IL, 21-43.
[48] Lin, Y., Tsai, K., Shiang, W., Kuo, T. and Tsai, C., 2009. Research on using ANP to establish a performance assessment model for business intelligence systems. Expert Systems with Applications, 36, 4135-4146.
[49] Loebbecke, C. and Huyskens, C., 2007. Development of a model-based netsourcing decision support system using a five-stage methodology. European Journal of Operational Research.
[50] Lönnqvist, A. and Pirttimäki, V., 2006. The measurement of business intelligence. Information Systems Management, 23(1), 32-40.
[51] Makropoulos, C.K., Natsis, K., Liu, S., Mittas, K. and Butler, D., 2008. Decision support for sustainable option selection in integrated urban water management. Environmental Modelling & Software, 23, 1448-1460.
[52] Maria, F., 2005. Improving the Utilization of External Strategic Information. Master of Science Thesis, Tampere University of Technology.
[53] Marinoni, O., Higgins, A., Hajkowicz, S. and Collins, K., 2009. The multiple criteria analysis tool (MCAT): A new software tool to support environmental investment decision making. Environmental Modelling & Software, 24, 153-164.
[54] Metaxiotis, K., Psarras, J. and Samouilidis, E., 2003. Integrating fuzzy logic into decision support systems: current research and future prospects. Information Management & Computer Security, 11(2), 53-59.
[55] McCarthy, W.E., 1979. An entity-relationship view of accounting models. The Accounting Review, 54(4), 667-686.
[56] McCarthy, W.E., 1982. The REA accounting model: a generalized framework for accounting systems in a shared data environment. The Accounting Review, 57(3), 554-578.
[57] Nemati, H., Steiger, D., Iyer, L. and Herschel, R., 2002. Knowledge warehouse: an architectural integration of knowledge management, decision support, artificial intelligence and data warehousing. Decision Support Systems, 33, 143-161.
[58] Nie, G., Zhang, L., Liu, Y., Zheng, X. and Shi, Y., 2008. Decision analysis of data mining project based on Bayesian risk. Expert Systems with Applications.
[59] Noori, B. and Salimi, M.H., 2005. A decision-support system for business-to-business marketing. Journal of Business & Industrial Marketing, 20(4/5), 226-236.
[60] Nylund, A., 1999. Tracing the BI family tree. Knowledge Management.
[61] Özbayrak, M. and Bell, R., 2003. A knowledge-based decision support system for the management of parts and tools in FMS. Decision Support Systems, 35, 487-515.
[62] Petrini, M. and Pozzebon, M., 2008. What role is business intelligence playing in developing countries? A picture of Brazilian companies. In: Rahman, H. (Ed.), Data Mining Applications for Empowering Knowledge Societies. IGI Global, 237-257.
[63] Phillips-Wren, G., Hahn, E. and Forgionne, G., 2004. A multiple-criteria framework for evaluation of decision support systems. Omega, 32, 323-332.
[64] Phillips-Wren, G., Mora, M., Forgionne, G.A. and Gupta, J.N.D., 2007. An integrative evaluation framework for intelligent decision support systems. European Journal of Operational Research.
[65] Pitty, S., Li, W., Adhitya, A., Srinivasan, R. and Karimi, I.A., 2008. Decision support for integrated refinery supply chains. Part 1: Dynamic simulation. Computers and Chemical Engineering, 32, 2767-2786.
[66] Plessis, T. and Toit, A.S.A., 2006. Knowledge management and legal practice. International Journal of Information Management, 26, 360-371.
[67] Power, D. and Sharda, R., 2007. Model-driven decision support systems: Concepts and research directions. Decision Support Systems, 43, 1044-1061.
[68] Power, D.J., 2008. Understanding data-driven decision support systems. Information Systems Management, 25(2), 149-154.
[69] Quinn, N.W.T., 2009. Environmental decision support system development for seasonal wetland salt management in a river basin subjected to water quality regulation. Agricultural Water Management, 96, 247-254.
[70] Raggad, B.G., 1997. Decision support system: use IT or skip IT. Industrial Management & Data Systems, 97(2), 43-50.
[71] Ranjan, J., 2008. Business justification with business intelligence. VINE: The Journal of Information and Knowledge Management Systems, 38(4), 461-475.
[72] Reich, Y. and Kapeliuk, A., 2005. A framework for organizing the space of decision problems with application to solving subjective, context-dependent problems. Decision Support Systems, 41, 1-19.
[73] Ross, J.J., Dena, M.A. and Mahfouf, M., 2008. A hybrid hierarchical decision support system for cardiac surgical intensive care patients. Part II: Clinical implementation and evaluation. Artificial Intelligence in Medicine.
[74] Santhanam, R. and Guimaraes, T., 1995. Assessing the quality of institutional DSS. European Journal of Information Systems, 4(3).
[75] Shang, J., Tadikamalla, P., Kirsch, L. and Brown, L., 2008. A decision support system for managing inventory at GlaxoSmithKline. Decision Support Systems.
[76] Shi, Z., Huang, Y., He, Q., Xu, L., Liu, S., Qin, L., Jia, Z., Li, J., Huang, H. and Zhao, L., 2007. MSMiner: a developing platform for OLAP. Decision Support Systems, 42, 2016-2028.
[77] Shim, J., Warkentin, M., Courtney, J., Power, D., Sharda, R. and Carlsson, C., 2002. Past, present, and future of decision support technology. Decision Support Systems, 33, 111-126.
[78] Sutton, S.G. and Arnold, V., 2002. Foundations and frameworks for AIS research. In: Arnold, V. and Sutton, S.G. (Eds.), Researching Accounting as an Information Systems Discipline. American Accounting Association, Sarasota, FL, USA.
[79] Wadhwa, S., Madaan, J. and Chan, F.T.S., 2009. Flexible decision modeling of reverse logistics system: A value adding MCDM approach for alternative selection. Robotics and Computer-Integrated Manufacturing, 25, 460-469.
[80] Yu, L., Wang, S. and Lai, K., 2009. An intelligent-agent-based fuzzy group decision making model for financial multicriteria decision support: The case of credit scoring. European Journal of Operational Research, 195, 942-959.
[81] Zack, M., 2007. The role of decision support systems in an indeterminate world. Decision Support Systems, 43, 1664-1674.
[82] Zhan, J., Loh, H.T. and Liu, Y., 2009. Gather customer concerns from online product reviews: A text summarization approach. Expert Systems with Applications, 36, 2107-2115.
[83] Zhang, X., Fu, Z., Cai, W., Tian, D. and Zhang, J., 2009. Applying evolutionary prototyping model in developing FIDSS: An intelligent decision support system for fish disease/health management. Expert Systems with Applications, 36, 3901-3913.
Abstract: The segmentation of color regions in images whose textures differ between adjacent regions can be arranged in two steps, namely color quantization and spatial segmentation. First, the colors in the image are quantized to a few representative classes that can be used to differentiate regions in the image. The image pixels are then replaced by the labels assigned to each color class, forming a class-map of the image. A mathematical criterion based on the aggregation and mean values of the class labels is calculated. Applying the criterion to windows of selected sizes in the class-map highlights the boundaries: high and low values correspond to possible boundaries and interiors of color-texture regions, respectively. A region growing method is then used to segment the image.

Key Words: Texture segmentation, clustering, spatial segmentation, slicing, texture composition, boundary value image, median-cut.
1. INTRODUCTION
Segmentation partitions an image into homogeneous regions or, equivalently, finds the edges or boundaries between them. The regions of an image segmentation should be uniform and homogeneous with respect to some characteristic such as gray tone or texture. Region interiors should be simple and without many small holes. Adjacent regions of a segmentation should have significantly different values with respect to the characteristic on which they are uniform. Boundaries of each segment should be simple, not ragged, and must be spatially accurate.
Thus, in a large number of applications in image processing and computer vision, segmentation plays a fundamental role as the first step before applying higher-level operations such as recognition, semantic interpretation, and representation.
Earlier segmentation techniques were proposed mainly for gray-level images, on which rather comprehensive surveys can be found. The reason is that, although color information permits a more complete representation of images and a more reliable segmentation of them, its use was long limited by the cost of computation and of color cameras.
With the increasing speed and decreasing cost of computation, and with relatively inexpensive color cameras, these limitations are ruled out. Accordingly, there has been remarkable growth in algorithms for the segmentation of color images. Most of the time these are dimensional extensions of techniques devised for gray-level images, and thus exploit the well-established background laid down in that field. In other cases they are ad hoc techniques tailored to the particular nature of color information and to the physics of the interaction of light with colored materials.
More recently, Yining Deng and B. S. Manjunath [1][2] used the basic idea of separating the segmentation process into color quantization and spatial segmentation. The quantization is performed in the color space without considering the spatial distributions of the colors. S. Belongie et al. [3] present a new image representation which provides a transformation from the raw pixel data to a small set of image regions that are coherent in color and texture space. A new method of color image
segmentation is proposed in [4] based on the K-means algorithm, in which both the hue and the intensity components are fully utilized.
The most popular algorithm for color quantization, the median-cut algorithm, attempts to approximate the optimal solution. The process of color quantization is often broken into four phases:
1) Sample the image to determine the color distribution.
2) Select a colormap based on the distribution.
3) Compute the quantization mapping from 24-bit colors to the representative colors.
4) Redraw the image, quantizing each pixel.
Choosing the colormap is the most challenging task. Once this is done, computing the mapping table from colors to pixel values is straightforward.
In general, algorithms for color quantization can be broken into two categories: uniform and non-uniform. In uniform quantization the color space is broken into equal-sized regions, where the number of regions NR is less than or equal to the number of colors K. Uniform quantization, though computationally much faster, leaves much room for improvement. In non-uniform quantization the manner in which the color space is divided depends on the distribution of colors in the image. By adapting the colormap to the color gamut of the original image, one is assured of using every color in the colormap, and thereby reproducing the original image more closely (a simple uniform quantizer is sketched below).
Image segmentation is often an essential step in image analysis. A great variety of segmentation methods have been proposed in the past decades. They can be categorized into:
Threshold-based segmentation: histogram thresholding and slicing techniques are used to segment the image.
Edge-based segmentation: detected edges in an image are assumed to represent object boundaries and are used to identify these objects.
Region-based segmentation: the process starts in the middle of an object and then grows outwards until it meets the object boundaries.
Clustering techniques: clustering methods attempt to group together patterns that are similar in some sense.
Perfect image segmentation cannot usually be achieved because of oversegmentation or undersegmentation: in oversegmentation, pixels belonging to the same object are classified as belonging to different segments; in undersegmentation, pixels belonging to different objects are classified as belonging to the same object.
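As a rough illustration of the quantization and class-map construction described above (a sketch, not the paper's implementation, which according to its keywords and figures uses median-cut and NeuQuant quantizers): the uniform quantizer below cuts each RGB channel into a fixed number of equal bins and gives pixels that fall into the same representative color the same class label. The class name UniformQuantizer and the packed-RGB input format are assumptions made for illustration.

import java.util.HashMap;
import java.util.Map;

public final class UniformQuantizer {

    /** rgb[y][x] is a packed 0xRRGGBB pixel; returns a class-map of compact labels. */
    public static int[][] classMap(int[][] rgb, int levels) {
        int h = rgb.length, w = rgb[0].length;
        int[][] labels = new int[h][w];
        Map<Integer, Integer> classOfColour = new HashMap<>();

        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int p = rgb[y][x];
                // Cut each channel into 'levels' equal bins (uniform quantization).
                int r = bin((p >> 16) & 0xFF, levels);
                int g = bin((p >> 8) & 0xFF, levels);
                int b = bin(p & 0xFF, levels);
                int colour = (r * levels + g) * levels + b;   // representative colour index
                Integer label = classOfColour.get(colour);
                if (label == null) {                          // first time this colour is seen:
                    label = classOfColour.size();             // assign the next compact class label
                    classOfColour.put(colour, label);
                }
                labels[y][x] = label;
            }
        }
        return labels;
    }

    /** Maps a 0..255 channel value to one of 'levels' equal-sized bins. */
    private static int bin(int channel, int levels) {
        return Math.min(levels - 1, channel * levels / 256);
    }
}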
and can be summarized as follows: for image segmentation a parameter is calculated, which involves minimizing a cost associated with the partitioning of the image based on pixel labels. Segmentation is achieved using an algorithm. The notion of boundary images corresponds to measurements of the criterion J, defined below, over local areas of the class-map; the underlying assumption is that each image region contains pixels from a small subset of the color classes and each class is distributed in a few image regions.

Let Z be the set of all N points in the class-map, let z = (x, y), z ∈ Z, and let m be the mean,

    m = (1/N) Σ_{z ∈ Z} z.

S_W is the total variance of the points belonging to the same class, summed over all classes, and S_T is the total variance of all the points in Z. The criterion is

    J = (S_T - S_W) / S_W.

It is difficult to handle 24-bit color images with thousands of colors directly, so images are coarsely quantized without significantly degrading the color quality and the quantized colors are then assigned labels. A color class is the set of image pixels quantized to the same color; the image pixel colors are replaced by their corresponding color class labels, forming a class-map. In the example class-map there are two regions: one region contains class 1 and the other contains classes 2 and 0. The criterion can also be averaged over a segmentation, weighting each region by the number of points in region k, where N is the total number of points in the class-map and the summation is over all the regions in the class-map. Computing J over the whole image is not practical; J, if applied to a local area of the class-map, indicates whether that area is in the region interiors or near the region boundaries (the example class-maps yield J = 1.72 and J = 0.855).
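A minimal Java sketch of the local J computation defined above, assuming S_T and S_W are the sums of squared deviations of the pixel positions in the window, taken over all points and within each class respectively; the class name JCriterion and the square-window parameters are illustrative only, not the paper's code.

import java.util.HashMap;
import java.util.Map;

public final class JCriterion {

    /** Local J over a size x size window of the class-map whose top-left corner is (x0, y0). */
    public static double localJ(int[][] classMap, int x0, int y0, int size) {
        int h = classMap.length, w = classMap[0].length;
        Map<Integer, double[]> perClass = new HashMap<>();   // class -> {sum x, sum y, sum x^2, sum y^2, count}
        double sx = 0, sy = 0, sxx = 0, syy = 0;
        int n = 0;

        for (int y = y0; y < Math.min(y0 + size, h); y++) {
            for (int x = x0; x < Math.min(x0 + size, w); x++) {
                double[] s = perClass.computeIfAbsent(classMap[y][x], c -> new double[5]);
                s[0] += x; s[1] += y; s[2] += (double) x * x; s[3] += (double) y * y; s[4]++;
                sx += x; sy += y; sxx += (double) x * x; syy += (double) y * y; n++;
            }
        }
        if (n == 0) return 0.0;
        double sT = (sxx - sx * sx / n) + (syy - sy * sy / n);   // S_T: total spatial variance
        double sW = 0.0;                                          // S_W: summed within-class variance
        for (double[] s : perClass.values()) {
            sW += (s[2] - s[0] * s[0] / s[4]) + (s[3] - s[1] * s[1] / s[4]);
        }
        return sW == 0.0 ? 0.0 : (sT - sW) / sW;
    }
}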
Images are then constructed whose pixel values correspond to these J values calculated over small windows centered at the pixels. These are referred to as boundary-value images, and the corresponding pixel values as local J values. The higher the local J value is, the more likely it is that the corresponding pixel is near a region boundary. The boundary image contains intensities that are actually useful for detecting texture boundaries. Often, multiple scales are needed to segment an image. In this implementation, the basic window at the smallest scale is a 9 x 9 window; the smallest scale is denoted as scale 1, and the window size is doubled each time to obtain the next larger scale.
The characteristics of the boundary images allow us to use a region-growing method to segment the image. Initially the entire image is considered as one region. The segmentation of the image starts at a coarse initial scale and then repeats the same process on the newly segmented regions at the next finer scale. Region growing consists of a simple stack-based procedure, described below.
The implementation has been carried out using JDK 1.5 and JAI 1.3 (Java Advanced Imaging). Here the segmentation parameter is calculated and the boundary image is created. The program scans the color vector in the boundary image and calculates an initial segmented image; then, for the number of iterations specified, it segments the image repeatedly, converging the values. The accuracy of the segmentation depends on the number of iterations and on the simple region growing algorithm, which runs a classic stack-based procedure: it finds a pixel that is not labeled, labels it and stores its coordinates on a stack; while there are pixels on the stack, it gets a pixel from the stack (the pixel being considered) and checks its neighboring pixels to see whether they are unlabeled and close to the considered pixel; if they are, it labels them and stores them on the stack; it repeats this process until no unlabeled pixels remain in the image.

Figure 5.1: Snapshot of Selected Image and Segmented Image (NeuQuant, 12)
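The stack-based region growing just described can be sketched as follows; this is not the authors' implementation. It assumes that "close to the considered pixel" means the neighbour's boundary value (local J) lies below a caller-supplied threshold, and it uses 4-connectivity; both are assumptions made for the sketch.

import java.util.ArrayDeque;
import java.util.Deque;

public final class RegionGrower {

    /** Grows regions over pixels whose boundary value (local J) is below the threshold. */
    public static int[][] grow(double[][] boundaryImage, double threshold) {
        int h = boundaryImage.length, w = boundaryImage[0].length;
        int[][] label = new int[h][w];                       // 0 = not yet labelled
        int region = 0;
        int[][] nbr = { {1, 0}, {-1, 0}, {0, 1}, {0, -1} };  // 4-connected neighbours

        for (int sy = 0; sy < h; sy++) {
            for (int sx = 0; sx < w; sx++) {
                if (label[sy][sx] != 0 || boundaryImage[sy][sx] >= threshold) continue;
                region++;                                    // unlabelled interior pixel: start a new region
                Deque<int[]> stack = new ArrayDeque<>();
                label[sy][sx] = region;
                stack.push(new int[] { sx, sy });
                while (!stack.isEmpty()) {
                    int[] p = stack.pop();                   // the pixel being considered
                    for (int[] d : nbr) {
                        int nx = p[0] + d[0], ny = p[1] + d[1];
                        if (nx >= 0 && ny >= 0 && nx < w && ny < h
                                && label[ny][nx] == 0
                                && boundaryImage[ny][nx] < threshold) {
                            label[ny][nx] = region;          // label it and remember it for later expansion
                            stack.push(new int[] { nx, ny });
                        }
                    }
                }
            }
        }
        return label;
    }
}

Pixels whose boundary value stays above the threshold are left unlabelled in this sketch; in a multiscale scheme such as the one described above they would be resolved at a finer scale.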
… identified with the threshold values, which are low near the boundary. The accuracy of segmentation depends on the algorithm used for quantization and on the number of iterations.

Figure 5.2: Snapshot of Selected Image and Segmented Image (Median Cut, 8)
Figure 5.3: Snapshot of Selected Image and Segmented Image (NeuQuant, 16)

In Figures 5.1 through 5.3 we can see the border displayed on the original image, the border displayed on the segmented image, the original image and the segmented image, shown from top left to bottom right in that order, with appropriate borders to differentiate between the segments. Images having different colors in adjacent regions are the best candidates for segmentation using the proposed method. However, the method does not handle pictures where a smooth transition takes place between adjacent regions and there is no clear visual boundary; for instance, the color of a sunset sky can vary from red to orange to dark …

The authors would like to thank Singhania University, Rajasthan …

REFERENCES
[…] "… for boundary detection and segmentation", IEEE Transactions on …, Vol. 9, pp. 1375-88, August 2000.
[3] S. Belongie, C. Carson, H. Greenspan and …
[4] Pappas, T.N., "An adaptive clustering algorithm for image segmentation", International Conference on Pattern Recognition, Vol. 3, pp. 3619, 2000.
[5] H.D. Cheng, X.H. Jiang, Y. Sun and Jingli Wang, … Analysis and Machine Intelligence, Vol. 22, pp. 888-…
[…] "… using Java", Nick Efford, Pearson Education.
Abstract: XML (eXtensible Markup Language) is a standard and universal language for representing information. XML has become integral to many critical enterprise technologies through its ability to enable data interoperability between applications on different platforms. Every application that processes information from XML documents needs an XML parser, which reads an XML document and provides an interface for the user to access its content and structure. However, the processing of XML documents has a reputation for poor performance, and a number of optimizations have been developed to address this performance problem from different perspectives, none of which has been entirely satisfactory. Hence, in this paper we develop a fast parser tool for domain-specific languages that can be used in a user-friendly way without special constraints.

Index Terms: XML, Parser Tool, Document Object Model, SAX, XML Document, Document Validation.
1 INTRODUCTION
parsing [9], lazy parsing [10] and schema-specific parsing [4]. Su Cheng Haw and G. S. V. Radha Krishna Rao presented a comparative study and benchmarking of XML parsers, in which they compare the Xerces and .NET parsers based on performance, memory usage and so on [2]. Giuseppe Psaila developed a system for loosely coupling Java algorithms and XML parsers, and studied the problem of coupling Java algorithms with XML parsers. Su-Cheng Haw and Chien-Sing Lee presented a model called Fast Native XML Storage and Query Retrieval, in which they proposed the INLAB2 architecture comprising five main components, namely XML parser, XML encoder, XML indexer, data manager and query processor [10]. Fadi El-Hassan and Dan Ionescu presented an efficient hardware-based XML parsing technique, proposing that hardware-based solutions can be an obvious choice to parse XML in a very efficient manner.
Existing XML parsers spend a large amount of time tokenizing the input. To overcome these drawbacks, we have developed a new fast parser tool for domain-specific languages. Through careful analysis of the operations required for parsing and validation, we use a hash table to store element information, which enhances the speed of access while searching for an element. Moreover, we use regular expressions to search for tags and attributes, which enhances the speed of reading XML content.

2 FAST XML PARSER
To parse an XML document in software, the processing sequence starts by loading the XML document, then reading its characters in sequence, extracting elements and attributes, then validating the XML document, writing the parsed information and finally reading the resulting parsed data. Our initial approach separates the process of reading the XML document and stores the contents into a hash table using regular expressions. Fig. 1 shows the architecture of the fast XML parser tool (an Application component communicating with the XML Parser). The Fast XML Parser Tool works as follows.

Load an XML document: Before an XML document can be accessed and manipulated, it must be loaded into an XML parser. The XML parser reads the XML document and converts it into a meaningful format. The job of the XML parser is to make sure that the document meets the defined structure and constraints. The validation rule for any particular sequence of character data is determined by the type definition of its enclosing element. Fig. 2 shows the sample XML document.

Algorithm
Step 1: Read the XML file
Step 2: Search for a start tag using the regular expression <>
Step 3: If a start tag is found then
Step 4: Add its attributes to the hash table
Step 5: Search for the end tag using the regular expression </>
Step 6: Add the element to the hash table
Step 7: End if
Step 8: Repeat steps 3 to 7 until end of file
Step 9: Verify and validate each element in the hash table
Step 10: End XML parser
(A Java sketch of this regular-expression and hash-table approach follows at the end of this section.)

Reading an XML document: reads an existing loaded XML file. The tool provides the createXMLReader function, which returns an implementation of the XMLReader interface. It reads the root element first, then reads each sub-element and the corresponding data, and finally creates the XML document and sends it to the user application as an XML document.

Writing an XML document: reads the XML document, separates the content and writes it to the corresponding hash table entries, such as root element, sub-elements, attributes and data. The tool provides the CreateXMLWriter function to return an implementation of the XMLWriter interface.

Knowledge-based search: searches for a particular element or content in an XML document using our fast XML parser tool. Initially it checks the content in the storage unit using the hash key, which is the root element. If it is available, it goes to the sub-element and the corresponding data and displays it as output. If it is not available, it terminates the search.
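To make the regular-expression and hash-table idea of the algorithm above concrete, here is a self-contained Java sketch. It is not the authors' tool (whose createXMLReader and CreateXMLWriter functions are not reproduced here), and the simplified patterns, the class name TinyXmlScanner and the "@"/"#text" key conventions are assumptions made for illustration.

import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public final class TinyXmlScanner {

    // A start tag, its attribute string, its text, and the matching end tag (Steps 2 and 5).
    private static final Pattern ELEMENT =
            Pattern.compile("<(\\w+)([^>]*)>(.*?)</\\1>", Pattern.DOTALL);
    private static final Pattern ATTRIBUTE =
            Pattern.compile("(\\w+)\\s*=\\s*\"([^\"]*)\"");

    /** Returns a hash table keyed by element name; each entry holds attributes and text. */
    public static Map<String, Map<String, String>> scan(String xml) {
        Map<String, Map<String, String>> table = new HashMap<>();
        Matcher e = ELEMENT.matcher(xml);
        while (e.find()) {                                            // Steps 3-8: walk the tag pairs
            Map<String, String> entry = new HashMap<>();
            Matcher a = ATTRIBUTE.matcher(e.group(2));
            while (a.find()) entry.put("@" + a.group(1), a.group(2)); // Step 4: attributes
            entry.put("#text", e.group(3).trim());                    // Step 6: element content
            table.put(e.group(1), entry);
        }
        return table;
    }

    public static void main(String[] args) {
        String doc = "<book id=\"1\"><title>XML Parsing</title><year>2011</year></book>";
        Map<String, Map<String, String>> outer = scan(doc);
        // Nested content is simply rescanned from the stored text in this sketch.
        System.out.println(scan(outer.get("book").get("#text")));
        // Prints something like {title={#text=XML Parsing}, year={#text=2011}}
    }
}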
4 CONCLUSION
5 REFERENCES
…, pp. 321-325, Feb-2007.
[3] Shiren Ye and Tat-Seng Chua, "Learning Object Models from Semistructured Web Documents", IEEE Transactions on Knowledge and Data Engineering, pp. 334-339, 2006.
[4] Zhenghong Gao, Yinfei Pan, Ying Zhang and Kenneth Chiu, "A High Performance Schema-Specific XML Parser", Third IEEE International Conference on e-Science and Grid Computing, IEEE Computer Society, pp. 245-252, 2007.
[7] Yunsong Zhang, Lei Zhao, Jiwen Yang and Liying Yu, "NEM-XML: A Fast Non-extractive XML Parsing Algorithm", Third International Conference on Multimedia and Ubiquitous Engineering, IEEE Computer Society, 2009.
[10] Su-Cheng Haw and Chien-Sing Lee, "INLAB2: Fast Native XML Storage and Query Retrieval", 3rd International Conference on Intelligent System and Knowledge Engineering, IEEE Computer Society, pp. 44-49, 2008.
[13] Gang Wang, Cheng Xu, Ying Li and Ying Chen, "Analyzing XML Parser Memory Characteristics: Experiments towards Improving Web Services Performance", IEEE International Conference on Web Services, IEEE Computer Society, 2006.