Fundamentals of EMS,
NMS, and OSS/BSS
JITHESH SATHYAN
From the initial efforts in managing elements to the latest management standards,
the text:
• Covers the basics of network management, including legacy systems,
management protocols, and popular products
• Deals with OSS/BSS—covering processes, applications, and interfaces
in the service/business management layers
• Includes implementation guidelines for developing customized
management solutions
The book includes chapters devoted to popular market products and contains case
studies that illustrate real-life implementations as well as the interaction between
management layers. Complete with detailed references and lists of Web resources
to keep you current, this valuable resource supplies you with the fundamental
understanding and the tools required to begin developing telecom management
solutions tailored to your customers’ needs.
ISBN: 978-1-4200-8573-0
www.crcpress.com
www.auerbach-publications.com
Auerbach Publications
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742
This book contains information obtained from authentic and highly regarded sources. Reasonable efforts
have been made to publish reliable data and information, but the author and publisher cannot assume
responsibility for the validity of all materials or the consequences of their use. The authors and publishers
have attempted to trace the copyright holders of all material reproduced in this publication and apologize to
copyright holders if permission to publish in this form has not been obtained. If any copyright material has
not been acknowledged please write and let us know so we may rectify in any future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmit-
ted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented,
including photocopying, microfilming, and recording, or in any information storage or retrieval system,
without written permission from the publishers.
For permission to photocopy or use material electronically from this work, please access www.copyright.
com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood
Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and
registration for a variety of users. For organizations that have been granted a photocopy license by the CCC,
a separate system of payment has been arranged.
Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used
only for identification and explanation without intent to infringe.
Visit the Taylor & Francis Web site at
http://www.taylorandfrancis.com
Foreword............................................................................................................xv
Preface............................................................................................................ xvii
About the Author............................................................................................. xix
7. Standardizing Bodies............................................................................85
7.1 Introduction.....................................................................................85
7.2 ITU (International Telecommunication Union)...............................85
7.3 TMF (TeleManagement Forum)......................................................87
7.4 DMTF (Distributed Management Task Force).................................88
7.5 3GPP (Third Generation Partnership Project)................................. 90
7.6 ETSI (European Telecommunications Standards Institute)..............91
7.7 MEF (Metro Ethernet Forum).........................................................93
7.8 ATIS (Alliance for Telecommunications Industry Solutions)............94
7.9 OASIS (Organization for the Advancement of Structured
Information Standards)....................................................................95
7.10 OMA (Open Mobile Alliance).........................................................96
7.11 SNIA (Storage Networking Industry Association)............................96
7.12 Conclusion.......................................................................................97
Additional Reading.....................................................................................97
13.7 Conclusion................................................................................185
Additional Reading................................................................................186
14. SNMP...............................................................................................187
14.1 Introduction..............................................................................187
14.2 SNMPv1...................................................................................188
14.3 SNMPv2...................................................................................192
14.4 SNMPv3...................................................................................196
14.5 Conclusion............................................................................... 200
Additional Reading................................................................................201
15. Information Handling.....................................................................203
15.1 Introduction..............................................................................203
15.2 ASN.1....................................................................................... 204
15.2.1. ASN.1 Simple Types.................................................... 204
15.2.2. ASN.1 Structured Type.................................................207
15.3 BER.......................................................................................... 208
15.4 SMI...........................................................................................210
15.5 Conclusion................................................................................ 215
Additional Reading................................................................................216
16. Management Information Base (MIB).............................................217
16.1 Introduction.............................................................................. 217
16.2 Types of MIB............................................................................ 218
16.3 MIB-II..................................................................................... 220
16.4 SNMPv1 MIB...........................................................................227
16.5 SNMPv2 MIB...........................................................................227
16.6 SNMPv3 MIB...........................................................................229
16.7 Conclusion................................................................................230
Additional Reading................................................................................230
17. Next Generation Network Management (NGNM)..........................231
17.1 Introduction..............................................................................231
17.2 NGNM Basics...........................................................................232
17.3 TR133.......................................................................................236
17.4 M.3060.....................................................................................240
17.5 Conclusion................................................................................243
Additional Reading............................................................................... 244
18. XML-Based Protocols......................................................................245
18.1 Introduction..............................................................................245
18.2 XMLP Overview...................................................................... 246
18.3 XML Protocol Message Envelope..............................................248
18.4 XML Protocol Binding..............................................................248
21. NGOSS.............................................................................................325
21.1 Introduction..............................................................................325
21.2 NGOSS Overview.....................................................................326
21.3 NGOSS Lifecycle......................................................................327
21.4 SANRR Methodology...............................................................330
21.5 eTOM Model............................................................................331
21.6 SID............................................................................................333
21.7 TNA..........................................................................................334
21.8 TAM.........................................................................................335
21.9 Conclusion................................................................................336
Additional Reading................................................................................337
34.2 Introduction..............................................................................524
34.3 Legacy NMS.............................................................................525
34.4 Issues with Legacy NMS...........................................................526
34.5 NGNM Solution.......................................................................527
34.5.1. NGNM Framework......................................................527
34.5.2. Generic Functionalities for NGNM..............................530
34.5.3. Specialized Functions....................................................531
34.5.4. Customization for a Specific Network...........................532
34.6 Adoption Strategy......................................................................532
34.6.1. Using Mediation Layer.................................................532
34.6.2. Staged Migration..........................................................533
34.6.3. Combining Mediation and Migration...........................539
34.7 Analysis of NGNM Framework.................................................539
34.8 Conclusion............................................................................... 540
Supporting Publications........................................................................ 540
References..............................................................................................541
Index............................................................................................................543
Foreword
book will be a valuable asset for academicians and professionals. It is with great
pleasure that I introduce this book. I wish Jithesh all the success in this and future
endeavors.
Manesh Sadasivan
Principal Architect, Product Engineering
Infosys Technologies Limited (www.infosys.com)
Preface
managing elements to the latest management standards are covered in this part.
The second section deals with the basics of network management; legacy systems,
management protocols, standards, and popular products are all handled in this
part. The third section deals with OSS/BSS, covering the process, applications, and
interfaces in the service/business management layers in detail. The final section
gives the reader implementation guidelines to start developing telecom manage-
ment solutions.
This book is dedicated to my beloved wife for her patience and consistent sup-
port that helped me a lot in completing this book. I would like to thank the product
engineering team at Infosys for giving me numerous engagements in EMS, NMS,
and OSS/BSS space with a variety of telecom clients in multiple countries that went
a long way in giving me the knowledge to write this book. Writing this book has
been a very good experience and I hope you enjoy reading it as much as I enjoyed
writing it.
Jithesh Sathyan
Technical Architect
Infosys Technologies Limited
About the Author
Part I
Element Management System (EMS)
Chapter 1
What Is EMS?
This chapter is intended to provide the reader with a basic understanding of an element management system. At the end of this chapter you will have a good understanding of what a network element is, the need for managing a network element, how an element management system fits into telecom architecture, and some of the major vendors involved in the development of EMS.
1.1 Introduction
Industries are highly dependent on their networking services for day-to-day activities, from sending mail to conducting an audio/video conference. Keeping services running on the resource is synonymous with keeping the business running. Any service is offered by a process or set of processes running on some hardware. A set of hardware makes up the inventory required for keeping a service up and running. If the hardware goes down or malfunctions, the service is affected in turn. This leads to the need to monitor the hardware and ensure its proper functioning in order to offer a service to a customer. The hardware being discussed here is the element, and an application that monitors it constitutes the element management system. Since the element is part of a network, it is usually referred to as a network element or NE.
Most element management systems available in the market support management data collection from multiple network elements, though they cannot be called network management systems. Hence an application is not necessarily a network management system just because it supports data collection from multiple NEs. The functionalities offered by the system decide whether it is a network management system (NMS) or an element management system (EMS). Let us make this point clear with an example. A topology diagram in an EMS shows the nodes configured in the element, while a topology diagram in an NMS shows all the configured network elements managed by the NMS. Again, a fault management window on an EMS shows only the logs and alarms generated on the network element it is managing, whereas functionalities like fault correlation and decision handling based on events from various NEs are found in an NMS.
This introduction is intended to only give the reader a feel of what a network ele-
ment and EMS is in telecom domain. In the sections to follow we will explore the
architecture of EMS and take some sample EMS products to gain familiarity with
EMS applications available in the market.
1.2 EMS in Telecom
As shown in Figure 1.1, an element manager (EM) collects data from the network elements (NE). An ideal scenario would involve one element manager collecting data from a single network element, as with EM-2 and NE-n. It can be seen that this is not the case for an actual EMS product. Almost all the EMS products available in the market support management of a set of elements, as shown for the element managers other than EM-2 in Figure 1.1.
[Figure 1.1: an NMS server above a layer of element managers (EM), each collecting data from one or more network elements (NE).]
It is seen that while enhancing an EMS product or making it competitive in the market, many EMS solution developers offer some NMS-related functionalities that involve processing data collected from multiple NEs to display a network-level representation of the data. This point needs to be carefully noted by the reader to avoid confusion when handling an actual EMS product that might include some NMS features as part of the product suite.
Figure 1.1, with its layered view, closely resembles the TMN model: the network elements are in the lowest layer, above which is the element management layer, followed by the network management layer. The TMN model is handled in much more detail in Chapter 2.
The NMS server may not be the only feed that collects data from an EMS server. The EMS architecture is handled in more detail in the next section, where the various feeds from an EMS server are depicted pictorially and explained.
1.3 EMS Architecture
The element management system in Figure 1.2 is the EMS server that provides FCAPS functionality. FCAPS is short for Fault, Configuration, Accounting, Performance, and Security. Basic FCAPS functionality at the element level is offered in the EMS layer. FCAPS is also referred to in conjunction with the NMS layer, as enhanced versions of these functionalities come up in the NMS layer. While an EMS looks at fault data from a single element's perspective, an NMS processes an aggregate of fault data from various NEs and performs operations using this data.
Data collected by the EMS from the network elements is utilized by the following consumers (a minimal dispatch sketch follows this list):
◾◾ NMS Server: The NMS uses collected as well as processed data from the
EMS to consolidate and provide useful information and functionalities at a
network management level.
◾◾ EMS Client: If data from the EMS needs to be represented in a GUI or as console output to a user, then an EMS client is usually implemented for this purpose. The EMS client helps represent the data in a user-friendly manner. One of the functions in FCAPS, the "C," is configuring the network elements; the EMS client can be used to give a user-friendly interface for configuring the elements.
◾◾ Database (DB): Historical data is a key component in element management.
It could be used to study event/fault scenarios, evaluate performance of the
network elements, and so on. Relevant data collected by EMS needs to be
stored in a database.
◾◾ Interface for web client: Rather than an EMS client, most users prefer a lightweight, web-based interface as a replacement for the EMS client. This functionality is supported by most EMS solutions.
◾◾ The EMS output could also be fed to other vendor applications. These external applications might work on part or all of the data from the EMS.
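To make the fan-out above concrete, here is a minimal sketch in Python (all class, function, and consumer names are hypothetical, not taken from any product) of an EMS server that collects a record from a network element and passes it to each registered consumer: an NMS feed, a database writer, an EMS or web client feed, or a third-party application.

from typing import Callable, Dict, List

class EMSServer:
    """Minimal sketch: an EMS server fanning collected data out to its consumers."""

    def __init__(self) -> None:
        self._consumers: List[Callable[[Dict], None]] = []

    def register_consumer(self, consumer: Callable[[Dict], None]) -> None:
        # Consumers: NMS server, EMS client, database, web interface, other vendor apps.
        self._consumers.append(consumer)

    def collect(self, ne_id: str, record: Dict) -> None:
        # In a real EMS this record would arrive via SNMP, TL1, CORBA, and so on.
        data = {"ne": ne_id, **record}
        for consumer in self._consumers:
            consumer(data)

def nms_feed(data: Dict) -> None:
    print(f"[NMS] network-level processing of {data}")

def db_writer(data: Dict) -> None:
    print(f"[DB] storing historical record {data}")

if __name__ == "__main__":
    ems = EMSServer()
    ems.register_consumer(nms_feed)
    ems.register_consumer(db_writer)
    ems.collect("NE-1", {"type": "alarm", "severity": "major"})

The point of the sketch is only the direction of the data flow: the EMS collects once, and every northbound consumer receives the same record.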
◾◾ There is an increasing variety of NEs: New network elements are being developed to provide new services. In the network itself there are multiple telecom network technologies like GSM, GPRS, CDMA, IMS, and so on, with independent work being carried out in the development of the access and core networks.
◾◾ Need to support elements from different vendors: Mergers and acquisitions
are inevitable in the telecom industry. So a single EMS product might be
required to collect data from a family of network elements.
Meeting these challenges requires element management systems (and not just EMS but good NMS, OSS, and BSS as well) with maximum efficiency and flexibility in accomplishing tasks.
◾◾ User-friendly interface: The interface to work with the EMS (EMS client
or web client) needs to be an intuitive, task-oriented GUI to allow opera-
tions functions to be performed in the shortest possible time with minimal
training.
◾◾ Quick launch: This would enable a user working at the NML, SML, or BML
to launch any EMS desired (when multiple EMS are involved).
◾◾ Troubleshoot NE: It should be possible for a technician to directly log into
any NE from the EMS that manages it for ease in troubleshooting an issue.
Single sign-on is a popular capability that allows the same user ID and password to be used when logging into the applications involved in different layers of the TMN model, which will be discussed in the next chapter.
◾◾ Low-cost operations platform: This would help in minimizing the total cost
to own and operate the computing platform on which the EMS runs.
◾◾ Ease of enhancement: New features need to be added to an EMS product, and the product might require easy integration and interworking with other products. Some of the techniques currently adopted in the design of such systems are basing them on service-oriented architecture (SOA), complying with COTS software standards, and so on.
Above all, the key to being a leader in EMS development is to make the product
functionality rich and to ensure that there are key features that differentiate the
product from its competitors.
[Data sheet (table of sample EMS products and their features); only fragments survive extraction. The recoverable portion lists the BroadWorks Element Management System (auto-discovery, configuration management, centralized administrator management, fault management, performance management, capacity management, multirelease support) and, for other products, features such as software administration, fault monitoring and management, performance monitoring and management, interfaces that can be changed, added, and deleted, off-the-shelf system services for distribution, security, database integration and database persistence, service activation, service level agreements, subscriber management, scheduled data import/export via XML files, scheduled export of all log files, data export in standard SDF-formatted files, a standby solution for the whole system, and a CORBA-based NML–EML interface (according to the MTNM model) for integration into an umbrella management architecture.]

1.7 Conclusion
This chapter thus provides a basic understanding of an element management system (EMS) and what a network element is from the perspective of EMS. The chapter also helps the reader understand the role of EMS in the telecom layered view and the EMS architecture. The need for element management systems and the characteristics of an ideal EMS are covered. To give the reader familiarity with EMS products and functionalities, a data sheet is also provided that shows some of the leading EMS products and their features. The information provided in the data sheet can be obtained from the company Web sites, which give more information on the products.
Additional Reading
1. Vinayshil Gautam. Understanding Telecom Management. New Delhi: Concept
Publishing Company, 2004.
2. James Harry Green. The Irwin Handbook of Telecommunications Management. New
York: McGraw-Hill, 2001.
3. Kundan Misra. OSS for Telecom Networks: An Introduction to Network Management.
New York: Springer, 2004.
Chapter 2
TMN Model
2.1 Introduction
TMN stands for Telecommunications Management Network. The concept of
TMN was defined by ITU-T in recommendation M.3010. The telecommunica-
tions management network is different from a telecommunication network. While
the telecom network is the network to be managed, TMN forms the management
system for the network to be managed. This is shown in Figure 2.1 and is explained
in more detail in the next section.
The recommendations from ITU-T on TMN are listed in Table 2.1.
Recommendation M.3010 defines general concepts and introduces several man-
agement architectures to explain the concept of telecom network management.
The architectures are:
◾◾ Functional architecture: It defines management functions.
◾◾ Physical architecture: It defines how to implement management functions
into physical equipment.
◾◾ Information architecture: Concepts adopted from OSI management.
◾◾ Logical architecture: A model that splits telecom management into layers
based on responsibilities.
[Table 2.1: ITU-T recommendations on TMN, with columns for recommendation number and title; the table body was not recovered.]
Hence we have the TMN interface, which comprises the following elements:
Since the scope of TMN is clear from Figure 2.2, let us now identify the items that are defined by TMN inside its scope, along with the TMN entities and the managed network. TMN can be said to be:
[Figure 2.2: a TMN, comprising operations systems (OS), workstations (WS), and a data communication network, managing a telecommunication network of switching systems (SS, exchanges) and transmission systems (TS) through the TMN interface.]
The TMN recommendation M.3010 does not make any reference to the
Internet though it includes a number of concepts that are relevant to the Internet
management community. There is also a strong relationship between TMN and
OSI management. The discussion on the relation of TMN and OSI model will be
covered in Chapter 5 after the concepts of the OSI model are discussed.
In addition to function blocks, the TMN functional architecture defines the following concepts:
◾◾ Reference points
◾◾ Data communication function
◾◾ Functional components
[Figure 2.3: TMN function blocks (OSF, MF, QAF, NEF) and the reference points between them; the QAF and NEF blocks lie partly outside the TMN boundary.]
1. Reference points: The concept of the reference point is used to delineate func-
tion blocks.
2. Data communication function: DCF is used by the function blocks for
exchanging information.
3. Functional components: TMN function blocks are composed of a number
of functional components. The following functional components are defined
in M.3010, and the standards document can be referred to for details on these functional components:
a. Management application function
b. Management information base
c. Information conversion function
d. Human machine adaptation
e. Presentation function
f. Message communication function (MCF)
The interactions between the function blocks in TMN functional architecture are
shown in Figure 2.3.
These building blocks generally reflect a one-to-one mapping with function blocks. However, multiple function blocks can be mapped to a single physical block (building block). For example, most legacy network elements (NE) also have a GUI interface for technicians to configure and provision the NE. In this scenario, the NE building block has OSF and WSF functions in addition to NEF. There are different possible scenarios of multiple function blocks mapping to a single building block and of a single function block distributed among multiple building blocks. The possible relations between function blocks and building blocks are shown in Figure 2.4.
[Table: mapping of building blocks to function blocks. For the NE building block, NEF is mandatory while MF, QAF, OSF, and WSF are optional; the remaining rows were not recovered.]
2.6 Logical Architecture
TMN logical architecture splits telecom management functionality into a set of hierarchical layers. This logical architecture is one of the most important contributions of TMN, as it helps focus on specific aspects of telecom management application development. Though newer, more detailed models were built using TMN as a reference, the TMN logical architecture still forms the basic model that reduces management complexity by splitting management functionality so that specializations can be defined at each of the layers (see Figure 2.6).
[Figure 2.6: TMN logical layers, from the business management layer at the top, through the service management, network management, and element management layers, down to the network element layer; each layer manages the objects of the layer below.]
1. Network element layer (NEL): This is the lowest layer in the model and it cor-
responds to the network element that needs to be managed and from which
management information is collected. This layer can also include adapters to
adapt between TMN and non-TMN information. So both the Q-adapter
and the NE in the physical TMN model are located in the NEL and NEF
and QAF functions are performed in network element layer.
2. Element management layer (EML): This is the layer above the network
element layer and deals with management of elements in the NEL. The
development of an element management system (EMS) falls in this layer
and the functionalities discussed in the first chapter performed by element
managers are grouped in this logical layer of TMN. When mapped to the functional architecture, the OSFs that manage the network elements are associated with this layer.
Currently, the operations support system (OSS), mostly used in association with service management applications, is supposed to incorporate all management functions, as in the OSF of the TMN functional architecture. When management functionality was segregated into business, service, network, and element management, business management applications became associated with BSS, service management applications with OSS, network management applications with NMS, and element management applications with EMS.
4. Service management layer (SML): This layer, as the name signifies, is con-
cerned with managing services. Service is what is offered to the customer
and hence this layer is the basic point of contact with customers. To realize
any service there is an underlying network and so the SML uses information
presented by NML to manage contracted service to existing and potential
customers. The SML also forms the point of interaction with service provid-
ers and with other administrative domains.
Some of the functions performed in this layer are:
Service provisioning
User account management
Quality of service (QoS) management
Inventory management
Monitoring service performance
5. Business management layer (BML): This is the topmost layer in the TMN logical architecture and is concerned with business operations.
Some of the functions performed in this layer are:
High-level planning
Goal setting
Market study
Budgeting
Business-level agreements and decisions
2.7 Conclusion
The telecommunications industry is evolving rapidly. With emerging technologies, acquisitions, multivendor environments, and increased expectations from consumers, companies are presented with a challenging environment. There is a need for telecom management to support multivendor equipment and varied management protocols, as well as to continue to expand services, maintain quality, and protect legacy systems. The telecommunications management network (TMN) provides a multivendor, interoperable, extensible, scalable, and object-oriented framework for meeting these challenges across heterogeneous operating systems and telecommunications networks. This has led to a mass adoption of TMN standards in building telecom management applications.
Additional Reading
1. CCITT Blue Book. Recommendation M.30, Principles for a Telecommunications
Management Network, Volume IV: Fascicle IV.1, Geneva, 1989.
2. CCITT. Recommendation M.3010, Principles for a Telecommunications Management
Network. Geneva, 1996.
3. Masahiko Matsushita. “Telecommunication Management Network.” NTT Review 3,
no. 4 (1991): 117–22.
4. Divakara K. Udupa. TMN: Telecommunications Management Network. New York: McGraw-Hill, 1999.
5. Faulkner Information Services. Telecommunications Management Network (TMN)
Standard. Pennsauken, NJ: Faulkner Information Services, 2001.
Chapter 3
ITU-T FCAPS
This chapter is intended to provide the reader with an understanding of FCAPS intro-
duced by ITU-T (International Telecommunication Union-Telecommunication Standardization Sector) and defined in recommendation M.3400. At the end of
this chapter you will have a good understanding of FCAPS functionality and how
it applies to different layers of the TMN.
3.1 Introduction
Associated with each layer in the TMN model are five functional areas called FCAPS, which stands for fault, configuration, accounting, performance, and security.
These five functional areas form the basis of all network management systems for
both data and telecommunications (see Figure 3.1).
The information in telecom management is classified into functional areas using FCAPS. It was introduced for telecom network management by ITU-T in recommendation M.3400. The ISO (International Organization for Standardization) made FCAPS suitable for data networks as well with its OSI (Open Systems Interconnection) network management model, which was based on FCAPS. Further on in this chapter, each of the FCAPS functional areas is taken up and discussed in detail. All element management systems are expected to provide the FCAPS functions or a subset of them. At higher levels like the business, service, and network layers, derivatives of basic FCAPS functionality can be implemented, such as a complex event processing engine that triggers mail on business risks when network elements carrying major traffic go down, sends a service request to the nearest technician, or sends a set of commands to the network elements to reload or restart.
[Figure 3.1: the FCAPS functional areas (fault, configuration, accounting, performance, and security) applied across the TMN layers, starting from the business management layer.]
The first part involves detection of an error condition by the element management system. This can be achieved, for example, by polling or running routine checks against the NE, or by having the NE send event notifications. There are other methods that an EMS uses to detect faults as well. Some of them include checking and filtering the system logs on the network element, adding code that defines thresholds for attributes of the network element and generates an event when a threshold is crossed, and so on. Some EMSs employ combinations of the above fault detection methods, such as having the NE send event notifications while also running routine checks in case some data was lost or not sent by the NE.
Once detected, the fault needs to be isolated. Isolation includes identifying the
element in the network or resource (can be a process or physical component) in the
element that generated the fault. If the fault information is generic, then isolation
would also involve filtering and doing data processing to isolate the exact cause of
the fault.
After the fault is detected and isolated, it needs to be fixed. Correcting a fault can be manual or automatic. In the manual method, once the fault is displayed on a management application GUI, a technician can look at the fault information and perform corrective action. In the automatic method, the management application is programmed to respond with a corrective action when a fault is detected. The management application can send commands to the network elements (southbound), send mail to concerned experts (northbound), or restart processes on the management application server. Thus, in the automatic method, interaction with the southbound interface, the northbound interface, or the application itself is possible.
Fault information is presented as a log or alarm in the GUI. A log can represent
nonalarm situations like some useful information for the user. A log specifying that
an audit was performed on a network element is an info log and does not raise an
alarm. An alarm or log usually has several parameters (see Table 3.1).
A detailed set of alarm parameters for specific networks is available in the 3GPP 32-series specifications on OAM&P.
Having discussed the basic fault functionality, let us look into some applica-
tions that are developed using the fault data that is detected and displayed. Some of
the applications that can be developed using fault data are:
◾◾ Fault/event processing
◾◾ Fault/event filtering
◾◾ Fault/event correlation
Event Processing: There are three types of event processing: simple, stream, and complex. In simple event processing, when a particular event occurs the management application is programmed to initiate downstream action(s). In stream event processing, events are screened for notability and streamed to information subscribers. In complex event processing (CEP), events are monitored over a long period of time and actions are triggered based on sophisticated event interpreters, event pattern definition and matching, and correlation techniques. Management applications built to handle events are said to follow an event-driven architecture.
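As a hedged illustration of the difference between simple and complex event processing (the names, event fields, window, and threshold below are assumptions made for the example), the following Python sketch triggers an immediate action on a single event, while the complex processor raises a higher-level alert only when a pattern (repeated link-down events inside a time window) is detected.

import time
from collections import deque

def simple_handler(event: dict) -> None:
    # Simple event processing: one event triggers one downstream action.
    if event["type"] == "link_down":
        print(f"restart port on {event['ne']}")

class FlappingLinkDetector:
    # Complex event processing sketch: THRESHOLD link_down events within WINDOW seconds.
    WINDOW = 60.0
    THRESHOLD = 3

    def __init__(self) -> None:
        self._times = deque()

    def handle(self, event: dict) -> None:
        if event["type"] != "link_down":
            return
        now = event.get("time", time.time())
        self._times.append(now)
        while self._times and now - self._times[0] > self.WINDOW:
            self._times.popleft()
        if len(self._times) >= self.THRESHOLD:
            print(f"pattern detected: flapping link on {event['ne']}")

if __name__ == "__main__":
    detector = FlappingLinkDetector()
    for t in (0, 10, 20):
        evt = {"type": "link_down", "ne": "NE-7", "time": t}
        simple_handler(evt)
        detector.handle(evt)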
[Table 3.1: alarm/log parameters (recovered rows). Notification ID: the same alarm can occur multiple times; an easy way to track an alarm is by its notification ID. Generation time: the time when the error scenario occurred and the notification was generated on the NE; some applications also record a "notification time," which signifies the time when the management application received the alarm notification from the NE. Severity: the usual severity conditions are minor, major, critical, and unknown.]
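The recovered parameters can be captured in a simple record type. The sketch below is a hypothetical alarm structure, not the 3GPP alarm definition referred to above; it assumes the fields from Table 3.1 plus a probable-cause field and an NE identifier, both of which appear in the surrounding text.

from dataclasses import dataclass
from datetime import datetime
from enum import Enum

class Severity(Enum):
    MINOR = "minor"
    MAJOR = "major"
    CRITICAL = "critical"
    UNKNOWN = "unknown"

@dataclass
class Alarm:
    notification_id: int          # tracks repeated occurrences of the same alarm
    generation_time: datetime     # when the error occurred on the NE
    notification_time: datetime   # when the manager received the notification
    severity: Severity
    probable_cause: str           # assumed field, used by filtering/correlation below
    ne_id: str                    # assumed field identifying the source NE

if __name__ == "__main__":
    alarm = Alarm(
        notification_id=1042,
        generation_time=datetime(2010, 1, 1, 12, 0, 0),
        notification_time=datetime(2010, 1, 1, 12, 0, 2),
        severity=Severity.MAJOR,
        probable_cause="connection loss",
        ne_id="NE-3",
    )
    print(alarm)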
Event Filtering: The corrective action for alarms is not always the same; it varies with the probable cause. There could also be a corrective action defined for a set of alarms, since a single event can generate multiple alarms on different network elements. For example, a connection loss between two NEs would generate multiple alarms, not just related to the connection but also on associated resources and attributes affected by the connection loss. The relevant log(s)/alarm(s) need to be filtered out and corrective action defined for them.
Event Correlation: An integral part of effective fault handling is event correla-
tion. This mostly happens in the network and service management layers. It involves
comparing events generated on different NEs or for different reasons and taking
corrective actions based on the correlated information. For example, when the con-
nection between a call server and media gateway goes down, alarms are generated
on the media gateway as well as the call server, but the NMS handling both these
NEs will have to correlate the alarms and identify the underlying single event. The
process of grouping similar events is known as aggregation, and generating a single event from the group is called event aggregation.
Event processing usually involves event filtering as well as event correlation.
Event processing in a service layer can generate reports on a service that could help
to analyze/improve quality of service or to diagnose/fix a service problem.
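A minimal correlation sketch, using the call server/media gateway example above: alarms that share a probable cause and arrive close together in time are aggregated into a single event. The grouping key, time window, and field names are assumptions made purely for illustration.

from collections import defaultdict
from typing import Dict, List

def correlate(alarms: List[Dict], window: float = 30.0) -> List[Dict]:
    # Group alarms with the same probable cause raised within `window` seconds.
    groups: Dict[str, List[Dict]] = defaultdict(list)
    for alarm in sorted(alarms, key=lambda a: a["time"]):
        bucket = groups[alarm["probable_cause"]]
        if bucket and alarm["time"] - bucket[0]["time"] > window:
            # Outside the window; a fuller implementation would flush and start a new group.
            continue
        bucket.append(alarm)
    # Event aggregation: one event per group of related alarms.
    return [
        {"probable_cause": cause, "affected_nes": [a["ne"] for a in bucket]}
        for cause, bucket in groups.items()
    ]

if __name__ == "__main__":
    alarms = [
        {"ne": "call-server-1", "probable_cause": "connection loss", "time": 0.0},
        {"ne": "media-gw-2", "probable_cause": "connection loss", "time": 4.0},
    ]
    print(correlate(alarms))   # one aggregated event covering both NEs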
Some of the applications where basic fault data can be used as feed are:
◾◾ Quality assurance application: The fewer the faults, the better the quality
◾◾ Inventory management application: Faulty NEs would need service or
replacement
◾◾ Service and resource defect tracker
◾◾ Event driven intelligent resource configuration and maintenance
◾◾ Product and infrastructure budget planning
3.4 Accounting Management
Accounting management involves identification of the cost to the service provider and the payment due from the customer. Accounts are calculated based on the service subscribed to or on network usage.
In accounting, a mediation agent collects usage records from the network elements and forwards the call records to a rating engine (see Figure 3.2). The rating engine applies pricing rules to a given transaction and routes the rated transaction to the appropriate billing/settlement system. This is different from customer account
[Figure 3.2: accounting flow from the network, through the mediation layer, to the rating engine.]
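To illustrate the mediation-to-rating flow described above, the following sketch applies a per-customer tariff to usage records collected by a mediation agent and routes the rated amount to a stubbed billing system. The plan names, rates, and discount rule are invented for the example and do not represent any standard tariff model.

from typing import Dict, List

# Hypothetical tariffs: price per minute plus a volume discount.
TARIFFS = {
    "gold": {"per_minute": 0.05, "discount_after_min": 500, "discount": 0.20},
    "standard": {"per_minute": 0.10, "discount_after_min": 1000, "discount": 0.10},
}

def rate_usage(records: List[Dict], plan: str) -> float:
    # The "rating engine": apply pricing rules to usage records from mediation.
    tariff = TARIFFS[plan]
    minutes = sum(r["duration_sec"] for r in records) / 60.0
    charge = minutes * tariff["per_minute"]
    if minutes > tariff["discount_after_min"]:
        charge *= 1.0 - tariff["discount"]
    return round(charge, 2)

def send_to_billing(customer: str, amount: float) -> None:
    # Stub for the downstream billing/settlement system.
    print(f"billing {customer}: {amount}")

if __name__ == "__main__":
    usage = [{"duration_sec": 120}, {"duration_sec": 300}]   # from the mediation agent
    send_to_billing("customer-42", rate_usage(usage, "standard"))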
◾◾ Track service and underlying resource usage: The mediation agent collects data
from NEs, which is used to determine charges to customer accounts. Account
data processing must be carried out in near real-time for a large number of cus-
tomers. On the data collected by the mediation agent, a tariff is applied based
on the service level agreement the customer has with the service provider.
◾◾ Tariff: The tariff or rating of usage is also part of accounting. This involves
applying the correct rating rules to usage data on a per customer basis and
then applying discounts, rebates, or charges as specified in the service level
agreement.
◾◾ Usage restrictions: Based on the subscription taken up by the customer there
is a limit set on the usage of resources, for example, disk space, network band-
width, call duration, services offered, and so forth.
◾◾ Converged billing: A single service is realized using multiple network ele-
ments. Traditionally accounting information is collected from all the NEs
and a single bill is generated. With converged billing the customer can have
multiple services and still get a consolidated bill for all the services.
◾◾ Audits: Accounting is a critical aspect of business and forms the revenue for
physical network and service offered. Hence billing records and reports are
handled carefully in management applications. Scheduled audits and reports check the correctness of information and help in archiving data. An internal scheduler can perform report generation. Reports give consolidated data that forms input to business and technical planning.
◾◾ Fraud reporting: Fraud management in telecom has evolved to be an inde-
pendent area of specialization and there are applications to detect fraud that
are delivered independently without being part of a management applica-
tion. As part of the initial definitions of ITU-T, fraud reporting was part of
accounting. Some of the telecom frauds are: Subscription Fraud, Roaming
Subscription Fraud, Cloning, Call-Sell Fraud, Premium Rate Services (PRS)
Fraud, PBX Hacking/Clip-on Fraud, Pre-paid Fraud, Internal Fraud, and
so on.
The account management system should be such that it can interface to all
types of accounting systems and post charges.
The functionalities in performance management that are built into most man-
agement applications can be classified into:
◾◾ Performance data collection: For each network element or for a service offered
by network there are a set of key performance indicators. For a network,
collection of data on these performance indicators help in determining and
forecasting network health and with a service it helps to indicate the quality
of service (QoS). Collection of this performance data is a key functionality in
most management applications.
◾◾ Utilization and error rates: The ratio of the traffic handled by a trunk to the maximum traffic it can handle is an indicator of trunk utilization. This in turn shows how the trunk is performing, both independently and compared to other trunks in the network. Even when a trunk is being utilized, the overall network may not be effectively managed. A typical example is under-utilization of one trunk and over-utilization of another. Thresholds need to be set and error rates determined so that there is optimal utilization of all possible resources at the network level (a minimal threshold-checking sketch follows this list). Different algorithms are used to implement this and ensure there is minimal manual intervention.
◾◾ Availability and consistent performance: The most basic check of performance
is availability of a resource. If the resource is generating reports then avail-
ability is confirmed. Performance data is useful for forecasting, trend development, and planning only when the performance levels are consistent.
◾◾ Performance data analysis and report generation: All performance manage-
ment applications have data collection, data analysis, and report generation.
Multiple reports are generated as graphs and plots of the performance
data.
Data analysis would also involve creating correlations and finding corre-
lated output on threshold data. In addition to graphical representation, there
are also features in management applications that permit export of data for
later analysis by third-party applications.
◾◾ Inventory and capacity planning: Performance data is used as input for inven-
tory planning on deciding what network elements need to be replaced or
monitored and for capacity planning on deciding the routing path of trunks,
how much traffic to route, and so on.
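As promised above, here is a hedged sketch of the utilization/threshold idea: compute per-trunk utilization, flag trunks that cross configurable thresholds, and report both under- and over-utilization so traffic can be rebalanced. The threshold values and field names are illustrative assumptions only.

from typing import Dict, List

OVER_UTILIZED = 0.85    # assumed thresholds; in practice operator-configurable
UNDER_UTILIZED = 0.20

def check_utilization(trunks: List[Dict]) -> List[Dict]:
    # Return threshold-crossing events for each trunk.
    events = []
    for trunk in trunks:
        utilization = trunk["traffic_erlang"] / trunk["capacity_erlang"]
        if utilization >= OVER_UTILIZED:
            events.append({"trunk": trunk["id"], "event": "over-utilized", "value": round(utilization, 2)})
        elif utilization <= UNDER_UTILIZED:
            events.append({"trunk": trunk["id"], "event": "under-utilized", "value": round(utilization, 2)})
    return events

if __name__ == "__main__":
    trunks = [
        {"id": "T1", "traffic_erlang": 90, "capacity_erlang": 100},
        {"id": "T2", "traffic_erlang": 10, "capacity_erlang": 100},
    ]
    for event in check_utilization(trunks):
        print(event)   # T1 over-utilized, T2 under-utilized: candidates for rebalancing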
Performance data is typically collected from the NE in the following ways:
1. Bulk download from the NE: In this method, PM data is generated and stored in a predetermined folder on the NE. The management application collects the data at regular intervals of time.
2. Send from the NE on generation: The network element contains code to send
the data to the management application using a predefined protocol, and the management application keeps listening for data from the NE.
3. Queried from NE database: The network element can contain a management information base (MIB) where performance data is stored and queried by the management application, typically using a protocol such as SNMP.
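Performance counters held in an NE's MIB are commonly read with SNMP, which Chapters 14 through 16 cover in detail. The sketch below is an illustration of this third collection method rather than a production collector: it assumes the synchronous high-level API of the third-party pysnmp library (4.x/5.x), an SNMPv2c agent reachable at a placeholder address, and the standard ifInOctets counter for interface index 1.

from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity,
)

IF_IN_OCTETS_1 = "1.3.6.1.2.1.2.2.1.10.1"    # ifInOctets for interface index 1

error_indication, error_status, error_index, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData("public"),                   # assumed community string
        UdpTransportTarget(("192.0.2.10", 161)),   # placeholder NE address
        ContextData(),
        ObjectType(ObjectIdentity(IF_IN_OCTETS_1)),
    )
)

if error_indication:
    print(error_indication)
elif error_status:
    print(error_status.prettyPrint())
else:
    for var_bind in var_binds:
        print(" = ".join(x.prettyPrint() for x in var_bind))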
In lawful interception, a call in progress between two users is intercepted for listening by a law enforcement agency.
[Figure: security architecture in which the access network connects to the core network through a security gateway.]
3.7 Conclusion
The FCAPS forms the basic set of functionalities that are required in telecom man-
agement. All telecom management applications would use data obtained from these
functionalities in one way or the other.
Additional Reading
1. Vinayshil Gautam. Understanding Telecom Management. New Delhi: Concept
Publishing Company, 2004.
2. James Harry Green. The Irwin Handbook of Telecommunications Management. New
York: McGraw-Hill, 2001.
Chapter 4
EMS Functions
The motivation for writing this chapter is the four-function model suggested by the International Engineering Consortium in its EMS tutorial, which discusses how EMS functions provide feed to the upper network and service management layers. At the end of this chapter you will have a good understanding of how the feed from EMS gets utilized by the upper layers of telecom management.
4.1 Introduction
The basic FCAPS functionality offered by the element management system gen-
erates data that gives feed to the functionalities in network and service man-
agement layers. Some of the functionalities in upper layers that get input from
EMS are:
◾◾ Network/service provisioning
◾◾ Network/service development and planning
◾◾ Inventory management
◾◾ Integrated Multivendor environment
◾◾ Service assurance
◾◾ Network operations support
◾◾ Network/service monitoring and control
application used by the service provider will be from a single vendor, the under-
lying network used for realizing a service will consist of network elements from
multiple vendors. Each equipment manufacturer provides a network element and an EMS solution to manage its element. Only when the EMS solutions from different vendors integrate easily will there be cost-effective deployment of processes in the layers above element management. This also offers significant cost and time reduction for integration of products and for adding high-level processes over the integrated EM.
4.2 Network/Service Provisioning
When an order is placed by a customer for a specific service, the service and under-
lying network needs to be provisioned to offer the service.
The first check is to verify if the existing inventory can offer the service. The
inventory management system is used to identify and allocate appropriate resources
to be used in the network. Next the network is provisioned, which would involve con-
figuring the network elements in the network. The appropriate software and patches
are loaded on the network elements and the parameters for proper operation are set.
The backup and restore settings are then made. Once each network element is ready,
the interconnections between the elements are made to make up the network. Service
activation is the final stage when the network and elements in the network start oper-
ating to offer the service placed in the order. Activation signals are sent to a billing
subsystem to generate billing records on the service used, and the order is completed (see Figure 4.1).
[Figure 4.1: the order-to-activation provisioning flow across the network elements.]
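The order-to-activation flow described above can be summarized as a simple pipeline. The sketch below strings the stages together with stubbed steps; the function names and the order structure are assumptions made for illustration and do not reflect any actual provisioning API.

from typing import Dict, List

def check_inventory(order: Dict) -> List[str]:
    # Query the inventory management system for suitable resources (stubbed).
    return ["NE-A", "NE-B"]

def provision_network(nes: List[str]) -> None:
    for ne in nes:
        print(f"{ne}: load software/patches, set parameters, configure backup/restore")

def interconnect(nes: List[str]) -> None:
    print(f"creating interconnections between {nes}")

def activate_service(order: Dict) -> None:
    print(f"activating '{order['service']}' and signalling the billing subsystem")

def fulfil_order(order: Dict) -> None:
    nes = check_inventory(order)
    if not nes:
        print("notify technicians to set up the required inventory")
        return
    provision_network(nes)
    interconnect(nes)
    activate_service(order)

if __name__ == "__main__":
    fulfil_order({"customer": "customer-42", "service": "broadband-50M"})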
Most of the operations involved in service and network provisioning need sup-
port from the element management system. Let us consider the EMS functional-
ities involved in provisioning:
◾◾ Using an auto-discovery process, the EMS identifies its resources and updates its database. This information is synchronized with the inventory management system so that the inventory management application can identify and allocate appropriate resources for a service.
◾◾ Installing software and patches for a network element is done by the element
management application for the specific network element.
◾◾ Backup and restore is part of configuration management in FCAPS function-
ality of element management system.
◾◾ Configuration of a network element for stand-alone operation and after inte-
gration with other elements is performed by EMS.
◾◾ Generating billing records after provisioning and on service activation is an
activity controlled by accounting in FCAPS.
◾◾ In addition to centralized security, the EMS ensures that service provisioning happens in a secure environment for the individual NE, which is controlled by the NE using its secure access and security logging system.
In most scenarios the operator at the network operations center only works on
a single screen that involves taking up the order and doing an automatic service
provisioning. If the required inventory is not in place, the technicians in appropri-
ate locations will be notified to set up the inventory. The EMS database will consist of parameters that promote auto-configuration, and it sends messages to support auto-discovery. Most modern telecom management systems use just a single service provisioning application, or a service provisioning functionality that is part of a bigger application, in which the operator fills in data in a few graphical user interfaces. Based on these parameters, messages are sent to the appropriate network and its network elements to auto-discover the elements, configure them, and activate the service.
Only when there are faults while provisioning will a technician be involved in using
applications like NMS and EMS of the network elements to identify the fault and
fix it. There are telecom management products available on the market that are
solely dedicated to provisioning of specific services.
◾◾ Using fault management, the elements that require more maintenance and
frequently show trouble conditions can be identified. For example, the call
server that usually has a higher number of dropped calls compared to its
peers, triggers frequent auto reboot, or needs a technician to reconfigure
its parameters can be replaced with another call server that would give
better results.
◾◾ Fault management also gives input on the common fault scenarios that can disrupt service, and it can be used to find out which software and hardware capabilities to improve in order to reduce those fault scenarios. For example, a particular
algorithm might cause high congestion on some trunks and under utilization
on other trunks. This information can be easily identified from logs generated
showing percentage utilization of trunks during an audit of resources.
◾◾ Performance reports generated at regular intervals are a key input to fore-
casting those elements that would be best suited for investment on enhanc-
ing existing network capabilities. The performance reports also form input to
identifying the quality of service. Users are requested to give feedback on their
experience of the service.
The experience perceived by the user while using a service is termed the quality of end-user experience (QoE).
The quality of service and quality of experience are an integral part of plan-
ning and development of service and resource.
◾◾ Configuration of an existing network element can be used as inputs for the
detailed design of a network element for a purpose similar to the existing
element.
◾◾ Billing records help in identifying the services that are of key interest to sub-
scribers and are used to find usage patterns and forecast the future trends
and services that can capture the market. Thus billing records are analyzed
and the statistical data generated from the analysis are used for service and
resource planning.
◾◾ Asset database
◾◾ Inventory analysis functions
◾◾ Inventory reporting functions
The asset database is the repository of information about the inventory in the network. In addition to location and connectivity information, which can change, the asset database also stores static data such as the part number and manufacturer of the equipment. The analysis functions would include tools for
manipulation, optimization, and allocation. When a fault occurs, based on the loca-
tion of an inventory the nearest technician can be allocated to service the equipment
and fix the fault. This helps in better work allocation and in meeting service level
agreements offered to the customer. The reporting functions provide a detailed view
or report of the inventory and also would include customized functions for generat-
ing a report on inventory based on location, type, number of services, and so on.
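A minimal sketch of the three inventory components just described: an asset record combining static data with changeable location information, an analysis function that allocates a technician near a faulty asset, and a reporting function filtered by location. The field names and the nearness rule are assumptions made for the example.

from dataclasses import dataclass
from typing import List

@dataclass
class Asset:
    asset_id: str
    part_number: str       # static data
    manufacturer: str      # static data
    location: str          # changeable data kept alongside the static data

@dataclass
class Technician:
    name: str
    location: str

def report_by_location(assets: List[Asset], location: str) -> List[Asset]:
    # Inventory reporting function: all assets at a given location.
    return [a for a in assets if a.location == location]

def allocate_technician(asset: Asset, technicians: List[Technician]) -> Technician:
    # Inventory analysis function: pick a technician at the asset's site.
    # A real system would use travel time or geographic distance.
    same_site = [t for t in technicians if t.location == asset.location]
    return same_site[0] if same_site else technicians[0]

if __name__ == "__main__":
    assets = [Asset("A1", "PN-100", "VendorX", "site-A"),
              Asset("B1", "PN-200", "VendorY", "site-B")]
    techs = [Technician("Anu", "site-B"), Technician("Ravi", "site-A")]
    faulty = report_by_location(assets, "site-B")[0]
    print(f"dispatch {allocate_technician(faulty, techs).name} to {faulty.asset_id}")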
Having a network management system helps to monitor the network, but
an inventory management system is required for planning, design, and life cycle
management capabilities of elements in the network. The auto discovery process
triggered at an element management system identifies the resources in the network.
The updated resource information is then synchronized with the inventory man-
agement system. This information should be available in a central repository with
easy user access that provides the ability to access up-to-date network information
required for high-reliability environments.
In addition to taking feed from lower layers, the inventory management system
also gives feed to upper layers. For example, it gives feed to a trouble ticketing
system that allows device specific alarm information to be provided, as a part of the
dispatch process, to a network engineer for trouble resolution. This in turn reduces
the mean time to repair (MTTR).
Some of the operations involved in inventory management need support from
the element management system. Let us consider the EMS functionalities involved
in inventory management:
[Figure: an adaptation layer converting element data to a common format for the NMS.]
to all the elements there must be consistency in the data handled at NMS. The
adaptation layer performs the conversion required to a common format for han-
dling the data at NMS.
Another method to solve this problem is to have well-defined standards of
interaction. The set of queries that the NMS will send to an EMS or the underly-
ing network element is to be defined. TeleManagement Forum (TMF) is working
toward this initiative with its standards on multitechnology network management
(MTNM).
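The adaptation layer can be sketched as a set of per-vendor converters that normalize element data into one common record before it reaches the NMS. The vendor names and raw field names below are invented purely to illustrate the pattern; they are not MTNM definitions.

from typing import Callable, Dict

def common_record(ne: str, metric: str, value: float) -> Dict:
    # The single format the NMS consumes.
    return {"ne": ne, "metric": metric, "value": value}

# Per-vendor adapters for hypothetical raw formats.
def adapt_vendor_a(raw: Dict) -> Dict:
    return common_record(raw["elementName"], raw["kpi"], float(raw["kpiValue"]))

def adapt_vendor_b(raw: Dict) -> Dict:
    return common_record(raw["ne_id"], raw["counter"], raw["val"] / 1000.0)

ADAPTERS: Dict[str, Callable[[Dict], Dict]] = {
    "vendorA": adapt_vendor_a,
    "vendorB": adapt_vendor_b,
}

def to_nms(vendor: str, raw: Dict) -> Dict:
    # Adaptation layer entry point: route raw data through the right adapter.
    return ADAPTERS[vendor](raw)

if __name__ == "__main__":
    print(to_nms("vendorA", {"elementName": "NE-1", "kpi": "dropped_calls", "kpiValue": "7"}))
    print(to_nms("vendorB", {"ne_id": "NE-2", "counter": "dropped_calls", "val": 9000}))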
◾◾ When a trouble occurs in the delivered service, this would be associated with
the occurrence of some event on a resource. At the service management level,
an event is generated notifying the disruption of service. This is followed by
an event trigger to automatically fix the trouble condition or a notification/
assignment of work to a technician for looking into the issue and fixing it.
A service level log/alarm will not be of much help to the technician. The
log/alarm generated on the resource that caused disruption of service and
displayed by the fault management system in the EMS would be the most
precise fault notification with maximum information for the technician to fix
the issue. For example, if there was a loss of service when a set of patches (bug
fixes) were installed on a network element, then the exact patch that caused
the network element to malfunction can be selected and removed. This helps
fast restoration of service to meet the SLA (a minimal resource-to-service impact sketch follows this list).
◾◾ As already discussed, performance management reports generated on the element management system are a key input on QoS. The periodic performance
records collected on quality metrics and key parameters characterize the per-
formance of the network resources over service intervals. Performance man-
agement also has functionality to set thresholds on specific parameters. The
EMS generates events when thresholds are crossed, which helps to identify
trouble scenarios before they arise and hence actions can be taken that would
guarantee the commitments on service assurance specified in SLA.
◾◾ Customer support is also at times bundled with service subscription. There
would be specific guidelines on the duration to complete a customer request. It
could be setting up a service, fixing a trouble condition or answering a query.
Good customer support is a key differentiating factor for better business and
improved customer satisfaction and loyalty.
◾◾ The configuration and performance data can give information on the element
utilization. Configuration parameters directly give the time the element was
out of service, the maximum load the element can handle, and so on, and the performance data complements the configuration data in identifying the utilization and performance metrics of the network element. This would help in resource
planning for a network offering a service. Optimal configurations can be
determined over time that would best suit the customer and meet the level of
service assurance.
◾◾ Service assurance does not always mean a set of measures taken by the ser-
vice provider to keep a customer happy. Assurance is to make sure that both
customer and service provider meet the terms and conditions specified in
the service level agreement. In cases where the customer is deviating from
the SLA, the service provider notifies the customer and takes corrective
actions. A typical example where EMS functionality comes into play is
for implementing security. The actions performed by the customer can be
collected as security logs or using an event engine that listens for occur-
rence of a specific event and generates a report when an event occurs. When
the customer performs an action that results in security violation, the cus-
tomer service can be temporarily or permanently disconnected based on
the SLA.
◾◾ To ensure a good quality of service that is essential for service assurance,
there is a requirement to monitor network fault, performance, and utilization
parameters to preempt any degradation in service quality. The role of fault,
performance, and utilization parameters on service assurance has already
been discussed earlier but a collective role of these functions is to ensure
good QoS.
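As a rough sketch of the threshold mechanism mentioned in the performance bullet above (the metric names, limits, and event fields are illustrative assumptions, not taken from any standard):

# Illustrative thresholds on performance metrics (values are assumptions).
THRESHOLDS = {"cpu_utilization": 85.0, "packet_loss_percent": 1.0}

def check_thresholds(record):
    """Return threshold-crossing events for one performance record."""
    events = []
    for metric, limit in THRESHOLDS.items():
        value = record.get(metric)
        if value is not None and value > limit:
            events.append({"element": record["element"], "metric": metric,
                           "value": value, "threshold": limit})
    return events

sample = {"element": "NE-1", "cpu_utilization": 91.2, "packet_loss_percent": 0.2}
for event in check_thresholds(sample):
    print("threshold crossed:", event)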
There are multiple service assurance solutions available in the telecom market that mainly provide overall network monitoring, handling of trouble, and isolation of issues. Service assurance is closely associated with business and is not completely a technical entity. One of the business benefits of service assurance is cost reduction. There are products developed solely for service assurance in the telecom industry.
In addition to cost reduction, another issue that is of major concern is the lack
of trained technicians to manage the network. The telecom network is evolving
rapidly with new standards and enhanced hardware and software capabilities.
The technicians need to keep pace with this change for managing the network
effectively.
Minimum manual intervention is a requirement set out by standardizing bodies for next generation network management. By making network and service management solutions more intelligent, the manual intervention involved is reduced, cutting down the cost of operations support.
Some of the operations involved in network operations support need function-
alities in the element management system. Let us consider the EMS functionalities
involved in operations support:
much importance in fault monitoring. The fault control associated with monitoring
then could add filters to selectively hide the log, not collect it from the element, or clear the specific log.
Some of the operations involved in network/service monitoring and control
need support from functionalities in the element management system. Let us con-
sider the EMS functionalities involved in monitoring and control. First of all, it can be seen that a monitoring and control system is a subset of the functions in an NMS; hence the EMS, being the layer below the NMS, would also be the prime feed for a monitoring and control system.
The major difference here is that in most scenarios the EMS sends only a specialized feed on a specific area like security, fault, and so forth, to the monitoring tool, unless the application is more generic and requests broader information similar to an NMS. Most of the functionalities of an element management system are of vital
importance to the monitoring and control system. Based on the server–agent archi-
tecture discussed above, it can be seen that the design of a monitoring and control
system is similar to an element management system that gets feed from multiple
network elements.
In the functionality listing of most network management solutions, monitoring and control is usually included. So monitoring and control can be an independent
tool, a specialized tool, or embedded into an NMS. Service monitoring is a layer
above network monitoring, though the discussion of the association of EMS with
network monitoring and control also applies to service monitoring and control.
4.9 Conclusion
Just above the network elements, the element management system (EMS) layer
forms the first management layer collecting data from physical components. This
chapter was intended to show the importance of the EMS in building up function-
alities on the upper layers of the TMN model.
Additional Reading
1. International Engineering Consortium (IEC). Online Tutorial on Element Management
System, 2007. http://www.iec.org/online/tutorials/ems/
Chapter 5
OSI Network
Management
5.1 Introduction
The OSI network management architecture mainly consists of four models.
They are:
◾◾ Organization model
◾◾ Information model
◾◾ Communication model
◾◾ Functional model
The OSI organizational model, also called OSI system management overview, was
defined by ISO-10040. It defines the components of the network management
system, and the functions and relations exhibited by these components that make up the network management system.
The OSI information model defines how management information is handled
in ISO-10165. It details the structure of management information with SMI (struc-
ture of management information) and storage of information with MIB (manage-
ment information base).
The OSI communication model defines how management information is
exchanged between entities in the network management system. The transport
medium, message format, and content are also discussed.
The OSI functional model defines the functions performed by the network
management system. The capabilities of the network management system are clas-
sified into a set of basic functions, with the capabilities being derived from these
basic functions.
OSI network management concepts had a major influence on telecom management. Most of them form the basic principles of telecom management, including the agent–manager framework that is used for data collection from elements. The object-oriented approach in OSI network management was appealing for managing the complex telecom network. The CMIP (common management information protocol), an application layer protocol, was developed based on OSI guidelines and is discussed later in this book. Though CMIP is not as popular as its counterpart SNMP (simple network management protocol) in implementation and deployment, it was the initial attempt at implementing a management protocol based on OSI, intended to replace the SNMP used at that time for Internet management.
The OSI management concepts that were adopted by TMN and the differences
between TMN and OSI network management are also discussed in this chap-
ter. The TMF (TeleManagement Forum), the successor of the Network Management Forum (NMF), is now looking into the development of network management standards based on both OSI network management and TMN.
The main components defined in the OSI organization model are:
◾◾ Network objects
◾◾ Agent
◾◾ Manager
◾◾ Management database
(Figures: the two-tier model, in which a manager with its management database (MDB) communicates with agents over managed and unmanaged objects, and the three-tier model, which adds an intermediate agent/manager with its own MDB between the top-level manager and the agents.)
The organization model can take one of the following forms:
◾◾ Two-tier model
◾◾ Three-tier model
◾◾ MoM model
The three-tier model satisfies the needs of a network management system. Now
consider the scenario where a network manager is required to manage multiple net-
works at different locations or networks from different vendors or two heterogeneous
networks. This is where MoM fits in by providing a model for a converged network
manager for multitechnology, heterogeneous network, multivendor environments, which is the current state of the telecom industry (see Figure 5.3). This model can
handle network management of multiple domains as in an enterprise network.
The topmost level in the MoM model is the “manager of managers” (MoM), which
is the manager that consolidates and manages a set of network domains by collect-
ing data from the network managers that manage individual network domains. The
MoM has a separate MDB for storing data. In between the MoM and the managed
object is an intermediate component having a manager, agent NMS, and an agent.
We will call it the intermediate component, similar to the explanation for the three-tier model, to avoid confusion. The manager in the intermediate component of MoM collects
data from the managed object, the agent in the intermediate component feeds the
MoM, and the “agent NMS” is the management process that deals with functions
for managing network in a specific domain.
There is one more relationship defined in the OSI organizational model. The organizational model thus specifies the components, their functions, and the relationships possible with these components in various network management scenarios.
(Figure: example managed objects with fields such as Label, Start, State, Location, Gateway Reboot, and No of connections.)
The managed object (MO) has ATTRIBUTES to define its static and dynamic properties, BEHAVIOR to represent the methods/actions performed by the MO, and NOTIFICATIONS for events. The class-label uniquely
identifies the class and every managed object is to be registered as a unique
object in the management information tree (MIT). The “registered as” part reg-
isters the MO definition on the ISO registration tree. The “derived from” section
describes the base class whose definitions are inherited by the MO child class. A
sample of the managed object template is given in Figure 5.5. The “characterized
by” part includes the body of data attributes, operations, and event notifications
encapsulated by the MO.
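As a loose illustration only (this is ordinary Python, not GDMO syntax, and the class and member names are hypothetical), the three parts of a managed object can be pictured as:

class ManagedObject:
    """Illustrative managed object with attributes, behavior, and notifications."""

    def __init__(self, label, location):
        # ATTRIBUTES: static and dynamic properties of the MO.
        self.label = label
        self.location = location
        self.state = "enabled"
        self.listeners = []          # managers interested in notifications

    def reboot(self):
        # BEHAVIOR: an action the MO can perform.
        self.state = "rebooting"
        self.notify("rebootStarted")

    def notify(self, event):
        # NOTIFICATIONS: events reported to registered listeners.
        for listener in self.listeners:
            listener(self.label, event)

gateway = ManagedObject("gateway-1", "rack-7")
gateway.listeners.append(lambda label, event: print(label, event))
gateway.reboot()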
The SMI was introduced in ISO-10165 and it was elaborated in the guidelines
for the definitions of managed objects (GDMO). The GDMO provides syntax for
MO definitions. The GDMO introduces substantial extensions of ASN.1 (Abstract Syntax Notation One) to handle the syntax of managed objects. ASN.1 will be covered in detail in the chapter on SNMP.
(Figure: the manager with its MDB and MIB, and agents each with their own MIB, above the managed objects.)
By tracing the names on the path from the root to a given node, a unique distinguished name (DN) is obtained that identifies the node. The node can be any information about a managed object, and the DN will be used by the manager to store and query any particular information.
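A small sketch of deriving a distinguished name by walking a tree from the root to a node; the tree contents and naming style are assumptions for illustration:

# A tiny management information tree: each node maps a name to its children.
MIT = {"network": {"element-1": {"card-3": {}, "card-4": {}}}}

def distinguished_name(path):
    """Walk the tree from the root and return the DN for the addressed node."""
    node = MIT
    for name in path:
        node = node[name]            # KeyError if the node does not exist
    return "/" + "/".join(path)

print(distinguished_name(["network", "element-1", "card-3"]))   # /network/element-1/card-3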
Both agent and manager use the MIB to store and query management infor-
mation. The agent MIB covers information only about the managed object it is
associated with while the manager MIB covers information about all the network
elements managed by the manager. While MDB is an actual physical database like
Oracle or SQL, MIB is a virtual database.
The MIB implementation falls under one of the following three types:
◾◾ MIB Version I: This first version of MIB was presented in RFC 1066 and later
approved as a standard in RFC 1156.
◾◾ MIB Version II: This version of MIB was proposed to cover more network
objects as part of RFC 1158. It was later approved by the IAB (Internet
Architecture Board) as a standard in RFC 1213.
◾◾ Vendor specific MIB: These complement the standard MIB with additional
functionality required for network management of vendor specific products.
This book has a chapter dedicated to the discussion of MIB considering the wide
acceptance of this schema in information management.
(Figure: access to the attributes, operations, and notifications of managed object instances in the MIT through ACSE and CMISE, using Get/Set/Action operations and event notifications.)
There are three kinds of messaging that happen between the manager and the agent, as shown in the message communication model (see Figure 5.8). They are:
◾◾ Requests/operations
◾◾ Responses
◾◾ Notifications
(Figure: the manager and agent exchange these messages through their communication modules, using SNMP in the Internet model or CMIP in the OSI model, over a physical medium.)
The transfer protocols in the OSI communication model are detailed with three layers, as shown in Figure 5.9 on communication transfer protocols. They are:
5.6 OAM
OAM stands for operation, administration, and maintenance. OA&M and OAM&P are terms popular in the telecom management world for network management functions. The five functional areas are covered as part of OAM, hence it is quite common to refer to network management systems as OA&M systems.
When the NMS involves a lot of provisioning functions, then the NMS solution
is referred to as OAM&P where P stands for provisioning. It can be seen that the
SNMP functional model adopts the concept of OAM (see Figure 5.11).
The ITU-T defined OAM in its document M.3020 and 3GPP uses the term
OAM in its documents on network management standards for specific networks.
5.8 Conclusion
The OSI and TMN do have some conflicts in concepts. But this does not make them two separate standards. Many of the concepts in one were adopted by the other, and these models are the backbone for all developments that were made in telecom management. Now enhancements on standards from OSI management and TMN management are handled by TMF. This has resulted in a single forum for unified development of management standards.
Additional Reading
1. Mani Subramanian. Network Management: Principles and Practice. Upper Saddle River,
NJ: Addison Wesley, 1999.
2. Vinayshil Gautam. Understanding Telecom Management. New Delhi: Concept
Publishing Company, 2004.
Chapter 6
Management Protocols
The element management system interacts with network elements in its lower
layer and network management system in its upper layer based on a TMN model.
Proprietary protocols were initially used for the interaction between the EMS and
the network elements. There was a requirement to have the same EMS used by
multiple elements. Standard protocols were introduced to define the interactions of
EMS with the network element. At the end of this chapter you will have an under-
standing of the protocols commonly used for interaction between EMS and NE.
It should be noted that management protocols are not limited to interaction
between EMS and NE. It can be between an EMS and NMS, NMS and OSS, or
even between two OSS applications.
6.1 Introduction
Management protocol is a set of rules governing communication
between components in the management system.
There is no ideal standardized management protocol suited for all kinds of net-
work elements and management applications. The management protocol suited for
web-based network management may not be best suited for a client–server appli-
cation and the management protocol for a packet-based network may not be best
suited for a circuit switched telecom network. Hence various standardizing bodies
worked on developing management protocols for various management applications
and networks. These efforts and resources have resulted in a vast choice of standard-
ized protocols, with vendors offering a wide range of tools and platforms based on
these protocols. These tools and platforms greatly facilitate the development and
integration of the management systems and network elements.
Application and network are not the only items that determine the choice of a
management protocol. It can also be a trade-off between complexity and simplicity. Some protocols are aimed at simplicity, which ensures easy agent implementation and a short turnaround time in software development. Protocols like SNMP (simple network management protocol) are well structured and easy to understand. The simplicity in interaction, however, often means that extra functionality must be inserted when mapping the information model.
There are protocols like CMIP (common management information protocol)
that are more functionality rich and have an object-oriented structure making it
a complex protocol. The wide range of operations in the protocol results in com-
plex but powerful agents. There can also be scenarios where different management
protocols should be combined within the same network element yielding the best
solution for a specific interaction paradigm. This chapter gives an overview of some
popular standardized management protocols. Based on the information in this
chapter, the reader can make an informed decision on what protocol to use for
implementing the requirements elicited in a project.
SNMP uses a management information base (MIB) and a relatively small set of commands to exchange information. The MIB is a collection of information that is organized hierarchically as a tree structure and uses a numeric tag or object identifier (OID) to distinguish each variable uniquely in the MIB and in SNMP messages.
The SNMP messages to exchange information are:
◾◾ Get [GetRequest]: This is a request from the manager to the agent to get the value at a particular OID (object identifier).
◾◾ Set [SetRequest]: This is a request from the manager to the agent to set the value at a particular instance in the MIB.
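The exchange can be pictured with a toy simulation (this is not a real SNMP stack; the OIDs shown follow the common system group style, and the values are made up):

# Agent side: a toy MIB keyed by OID strings.
agent_mib = {
    "1.3.6.1.2.1.1.5.0": "router-7",     # a system-name style object
    "1.3.6.1.2.1.1.3.0": 123456,         # an uptime-style counter
}

def get_request(oid):
    """Manager to agent Get: return the value stored at the given OID."""
    return agent_mib.get(oid, "noSuchObject")

def set_request(oid, value):
    """Manager to agent Set: write the value at the given OID and confirm it."""
    agent_mib[oid] = value
    return agent_mib[oid]

print(get_request("1.3.6.1.2.1.1.5.0"))                   # router-7
print(set_request("1.3.6.1.2.1.1.5.0", "core-router-7"))  # core-router-7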
Advantages of SNMP
Disadvantages of SNMP
◾◾ Initial version of SNMP had security gaps and SNMP version 2 fixed some
security issues regarding privacy of data, authentication, and access control.
◾◾ Information cannot be represented in a detailed and organized manner meet-
ing the requirements of next generation network and management due to the
simple design.
Some more specialized standard messaging with TL1 includes provisioning mes-
sages and protection switching messages.
Advantages of TL1
Disadvantages of TL1
Some of the CMIP operations for information exchange are:
◾◾ GET (Request/Response): This is a request from the manager to the agent to get the value of a managed object instance, followed by a response from the agent to the manager.
◾◾ SET (Request/Response): This is a request from the manager to the agent to set the value of a managed object instance, followed by a response from the agent to the manager.
Advantages of CMIP
Disadvantages of CMIP
returns a response to the requesting client. The transport protocols for which map-
pings are defined by CLP include Telnet and SSHv2.
The three supported output data formats for CLP are XML (extensible markup
language), CSV (comma separated value), and plain text. The CLP builds on the
DMTF’s common information model (CIM) schema, which has a well-defined
information model.
CLP syntax is as follows:
<Verb> [<Options>] [<Target>] [<Properties>]
“Verb” is the command to be performed. Some of the verbs are:
Verb Explanation
Disadvantages of CLP
The CLP is mainly intended for operational control of server hardware and rudimentary control of the operating system, and it is best suited for scripting environments that perform repeatable tasks.
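A minimal sketch that assembles command strings in the <Verb> [<Options>] [<Target>] [<Properties>] form; the verbs, options, and target paths in the sample calls are illustrative assumptions rather than quotations from the CLP specification:

def clp_command(verb, options=(), target=None, properties=()):
    """Compose a CLP-style command line from its four syntactic parts."""
    parts = [verb]
    parts.extend(options)
    if target:
        parts.append(target)
    parts.extend(properties)
    return " ".join(parts)

# Hypothetical command lines produced by the helper:
print(clp_command("show", ["-display", "properties"], "/system1"))
print(clp_command("set", [], "/system1/fan1", ["desiredspeed=high"]))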
The XMLP message is defined in the form of an XML infoset, with element and
attribute information in an abstract “document” called the envelope.
XMLP message envelope
XMLP namespaces: This is used for a unique name that avoids name collision,
helps grouping of elements, and for use as a version control scheme.
XMLP header: Used for adding semantics for authentication, transaction, con-
text, and so on.
XMLP body: Data to be handled by the receiver.
XML-based protocols have their drawbacks too. Use of XML leads to an
increase in traffic load for message transfer. XMLP-based protocols are slow, and the duration of an interaction between the agent and manager using an XML protocol is longer than with most non-XML protocols. Though XML-based protocols were initially intended for web-based enterprise management and scenarios of network management on the web, XML protocols have now gained wider acceptance. SOAP is the protocol suggested in MTOSI (multi-technology operations
tance. SOAP is the protocol suggested in MTOSI (multi-technology operations
system interface) defined by TeleManagement Forum (TMF) and NETCONF is
hailed as the next generation network management protocol.
(Figure: a SOAP message carried over HTTP, with the HTTP headers followed by the SOAP headers and the SOAP body.)
SOAP uses XML to define an extensible messaging framework that can work
over a variety of underlying protocols, independent of the particular programming
model used and other implementation specific semantics.
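A minimal sketch of an envelope with a header and body, built with the Python standard library; the namespace is the commonly used SOAP 1.1 envelope namespace, and the request element and its attribute are hypothetical:

import xml.etree.ElementTree as ET

SOAP_ENV = "http://schemas.xmlsoap.org/soap/envelope/"    # SOAP 1.1 envelope namespace

envelope = ET.Element("{%s}Envelope" % SOAP_ENV)
ET.SubElement(envelope, "{%s}Header" % SOAP_ENV)           # semantics such as authentication go here
body = ET.SubElement(envelope, "{%s}Body" % SOAP_ENV)      # data to be handled by the receiver
request = ET.SubElement(body, "getAlarmList")              # hypothetical management request
request.set("managedElement", "NE-1")

print(ET.tostring(envelope, encoding="unicode"))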
Advantages of SOAP:
1. Platform independent
2. Independent of language used for implementation
3. Inherits all the advantages of an XML-based protocol
4. Can easily get around firewalls
Disadvantages of SOAP:
Advantages of NETCONF:
Disadvantages of NETCONF:
(Figure: the IIOP protocol stack, with application objects over the ORB, IIOP, and TCP/IP.)
Messages can also be split into fragments for communication. HTTP, which is
widely used for internet communication, was not suited for remote method invoca-
tion (RMI) based on CORBA. Hence, IIOP with CORBA can be used as an alternative to HTTP for Internet communication. IIOP is based on the client–server model.
Advantages of IIOP:
Disadvantages of IIOP
◾◾ Complex standard
◾◾ The disadvantages of CORBA and RMI are combined in IIOP
6.10 Conclusion
This chapter only gives an overview on some of the popular management protocols.
There are many other application layer protocols that are used for telecom manage-
ment. For example, it is a common practice to have FTP (file transfer protocol) used to support performance management. This involves the element being managed writing performance records to files on the element. The manager collects the performance records generated by the element at regular intervals using FTP and parses the records to collect performance data. Here the interaction between the element and the manager to get performance data is based on FTP.
Application layer protocols for connection like TELNET (TELecommunication
NETwork) and SSH (Secure SHell) can be used to have a console for connecting
to the element from the management system. Commands to be triggered on the element can be sent using these basic application layer protocols, which are simpler than using pure management protocols like SNMP, CMIP, and so on.
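A short sketch of this file-based collection pattern using the standard library FTP client; the host, credentials, and file names are placeholders:

from ftplib import FTP

def collect_performance_file(host, user, password, remote_file, local_file):
    """Fetch one performance record file from a network element over FTP."""
    ftp = FTP(host)
    ftp.login(user, password)
    with open(local_file, "wb") as output:
        ftp.retrbinary("RETR " + remote_file, output.write)
    ftp.quit()

# collect_performance_file("ne-1.example.net", "operator", "secret",
#                          "perf/records.csv", "records.csv")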
Protocols SNMP, CMIP, SOAP, and NETCONF will be handled in detail later
in this book. Different networks have different requirements and it can be seen that
this resulted in the introduction of various protocols. While internet-based man-
agement like a web-based enterprise management would require a management
protocol that works with HTTP, a telecom network would rather use SNMP with
a well-defined MIB for its management.
Additional Reading
1. William Stallings. The Practical Guide to Network Management Standards. Reading,
MA: Addison-Wesley, 1993.
2. Natalia Olifer and Victor Olifer. Computer Networks: Principles, Technologies and
Protocols for Network Design. New York: John Wiley & Sons, 2006.
Chapter 7
Standardizing Bodies
Consider a crossroad in a busy street without traffic signals. The chaotic situation
caused by this scenario is an example of the need for proper rules and regulations
that everyone needs to follow in order to use the road. With multiple vendors coming out with different network elements, element management systems, network management systems, and OSS solutions, interoperability is a big issue, and without defined standards communication is not possible. There are several organizations that bring
out common standards in management systems to ensure interoperability. This
chapter gives an overview on the different standardizing bodies in management.
7.1 Introduction
Most standardizing bodies used to work independently in defining standards for management systems. For example, 3GPP OAM&P standards were oriented toward
specific networks, TMF and ITU-T were working on telecom management, DMTF
was defining standards for enterprise management, and so on. With the demand
for converged management systems that span multiple domains, the standardiz-
ing bodies are coming up with joint standards like the CIM/SID Suite, which
is a joint venture of the Distributed Management Task Force (DMTF) and the
TeleManagement Forum (TMF). This chapter gives an overview on the works of
various standardizing bodies in management domain.
ITU-T is the telecommunication standardization sector of the ITU, which is a specialized agency of the United Nations (UN). The stan-
dards of ITU-T carry more significance in the international market compared to
other standardizing bodies due to the link with the UN. For this reason, well-accepted standards from other standardizing bodies are usually sent to ITU-T to get their acceptance. An example of this is the e-TOM model of TMF that was adopted
by a majority of service providers and later was accepted by ITU-T. It is based in
Geneva, Switzerland and has members from both public and private sectors.
Standards from ITU-T are referred to as “Recommendations,” the development
of which is managed by study groups (SG) and focus groups (FG). While a study
group is more organized with regular face-to-face meetings and adherence to strict
guidelines and rules of ITU-T, the focus group is more flexible in organization and
financing. To be more precise, a focus group can be created quickly to work on a
particular area in developing standards, deliverables are determined by the group
unlike guidelines from ITU-T and they are usually dissolved once the standard is
released. There is a separate group called telecommunication standardization advi-
sory group (TSAG) for reviewing priorities, programs, operations, financial mat-
ters, and strategies for the sector. The TSAG also establishes and provides guidelines
to the study groups.
The recommendations of ITU-T have names like Y.2111, where “Y” corresponds
to the series and “2111” is an identifying number for a recommendation or a family
of recommendations. The document number is not changed during updates and
different versions of the same document are identified with the date of release.
There is a separate study group in ITU-T to handle telecom management. This
study group is responsible for coming up with standards on the management of
telecommunication services, networks, and equipment. It should be noted that
TMN, the basic framework for telecom management that is discussed in previous
chapters was a standard from ITU-T.
Next generation network support and management is also an area ITU-T works
on based on demands in current telecom management industry. There are special
projects in telecom management that are also being prepared by focus groups in
ITU-T. “Y.2401” is a recommendation on “principles for management of next gen-
eration network” with multiple sections discussing different aspects of NGNM.
For example:
In addition to technical programs based on NGOSS framework and its four main
concepts, TMF also works on business solutions. The business solution activities
include work on service level agreements, revenue assurance, integrated customer
centric experience, and so forth. Prospero is a portal maintained by TMF to assist
companies to easily adopt the standards of TMF. Prospero has packages and guidelines based on actual implementations of TMF standards, including case studies and
downloadable solutions. According to TMF, knowledge base consists of both stan-
dards and corporate knowledge. A company that wants to adopt a standard needs
to look into both the definitions of the standard as well as make use of corporate
knowledge published by companies that have already adopted the standard. Hence
TMF document base has a good mix of standards, white papers, case studies, solu-
tion suites, review/evaluation, and testimonials.
The TMF conducts events to promote its studies and standards and these events
are also a venue for service providers and system integrators to demo their products
that comply with TMF standards. The TeleManagement Forum also offers certifi-
cation. It certifies solutions that comply with its standards, where the solution could
even be a simple implementation of an interface based on the definitions of TMF.
It also certifies individuals based on an evaluation of their knowledge on specific
standards.
“Webinar” or online seminars are also conducted by TMF. People around
the globe can register and listen to seminars on new concepts that need to be
worked on in standardization and studies on implementation of existing stan-
dards. Information about these seminars, hosted on the Web, is published on the TMF Web site and also mailed to TMF members. “Catalyst Projects” are used
by TMF to conduct studies and develop standards on emerging issues in telecom
management. While team projects perform studies on base concepts, the catalyst
project shows how the base concepts can be used to solve common, critical indus-
try challenges.
Web site: www.tmforum.org
◾◾ Technical committee: This committee is used to steer the work group creation,
operation, and closure for developing standards and initiatives of DMTF.
◾◾ Interoperability committee: The work of this committee is handled by focus
groups that work on providing additional data on interoperability in multi-
vendor implementations based on DMTF resources.
◾◾ Marketing committee: The role of this committee is to ensure that the activities of DMTF get enough visibility and resources have industrial acceptance.
Two forums in the interoperability committee of DMTF are the CDM forum
and system management forum.
◾◾ CDM forum: The main activity of the CDM forum is to support the develop-
ment of test programs and software development kits that will help in devel-
opment and integration of diagnostic modules based on CDM into CIM
framework.
◾◾ System management forum: A major activity to ensure interoperability is checking products’ compliance with standards. This forum handles the development of programs that check a product’s conformance to DMTF standards on system management, like DASH and SMASH.
conducts events to showcase their work. It also collaborates with other standard-
izing forums in defining standards. Certification for individuals and products is
also offered by DMTF.
Web site: www.dmtf.org
The depth of defining the standard goes even further to the set of values that
can be taken by some of these parameters, like perceivedSeverity, which can have
values critical, major, minor, warning, indeterminate, and cleared.
Another example of 3GPP specifications on FCAPS for a specific technology
is “3GPP TS 32.409 version 7.1.0 Release 7,” which gives information on the per-
formance parameters for specific network elements in an IP multimedia subsystem
(IMS). 3GPP is one of the leading organizations on standardization in the telecom domain. The use of the term “3GPP” in this section can refer to both “3GPP” and “3GPP2.”
It can be seen that the OAM&P specifications of 3GPP can be very useful
in implementing a network management system because of the depth of defining
standards. Though some of the specifications of 3GPP are generic, in most scenarios its specifications are for a specific technology, unlike standards from TMF that are mostly generic and intended for a set of network technologies. Location services management and charging management are some other areas in network management for which there are standards defined by 3GPP.
Web site: www.3gpp.org and www.3gpp2.org
The standards of ETSI are developed by technical bodies in ETSI, which can
establish working groups to focus on a specific technology area. There are three types of technical bodies in ETSI. They are:
The documents developed by the technical bodies, or the ETSI deliverables, fall under one of the following categories:
Only corporate-level membership is permitted with MEF. A corporation can join MEF by paying a predetermined annual fee; there is no paid individual membership. An unlimited number of employees from a corporate member company can participate in technical and/or marketing activities of MEF. Completed/released speci-
ticipate in technical or/and marketing activities of MEF. Completed/released speci-
fications of MEF can be downloaded or viewed by anyone from the MEF Web site
while working documents can be viewed only by members.
Web site: www.metroethernetforum.org
◾◾ Universal functions: There are five functional areas under this category. They are performance, reliability, and security; interoperability; OAM&P; ordering and billing; and user interface.
◾◾ Functional platforms: There are five platform areas under this category. They
are circuit switched and plant infrastructure, wireless, multimedia, optical,
and packet based networks.
The ATIS has made many valuable contributions in network management and
has a dedicated area in its functional framework for OAM&P. There are two main function groups in ATIS that handle the standardization activity, called “forums” and “committees.” These main groups can form subgroups like subcommittees
based on work programs and create “task forces” to work on a focused area for a
specific amount of time.
A proposal to work on an item is referred to as an “issue” in ATIS. An issue
(issue identification form) needs to define the problem to be looked into, propose
a resolution, and suggest a time line for the completion of work. An issue process
is defined by ATIS on the different activities involved in the life cycle of an issue.
International Telecommunications Union Telecommunications Standardization
Sector (ITU-T) is a standardizing body with which ATIS has worked closely for
defining standards that need worldwide acceptance.
The ATIS standard documents are named in a specific format. If the number is
ATIS-GGNNNNN.YY.XXXX, then GG corresponds to the two digit functional
group number, NNNNN is the five digit number for standard document, YY is
the supplement or series number and XXXX is the year or it represents a specific
release. For example ATIS-0900105.01.2006 would mean a standard with number
00105 and series 01 for group optical (09) and approved in year 2006.
Only corporations, and not individuals, can become members of ATIS. An unlimited number of employees from a corporate member company can participate in activities of ATIS. Membership is not free, and nonmembers need to pay for downloading some of the standard documents. ATIS conducts different kinds of events to promote the work and visibility of standards. Some of the events include conferences to discuss cutting-edge technologies, expos to showcase standards and solutions, and webinars (web-based seminars) to promote work or interest on
a topic. ATIS is also a member of the Global Standards Collaboration (GSC),
which facilitates collaboration work between standards organizations that are
part of GSC.
Web site: www.atis.org
7.12 Conclusion
This chapter provides an overview on some of the standardizing bodies in telecom
management. Their structure and activities are touched upon to give basic famil-
iarity about the organizations. There are many more organizations and alliance
forums that work on developing standards in network management. DSL Forum,
Optical Interworking Forum (OIF), and MultiService Switching Forum (MSF) are
just a few names of the bodies that were not covered in this chapter.
This chapter is intended to give the reader information on what forum to research
when looking for a specific standard, to participate in developing standards on a
specific domain, and to join a forum as a corporate or individual member. The orga-
nization to look for is determined based on requirements. For example TMF would
be a good place to look for telecom management standards as it mainly focuses on
this area and the documents of the forum would have information on collaboration
work with other forums. In the same way DMTF would be a good choice for enter-
prise management and SNIA for storage area network management.
Additional Reading
1. TMF 513 TR133 NGN-M Strategy Release 1.0 Version 1.2. www.tmforum.org
(accessed December 2009).
Chapter 8
The NMS has to interact with multiple EMS and present an integrated view of data.
Now each EMS can be from a different vendor and based on a different technol-
ogy. Moreover the different EMS may not talk in a common vocabulary or have a
common format for data. Then how is it possible to have a single NMS for multiple
EMS? This chapter gives you an understanding on the requirement to have standards
for interaction between element management layer (EML) and network manage-
ment layer (NML). It has a detailed discussion on the MTNM (Multi Technology
Network Management) standards from TMF (TeleManagement Forum).
MTNM is a copyright of TMF (www.tmforum.org)
8.1 Introduction
Consider that service provider X wants to use your EMS and NE with their NMS
solution. To get fault data from an EMS, the NMS solution from X sends query
“getAlarmList,” which your EMS does not understand and cannot respond to. The
obvious solution is to write an adaptor (a process that acts as mediator between
two modules and performs format conversion in message/data for the two modules
to interoperate) that converts message/data from the format of NMS solution X
to a format that your EMS can understand and vice versa. Now there is another
service provider Y who also wants to use your EMS and sends a different query
“getFaultList” for the same operation. For interoperability another adaptor needs
to be written. For NMS solution X to interact with an EMS other than yours will
also require an adaptor.
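A minimal sketch of such an adaptor, assuming a simple query-name mapping and alarm-record conversion (the field names other than getAlarmList/getFaultList are hypothetical):

# Map queries used by one NMS to the queries this EMS natively understands.
QUERY_MAP = {"getFaultList": "getAlarmList"}

def adapt_request(nms_query):
    """Translate an NMS query name into the EMS query name."""
    return QUERY_MAP.get(nms_query, nms_query)

def adapt_response(ems_alarms):
    """Translate EMS alarm records into the field names the NMS expects."""
    return [{"faultId": alarm["alarmId"], "severity": alarm["perceivedSeverity"]}
            for alarm in ems_alarms]

ems_alarms = [{"alarmId": 42, "perceivedSeverity": "major"}]
print(adapt_request("getFaultList"))     # getAlarmList
print(adapt_response(ems_alarms))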
So in a multitechnology (underlying protocol), multivendor environment,
where interaction between EML and NML is not standardized, for an NMS to
collect data from multiple EMS or for an EMS to be used with multiple NMS,
the possible method of operation would require writing multiple adaptors. Writing
adaptors is not a cost-effective option in the evolving telecom industry where the
major investment could end up in writing adaptors. Hence there was a need to have
standards for interaction between EML and NML.
For the interaction to be standardized, the following basic requirements need
to be satisfied:
Use of these standards leads to ease of integration, in addition to reduced cost and
time of integration. Multi-Technology Network Management (MTNM) model
of TMF ensures interoperability between multivendor Element and Network
Management Systems. It mainly consists of a set of UML models and CORBA IDLs
to define the EML–NML interface. This chapter handles the basics of MTNM by discussing the four main artifacts that were released by TMF describing MTNM.
8.2 About MTNM
The first initiative in the development of an EML–NML interface was SONET/SDH
information model (SSIM) of TMF. When TMF509, the first version of SSIM, got industrial acceptance, another team, for the ATM information model (ATMIM), was formed to work on ATM networks. Later the SSIM and ATMIM teams merged and formed an MTNM team for jointly defining standards on the EML–NML interface.
Some of the functional areas in telecom management covered by MTNM are:
The latest release of MTNM also includes connectionless and control plane management capabilities and the functional areas associated with these capabilities.
The deliverables of MTNM are:
(Figure: the NMS and its GUI access the EMS through a CORBA interface and GUI cut-through.)
TMF513 is a set of use cases, and static and dynamic UML diagrams that can
be used as:
(Figure: a single MTNM common interface in place of separate SONET, DWDM, and ATM interfaces.)
Most of the interface definitions (requirements and use cases) in TMF513 are centered on a basic set of objects that communicate using the EML–NML interface and share a set of common attributes. These objects are (note: there are cross references between objects in the description below):
The document defines requirements and use cases based on objects. Also provided in the TMF 513 business agreement is a set of supporting documentation (titled starting with SD) that contains details on various specific aspects of the NML–EML interface and is intended to help clarify some of the specific aspects of the interface that are discussed in the main document (titled starting with TMF513).
◾◾ Interface problem
◾◾ Interface information requirement
◾◾ Set of interface information use cases
struct Example_T {
string name;
string value;
};
TMF 608 has multiple supporting documents (SD) for details on information
exchange. Some of the most important supporting documents are discussed next.
Object naming: These naming rules will add consistency to the object names
used by different vendors in designing the EML–NML interface. Some of the gen-
eral rules and semantics for equipment naming discussed in “SD1-25_objectNaming” are:
AVC and SC notifications: The state of a component is different from attributes like congestion level or number of connections. When the state of a component changes, it is notified using an SC (state change) notification, while a change in any of the attributes is notified over the EML–NML interface using an AVC (attribute value change) notification. Details of AVC and SC are handled in the “SD1-2_AVC_SC_Notifications” document.
Document on performance parameters: The names to be used to identify some of
the performance parameters are discussed in the “SD1-28_PerformanceParameters”
document. The parameters to be monitored include average/maximum/minimum
bit rate errors in an interval, count of received or transmitted cells/lost cells/misin-
serted cells, and count of cells discarded due to congestion/protocol errors/HEC
violation.
Probable causes: An alarm notification usually contains the probable reason for
the generation of the alarm. There are a set of fields required in an alarm notifica-
tion, one of them being the probable cause of the alarm. “SD1-33_ProbableCauses”
document includes some standard causes that can be used in the probable cause
alarm field. It should be noted that TMF is not the only standardizing body
involved in defining such standards. From FCAPS perspective the fields required
in an alarm for a specific network are also defined by 3GPP. Some of these details
will be covered in the next chapter.
The remaining supporting documents are for defining modes of operation, per-
formance management file formats, traffic parameters, and so forth. Using UML,
TMF 608 also gives the class diagrams to be used with the interface objects. The
class diagram helps in converting the UML models to program code.
Some of the attributes in EMS Class are:
Compliance with TMF 814 means that the EMS uses CORBA IDL for communication with the NMS. Each object in a CORBA interface is provided a unique address known as an interoperable object reference (IOR). In a fine-grained approach each interface object has an IOR, while a coarse-grained approach for the client/server interface means that only a small number of interface objects have IORs. A lightweight object managed by a CORBA object (a first-level object) is called a second-level object.
The object model in TMF 814 follows a coarse-grained approach, with many-to-one mapping between object instances in the information model (TMF 608) and the objects (interfaces) in the interface model (TMF 814). But the operations vary from fine-grained, as in the getEquipment operation that returns the data structure for a single equipment, to coarse-grained, as in the getAllEquipments operation that returns the data structures for all the equipments identified by the management system. So certain interfaces can return multiple entities, unlike a separate query for each network entity.
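The difference in granularity can be pictured roughly as follows; this is illustrative Python, not the actual CORBA IDL of TMF 814:

class EquipmentInventory:
    """Toy inventory contrasting single-entity and bulk retrieval."""

    def __init__(self, equipment):
        self.equipment = equipment               # name to data structure

    def get_equipment(self, name):
        # Fine-grained style: one entity per query.
        return self.equipment[name]

    def get_all_equipment(self):
        # Coarse-grained style: every entity in one query.
        return list(self.equipment.values())

inventory = EquipmentInventory({"shelf-1": {"type": "shelf"}, "card-1": {"type": "card"}})
print(inventory.get_equipment("shelf-1"))
print(len(inventory.get_all_equipment()))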
The objects in the network exposed using the interface are based on the layered concepts and layer decomposition detailed in ITU G.805. While the managed objects have a naming convention of <Name>_T, the object managers or interface objects used to manage the managed objects, including session objects, are named <Name>_I.
The key manager interfaces are:
◾◾ EMSMgr_I: The NMS uses this interface to request information about the
EMS. Active alarms, top level equipments, subnetworks, and topological
links information can be obtained using this interface. In addition to collect-
ing information, operations such as add/delete can also be performed using
this interface.
◾◾ EquipmentInventoryMgr_I: The NMS uses this interface to request infor-
mation about equipment and equipment holders. In addition to collecting
information, operations such as provision/unprovision can also be performed
using this interface.
◾◾ PerformanceManagementMgr_I: The NMS uses this interface to request
current and historical performance data from EMS. File transfer is another
mechanism to collect performance data.
◾◾ GuiCutThroughMgr_I: The NMS uses this interface to launch an EMS GUI
from the NMS GUI and hence access data from the EMS.
◾◾ ManagedElementMgr_I: The NMS uses this interface to request information
about managed elements. Information about termination points like PTPs or CTPs and the cross connections can also be obtained using this interface.
◾◾ MaintenanceOperationsMgr_I: The NMS uses this interface to request infor-
mation on the possible maintenance operations and to perform maintenance
operations.
◾◾ MultiLayerSubnetworkMgr_I: The NMS uses this interface to request information about connections, managed elements, and topological links in the subnetwork.
◾◾ For system vendors to specify in a standard format, the set of MTNM inter-
face capabilities supported by their product
◾◾ For service providers to specify to the system vendors, in a standard format, the set of MTNM interface capabilities required in the product
◾◾ To create an implementation agreement that has a standard format between
multiple management system vendors.
The description of an FIS template is split into modules. Each module has:
a. Module name: This uniquely identifies each module.
b. Datatypes: The attributes associated with the module are listed in a table
with attribute name in the first column in the table. The other columns
are “Set By” describing whether the EMS, NMS, or both can set the attribute, when and how the attribute was set, the format of the attribute (which could be a fixed format or a list of values), and a final column called “Clarification Needed” for specifying the length requirements of the attribute.
c. Interfaces: The operations associated with the module are listed in a table
with the name of the operation in the first column of the table. The other
columns include status to check if the operation is mandatory or optional,
if any additional support is required in the operation, the exception or
error codes that can occur in the operation, and the final column is to add
any additional comments on the operation.
3. Guidelines for using the MTNM interface: These guidelines give more clarity
to MTNM specifications. Some of the guidelines include:
The need for NMS to understand how EMS packages managed elements
to subnetworks.
How the MTNM service states can be mapped to other models like the
ones from ITU-T.
The usage of resource names like userLabel, nativeEMSName, and
so on.
Some examples of probable cause templates.
Set of implementation specific use cases.
Description of modes for state representation.
For a detailed understanding of the usage of MTNM, the TMF 814A document
can be downloaded based on details given in Additional Reading.
8.7 Conclusion
This chapter may not be suitable for beginners and some of the terms and concepts
expressed in this chapter might be difficult to understand for a novice in telecom
management. However once the reader has completed the chapters on network
management, then there will be a better understanding on the EML–NML inter-
face. So the author would like to urge the reader to re-visit this chapter after cover-
ing the chapters on network management.
While the EML–NML involves some aspects of network and its management,
the author decided to cover this chapter under Element Management System
because the major development to achieve compliance with MTNM is to be done in the EMS. The EMS implements all the interfaces to respond to queries from the
NMS. As already discussed in Section 8.5, there is only one interface supported by
the NMS and used by EMS and all the other interfaces are supported by EMS and
used by NMS.
Additional Reading
Latest version of the documents specified below can be downloaded from www.
tmforum.org
1. TMF 513, Multi-Technology Network Management Business Agreement.
2. TMF 608, Multi-Technology Network Management Information Agreement.
3. TMF 814, Multi-Technology Network Management CORBA Solution Set.
4. TMF 814A, Multi-Technology Network Management Implementation Statement (IS)
Templates and Guidelines.
II
NETWORK MANAGEMENT SYSTEM (NMS)
Chapter 9
Communication Networks
This chapter is intended to provide the reader with a basic understanding of com-
munication networks. At the end of this chapter the reader will be able to work on
the management of most communication networks by correlating components and
functions with the networks discussed in this chapter. This helps in management
data presentation, processing, and defining the management functionalities that
will be useful when working on a specific network.
9.1 Introduction
A network is a collection of elements (equipments) that work together to offer a
service or set of services. For example, the telephone service where you can call your
friend’s mobile from your phone is not a direct one to one interaction between two
user equipments (elements). The call establishment, maintaining the conversation
line, and termination of call in a telephone service is made possible using a mediator
network like UMTS or CDMA that has a set of elements dedicated to perform a
specific function and interoperate with other elements in the network to realize the
call handling service.
The main operation of elements in a communication network is transmission
of information (voice, data, media, etc.), where the information is transformed or
evaluated to make communication between two equipments possible. An example of “transformation” is a protocol gateway where the language for information exchange is changed from one protocol to another. “Evaluation” can happen at a
call server that queries a register to find if the intended service can be fulfilled; or a
billing gateway that evaluates the total bill a customer should pay based on service
utilization.
Communication networks have evolved based on the need for more information
transfer at a higher speed. There are different kinds of communication networks
based on the area of coverage, type of modulation used, speed of data transfer,
type of switching, bandwidth, and type of interface used for data transfer between
elements. Discussing all the different kinds of communication networks is outside
the scope of this chapter. This chapter is intended to provide an overview on some
of the popular communication networks that are used in telecom. The architecture,
elements, and a brief description of element functionality is handled.
The networks that are discussed briefly in this chapter are: ATM, UMTS, MPLS,
GSM, GPRS, CDMA, IMS, and WiMAX. Based on the information presented in
this chapter, the reader will be able to correlate components in any telecom network
and hence have a basic understanding to manage the elements in the network as
well as identify functionalities required in the management system.
Independent of the network being handled, a management protocol needs to
be used for data collection and information exchange between the network and
management solution. This will ensure that the management solution interacting
with elements in the network is nonproprietary and scalable. It should be noted
that a telecommunication network can be a simple computer network, the Internet
or a PSTN (public switched telephone network) and not restricted to the networks
discussed in this chapter.
9.2 ATM Network
The ATM (asynchronous transfer mode) is a packet switched network where data
traffic is encoded into fixed size cells. It is connection oriented and requires a logical
connection established between the two endpoints before actual data exchange. It
can be used for both telecom and enterprise networks.
An ATM network consists of the following main components (see Figure 9.1):
◾◾ Set of ATM switches: The core layer of the network usually has only ATM
switches.
◾◾ ATM links or interfaces: Logical or physical links are used to connect components.
◾◾ ATM routers/hosts: The routers are used to connect a switch with external networks and connections. They are also referred to as the ATM end system.
(Figure 9.1: an ATM network of switches and routers interconnected by NNI and public interfaces.)
These interfaces can further be categorized as public or private interfaces where the
private interfaces are usually proprietary (not standards based) and used inside private
ATM networks, while the public interfaces conform to standards. Communication between two ATM switches in different networks is handled by a public interface.
ATM cells carry connection identifiers (details of path to follow from one end-
point to another) in their header as they are transmitted through the virtual channel.
These identifiers are local to the switching node and assigned during connection
setup. Based on the ATM circuit, the connection identifier can be a VPI (virtual
path identifier) or VCI (virtual channel identifier). The operations performed by the ATM switch include the following (see the sketch after this list):
◾◾ Looking up the identifier in a local translation table and finding the outgo-
ing ports.
◾◾ Retransmission of the cell through the outgoing link with appropriate connection identifiers.
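A toy version of these two steps; the ports and VPI/VCI values in the translation table are made up:

# (incoming port, VPI, VCI) to (outgoing port, VPI, VCI); values are illustrative.
translation_table = {
    (1, 10, 100): (3, 22, 200),
    (2, 10, 101): (4, 30, 310),
}

def switch_cell(in_port, vpi, vci, payload):
    """Look up the outgoing port and rewrite the connection identifiers."""
    out_port, out_vpi, out_vci = translation_table[(in_port, vpi, vci)]
    return {"port": out_port, "vpi": out_vpi, "vci": out_vci, "payload": payload}

print(switch_cell(1, 10, 100, b"cell payload"))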
9.3 GSM Network
GSM stands for global system for mobile communication. It is a cellular (user coverage area defined in the shape of cells) communication network. It was developed as a second generation (2G) standard.
(Figure 9.2: a GSM network, with mobile stations (MS) connected to BTSs, BTSs controlled by BSCs, and a core network containing the MSC, AuC, and EIR.)
The main components of a GSM network are (see Figure 9.2):
◾◾ Mobile station (MS): This is the user/mobile equipment that helps the user to
make calls and receive subscribed services.
◾◾ Base station system (BSS): It is linked to the radio interface functions and has
two components.
−− Base transceiver station (BTS): The BTS is the signal handling node
between an MS and the BSC. Its main function is transmitting/receiving
of radio signals and encrypting/decrypting communications with the
base station controller (BSC).
−− Base station controller (BSC): The BSC is the controller for multiple BTS
and its main functionality is to handle handover (user moving from one
cell to another) and allocate channels.
◾◾ Switching system/network switching subsystem/core network: It has switch-
ing functions and a main center for call processing. The components in the
core network are:
−− Home location register (HLR): This is a database for storing subscriber
information. The information includes subscriber profile, location infor-
mation, services subscribed to, and the activity status. A subscriber is first
registered with the HLR of the operator before he or she can start enjoying
the services offered by the operator.
There can be many more components like message center (MXE), mobile service
node (MSN), and GSM inter-working unit (GIWU) that can be seen in a GSM
network. Some definitions specify operation and support system (OSS) as a part of
GSM, though it is a more generic term that applies to operation and support for any
kind of network.
From telecom management perspective, 3GPP lists a detailed set of specifica-
tions on operation, administration, and maintenance of GSM networks. The fault
parameters and the performance attributes for network management of a GSM network are given by 3GPP specifications. It should be noted that a telecom management solution for a GSM network does not necessarily mean management of all components in a GSM network. There are element management solutions that specifically deal with MSCs, BSCs, or HLRs, and network management solutions that only handle the core network in GSM.
9.4 GPRS Network
The general packet radio service (GPRS) network is more of an upgrade to the GSM
network. The same components of the GSM network provide voice service, and the GPRS additions handle data. Due to this, GSM network providers do not have to start from scratch to deploy GPRS. GPRS triggered the transformation from the circuit switched GSM network to a packet switched network, and hence is considered a technology between 2G and 3G, commonly referred to as 2.5G.
(Figure 9.3: a GPRS network reusing the GSM BSS and core (BTS, BSC with PCU, MSC, VLR, and HLR), with the new TE, SGSN, and GGSN connecting to an external packet network.)
Some of the benefits that GPRS brings over GSM are:
◾◾ Higher speed
◾◾ Instantaneous access to service (always on)
◾◾ New services related to data communication
◾◾ Ease of upgrade and deployment on existing GSM network
◾◾ Can support applications that do not require a dedicated connection
Compared to GSM there are three new components that are required with GPRS
(see Figure 9.3):
◾◾ Terminal Equipment (TE): The existing GSM user equipments will not be
capable of handling the enhanced air interface and packet data in GPRS.
Hence new terminal equipments are required that can handle packet data of
GPRS and voice calls using GSM.
◾◾ Serving GPRS support node (SGSN): It handles mobility management func-
tions including routing and hand over. SGSN converts mobile data into IP
and is capable of IP address assignment.
◾◾ Gateway GPRS support node (GGSN): It acts as a gateway to connect with external networks like the public Internet (an IP network) and the GPRS networks of other service providers. It can be used to implement security/firewall functions such as subscriber screening and address mapping.
The remaining components in GPRS are similar to GSM, with minor software and hardware upgrades. The BSC in GPRS needs the installation of a hardware component called the packet control unit (PCU) to handle packet data traffic, along with some software upgrades, while components like the BTS, HLR, and VLR only require software upgrades.
9.5 UMTS Network
The universal mobile telecommunications system (UMTS) network builds on GSM/GPRS, and its elements can be grouped into three categories:
1. GSM elements: Core network (MSC, VLR, HLR, AuC, and EIR) and BSS (BTS and BSC)
2. GPRS elements: SGSN and GGSN
3. UMTS specific elements: User equipment that can handle media and the new air interface, and the UMTS Terrestrial Radio Access Network (UTRAN), consisting of the radio network controller (RNC) and "Node B."
Node B, for the new air interface, is the counterpart of the BTS in GSM/GPRS. Based on the quality and strength of the connection, Node B calculates the frame error rate and transmits the information to the RNC for processing. The RNC, for the W-CDMA air interface, is the counterpart of the BSC in GSM/GPRS. The main functions of the RNC include handover, security, broadcasting, and power control. The user equipment in a UMTS network should be backward compatible so that it also works on GSM/GPRS networks.
From a network management perspective, 3GPP has come up with specifications on interface reference points (IRPs) for OAM&P in UMTS networks, published as a set of 3GPP specification documents on UMTS OAM&P.
[Figure: UMTS network architecture — user equipment (UE) connects through Node B and the UTRAN to the GSM/GPRS core (BSS with BTS/BSC/PCU, MSC/VLR, HLR, SGSN, and GGSN)]
9.6 MPLS Network
MPLS (multiprotocol label switching) was developed with the intent of having a unified network for circuit and packet based clients. It can be used for the transmission of a variety of traffic, including ATM, SONET, and IP. It is called label switching because each packet in an MPLS network carries an MPLS header (label) that is used for routing. The routing is based on label lookup, which is very fast and efficient. Connectionless packet based networks fail to provide the quality of service offered by connection oriented circuit based networks; MPLS introduces connection oriented service into IP based networks. MPLS can be used to manage traffic flows of various granularities across different hardware and applications. It is independent of the layer 2 and layer 3 protocols and is sometimes referred to as operating on "Layer 2.5."
There are two main kinds of routers in an MPLS network (see Figure 9.5):
◾◾ Label edge routers (LER): The entry and exit points for packets into an MPLS network are the edge routers. These routers insert a label into incoming packets and remove the label when the packets leave the MPLS network. Based on the functionality performed on the packets, these routers are classified into the incoming (edge) router, or ingress router, and the outgoing (edge) router, or egress router.
◾◾ Label switch routers (LSR): Inside the MPLS network, packet transfer is performed by the LSRs based on the label. An additional functionality of the LSR is to exchange label information to identify the paths in the network.
Different terminology is used for routers in an MPLS based VPN network. Here the MPLS label is attached to a packet at the ingress router and removed at the egress router, and label swapping is performed by the intermediate routers. The LER is also referred to as an "edge LSR." Forward equivalence class (FEC) is another term that is frequently used in discussions on MPLS. An FEC represents a group of packets that are given the same treatment in routing to the destination. The definitions of MPLS are given by the IETF, which has an MPLS working group.
[Figure 9.5: MPLS network — label switch routers (LSRs) in the MPLS core connect IP networks at the edges]
Management protocols like SNMP are widely used to gather traffic characteristics from routers in the MPLS network. This information can be used by management applications for provisioning and for traffic engineering of the MPLS network. Being an "any-to-any" network, MPLS poses multiple management challenges. The biggest concern is monitoring performance and identifying the service class when handling an application. A network management solution that offers traffic control should be able to identify existing paths and calculate paths that can provide the guaranteed quality of service. Some management solutions even reshuffle existing paths to identify optimal paths for a specific application. Multiprotocol support in a management solution will be useful when migrating to an MPLS network.
9.7 IMS
IMS (IP multimedia subsystem) is considered to be the backbone of the "all-IP network." IMS was originally developed for mobile applications by 3GPP and 3GPP2. With standards from TISPAN, fixed networks are also supported in IMS, leading to mobile and fixed convergence. The use of open standard IP protocols defined by the IETF allows service providers to introduce new services easily with IMS. With multiple standardizing organizations working on IMS, it will cross the frontiers of mobile, wireless, and fixed line technologies.
IMS is based on open standard IP protocols with SIP (session initiation proto-
col) used to establish, manage, and terminate connections. A multimedia session
between two IMS users, between an IMS user and a user on the Internet, or between two users on the Internet is established using the same protocol. Moreover, the interfaces for service developers are also based on IP protocols. IMS merges the Internet with mobile, using cellular technologies to provide ubiquitous access and Internet technologies to provide appealing services. The rapid spread of IP-based access technologies and the move toward core network convergence with IMS have led to an explosion in multimedia content delivery across packet networks. This transition has led to a much wider and richer service experience.
[Figure 9.6: The three layers of the IMS network — application layer, IMS core/session control layer, and access/transport layer]
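As an illustration of the SIP signaling mentioned above, the sketch below shows the general shape of a SIP INVITE request used to establish a session. The addresses, tags, and identifiers are placeholder values invented for this example and are not taken from this book.

```
INVITE sip:bob@example.com SIP/2.0
Via: SIP/2.0/UDP ue1.example.org;branch=z9hG4bK776asdhds
Max-Forwards: 70
From: Alice <sip:alice@example.org>;tag=1928301774
To: Bob <sip:bob@example.com>
Call-ID: a84b4c76e66710@ue1.example.org
CSeq: 314159 INVITE
Contact: <sip:alice@ue1.example.org>
Content-Type: application/sdp
Content-Length: 142
```

The session description (SDP body) that follows these headers carries the media parameters; the called party answers with a 200 OK response, and the session is later torn down with a BYE request. The same request format is used whether the parties are IMS users or Internet users.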
The IP multimedia subsystem can be thought of as composed of three layers
(see Figure 9.6):
◾◾ The service/application layer: The end services reside in the application layer. It includes a host of application servers that execute services and communicate with the session control layer using SIP. An application server can be part of the service provider's home network or can reside in a third-party network. With open standards defined for interactions with an application server, it is easier to build applications on application servers. The power of IMS lies in rolling services in and out on the fly, in minimal time, using application servers.
◾◾ The IMS core/session control layer: Session control and interactions between the transport and application layers happen through this layer. The main components of the core or session control layer are:
−− Call session control function (CSCF): It can establish, monitor, maintain, and release sessions. It also manages user service interactions, QoS policy, and access to external domains. Based on the role performed, a CSCF can be a serving (S-CSCF), proxy (P-CSCF), or interrogating (I-CSCF) call session control function.
−− Home subscriber server (HSS): Database for storing subscriber and service
information.
−− Multimedia resource function processor (MRFP): It implements the media-related functions, like playing media, mixing media, and providing announcements.
−− Multimedia resource function controller (MRFC): It handles communication with the S-CSCF and controls the resources in the MRFP.
−− Breakout gateway control function (BGCF): It mainly handles interaction with the PSTN.
−− Media gateway controller function (MGCF): It is used to control a media
gateway.
◾◾ The access/transport layer: This layer is used by different networks to connect to IMS using the Internet protocol. It initiates and terminates SIP signaling and includes elements like gateways for conversion between formats. The connecting network can be fixed access like DSL or Ethernet, mobile access like GSM or CDMA, or wireless access like WiMAX.
Some other components in IMS include the signaling gateway (SGW), the media gateway (MGW), and telephone number mapping (ENUM).
A network management solution for IMS usually handles components in the
core network with limited support for elements in application and transport layers.
The standards on OAM&P for IMS have been defined by 3GPP. The definitions
include alarm formats and performance data collection attributes. Some of the spe-
cialized NMS functions of IMS include HSS subscriber provisioning and charging
management based on 3GPP.
9.8 CDMA Network
The main components of a CDMA network are (see Figure 9.7):
◾◾ Mobile station (MS): This is the client or user equipment, such as a subscriber handset, that provides the interface for the user to access the services.
◾◾ Radio access network (RAN): This comprises the air interface components in CDMA for interacting with the core network. The RAN is similar to the BSS in GSM networks.
[Figure 9.7: CDMA network architecture — mobile stations (MS) connect through the BTS and BSC/PCF of the RAN to a circuit domain (MSC, AuC, EIR) and to a packet domain (PDSN, AAA, HA) connected to the Internet]
It has the BSC and BTS found in the GSM network. The RAN also contains a packet control function (PCF) that is used to route IP packets to the PDSN (packet data serving node).
◾◾ Packet domain: Packet domain in a CDMA network consists of:
−− PDSN/foreign agent (FA): It acts as a gateway for the RAN by routing packets to an external packet network like the Internet. It can establish, maintain, and terminate a packet link.
−− AAA (authentication, authorization, and accounting): These are servers used to authenticate and authorize users for access to the network and to store subscriber call details for accounting.
−− Home agent (HA): This is the interface component to the external packet network. It provides an IP address for mobile messages and forwards them to the appropriate network. A PDSN can be configured to work as an HA.
◾◾ Circuit domain: The circuit domain is similar to the GSM network with
components like the MSC, GMSC, HLR, AuC, and EIR. For details on
these components refer to the section on GSM in this chapter.
9.9 WiMAX
WiMAX (worldwide interoperability for microwave access) is a wireless broadband
technology based on standard IEEE 802.16. It is a wireless alternative to cable
modems, DSL, and T1/E1 links. The IEEE standard was named WiMAX by the
WiMAX Forum to promote the IEEE standard for interoperability and deploy-
ment. It can support voice, video, and internet data.
The spectrum bands in which WiMAX usually operates include 2.3 GHz, 2.5
GHz, 3.5 GHz, and 5.8 GHz, with a speed of approximately 40 Mbps per wireless
channel. Based on coverage, WiMAX is classified under MAN (metropolitan area
network).
WiMAX can offer both non-line-of-sight and line-of-sight service. In non-line-of-sight operation, which works in the lower frequency range, an antenna on the personal computer communicates with the WiMAX tower; in line-of-sight operation, which works at higher frequencies, a dish antenna points directly at the WiMAX tower. WiMAX uses orthogonal frequency division multiplexing (OFDM) as the modulation technique.
WiMAX architecture can be split into three parts (see Figure 9.8). They are:
◾◾ Mobile station (MS): This is the user equipment or user terminal that the end
user uses to access the WiMAX network.
◾◾ Access service network (ASN): This is the access network of WiMAX, comprising base stations (BS) and one or more ASN gateways (ASN GW). While the base station is responsible for providing the air interface with the MS, the ASN gateways form the edge of the radio access network.
◾◾ Connectivity service network (CSN): This is the core network that offers the services and connectivity with other networks. It includes the AAA server for authentication, authorization, and accounting; the MIP-HA (mobile IP home agent); the services offered using supporting networks like IMS; the operation support system/billing system, which can be part of the core or a stand-alone application; and the gateways for protocol conversion and connectivity with other networks.
IEEE 802.16 has a network management task group that works on developing stan-
dards for management of WiMAX. The documents are part of project IEEE 802.16i.
The standards are intended to cover service flow, accounting, and QoS management.
[Figure 9.8: WiMAX architecture — mobile stations (MS) connect through base stations (BS) and ASN gateways in the ASN to the CSN, which contains the AAA server, MIP-HA, services, OSS/billing, and gateways toward the IP network and PSTN]
[Figure: Next generation (SAE) architecture — access gateways (AGW) connect to the MME and UPE, with 3GPP and SAE anchors providing connectivity toward external networks]
While the initiatives for developing next generation networks are specialized domains, the separation of the access and core networks means that the standards for network management of next generation networks are handled by specialized forums like the TMF, where generic standards for managing a variety of networks are developed.
9.11 Conclusion
This chapter gives an overview of some of the most popular telecom networks. It can be seen that networks mainly have an access component and a core component. Most network management solutions are developed for handling either the access or the core network. Technology is moving toward a converged network for enterprise and telecom, with an all-IP infrastructure as the backbone. Network management solutions and standards are also moving in the direction of managing converged next generation networks.
Additional Reading
1. Ronald Harding Davis. ATM for Public Networks. New York: McGraw-Hill, 1999.
2. Timo Halonen, Javier Romero, and Juan Melero. GSM, GPRS and EDGE Performance:
Evolution Towards 3G/UMTS. 2nd ed. New York: John Wiley & Sons, 2003.
3. Miikka Poikselka, Aki Niemi, Hisham Khartabil, and Georg Mayer. The IMS: IP
Multimedia Concepts and Services. 2nd ed. New York: John Wiley & Sons, 2006.
4. Luc De Ghein. MPLS Fundamentals. Indianapolis, IN: Cisco Press, 2006.
5. TMF 513, Multi-Technology Network Management Business Agreement. TeleManagement Forum.
Chapter 10
Seven-Layer
Communication Model
This chapter is intended to provide the reader with an understanding of the ISO/
OSI communication model. Any communication between two elements or devices
is possible only when they are able to understand each other. The set of rules of
interaction is called a protocol and the tasks in the interaction can be grouped
under seven layers. At the end of this chapter the reader will have an understanding
of the seven layers and the tasks and protocols associated with each layer.
10.1 Introduction
The OSI (open systems interconnection) seven layer communication model was developed by ISO as a framework for interoperability and internetworking between devices or elements in the network (see Figure 10.1). The different functions involved in the communication between two elements are split into a set of tasks to be performed by each of the seven layers. Each layer is self-contained, making the tasks specific to each layer easy to implement. The layers start with the physical layer at the bottom, followed by the data link layer, network layer, transport layer, session layer, presentation layer, and application layer.
The lowest layer is referred to as layer 1 (L1) and, moving up, the topmost layer is layer 7 (L7). The physical layer is closest to the physical media for connecting two devices and hence is responsible for placing data on the physical medium. The physical layer is rarely a pure software component and is usually implemented as a combination of software and hardware. The application layer is closest to the end user.
[Figure 10.1: The seven OSI layers in communication between Device A and Device B]
10.2 Physical Layer
The physical layer is the lowest layer in the OSI model and is concerned with data encoding, signaling, transmission, and reception over network devices. The main functions handled at this layer are:
◾◾ Encoding: This involves transforming the data, held as bits within the device, into signals for sending over a physical medium.
◾◾ Hardware specifications: The physical and data link layers together detail the operation of network interface cards and cables.
◾◾ Data transport between devices: The physical layer is responsible for trans-
mission and reception of data.
◾◾ Network topology design: The topology and physical network design are handled by the physical layer. Some hardware related issues are caused by misconfigurations in the physical layer.
Encoding converts data bits into signals, and various signaling schemas can be used. Some of the signaling schemas that the physical layer can encode to are:
◾◾ Pulse signaling: The zero level represents "logic zero" and any other level can represent "logic one." This signaling schema is also called "return to zero."
◾◾ Level signaling: A predefined level of voltage or current represents logic one and another level represents logic zero. There is no restriction that the zero level should be used to represent logic zero. This schema is also called "non return to zero."
◾◾ Manchester encoding: In this schema "logic one" is represented by a transition in a particular direction in the center of each bit, where an example of a transition can be a rising edge. A transition in the opposite direction is used to represent logic zero (a minimal encoding sketch follows this list).
The physical layer also determines the utilization of the media during data transmission, based on the signaling. When using "baseband signaling," all available frequencies (the entire bandwidth) are used for a single data transmission; Ethernet mostly uses baseband signaling. When working with "broadband signaling," each signal uses only one frequency band (part of the entire bandwidth), so multiple signals can be transmitted simultaneously over the same media. The most popular example of broadband signaling is TV signals, where multiple channels working on different frequencies are transmitted simultaneously.
The physical layer also determines the topology of the network. The network can have a bus, star, ring, or mesh topology, and the implementation can be performed with LAN or WAN specifications. The physical layer should ensure efficient sharing of the communication media among multiple devices, with proper flow control and collision resolution. Some examples of physical layer protocols are CSMA/CD (carrier sense multiple access/collision detection), DSL (digital subscriber line), and RS-232. The physical layer is special compared to the other layers of the OSI model in having both hardware and software components that make up the layer. There can be sublayers associated with the physical layer; these sublayers further break down the functionality performed by the physical layer.
10.3 Data Link Layer
The data link layer has two sublayers. The LLC (logical link control) is the upper sublayer and handles functions for the control and establishment of logical links between devices in the network. The MAC (media access control) is the lower sublayer in the data link layer (DLL) and handles functions that control access to the network media, like cables.
The functions of the data link layer are:
The data link layer offers the following types of service to the higher layer:
The elementary data link protocols for simplex unidirectional communication, which form the basis for flow control and the development of DLL protocols, are:
◾◾ A simplex stop-and-wait protocol: The receiver only has a finite buffer capacity, so care should be taken to ensure that the sender does not flood the receiver.
◾◾ A simplex protocol for a noisy channel: The channel is noisy and unreliable; hence care should be taken to handle frame loss, errors, and retransmission while implementing this protocol (a minimal sender sketch follows this list).
The best-known example of a DLL protocol is Ethernet. Other examples of data link protocols are HDLC (high level data link control) and ADCCP (advanced data communication control protocol) for point-to-point or packet-switched networks, and ALOHA for local area networks.
10.4 Network Layer
This is the third lowest layer (L3) in the ISO/OSI model. While the data link layer
deals with devices in a local area, the network layer deals with actually getting
data from one computer to another even if it is on a remote network. That is, the
network layer is responsible for source to destination packet delivery, while the data
link layer is responsible for node to node frame delivery. The network layer achieves end-to-end delivery using logical addressing.
Addressing performed in the data link layer is for the local physical device, while the logical address assigned in the network layer must be unique across an entire internetwork.
For a network like the Internet, the IP (Internet protocol) address is the logical address
and every machine on the Internet needs to identify itself with a unique IP address.
The functionalities of the network layer are:
◾◾ Addressing: Every device in the network needs to identify itself with a unique
address called the logical address that is assigned in the network layer.
◾◾ Packetizing: The data to be communicated is converted to packets in the network layer, and each packet has a network header (see Figure 10.2). The data to be packetized is provided by the higher layers in the OSI model. This activity is also referred to as datagram encapsulation. It can happen that the packet to be sent to the data link layer is bigger than what the DLL can handle, in which case the packets are fragmented at the source before they are sent to the DLL and are reassembled at the destination.
◾◾ Routing: The software in the network layer performs routing of packets. Incoming packets from several sources need to be analyzed and routed to the proper destinations by the network layer.
◾◾ Error messaging: Some protocols for communication error handling reside in the network layer. They generate error messages about connections, routes, and the health of devices in the network. A popular error handling protocol that is part of the IP suite and resides in the network layer is ICMP (Internet control message protocol).
[Figure 10.2: Encapsulation at the data link layer — the network layer packet is carried as frame data between the frame header and the frame footer]
Though the network layer protocol is usually connectionless, it can also be connection oriented; that is, the network layer supports both connectionless and connection-oriented protocols. When a connectionless protocol like IP is used in the network layer, data consistency and reliability are handled using connection-oriented protocols like TCP in the transport layer.
IP (Internet protocol) is the most popular network layer protocol. There are several other supporting protocols of IP for security, traffic control, error handling, and so on that reside in the network layer. These protocols include IPsec, IP NAT, and ICMP. Other network layer protocols outside the realm of the Internet protocol are IPX (internetwork packet exchange) and DDP (datagram delivery protocol).
10.5 Transport Layer
This is the fourth lowest layer (L4) in the ISO/OSI model. While the physical layer is concerned with transmission of bits, the data link layer with transmission in a local area, and the network layer with transmission even between remote systems, the transport layer handles transmission of data between processes/applications. There will be multiple applications running on the same system, and there will be scenarios where more than one application on a source machine wants to send data to multiple applications on a destination machine. The transport layer ensures that multiple applications on the same system can communicate with another set of applications on a different system. This means that a new parameter, in addition to the destination and source IP addresses, is required for application level communication.
So a transport protocol can have simultaneous connections to a computer and ensure that the receiving computer still knows which application should handle each data packet that is received. This task is accomplished using "ports." A port is just an internal address that facilitates control of data flow. Ports that have specific purposes and are used for generic applications across systems are known as well-known ports. For example, Web traffic on the Internet is routed through the well-known TCP port 80. While an IP address can be used to route data to a specific network element and the port number to direct data to a specific application, both of these tasks can be jointly done by an identifier called the socket. A socket is an identifier that binds a specific port to an IP address.
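To illustrate how an application binds an IP address and port into a socket, here is a minimal Java sketch that opens a TCP connection to the well-known HTTP port 80 and issues a bare request; the host name example.com is only a placeholder.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

/** Sketch: a TCP client whose socket is the combination of a destination
 *  IP address (resolved from the host name) and the well-known port 80. */
public class PortExample {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("example.com", 80)) {  // socket = address + port
            PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream()));

            // The application must format the request itself at this level.
            out.print("GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n");
            out.flush();

            System.out.println(in.readLine());  // first line of the HTTP response
        }
    }
}
```

The port number (80) tells the receiving machine which application should handle the arriving data, while the IP address identified the machine itself.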
The transport layer supports both connection-oriented and connectionless protocols. Connection-oriented protocols are the more common choice for reliable data transfer.
The TCP (transmission control protocol) and UDP (user datagram protocol) are the most popular transport layer protocols. TCP sends data as segments while UDP sends data as datagrams. TCP is connection oriented and UDP is connectionless. Connection-oriented services can be handled at the network layer itself, though they are commonly implemented at the transport layer.
10.6 Session Layer
Some of the popular session layer protocols are the session initiation protocol (SIP) and the AppleTalk session protocol (ASP). Session layer software in most scenarios is more a set of tools than a specific protocol. For instance, RPC (remote procedure call), usually associated with the session layer for communication between remote systems, provides a set of APIs (application program interfaces) for working on remote sessions, and SSH (secure shell), another protocol associated with the session layer, gives the implementer the ability to establish secure sessions. While the term session is usually used for communication over a short duration between elements, the term connection is used for communication that takes place over longer durations.
Let us consider a typical scenario of how the session layer works when handling a single application. The Web page displayed by a Web browser can contain embedded objects of media, text, and graphics. The graphic file, media file, and text content file are all separate files on the Web server. If the browser is a pure application object that cannot handle multiple objects together, then a separate download must be started to access these objects. So the Web browser opens a separate session to the Web server to download each of these individual files of the embedded objects in the Web page. The session layer performs the function of keeping track of which packets and data belong to which file. This is required so that the Web page can be displayed correctly by the Web browser.
10.7 Presentation Layer
In order to ensure that common syntax definitions are available for data recognition, ISO defined a general abstract syntax for the definition of data types associated with distributed applications. This abstract syntax is known as abstract syntax notation number one, or ASN.1. The data types associated with ASN.1 are abstract data types. Along similar lines, a transfer syntax was defined by OSI for use with ASN.1. This transfer syntax notation developed by OSI is known as the basic encoding rules (BER).
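As a small illustration of what an ASN.1 abstract syntax definition looks like, here is a hypothetical type definition; the type name and fields are invented for this example and are not taken from any standard module. A BER encoder would turn values of this abstract type into a byte stream for transfer.

```
-- Hypothetical ASN.1 definition of a management record (illustrative only)
AlarmRecord ::= SEQUENCE {
    alarmId    INTEGER,
    severity   ENUMERATED { critical(1), major(2), minor(3), warning(4) },
    source     OCTET STRING,
    raisedAt   GeneralizedTime
}
```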
The functions performed by the presentation layer are:
There are three syntactic versions that are important when two applications
interact:
The syntax of the application entity at the transmitting or requesting end is con-
verted to syntax used by the presentation entity at the presentation layer of a request-
ing system. In the same way, the syntax used by the presentation entity is converted
to the syntax used by an application entity at the receiving end at the presentation
layer of the receiving system.
Another example of a presentation layer protocol is XDR (external data representation). XDR is used as the common data format mostly when the interaction involves a system running the UNIX OS. For a UNIX system to interact with a heterogeneous system running another OS using remote procedure calls (RPC), the usual presentation layer format is XDR. XDR has its own set of data types; the application layer types are converted to the XDR format before transfer of data, and the response data also needs to be converted to XDR. In a distributed system containing heterogeneous systems, the presentation layer plays a key role in facilitating communication.
10.8 Application Layer
This is the topmost layer of the OSI model. Being the seventh lowest layer in the ISO/OSI model, it is also referred to as L7. The application layer manages communication between applications and handles issues like network transparency and resource allocation. In addition to providing network access as the key functionality, the application layer also offers services for user applications.
Let us consider how the application layer offers services for user applications with the example of the World Wide Web (WWW). The Web browser is an application that runs on a machine to display Web pages, and it makes use of the services offered by a protocol that operates at the application layer. Viewing the vast collection of hypertext-based files available on the Internet, as well as quick and easy retrieval of these files using a browser, is made possible by the application layer protocol called the hypertext transfer protocol (HTTP).
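For contrast with the raw socket example in the transport layer section, the sketch below uses Java's HttpURLConnection, where the HTTP request formatting is handled by the protocol implementation rather than by the programmer; the URL is a placeholder.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

/** Sketch: the application layer view of fetching a Web page. The HTTP
 *  request line and headers are produced by the protocol implementation. */
public class HttpExample {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://example.com/");            // placeholder URL
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");

        System.out.println("Status: " + conn.getResponseCode());
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            System.out.println(in.readLine());               // first line of the page
        }
        conn.disconnect();
    }
}
```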
Some of the other popular application layer protocols include the simple mail transfer protocol (SMTP), which provides electronic mail functionality through which all kinds of data can be sent to other users of the Internet as an electronic postal system, and the file transfer protocol (FTP), which is used to transfer files between two machines. The network management protocol SNMP is also an application layer protocol.
As the names signify, CASE (common application service element) elements are the integrated set of functions that are commonly used by an application, while SASE (specific application service element) elements are written on a need basis for a specific functionality. Some of the CASE related functions include initialization and termination of connections with other processes, transfer of messages with other processes or the underlying protocol, and recovery from a failure while a task is under execution.
It should be understood that not all applications use the application layer for
communication and applications are not the only users of the application layer. The
operating system (OS) itself can use the services directly from the application layer.
10.9 TCP/IP Model
The TCP/IP model groups the communication functions into four layers (see Figure 10.3):
1. Application layer: This is the topmost layer of the TCP/IP model and performs the functions done by the application, presentation, and session layers in the OSI model. In addition to all the processes that involve user interaction, the application layer also controls the session and presentation of data when required. The terms socket and port that are associated with the session layer in the OSI model are used to describe the path over which applications communicate in the TCP/IP model. Some of the protocols in this layer are FTP, HTTP, SMTP, and SNMP.
Figure 10.3 Mapping layers in the OSI model to layers in the TCP/IP model: the OSI application, presentation, and session layers map to the TCP/IP application layer; the OSI transport layer maps to the TCP/IP transport layer; the network layer maps to the Internet layer; and the data link and physical layers map to the network interface layer.
2. Transport layer: This layer of the TCP/IP model performs functions similar to the transport layer of the OSI model. There are two important transport layer protocols, the transmission control protocol (TCP) and the user datagram protocol (UDP). While TCP guarantees that information is received, as it is a connection-oriented protocol, UDP works on a connectionless setup suitable for intermittent networks and performs no end-to-end reliability checks. Flow control is also performed at this layer to ensure that a fast sender does not swamp the receiver with message packets.
3. Internet layer: This layer of the TCP/IP model performs functions similar to the network layer of the OSI model. The Internet layer is suited for packet switching networks and is mostly a connectionless layer that allows hosts to insert packets into any network and delivers these packets independently to the destination. The work of the Internet layer is limited to delivery of packets; the higher layers rearrange the packets in order to deliver them to the proper application at the destination. The Internet protocol defines the packet format for the Internet layer. Routing of packets to avoid congestion is also handled in the Internet layer. This is a key layer of the TCP/IP model, and all data has to pass through this layer before being sent to lower layers for transmission.
4. Network interface layer: The physical layer and data link layer of the OSI
model are grouped into a single layer called network interface layer in the
TCP/IP model. This layer is used by the TCP/IP protocols running at the
upper layers to interface with the local network. The TCP/IP model does not give much detail on the implementation or the specific set of functions to be performed at the network interface layer, except that it is the point of connection of the host to the network and uses an associated protocol to transmit IP packets. Protocols like the point-to-point protocol (PPP) and the serial line Internet protocol (SLIP) can be considered protocols of the network interface layer. The actual protocol used in the network interface layer varies from host to host and network to network. The network interface layer is also called the network access layer. In some implementations of the network interface layer, it performs functions like frame synchronization, control of errors, and media access.
There are some significant differences between the OSI model and TCP/IP
model.
10.10 Conclusion
A network model is required to gain a good understanding of how two network elements communicate, whether the elements are in the same network or in two separate networks. The OSI model is the model most commonly used in textbook descriptions to give the reader an understanding of the functionalities in an interaction between elements split into a set of layers, while TCP/IP is the most popular model for implementing protocol stacks. It can be seen that TCP/IP is a variant of the OSI model in which a single layer performs the functions of multiple OSI layers. From a functional model perspective, both the TCP/IP model and the OSI model are the same.
Additional Reading
1. Debbra Wetteroth. OSI Reference Model for Telecommunications. New York: McGraw-
Hill, 2001.
2. Uyless D. Black. OSI: A Model for Computer Communications Standards. Facsimile ed.
Upper Saddle River, NJ: Prentice Hall, 1990.
3. American National Standards Institute (ANSI). ISO/IEC 10731:1994, Information
Technology: Open Systems Interconnection: Basic Reference Model: Conventions for the
Definition of OSI Services. ISO/IEC JTC 1, 2007.
4. Charles Kozierok. The TCP/IP Guide: A Comprehensive, Illustrated Internet Protocols
Reference. San Francisco, CA: No Starch Press, 2005.
Chapter 11
What Is NMS?
This chapter is intended to provide the reader with a basic understanding of the network management system (NMS). At the end of this chapter you will have a good understanding of the components in a legacy NMS, some of the open source products that implement the full or partial functionality of these components, and where the network management system fits in the telecom architecture.
11.1 Introduction
The network management layer in the TMN model is between the element man-
agement layer and service management layer. It works on the FCAPS (fault, con-
figuration, accounting, performance, and security) data obtained from the element
management layer. The FCAPS functionality is spread across all layers of the TMN
model. This can be explained with a simple example. Consider fault data, which involves fault logs generated at the network element layer. These logs are collected
from the network element using an element management system (EMS). The EMS
will format the logs to a specific format and display the logs in a GUI (graphical user
interface). Each log is generated in the network element to notify the occurrence
of a specific event. Fault logs from different network elements are collected at the
network management system in the network management layer. Logs correspond-
ing to different events on different network elements are correlated at the NMS.
An event handler implemented at the NMS can take corrective actions to rectify
a trouble scenario. Since this is a network management layer functionality let us
explain it in more detail with an example.
When the connection between a call processing switch/call agent (CA) and a
media gateway (MGW) goes down due to installation failure of a software module
on the MGW, a set of event logs is generated. The CA will create a log showing loss of connection with the MGW, and the MGW, which is a different network element, will generate its own log specifying loss of connection with the CA. There will also be logs in the MGW specifying failure of a software module on the MGW, and many other logs corresponding to the status of the links that connect the CA and MGW. While at the EMS level only data from a specific element like the CA or the MGW is collected, at the NMS level log data from both these elements is available. So the event logs from different network elements can be correlated; in this scenario the logs from the CA and MGW are correlated and the actual problem can be identified, which is not possible at the EMS level with just a connection-down log at the CA. Knowledge of the actual problem at the NMS level leads to the capability of creating applications that can take corrective actions when a predefined problem occurs.
The fault logs generated at the network element are also used in layers above the network layer. The time duration for which a link is down can be used as input information for a service assurance application in the service layer, and this can be used to provide a discount in the billing module in the business management layer. There is a separate chapter that details the functions handled in the network management layer; this chapter is more about giving the reader a good understanding of a network management application and the usual modules required while implementing a network management system.
Once the resources are discovered and the historical data downloaded, the NMS application is ready to perform maintenance level activities. The maintenance activities or network functions are performed with the downloaded data. These functions include network provisioning, troubleshooting of network issues, maintenance of configured network elements, tracing the flow of data across multiple network elements, programming the network for optimal utilization of elements, and so on. During a connection loss between the NMS and an already managed element, the NMS will attempt to re-establish the connection at regular intervals and synchronize the data when the connection is restored. A rediscovery of the network resources and their states will also be performed at regular intervals to keep the most recent information at the NMS.
These building blocks and the open source components that can be used in
making these blocks are discussed in detail in the sections that follow. The data col-
lection components can collect data directly from the network elements or from an
EMS. In legacy NMS, solutions and products available from leading NMS vendors,
the data collection was done usually from the network elements with interfaces that
permit collection of data from an EMS or other NMS solution. Use of standard
interface and shared data models as in next generation has led to much reuse of the
NMS functional components.
A network management system will also have a database for storing the raw
or formatted data. The presence of a database avoids having to re-query the network element from the NMS for historical data. The historical data can be stored in a full-fledged database against which SQL-based commands are executed to store and retrieve data, or it can be maintained as flat files or compressed flat files.
The SNMP agent at the NMS uses commands like "Get" and "Get-Next" to browse the data in the network element. It should be kept in mind that not all network elements can respond to SNMP queries from the agent on the NMS. The network element should maintain a MIB (management information base) with configuration details and also generate logs/events as SNMP traps that can be read by the agent.
There are many protocols that can be used for interaction between the management application and the network element. SNMP, as the name signifies, is simple to implement and maintain and has been the most preferred network management protocol. With new requirements to support XML (extensible markup language) based data, which can be intercepted and parsed by any third-party application with sufficient authentication, XML agents based on XML protocols like XMLP (XML protocol) and SOAP (simple object access protocol) are becoming popular. SOAP is very effective for Web-based network management. TL1 is another protocol that is popular for providing a command line based network management interface. For file transfers there is no better protocol than FTP (file transfer protocol). So the protocol agent available in the NMS data collection component mainly depends on the type of application for which the NMS will be put to use. Most of the popular NMS products have support for multiple protocols. Some of the open source SNMP agents are the WestHawk SNMP agent, NetSNMPJ, and the IBM SNMP Bean Suite. Similarly, a popular open source FTP agent is edtFTPJ. Most of these open source products are built in the Java programming language and can be downloaded from sourceforge.net.
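To make the polling interaction concrete, here is a hedged sketch of an SNMP GET using SNMP4J, an open source Java SNMP library different from the ones named above, chosen only because its API is compact. The target address, community string, and polled OID (sysDescr.0) are example values.

```java
import org.snmp4j.CommunityTarget;
import org.snmp4j.PDU;
import org.snmp4j.Snmp;
import org.snmp4j.event.ResponseEvent;
import org.snmp4j.mp.SnmpConstants;
import org.snmp4j.smi.GenericAddress;
import org.snmp4j.smi.OID;
import org.snmp4j.smi.OctetString;
import org.snmp4j.smi.VariableBinding;
import org.snmp4j.transport.DefaultUdpTransportMapping;

/** Sketch: poll one MIB variable (sysDescr.0) from a network element. */
public class SnmpGetExample {
    public static void main(String[] args) throws Exception {
        Snmp snmp = new Snmp(new DefaultUdpTransportMapping());
        snmp.listen();

        CommunityTarget target = new CommunityTarget();
        target.setAddress(GenericAddress.parse("udp:192.0.2.10/161")); // example element
        target.setCommunity(new OctetString("public"));                 // example community
        target.setVersion(SnmpConstants.version2c);
        target.setRetries(2);
        target.setTimeout(1500);

        PDU pdu = new PDU();
        pdu.setType(PDU.GET);
        pdu.add(new VariableBinding(new OID("1.3.6.1.2.1.1.1.0")));     // sysDescr.0

        ResponseEvent event = snmp.send(pdu, target);
        PDU response = event.getResponse();
        System.out.println(response == null ? "Request timed out"
                                            : response.getVariableBindings());
        snmp.close();
    }
}
```

A Get-Next or bulk walk over a table OID would follow the same pattern, changing only the PDU type and iterating over the returned variable bindings.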
[Figure: Platform components — logging services, security/encryption services, and object persistence services]
It should be understood that the NMS components discussed in this chapter are a logical grouping of functionality, and there will not be a specific process or package in an NMS application with these component names. The functionalities in the NMS are grouped under some main headings to explain the essential building blocks. Most open source packages for NMS are implemented in Java. The platform components can also include libraries, and not just service packages like the examples discussed.
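As a small illustration of a logging platform component, here is a sketch using the Log4J 1.2 API (one of the open source components listed in the additional reading for this chapter); the class name, element address, and log messages are invented for the example.

```java
import org.apache.log4j.BasicConfigurator;
import org.apache.log4j.Logger;

/** Sketch: using Log4J as the logging service of an NMS platform component. */
public class DiscoveryTask {
    private static final Logger LOG = Logger.getLogger(DiscoveryTask.class);

    public static void main(String[] args) {
        BasicConfigurator.configure();   // simple console appender for the example
        LOG.info("Starting resource discovery cycle");

        boolean reachable = pollElement("192.0.2.10");   // example element address
        if (reachable) {
            LOG.debug("Element 192.0.2.10 responded to poll");
        } else {
            LOG.warn("Element 192.0.2.10 unreachable; will retry next cycle");
        }
    }

    // Placeholder for the real polling logic of the data collection component.
    private static boolean pollElement(String address) {
        return true;
    }
}
```

In a real deployment the appenders and log levels would come from a configuration file rather than BasicConfigurator, so operators can redirect NMS logs without code changes.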
The management function components implement functions, such as fault and performance correlation, using the data collected from the network elements. These management functions can be split into program segments or logical blocks that can be developed as individual components. There are multiple open source components that offer a set or subset of the management functions in this component.
Some of the sample blocks in management function components are:
Open source applications like "OpenNMS" provide all these functions and can be used as a framework for making a specialized NMS for a particular network. Open source applications that offer just one specific management functionality are also available, like "Mandrax" or "Drools" for rule-based fault correlation.
11.7 GUI Components
The GUI components are standard applications that convert the formatted data generated by the management function components into user-friendly graphical representations. For example, performance records like link utilization data for a day, monitored at regular intervals, are easier to track and compare when plotted as a graph. Along similar lines, the fault data will have a set of predefined fields, so when the logs have to be presented to the user, it is best to format the logs and show the log contents under a set of headers in a tabular, MS Excel-like GUI. The flow of data from the NE to the user through the GUI components is shown in Figure 11.4.
Some of the sample blocks in GUI components are:
[Figure 11.4: Flow of data from the NE to the user — NE → data collection components → management function components → GUI components → user]
[Figure 11.5: Sample performance chart — count of service outages (0 to 150) plotted against the services SSH, SMTP, HTTP, FTP, and ICMP]
2. Topology/map viewers: It is easier to locate a faulty element when the elements are displayed over a geographical map of the local area, a specific country, or in some cases the world map. Tools like "MICA" and "Picolo" can be used for implementing the same.
3. Performance chart viewers: The formatted performance records can be represented graphically using applications like "JFreeChart," "iReport," and "DataVision." These applications provide interfaces to represent performance data to the user in a graphical manner (a minimal JFreeChart sketch follows this list). The sample performance chart (see Figure 11.5) shows the count of service outages for a specific period of time against the specific service.
4. Fault browsers: Similar to performance data, fault data also needs to be presented in a user-friendly manner. The fault data generated at the network element is first parsed and formatted by the management function components, which obtain this data using the data collection components. This formatted data has a set of fields, and in the GUI it should be represented in such a way that the data can be filtered based on a specific field. A tabular representation is usually used for fault data display in the GUI.
5. User administration: An administration screen is a necessity to perform network related operations like adding a new node to monitor, deleting a node, clearing a fault record, adding comments to a log, defining a performance correlation, and so on. Authentication is required for the user to get access to the administration screen. Launching most NMSs will result in a window or Web page requesting a user name and password. Once the user is authenticated, then based on the group and permissions associated with the log-in ID, the user can access the administration screen and perform any of the operations that the user, based on privileges, is permitted to perform.
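Here is a hedged sketch of how a performance chart like Figure 11.5 could be produced with the open source JFreeChart library named above (using the JFreeChart 1.0.x-series API); the outage counts are made-up sample values.

```java
import java.io.File;
import org.jfree.chart.ChartFactory;
import org.jfree.chart.ChartUtilities;
import org.jfree.chart.JFreeChart;
import org.jfree.chart.plot.PlotOrientation;
import org.jfree.data.category.DefaultCategoryDataset;

/** Sketch: render a bar chart of service outage counts, as in Figure 11.5. */
public class OutageChart {
    public static void main(String[] args) throws Exception {
        DefaultCategoryDataset dataset = new DefaultCategoryDataset();
        // Made-up outage counts per monitored service.
        dataset.addValue(12, "Outages", "SSH");
        dataset.addValue(30, "Outages", "SMTP");
        dataset.addValue(95, "Outages", "HTTP");
        dataset.addValue(41, "Outages", "FTP");
        dataset.addValue(18, "Outages", "ICMP");

        JFreeChart chart = ChartFactory.createBarChart(
                "Service outages", "Service", "Count", dataset,
                PlotOrientation.VERTICAL, false, false, false);

        // Write the chart to a PNG file that the GUI can display or embed.
        ChartUtilities.saveChartAsPNG(new File("outages.png"), chart, 640, 400);
    }
}
```

In an NMS GUI the dataset would be filled from the formatted performance records produced by the management function components rather than from hard-coded values.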
The building blocks in the GUI components thus provide the interfaces that can be used during NMS design to build a user-friendly GUI. Most of the functional interface bundles can be downloaded, along with code, as open source modules at sourceforge.net.
◾◾ J2EE interface: This involves interface definitions based on Java for publishing the interactions permitted when working with the NMS. The J2EE interface is specifically for the Enterprise edition of Java, and open source implementation frameworks for the J2EE interface can be downloaded from the Internet. The "JBean" project provides the required interfaces for a Java-based implementation; the aim of this project is to utilize Java Beans and component architecture to easily assemble disparate software components into a single working program.
◾◾ CORBA interface: Different open source implementations of the common object request broker architecture (CORBA) specifications defined by the Object Management Group (OMG) are available for developing interfaces that enable northbound interface applications, written in multiple computer languages and running on multiple computers, to work with a particular NMS solution. openCCM is an example of an open source CORBA component model implementation.
◾◾ .NET interface: Microsoft's .NET framework is a popular platform for developing applications that are not written for a particular OS. .NET also provides interfaces for communication between disparate components or applications. Similar to Java, .NET has a good bundle of interfaces that can make interfacing an NMS with NBI applications much easier.
Some examples of the server that provides the platform for publishing these inter-
faces are:
The northbound interface can also consist of proprietary modules or adapters that convert data from the NMS to the format required by the application that wants to interact with the NMS. For example, the IBM Netcool network management product has a set of probes (generic, proprietary, and standards based), gateways, and monitors that can collect data from virtually any network environment, like OSS services or another EMS/NMS application. The gateway itself supports third-party databases and applications like Oracle, SQL Server, Informix, Sybase, Clarify, Siebel, Remedy, and Metasolv.
11.10 Conclusion
Most books on network management discuss the NMS functions, standards, and protocols without giving a clear picture, from an implementation perspective, of how the NMS architecture looks. This level of information is usually available in research publications created by developers who actually work on designing and coding the NMS. This chapter gives a solid base for understanding the components for making a network management system. It should be understood that there are enough open source applications available for download to build most of the components in an NMS. Some of these applications are also discussed in this chapter to avoid reinventing the wheel when developing an NMS for PoC (proof-of-concept) or other noncommercial purposes. Use of open source products for commercial development should be done carefully, after examining the license agreement of the product.
Additional Reading
There are no major book references for the material discussed in this chapter. As an additional reference, it is suggested to visit the Web pages of the open source components discussed in this chapter. This will give a better understanding of the specific components and how they fit into the complete NMS package.
1. openNMS: www.opennms.org
2. openORB: openorb.sourceforge.net
3. JacORB: www.jacorb.org
4. JBoss: www.jboss.org
5. JOnAS: jonas.objectweb.org
6. openCCM: openccm.objectweb.org
7. JFreeChart: www.jfree.org/jfreechart/
8. Log4J: logging.apache.org/log4j/1.2/index.html
9. BouncyCastle: www.bouncycastle.org
10. Hibernate: www.hibernate.org
11. edtFTPJ: www.enterprisedt.com/products/edtftpj
12. WestHawk: snmp.westhawk.co.uk
13. NetSNMPJ: netsnmpj.sourceforge.net
Chapter 12
NMS Functions
This chapter is intended to provide the reader with a basic understanding of the functions available in a network management system (NMS). At the end of this chapter you will have a good understanding of the categories under which network functions can be grouped in the eTOM (enhanced telecom operations map) model. This includes a discussion of the categories and how they map to actual functions in the NMS.
12.1 Introduction
The TMF (TeleManagement Forum) expanded the network and service manage-
ment layers in the TMN model to have more detailed mapping of the telecom oper-
ations and created the TOM model. Later the TOM model was revised by TMF to
have three main sections: SIP (strategy, infrastructure, and product), OFAB (opera-
tions, fulfillment, assurance, and billing), and enterprise management. This revised
model is called eTOM. The functions of the NMS mainly fall under the resource
management layer of eTOM that spreads across SIP and OFAB. Though the NMS
data can be used for planning phases in SIP, its functions can be directly mapped
to real-time network configuring and maintenance in OFAB. So this chapter will
mainly be about the network management functions from the categories of resource
management in OFAB of the eTOM model.
The following categories are used to discuss resource management functions:
◾◾ Resource provisioning
◾◾ Resource data collection and processing
◾◾ Resource trouble management
A set of resources is required to develop a service. The term resource, as used by the TMF, can be any type of hardware, software, or workforce used to develop the service, and it includes within its scope a network or an element in the network. Hence the resource management functions of the TMF map to the functions performed by the NMS. It should be understood that the NMS functions in this chapter are generic and would be available in most network management solutions. There are also network management functions that are suitable only for a particular kind of network, and high level capabilities that are implemented to satisfy specific requirements, neither of which falls in the generic category.
An example of a management function specific to a particular type of network
is HSS (home subscriber server) subscriber provisioning. This operation involves
creating, editing, and deleting customer/subscriber related records on the HSS.
Since the HSS is an element found only in an IP-based network, this provision-
ing functionality can be seen only in IP-based networks like IMS (IP multimedia
subsystem). A high level functionality that is implemented only based on customer
requirements is single-sign-on (SSO). Some NMS solutions where security levels do
not require separate log-ins for working on an EMS or a NE from NMS, then SSO
can be used to have a single user authentication for working with any applications
or subapplications within the NMS, including log-in to other devices or applica-
tions that can be launched from the NMS.
The main NMS functions performed as part of resource provisioning are:
◾◾ Allocate and install: The network elements that are allocated by the inventory management system to satisfy a service are discovered by the NMS, and the required software installation is performed. The way installation is usually automated is to connect to a server that has the required software packages and initiate a command that installs the necessary packages on the target network element, with or without a download of the packages to the target NE.
◾◾ Configure and activate: Once installation of the packages is complete, the network element is ready for configuring. When the modules in the installed
software are activated, they need to run using a set of predefined parameters
to satisfy the service specifications. These parameters are read from configu-
ration files. The process of setting these parameters before, during, and after
activation is known as configuring the network elements. When the configu-
ration involved at the preactivation stage is complete, the software modules
can be started or activated.
◾◾ Test: A necessary function of provisioning is running tests to ensure that the network and its elements are ready to offer a specific service. For example, if the element being provisioned is a call agent that provides call processing capability, then the test would involve running test scripts that simulate an actual call and verifying the call records generated against an expected record set created for the simulated call. A test case can even be a simple check to ensure that no errors were generated during a software installation, or a verification that the modules are running with the parameters specified as part of the configuration.
◾◾ Track and manage: The network provisioning operations need to be tracked and managed. Tracking involves ensuring that the various management functions involved in provisioning are executed properly. Tracking helps to identify errors or issues that are raised during provisioning so that they can be posted to an administrator for corrective action. Managing involves interaction with other modules; for example, if a particular element in a network is identified as faulty, this needs to be communicated to the inventory management system so that a replacement element can be allocated, and once network and service provisioning is complete, the order management system needs to be informed to close the request.
◾◾ Issue reports: The reports issued in network provisioning are quite different from the FCAPS reports in maintenance. The reports issued in network provisioning are intended to provide interested parties with information on the status of provisioning, the efficiency achieved with the current configuration parameters, the total time for provisioning, the elements that failed and the operations that caused the failures, and so forth. This means that the provisioning reports are customized reports for a specific audience, unlike a maintenance report.
◾◾ Recovery of element: Similar to allocation of resource, the unused resource
or element needs to be deallocated for use in satisfying a new order or as
a replacement for some faulty element in an existing order. The process of
issuing a deallocation request, getting proper authorization, performing the
deallocation and reallocating the same for some other order is performed as
part of recovery of element.
It is also important to have an understanding of the modules that interact with network provisioning. The main modules that interact with the network provisioning module are:
Many NMS product specifications use the term “discovery” instead of “audit.” In
these scenarios, the process of initial discovery is explicitly expressed using the term
“auto discovery.”
In addition to understanding the functions performed by the resource data
collection and processing (RDCP) module it is also important to have an under-
standing of the modules that interact with the RDCP module. The main modules
that interact with the RDCP module are:
◾◾ Customer raises an issue that needs to be handled in the network: When the customer has an issue with a subscribed service, he or she contacts a customer care number to report the problem. The customer care representative tries to resolve the service problem based on available data. If the service problem cannot be resolved using the operations available to the executive, then a trouble ticket is raised to track and resolve the issue. The ticket is then forwarded to the appropriate team (service operators, network operators, application developers, etc.) for analysis and resolution.
◾◾ Service team identifies a trouble condition: When certain parameters like the quality of service deteriorate, the service team isolates the issue and asks the network operations team to resolve it or to forward the request to the appropriate team for clearing the trouble condition. A ticket may be raised to track the issue.
◾◾ Network monitoring/maintenance professional raises a ticket: During the network monitoring operation, the network professional might encounter a trouble log that requires manual intervention at the site where the equipment is deployed. The monitoring professional can raise a trouble ticket and assign it to the concerned network operator for resolution of the trouble. Alternatively, when the trouble condition is easy to resolve and does not require participation from other teams, the network monitoring professional could issue commands to the network elements or take the necessary action to resolve the issue.
◾◾ NMS takes necessary action for a trouble condition: It is possible to program the NMS to take a predefined action when a specific event log generated for a network trouble condition is received. The action taken by the NMS could, for example, be a corrective command issued to the affected network element.
The NMS functions that are discussed in this section as part of resource trouble
management are:
◾◾ Report resource trouble: It is not necessarily the customer who reports a trouble condition. In most scenarios the customer might only have a service problem and not an actual resource trouble. An NMS that is used by the operator for monitoring resource trouble serves as the best report engine. The operator can use the event logs generated for a network trouble condition to take corrective action, or put predefined rules in place to generate trouble reports whenever a specific trouble triggers a log that is collected by the NMS.
◾◾ Localize resource trouble: A network trouble reported by a customer or identified by the upper business/service layer needs to be localized. Localization involves identifying the specific element in the network that caused the trouble condition. Only after the trouble condition is localized to the lowest possible level will it be possible to effectively resolve the issue. Localization is not just limited to identifying the network element that caused the trouble condition. For example, consider the scenario in which the customer reports degradation in quality of service and raises a ticket for the same. Localization starts with identifying the resources allocated from the inventory management system to offer the service. The resources offering the service can then be analyzed for utilization and KPI (key performance indicator) values. This leads to localization of the trouble to a specific network element. Next, the logs in the network element can be monitored, or historical logs analyzed, to identify the application in the network element causing the deterioration of service. The trouble ticket is then assigned to an appropriate team or operator to resolve the issue. The intent of the localization operation is to reduce the MTTR (mean time to repair) value by ensuring that a trouble condition is looked into by the correct team that can resolve the specific trouble as early as possible.
◾◾ Analyze resource trouble: Analysis of resource trouble is required first to localize a trouble condition and then to resolve it. The NMS is equipped with multiple tools to help an operator in debugging a trouble condition. Most NMS products offer these tools as test suites to automate testing of the
product. For example, the test suite of AdventNet Web NMS comprises simulation tool kits and utilities to perform functionality testing, scalability testing, performance testing, application monitoring, and client latency verification.
◾◾ Resolve resource trouble: Some trouble conditions will require a change in the code of an application running on the NE for resolution, while most trouble conditions can be fixed by performing some operation on the NE. The NMS has interfaces to issue commands on the network element, perform a backup/restore, switch between active and passive elements for high availability (HA), and so on, which could resolve or handle a network trouble condition.
Some of the modules that interact with the resource trouble management module are:
The modules that interact with the network performance management module are:
12.7 Conclusion
There are multiple modules in an NMS application. Some vendors supply individual
modules for managing specific network management functionality while other ven-
dors provide functions as a fully functional NMS bundle. From a holistic telecom
perspective, the network is just a resource or set of resources that operate to offer a
telecom service. Hence resource management in the eTOM process model includes
network management. This chapter uses the resource management functionalities
given in eTOM to discuss functional modules that make up a complete NMS pack-
age. Only a few of the NMS functions are discussed, with the intention of giving the reader a clear picture of what an NMS is and how it differs from an element management solution that manages a set of elements. Process-to-process interactions showing how the different NMS modules interact are also discussed, to show how the NMS modules can be implemented as independent components in line with SOA (service oriented architecture) and with the integration of open source components, which will be discussed later in this book under the development of a next generation network management solution.
Additional Reading
1. Lakshmi G. Raman. Fundamentals of Telecommunications Network Management. New
York: Wiley-IEEE Press, 1999.
2. Kundan Misra. OSS for Telecom Networks: An Introduction to Network Management.
London: Springer, 2004.
3. Alexander Clemm. Network Management Fundamentals. Indianapolis, IN: Cisco Press,
2006.
Chapter 13
NMS Products
This chapter provides the reader with a basic understanding of some of the most popular network management products. It is intended to give the reader practical exposure to actual network management products rather than the theoretical aspects of network management. This chapter presents the author's understanding of a few network management products, and the content is not reviewed or endorsed by the product owners. Hence the author and publisher take no position on the accuracy or completeness of the data presented in this chapter.
13.1 Introduction
There are many NMS products on the telecom market. These products can be gen-
erally classified under five main types. They are:
1. NMS framework for customization: These are products that provide generic fault, performance, and other functions under the FCAPS model. They provide a generic framework on which specialized applications can be created as customizations for the specific requirements of a client. An example product that falls in this category is OpenNMS. Being generic frameworks, products in this category are usually developed by integrating open source products. Hence the product itself is available for free, and clients are charged for services offered around the product, like bug fixing, maintenance, and customization. In addition to creating new applications on the framework, customization could also be simple look and feel changes (GUI), re-branding, license administration, or integrating multiple management solutions with the framework to offer a common GUI.
2. NMS package with add-ons: Products in this category are complete NMS suites with almost all the popular NMS functions. The product itself is charged for, and the add-ons are charged at a premium. Add-ons provide modules with functions that would not be available in the usual NMS products or that are offered as specialized function products by other vendors. An example of a product that falls in this category is IBM Netcool. The add-on could be a module with a set of standard adapters or a specialized function for managing IP networks.
3. Function specialized NMS: Products in this category are developed to perform a specific NMS functionality. For example, "Granite Inventory," a product of Telcordia, is intended to perform inventory management and not the general FCAPS functions. Though there is competition between vendors working on NMS frameworks and packages, these products don't directly compete with products developed to perform specialized functions. For inventory management, the competition for Telcordia comes from inventory management products from companies like Cramer, MetaSolv, NetCracker, and so on, not from products like IBM Tivoli, AdventNet, or OpenNMS.
4. Domain specialized NMS: Products in this category are for managing networks in a specific domain. For example, "Netspan," a product of Airspan, performs FCAPS functionality on WiMAX networks and is not a generic product for multiple domains. OEMs (original equipment manufacturers) who want to provide a specialized WiMAX NMS as part of their WiMAX product suite for service providers will opt to buy Netspan rather than go with a generic NMS product.
5. Hybrid type: Several combinations of the four NMS product types already discussed are also possible. This type has been termed hybrid. For example, IBM Tivoli is an NMS framework that is developed without using open source products, and hence the product is not free. Another example of the hybrid type, again from IBM, is Tivoli Netcool IP Multimedia Subsystem (IMS) Manager. Though the name IMS Manager suggests a domain specialized NMS, this product is built over a framework/package product and can also be interpreted as an add-on to that product rather than a separate NMS product.
It should be kept in mind that the NMS product types discussed above are merely intended to convey the concept; there is no such formal classification of NMS products in the telecom space.
13.2 OpenNMS
OpenNMS is a network management solution developed in the open-source
model. So the application and its source can be downloaded for free and distributed
under terms of the GPL/LGPL license. OpenNMS is a functionality rich NMS and
can be used for managing both enterprise and telecom networks. Its features like
automated network discovery, configuration using XML, and support for multiple operating systems make it one of the most popular NMS products, one that can be compared with enterprise grade NMS products from leading network equipment manufacturers.
(Figure 13.1 depicts OpenNMS collecting SNMP data, performing discovery, polling services and tasks such as ICMP, HTTP, DNS, SMTP, IMAP, and FTP, storing events and data in Postgres, and providing threshold handling, action handling, and user notification.)
After discovery of elements in the network, OpenNMS discovers the services
running on the discovered elements. The status of these services is polled and updated by OpenNMS. Some of the services that can be identified by OpenNMS
(see Figure 13.1) are Citrix Service, DHCP Service, DNS Service, FTP Service,
HTTP Service, ICMP Service, IMAP Service, LDAP Service, POP3 Service,
SMTP Service, SSH Service, and Telnet Service.
OpenNMS runs as a set of daemon processes, each handling a specific task. Each of these daemon processes can spawn multiple threads that work on an event-driven model. This means that the threads associated with a daemon process listen for events that concern them and also submit events to a virtual bus. Some of these processes are discussed next.
There are many more daemon processes like OpenNMS.Eventd that is con-
cerned with writing and receiving all the event information, OpenNMS.Trapd to
handle asynchronous trap information, and OpenNMS.Threshd process to manage
threshold settings. These processes use numerous open source products like joesnmp
for SNMP data collection, log4j for event generation, postgresql as DBMS inter-
face, and (JRobin) RRDTool to store graph data, just to name a few.
OpenNMS got a Gold Award for “Product Excellence in 2007” in the Network
Management Platform category, against competing products like OpenView of HP
◾◾ Fault management: The usual NMS capabilities for alarm monitoring are provided, such as filtering a particular alarm, suppressing alarms that are not of interest to the user, and correlating alarms. Segregation of alarms based on severity and on the element where the alarm originated can be performed from the AdventNet Web GUI. Clearing and acknowledging an alarm is also supported.
◾◾ Configuration management: Network elements are automatically discovered and their topology is created. An audit is performed on the discovered elements and their attributes to get status updates that are not propagated through asynchronous notifications from the devices. Inventory management capability is also available in AdventNet, with information on all devices in the network.
◾◾ Performance management: The performance data collected from the network are aggregated and analyzed to generate graphical reports in the AdventNet Web GUI. The filtering functionality helps to customize performance reports. Thresholding is also supported, and AdventNet gives the user graphical
1. Mediation layer: This layer has components that provide southbound inter-
face capabilities. Multiple southbound protocols are supported for dynamic
deployment. TCP, HTTP, RMI, and CORBA are some of the transports
supported with Java API and XML interface for application development.
(A figure here shows the layers of the AdventNet Web NMS architecture, including the client layer and the mediation layer.)
Some of the project tools offered by AdventNet for Web NMS are:
13.4 HP OpenView
OpenView was a product of Hewlett-Packard. It can be used for network, system, data, application, and service management. HP OpenView can be described as a suite of software applications with multiple optional modules/smart plug-ins and third-party applications. HP OpenView was rebranded in 2007. Its open architecture and multiplatform support help in out-of-the-box integration and rapid deployment, leading to a quick return on investment (ROI).
◾◾ Event adaptation facility: The events generated from the managed nodes, and events that can be collected from other network managers, have a different format from what is expected by the Tivoli framework and its enterprise solutions. The event adaptation facility provides the capability to convert events from external sources into a format compatible with Tivoli.
◾◾ Application extension facility: This capability allows users to customize Tivoli
products based on requirements at a specific site.
◾◾ Manage accounts: Tivoli product user and group accounts can be created,
modified, or deleted. A field in the user account can be edited and the changes
applied to a target system or multiple systems in the network.
◾◾ Software maintenance: This involves installing new software or upgrading existing software on elements in the network. It is possible to get a list of installed software with details on software versions and to upgrade systems running old releases.
◾◾ Third-party application management: Applications like “Lotus Notes” can be
launched with Tivoli tools along with management data. Monitors and task
libraries help to maintain these applications in Tivoli environment.
◾◾ Trouble management: While monitoring the network, if Tivoli identifies a
system crash then commands to take corrective actions can be performed or
a page message can be issued to the operator.
◾◾ Inventory management: The assets of an enterprise can be tracked and man-
aged using Tivoli products. It is possible to perform operations, administra-
tion, and maintenance on these assets. Centralized control of enterprise assets
is made possible with Tivoli products.
◾◾ Storage management: Managing the storage devices and the data in it is an
integral part of enterprise management. In addition to products for service,
network, and application management, Tivoli also has products for storage
management.
IBM Tivoli family of products has specialized products for all aspects of man-
agement. In storage management alone IBM Tivoli has multiple products like
Tivoli Storage Manager, Tivoli Storage Manager FastBack, Tivoli Storage Manager
FastBack Center, Tivoli Storage Manager HSM for Windows, Tivoli Storage
Optimizer z/OS, and Tivoli Storage Process Manager. The capabilities of some of
the network management products in the IBM Tivoli family are discussed next.
node and sends it to the monitoring server. Tivoli Monitoring also has the
capability to detect problem scenarios and take corrective action. IBM Eclipse
helps the server by providing help pages for the monitoring agents that aid the
users of Tivoli Monitor.
2. Tivoli Netcool/OMNIbus: This is used by many service providers as an inte-
gration platform acting as a manager-of-managers offering a single console for
data from network managers like HP OpenView, NetIQ, and CA Unicenter
TNG. It supports operation on Sun Solaris, Windows, IBM AIX, HP-UX,
and multiple flavors of Linux platforms. Event handling is the main manage-
ment function in Netcool/OMNIbus. Its interaction with other applications
that facilitates smooth integration is made possible using probes and gateways.
Probes connect to an event source and collect data from it for processing. The dif-
ferent types of Netcool/OMNIbus probes are:
◾◾ Device probes: These probes connect to a device like router or switch and
collect management data.
◾◾ Log file probes: The device to be monitored or target system writes event data
to a log file. This log file on the target system is read by Netcool/OMNIbus
probe to collect data.
◾◾ Database probes: A table in the database is used as the source to provide
input. Whenever an event occurs this source table gets updated.
◾◾ API probes: These probes acquire data using APIs in the application from which data is collected. The application here is another telecom data management application for which Netcool/OMNIbus is acting as a manager-of-managers.
◾◾ CORBA probes: These probes use CORBA interfaces to collect data from a
target system. The interfaces for communication are published by vendors in
IDL files.
◾◾ Miscellaneous probes: In addition to the generic common probes discussed
above, there are probes for specific application or to collect data using meth-
ods other than the ones discussed. These are called miscellaneous probes and
an e-mail probe to collect data from a mail server is an example of a miscel-
laneous probe.
While probes are used to collect data from a target, a gateway is used when Netcool/OMNIbus itself acts as the source for another application. For example, a CRM (customer relationship management) application uses the Helpdesk gateway to get alerts from this Tivoli product.
of provisioning blocks using the development kit provided with Tivoli provi-
sioning manager. Multiple elements in the network can be provisioned with a
single work flow. The automation work flows can be created across domains.
It follows service oriented architecture for more re-use of functionality. It is
more of an IT infrastructure provisioning application rather than just a sys-
tem provisioning manager. It supports operation on AIX, Windows, Linux,
and Sun Solaris. The graphical user interface can be customized to generate
role based views and a provisioning work flow can be re-used just like a tem-
plate for future provisioning operations.
2. Tivoli Configuration Manager: This product handles secure inventory management and software distribution. Software distribution mainly involves identifying missing patches on the client machine and installing those patches. The product even helps in building a patch deployment plan and can perform package installation in a distributed environment. The inventory module in the product identifies the hardware and software configuration of devices in the IT infrastructure, with functions to change system configuration. The secure environment is implemented using multilevel firewalls.
13.7 Conclusion
There are many NMS products and frameworks available in the market. The official guides published by the vendors for some of these products span 300 to
400 pages. A complete explanation of the features and architecture of any specific NMS product was excluded for this reason, and a general overview of the most popular frameworks/products is given instead in this chapter. It can be seen from the
Additional Reading section next that some of the products discussed in this chapter
have books written for the sole purpose of understanding a specific capability of
that product alone. This chapter gives the reader familiarity with products from the
leading telecom management players.
Additional Reading
1. OpenNMS Documentation www.opennms.org/index.php/Docu-overview and
AdventNet Web NMS Technical Documentation www.adventnet.com/products/web-
nms/documentation.html
2. Jill Huntington-Lee, Kornel Terplan, and Jeffrey A. Gibson. HP’s OpenView: A Practical
Guide. New York: McGraw-Hill Companies, 1997.
3. Yoichiro Ishii and Hiroshi Kashima. All About Tivoli Management Agents. Austin, TX:
IBM Redbooks, 1999.
4. IBM Redbooks. An Introduction to Tivoli Enterprise. Austin, TX, 1999.
5. Tammy Zitello, Deborah Williams, and Paul Weber. HP OpenView System
Administration Handbook: Network Node Manager, Customer Views, Service Information
Portal, OpenView Operations. Upper Saddle River, NJ, Prentice Hall PTR, 2004.
Chapter 14
SNMP
Chapter Six of this book had a section on SNMP that gave a one-page overview of the protocol. SNMP is a widely used protocol, and even the latest NMS solutions provide support for SNMP, considering that most existing network elements support management only through SNMP. This chapter is intended to give the reader more detailed coverage of SNMP, including its different versions, which show how this management protocol evolved. SNMP MIB has been excluded from this chapter.
14.1 Introduction
SNMP is one of the most popular management protocols used for managing the Internet. IETF initially suggested the Simple Gateway Monitoring Protocol (SGMP) for Internet standardization, but SGMP was later replaced with the Simple Network Management Protocol (SNMP). While SGMP was mainly defined for managing internet routers, SNMP was defined to manage a wide variety of network elements. It should be understood that SNMP is not just an enhancement to SGMP; the syntax and semantics of SNMP are different from those of SGMP.
The components in the network element to be managed are referred to as man-
aged objects that are defined in the SNMP management information base (MIB).
In order to manage the object with SNMP it must follow a certain set of rules
as mentioned in the structure of management information (SMI). The object is
defined using predefined syntax and semantics. Each object can have more than
one instance and the instance can have a set of values.
SNMP has three versions. They are SNMPv1, SNMPv2, and SNMPv3. This
chapter will discuss each of these versions in detail. Please refer to the requests for
comments (RFCs) mentioned in the Additional Reading section for more informa-
tion. SNMP mainly consists of two entities: the management station and the SNMP
agent that communicate with each other. The management station is the element or
network manager that collects data about the network element while the agent runs
on the network element and sends management information about the element to
the management station.
The basic features of this protocol that make it so popular are:
It can be seen that the main design philosophy of SNMP was to keep it SIMPLE.
14.2 SNMPv1
The network management framework associated with SNMPv1 is composed of four
RFCs from IETF. They are RFC 1155, RFC 1157, RFC 1212, and RFC 1213.
◾◾ RFC 1155 deals with the syntax and semantics for defining a managed object
and is also referred to as structure of management information (SMI).
◾◾ RFC 1157 is dedicated to the details of SNMP including its architecture,
object types, and operations.
◾◾ RFC 1212 gives the format for defining MIB modules.
◾◾ RFC 1213 defines the basic set of objects for network management, which is
also known as MIB–II.
(A figure here shows the SNMPv1 message exchange: the manager sends GetRequest, GetNextRequest, and SetRequest PDUs to the agent, and the agent returns GetResponse PDUs and issues Trap notifications.)
◾◾ The get operation of the GetRequest PDU is used by the manager to retrieve
the value of object instances from an agent.
◾◾ The agent uses the response operation of the GetResponse PDU to respond
to the get operation with the object instances requested. If the agent cannot
provide all the object instances in the requested list then it does not provide
any value.
◾◾ The GetNext operation of the GetNextRequest PDU is used by the manager
to retrieve the value of the next object instance in the MIB. This operation
assumes that object values in the MIB can be accessed in tabular or sequen-
tial form.
◾◾ The set operation of the SetRequest PDU is used by the manager to set the
values of object instances in the MIB.
◾◾ The trap operation (Trap PDU) is used by agents to asynchronously notify
the manager on the occurrence of an event.
Let us check how error status works, taking the example of a GetRequest PDU sent to the agent.
◾◾ If the agent encounters errors while processing the GetRequest PDU from the
manager then error status is “genError.”
◾◾ When the agent receives a GetRequest PDU from the manager, the agent
checks whether the object name is correct and available in MIB present in
the network element being managed. When the object name is not correct
or when the object name corresponds to an aggregate object type (not sup-
ported), then the error status is “noSuchName.”
◾◾ If the response for the GetRequest PDU exceeds the local capacity of the
agent then error status is “tooBig.”
◾◾ If the agent is able to successfully process the request and can generate a
response without errors, then the error status is “noError.”
The Trap PDU has the format shown below:
◾◾ Enterprise tag is used to identify the object that generated the trap. It is the
object identifier for a group.
◾◾ Agent Address is the IP address of the agent that generated the trap.
◾◾ Generic Trap Type is a code value used to identify the generic category in
which the trap can be associated. Example: coldStart, warmStart, linkup,
linkDown, and so on.
◾◾ Specific Trap Code provides the implementation specific trap code.
◾◾ Timestamp is used for logging purpose to identify when the trap occurred.
◾◾ Variable Binding provides a list of variable (object instance name)/value pairs.
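As a companion to the field list above, the following hedged sketch shows how a trap-style notification could be emitted with the third-party pysnmp library; the manager address 192.0.2.20 and the community string are placeholders, and pysnmp maps the coldStart notification onto a community-based trap when mpModel=0 is selected.

from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, NotificationType, ObjectIdentity,
                          sendNotification)

# Send a coldStart trap to a manager listening on UDP port 162 (placeholder address).
error_indication, _, _, _ = next(
    sendNotification(SnmpEngine(),
                     CommunityData('public', mpModel=0),   # SNMPv1 community-based trap
                     UdpTransportTarget(('192.0.2.20', 162)),
                     ContextData(),
                     'trap',                                # unconfirmed notification
                     NotificationType(ObjectIdentity('SNMPv2-MIB', 'coldStart')))
)
if error_indication:
    print(error_indication)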
The SNMP PDU is constructed in ASN.1 (Abstract Syntax Notation One) form, and the message to be exchanged between the manager and agent is serialized using BER (basic encoding rules).
The SNMP message consists of a message header and PDU. The message header
has a version number to specify the SNMP version used and a community name
to define the access environment. The community name acts as a weak form of
authentication for a group of managers. Figure 14.2 shows the method of creating
an SNMP message for sending and how it is handled at the receiving end.
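To make the header-plus-PDU structure concrete, the sketch below hand-assembles an SNMPv1 GetRequest message for sysDescr.0 using only the Python standard library. It is a simplified illustration (short-form lengths and single-octet integers only), not a production encoder, and the agent address is a placeholder.

import socket

def tlv(tag: int, payload: bytes) -> bytes:
    # BER type-length-value with short-form length; fine while payload < 128 octets.
    return bytes([tag, len(payload)]) + payload

def ber_int(value: int) -> bytes:
    # Single-octet, non-negative integers only; enough for this illustration.
    return tlv(0x02, bytes([value]))

# Object identifier 1.3.6.1.2.1.1.1.0 (sysDescr.0): first two sub-ids packed as 1*40+3 = 0x2B.
oid_sysdescr_0 = bytes([0x2B, 0x06, 0x01, 0x02, 0x01, 0x01, 0x01, 0x00])

varbind      = tlv(0x30, tlv(0x06, oid_sysdescr_0) + tlv(0x05, b""))            # OID + NULL value
varbind_list = tlv(0x30, varbind)
get_request  = tlv(0xA0, ber_int(1) + ber_int(0) + ber_int(0) + varbind_list)   # request-id, error-status, error-index
message      = tlv(0x30, ber_int(0) + tlv(0x04, b"public") + get_request)       # version 0 (SNMPv1), community, PDU

# Send the serialized message to an agent (placeholder address) and print the raw reply.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2.0)
sock.sendto(message, ("192.0.2.10", 161))
print(sock.recv(1500).hex())   # raises socket.timeout if no agent responds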
In addition to the agent and manager, another system that is commonly referred to in discussions on SNMP is the proxy. A proxy converts messages from one SNMP version to another, or converts messages from SNMP to some other protocol and vice versa. A proxy is used when the manager and agent support two different protocols or different versions of the same protocol.
Security in SNMPv1 is implemented using community strings. The authenti-
cation schema shown in Figure 14.3 is a filter module that checks the community
string. Community name or community string is a set of octets used to identify a
member in the community. An application can have more than one community
name and only applications in the same community can interact. Hence commu-
nity helps to pair two application entities.
Figure 14.2 SNMPv1 message handling: (a) sending of an SNMP message, in which the data is parsed against the SNMP definitions, source and destination addresses are added, and the message is serialized using BER; (b) receiving of an SNMP message, in which the incoming message is parsed, the version and authentication data are verified (the message is discarded if they are not correct), the source and destination are checked for permission to exchange messages, and the PDU is parsed to extract the data, optionally generating a trap.
The SNMP agent has a community profile that defines the access permissions
to the MIB. The agent can view only a subset of managed objects in a network
element and the agent can have read or write access to the visible objects. A
generic “public” community usually offers read only access to objects. An agent
can be associated with the public community to access the read only objects in
the community.
(Figure 14.3 depicts the authentication schema sitting between the manager and the SNMP agent, while Figure 14.4 shows a community containing Agent A with community profile Profile A and Agent B with Profile B.)
Let us describe the access policy shown in Figure 14.4. If "A" corresponds to the agent for managing a Cisco element and "B" corresponds to the agent for managing a
Nortel element, then the manager has access to the community having both a Cisco
and Nortel agent. Cisco agent and Nortel agent have a different community profile,
which is “Profile A” and “Profile B.” So the Cisco agent will be able to view only
the managed objects defined by its profile and not Nortel managed objects that
have a different profile, but the manager can view both Cisco and Nortel managed
objects. There is also an entity called a manager of managers (MoM) in the OSI model. The MoM is expected to process data that is managed by multiple managers; that is, the MoM must be able to view managed objects from multiple communities. For example, if manager-1 can view community-1 and manager-2 can view community-2, then an MoM can view both community-1 and community-2.
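The community and profile pairing in this example can be summarized with a small, purely illustrative table; the object names below are hypothetical and the check functions are a toy model of the visibility rules, not SNMP code.

# Toy model of the access policy example; profile contents are hypothetical.
profiles = {
    "Profile-A": {"cisco.sysDescr", "cisco.ifTable"},
    "Profile-B": {"nortel.sysDescr", "nortel.ifTable"},
}
community = {"Agent-A": "Profile-A", "Agent-B": "Profile-B"}

def agent_can_view(agent: str, obj: str) -> bool:
    # An agent sees only the managed objects defined by its own profile.
    return obj in profiles[community[agent]]

def manager_can_view(obj: str) -> bool:
    # The manager has access to the whole community, i.e. the union of both profiles.
    return any(obj in objects for objects in profiles.values())

print(agent_can_view("Agent-A", "nortel.sysDescr"))   # False
print(manager_can_view("nortel.sysDescr"))            # True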
14.3 SNMPv2
The major weakness in SNMPv1 was the lack of an elaborate security framework.
The main intent of having SNMPv2 was to overcome this security issue. While
SNMPv1 and SNMPv3 are IETF standards, SNMPv2 did not get fully standard-
ized and only draft versions are available. In discussions and NMS product specifi-
cations on SNMPv2 there are four popular versions that come up. They are:
◾◾ Large data retrieval possible using Get-Bulk operator, leading to better effi-
ciency and performance.
◾◾ Improved data definition with additions to SMI, new MIB objects, and
changes to syntax and semantics of existing objects.
The Get, GetNext, and Set operations used in SNMPv1 are exactly the same as
those used in SNMPv2. They perform the action of getting and setting of managed
objects. Even the message format associated with these operations has not changed
in SNMPv2.
Though the Trap operation in SNMPv2 serves the same function as in SNMPv1, a different message format is used for the trap operation in SNMPv2.
SNMPv2 defines two new protocol operations:
◾◾ GetBulk: This operation is used to retrieve a large bulk of data. This opera-
tion was introduced to fix the deficiency in SNMPv1 for handling blocks of
data like multiple rows in a table. The response to a GetBulk request will try
to fill in as much of the requested data as will fit in the response message. This means that when the response message cannot hold all the requested data, it will return partial results (a usage sketch follows this list).
◾◾ Inform: This operation was introduced to facilitate interaction between net-
work managers. In a distributed environment where one manager wants to
send a message to another manager, then the Inform operation can be used. It
is also used for sending notifications similar to a trap. Inform is a “confirmed
trap” operation.
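As noted in the GetBulk item above, bulk retrieval fetches many object instances per request. The following minimal sketch uses the third-party pysnmp library; the non-repeaters value 0, the max-repetitions value 25, and the agent address are all illustrative placeholders.

from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, bulkCmd)

# GetBulk-based retrieval of the ifDescr column (SNMPv2c); addresses are placeholders.
for error_indication, error_status, error_index, var_binds in bulkCmd(
        SnmpEngine(),
        CommunityData('public'),                      # defaults to SNMPv2c
        UdpTransportTarget(('192.0.2.10', 161)),
        ContextData(),
        0, 25,                                        # non-repeaters, max-repetitions
        ObjectType(ObjectIdentity('1.3.6.1.2.1.2.2.1.2')),   # ifDescr column (numeric OID)
        lexicographicMode=False):                     # stop at the end of the requested subtree
    if error_indication or error_status:
        break
    for name, value in var_binds:
        print(f'{name.prettyPrint()} = {value.prettyPrint()}')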
The GetBulk PDU in SNMPv2 has a different format to handle bulks of data:
SNMPv2 also introduced “Report PDU.” The intent of the report PDU is to
account for administrative requirements. A report PDU could be used for faster
recovery by sending information for time synchronization. It is not defined in
SNMPv2 PDU definitions listed in RFC 1905. Implementers can define their own
usage and semantics to work with “Report” PDU in SNMPv2.
SNMPv2 mainly supports five transport services for transferring management
information. This association of SNMPv2 with a transport service is referred to as
snmpDomains and is described in RFC 1906. The domains are:
snmpDomain Description
14.4 SNMPv3
The five most important RFCs of SNMP Version 3 are:
message, the privacy is obtained by encrypting the message at the sender and
decrypting the message at the receiver.
USM ensures that data security is achieved by:
Preventing unauthorized access to data on transit from sender to
receiver.
Ensuring that the sender/receiver is valid and authorized to send/
receive a particular message.
Ensuring that the message arrives in a timely manner and no re-order-
ing is done by an unauthorized entity.
Secret keys are used by the user for authentication and privacy. The authen-
tication protocols suggested for use are MD5 (Message Digest 5) and SHA-1
(Secure Hash Algorithm 1) and the privacy protocol suggested for encryption
and decryption is DES (Data Encryption Standard). Other authentication and privacy protocols can also be used (a configuration sketch follows this list).
6. RFC 3415: View-Based Access Control Model (VACM): VACM details admin-
istrative capabilities involving access to objects in the MIB. The community
string that was discussed in SNMPv1 and SNMPv2 is also used in SNMPv3.
Here the community string is used to identify the data requester and its location,
and to determine access control and MIB view information. This is achieved in SNMPv3 using a more dynamic access control model that is easy to administer.
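As referenced in the USM discussion above, the sketch below shows how authentication and privacy parameters could be supplied for an SNMPv3 request using the third-party pysnmp library. The user name, pass phrases, and agent address are placeholders, and SHA-1/DES are chosen simply because they are the protocols suggested in the text.

from pysnmp.hlapi import (SnmpEngine, UsmUserData, UdpTransportTarget, ContextData,
                          ObjectType, ObjectIdentity, getCmd,
                          usmHMACSHAAuthProtocol, usmDESPrivProtocol)

# SNMPv3 GET with authentication (SHA-1) and privacy (DES); all credentials are placeholders.
error_indication, error_status, error_index, var_binds = next(
    getCmd(SnmpEngine(),
           UsmUserData('nmsOperator', 'auth-pass-phrase', 'priv-pass-phrase',
                       authProtocol=usmHMACSHAAuthProtocol,
                       privProtocol=usmDESPrivProtocol),
           UdpTransportTarget(('192.0.2.10', 161)),
           ContextData(),                              # contextEngineId/contextName could be set here
           ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysName', 0)))
)
if error_indication:
    print(error_indication)
else:
    print(var_binds[0].prettyPrint())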
(A figure here shows the structure of an SNMPv3 entity, composed of an SNMP engine and SNMP applications.)
◾◾ The message version is used by the message processing unit to identify the SNMP version used in the request or response message. This ID is critical for the dispatcher in identifying the version of the message.
◾◾ The message ID, message maximum size, message flags, and message security model are used by the message processing system to define unique properties of the message. This set of tags is specific to the SNMPv3 message structure and would not be required when working with other versions of SNMP.
◾◾ The message security parameters are used by the security subsystem. There is
a separate tag to specify the security model that is USM for SNMPv3.
◾◾ The context engine ID, context name, and PDUs are encapsulated into a collection called the scoped PDU. Next let us discuss the concept of "context" in SNMPv3. A collection of objects is called an SNMP context. Each of these collections is accessible by an SNMP entity. A context can be a collection based on a network element, a logical element, multiple elements, or a subset of element(s). Each context has a unique identifier called the contextName, which is unique within an SNMP entity. A contextEngineID represents an instance of a context within an administrative domain. Figures 14.6 and 14.7 show sequence diagrams for command and notification handling. Each SNMP entity and administrative domain can have multiple contexts, so the combination of contextEngineID and contextName uniquely identifies a context within an administrative domain.
(Figures 14.6 and 14.7 show the command and notification sequences: a request is prepared and sent, received over the network, the incoming message is processed, data elements are prepared, the context engine ID is registered, the response message is prepared and generated, the response is sent back over the network, and the response PDU is processed.)
14.5 Conclusion
The basic SNMP framework is suited for small networks, where most of the functionality is implemented at the manager with minimal capabilities in the agent. For managing large heterogeneous networks, RMON (remote network MONitoring) is used. RMON collects data from the elements, processes the data locally, and sends the processed data to the manager using SNMP. This takes care of scalability and enables decentralized management in the SNMP manager implementation.
For distributed and client–server environments, CORBA (common object request broker architecture) is popularly used. To use SNMP in this domain, interoperability between CORBA and SNMP needs to be worked out. Performance and security play a critical role in the distributed management domain. There is a separate Internet group, the distributed management (DISMAN) group, working toward defining standards in this area.
New protocols based on XML are now becoming popular for network management, while most of the legacy NMS solutions are based on SNMP. Adapters for conversion between protocols are becoming popular, and some NMS solutions support multiple protocols in addition to different versions of SNMP.
The simplicity of SNMP made it one of the most popular network manage-
ment protocols. Though other management protocols were introduced, SNMP is
still the protocol that has the highest number of implementations. To support new devices or extend existing devices, only a change in the MIB is required when using SNMP. SNMP still needs changes and new specifications to make it a complete management protocol that supports all forms of FCAPS functionality, but that would probably take away the biggest strength of SNMP, which is simplicity, as it stands for Simple Network Management Protocol.
Additional Reading
1. Marshall T. Rose. The Simple Book: An Introduction to Internet Management. New York:
Prentice Hall-Gale, 1993.
2. William Stallings. SNMP, SNMPv2, SNMPv3, and RMON 1 and 2. 3rd ed. Upper
Saddle River, NJ: Addison-Wesley, 1999.
3. Allan Leinwand and Karen Fang. Network Management: A Practical Perspective. 2nd ed.
Upper Saddle River, NJ: Addison-Wesley, 1995.
4. Douglas Mauro and Kevin Schmidt. Essential SNMP. 2nd ed. Sebastopol, CA: O’Reilly
Media, Inc, 2005.
5. Sidnie M. Feit. SNMP: A Guide to Network Management. New York: McGraw-Hill,
1993.
6. Sean J. Harnedy. Total SNMP: Exploring the Simple Network Management Protocol. 2nd
ed. Upper Saddle River, NJ: Prentice Hall PTR, 1997.
7. Paul Simonau. SNMP Network Management (McGraw-Hill Computer Communications
Series). New York: McGraw-Hill, 1999.
Chapter 15
Information Handling
This chapter is about management information handling. The concepts that are
discussed include the abstract syntax for application entity using ASN.1 (Abstract
Syntax Notation One), the transfer syntax for presentation entity using BER (Basic
Encoding Rules), and the management information model using SMI (Structure of
Management Information).
15.1 Introduction
Information handling consists of two components: one is the transfer of data and the other is the storage of data. When two systems exchange data, the syntax and semantics of the message containing the data need to be agreed upon by the systems for effective communication. The way the data is arranged should be defined in an abstract form so that the meaning of each data component in the predefined arrangement can be understood.
This abstract form defined for application interaction is called abstract syntax
and it is an application entity. ASN.1 is one specific example of abstract syntax. The
set of rules for transforming or encoding the data from a local syntax to what is
defined by abstract syntax is called the transfer syntax. It deals with the transfer of
data from presentation to application and hence it is a presentation entity. BER is
one specific example of transfer syntax.
A managed object for which management data is collected needs to have a structured form of representation and storage. The management data is maintained in a logical collection called the MIB. MIB objects are objects in the MIB, and a set of MIB objects makes up a MIB module. Though a MIB can be implemented as a software database, it is not necessarily a database and it is best defined as a description
15.2 ASN.1
ASN.1 has the capability to represent most types of data, including variable, complex, and extensible structures. It was developed as a joint effort of ISO and ITU-T and is mainly defined in ITU-T X.680/ISO 8824-1.
The basic types are similar to data types in most programming languages. The character string types present different flavors of string for specific applications. An object type is for defining an object. It has an identifier called OBJECT IDENTIFIER
◾◾ Inheriting characteristics from a reusable type. That is, if multiple types have
common characteristics then these common characteristics can be moved to
a parent type and specialized subtypes can be created by deriving from the
parent type. This way the subtype will include both the parent characteristics
and its own individual characteristics.
◾◾ Subtypes are also used to create subsets from a parent set; that is, when the parent set is large or for some other reason, the contents of a parent type need to be put into two subtypes.
A subtype must have a value when it is derived from the parent type.
Syntax for subtype definition: <subtype name> ::= <type> ( <constraint> )
Examples of subtype definition:
Before we move from simple types to structured types, let us look into object
types. Object type discussed under simple type is a very important format for infor-
mation handling, as management information in a MIB has management entities
handled as objects. ASN.1 object identifiers are organized in a hierarchical tree
structure. This makes it possible to give any object a unique identifier. This tree is
shown in Figure 15.1.
(Figure 15.1 shows the hierarchical object identifier tree: the root, with DoD (6) and Internet (1) beneath it, and the MIB (1) and Enterprises (1) subtrees further down.)
There can be scenarios where the object descriptor will not be unique, but the
combination of object descriptor and object identifier will always be unique
globally.
Value Assignment:
◾◾ SEQUENCE OF: Used for collection of variables of same type where order
is significant.
Type Definition:
Value Assignment:
◾◾ SET: Used for collection of variables of different type (can have a few vari-
ables of same type but all variables are not of same type) where order is insig-
nificant. Tagging of items in the set is required to uniquely identify them.
If the tags are not specified explicitly then ASN.1 will automatically tag the
items in a SET.
Type Definition:
Value Assignment:
◾◾ SET OF: Used for collection of variables of same type where order is
insignificant.
Type Definition:
Value Assignment:
ASN.1 supports explicit and implicit tagging. An advanced feature of ASN.1 that
is not discussed here is the concept of modules. Module definitions are used for
grouping ASN.1 definitions.
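The structured types above can also be modeled in code. The short sketch below uses the third-party pyasn1 library to define a SEQUENCE with two components; the type and field names are purely illustrative.

from pyasn1.type import univ, namedtype

class RouteEntry(univ.Sequence):
    # SEQUENCE: an ordered collection of components, here two integers.
    componentType = namedtype.NamedTypes(
        namedtype.NamedType('dest', univ.Integer()),
        namedtype.NamedType('next', univ.Integer()),
    )

entry = RouteEntry()
entry.setComponentByName('dest', 5)
entry.setComponentByName('next', 3)
print(entry.prettyPrint())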
15.3 BER
BER (basic encoding rules) is the transfer syntax discussed in this section. Encoding involves converting ASN.1 data to a sequence of octets for transfer. Though other methods of representing ASN.1 exist, like packed encoding rules (PER), BER is the one suggested for OSI data transfer. BER has two subset forms called canonical encoding rules (CER) and distinguished encoding rules (DER). DER is suited for transfer of small encoded data values, while CER is commonly used for transferring large amounts of data.
◾◾ Contents octets: The ASN.1 data in encoded format is present in this field.
◾◾ End-of-contents octets: When the encoding is using a constructed, indefi-
nite-length method this field denotes end of contents. This field is not present
when other encoding methods are used.
Let us close the discussion on BER with an example of encoding, as shown in Figure 15.3. For encoding zero, the identifier corresponds to a primitive, universal-class INTEGER, the length is one, and the contents octet holds the two's complement of zero, which is zero itself.
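The same encoding can be reproduced programmatically. The sketch below uses the third-party pyasn1 library to BER-encode the integer zero and decode it back; the expected octets 02 01 00 correspond to the identifier, length, and contents octets described above.

from pyasn1.type import univ
from pyasn1.codec.ber import encoder, decoder

encoded = encoder.encode(univ.Integer(0))
print(encoded.hex())            # expected: 020100 -> tag 0x02 (INTEGER), length 1, value 0x00

decoded, remainder = decoder.decode(encoded, asn1Spec=univ.Integer())
print(int(decoded), remainder)  # 0 b''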
15.4 SMI
Structure of management information (SMI) is a subset of ASN.1 used to define
the modules, objects, and notifications associated with management information.
Any resource that has to be managed is referred to as managed object and is rep-
resented by a class called the managed object class. Defining generic guidelines
for information representations helps to create specific subclasses from the generic
parent classes in SMI.
The main ITU-T documents on SMI are:
(Figure 15.3 shows the BER encoding of Test INTEGER ::= 0: an identifier octet of 00000010, denoting a primitive (0) encoding of the universal type INTEGER (2); a length octet of 00000001, giving length = 1; and a contents octet of 00000000, the two's complement of zero.)
◾◾ Simple types: These are taken from ASN.1 and comprise types like INTEGER and OCTET STRING, which were discussed in the preceding sections.
◾◾ APPLICATION-WIDE types: These are data types that are specific to SMI. They include scalars like Integer32, IpAddress, and Counter32. Note that the integer taken from the simple types of ASN.1 is written in full capitals
(INTEGER), while the integer specific to SMI has only the first letter capitalized (Integer32).
◾◾ PSEUDO types: This type has the look and feel of an ASN.1 type but is not defined in ASN.1. An example of this is BITS, which is written in full capitals similar to ASN.1 but is not part of the ASN.1 definitions.
The above discussion of types is based on SMIv2. The initial version, SMIv1, did not have data types like Counter32 and Counter64 defined; instead, a single data type called Counter represented both 32 and 64 bit types. In addition to giving more meaningful definitions for the existing SMIv1 types, the second version also introduced new data types like "Unsigned" and made some of the data types defined in version one, like "NetworkAddress," obsolete.
Next let us use an example to discuss the definition of objects. Let us consider a switch as the entity for our example. It has a unique "address" of type IpAddress as its identifier, and its attributes include a "status" of type OCTET STRING showing the usability state of the switch, a "connection" count of type Integer32, and a table named "Routing" holding a collection of data of type INTEGER that captures the switch routing logic. This can be represented in a tree diagram (see Figure 15.4).
The address, status, and connection are all objects of the switch entity. If "status" has the value "offline," then to get the status object the query is to 1.2.1, and to get the value of the object the query is for 1.2.1.0. So, to get an instance of an object, "0" is appended to the query. Similarly, when the value of the address is 101.101.101.101, the query 1.1 returns the address object, and to get the value of the address the query should be 1.1.0. For the example in Figure 15.4, the request/query 1.2 or 1.2.0 will lead to an error as the response. For an SNMP MIB query/request, a GET command can be used.
In the routing table, for the sake of simplicity, let us consider that the "dest" column is a unique identifier for that table, that the elements in "dest" do not repeat, and hence that "dest" can be used as an index for a query. A query on the index 1.3.1.5 will get a response of "5," while a query on 1.3.2.5 will give the value "3" corresponding to the index value of "5."
(Figure 15.4 shows the switch object tree with its address, status, connection, and Routing objects; the Routing table holds the rows dest = 3, next = 2 and dest = 5, next = 3.)
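The instance-addressing rules just described can be mimicked with a small table of values, shown below; the stored values simply repeat the ones used in the text, and the get function is a toy stand-in for an agent, not SNMP code.

# Hypothetical instance store mirroring the switch example in the text.
instances = {
    "1.1.0": "101.101.101.101",      # value of the address instance
    "1.2.1.0": "offline",            # value of the status instance
    "1.3.1.3": 3, "1.3.2.3": 2,      # Routing row indexed by dest = 3 (next = 2)
    "1.3.1.5": 5, "1.3.2.5": 3,      # Routing row indexed by dest = 5 (next = 3)
}

def get(oid: str):
    # Return the value for an instance query; non-instance queries such as 1.2 or 1.2.0 fail.
    return instances.get(oid, "noSuchName")

print(get("1.2.1.0"))   # offline
print(get("1.3.2.5"))   # 3
print(get("1.2.0"))     # noSuchName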
Some of the important fields in scalar and table definitions are discussed next. It should be understood that this list does not cover all the fields defined in SMI. The explanation is also supplemented with usage examples.
2. The fields that appear in the object definition of leaf objects are:
OBJECT IDENTIFIER: It is used to define a nonleaf object. In the example we discussed, the "Info" object falls in this category.
Example:
Info OBJECT IDENTIFIER ::= { MIB 1 }
RowEntries ::=
    SEQUENCE { dest INTEGER, next INTEGER }

REntry OBJECT-TYPE
    SYNTAX      RowEntries
    MAX-ACCESS  not-accessible
    STATUS      current
    DESCRIPTION "An entry specifying a route"
    INDEX       { dest }
    ::= { Routing 1 }
15.5 Conclusion
Another important item to be discussed before we conclude is GDMO. GDMO, a part of the SMI documents, is a structured description language that provides a way of specifying the class for objects, object behaviors, attributes, and the class hierarchy. GDMO uses a set of templates for information modeling. Some of the templates include definitions for managed objects, packages, attributes, and notifications. Packages are used to group characteristics of a managed object and can be included using a CHARACTERIZED BY or CONDITIONAL PACKAGES tag. The inheritance property is achieved using a DERIVED FROM tag, and the object class is registered with an identifier using a REGISTERED AS tag. Refer to the X.722 document for more details about GDMO and about tags like ALLOMORPHIC, PARAMETERS, and so on that are not discussed here.
A sample managed object class template is shown below:
This chapter gives the reader an overview on ASN.1, BER, and SMI. These con-
cepts form an important part of coding the network management information
framework.
Additional Reading
1. ISO/IEC/JTC 1/SC 6. ISO/IEC 8825-1:2002, Information technology: ASN.1 encod-
ing rules. Washington, DC: ANSI, 2007.
2. John Larmouth. ASN.1 Complete. San Francisco, CA: Morgan Kaufmann Publishers, 1999.
3. Olivier Dubuisson. ASN.1 Communication between Heterogeneous Systems. San Diego, CA: Academic Press, 2000.
4. ISO/IEC/JTC 1/SC 6. ISO/IEC 10165-1:1993, Information Technology—Open
Systems Interconnection—Management Information Services—Structure of management
Information: Management Information Model. Washington, DC: ANSI, 2007.
Chapter 16
Management Information
Base (MIB)
16.1 Introduction
A management information base is a collection of managed objects. This is a virtual
database that stores management data variables (managed objects) that correspond
to resources to be managed in a network. The management information is moni-
tored by updating the information in a MIB. The network manager will request
information from the MIB through an agent (see Figure 16.1). The agent fetches
the management information and responds to the manager. The network manager
can use the MIB to monitor the network and it can also update the MIB.
The actual MIB definition is just a file giving the format information, while a MIB instance comprises the actual variables associated with the managed object. So the MIB is instantiated within the agent, and an actual copy of the values associated with the MIB variables is available in the network element that also houses the agent. The manager usually keeps a local copy of the MIB variables for ease of access when processing and rendering the management information. A suitable synchronization mechanism is usually in place to ensure that the copy available with the manager is updated periodically based on the actual information collected using the agent.
(Figure 16.1 shows the manager communicating with the agent, which accesses the MIB housed in the network element.)
16.2 Types of MIB
MIB can be broadly classified into the following types:
◾◾ Protocol MIBs: These MIBs relate to protocols at different layers of the OSI model (see Figure 16.2). It can be seen from the diagram that the SNMP MIB falls in the application layer of the protocol MIBs. Similar to the SNMP protocol for network management, there are protocols defined for the various layers in the communication stack. Protocol MIBs are a general grouping of the MIBs associated with these protocol layers. Between the transmission and network layers there are interfaces for which MIBs are defined. Both the physical and data link layers are handled as a single transmission layer in the protocol MIB grouping.
Some of the RFCs associated with protocol MIBs are:
a. Transmission MIBs
• RFC 2515: Definitions of managed objects for ATM management
• RFC 2558: Definitions of managed objects for the SONET/SDH
interface type
• RFC 1512: FDDI management information base
(Figure 16.2 groups protocol MIBs by layer: transport layer MIBs for TCP and UDP, and network layer MIBs for IP, ICMP, ARP, and BGP.)
b. Network MIBs
• RFC 1657: Definitions of managed objects for the fourth version of
the border gateway protocol (BGP-4) using SMIv2
• RFC 2667: IP tunnel MIB
c. Transport MIBs
• RFC 2012: SNMPv2 management information base for the trans-
mission control protocol using SMIv2
• RFC 2013: SNMPv2 management information base for the user
datagram protocol using SMIv2.
d. Application MIBs
• RFC 2789: Mail monitoring MIB
• RFC 1611: DNS server MIB extensions
◾◾ Remote monitoring MIBs: This category is mainly to group MIBs used for
defining remote network monitoring (RMON). RFC 2819 is the main stan-
dard in this category. Other RFCs associated with remote monitoring include:
−− RFC 2021: RMON version 2
−− RFC 1513: Token ring extension to RFC
−− RFC 2613: RMON MIB extensions for switched networks version 1.0
◾◾ Hardware specific MIBs: Data management for equipment like a printer,
modem, and UPS is done using hardware specific MIBs. There are separate
RFCs to define the objects that are associated and need to be managed in
hardware. Some of the RFC documents that fall in this category are:
−− RFC 1696: Modem management information base using SMIv2
−− RFC 1759: This document details printer MIB
16.3 MIB-II
This section details MIB-II, which is one of the most frequently updated modules in
the MIB tree. SNMP is covered under MIB-II and is handled in more detail in subse-
quent sections. There was a MIB-I document defined in RFC 1156, which is obsolete now and was replaced with MIB-II, defined in RFC 1213. This RFC is about the management information base for network management of TCP/IP-based internets. So
it defines the variables to manage the TCP/IP protocol stack. Only a limited number of these variables can be modified, as most of them are read-only. There are several groups in MIB-II that are detailed in separate RFCs. Some of the groups include TCP (RFC 2012), UDP (RFC 2013), IP (RFC 2011), SNMP (RFC 1907), and so on. A significant difference between MIB-I and MIB-II is the presence of CMOT (value 9) in MIB-I, which was removed in MIB-II, and the introduction of SNMP (value 11) in MIB-II.
Some of the design criteria that were used in defining MIB-II are:
There are multiple sections under MIB-II, like System (1), Interface (2), AT (3), and
so on (see Figure 16.3). Each of these sections has a further subsection attribute to
describe it. Let us look into these objects in a little more detail.
a. System (1)
This group has the following variables under it:
sysDescr: This is the system description.
sysObjectID: This is the object identifier that points to enterprise spe-
cific MIB.
sysUpTime: Used to identify how long the system has been running.
More specifically the time since last reinitialization of the system.
sysContact: The contact person for the system; usually the e-mail
address is specified.
sysName: The name of the system to identify it in a network.
sysLocation: The physical location of the system.
sysServices: This is the services offered by the system. A number is
used to indicate the functionalities performed (e.g., bridge functions,
repeater functions, router functions, etc.).
Not all these objects are writable. In the above example sysContact, sysName, and
sysLocation are writable while others are read-only.
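The variables above are typically read together by walking the system group. A minimal sketch with the third-party pysnmp library follows; the agent address and community string are placeholders, and the numeric OID 1.3.6.1.2.1.1 is the system subtree of MIB-II.

from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, nextCmd)

# GetNext-based walk of the MIB-II system group (1.3.6.1.2.1.1); placeholders throughout.
for error_indication, error_status, error_index, var_binds in nextCmd(
        SnmpEngine(),
        CommunityData('public'),
        UdpTransportTarget(('192.0.2.10', 161)),
        ContextData(),
        ObjectType(ObjectIdentity('1.3.6.1.2.1.1')),
        lexicographicMode=False):          # stop once the system subtree is exhausted
    if error_indication or error_status:
        break
    for name, value in var_binds:
        print(f'{name.prettyPrint()} = {value.prettyPrint()}')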
Not all columns in the ifTable are discussed here, but for collecting all possible
information about the interface, a complete understanding of ifTable is required.
This group also has a “tcpConnTable” connection table. This table has fields to
capture the connection state, local address, local port, remote address, and remote
port. The connection state is a writable field that can be changed and can take val-
ues like sync, abort, listen, and closed.
(Figure 16.3 shows where MIB-II sits in the object identifier tree: the root, with DoD (6), Internet (1), and MIB-2 (1) beneath it.)
◾◾ Old object types are only deprecated, they are not deleted.
◾◾ The semantics of old object types are not changed between versions.
◾◾ Instead of changing the semantics of an old object, a new object type is formed.
◾◾ Only essential objects are defined to keep SNMP simple.
◾◾ There is enough flexibility to define implementation specific objects.
Some new terms have been introduced in RFC 1902 for SNMPv2. These include:
In the hierarchy tree, snmpV2 has many subtrees, such as snmpDomains, snmpProxys,
and snmpModules. There are subtrees below these levels and further levels below those
subtrees. For example, "snmpMIB" under snmpModules has the snmpMIBObjects and
snmpMIBConformance subtrees. Below snmpMIBObjects, we have snmpTrap(4),
snmpTraps(5), and snmpSet(6).
◾◾ snmpVacmMIB: This module has objects for view-based access control. The
vacmMIBObjects can provide details of the access. This module also has
vacmMIBConformance.
16.7 Conclusion
This chapter was dedicated to the discussion of the management information base, and
SNMP MIBs were also discussed briefly. The MIB is a vast topic
and there is much more that could be included to give a detailed understanding of MIBs. There
are books dedicated to understanding MIBs, to the extent that some books only
detail a specific MIB group in the tree. The concept of information handling was
covered in Chapter 15 to give a strong base before giving an overview of the MIB.
Professionals implementing a specific MIB are advised to refer to the books listed
in the next section as well as the relevant RFCs.
Additional Reading
1. Alexander Clemm. Network Management Fundamentals. Indianapolis, IN: Cisco Press,
2006.
2. Larry Walsh. SNMP MIB Handbook. England: Wyndham Press, 2008.
3. Stephen B. Morris. Network Management, MIBs and MPLS: Principles, Design and
Implementation. Upper Saddle River, NJ: Prentice Hall PTR, 2003.
4. M. T. Rose. The Simple Book. Upper Saddle River, NJ: Prentice Hall, 1996.
Chapter 17
Next Generation Network Management (NGNM)
Networks are evolving rapidly to meet current demands of unification and conver-
gence. This new holistic view of the network also requires significant change in the
management of the network. Many standardizing bodies are working on requirements for Next Generation Network Management
(NGNM). This chapter is intended to give the reader an overview of NGNM.
Two of the most popular documents on NGNM, M.3060 from ITU-T and TR133
from TMF, are discussed to provide information on the direction in which
standards are being defined for managing next generation networks (NGN).
17.1 Introduction
Current network management solutions are targeted at a variety of independent
networks. The widespread popularity of the IP multimedia subsystem (IMS) is a
clear indication that all of the independent networks will be integrated into a single
IP-based infrastructure referred to as next generation networks (NGN). The services,
network architectures, and traffic patterns in NGN will differ dramatically
from the current networks. The heterogeneity and complexity in NGN, includ-
ing concepts like fixed mobile convergence, will bring a number of challenges to
network management. Various standardizing bodies are coming up with specifi-
cations for NGNM to address these challenges. This chapter is intended to shift
the focus from simple network management concepts to next generation network
management.
17.2 NGNM Basics
Multiple standardizing bodies are coming up with standards for NGNM. The
intent of TMF with the release of TR133 was to detail the progress made so far
by other bodies, how liaisons can be formed between bodies developing standards
in the same space, and how business benefits can be realized by bringing up
standards for NGNM. The holistic view of the telecom space is achieved in
TR133 by splitting the NGNM space across business, system/architecture, implementation,
and deployment. It is in line with the TMF (TeleManagement Forum)
concept of NGOSS (New Generation Operations Systems and Software), which
demarcates the service provider space from service developer space and technology
specific implementation from a technology neutral implementation.
NGNM originates from the need for managing the next generation network
(NGN). NGN according to ITU-T standards is a packet-based network that can
support a wide variety of services. Some of the key buzzwords around NGN are
its ability to offer services anytime and anywhere. It is a multiservice, multiprotocol,
and multiaccess network. NGN also fosters a multitechnology and
addition to ETSI TISPAN are ATIS and ITU-T. TMF has compiled these require-
ments in TR133 document.
Given below are some of the main categories in which NGNM requirements
are defined.
the customer. The business should be focused on meeting the SLA set with
the customer. Along with quality of service from the service provider space,
the quality of end-user experience from customer space needs to be param-
eterized and measured. A key industry direction is toward service oriented
architecture and COTS (commercial off-the-shelf) based products. This
means that reuse should be maximized. Service abstraction, service discov-
ery, and service reusability features of SOA are suitable for providing services
that can be extended to legacy and next generation at the same time. New
NMS solutions especially for next generation network and services are based
on loosely coupled, service discovery architecture. COTS-based implemen-
tation also leads to reuse when the product is developed for integration with
a wide variety of components. The main intent of splitting into components is
to achieve better management at the component level while at the same time
having the entire system controlled by managing specific components, thus
reducing the complexity.
5. Other requirements associated with NGNM: Compliance with standards is
a key requirement for NGNM. This includes standards given by telecom
and enterprise standardizing organizations, compliance specified by regulatory
bodies, and legal obligations. Regulatory requirements include emergency
services, and legal obligations can include security features like lawful
interception.
17.3 TR133
TeleManagement Forum (TMF) has published a member evaluation version of
NGNM strategies (TR133), which suggests a lifecycle approach to give a holistic view
of NGNM. The lifecycle model has four blocks (see Figure 17.1) that correspond to the
intersections of the stakeholder and technology dimensions. The stakeholder dimension
has the service provider and service developer spaces. The technology dimension has the
technology-neutral and technology-specific spaces. The four blocks are:
◾◾ Business: This block is the intersection of technology neutral space and ser-
vice provider space. The service provider defines requirements in a technology
agnostic manner, which is the business block.
◾◾ System/architecture: This block is the intersection of technology neutral space
and service developer space. Based on the requirements defined in a business
[Figure 17.1: The TR133 NGNM lifecycle blocks — business, system/architecture, implementation, and deployment.]
These four blocks constitute the telecom lifecycle and are used to give a holistic
vision in all forms of development activities. The lifecycle is discussed in detail in
the chapter on the NGOSS lifecycle. The NGNM focus is also split into these four blocks
by TMF in the TR133 document. Rather than listing activities in each block, TR133 shows
what the contribution from the key stakeholders of the individual blocks
toward NGNM should be.
17.4 M.3060
According to this document, NGNM is not a new generation of management
systems, but a management system that supports monitoring and control of the
NGN resources and services. The resources of NGN have both a service component
and a transport component. The role of NGNM is to ensure a seamless flow of data
across interfaces from the network element layer to the business management layer.
Having both the network and services in the scope of management, the suggestions of
ITU-T address both network operators and service providers.
The NGNM architecture can be divided into four views as shown in the dia-
gram (see Figure 17.2). They are:
◾◾ Business process view (architecture): This view spans across the three other
views and is concerned with the business process framework in the service
provider space. eTOM is suggested as the reference framework for process
view.
◾◾ Management functional view (architecture): This view is to capture the
management functions to be implemented in NGNM. The management
functions are categorized under a set of blocks like the operations systems
function block (OSF) and network element function block (NEF).
◾◾ Management information view (architecture): In order for entities to perform
functions that are defined in a functional view, these entities need to com-
municate management information. The characteristics of this management
information are defined in this view.
[Figure 17.2: NGNM architecture views, with security considerations spanning across the management functional architecture and the other views.]
Security considerations are also shown as spanning across the physical, informa-
tion, and functional view. Security in the next generation network and the NGN
management infrastructure is part of NGNM and is given under a separate set of
definitions from ITU-T.
Another important aspect in the M.3060 document is the logical layered architecture
(see Figure 17.3) that brings back the conventional concept of dividing
the management plane into a business management layer (BML), service management
layer (SML), network management layer (NML), element management layer
(EML), and network element layer (NEL). A set of functional blocks is defined
across each of these layers to list the entities and management functions that span
the different layers in NGNM as defined in M.3060.
M.3060 also identifies the NGNM functionality required to support NGN across these layers.
[Figure 17.3: Logical layered architecture — enterprise management OSFs at the BML, with further OSFs defined at the SML and the layers below.]
M.3060 was one of the first and most comprehensive documents from a stan-
dardizing body. Many other standardizing bodies followed the lead of ITU-T and
came up with specifications on next generation network management to satisfy the
requirements in various technology and domain segments.
17.5 Conclusion
ITU-T and TMF came up with many more documents for NGNM, other than
M.3060 and TR133. Some of the notable ones include ITU-T M.3016 series on
security for the management plane, ITU-T M.3050 series on enhanced telecom
operations map that is based on the TMF document GB921, ITU-T M.3341 for
QoS/SLA management service, ITU-T M.3350 on ETS management service, TMF
shared information and data model GB922, and TMF513 on multitechnology net-
work management (MTNM).
Several other standardizing bodies have also published specifications for NGNM.
Additional Reading
1. M.3060 document that can be downloaded from www.itu.int
2. TR133 document from TMF that can be downloaded from www.tmforum.org
3. Guy Pujolle. Management, Control and Evolution of IP Networks. New York:
WileyBlackwell, 2008.
4. Yan Ma, Deokjai Choi, and Shingo Ata. Challenges for Next Generation Network
Operations and Service Management. New York: Springer, 2008.
Chapter 18
XML-Based Protocols
XML (Extensible Markup Language) based protocols are rapidly replacing traditional
management protocols like SNMP and are now the de facto standard for use
in web-based network management. Protocols like SOAP, which are XML-based,
can be found in management standards like WBEM for enterprise management
defined by DMTF as well as MTOSI for telecom management defined by TMF.
This chapter is intended to give the reader a basic understanding of XML protocols.
SOAP, the most popular XML protocol, and NETCONF, hailed as the next generation
network management protocol, are discussed as specific examples.
18.1 Introduction
Traditionally SNMP has been the preferred protocol for network management of
packet networks like Ethernet, Token Ring, and so forth. Efficient and low-bandwidth
transport (using UDP over IP) as well as an optimized, but
limited, data model were the two main reasons for its popularity. The simplistic
data model architecture used by SNMP (a flat MIB definition) was well suited for
low complexity legacy packet nodes.
With the convergence of traditional circuit (voice) and packet networks (data)
onto a common IP-based packetized core as in next generation networks like
IMS, the complexity of the network elements involved, and the services supported
have undergone a huge change. With convergence in the network space, a flat and
simple model like SNMP does not scale up to meet next generation network
management requirements. A typical example of convergence at the network
element level is an L3–L7 switch in a converged network. This switch performs
multiple other functionalities in addition to switching, including but not limited
◾◾ XML is well suited for distributed environments, where there are inbuilt
mechanisms for usage scenarios involving multiple data readers, multiple
data writers, and other control entities.
◾◾ XMLP provides flexible linkages between managed object and management
application. This is again a projection of the distributed nature of XMLP.
◾◾ XML protocol provides a strong security paradigm with capabilities for auto-
matic and centralized validation.
The intent of defining requirements on the XML protocol (XMLP) was to create a foundation
protocol whose definition will remain simple and stable over time. A design
based on this foundation that explicitly uses modularity and layering will assure
longevity and allow subsequent extension of the design while keeping the foundation intact.
XMLP inherently supports the following:
◾◾ Message encryption: This means that the message header and the associated
payload can be encrypted when using XML protocol.
◾◾ Multiple intermediaries: The protocol supports third-party intermediary communication,
where the message can be collected, parsed, and worked with.
There can be transport and processing intermediaries.
◾◾ Non-XML data handling: This means the protocol can handle sending of
non-XML data, which may be a required functionality when working with
certain legacy systems.
◾◾ Asynchronous messaging and caching with expiration rules, which improve
efficiency when latency or bandwidth is constrained.
Before we get into the details of message encapsulation, let us quickly check some
of the usage scenarios for use of XML protocol.
◾◾ XMLP header block: There is only one header block for multiple body blocks.
The header block consists of:
−− Routing information for the message envelope.
−− Data describing the content of the body blocks.
−− Authentication and transaction information.
−− Flow control and other information that will help in message management.
−− XMLP namespace field for grouping. It can also be used for versioning to
ensure a transparent conversion of the message for the different versions
supported by the various XMLP processors.
◾◾ XMLP body blocks: These form the actual payload of the message, and there can
be multiple body blocks in a single message envelope. The body blocks do
not carry any information about message transfer and are only processed at the
final destination.
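Putting these two parts together, a skeleton envelope might look like the sketch below. This is only an illustrative outline following the description above; the namespace placeholder, the prefix, and the routing element name are hypothetical.
<xmlp:Envelope xmlns:xmlp='---URI---'>
  <xmlp:Header>
    <!-- routing, authentication, and flow-control information -->
    <xmlp:route to='mgr@host' from='agent@host'/>
  </xmlp:Header>
  <xmlp:Body>
    <!-- first payload block -->
  </xmlp:Body>
  <xmlp:Body>
    <!-- a second payload block; one header block can serve multiple body blocks -->
  </xmlp:Body>
</xmlp:Envelope>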
The XML infoset (short for information set) is an abstract data model to describe
the information available from an XML document based on specification from
W3C. This makes it easier to work on an XML document without the need to ana-
lyze and interpret the XML syntax. There are different ways by which the information
in the XML infoset can be accessed, with the DOM object model and HTML being some of
the ways that have specific APIs to satisfy the requirements of the programming
language. XML protocol binding with HTTP is the most popular transport for
most specifications, considering that the XML protocol can be used for Web-based
network and operations management. For packet-based intermittent traffic, the XML
protocol can also be bound to UDP, in line with what is used conventionally with SNMP.
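As a rough illustration of the HTTP binding mentioned above, an XMLP envelope can be carried in the body of an HTTP POST. The host name, request path, and content type below are hypothetical, and the envelope reuses the GET query style shown later in this section.
POST /xmlp-agent HTTP/1.1
Host: nms.example.com
Content-Type: application/xml; charset=utf-8

<getenvp:getenvp to='host' xmlns:getenvp='---URI---'>
  <getenvp:Body>
    <iq type='get' to='mib@host' id='query-id'>
      <query xmlns='reqinfo:iq:shelf-6.card-2.status'/>
    </iq>
  </getenvp:Body>
</getenvp:getenvp>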
[Figure: The HTTP binding for XMLP — the service access point at port 80, carried over IP at the network layer.]
18.5.1 Messaging
The five basic messages that are used in SNMP and need to be available in a
replacement protocol are GET, GET-NEXT, GET-RESPONSE, SET, and TRAP.
The example here is for binding with UDP, so the connection details need to be
explicitly provided in the namespace of the XMLP envelope.
1. GET and GET-NEXT message: In SNMP these messages are used by the
manager to request specific information from the agent. The header, which
is optional, gives the identification of the sender. The “host” in the header
corresponds to where the information needs to be sent. The URI will be used
while sending the envelope to the packet network. The header can also be used
for authentication and encoding. The body contains the actual request to be
performed.
In the specific example used here, the “Body” of the envelope is used to query
the status of a card in a specific shelf. It can be seen that the XML protocol mes-
sage is easier to read and interpret compared to the SNMP message. The tags
like <getenvp:Header> and <getenvp:Body> in the message make it possible for
third-party parsers to be used in processing the XML protocol message.
GET Message with XML:
<getenvp:getenvp to='host' xmlns='reqinfo:client'
    xmlns:getenvp='---URI---'>
  <getenvp:Header>
    <getenvp:getenvp from='host' id='stream-id'
        xmlns='reqinfo:client' xmlns:getenvp='---URI---' />
  </getenvp:Header>
  <getenvp:Body>
    <iq type='get' to='mib@host' id='query-id'>
      <query xmlns='reqinfo:iq:shelf-6.card-2.status'/>
    </iq>
  </getenvp:Body>
</getenvp:getenvp>
<getenvp:Body>
  <iq type='get-next' to='mib@host' id='query-id'>
    <query xmlns='reqinfo:iq:shelf-6.card-3.status'/>
  </iq>
</getenvp:Body>
</getenvp:getenvp>
<getenvp:Body>
  <iq type='get-response' to='mgr@host' id='query-id'>
    <query xmlns='reqinfo:iq:shelf-6.card-2.status'>
      <var>In-Service</var>
    </query>
  </iq>
</getenvp:Body>
</getenvp:getenvp>
<setenvp:Body>
  <iq type='set' to='mib@host' id='query-id'>
    <query xmlns='setinfo:iq:shelf-2.card-6.link-1.status'>
      <var>Offline</var>
    </query>
  </iq>
</setenvp:Body>
</setenvp:setenvp>
4. TRAP message: This message allows the agent to spontaneously inform the
manager of an important event. Traps are very effective in sending fault
information. The body of the trap message sent by the agent will contain an
identifier and the information details on the reason for the generated trap.
These XML traps can be collected by the manager and parsed for details
that can be used for debugging the error scenario captured by the TRAP
message.
TRAP message with XML:
<trapenvp:Body>
  <iq type='trap' to='mgr@host' id='query-id'>
    <query xmlns='trap:iq:Trunk.ABC.FaultDetected'>
      <trapenvp:Fault>
        <faultcode>Fault Code here</faultcode>
        <faultstring>Fault String here</faultstring>
      </trapenvp:Fault>
    </query>
  </iq>
</trapenvp:Body>
</trapenvp:trapenvp>
header. This is not enough for transmission over the network. Before transmission
to a packet network, the message needs to be appended with an IP header. The IP
frame that is to be transmitted will contain an Ethernet header and IP header on
the UDP datagram with the XMLP envelope.
Transmitting the host information in the IP frame helps to identify the host IP
address. Port information should also be available for proper transfer to the specified
application on the host machine; otherwise, the default port is assumed. Multicast is also
possible, where the transmission is to a group of machines. This happens when an
arbitrary group of receivers expresses interest in receiving a particular data stream.
The destination in this case for the XMLP–UDP message will be the multicast
group. The source address is specified in the UDP packet and needs to be the IPv4 or
IPv6 address of the sender. The receiver needs to be programmed to reject a packaged
XMLP message that has inappropriate values for the source address. The XMLP
envelope needs to fit within a single datagram with inherent encryption capability.
[Figure: XMLP management framework — a Web-based server that supports UDP, with a data conversion/XML parser and database, feeding management applications and the WWW pool.]
The data derived can be sent out via an adapter framework to feed third-party
management applications. An example of this would be CORBA IDL, which is
supported by most third-party management applications. Before XML became
popular, CORBA was the interface of choice for most management applications.
This is quite evident even in standards defined by bodies like TMF, where the
MTNM (multitechnology network management) definitions mapped to CORBA.
Later the working group of MTNM merged with MTOSI (multitechnology
operations system interface), where the mapping uses SOAP, which is an XML-based
protocol.
Using a Web server this information can also be passed to the www pool. Any
queries from the www pool can also be managed with the framework. An example
of such a Web server that can support UDP is the Windows Media Server that does
data streaming. There could also be a stand-alone XMLP management application
that takes the FCAPS information, processes the information, and represents the
information in a GUI (graphical user interface).
The XMLP agent that runs on the network element has a set of distinct modules
(see Figure 18.4). Consider the scenario in which XML protocol needs to be used on
a network element where SNMP was previously used. This means that an SNMP
MIB was used to store management data. An SNMP MIB to XMLP schema con-
verter should then be available in the agent to make the initial conversion to XML
and make the protocol stack a drop-in replacement for SNMP. This would not be present in an
XML-based agent for which support for SNMP is not required. The data upon conversion
is stored in an XML data structure, which can then spread from a flat MIB
[Figure 18.4: Modules of the XMLP agent on the network element, including the UDP and XMLP transfer/reception framework and the manager engine.]
18.6 SOAP
SOAP (simple object access protocol) is a lightweight protocol that uses XML
to define the messaging framework to generate messages that can be exchanged
over a variety of protocols. SOAP provides a structured information exchange in
a decentralized and distributed environment using all the capabilities offered by
XML technology. It should be understood that SOAP is just a messaging protocol
and does not require the development of the client or server in a specific programming
language, though most open source SOAP-based client–server developments
use Java.
SOAP is stateless and does not enforce a specific schema for the management
data it has to exchange. SOAP is a recommendation from the W3C (World Wide
Web Consortium). The SOAP 1.2 recommendation has two parts. The first part
defines the SOAP message envelope while the second part defines the data model
and protocol binding for SOAP.
Following from the XMLP discussion, the SOAP message envelope has a SOAP
header and SOAP body. The header is identified using the tag <env:Header> and
the body is identified using the tag <env:Body>. The header contains the transac-
tion and management information as in most XML protocols. Standardizing bod-
ies for Web services are working toward defining standard entries for the header in
specific domains.
The SOAP header block is not mandatory for message transmission. Specific
information that can be used by SOAP intermediaries or destination nodes can
be put in the header block. The SOAP body blocks/entries are mandatory for any
message envelope to be valid and this block is the key information that is processed
at the receiver. It can contain either the application data, the remote procedure call
with parameters, or SOAP fault information.
Some of the attributes in the SOAP header block are:
◾◾ env:role: The “role” attribute is used to identify the role of the specific target
associated with the header block. This role can be standard roles or custom roles,
and the absence of a role indicates that the role will be the final destination.
◾◾ env:mustUnderstand: The "mustUnderstand" attribute is used to specify
whether the header block needs to be processed. It is a Boolean attribute
and can have a value of either "true" or "false." If the value of "mustUnderstand"
is true, then the destination node must process the header block. If there
is an issue during the processing or the processing of the header block fails,
then a fault event has to be generated. If the value of "mustUnderstand" is
false, then the destination node may or may not process the header block. If
this attribute is not present in the header block, then it is equivalent
to setting the attribute to false. As discussed earlier, the header block
is not mandatory, but there could be critical information at times in the
header specific for an application. In these scenarios, setting the attribute
value as true will ensure that the header block is processed as required for
the application.
◾◾ env:relay: Control of header blocks during a relay operation can be done with
the "relay" attribute. Not all header blocks will be processed by intermediary
nodes, and the default behavior is to remove the header before a relay operation.
So unnecessary propagation of header blocks can be avoided, and specific headers
can be retained across a relay by using the relay attribute. This feature is
newly introduced in SOAP 1.2.
A SOAP fault is defined with the tag <env:Fault> and can be used to carry error
and status information. The fault information will contain the identifier or group
to which the fault message belongs (tag: <faultcode>), the short description of the
fault (tag: <faultstring>), the node or actor associated with the fault (tag: <faultac-
tor>), and details on the fault that can help in debugging and fixing the fault (tag:
<detail>). An example of a predefined SOAP fault code value is “VersionMismatch”
corresponding to invalid namespace in the envelope. Namespace is an XML con-
struct (xmlns) used for grouping of elements in a SOAP application. It can also be
used for version controlling. Some of the namespaces that are defined for SOAP 1.2
include envelope, serialization, and upgrade.
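As a sketch built from the fault tags named in this paragraph, a fault entry might look like the following; the fault string, actor URI, and detail text are hypothetical.
<env:Envelope xmlns:env="http://www.w3.org/2003/05/soap-envelope">
  <env:Body>
    <env:Fault>
      <faultcode>env:VersionMismatch</faultcode>
      <faultstring>Envelope namespace not supported</faultstring>
      <faultactor>http://nms.example.com/soap-agent</faultactor>
      <detail>The receiver expected the SOAP 1.2 envelope namespace</detail>
    </env:Fault>
  </env:Body>
</env:Envelope>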
Example for using header attributes in a SOAP message (the header block content shown here is a hypothetical illustration of the attributes discussed above):
<env:Envelope xmlns:env="http://www.w3.org/2003/05/soap-envelope">
  <env:Header>
    <m:priority xmlns:m="http://example.org/mgmt"
        env:mustUnderstand="true" env:relay="false">high</m:priority>
  </env:Header>
  <env:Body/>
</env:Envelope>
18.7 NETCONF
A wide range of techniques was in use for configuring network elements, each with
its own specifications for session establishment and data exchange, and the IETF
formed the NETCONF Working Group to standardize network configuration and fix
this disparity. The NETCONF protocol, as the name suggests, mainly deals with
network configuration and provides the specifications for configuration functions
like installing, querying, editing, and deleting the configuration of network devices.
NETCONF also offers troubleshooting functionality: network information such as
faults can be viewed and corrective action taken in the configuration.
NETCONF is an XML-based protocol. Since it carries configuration informa-
tion NETCONF inherently supports a secure infrastructure for communication.
The NETCONF requests and responses are always sent over a persistent, secure,
authenticated transport protocol like SSH. The use of encryption guarantees that
the requests and responses are confidential and tamper-proof.
Another security feature in NETCONF is the capability of network elements to
track the client identities and enforce permissions associated with identities. Hence
security is mostly implemented in the underlying transport layer and even the iden-
tities are managed by SSH and then reported to the NETCONF agent to enforce
the permissions imposed by the node.
Two important concepts associated with NETCONF are its commands and
datastores.
NETCONF requests are XML documents encapsulated in the <rpc> tag. There
is a unique message identifier associated with each request along with the XML
namespace.
Example NETCONF Request
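A minimal sketch of such a get-config request, using the standard NETCONF base namespace, might look like this (the message-id value is arbitrary, and no filter is specified, so the whole running configuration is requested):
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <get-config>
    <source>
      <running/>
    </source>
  </get-config>
</rpc>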
The request here is using the “get-config” command to get the current (running)
configuration from the device. The response can be the whole configuration record
as specific fields are not specified, or an error with details why the NETCONF
request could not get the desired response from the agent.
The request and reply both include the same message-id number. In case of
an error, the rpc-error is encapsulated in the rpc-reply. The NETCONF reply can
indicate one or more errors of varying severity. The error message is mapped to
a specific type, along with a severity to identify how quickly a corrective action is
required, and other error-specific details that will help in debugging the error. It can
be seen from the example that the error response can be easily parsed and valuable
information displayed in any format that is required for a GUI or Web-based application.
In an event-driven network management system, the contents of the error
could be programmed to trigger events that involve corrective actions.
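A sketch of an error reply of the kind described above might look like the following; the error values shown are only illustrative.
<rpc-reply message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <rpc-error>
    <error-type>protocol</error-type>
    <error-tag>lock-denied</error-tag>
    <error-severity>error</error-severity>
    <error-message>Lock failed, lock is already held</error-message>
  </rpc-error>
</rpc-reply>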
In addition to the usual configuration management protocols that provide cre-
ate, edit, and delete capability, NETCONF has many other commands that help
the operations team to easily manage configuration. Some of these include the
merge command used to merge the configuration data at the corresponding level in
the configuration datastore and replace command used to replace the configuration
data in the datastore with the parameter specified. Another very useful command
in NETCONF is lock, which can be used by the management system to ensure
exclusive access to a collection of devices by locking and reconfiguring each device.
Only when required will the changes be committed. This provides an environment
where multiple system administrators and their tools can work concurrently on the
same network without making conflicting changes to configuration data.
NETCONF offers sophisticated configuration capabilities where a device con-
figuration can be changed without affecting its current configuration. The configu-
ration changes can be made active only when required. This is made possible using
the <candidate> capability, which provides a temporary data store. A <commit>
operation makes the candidate store permanent by copying the <candidate> to the
<running> data store. The <candidate> data store needs to be locked before it is
modified, because <candidate> is not private by default. If the client
that locked the candidate datastore for changing the configuration finally decides
not to commit the change, then <discard-changes> can be used to ensure that
the running configuration is not altered. For test scenarios, where the operator
wants to change the configuration, test whether it is working properly, and in the case
of issues revert to the previous configuration, the confirmed-commit
capability in NETCONF can be used. This capability extends the usual
<commit> operation. A deep dive into NETCONF reveals many features that will
make it the configuration management protocol of choice offering capabilities that
were not available previously.
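As a sketch of the candidate workflow just described, a management client might lock the candidate datastore, edit it, and then commit; the configuration fragment inside <config> is left as a hypothetical placeholder.
<rpc message-id="201" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <lock><target><candidate/></target></lock>
</rpc>
<rpc message-id="202" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <edit-config>
    <target><candidate/></target>
    <config>
      <!-- hypothetical configuration fragment goes here -->
    </config>
  </edit-config>
</rpc>
<rpc message-id="203" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <commit/>
</rpc>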
Before we close the overview on NETCONF, let us also look at the transport mappings
it offers. Transport mappings have been defined for SSH, BEEP, and SOAP, with SSH being the mandatory mapping.
18.8 Conclusion
This chapter gives the reader an understanding of XML-based protocols. Rather
than jumping into the details of XML-based protocols, this chapter uses a more
methodical approach of first shifting the focus from SNMP to XML by explaining the
XML-based protocol using SNMP concepts that were already discussed in previous
chapters. Then two of the most popular XML protocols, SOAP and NETCONF,
are discussed. The intent was to build a strong foundation on XML protocols for use
in network management.
Additional Reading
1. Anura Guruge. Corporate Portals Empowered with XML and Web Services. Bedford,
Massachusetts: Digital Press, 2002.
2. Don Box, Aaron Skonnard, and John Lam. Essential XML: Protocols and Component
Software. Upper Saddle River, NJ: Addison Wesley, 2000.
3. Eric Newcomer. Understanding Web Services: XML, WSDL, SOAP and UDDI. Upper
Saddle River, NJ: Addison Wesley, 2002.
III
OPERATION/BUSINESS SUPPORT SYSTEMS (OSS/BSS)
Chapter 19
What Is OSS and BSS?
This chapter is intended to give the reader a basic overview on OSS and BSS, which
will help in building a solid foundation for understanding the chapters that follow
on the processes and applications associated with OSS and BSS. The ambiguity in
the terminologies used and a brief history of the OSS/BSS evolution are also handled
in this chapter.
19.1 Introduction
The communication service and network industry is moving toward a converged
world where communication services like data, voice, and value-added services
are available anytime and anywhere. The services are "always on," without the
hassle of waiting or moving between locations to be connected or for continuity
in services. The facilities offer easy and effortless communications, based
on mobility and personalized services that increase quality of life and lead
to greater customer satisfaction. Service providers will get much more effective
channels to reach the customer base with new services and applications. With
change in services and underlying network, the ease in managing the opera-
tions becomes critically important. The major challenge is the changing business
logic and appropriate support systems for service delivery, assurance, and billing.
Even after much maturation of IMS technology, there was a lag in adoption
because of the absence of a well-defined OSS/BSS stack that could manage
IMS and a proper billing system to deliver business value to service providers when
moving to IMS.
Operations support systems (OSS) as a whole includes the systems used to sup-
port the daily operations of the service provider. These include business support
systems (BSS) like billing and customer management, service operations like service
provisioning and management, element management, and network management
applications. In the layered management approach, BSS corresponds to business
management and OSS corresponds to service management, while the other man-
agement layers include network management and element management.
Let us start the discussion with the complete picture where OSS corresponds to
support systems that will reduce service provider operating expenses while increas-
ing system performance, productivity and availability. The OSS is both the hard-
ware and software that service providers use to manage their network infrastructure
and services.
There are multiple activities in the service provider space associated with offer-
ing a service that require an underlying OSS to manage. For a service starting up,
the service needs to be provisioned first; the underlying network, including connectivity
and the elements, needs to be provisioned; then the service and network
need to be configured; the billing system and SLA management system need to be
configured; the customer records need to be updated; and when the service is activated
the billing system also needs to start working on records. This is just a basic
line up of activities and there are many supporting operations associated with just
getting a service ready for the customer. The OSS brings in automation and reduces
the complexity in managing the services. After activation of service there needs
to be a service assurance operation, customer service operation, service monitor-
ing operations, and many more operations that fall under the service and business
management scope of an OSS.
1. Local exchange carrier (LEC): The LEC is the term used for a service pro-
vider company that provides services to a local calling area. There are two
types of LECs, the ILEC (incumbent local exchange carrier) and competitive
local exchange carriers (CLECs). Congress passed the Telecommunications
Act of 1996 that forced the ILECs to offer the use of the local loop or last
mile in order to facilitate competition. So CLECs attempted to compete
with preexisting LECs or ILEC by using their own switches and networks.
The CLECs either resell ILEC services or use their own facilities to offer
value-added services.
2. Internet service provider (ISP): As the name suggests, these service providers
offer varying levels of internet connectivity to the end user. An ISP can either
have its own backbone connection to the Internet or it can be a reseller offer-
ing services bought from a service provider that has high bandwidth access to
the Internet.
systems need to be able to handle the changed network with complex ele-
ments having an aggregate of functionality and a complex information base.
◾◾ Emerging standards for service providers: Deregulation and the definition of
standards for interoperability are a big influence in changing the OSS industry
from legacy modules to interoperating, standards-compliant modules written
by different vendors. Open OSS solutions are also available for download,
changing the competition landscape in the OSS space.
◾◾ Customer oriented solutions: Customer focused modules are also becoming
popular in support systems. Customer management and assurance of ser-
vice is becoming an integral part of business support systems. This change
in focus can be seen even in management standardizing forums with work
groups specifically for customer centric management.
[Figure 19.1: The scope of OSS/BSS — manage the infrastructure, manage the customer, and manage service and business.]
events generated during monitoring, the network configuration is fine tuned to rec-
oncile the delivered QoS with customer expectations. These events also help in net-
work planning, so that next time a similar service is configured, better results can be
obtained from initial settings of the network. Service level events are handled by the
support systems. The customer, as well as service providers, also require reports on the
service. The customer would be interested in reports like billing, value-added services
used, and when a service outage occurred. The service provider would be working on
reports related to resource capacity, utilization, bugs raised, and the SLA breaches to
be compensated. The generation of reports is a functionality of the support systems.
1. BSS (business support systems): These are solutions that handle business
operations. They are more customer-centric and are a key part of running the
business rather than the service or network. Some of the high level functions in
BSS include billing, CRM (customer relationship management), marketing
support, partner management, sales support, and so forth. These high level
The service OSS is the binding glue between the BSS and NMS domains (see
Figure 19.2). There needs to be seamless flow of data between business solutions,
service solutions, and network solutions in support systems for a service provider.
Together these modules form the support system in the service provider space.
Most service providers do an end-to-end (E2E) integration between these modules to bring
down the operational expenditure (OPEX).
eTOM (enhanced telecom operations map). Some of the new concepts brought in
to encompass all service provider processes were supplier/partner management,
enterprise management, and the addition of lifecycle management processes.
The eTOM is currently the most popular business process reference framework
for telecom service providers. A separate chapter will handle telecom business pro-
cesses and give a detailed explanation on TOM and eTOM. This section is intended
to give the reader an understanding on how the basic TMN management stack
evolved to the current eTOM.
these elements may be different at each of the locations. Again some of these
elements may be faulty, and some of the working elements might already be in
use, booked for a specific provisioning, being fixed, or under maintenance,
leading to a different status at a given point in time.
When the provisioning module needs resources based on parameters
like status, location, capacity, and so on, a decision is made on the most
appropriate resources to be used. This information is provided by an inven-
tory management module. The inventory data can also include details on
serial number, warranty dates, item cost, date the element was assigned, and
even the maintenance cost. All inventory information is logically grouped in
the inventory management module. This makes it easy to view and gener-
ate reports on the inventory. Capacity planning can be performed based on
inventory data and new inventory can be procured as per need before an
outage occurs. Inventory affects the CAPEX and OPEX directly and hence
inventory reports are of much importance to senior management. Procuring
inventory from a new vendor usually goes through a rigorous vendor analysis
followed by legal agreements for license including maintenance and support.
◾◾ Billing: Records are generated by the elements in the network that can be
used for billing. In switches where calls are handled, call details records
(CDR) are generated that have billing information. CDRs can have multiple
call records. The CDR is parsed for parameters like calling party number,
destination number, duration of call, and so on, and the customer is billed. A
mediation module performs the activity of parsing and converting the information
in the CDR to a format that can be used by the billing system. Billing
modules have a rating engine that applies the tariff, discounts, and adjustments
agreed upon in the SLA as applicable, and creates a rated record on how
the bill was calculated. This record is stored in a database and aggregated over
the billing cycle. At the end of a cycle (like a month for monthly billing of
calls), the aggregated records are used by modules like an invoicing system
to prepare an invoice for the customer. There can be different billing models
that can be applied based on the SLA with the customer. It can be flat bill
where a fixed amount is billed for usage of service, or a usage-based bill where
the bill reflects the customer’s utilization. Another example along the same
lines is real-time billing where the debit from a prepaid account or amount to
be paid for on a postpaid account can be viewed as soon as a service is used.
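As a purely hypothetical illustration of the mediation step described above, a CDR parsed into a vendor-neutral record for the rating engine might look something like this; all element names and values are invented for illustration.
<mediatedRecord>
  <callingPartyNumber>15550100</callingPartyNumber>
  <destinationNumber>15550199</destinationNumber>
  <callStartTime>2009-06-01T10:15:00Z</callStartTime>
  <durationSeconds>420</durationSeconds>
  <serviceType>voice</serviceType>
  <!-- the rating engine applies the tariff, discounts, and SLA adjustments to this record -->
</mediatedRecord>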
Now let us see an E2E flow of how the modules discussed in this section (see
Figure 19.3) interact to offer a service like telephony. Every activity starts with a
business requirement. So first the customer contacts the sales desk and uses a Web
interface to request the service. Once the credit check is performed and the customer
can place an order, the required customer details and service information are fed to
the order management system. The order management system creates a work order
and sends the details to the provisioning system. The provisioning system sends
[Figure 19.3: E2E interaction of support system modules — sales and customer care, order management, billing, customer relationship management, inventory management, service/network provisioning, trouble management, and service/resource management and operations — serving the customer.]
data to the inventory system for allocation of appropriate resource. If some manual
setup is required before starting the automated provisioning, then a work request or
a trouble ticket is created. Using the workforce module, the request is assigned to a
technician for resolution. Once the setup is ready, the provisioning module initiates
provisioning of service and the network. Once provisioned, the service is activated
and the billing module is triggered to collect and work on the billing records.
19.8 Conclusion
The overview presented in this chapter is just the foundation for the chapters that
follow. The difference between OSS and BSS was discussed in the context of sup-
port systems. The ambiguity associated with OSS as a complete support system
including network management, versus a support system for service management
alone, was also handled in this chapter. The shift from the popular TMN model to eTOM
was also covered. Support systems are a vast subject, and once the reader gets the
fundamentals from this book there are many support processes in which the reader
can specialize as a professional.
Additional Reading
1. Kornel Terplan. OSS Essentials: Support System Solutions for Service Providers. New York:
John Wiley & Sons, 2001.
2. Hu Hanrahan. Network Convergence: Services, Applications, Transport, and Operations
Support. New York: John Wiley & Sons, 2007.
3. Kornel Terplan. Telecom Operations Management Solutions with NetExpert. Boca Raton,
FL: CRC Press, 1998.
Chapter 20
OSS/BSS Functions
20.1 Introduction
Usually operation and business support systems deal with fulfillment, assurance,
and billing (FAB) for a service associated with a customer requirement. The back-end
planning to enable real-time support is more of a process than an actual
support system. The business map, as in eTOM, brings in an E2E perspective for
all processes in the service provider space, with FAB as a major module. This chapter
details the basic building blocks for fulfillment, assurance, and billing that make
up the major portion of all support systems. For ease of understanding, the expla-
nations are given using activities in each process without reference to any standard
models like eTOM.
[Figure 20.1: Fulfillment building blocks — sales, order handling, and customer management; service planning and development, service provisioning, and inventory management; network planning and development, and network provisioning.]
◾◾ Preorder data collection: This involves collecting the customer details and vali-
dating credit (credit check) to ensure that the requested service can be placed.
◾◾ Accepting orders: This activity includes creating the order and making
changes (editing) to the order based on a customer request.
◾◾ Integration with inventory database: The order can be successfully completed
only when the required inventory to offer the service is available. To satisfy
the order, the required resources are reserved or, when the resources are not
already available, they are procured. In the usual scenario, the inventory management
system ensures that the resources to offer a service are always available
by issuing reports when a specific resource is low and needs to be stocked.
◾◾ Initiate provisioning: When the service provider has flow through provision-
ing modules, the order management system triggers the provisioning modules
to complete the order. The input to provisioning with details of the service
and the SLA parameters comes from the order management system.
◾◾ Customer view of order: The customer needs to be notified on the status of
the order. The order management system centrally tracks and manages the
order and notifies the customer on the order status.
◾◾ Initiate billing: There are multiple modules that integrate with the order
management system. One of the modules is the billing system. To service an
order, the OMS first triggers the provisioning module. When the provision-
ing module notifies the OMS that provisioning is complete, then the service
is activated. Activation of service should also start billing the customer for
the service. So the OMS initiates the billing system as part of completing the
order placed by the customer.
◾◾ Plan development: The order management system helps in services and net-
work planning. Based on orders that are placed, the key services that custom-
ers are interested in can be identified. Based on customer records these orders
can give statistical inputs on what age group prefers what type of services and
to identify new service areas that will generate business. The order informa-
tion will also help in forecasting the services that will be in demand for the
next business cycle and resources (both manpower as well as equipment) to
offer the service in time can be made available. This planning will have a
direct impact on reducing the CAPEX and OPEX by proper procurement
and allocation of resources.
◾◾ Integration with workflow management system: When automated provision-
ing is not possible after placing the order and when a technician needs to be
involved in setting up resources to offer a service, then the order management
system sends data to the workflow manager so that an appropriate technician
can be allocated to fix the issue before auto-provisioning can be triggered.
The same is the case when an auto-provisioning action fails and a technician
needs to fix the issue, so that auto-provisioning can be retriggered.
◾◾ Interactive user interface: Web-based ordering is most popular in customer
space and most service providers offer their customers the capability to place
and track their order on the Web. The sales, operations, order management,
and customer support team in the service provider space would mostly have
an application or Web-based user interface to create and edit orders placed by
methods other than World Wide Web (www) like a telephone order, order by
mail, or order placed at a sales center.
◾◾ Ease of integration: The order manager has to integrate with multiple support
solutions. The solutions OMS has to integrate with may be from different
vendors. So it is important that the OMS has its APIs exposed for other modules
to integrate with. Standard APIs and a scalable architecture are usual traits
of an OMS, which makes it easier for other modules to integrate with it. Some
OMS solutions also support multiple platforms, which makes integration easier
in a multivendor environment.
◾◾ Interaction with trouble ticket manager: The OMS also has to interwork with
the trouble management module. The service terms and QoS parameters
captured in the order needs to be communicated to the problem handling
modules. In case of any trouble condition as part of completing an order or
during the service execution linked to an order, the trouble ticket manager
will raise a ticket and assign it to a technician for resolution. Each order
placed by the customer is uniquely tracked and any trouble conditions are
mapped to the order.
◾◾ Integration with customer relationship manager (CRM): The customer sup-
port desk will get all types of queries with regard to an order like the date
of completing the order, the quote, or the price the customer has to pay for
the service in the order, the status of the order, changes to be made in the
order, cancellations to the order, discounts or adjustments to the order based
on promotions, confirmation on the order, troubles with regard to the order
after service is activated, and so on. In all cases where the customer interacts
with the support desk, the specific order has to be identified to respond to the
query. So the CRM module interacts with the OMS module to get informa-
tion on the order.
◾◾ Reports: The OMS issues different types of reports that are useful for cus-
tomers as well as internally by the service providers. The reporting engines
usually offer the capability to generate reports in a variety of formats like pdf,
word doc, excel, image file, and so forth. The reports from OMS can be the
order status, the orders that were not completed and the reasons, the trouble
conditions that occurred during the completion of an order, the total revenue
generate from orders raised on a specific service, and many more reports that
help in proper planning and development of service along with an increase in
customer satisfaction.
◾◾ Reliable service to the customer: The service requested by the customer is cap-
tured as an order. The order management system has predefined formats for
collecting orders for specific services. So when an order management system is
used the completeness of the service details is guaranteed. In addition to clar-
ity in collecting an order, the order management module also tracks the order
to completion, thus ensuring that the service is rendered to the customer.
◾◾ Quick delivery of service: The use of an order management system auto-
mates the order collection and service activation cycle. As a result, the time
to deliver services is minimized. The OMS does quick credit checks to ensure
the customer has sufficient credit, it can check the inventory module for
resources to complete a service request, and the OMS can also trigger flow
through provisioning. The order management module communicates (inte-
grates) with a variety of modules to reduce the net time to deliver a service to
the customer.
◾◾ A lot of manual processes are eliminated: Manual processes are not just time
consuming, they are also error prone. In the absence of an order management
module, the customer requirements need to be captured manually. Also it
should be noted that an order management module is like a central con-
trol module that triggers and coordinates activities with multiple modules.
Absence of the order management system would mean that these triggers
need to be manually activated based on requirement.
◾◾ Proper planning: This helps to reduce time-to-market service. Based on the
orders, the customer interests can be identified. So the right set of resources
could be made available and innovative services based on the customer order
trends can be introduced.
At a high level, the steps performed by the order manager are shown in Figure 20.3.
The provisioning system plays a critical role in fulfillment of service. The pro-
visioning process in itself is quite complex where the activities would be across
different kinds of networks and can involve a variety of tasks related to the service.
For example, when a service is provisioned it might involve configuring elements
[Figure 20.3: High-level order manager flow, steps (1) through (9), across customer management, the service catalog, order management, inventory management, workforce management, the provisioning system, verification and validation, and billing.]
in ATM or FR for transport, RAS for access and some elements in a wireless
network. The actual task to be performed in the element could include, but not
limited to, configuring an application, setting up parameters for security, connec-
tivity setup between elements, associating storage with server or application, and
so forth. The provisioning OSS (see Figure 20.4) can also handle configuring of
network for dynamically oriented transactions like authentication, accounting, and
authorization. To summarize, the provisioning system can do application provision-
ing, service provisioning, resource/element/server provisioning, network provision-
ing, storage provisioning, security provisioning, dynamic SLA-based provisioning,
monitoring provisioning, and any other setup activities that will be required to provision
a product package consisting of at least one (and possibly multiple) services, which is
described as an order. An operation support system will usually consist of multiple
provisioning modules specialized for a specific activity. Provisioning with minimal
or no manual intervention is a goal that most service providers try to achieve in
their OSS to bring down the OPEX.
The inventory management system supports and interacts with multiple mod-
ules relevant to fulfillment by keeping track of all the physical and logical assets
and allocating the assets to customers based on the services requested. Order man-
ager interacts with inventory manager (IM) to check if sufficient inventory exists
to complete an order. The purchase and sales module updates the IM with new
assets added, old assets sold, assets given to an outside party on rental, and assets
that were borrowed from a third-party vendor. Provisioning module interacts with
IM to work on allocated resources and also for updating the status of allocated
resources. The next module that gets a feed from the IM is the network manager. The
network manager contacts the IM to get static information on the resources being
monitored and managed by the network manager.
[Figure 20.4: The provisioning system issues provisioning commands and communicates with component APIs.]
The network manager user interface will have the capability to display the
attributes of the resources being managed. The value of some of these attributes
can be queried from an IM module. Inventory management is mostly automated
with minimal or no manual entries. This is because small errors while manually
feeding inventory data can result in major trouble conditions and network failure.
For example, consider two bridges intended for the same functions, with a similar look
and feel, and with similar specifications and cost. The only difference is a single
digit in the product code. This might correspond to a major impact if
the bridges are intended for working in two different configurations, making one
unsuitable for working in the other's configuration. Since, based on external features,
the two bridges look similar, a manual entry in the IM might result in a bug.
This problem does not arise when the
product code and other parameters are read using a scanner, even when the product
code is an alphanumeric string or a barcode. Auto discovery and status updates are
available in some inventory management modules, where the IM module interacts
with element managers or the agents in the network elements to get status updates
and to automatically discover a new resource that is added to the network and not
updated in IM module.
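Auto discovery can be illustrated with a small reconciliation step: resources reported by element managers or agents are compared against IM records, and the differences drive IM updates. This is a hedged sketch with a hypothetical data model (product codes mapped to a status string), not an interface from any specific product.

```python
# Hypothetical sketch: reconciling auto-discovered resources against inventory
# manager (IM) records. Both inputs map a product code to a reported status.
def reconcile(discovered: dict, inventory: dict) -> dict:
    new_resources = [code for code in discovered if code not in inventory]
    missing = [code for code in inventory if code not in discovered]
    status_changed = [code for code in discovered
                      if code in inventory and discovered[code] != inventory[code]]
    return {"add_to_im": new_resources,
            "flag_missing": missing,
            "update_status": status_changed}

# Scanned product codes avoid the single-digit manual-entry error described above.
discovered = {"BRDG-1001-A": "up", "BRDG-1002-A": "up"}
inventory = {"BRDG-1001-A": "down"}
print(reconcile(discovered, inventory))
# {'add_to_im': ['BRDG-1002-A'], 'flag_missing': [], 'update_status': ['BRDG-1001-A']}
```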
Automation of the inventory management system is a necessity considering the
evolution of communication networks to offer complex services in a multivendor
environment. This increased complexity demands simple and flexible ways to
allocate resources and to manage, maintain, and plan how networks are configured.
Planning requires operators to maintain configuration records, to search the records
with simple queries, and to refer to consolidated reports when required. A proper
planning process is possible only by integrating records through a common system
like the inventory manager (see Figure 20.5), which makes the records manageable
even when they are stored in multiple localized databases.
An automated inventory management system is a key module for achieving
flow-through provisioning. The IM interacts with an order module to allocate the
resources. The provisioning system checks the inventory modules for allocated
resources to do provisioning and finally updates the IM with the status of the
resources after provisioning, or with failures during provisioning. Inventory is hence
a key module and needs to be automated for quick provisioning without manual
intervention. The quality of work is also improved when the administrative burden
of managing inventory records is handled by an automated inventory manager.
With a single click, the IM can tell the operator whether the proper equipment is in
place to offer a specific service or whether new equipment needs to be installed. The
links and capacity circuits that provide backbone transport can be checked and
assigned. Commands instructing provisioning systems to perform configuration can
also be triggered from the inventory management system. There are many vendors
who specialize in inventory management products. Telcordia is one of the market
leaders in network inventory management. Inventory management can also be a
bundled solution offered as part of a complete OSS solution suite.
[Figure: inventory management module interacting with the provisioning and order management modules]
Key characteristics expected of an inventory management system include:
◾◾ Scalability: The huge volumes of inventory data and associated reports demand
an inventory database that can scale easily. The storage requirements should
be planned properly when deploying an inventory management solution.
◾◾ Ease of use: The inventory information needs to be properly sorted based on
device types. The data representation should be user friendly, with the use of
maps, drawings, and symbols wherever required. The inventory data represented
must support different views, like a physical view showing where a particular
device is placed (its location in the chassis), a hierarchical view that can be
browsed down to get the attributes of individual elements, or a service view
where the elements are grouped based on the services that are running/installed/
offered on the elements. There should be built-in libraries of maps, locations,
floor drawings, and network object symbols available in the inventory
management system. This makes it easy to feed and represent data. In short, a
proper use of text and graphics is expected in representing inventory data.
◾◾ Distributed architecture: Telecom inventory of the service/equipment pro-
vider may be distributed at different geographical locations. The inventory
management system should adopt a distributed architecture in collection of
data and updating of data. For data lookup, the inventory management sys-
tem should provide a centralized view of all data available to the service/
equipment provider.
◾◾ Assist network design: The network design is dependent to some extent on
the available inventory. To realize a service, the supporting network design
involves effective use of available resources. Pre-planning based on existing
inventory data for an improved network design is also possible.
◾◾ Inventory creation and optimal allocation: The inventory management sys-
tem needs to send alerts to add inventory that is scarce and reallocate inven-
tory that is released after a service is terminated, thus creating inventory for
allocation. Powerful and proven inventory algorithms need to be used to
design and allocate network components. Proper allocation ensures rapid and
accurate service delivery.
specific equipment and get details on its suggested configuration. The planning
performed prior to network design helps the service provider make appropriate
changes in configuration for smooth provisioning and good network performance
after deployment.
Event- or rule-based approaches are usually used for network design. Event-driven
systems can be programmed to take corrective actions dynamically in fault
scenarios. This intelligence is specified as a rule. It is also common to do
logical modeling whereby elements that can be grouped are considered one logical
entity based on the services that are provided or executed on these elements. This
makes it easier to define configurations, where attribute values are set on the logical
collection rather than on individual elements that make up the collection. Network
templates make design much faster and easier by offering the ability to define and
maintain rules governing the ordering, design, and provisioning of various logical
network systems. These templates have default values for configuration that can be
reused without redefining the values each time a new design, similar to a previous
setup, has to be created. The network design module can be used not just for
setting the configuration of elements in the network; it also supports design of con-
nections such as links between the elements.
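As a rough illustration of network templates, the sketch below applies default configuration values to a logical collection of elements and allows per-design overrides. The template fields and element names are hypothetical; real design tools carry far richer rules for ordering, design, and provisioning.

```python
# Hypothetical sketch: a network template with default configuration values
# applied to a logical group of elements, with per-design overrides.
TEMPLATE_DSL_ACCESS = {
    "vlan": 100,
    "qos_profile": "best-effort",
    "max_sessions": 2000,
}

def design_from_template(template: dict, element_group: list, overrides: dict = None) -> dict:
    """Return one configuration for the whole logical collection of elements."""
    config = dict(template)
    config.update(overrides or {})
    return {"elements": element_group, "config": config}

design = design_from_template(
    TEMPLATE_DSL_ACCESS,
    element_group=["dslam-01", "dslam-02"],      # grouped as one logical entity
    overrides={"qos_profile": "premium"},
)
print(design)
```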
Network design reference models currently available are mainly focused on
the service provider and are intended to help them evaluate market penetration
as well as other user-related capabilities of applications or services. The three
parties—the service provider, service developer or software and equipment pro-
vider, and the customer—should be considered for introducing changes in the
network. There should be an evaluation of end-to-end flow from services to net-
work capabilities. When service providers want to offer a new and innovative
service, in most cases the network to support it will not be available, and when the
equipment provider comes up with an innovative technology in the network, there
might not be a proper business case, resulting in a suboptimal market fit. This
affects profit margins of the service developers as well as impedes rapid service
development.
For example, some of the widely used reference models for wireless network
design include WWRF/WSI and I-Centric. One of the areas that these models
do not address is the evaluation of the customer space and the network space as
one full flow, which is required to bring out business-focused technology devel-
opment. With the advent of the converged network, the service providers, con-
tent providers and network equipment providers have to rapidly conceptualize,
evaluate, and integrate their service offerings. Also due to increasing competition,
there is additional pressure to deliver the “right” product/service “on time, every
time.” It should be kept in mind that any focused development outside the realms
of pure research needs to have a supporting business case and technical roadmap
evaluating the value add it will offer to both technology and business. In com-
ing up with a proper business case and technical roadmap, the customer space,
service space, and network space need to be considered even when the actual
[Figure: fulfillment interactions among the service provisioning and configuration, inventory management, test management, and related modules]
The numbered interactions in this flow are as follows:
1. First the sales and the customer have an interaction on the various service
offerings. In this interaction, sales will provide details on available services
and answer the queries raised by the customer. The result of this step is that
the customer will be ready to place an order if there is any service that meets
customer requirements.
2. In this step the customer places an order, either directly with the order man-
agement system (2A) or through sales (2B). If, after clarifying queries with sales
over the phone, the customer places an order for a service package with sales on the
same phone call, this would be an example of ordering through sales. It is also
possible that after the customer clarifies queries with sales, the customer logs onto
the Internet and places an order online with an online order management system,
which would be an example of a direct order placed by the customer with the order
management system.
3. As part of placing the order, the customer and service related details are saved
in the order management system. This information will be used for fulfillment,
assurance, and billing functions. This step involves the order management
system requesting the service provisioning and configuration module to acti-
vate the service.
4. To configure the network and its elements, the service provisioning mod-
ule sends commands to the network provisioning and configuration module.
Network provisioning, as already discussed, takes care of elements in the
network and the connectivity between the elements. The intent here is to
configure the service at a network element level.
5. Before configuring elements, the network provisioning and configuration
module first interacts with the inventory management module to check the
availability of resources. The inventory management system performs all the
functions needed to make the resources requested for offering the service available,
and informs the network provisioning module when it can go ahead and configure
the selected resources.
6. Based on inputs from the inventory module, the network provisioning mod-
ule now configures the network to offer the requested service. This includes
configuration of elements, setting up the connectivity, and ensuring that the
services are offered on a secure platform.
7. Next a test management module checks the configurations performed by net-
work provisioning and the configuration module and ensures that the service
is working as intended. The interaction between the test management module
and network provisioning and configuration module ensures that all bugs are
reported and fixed.
8. The network provisioning and configuration module sends a status comple-
tion message to service provisioning and the configuration module once a
confirmation is received from the test management module that the network
has been properly configured. The service provisioning module now performs
any service level configuration required and activates the service.
9. The network provisioning module also updates the inventory management
module with information on the status of the allocated resources.
10. Service provisioning and the configuration module now informs the order
management system to update the status of the order. The order management
system sets the order status as completed.
11. The order management system will then inform the assurance solution to
start collecting reports on the services offered as part of the completed order
and the billing solution to start billing the customer as per the SLA associated
with the order.
12. Finally the order management system will directly update the customer (12A)
or inform the customer through sales (12B) about the activation of the service
package requested by the customer when placing the order.
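The core of steps 3 through 10 can be summarized in a short orchestration sketch. The module interfaces below are hypothetical, in-memory stand-ins for the order management, inventory, network provisioning, and test management systems; the sketch only shows the sequencing, not any vendor's implementation.

```python
# Hypothetical sketch of steps 3-10 above: allocate inventory, configure the
# network, verify, update inventory, activate, and close the order.
class Inventory:
    def __init__(self, free): self.free = list(free)
    def allocate(self, service): return [self.free.pop()] if self.free else []
    def mark_in_service(self, resources): print("IM: in service", resources)

class NetworkProvisioning:
    def configure(self, service, resources): print("configure", service, "on", resources)
    def activate(self, service): print("activate", service)

class TestManagement:
    def verify(self, service, resources): return True   # assume all checks pass

class OrderManagement:
    def set_status(self, order_id, status): print(order_id, "->", status)

def fulfill(order_id, service, im, net, test, om):
    resources = im.allocate(service)                     # steps 4-5: inventory check
    if not resources:
        om.set_status(order_id, "failed: insufficient inventory"); return False
    net.configure(service, resources)                    # step 6: configure the network
    if not test.verify(service, resources):              # step 7: test the configuration
        om.set_status(order_id, "failed: verification"); return False
    im.mark_in_service(resources)                        # steps 8-9: report status
    net.activate(service)
    om.set_status(order_id, "completed")                 # step 10: close the order
    return True

fulfill("ORD-42", "broadband-100M", Inventory(["port-7/1"]),
        NetworkProvisioning(), TestManagement(), OrderManagement())
```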
Before we move from fulfillment to assurance, let us look into the key vendors
providing solutions for the fulfillment process. Telcordia would probably top the list.
20.3 Assurance Process
The assurance process involves the set of operations used to assure that the services
offered to the customer are meeting expected performance levels (see Figure 20.7).
The “assurance” process embodies a range of OSS solutions to ensure that a net-
work is operating properly and that service quality thresholds are maintained. Two
main areas covered under assurance are problem handling and data management
across customer, service, and network. Once an order is completed, continuous
monitoring is required to ensure that the SLA agreed upon with the customer is
being satisfied, that the service quality is optimal with hardware usage within
permissible limits, and that network performance maintains the service as per the
SLA without any outages. Associated with monitoring is the identification of bugs
and corrective actions to fix fault scenarios. The bugs can be identified and reported
by the customer, or they can be identified from service or network logs as part of
monitoring. Managing the lifecycle of the bugs identified is an essential part of
assurance.
Network and service management is mostly covered under assurance. The
collection of management records, distribution of records to other management
applications, performing maintenance operations like installing patch and chang-
ing configuration, restoring an element or connectivity that failed, and monitoring
performance records are all covered as components in the network management
part of assurance. Service assurance is more about monitoring the service KPIs (key
performance indicators) to ensure that the intended service quality is achieved. The
number of outages, the maximum transfer rate, and the number of fault scenarios are
all indicators of the quality of service. At the customer level, the assurance process
is targeted more toward meeting the SLA. The parameters specified in the SLA are
monitored, and any slippage in the SLA needs to be communicated to the customer
and compensated. Customer retention and loyalty are also a part of the assurance
process. Sending reports and mail alerts on scheduled maintenance, providing
incentives and discounts, and quick closure of reported troubles all help to improve
customer satisfaction. A well-defined assurance process can be a key differentiator in increasing a
service provider’s competitiveness.
The network operations are monitored and maintained by administrators work-
ing from the network operations center (NOC; see Figure 20.8). To control and
maintain the network activity there can be multiple NOCs for a single service pro-
vider. The device status data, alarms generated, performance records, and critical
element failure information are all collected by the network management system at
the NOC. The records collected by the NMS are used to generate reports that will
be used for network planning and service management. The fault information from
NMS can be aggregated to provide service level fault data. So a trouble condition
in an offered service can be mapped to the associated network and further drilled
down to the specific element that caused the issue. The NOC professionals have to
continuously monitor the critical performance and fault information required to
ensure smooth network operation.
The management solution in NOC deals with:
◾◾ Collection: The data from the network elements has to be collected. All fault,
configuration, accounting, performance, and security related data are collected
using the management solution. Proprietary protocols and standard protocols
like SNMP, CMIP, and TL1 can be used for collecting the data. There can also
be multiple networks involved in offering a service, in which case data from all
participating networks needs to be collected by the NOC.
◾◾ Consolidation: The data from different element managers needs to be
consolidated and fed to the network manager. CORBA IDL or XML based
interfaces are used to make this communication possible where data needs to be
exchanged between the element and network management layers.
[Figure: management solution layers in the NOC — collection from element managers, consolidation, analysis, and distribution]
◾◾ Analysis: The consolidated data has to be analyzed to detect trouble conditions,
for example, a flood of alarms raised when a media gateway fails during a new
module load. The collected logs need to be filtered and correlated to identify the
actual problem that needs to be fixed.
◾◾ Corrective action and maintenance: When analysis of data identifies a trouble
condition that can be mapped to a predefined corrective action, an event
handling module will execute the steps to complete the corrective action. For the
above example of media gateway failure during a new module load, the event or
action handler will trigger the command on the media gateway to perform a
module replacement or fall back to the old module/load (a minimal sketch of
this mapping follows the list). An administrator can also execute commands to
take corrective action for a trouble condition. Service providers aim at having
minimal manual intervention for fixing trouble conditions. The actions can
involve repair, restoration, and maintenance activity.
◾◾ Distribution: Data filtered at the network manager, or service logs created by
aggregating multiple network level logs, get distributed to several service
management modules and other network management modules. The distribution
channel can lead to business level modules like order management and customer
trouble management.
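The corrective action rule mentioned above can be thought of as a simple lookup from an analyzed trouble condition to a predefined action. The rule table and command names below are hypothetical; they only illustrate the mapping, with escalation to an administrator when no rule exists.

```python
# Hypothetical sketch: mapping an analyzed trouble condition to a predefined
# corrective action, as in the media gateway example above.
CORRECTIVE_ACTIONS = {
    ("media-gateway", "module-load-failure"): "fallback-to-previous-load",
    ("edge-router", "link-down"):             "switch-to-standby-link",
}

def handle_event(element_type: str, condition: str) -> str:
    action = CORRECTIVE_ACTIONS.get((element_type, condition))
    if action is None:
        return "escalate-to-administrator"    # no predefined rule: manual handling
    print(f"executing '{action}' on {element_type}")
    return action

handle_event("media-gateway", "module-load-failure")
# executing 'fallback-to-previous-load' on media-gateway
```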
Next let us look into quality assurance, which ensures that the service pack-
age after activation meets the guaranteed performance levels as per the customer
specific SLA. Quality assurance is handled by customer QoS/SLA management,
service quality management, and network performance management modules.
A set of attributes that are classified under network, service, and customer levels are
defined for measuring quality. These attributes are monitored by the quality assur-
ance management modules.
Some of the attributes monitored by network performance management mod-
ule are:
◾◾ Latency: It is the time delay between the initiation of an event and the
moment the effect begins. There are different types of latency.
−− Propagation latency corresponds to the time delay for travelling from source
to destination. Propagation latency can be calculated as one way or round trip.
−− Transmission or material latency is the time delay caused in transmission due
to the medium used for transmission.
−− The response from a system may not be instantaneous upon receiving the
request. The delay in response caused due to processing of request leads
to processing latency.
−− There are other forms of latency like synchronization latency, caused
when communication occurs between two systems that have different
processing capability due to factors like difference in buffer size or proces-
sor speed. The latency caused due to the mismatch can lead to flooding of
the system with lower capacity.
◾◾ MTTR: This attribute is a measure of the availability of the system (a small
worked example follows this list). For systems that have a backup facility, where
a secondary machine takes over without delay when the primary goes down, the
MTTR is zero. Here the system has zero MTTR while the individual devices
(primary and secondary) have nonzero MTTR.
−− Mean time to recover: It is the average time a system or network device/
element takes to recover from a trouble condition or failure.
−− Mean time to respond: The average time the support or maintenance team
takes to respond to a request to fix a network issue is calculated in mean
time to respond. The MTTR value is usually mentioned as part of the
support agreement the service provider has with a network or equipment
support vendor.
[Figure: timeline showing failures interrupting periods of normal operation, used in computing availability]
◾◾ Service downtime: This is the time interval during which a service is unavail-
able. High availability of systems is now a mandatory requirement while
offering most services and a specific value of maximum service downtime
is specified as part of the contractual agreement. This attribute is similar to
MTTR, except that repair and recovery usually deals with a system and its
elements, while service downtime looks at the entire service offering without
any reference to the systems that make the service possible.
◾◾ Service order completion time: There is a time gap between placing an order
and completion of the order. This time gap is required to do the provisioning
activity for activating the service. The customer placing the order is provided
with an order completion time. This helps the customer to know when the
services will be ready for use or when to check back on the status. There could
also be price packages associated with order completion time. Some custom-
ers might be in urgent need of a service and can request premium processing
of the order. Rather than waiting for all orders that were placed by other
customers before the current order to be processed, the order with premium
processing is given a higher priority and will be processed as soon as possible
leading to shorter service order completion time.
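As a small worked example of the MTTR and downtime attributes above, the common steady-state approximation availability = MTBF / (MTBF + MTTR) can be used to relate recovery time to permitted service downtime. The figures below are illustrative only and do not come from the book.

```python
# Worked example (common approximation, illustrative numbers): relating MTBF
# and MTTR to availability and to the downtime that availability allows per month.
mtbf_hours = 720.0          # mean time between failures
mttr_hours = 0.5            # mean time to recover

availability = mtbf_hours / (mtbf_hours + mttr_hours)
monthly_downtime_min = (1 - availability) * 30 * 24 * 60

print(f"availability       = {availability:.5f}")             # ~0.99931
print(f"downtime per month = {monthly_downtime_min:.1f} min")  # ~30 min
```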
There are different types of reports created as part of an assurance process, some
of which are used by the service provider and others useful for the customer. Let us
briefly touch upon some of these reports:
◾◾ Network usage reports: The network utilization is evaluated using this report.
Network planning can be done with this report. Any additional usage can be
identified and billed. Underutilization can be monitored, and resources can be
reallocated for better utilization without compromising on the SLA.
◾◾ Performance dashboards: The performance of the network elements is
represented in dashboards. The dashboards are prepared using performance
reports collected from the network elements at regular intervals of time.
◾◾ SLA report: This report shows the performance of a service against the SLA.
It helps to identify SLA breach and SLA jeopardy scenarios (see the sketch after
this list). The service provider can take corrective actions to ensure that service
levels remain within predefined boundary values. SLA reports are sometimes
also shared with the customer to show transparency of information and improve
customer satisfaction levels.
◾◾ Customer usage patterns: This report is published to the customer providing
details of the usage. The reports are mapped to the billing cycle. For example,
when the customer is billed monthly, the customer usage report will show
the variations in usage based on preceding months. The change in a billed
amount is also provided as a pattern for customer viewing by some service
providers.
◾◾ Service providers also publish reports on general statistics of the most popular
services, feedback/satisfaction levels from customers, annual or quarterly
financial reports of the company, and so on.
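The SLA breach and jeopardy scenarios mentioned in the SLA report above can be illustrated with a small threshold check. The jeopardy margin and numbers below are hypothetical; real SLA managers track many parameters and time windows.

```python
# Hypothetical sketch: classifying a measured value against an SLA limit as
# ok, in jeopardy (close to the limit), or breached. Lower values are better here.
def sla_status(measured: float, limit: float, jeopardy_margin: float = 0.1) -> str:
    if measured > limit:
        return "breach"
    if measured > limit * (1 - jeopardy_margin):
        return "jeopardy"        # within 10% of the limit: warn before breaching
    return "ok"

print(sla_status(measured=46.0, limit=50.0))   # jeopardy (e.g. latency in ms)
print(sla_status(measured=55.0, limit=50.0))   # breach
```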
[Figure: trouble handling flow involving the customer support center (CSC), billing and fulfillment, trouble ticketing, verification, provisioning, the network operations center (NOC), and the network elements]
A typical trouble handling flow proceeds as follows:
1. Customer calls up the customer support center (CSC) and reports an issue
about the service offered along with details on the customer account.
2. The CSC will verify the account details of the customer with the billing and
fulfillment modules.
3. Next the specific problem and the history of the problem are documented with
the trouble ticketing module.
4. CSC will then try to fix the issue (4a) or hand off the call to technical support
(4b) in the network operations center (NOC).
5. Assuming CSC hands off the call, the NOC will interact with the customer
and try to do the following checks based on professional support expertise
and diagnostic tools available:
a. Identify if this needs to be fixed at service level.
b. Identify if this is caused by a problem with the network.
c. Determine if this problem is caused due to a problem already identified
and tracked by an existing trouble ticket.
There are several issues that come up as part of trouble handling. Some of
them are:
◾◾ Customer support team has limited access to data: In most CSCs the support
team only has access to the customer relationship management and billing
data. So the customer might have to elaborate the issue first to the customer
support and then to the billing representative. Even normal queries that
involve the signal strength may require intervention of technical support.
◾◾ Customer interaction: The customer will always want an immediate solution
to the problem and in some cases may not be able to correctly communicate
the issue. The support team should be well trained to handle even the most
frustrated customers.
◾◾ Most calls are queries rather than reporting an issue: The calls from customer
to support desk can be for more information on a product, details on using
a service, balance query, interpreting a report, and so on. To avoid time loss
in dealing with queries and to devote more time of professional support on
actual trouble scenarios, the support system is mostly automated. So when
the customer dials a customer support number, the customer is requested to
make a selection to describe the reason for the call. Wherever possible the
automated support module will try to satisfy the customer.
◾◾ Call handoff not seamless: There are multiple sections in the customer support
team to handle issues on specific topics like billing, new product information,
service activation status, and so on. It can happen that multiple call handoffs occur
first within customer support in the CSC before the call goes to a technical support
professional in the NOC. A typical example is a customer calling up support to
check why a new service is not working. This call would first go to the billing team
to check credit, then to the service activation team to check the status, and, if the
service is already activated, then to the technical team to isolate and fix the problem.
◾◾ Proactive identification: The problems that are reported by a client and need a
fix are cases that could often have been identified by the service provider's own
professionals. Proper validation and verification can reduce the number of
bugs reported. When a bug is fixed with regard to a specific service, suitable
actions need to be taken that ensures the same issue in another service or for
another customer is also fixed. Proactive fixing of issues by the service provider
and service developer can reduce the number of bugs reported by the customer.
◾◾ Isolating the issue: This involves mapping network issues to a service and
more specifically to identify the node in the element that has caused the ser-
vice failure. When there are multiple issues to be fixed then a decision has to
be made on what issue needs to be fixed first. A discussion might be required
between teams to identify the best way to solve a set of network level issues
to bring back a service. Some equipment and solutions used in the service provider
space may be from third-party vendors. To fix some bugs and to isolate
the issue itself, interaction with third-party vendors may be required. Most
service providers outsource their support activities to IT companies to have
a single point of contact for inquiring on the status of a ticket and the IT
company has to manage the different external vendors.
◾◾ Fix customer issues in minimum possible time: The customer is worried about
his service and will be calling the support desk for a quick and concise solu-
tion to the problem. The quality of customer service depends on how much
the customer is satisfied at the end of the call. For this reason, extensive train-
ing is offered to call center professionals with the intent to improve customer
experience.
◾◾ Real time monitoring of customer experience: Customer service can be a key
differentiator in making a service provider more lucrative to a customer com-
pared to its competitor. Similar to monitoring the service and the network,
the customer experience also needs to be monitored for customer retention
and satisfaction. For this reason the customer calls may be recorded for analy-
sis of customer service quality.
◾◾ Identify the service and network performance: After deployment, the issues
raised by customers are a valuable source for identifying the quality of service,
the bugs in the service and the underlying network, and the issues that need to
be fixed in similar service offerings that are active or will be provisioned in the
future. Based on bugs reported by customers, if the bug density of a
product is relatively high and is leading to lawsuits, then those services can
be revoked for releasing an improved version and for planning toward new
products.
◾◾ Identify reliable third-party vendors: There can be multiple third-party ven-
dors handling support and assurance functions. These could be suppliers of
the network equipment, external call center professionals, or developers of
the services. The equipment failures give an indication of which vendor can
be approached for future opportunities. Time to solve a reported issue by
replacing or fixing the equipment becomes a critical factor. The number of
satisfied customers after a call with a support agent is an indication of how
well the call center professionals are handling support tasks. Finally the num-
ber of bugs reported on the service is an indication of the development and
test effort performed before the service was launched. Identifying reliable
third-party vendors for future opportunities can be done as part of monitor-
ing the assurance reports.
The number of support calls from customers has increased recently, and the main
causes can be attributed to the following changes in the service provider space
affecting customer experience:
◾◾ New complex convergent services are being introduced that the customer is
not familiar with. Support calls can be with regard to ordering, service usage,
billing, and so forth.
◾◾ Different types of networks are coming up and the service provider profes-
sionals may not be able to fix all bugs before deployment.
◾◾ Many suppliers and partners with regard to services being offered and the
content associated with the services.
◾◾ Acquisitions and mergers in service provider space, which require aligning
business process and integrating solutions.
◾◾ Nonscalable legacy applications for management impede proper monitoring of
next generation networks and services.
limitations on the technical desk in identifying the network issue. This would
affect the problem handling and service fulfillment time. Filtering the most
relevant events for the issue raised by the customer would be a challenge if
sufficient event abstraction is not already in place.
◾◾ Auto-recovering and self-managing network elements can reduce the work of
the support professionals and fix customer issues without the customer having to
call the support desk. With an increase in network complexity, the cost of
maintenance and time to fix issues will increase. Auto recovery mechanisms and
the presence of a self-managing network are key functionalities in a
well-managed network.
◾◾ Standard based development and well-defined contracts for interaction should
be followed by the service developer to ensure interoperability and ease of
integration. The issues that originate in a multisupplier/partner environment
with acquisitions/mergers can only be addressed using standardization.
◾◾ Transformation of legacy application is also a strategy followed by many ser-
vice developers. Since legacy solutions are functionality rich, many service
developers don’t opt for new SOA-based solutions, rather they transform
their legacy solution in stages to an SOA-based framework.
◾◾ COTS software and hardware developed with a holistic telecom vision should be
used by service developers in the solutions they offer to service providers.
Off-the-shelf implementations will reduce cost and integration time. The
challenge is to ensure that a COTS product will interoperate as expected with an
existing product. NGOSS based applications should allow solutions to be rapidly
implemented through integration of off-the-shelf software components. ATCA
(Advanced Telecommunications Computing Architecture) is a modular computing
architecture from PICMG for COTS telecom hardware development.
Let us conclude the discussion by mentioning some of the key players that offer
assurance solutions. After acquiring Tivoli Systems Inc. and Micromuse, IBM
has emerged as the market leader in offering a variety of assurance solutions. HP
Openview products of Hewlett-Packard (HP) can manage and monitor a wide vari-
ety of network elements used in data, access, and transport networks. Computer
Associates has a suite of solutions for management of application servers. TTI
Telecom offers OSS service assurance solutions suited for fixed, mobile, and cable
networks. Telcordia is another OSS solutions provider worth mentioning among the
market leaders for assurance solutions.
20.4 Billing Process
[Figure: rating applied along with discounts and incentives for the service selected by the customer]
A rate is determined by the service plan selected by the customer. This rate is
applied on the service consumed to compute
the customer amount due to the service provider at the end of a billing cycle. The
billing system should allow easy management of customer accounts. A central-
ized repository that maintains a record of customer payments will be queried and
updated by the billing system.
The three main processes in billing are shown in Figure 20.11.
Billing is no longer limited to a method of collecting money from the customer for
services used. It has evolved into a process that has its own independent standards
and can be a key differentiator for a service provider. Billing is often used as a
strategic tool to get and retain business. The renewed approach to billing has led to
different billing types and models.
Some of the key terminologies used in billing that the reader has to be familiar
with are listed below (a small illustrative sketch follows the list):
◾◾ Volume-based billing: Here the bill is calculated based on the volume of data
(total size). An Internet service provider charging the customer based on the
volume of data uploaded and downloaded is an example of volume-based
billing.
◾◾ Duration-based billing: In this billing, time is the determining factor. An
Internet service provider charging the customer based on the duration of sessions
the user is connected to the Internet is an example of duration-based
billing.
◾◾ Content-based billing: This model is most common in IP billing. In this bill-
ing, the type of content offered to the customer is the determining factor. It
can be applied on services like video broadcast, multimedia services, gaming,
and music download.
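The three billing models above can be contrasted with a small illustrative computation. The rates are hypothetical and only show how the determining factor differs in each model.

```python
# Illustrative sketch (hypothetical rates): a charge under each billing model.
def volume_based(megabytes: float, rate_per_mb: float = 0.02) -> float:
    return megabytes * rate_per_mb                 # determined by data volume

def duration_based(minutes: float, rate_per_min: float = 0.05) -> float:
    return minutes * rate_per_min                  # determined by connection time

def content_based(content_type: str) -> float:
    rates = {"video": 2.50, "music": 1.00, "gaming": 1.75}
    return rates.get(content_type, 0.0)            # determined by type of content

print(volume_based(1024))      # 20.48 for 1 GB uploaded/downloaded
print(duration_based(300))     # 15.00 for 5 hours of connected sessions
print(content_based("video"))  # 2.50 per video item delivered
```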
[Figure: billing flow from the switch through the mediation device, a database, the rating engine, and the billing system]
3. After processing of the call records, the mediation device stores relevant
information to a database.
4. Information in the database is accessed by the rating engine to rate the calls.
The rating engine will apply the payment rules on the transactions. Discounts
and promotions can also be applied.
5. The information from a rating engine is sent to the billing system for sending
the invoice. Any bill related settlements can be done by the billing system.
6. Invoices prepared by the billing system are sent to the customer by post or
e-mail, or uploaded to a secure Web page. Most service providers allow the
customer to register in the service provider Web site and give them the capa-
bility to securely access and pay their bills through the Internet.
The billing module interacts with many other support system modules (see
Figure 20.13). The main interactions at the business level happen with customer
care and marketing and sales modules. At the service level, the interaction is with
an accounts database and at the network level the interaction is with the network
management module.
The billing system can generate trend reports of services that are generating more
revenue, which can be used by the marketing and sales team to design strategies to
increase business. The billing reports help in planning investments based on market
trends. The billing data is also used by the sales team to advise customers on what
could be the best service plan for them, based on trends in the specific customer
segment that the user falls in. The impact of promotions and discounts on sales of a
specific product can be computed from the billing summary report for that product.
The customer care modules will interact with billing to apply service order
adjustments and any specific discounts that need to be offered to the customer on a
need basis. For example, a customer might want to cancel a specific service before
the end of the billing cycle. In this case the number of days for which the customer
has to be billed has to be updated in the billing system based on information pro-
vided to the customer support executive. The customer can also call the support
desk for clarification on billing related queries. The customer support desk will have
access to specific (subset) billing information to service the queries. However, most
support professionals ask sufficient authentication queries to ensure that private
information is disclosed only to the intended owner of an account. Some support
professionals also request permission from the customer before the support
personnel access the account owner's billing data and, if required, modify it.
The billing system will interact with the accounting modules in NMS to get the
billing records. The billing records generated by the elements in the network are
handled by the element managers and network management systems to provide
relevant billing data to the billing system. There is an account database that the bill-
ing system will query to get information on customer account balance, credit limit,
billing rules to be applied as per SLA, tax to be added as per regulations, additional
charges for usage of specific service, and so on that are associated with generating a
final bill to the customer.
Two important modules in the billing OSS are the mediation system and the
rating system. All service providers have a mediation system that consists of several
subsystems to capture usage information from the network and service infrastruc-
ture and distribute it to upstream BSS modules like billing, settlement, and mar-
keting (see Figure 20.14). The subsystems collect billing records from the various
network infrastructures supported by the service provider and provide feed to the
central mediation system that aggregates the records, performs correlation between
records, repairs records with errors, filters the records based on predefined rules, and
maps the data to a target format required by the business support systems. Some of
the popular formats handled by the mediation systems include CDR, IPDR, and
SDR (see Figure 20.14).
Mediation functions can be classified into usage data collection and validation
functions.
OSS/BSS Functions ◾ 315
Customer
Billing Sales force Marketing
relationship
settlement automation management
management
1. Usage data collection: The usage records may be based on the number of calls,
messages sent, traffic utilization, or events. A set of usage types can be defined
based on the originating/terminating point and the service being used. In the
fixed to mobile type, the originating point is a fixed network while the
terminating point is a mobile network. Similarly there can be other types like
fixed to fixed, mobile to fixed, and mobile to mobile. When the user keeps
switching networks within a single usage record, the type is roaming. SMS and
call forwarding are grouped under the value added service type.
As part of usage data collection, the mediation system performs a set of key
activities:
a. Polling for data: The mediation system continuously looks for data on
usage records from network infrastructure. While polling is a pull based
mechanism where the mediation system checks for new record availabil-
ity and collects data, a push based mechanism can also be implemented,
where the billing agents in the network infrastructure actively send data to the
mediation system when a new record is generated with details on the usage. To
make the billing agents independent of the mediation system implementation, it
is usual practice to implement a polling mechanism for data collection, where the
responsibility to collect records and map them to a format lies with the
mediation system.
b. Consolidation of data: Once data is collected by a push or pull mechanism,
the data across network elements next needs to be consolidated. The
information sent from the mediation system to the upper business layer
modules needs to be formatted data that can be directly interpreted. So the
aggregation, correlation, and filtering of billing records take place in the
mediation system to send consolidated information to other business
modules.
c. Standard interface for interaction: Multiple business modules make use
of the information from the mediation system. Hence it is important that
the mediation system is not designed for communication with a specific module.
The interfaces for data request and response need to be standardized. These
standard interfaces are published for business modules to easily interoperate
with the mediation system. The goal here is a seamless flow of information
between the mediation system and business support systems.
d. Drop nonbillable usage: Not all information in a usage record will be useful
for billing. Only specific fields in the raw records collected from the network
elements can be utilized for billing. As part of formatting the records, the
mediation system will drop nonbillable usage information, and the unused
fields are filtered out. It should be understood that this activity is quite
different from rating. In rating, some usage may not be billed based on terms
in the SLA, but the mediation system needs to collect and send all billable
usage information for rating.
e. Support for real-time applications: Mediation systems have to support
real-time application billing. This requires immediate processing of usage
records with minimum processing time. The usage records are computed
while the service is being used and information of the usage bill has to be
generated at the end of each usage session.
f. Abstraction: One of the most important applications of a mediation
system with regard to usage data is to insulate the biller from the
network elements generating the raw data required for billing. In the
absence of a mediation module, the biller will have to work with indi-
vidual data coming from each network element to compute the usage
data. This abstraction of generating useful information for the biller
makes it easier for business support systems to work on the billing
information.
2. Validation functions: The mediation system performs multiple validation
functions on the usage records. Some of them are listed below (a minimal sketch
follows the list):
a. Duplicate checks: This involves identifying duplicate records and elimi-
nating them to avoid discrepancies in billing the customer.
b. Eliminate unwanted records: Failed and dropped calls should be dropped as
nonbillable records based on validation checks on the collected records.
c. Edit error records: Some records might contain errors that make them
noncompliant with mediation processing tasks like correlation and aggregation.
In these scenarios the mediation system will perform edits and transla-
tions to make the content in the record suitable for interpretation.
d. Unique tracking: All data after mediation validation and processing are
assigned a unique tag number to make it easier for business modules to
identify if the relevant records have been received from the mediation
system. This helps to re-transmit lost records or synchronize records with
other management modules. A table look up can be implemented for
proper tracking of records.
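Two of the validation functions above, duplicate checks and unique tracking, can be sketched together: duplicate records are dropped and every surviving record gets a tag that downstream business modules can use to detect lost records. The record fields are hypothetical.

```python
# Hypothetical sketch: duplicate elimination and unique tracking tags applied to
# usage records before they are sent to business support systems.
import itertools

_tag_counter = itertools.count(1)

def validate(records: list) -> list:
    seen, out = set(), []
    for rec in records:
        key = (rec["subscriber"], rec["start_time"], rec["usage_type"])
        if key in seen:
            continue                                 # duplicate check: drop repeats
        seen.add(key)
        rec["tracking_tag"] = next(_tag_counter)     # unique tracking for downstream BSS
        out.append(rec)
    return out

cdrs = [
    {"subscriber": "A100", "start_time": "09:00", "usage_type": "mobile-to-fixed"},
    {"subscriber": "A100", "start_time": "09:00", "usage_type": "mobile-to-fixed"},
]
print(validate(cdrs))   # one record remains, carrying tracking_tag 1
```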
[Figure: mediation system collecting records from the network and distributing processed data, with customer invoices delivered to customers]
Mediation systems also face several challenges:
◾◾ Data collection: With the huge number of usage records generated across the
network infrastructure, the mediation system should have a mechanism to collect
all the records and process them in the shortest possible time.
◾◾ Data correlation: The correlation between records to send consolidated
information is also a challenging activity, considering the wide variety of
services being offered and the network infrastructure deployed by service
providers in the current telecom industry.
There are several billing systems that take feed from the mediation system as
depicted in the figure on mediation data usage. Data from the mediation systems
is used to compute the network usage charges, interconnect charges, roaming
charges, and other value added service charges. The data is also archived in a
storage system for future reference or for processing by business support solutions like
planning, which try to identify usage statistics based on aggregated information in
historical records.
Next we will discuss the rating engine, whose role is to apply pricing rules to a
given transaction and route the rated transaction to the appropriate billing or
settlement system. Rating takes input from a predefined rules table and applies it
on usage information provided by a mediation system. The charge computed by the
rating engine is not just based on usage. The final charge has various rating com-
ponents taken from modules like the order management, marketing management,
and SLA management system.
The main activities performed in rating are:
◾◾ Applying rating rules on the usage information for each of the customers.
◾◾ Applying any credits for outage.
◾◾ Applying discounts that are agreed upon as part of the customer order.
◾◾ Applying any promotions or offers on the charge.
◾◾ Applying credits for breach of terms in the SLA.
◾◾ Resolving unidentified and zero billed usage scenarios.
◾◾ Adding/reducing the charge based on special plans selected by the customer.
In rating, first the values of the attributes that are used for rating are identified
from the formatted records forwarded by the mediation system (see Figure 20.16).
Some of the attributes that affect rating are:
◾◾ Connection date and time: There could be special rates imposed on specific
dates. For example, the service provider might opt to give a promotional
discount to all subscribers on Thanksgiving Day. The discount could also be
associated with a time window (like 6 AM–6 PM) on the specific date.
◾◾ Time of day (TOD): Service usage has peak hours and off-peak hours in a day.
For example, the number of users using the Internet is higher during office
hours. To make effective utilization of resources, service providers offer a lower
rate for service usage during off-peak hours.
[Figure: rating engine working on formatted records from the mediation system, which collects usage data from the network infrastructure]
◾◾ Type of content: Certain content has a higher demand, for which users are
willing to pay a premium. This leads to different rates based on the type of
content.
◾◾ Jurisdiction: Rates can vary based on geography. The regional rate plan will
be different from the national rate plan, which would be different from the
international rate plan. The state level taxes, rules, and regulations can also
impact rating.
◾◾ Volume of data: Rating could also be based on volume of data. That is the rate
would be applied on per packet or per byte basis. This could be extended to
specifying usage limits and incremental rates based on usage volume.
After the values of the attributes to be rated are identified, the next step in rating
is to look up the rate table to apply the rate against the value of each attribute. The
rate tables corresponding to different rate plans are identified using a unique rate
table ID. Once the attribute value and rate are identified, the event charge is
computed. From the event charge, the final charge is then calculated by applying
discounts, taxes, promotions, and other parameters that impact the final bill to the
customer but may not be directly linked to usage.
Let us now take a specific example of how a call is rated.
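The following sketch walks through such a rating step under stated assumptions: a hypothetical rate table keyed by plan, a peak/off-peak time-of-day rule, and a flat discount. It is an illustration of the sequence described above (identify attributes, look up the rate table, compute the event charge, apply adjustments), not the book's reference steps.

```python
# Hypothetical sketch: rating one call record. Rate tables, plan names, and
# discount values are illustrative only.
RATE_TABLES = {
    "PLAN-NATIONAL": {"peak": 0.10, "off_peak": 0.04},   # price per minute
}

def rate_call(record: dict, plan_id: str, discount_pct: float = 0.0) -> float:
    table = RATE_TABLES[plan_id]                          # look up the plan's rate table
    tod = "peak" if 8 <= record["start_hour"] < 20 else "off_peak"
    event_charge = record["duration_min"] * table[tod]    # event charge from usage
    return round(event_charge * (1 - discount_pct), 2)    # apply non-usage adjustments

call = {"start_hour": 14, "duration_min": 12}
print(rate_call(call, "PLAN-NATIONAL", discount_pct=0.10))   # 12 * 0.10 * 0.9 = 1.08
```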
In short, rating mainly loads the transactions, does a lookup on external data,
then applies the business rules and routes the rated transactions to accounting for
bill settlement. Figure 20.17 shows the interaction of other OSS modules with the
rating engine. Some of the key players developing stand-alone rating solutions
include RateIntegration, OpenNet, Am-Beo, Boldworks, Highdeal, and Redknee.
There is more competition in mediation space, where some of the key players are
Ventraq (combined companies of ACE*COMM, TeleSciences, and 10e Solutions),
Comptel, HP, Ericsson, and Intec.
[Figure 20.17: rating engine interacting with CRM, accounting, partners, provisioning, mediation, fraud management, planning, and archival modules, exchanging account information for settlement, rating plans, content charges, historical usage, and usage records]
[Figure: billing system performing bill calculation, charge collection, and invoicing, handling recurring and rollup charges, discounts and offers, and customer payments, and interacting with the rating engine, archival system, customer relationship management, and SLA management system]
20.5 Conclusion
OSS deals with the range of operations in a service provider space that is critical to
the operator’s success. Fulfillment, assurance, and billing (FAB) systems make up
the major modules in operations and business support systems that are used by the
service provider on a regular basis for seamless business continuity. A closer look
into FAB reveals several core modules like order management, network manage-
ment, and mediation system that interoperate to fulfill a customer order, assure
service continuity, and bill the customer for the ordered service.
The order management systems allow operators to perform the process of con-
figuring the network to deliver the services ordered by a customer. Activating ser-
vices on the allocated resource infrastructure is performed by the operator using
provisioning systems. The allocation of resource and tracking the assets itself is
performed by the operator using an inventory management system. After activa-
tion, the task of monitoring the network to detect and isolate network failures and
quickly restore services is performed using a network management system. A ser-
vice management module works above the network manager to correlate network
performance with service quality requirements, and appropriately communicate
and compensate customers for any disruption. Finally, mediation systems allow
operators to determine service usage. This information is then utilized by a rating
system that allows operators to determine how much to bill based on the service
usage for a particular customer.
Interaction between support system modules can be made simpler when service
oriented architecture (SOA) is used, where the modules are developed as services
with defined interfaces for interaction. Integration activity can be made easier by
deploying an enterprise application integration (EAI) layer. The EAI allows
operators to quickly integrate diverse support systems and network elements. Electronic
integration of support systems is required for an automated back-office in large-scale
operations. This integration layer creates an open data exchange platform optimized
for heterogeneous interprocess communication. The EAI is not limited to OSS
and can be used to support communication between software systems in other
domains also. Some of the key players in EAI development are Tibco, BEA Systems
(acquired by Oracle), and Vitria.
Additional Reading
1. Kundan Misra. OSS for Telecom Networks: An Introduction to Network Management.
New York: Springer, 2004.
2. Dominik Kuropka, Peter Troger, Steffen Staab, and Mathias Weske, eds. Semantic
Service Provisioning. New York: Springer, 2008.
3. Christian Gronroos. Service Management and Marketing: Customer Management in
Service Competition. 3rd ed. New York: John Wiley & Sons, 2007.
4. Benoit Claise and Ralf Wolter. Network Management: Accounting and Performance
Strategies (Networking Technology). Indianapolis, IN: Cisco Press, 2007.
5. Kornel Terplan. OSS Essentials: Support System Solutions for Service Providers. New York:
John Wiley & Sons, 2001.
Chapter 21
NGOSS
21.1 Introduction
Let us start by discussing why TMF introduced NGOSS. The telecom industry
was changing rapidly both from technical and business perspectives. On the tech-
nical front, the focus shifted from simple voice service to data services, which was
required on an IP-based platform as against the previous circuit platform. Also,
the focus shifted from fixed to wireless networks and then to the convergence of
both, termed FMC (fixed mobile convergence). On the business front,
prices were falling due to competition and there was a demand for higher return
on investments. This new landscape brought in different needs for each of the tele-
com players. The suppliers wanted to have standard interfaces for interaction to
reduce the integration costs and ensure that there is a higher chance for operators
to buy a product that will easily integrate and interoperate with other products. The
operators wanted to reduce service life cycle and achieve minimal time to market by
reducing integration time. Another important need of the operator was to reduce
the operating cost and spending on OSS.
To meet the needs of the key players, the OSS was expected to offer rapid service
deployment, customer self-care strategies, flexible real-time billing models, and
support for the multiservice, multitechnology, multivendor environment. The business
models and systems available were not able to scale up to these new requirements.
The suggested solution was to change the approach toward management solution
development in a way that would ensure that OSS solutions could meet these needs.
21.2 NGOSS Overview
Service providers following legacy OSS processes and solutions have considerably
high operational costs. The lack of automated flow-through processes has resulted
in higher manpower costs. Time to market is significantly high in these industries
due to rigid and inflexible business processes. Legacy systems do not scale with
changes in requirements, leading to higher costs for enhancements and integration
to support changes in technology. The impact of having legacy processes
and rigid systems becomes evident in a multiservice, multitechnology, and multi-
vendor environment where adaptability and interoperability are key concerns, and
mergers and acquisitions are quite common. These can also result in poor customer
service because of poorly integrated systems with inconsistent data.
The NGOSS approach to this problem is to adopt the lean operator model
in telecom industry, which has already proved its effectiveness in automobile,
banking, and retail industries. The main attributes of a lean operator are reduced
operational costs and a flexible infrastructure. The reduced cost of operations in
a lean operator model is achieved with high levels of automation. Seamless flow
of information across modules is a major requirement of the lean model, which
can be achieved with a standard information model and well-defined interfaces for
interaction. The seamless flow of data also ensures that data is consistent, leading to
better customer service. Customer self-care is also suggested to bring down
operational costs, reducing the need for more customer desk executives.
Commercial off-the-shelf products, which have already been a valuable tool for ease
of integration, are another factor to be looked into for reducing operational costs.
When the business and technical requirements change, the infrastructure should be
flexible enough to adopt the changes. The faster the resources can adapt to a change
in requirements, the more rapid the time to market will be. This means a significant
reduction in the service development and delivery time. The adoption of lean
practices should be done without compromising service quality.
The OSS architecture has migrated over the years from legacy to loosely coupled
and finally to a distributed model (see Figure 21.1). Legacy systems were inflexible
to change and were poorly integrated. There was minimal or no distribution of
data. This changed in a loosely coupled system where services did interoperate but
multiple databases and user interfaces existed. The NGOSS approach is for
a distributed environment as advocated in SOA, where interfaces are published
and there is a seamless flow of information between modules, which are policy
enabled.
21.3 NGOSS Lifecycle
There are a number of techniques for developing and managing software projects.
Most of them deal with either the management of software development projects
or methods for developing specific software implementations including testing and
maintenance. Some of the most popular frameworks currently used are the Zachman Framework, the Reference Model of Open Distributed Processing (RM-ODP), Model Driven Architecture (MDA), and the Unified Software Development Process (USDP). Most of these techniques look into the complete life cycle of software development. The NGOSS lifecycle draws on concepts from these frameworks:
◾◾ Model-based development, as in MDA. A set of metamodels is used for developing business and technology artifacts. This way changes can be tracked and easily updated.
◾◾ Separate emphasis on the enterprise as a business, as in the Zachman framework.
◾◾ Support for a distributed framework, as in RM-ODP.
◾◾ A use-case-based iterative approach, as used in USDP.
The NGOSS lifecycle is built on two sets of views. The first set consists of a physical or technology specific view and a logical or technology neutral view. The technology specific view handles implementation and deployment, and the technology neutral view handles requirements and design. When a problem is identified, the requirements that are captured are not specific to a technology. NGOSS suggests that the design definition also be independent of the technology to facilitate multiple equivalent instantiations from a single solution definition. In the implementation and deployment phase a specific technology is adopted and used.
The second set consists of a service developer view and a service provider view.
While the service provider identifies the business issues and finally deploys the
solution, the design and implementation of the solution is handled by the service
developer. It can be seen that both the service provider and the service developer have interactions with the logical and physical views of the NGOSS lifecycle.
The overlap of the service provider view with the logical view happens in the business block, where the business problems are identified. All the activities of the service provider for business definition, including process definition and preparation of artifacts for the service developer to start work on the design, happen in the business block. The overlap of the service developer view with the logical view happens in the system block, where the technology neutral design is defined. All the activities of the service developer for system design, including preparation of artifacts to start work on the implementation, happen in the system block. The overlap of the service developer view with the physical view happens in the implementation block, where the technology specific implementation occurs. All the activities of the service developer for implementing
the solution as per the definitions in system design happen in the implementation
block.
The overlap of the service provider view with the physical view happens in the deploy-
ment or run time block where the technology specific deployment is performed.
This block is expected to capture all the activities of the service provider for deploy-
ing the solution as per the definitions in business, system, and implementation
block. Though the model appears to be hierarchical in passing through the stages of
defining, designing, implementing, and deploying one after the other, it is possible
to move into any one of the intermediate stages and then work on the other stages.
This means that there is traceability between blocks and at any stage a telecom
player can start aligning with NGOSS. The overlap of views then creates four quad-
rants each corresponding to a block in the telecom life cycle (see Figure 21.2).
There are many artifacts that are generated as part of the activities in the various
blocks (the blocks are also referred to as views). The business view will have artifacts
on business requirements, process definition, and policies. In the system view, as part of modeling the solution, there will be artifacts on the system design, system contracts, system capabilities, and process flows represented as information models. In the implementation view the system design is validated and implemented
on specific technologies that will generate artifacts on implementation contracts,
class instance diagrams, and implementation data models. The deployment view
has artifacts on contract instances, run-time specifications, and technology specific
guidelines. These make up the NGOSS knowledge base.
The NGOSS lifecycle knowledge base, which is a combination of information from
various views, offers end-to-end traceability. It can be considered as a central reposi-
tory of information. It contains three categories of information:
1. Scope: In this phase the area that needs to get aligned with NGOSS is
identified. There are multiple processes spread across customer, service, and
network management. A modular approach is required to make changes to
(Figure: Scope, Analyze, Normalize, Rationalize, and Rectify phases.)
to procure resources and to interface with other service providers. The complex
network of business relationships between service providers, partners, and other
third-party vendors is also accounted for in the model.
eTOM provides a holistic view of the telecom business process framework.
The normal operations process consisting of fulfillment, assurance, and billing is
differentiated from the strategy and life cycle processes. There are three main process areas in eTOM: strategy, infrastructure, and product management; operations; and enterprise management. Strategy, infrastructure, and product (SIP) management covers planning and life cycle management of the product. Operations covers the core of operational management, and enterprise management covers corporate or business support management.
The real-time activities in operations are classified under fulfillment, assurance,
and billing (FAB). The planning to ensure smooth flow of operational activities
is captured as part of the operation support and readiness. This is separated from
FAB in the eTOM model. In order to support operations for a product, there needs
to be a planning team working on product strategy and processes for life cycle
management. This is the strategy component in SIP. The processes that take care of the infrastructure required for the product, or that directly relate to the product under consideration, make up the infrastructure part of SIP. Finally, processes for planning the phases in the development of the product make up the product group of SIP.
Telecom companies would have a set of processes common to an IT company
like human resource management, knowledge management, and so on. To repre-
sent this and to keep the definition of a telecom process separate from an enterprise
process, a separate enterprise management process group is created in eTOM. This
grouping encompasses all business management processes necessary to support the
rest of the enterprise. This area sets corporate strategies and directions and provides
guidelines and targets for the rest of the business.
While OFAB (operation support and readiness, fulfillment, assurance and
billing) and SIP make up the verticals in eTOM, there is a set of functional process
structures that form the horizontals in eTOM. The top layer of the horizontal
process group is market, product, and customer processes dealing with sales and
channel management, marketing management, product and offer management,
and operational processes such as managing the customer interface, ordering, prob-
lem handling, SLA management, and billing.
Below the customer layer is the process layer for service related processes. The
process grouping of service processes deals with service development, delivery of
service capability, service configuration, service problem management, quality anal-
ysis, and rating. Below the service layer is the layer for resource related processes.
The process grouping of resource processes deals with development and delivery of
a resource, resource provisioning, trouble management, and performance manage-
ment. The next layer is to handle interaction with supplier and partners. The pro-
cess grouping of supplier/partner processes deals with the enterprise’s interaction
with its suppliers and partners. It involves processes that develop and manage the
supply chain, as well as those that support the operational interface with suppliers
and partners.
The external entities with which the processes interact are also represented in the eTOM model.
21.6 SID
The NGOSS SID model provides a set of information/data definitions and rela-
tionships for telecom OSS. SID uses UML to define entities and the relationships
between them. The benefit of using the NGOSS SID and its common information language is that it offers business benefits by reducing cost, improving quality, shortening delivery times, and increasing the adaptability of enterprise operations. Use of SID allows an enterprise to focus on value creation for its customers. Even the
attributes and processes that make up the entity or object are modeled with UML.
SID provides a knowledge base that is used to describe the behavior and structure
of business entities as well as their collaborations and interactions.
SID acts as an information/data reference model for both business as well as
systems. SID provides business-oriented UML class models, as well as design-
oriented UML class models and sequence diagrams to provide a common view
of the information and data. Mapping of information/data for business concepts
and their characteristics and relationships can be achieved when SID is used in
combination with the eTOM business process. SID can be used to create use cases and contracts for the four views of the NGOSS lifecycle: business, system, implementation, and deployment.
SID has both a business view and a system view. The SID framework is split
into a set of domains. The business view domains include market/sales, product,
customer, service, resource, supplier/partner, enterprise, and common business.
Within each domain, partitioning of information is achieved using aggregate business entities. Use of SID enables the business processes to be further refined, so that the contractual interfaces that represent the various business process boundaries can be clearly identified and modeled.
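Since SID captures these definitions as UML class models, developers typically realize the entities as classes in code. The short sketch below is only an illustration of that idea, not an extract from the SID specification; the class and attribute names (Customer, CustomerAccount, accountType) are assumptions chosen to show how an aggregate business entity in the customer domain and its composition relationship might look once mapped from UML to an implementation language.

    // Illustrative sketch only; not taken from the SID specification.
    import java.util.ArrayList;
    import java.util.List;

    class CustomerAccount {
        private final String accountId;
        private final String accountType;   // for example, "billing" or "prepaid"

        CustomerAccount(String accountId, String accountType) {
            this.accountId = accountId;
            this.accountType = accountType;
        }

        String getAccountId() { return accountId; }
        String getAccountType() { return accountType; }
    }

    class Customer {
        private final String customerId;
        // Aggregation from the UML model: a customer owns zero or more accounts.
        private final List<CustomerAccount> accounts = new ArrayList<>();

        Customer(String customerId) { this.customerId = customerId; }

        void addAccount(CustomerAccount account) { accounts.add(account); }

        String getCustomerId() { return customerId; }
        List<CustomerAccount> getAccounts() { return accounts; }
    }

Because every application that handles customers would share this single definition, the interoperability benefits described above follow directly from the common model.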
21.7 TNA
TNA offers three types of capabilities, as shown in the high level view (see Figure 21.4).
The high level view is supported by a number of artifacts that make up technology-
neutral architecture. TNA artifacts are either core artifacts or container artifacts.
Core artifacts are the basic units of composition in the architecture and container
artifacts are the artifacts that encapsulate or package core artifacts.
The TNA is built on a set of concepts.
21.8 TAM
The telecom applications map is a reference framework on the set of applications
that enable and automate operational processes within a telecom operator. TAM
is mainly intended to address the need for a common language for information
exchange between operators and system suppliers in developing OSS applications.
Having a common map that standardizes components for the procurement process is expected to drive license prices down. Having a standard application model will
ensure that investment risks and costs are reduced by building or procuring systems
technology that aligns with the rest of the industry using a well-defined, long-term architecture.
The different layers used for grouping applications in TAM are similar to those
used in the eTOM model. This ensures close mapping between business process
(eTOM), information model (SID), and applications (TAM).
21.9 Conclusion
A layered approach is required to ensure focus on various aspects of the telecom
domain. This is achieved in NGOSS, where eTOM, SID, and TAM have defined
telecom under a set of domains and functional areas to ensure sufficient depth of
focus. In a converged environment multiple services have to interoperate in a mul-
titechnology, multivendor environment. NGOSS offers a common framework of
methodologies and models to support a converged service environment. An impor-
tant requirement on next generation OSS is the presence of customer self-care strat-
egies and integrated customer-centric management. Customer experience will be a
key differentiating factor between service providers and will determine profitability.
This is one of the key focus items in NGOSS and TMF has dedicated teams work-
ing on this area to improve customer experience with NGOSS.
Fast creation and deployment of services would include service activation
on demand and rapid personalization of service. NGOSS is working on service
life cycle management with SOA, which would make service introduction and
management easy, resulting in rapid roll in/out of services. The next generation
OSS/BSS solution should be able to monitor quality of service (QoS) and quality
of experience (QoE) and ensure that customer service-level agreements are met.
NGOSS addresses the end to end SLA and QoS management of next generation
support systems.
NGOSS addresses the issue of standard information model and interfaces for
interaction. Data used by different services need to follow a common format as
defined by SID for proper interoperability between services. The UML in SID helps
to build the shared information and data paradigm required for the telecom industry.
NGOSS interfaces ensure ease of integration and better interoperability. A service
provider might use OSS solutions from different vendors. For example, the provi-
sioning system may be from one vendor while the inventory management solution
may be from some other vendor. To enable flow through provisioning in service
provider space, these solutions from different vendors need to integrate seamlessly.
This seamless integration can be achieved if both the vendors adhere to NGOSS
methodologies and models. In short, NGOSS provides the necessary framework to
develop next generation operation support systems.
Additional Reading
1. www.tmforum.org
Latest release of these documents can be downloaded from the Web site:
a) NGOSS Lifecycle Methodology Suite Release
b) GB921: eTOM Solution Suite
c) GB929: Telecom Applications Map Guidebook
d) TMF053: Technology Neutral Architecture
e) GB922 & GB926: SID Solution Suite.
2. Martin J. Creaner and John P. Reilly. NGOSS Distilled: The Essential Guide to Next
Generation Telecoms Management. Cambridge: The Lean Corporation, 2005.
Chapter 22
Telecom Processes
This chapter is about telecom process models. The influence of IT models on telecom process models is discussed in detail. The main models that have been used in this chapter for reference are eTOM and ITIL. A sample business flow and how it can be aligned with a standard telecom model is also taken up in this chapter. The chapter is intended to provide the reader with an overview of how different models can be applied together for effective business process management.
22.1 Introduction
There are multiple telecom and enterprise specific business process models that are
used in a telecom company for effective business management. Some of the most
popular ones include eTOM (enhanced Telecom Operations Map) for telecom processes, ITIL (Information Technology Infrastructure Library) for IT processes, and quality certifications like ISO/IEC 20000 and CMMI. The usage of eTOM and ITIL used to be in two different domains, where telecom companies continued using eTOM and IT companies used ITIL. The change in the telecom space brought about by convergence and flat-world concepts has resulted in compelling IT-enabled services. Most telecom service providers then started outsourcing IT business activ-
ities in the telecom space to external IT service providers. Many IT companies play
a significant role in current telecom industry. This is because the IT infrastructure
is the heart of every business.
The increasing participation of IT companies in the telecom industry has brought confusion about which process models need to be used to manage the telecom business. The IT Infrastructure Library (ITIL), developed by the Central Computer and
Telecommunications Agency, has been adopted by many companies to manage their IT services.
22.2 eTOM
The eTOM is a business process framework for development and management of
processes in a telecom service provider space. It provides this guidance by defining
the key elements in business process and how they interact. The important aspect
of eTOM is a layered approach that gives enough focus to the different functional
areas in a telecom environment. As discussed in the previous chapter, eTOM is part
of the NGOSS program developed by TeleManagement Forum (TMF). It is a suc-
cessor of the TOM model. While TOM is limited to operational processes, eTOM
added strategy planning, product life cycle, infrastructure, and enterprise processes
in the telecom business process definition.
eTOM has a set of process categories across various functional areas like cus-
tomer, service, and resource management. This makes it a complete enterprise pro-
cess framework for the ICT industry. eTOM is used by all players in the telecom
space including service providers, suppliers, and system integrators. Based on the
holistic telecom vision in eTOM an end-to-end automation of information and
communications services can be achieved. eTOM has been adopted as ITU-T
(Telecommunication Standardization Sector of International Telecommunication
Union) Recommendation M.3050 making it an international standard.
Though eTOM provides a business process framework for service providers to streamline their end to end processes, it should be kept in mind that eTOM is a framework rather than an out-of-the-box implementation, and its adoption differs from provider to provider.
22.3 ITIL
The ITIL is a public framework describing best practices for IT service manage-
ment. ITIL facilitates continuous measurement and improvement of the business
process thereby improving the quality of IT service. ITIL was introduced due to a
number of compelling factors in the IT industry.
The ITIL processes represent flows in a number of key operational areas. The main
emphasis is on the stages in service life cycle, namely:
◾◾ Service strategy: IT planning is the main activity in service strategy. The type
of service to be rolled out, the target customer segment, and the resource to
be allocated is part of the strategy. Market research becomes a key ingredient
in service strategy where the market competition for the service and plans to
create visibility and value for the service is formulated.
◾◾ Service design: This stage in the service life cycle is to design services that
meet business goals and design processes that support service life cycle.
Design measurement metrics and methods are also identified in this stage
with the overall intent to improve the quality of service.
◾◾ Service transition: This stage of the service life cycle is to handle changes. It
could be a change in policies, modification of design, or any other change in the IT enterprise. The reuse of existing processes and the knowledge transfer associated with the change are also covered as part of service transition.
◾◾ Service operation: This stage is associated with management of applications,
technology, and infrastructure to support delivery of services. Operational
processes, fulfillment of service, and problem management are part of the
service operation stage. The key IT operations management is covered under
service operation.
◾◾ Continual service improvement (CSI): This stage deals with providing value add to the customer by offering good service quality and continued improvement, achieved through maturity in the service life cycle and IT processes. The prob-
lem with most IT service companies is that improvement is looked into only
when there is considerable impact on performance or when a failure occurs.
The presence of CSI ensures that improvement is a part of the service life cycle
and not a mitigation plan when an issue arises. Improvement will require
continuous measurement of data, analysis of measured data, and implemen-
tation of corrective actions.
The service delivery management is defined in a complete manner giving the pro-
cesses, roles, and activities performed. The interactions between processes are also
defined in ITIL. Similar to eTOM, ITIL is also a framework and hence its imple-
mentation differs from company to company. However, ITIL is mainly focused on
internal IT customers without giving complete consideration to all the players in the industry as in eTOM. ITIL is not a standard, but standards like ISO/IEC 20000 are based on ITIL. So organizations are usually assessed against ISO/IEC 20000 to check
their capability in IT service management; direct compliance or certification against ITIL itself is not available.
◾◾ Customer: This is the topmost layer of the ICT industry. This corresponds
to the end consumer that uses the products and services offered by the ICT
industry. This is shown as a separate layer due to its importance in ICT
industry where multiple standards and certification techniques are available
to measure the effectiveness of interaction with customers. The customer is
an entity that interacts with the ICT and the customer layer incorporates the
activities involving the customer and the ICT industry.
◾◾ Product and services: The end offering of an ICT industry can be a product
or a service. While product is the common terminology used for tangible
offering of the provider, service is the terminology used for nontangible offer-
ings. To cite an example, online/mobile news is a content-based service while
a software game to be played on a mobile is a product offering.
◾◾ Organization: This is the glue layer between the service provider offering and
the backend business process that makes the smooth delivery of product or
service possible. The employees and their capabilities make up the organiza-
tion of the ICT industry. The organization is effective in setting up business
goals and the business process is intended to provide a framework to achieve
these goals. Customer interactions take place with the organization layer and
the interaction is on topics in the product/service layer.
◾◾ Business process framework: This is the most important layer in context with
the discussion in this chapter. It covers the business processes used in the
ICT industry. These include both telecom and IT business processes used in
the ICT industry. The eTOM model comes in this layer and defines the E2E
telecom business process framework.
◾◾ Enterprise tools: This layer has the management applications and tools that
are internal to the enterprise. It uses the data coming from telecom and IT to
effectively manage the enterprise. This is a glue layer that brings in automa-
tion and links the infrastructure with the business process.
◾◾ Enterprise data model: Data from the resources is expected to comply with
the enterprise data model. Similar to multiple process models like eTOM
and ITIL, there are multiple data models. For telecom companies the shared
information and data (SID) is the most popular data model and for IT com-
panies the common information model (CIM) is most popular. So enterprise
data model can encompass multiple data models.
◾◾ Infrastructure: Enterprise infrastructure consists of communication tech-
nology infrastructure and information technology infrastructure. This is
the lowest layer consisting of the infrastructure used for product and service
development.
and align with, and to comply with requirements of next generation operational
standards.
One of the main goals of harmonization between the business models is to
meet customer-driven requirements. Though the customer is shown as the topmost
layer consuming the products and services, the customer also interacts with the
enterprise through the organization layer and at times the customer can directly
interact with the business process framework layer. Interaction directly with pro-
cess is achieved using customer self-service where customer interactions become a
part of the business process.
Both models increase operational efficiency and foster effective productivity.
They can result in considerable cost reduction without negatively impacting the
quality of service. Implementing eTOM telecom process that incorporates ITIL
best practices will result in considerable improvement of service quality. The
improvements in quality are mainly attributed to the presence of a specific continu-
ous service improvement stage in the service life cycle defined by ITIL.
To summarize this section, both ITIL and eTOM can be used in conjunction
in an enterprise for effective management of services. The effective method of using
both is to keep eTOM to define the telecom process and ITIL for incorporating
best practices. So the steps in preparing a business process that uses both eTOM
and ITIL will start with preparation of a process flow. Make it compliant with
eTOM. Then filter and merge the process with ITIL best practices. Verify the new
model and ensure it still complies with eTOM.
TR143: “ITIL and eTOM—Building Bridges” document from TeleManagement
Forum has many recommendations and guidelines on using eTOM with ITIL. The
document proposes extensions to eTOM framework to incorporate ITIL process
and best practices. Convergence is bringing in a lot of changes in process and infor-
mation models to ensure that a single model can be used across multiple domains.
1. Service design: This process can be further subdivided into the following
processes.
a. Service catalogue management: This process involves managing the
details of all operational services and services that are planned for deploy-
ment. Some information from the catalogue is published to customers
and used to support sales and delivery of services.
b. Service level management: This process mainly deals with agreements and
contracts. It manages service level agreements, operational level agree-
ments, and service contracts. Service and operational level agreements
are made with the customer who consumes the product and services, and contracts are usually established with suppliers and vendor partners.
c. Capacity management: This process is expected to ensure that the
expected service levels are achieved using the available services and infra-
structure. It involves managing the capacity of all resources to meet busi-
ness requirements.
d. Availability management: This process is expected to plan and improve all
aspects of the availability of IT services. For improving availability, first
the resource availability is measured. Based on information from measur-
ing, an analysis is performed and an appropriate plan of action is defined.
e. IT service continuity management: This process is intended to ensure
business continuity. It looks into risks that can affect delivery of service and develops plans for mitigating the identified risks. Disaster recovery
planning is a key activity performed as part of the IT service continuity
management.
f. Information security management: This process is expected to ensure that
a secure framework is available for delivery of services. The security concerns
are not just restricted to the services offered. Ensuring security is more an
organizationwide activity.
2. Service transition: This process can be further subdivided into the following
processes.
a. Service asset and configuration management: This process deals with
managing the assets and their configuration to offer a service. The rela-
tionship between the assets is also maintained for effective management.
b. Change management: This process deals with managing changes in an
enterprise in a constructive manner that ensures continuity of services.
Change can be associated with organizational changes, process changes, or changes in the system caused by changes in technology.
c. Release and deployment management: The product or service after devel-
opment is released either to a test or to a live environment. This release activity needs to be planned and scheduled in advance for proper delivery of the product or service in time to meet market demands.
3. Service operation: This process can be further subdivided into the following
processes.
a. Event management: In this process, events are monitored, filtered,
grouped, and handled. The action to be taken is set based on the type of event. This is one of the most important processes in service operations. It enables quick identification and resolution of issues. Events
can be a key input to check if there is any service level agreement (SLA)
jeopardy to avoid a possible SLA failure.
There are many more processes in ITIL. However, in this section the processes
covered are the ones that can be incorporated to work with eTOM.
details of the customer account and collect all possible information to describe the
trouble condition to a sufficient level of detail and create a trouble ticket for the
issue. This ticket number is given to the customer, so that the customer can call up
and check the status of the issue resolution easily by quoting the ticket number to
the customer support desk.
From the account details and the problem description of the service having the issue, the ticket is then assigned to the service support desk. If the service problem is
caused by some provisioning or service configuration fault, the service support desk
will reprovision or reconfigure the service to bring it back to service. In the specific
problem scenario being discussed, the problem is with vendor software running
on one of the resources. Upon analysis conducted by the service support desk, the
resource that is causing the issue is isolated and the trouble ticket is assigned to
the resource support desk sitting at the network operations center (NOC) owning
the resource that has been identified as the cause of the problem.
The identification of the resource that caused problems in a service is done by
analyzing event records generated by the resources. It is expected that the devel-
oper creates enough logs to troubleshoot and fix an issue. The resource team will
then analyze the issue and isolate the cause of the problem. If the problem is
caused by a module developed by the service provider, then the ticket is assigned
to the team that developed the module to fix the issue. On the other hand (as
in our example) if the problem is caused by a module developed by an external
vendor, then the trouble ticket is assigned to the vendor trouble submission
queue.
When the vendor is notified about trouble in their product, based on the sup-
port level offered, the vendor will work with the service provider to resolve the
issue. Once the issue is fixed, the information is communicated to the service pro-
vider. The service provider will update the status of the trouble ticket as closed in
the ticket database. This activity will generate an event to the customer support
desk, so that they call up and inform the customer that the issue has been resolved.
The customer can call the customer support desk not just to enquire the status
of the trouble ticket but also to close the ticket if the issue was temporary or was
caused due to some issue that the customer was able to fix as part of self-service.
It is therefore important that all the processes that update the ticket status in the database first query the database and check the status before making any
updates. In addition to ticket status and problem description, the trouble ticket
will also contain a history of activities performed to resolve the trouble ticket,
dependencies on other trouble tickets that need to be resolved first, the person who
is currently working on the ticket, and so on.
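The guarded update described above can be illustrated with a small sketch. The class, status values, and storage mechanism below are illustrative assumptions rather than part of any particular trouble ticketing product; the point is simply that a status update first reads the current state so that a ticket already closed, for example through customer self-service, is not modified again.

    // Illustrative sketch only; class and status names are assumptions.
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    class TroubleTicketStore {
        enum Status { OPEN, ASSIGNED, RESOLVED, CLOSED }

        private final Map<String, Status> ticketStatus = new ConcurrentHashMap<>();

        void create(String ticketId) {
            ticketStatus.put(ticketId, Status.OPEN);
        }

        // Returns true only if the update was applied.
        synchronized boolean updateStatus(String ticketId, Status newStatus) {
            Status current = ticketStatus.get(ticketId);   // query before update
            if (current == null || current == Status.CLOSED) {
                // Unknown ticket, or already closed (for example by the customer
                // through self-service): leave the final state untouched.
                return false;
            }
            ticketStatus.put(ticketId, newStatus);
            return true;
        }

        Status statusOf(String ticketId) {
            return ticketStatus.get(ticketId);
        }
    }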
Now let us look into the flaws in this business process for trouble management.
Though the explanation gives a clear idea of the flow, the process by itself does not
give details of the entity responsible for a specific block. For example, it is not clear
if analysis of the service problem is performed by the customer support desk as an
initial activity or by the service support team. Another flaw is that several processes
(Figure: mapping of the trouble management flow to eTOM process elements: problem handling under customer relationship management, service problem management under service management and operations, and resource trouble management under resource management and operations.)
22.8 Conclusion
Let us conclude the discussion with the three-step methodology from TeleManagement Forum to apply eTOM in conjunction with ITIL. The first step is to ensure that a best practice specification using ITIL exists with regard to the processes in the eTOM model. The second step is to identify the enterprise area in an application where eTOM needs to be applied in conjunction with ITIL. The third and final step involves refining the identified area using eTOM process elements on the ITIL process steps.
While there are several books and white papers on eTOM and ITIL, the intent
of this chapter is to introduce the reader to the most popular models. A simple
process diagram is also explained along with guidelines on how these models can
be used together to achieve business goals.
Additional Reading
1. www.tmforum.org
Latest release of these documents can be downloaded from the Web site:
a) GB921: eTOM Solution Suite
b) TR143: Building Bridges: ITIL and eTOM.
2. Randy A. Steinberg. Servicing ITIL: A Handbook of IT Services for ITIL Managers and
Practitioners. Indiana: Trafford Publishing, 2007.
3. John P. Reilly and Mike Kelly. The eTOM: A Business Process Framework Implementer’s
Guide. New Jersey: Casewise and TM Forum, 2009.
Chapter 23
Management Applications
23.1 Introduction
There are a wide variety of telecom management applications currently available
on the market. Most telecom application developers concentrate on building a few
management solutions. It can be seen that the top player in inventory management
may not be the top player in the development of billing management applications.
In most scenarios a vendor specializing in development of inventory management
may not even have billing or customer relationship management applications in its
portfolio of products.
In general, management applications can be classified under the following
heads:
◾◾ Business management applications: These are the applications that deal with
business for the service offered to the customer. Some of the applications
in this category are billing management, fraud management, order manage-
ment, sales management, and so on.
◾◾ Service management applications: These relate to the services offered to the
customer and deal with applications to manage the service. Some of the
applications that fall in this category are service quality management, service
provisioning, service performance management, and so on.
◾◾ Resource management applications: This category has applications to man-
age the resources in the telecom space. Some of the applications that fall in
this category are resource provisioning, resource specification management,
resource testing, resource monitoring, and so on.
◾◾ Enterprise management applications: Every Telco has a set of enterprise appli-
cations to manage the enterprise. Some of the applications that fall in this
category are knowledge management, human resource management, finance
management, and so on.
◾◾ Others: While most applications fall under one of the categories listed above,
there are some applications like integration tools, process management, sup-
plier management, transportation management, and retailer management
that do not strictly fall in a specific category. In most cases, some of the
features in these applications will either be part of business management or
enterprise management.
Applications are discussed in this chapter with regard to the functionality sup-
ported by the application rather than category to which the application belongs.
23.2 Overview of Applications
1. Inventory management: This application is intended to maintain the active
list of inventory or assets in the service provider space. It is also referred to
as asset management application. Before provisioning a service, the provi-
sioning module queries the inventory module to identify the set of resources
that can be allocated for provisioning the service. The inventory module also
performs the function of allocating resources and keeping track of the allo-
cated resources. This application is also linked with taking key decisions of
when new inventory is to be procured and provides insight on possible asset
capacity to meet the requirements of a new service. Inventory management
applications are no longer linked with keeping the data of static assets that
can be put to use. Current inventory management applications perform auto
discovery and dynamically update asset status information.
2. Bill formatting: There are many vendors currently in the telecom industry
that provide a solution for bill formatting. With new value added services
enable flow through provisioning. Once the order is captured in the order
management system, information on the services requested by the customer
is passed to the service provisioning module. The provisioning module identi-
fies the resources that can be provisioned by interacting with inventory man-
agement. The allocated resources are then provisioned, which also includes
network provisioning for setting up the infrastructure to support the services.
After network level provisioning is successfully completed, the service provi-
sioning performs the required configuration to set up the parameters to be
used when activating the service. Finally the service is activated. A validation
module checks if the service activation is successful and the subscribed ser-
vice is ready for customer use. The service provisioning module also informs
the order management module that the order is successfully completed and
triggers the billing subsystem to start billing the customer.
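As an illustration of the flow-through sequence just described, the following sketch strings the modules together in code. The interface and method names are hypothetical and are not drawn from any standard OSS interface; they only mirror the hand-offs in the text: allocate resources from inventory, provision the network, configure and activate the service, then close the order and trigger billing.

    // Simplified, hypothetical sketch of flow-through provisioning.
    import java.util.List;

    interface Inventory { List<String> allocateResources(String serviceType); }
    interface NetworkProvisioner { void provision(List<String> resources); }
    interface ServiceActivator { void configure(List<String> resources); boolean activate(String orderId); }
    interface OrderManager { void closeOrder(String orderId); }
    interface Billing { void startBilling(String orderId); }

    class FlowThroughProvisioning {
        private final Inventory inventory;
        private final NetworkProvisioner network;
        private final ServiceActivator activator;
        private final OrderManager orders;
        private final Billing billing;

        FlowThroughProvisioning(Inventory inv, NetworkProvisioner net,
                                ServiceActivator act, OrderManager ord, Billing bill) {
            this.inventory = inv; this.network = net;
            this.activator = act; this.orders = ord; this.billing = bill;
        }

        boolean fulfill(String orderId, String serviceType) {
            List<String> resources = inventory.allocateResources(serviceType); // identify and allocate
            network.provision(resources);             // network-level provisioning
            activator.configure(resources);           // set the service parameters
            boolean ok = activator.activate(orderId); // activate and validate the service
            if (ok) {
                orders.closeOrder(orderId);           // notify order management
                billing.startBilling(orderId);        // trigger the billing subsystem
            }
            return ok;
        }
    }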
6. Service configuration management: Service configuration management
involves configuring the resource components of a generic service built for a
customer or a class of customers. After basic installation of components for
operating a service, the specific parameters to be used while operating the
service need to be provided. Configuration applications put in the relevant
values for offering the performance guaranteed by the customer SLA. Usually
a centralized configuration application will do the configuration across appli-
cation modules that deliver the service. Having distributed configuration
managers is also common in service provider space. A generic service build
will have a set of customizable parameters that are configured based on the
customer subscription.
7. Service design: At a process level, service design involves planning and
organizing service resources to improve the quality of service. The service
resources can be manpower, infrastructure, communication, or other mate-
rial components associated with the service. There are several methods in
place to perform effective service design. The increasing importance and size
of the service sector in the current economy requires services to be accu-
rately designed in order for service providers to remain competitive and to
continue to attract customers. Applications that aid the service design process
are called service design applications. These applications can redesign a service
that could change the way the service is configured, provisioned, or activated.
The resources, interfaces, and interactions with other modules can also be
modeled during service design.
8. Order management: This application is used for entering the customer order.
It first captures customer information and account level information. Before
completing the order, credit verification or payment processing is done to
check for validity/availability of funds from the customer. After the order
is placed, the order information is transferred to the provisioning system to
activate the service(s) specified in the order. The order management needs to
check if sufficient resources are available to fulfill the order. On successfully
activating the service, the order management system needs to close the
order and maintain the records of the order for billing and customer sup-
port. Billing is related to order management as the customer is billed only for
subscriptions specified in the order placed by the customer. Data from the
order is used by customer support to validate the customer against the order
number. Referring to the order placed by the customers, this helps the sup-
port desk professional to better understand the problem specification from
the customer. Order management usually has a workflow that it follows. The
workflow would involve order capture, customer validation, fraud check, pro-
visioning, activation response, order closure, and notification of customer. It
has multiple interactions with a variety of OSS modules to manage order life
cycle.
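The order life cycle described above can be sketched as a simple state progression. The stage names follow the workflow in the text; the class itself and its methods are illustrative assumptions, since real order management systems model this with far richer workflow engines.

    // Hypothetical sketch of the order workflow stages listed above.
    class OrderWorkflow {
        enum Stage {
            ORDER_CAPTURE, CUSTOMER_VALIDATION, FRAUD_CHECK,
            PROVISIONING, ACTIVATION_RESPONSE, ORDER_CLOSURE, CUSTOMER_NOTIFICATION
        }

        private Stage stage = Stage.ORDER_CAPTURE;

        Stage currentStage() { return stage; }

        // Advance to the next stage in the workflow; the last stage is terminal.
        boolean advance() {
            Stage[] all = Stage.values();
            if (stage.ordinal() == all.length - 1) {
                return false;   // the customer has already been notified
            }
            stage = all[stage.ordinal() + 1];
            return true;
        }
    }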
9. Credit risk management: Applications handling credit risk management offer
communications service providers the ability to continuously assess and miti-
gate subscriber credit risk throughout the customer life cycle. This helps the
service providers to achieve greater operational efficiency. These applications
track credit risk in near real time, prior to subscriber acquisition, during ongoing usage, and through recovery. The real-time check is made possible by building an
extensive customer profile consisting of demographics, usage patterns, pay-
ment information, and other relevant customer information.
This level of detail gives accurate information about the subscriber and enables operators to automatically track credit risk by relating it to the value of service utilization. Most applications can also create credit
risk assessment schemes. Risk variations can be tracked by configuring alert
conditions.
10. Interconnect billing: Interconnect billing applications are settlement systems that handle a variety of traffic, from simple voice to the most advanced content and data
services. They support multiple billing formats and have powerful preprocess-
ing capabilities. Volume-based simple to complex rating, revenue-sharing,
discounting, and multiparty settlement can be performed on many types of
services.
Interconnect billing applications help to track and manage all vital infor-
mation about partners of the service provider, partner agreements, products,
services, and marketing strategies. This helps to track and bill the customer accurately and on time. Interconnect billing also performs cost management and
can be used to easily identify rampant costs and route overflows in billing
systems.
11. Invoicing: The invoice management solution speeds processing time by auto-
mating the bill payment process from receipt to accounts payable. Telecom
bills are mostly complex and error prone. This leads to high invoice process-
ing costs. Use of an invoice management solution reduces cost per invoicing.
Other advantages include a faster invoice approval and collection process. This
leads to an increase in efficiency caused by improved financial reporting.
These formatted documents are then stored for future reference. Using these
documents in the final generation stage the desired file streams (print, web,
xml) are generated.
25. SLA management: The service level agreement the service provider has with the
customer needs to be constantly monitored for compliance. This is achieved
by measuring attributes that can be monitored for business contracts made
with the customer. The SLA management system identifies potential breach
conditions and fixes issues before an actual SLA breach occurs. In cases where
an actual breach occurs, the operator is notified so that the issue can be com-
municated to the customer and suitable compensations are provided as per
breach terms defined in SLA.
The QoS guaranteed to the customer is a form of the SLA. Degradation
of QoS is a direct indication of the possibility of breaching the SLA. Any service
quality degradations that are expected should be identified and the customer
informed. This means that the customer is to be informed of any planned
maintenance or other scheduled events likely to impact delivery of service or
violation of SLA. Preset thresholds are set for the SLA attributes to identify
SLA jeopardy and SLA breach. SLA management also reports to the cus-
tomer the QoS performance as per queries or customer reports generated for
the subscribed service.
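A minimal sketch of the threshold logic mentioned above is shown below. The attribute being measured, the threshold values, and the class and method names are assumptions for illustration; the intent is only to show how preset jeopardy and breach thresholds let the system warn the operator before an actual SLA breach occurs.

    // Minimal sketch of threshold-based SLA monitoring; names are assumptions.
    class SlaMonitor {
        enum SlaState { COMPLIANT, JEOPARDY, BREACH }

        private final double jeopardyThreshold; // for example, 150 ms round-trip delay
        private final double breachThreshold;   // for example, 200 ms agreed in the SLA

        SlaMonitor(double jeopardyThreshold, double breachThreshold) {
            this.jeopardyThreshold = jeopardyThreshold;
            this.breachThreshold = breachThreshold;
        }

        // Classify a measured attribute value against the preset thresholds so
        // the operator can be alerted on jeopardy before an actual breach.
        SlaState evaluate(double measuredValue) {
            if (measuredValue >= breachThreshold) return SlaState.BREACH;
            if (measuredValue >= jeopardyThreshold) return SlaState.JEOPARDY;
            return SlaState.COMPLIANT;
        }
    }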
26. Customer self-management: Customer self-care applications are now used by
almost all service providers. These applications provide a set of interfaces to
the customer to directly undertake a variety of business functions. This way
the customer does not always have to contact a support desk and some of the
support operations can be performed by the customer itself. Access to the
customer interface application needs to be secure and should have a solid
authentication and authorization mechanism.
These applications provide fully automated, assisted service over various customer touch points. The customer self-management application interacts with the customer relationship management modules and performs the operations that were previously done by a support desk professional.
27. Customer problem management: This application is intended to handle cus-
tomer problems. It is closely linked with trouble ticketing. When a customer
calls with a service or billing problem, the customer information and prob-
lem details are collected using problem management application. When the
reported problem cannot be fixed immediately, a trouble ticket is raised so that the customer can refer to the ticket number for tracking the status of the
problem resolution.
The trouble ticket is assigned to the appropriate team in service or network
support department and the problem management module continuously
tracks the status of the ticket.
Customer problem resolution applications are mainly concerned with how
the problem would affect the relationship between the customer and service
layer service and business support modules. Network data mediation mainly provides a feed to the monitoring system. The data from the different network
elements will be in different formats and the mediation component does the
job of converting to a common readable format.
Mediation applications are usually used with OSS systems that take
data from multiple elements or element management systems. When stan-
dard interfaces are not used, a change in a network element brings in significant development effort to change the mediation system to support the new or changed
element. Control commands and messages can be exchanged through the
network data mediation solution. Most of the current mediation solutions
are standards based to support the multiple northbound and southbound
interfaces. In addition to formatting, some of the other functions performed
by mediation system include parsing, correlation, and rule-based message
handling.
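The format conversion performed by a mediation component can be illustrated with a small sketch. The vendor record formats and field names below are assumptions invented for the example, not real equipment formats; the idea is that records arriving in different layouts are parsed into one common representation for the northbound systems.

    // Hypothetical mediation sketch; record formats are invented for illustration.
    import java.util.HashMap;
    import java.util.Map;

    class MediationAdapter {
        // Produce the common readable format used by the northbound monitoring system.
        static Map<String, String> normalize(String vendor, String rawRecord) {
            Map<String, String> common = new HashMap<>();
            if ("vendorA".equals(vendor)) {
                // Assume vendor A sends semicolon-separated key=value pairs, e.g. "elem=olt1;sev=major".
                for (String pair : rawRecord.split(";")) {
                    String[] kv = pair.split("=", 2);
                    common.put(kv[0].trim(), kv.length > 1 ? kv[1].trim() : "");
                }
            } else if ("vendorB".equals(vendor)) {
                // Assume vendor B sends fixed comma-separated positions: element,severity.
                String[] fields = rawRecord.split(",");
                common.put("elem", fields[0].trim());
                common.put("sev", fields.length > 1 ? fields[1].trim() : "");
            }
            return common;
        }
    }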
42. Event management and correlation: Fault data is a critical source of infor-
mation in automating management functionality. Predefined actions can
be mapped to specific events to handle scenarios the same way it would be
done by a network professional. Event management solutions display event
records and take corrective actions for specific events. A network operator
uses a sequence of events to identify a particular problem. Similarly, the event
management system needs to collect and correlate events to identify the event
pattern that maps to a predefined problem.
Event management solutions collect data from multiple network elements.
These events are filtered and each event is checked against specific event patterns for a match. If the events can lead to a pattern, then such events are stored in a repository for reference. For example, if three SLA jeopardy events on the same service raise an e-mail to the operator, then each SLA jeopardy event or the event count needs to be stored.
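The SLA jeopardy example above can be sketched as a simple correlation rule. The class, the count threshold, and the method names are illustrative assumptions; the sketch only shows that each jeopardy event is counted per service and that the notification fires when the stored count reaches the configured value.

    // Illustrative correlation sketch; rule and names are assumptions.
    import java.util.HashMap;
    import java.util.Map;

    class SlaJeopardyCorrelator {
        private static final int ALERT_COUNT = 3;
        private final Map<String, Integer> jeopardyCount = new HashMap<>();

        // Returns true when the stored event count for a service reaches the
        // threshold, which is when the e-mail to the operator would be raised.
        boolean onJeopardyEvent(String serviceId) {
            int count = jeopardyCount.merge(serviceId, 1, Integer::sum);
            return count == ALERT_COUNT;
        }
    }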
43. Network design: These solutions help to develop a design model of new ele-
ments to be included in a network and to design network configurations that
are needed to support new services. First the network and its elements are
modeled and simulated before assigning the design to appropriate systems
that support the implementation.
Network design will include physical design, logical design, and soft-
ware design. For example, multiple call servers need to be placed in a rack.
The physical design will detail in which shelf and slot each of the servers will be
placed. The logical design would segregate the servers into call processing
servers and management servers to detail the interactions. Software design
will drill down into the software packages and features that should be avail-
able on the servers.
44. Network domain management: These applications provide interfaces to
hide vendor specific implementations. Using domain management applica-
tion, legacy functionalities are exposed as standard interfaces compliant to
standards like MTNM, OSS/J, and MTOSI, following standard data models
like SID and CIM.
The net result is in-domain activation, alarm collection, filtering, and QoS features. A key advantage of a domain management solution is that it
can be replicated across policies for different resource domains. An interest-
ing feature is that domains can also be replicated to cover multiple vendors
and equipment types. This makes it highly reusable and a versatile application
used by most service providers.
45. Network logistics: These applications are used to manage the availability
and deployment of network resources at site locations where the service is
offered. These applications will coordinate the identification and distribu-
tion of resources to meet business demands in the minimum possible time.
They are employed in stock balancing, resource or kit distribution, warehouse
stock level projections, and capacity planning.
It can be seen that network logistic applications have interactions with
supply chain management applications to get the details for stock distribu-
tion. Network logistics also help in workforce management encompassing
all the responsibilities in maintaining a productive workforce. An important
module in network logistics management is automating the engineering work
order. This involves automating many engineering processes and eliminating
data reentry to increase engineering productivity.
46. Network performance monitoring: These applications collect performance
data from the network elements. The performance records facilitate effective
network management. Performance records give information about histori-
cal behavior and utilization trends. This can be used for capacity manage-
ment and planning of resources. Problem identification and testing can
also be performed using the records collected by performance monitoring
systems.
Network monitoring systems provide inputs to resource problem man-
agement, capacity planning, and forecasting applications. A set of key per-
formance indicators and key quality indicators are monitored to identify
a shift in network performance from a predefined threshold. As historical
records need to be maintained for performance analysis, it is common to
have a dedicated repository for archival of performance records. The network
performance monitoring application will access the repository for storage and
analysis of data.
47. Network planning: These applications deal with all types of planning associ-
ated with different technology layers in the network. Engineering works and
meeting unpredicted demands cannot be performed without proper plan-
ning. There would be considerable network rearrangement and relocation of
network capability that needs to be planned to meet operational scenarios.
Use of network planning applications brings in considerable cost benefit to
the telecom company.
common customer queries that can be referred using links and search utilities
to identify how to troubleshoot a problem. These applications support work-
flows to assign customer complaints to the appropriate help desk when the
user makes a call.
Tracking of the status of a customer complaint can be done with these
applications. To achieve this functionality the support solution interacts with
trouble ticket management solution. It is quite common to have the solution
offered from a Web portal through which customer issues can be submitted
and the central knowledge base can be accessed. Some of the functionalities
in this solution will be available in an integrated customer relationship man-
agement solution.
55. Traffic monitoring: These applications are used to identify potential conges-
tion in traffic over the network. Identifying a possible congestion helps to
route traffic appropriately to avoid an actual congestion. The traffic moni-
toring solutions usually interact with the configuration modules doing traf-
fic control and with inventory modules to allocate resources to fix traffic
problems.
Most traffic monitoring solutions are not limited to identification and
analysis. The corrective action to fix a problem is either suggested by this solu-
tion to the operator or a predefined set of actions is executed to fix a potential
congestion. The corrective actions are applied based on resource utilization
values of the routes participating in the network. The services affected by a
traffic problem can be identified and highlighted using the traffic monitoring
solutions.
56. Retail outlet solutions: These solutions are used by wireless service providers
to perform point of sales and point of service in their retail stores. The func-
tions performed using these solutions are similar to what is performed at the
customer support desk. Instead of placing the order by phone to a support
desk, the customer visits the retail outlet to place the order.
These retail solutions will interact with the order management module to
place and manage the customer order. It can also access the customer infor-
mation. In short, most of the interfaces offered in customer relationship man-
agement and customer query knowledge base solutions are provided in these
retail outlet solutions. In addition to these capabilities local cash manage-
ment, retail store inventory management, local promotions, and local pricing
that make up retail integration are available in the retail outlet solutions.
57. Security management: The rapid spread of IP-based access technologies as
well as the move toward core network convergence has led to a much wider
and richer service experience. Security vulnerabilities such as denial-of-service attacks, data-flood attacks (SPIT), and so on, which can destabilize a system and allow an attacker to gain control over it on the Internet, apply to wireless communication over IP networks as well. Hackers are no longer limited to looking at private data; they can also see their victims.
23.3 Conclusion
With the evolution in technology, new and improved networks are coming up in the market and there is an increased demand for management solutions that can manage these next generation networks or their elements by hiding the inherent complexity of the infrastructure. The capital and operational expenditure in managing the networks keeps on increasing when the OSS/BSS framework does not support the advances in network and service technology. The problem most service providers face is the considerable effort required to have an OSS/BSS framework that can manage a new network and effectively bill the customer to yield business benefits.
There are multiple OSS/BSS applications discussed in this chapter. These applications make it easy to manage the network and services in a multiservice, multitechnology, and multivendor environment. Standardization bodies are coming up with standards and guidelines for developing these management solutions. Current developments in the telecom industry have led to the addition of new networks and the expansion of existing networks to support new technologies and products. The networks will continue to evolve, and the only way to perform network management in a cost effective and operationally friendly manner is to do cost-effective OSS/BSS management.
This chapter gives the reader an overview of the management applications that
make up the OSS/BSS stack from a holistic perspective. It considers the customer,
service, and network layers in OSS/BSS and the applications that are used for man-
aging interactions with third-party vendors, suppliers, stakeholders, and partners.
The intent is to meet the goal of this book, which is to give the reader a fundamental
understanding of the concepts without diving into the details of a specific topic.
It is important to note that TeleManagement Forum (TMF) has come up
with a Telecom Application Map (TAM), which is an application framework that
can be referred to for understanding the role and the functionality of the various
applications that deliver OSS and BSS capability. The TAM documents provide a
Additional Reading
1. www.tmforum.org
GB929: Applications Framework Map (TAM). Latest release of TAM documents can
be downloaded from the Web site.
2. Lawrence Harte and Avi Ofrane. Billing Dictionary, BSS, Customer Care and OSS
Technologies and Systems. Fuquay Varina, NC: Althos Publishing, 2008.
3. Kornel Terplan. OSS Essentials: Support System Solutions for Service Providers. New York:
John Wiley & Sons, 2001.
4. Andreas Molisch. Wireless Communications. New York: Wiley-IEEE Press, 2005.
5. Andrew McCrackan. Practical Guide To Business Continuity Assurance. Norwood, MA:
Artech House Publishers, 2004.
6. James K. Shaw. Strategic Management in Telecommunications. Norwood, MA: Artech
House Publishers, 2000.
7. Christos Voudouris, Gilbert Owusu, Raphael Dorne, and David Lesaint. Service Chain
Management: Technology Innovation for the Service Business. New York: Springer, 2008.
Chapter 24
Information Models
This chapter is about information models. Two of the most popular information models in the telecom domain, the Common Information Model (CIM) and the Shared Information/Data (SID) model, are discussed in this chapter. The TeleManagement Forum (TMF) and the Distributed Management Task Force (DMTF) have developed a mapping between these two information models.
24.1 Introduction
Information models are important for application scalability. For example, consider
a simple order management solution that uses two fields, first name and last name
to represent the name of a customer. Another application for customer support
uses three fields, first name, middle name, and last name to represent the name
of a customer that contacts the support desk for assistance. The data collected and
used by one of these applications cannot be used directly by the other application
due to the discrepancy in the number of fields used to collect the customer name. If
the customer support application wants to check the order details of the customer
by interacting with the order management module, then the customer name sent
by the customer support application will not match the name used by the order
management module. So standardization of information is required to ensure that
an application and the information base are more scalable.
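As a minimal illustration of the mismatch just described, the following C declarations (with hypothetical field names and sizes, not taken from any particular product) show why records captured by the two applications cannot be exchanged directly:

/* Hypothetical record layouts for the two applications discussed above */
struct order_mgmt_name {          /* order management: two fields */
    char first_name[32];
    char last_name[32];
};

struct support_desk_name {        /* customer support: three fields */
    char first_name[32];
    char middle_name[32];
    char last_name[32];
};

/* A lookup keyed on the two-field form cannot directly match a record
 * captured in the three-field form; the middle name has no counterpart. */

A shared, standardized definition of the customer name entity removes this ambiguity.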
For the above example, the standardization of information is achieved by fixing the number of fields to be used for a customer name. For a customer entity, there will be multiple attributes that can be used to define the entity. The number of fields to be used for each of these attributes, if defined, will improve
The SID framework comprises business entities and their attributes, represented using UML models with process mapping to eTOM. The SID specifications are grouped into domains, each of which has a set of SID business entities and attributes collected in aggregate business entities (ABEs) that can be directly used for data design by service developers and service providers.
The SID domains are designed with a high degree of cohesion within each domain and limited coupling between domains (see Figure 24.1). Use of SID leads to business benefits relating to cost, quality, timeliness, and adaptability of enterprise operations. With standard information and interface models, service providers can focus on value creation for their customers without worrying about interoperability between applications.
Apart from satisfying the NGOSS information and data needs, the SID model addresses one of the most compelling requirements in enterprises: achieving business and IT alignment. SID derives its roots from the eTOM, which provides a business process reference framework and a common business process vocabulary for the communications industry. The eTOM offers enterprises an effective way to organize their business processes and the communication between these processes. The companion model to the eTOM, the SID business view model, provides an information reference model and a common vocabulary from a business entity perspective.
The SID uses the concepts of domains and aggregate business entities (or subdo-
mains) to categorize business entities, thereby reducing duplication and promoting
reuse. The SID scheme is categorized into a set of layers. This allows a specific team
to identify and define specifications on a layer with minimal impact on other layers.
(Figure, not reproduced here: the SID domains, including Service, Resource, Supplier/Partner, and Enterprise, mapped to the corresponding eTOM process areas.)
The ABEs are organized using a categorization pattern of managed entity categories, which ensures consistency of the ABE content and structure within each domain.
Since a detailed discussion of SID is outside the scope of this book, let us look into its basic building blocks, the SID domains:
1. Market domain: This domain includes strategy and plans in the market space. Market segments, competitors and their products, through to campaign formulation, are covered in this domain.
2. Sales domain: This domain covers all sales related activity, from sales contacts or prospects through to the sales force and sales statistics. Both the sales and marketing domains have data and contract operations that support the sales and marketing activities needed to gain business from customers and potential customers.
3. Product domain: The information and contract operations related to product
life cycle are covered in product domain. The ABEs in product domain deal
with product planning, performance, offerings, and usage statistics of the
products delivered to the customer.
4. Customer domain: All data and contract operations associated with indi-
viduals or organizations that obtain products from a service provider are
included here. It represents all types of interactions with the customer and
management of customer relationship. The contract operations related to the
accessing data across an enterprise and is not associated with a specific vendor. CIM can be used for modeling actual and virtual resources.
Another important standard, Web-Based Enterprise Management (WBEM), also developed by the DMTF, is commonly used with CIM. While CIM defines the information model, WBEM defines the protocols used to communicate with a particular CIM implementation. CIM provides interoperability at the model level and WBEM provides interoperability at the protocol level. WBEM standards can be used to develop a single set of management applications for a diverse set of resources. The DMTF has defined specifications for the schema and operations of CIM. This section is intended to give an overview of the CIM concepts.
Some of the basic building blocks in CIM are:
4. Property: Properties describe the data of a CIM class. A CIM property has a name, a data type, a value, and an optional default value. Each property must be unique within the class.
5. Method: Methods describe the behavior of the class. A method can be invoked and includes a name, a return type, optional input parameters, and optional output parameters. A class can have zero or more methods, and each method must be unique within the class. Methods with these characteristics are termed CIM methods. The method parameters and return type must be CIM-supported data types. The IN or OUT qualifier specifies whether a parameter is an input or an output.
6. Qualifier: Values providing additional information about classes, associations, indications, methods, method parameters, properties, or references are called CIM qualifiers. A qualifier can be used only with a qualifier type definition, and its data type and value must match those of the qualifier type. Qualifiers have a name, type, value, scope, flavor, and an optional default value. The scope of a qualifier is determined by the namespace in which it is present. The qualifier type definition is unique to the namespace to which it belongs.
7. Reference: This is a special property data type declared with the REF keyword. Use of REF indicates that the data type is a pointer to another instance. The role each object plays in an association is thus defined using a reference. The role name of a class in the context of an association can be represented by the reference.
8. Association: Associations are classes having the association qualifier. An association is a class or separate object that contains two or more references representing relationships between two or more classes. Associations are used to establish relationships between classes without affecting any of the related classes.
9. Indication: Indications are classes having the indication qualifier. An indication is the active representation of the occurrence of an event, and it can be received by subscribing to it. CIM indications can have properties and methods and can be arranged in a hierarchy.
There are two types of indications, namely life cycle indications and process indications. Life cycle indications deal with CIM class and instance life cycle events. Class events can be class creation, modification, or deletion; instance events can be instance creation, modification, deletion, and method invocation. Process indications deal with alert notifications associated with objects.
10. Managed object format (MOF): This is a language based on IDL through which CIM management information can be represented and exchanged. Textual descriptions of classes, associations, properties, references, methods, and instance declarations, together with their associated qualifiers and comments, are the components of a MOF specification. A set of class and instance declarations makes up a MOF file.
1. Core model: The core model is a set of classes, associations, and properties for describing managed systems. The core model can be regarded as a starting point for determining how to extend the common schema. It covers all areas of management, with emphasis on the elements and associations of the managed environment.
The class hierarchy begins with the abstract managed element class. It includes subclasses like the managed system element classes, product-related classes, configuration classes, collection classes, and the statistical data classes. From these classes, the model expands to address problem domains and relationships between managed entities.
2. Common models: Information models that capture notions common to a particular management area, but not dependent on any particular technology or implementation, are called common models. They offer a broad range of information, including concrete classes and implementations of the classes in the common model. The classes, properties, associations, and methods in the common models are used for program design and implementation.
3. Application model: This model is intended to manage the life cycle and execution of an application. Information in the application model deals with managing and deploying software products and applications. It details models for applications intended for stand-alone and distributed environments.
It supports modeling of a single software product as well as a group of interdependent software products. The three main areas in the application model are the structure of the application, the life cycle of the application, and the transitions between states in the life cycle of an application.
The structure of an application can be defined using the following
components:
Software element: It is a collection of one or more files and associated
details that are individually deployed and managed on a particular
platform.
Software feature: It is a collection of software elements performing a par-
ticular function.
Software product: It is a collection of software features that is acquired as a single unit. An agreement that involves licensing, support, and warranty is required between the consumer and the supplier for acquiring a software product.
Application system: It is a collection of software features that can be man-
aged as an independent unit supporting a particular business function.
4. Database model: The management components for a database environment
are defined in the CIM database model. This database model has a number
24.4 Conclusion
The SID and CIM are two widely accepted frameworks for information modeling. There is considerable divergence between the two models, as they were developed by different standardizing organizations. The TeleManagement Forum, the developer of SID, and the Distributed Management Task Force, the developer of CIM, have jointly worked toward harmonizing the two models. TMF document GB932/DMTF document DSP2004 contains the physical model mapping, and TMF document GB933/DMTF document DSP2009 contains the logical model mapping.
Using a standard information model is required for system interoperability. Ease of integration is a must in the telecom management space, where multiple operation support solutions interoperate to manage the operations of the Telco. The information models can be mapped to the business processes and applications where they need to be used. In short, use of standard information models is mandatory in next generation telecom management solutions. After a brief overview of what an information model is expected to achieve, this chapter explained the basic concepts of the two most popular information models.
Additional Reading
1. www.tmforum.org
Latest release of the following documents can be downloaded from the Web site
GB922 and GB926: SID Solution Suite
GB932 and GB933: CIM-SID Suite
2. John P. Reilly. Getting Started with the SID: A Data Modeler’s Guide. Morristown, NJ:
TeleManagement Forum, 2009.
3. Distributed Management Task Force (DMTF) CIM Tutorial at: www.wbemsolutions.
com/tutorials/CIM/
4. Chris Hobbs. A Practical Approach to WBEM/CIM Management. Boca Raton, FL: CRC
Press, 2004.
5. John W. Sweitzer, Patrick Thompson, Andrea R. Westerinen, Raymond C. Williams,
and Winston Bumpus. Common Information Model: Implementing the Object Model for
Enterprise Management. New York: John Wiley & Sons, 1999.
Chapter 25
Standard Interfaces
This chapter is about OSS interfaces for interaction. Two of the most popular interface standards in the telecom domain, the Multi-Technology Operations System Interface (MTOSI) and the OSS through Java Initiative (OSS/J), are discussed in this chapter. How these two standards can complement each other is also covered in this chapter.
25.1 Introduction
There are multiple operation support systems currently available in the telecom market. In a service provider space, multiple OSS solutions need to interoperate seamlessly to reduce operational costs and improve operational efficiency. To make this possible, the interfaces for interaction need to be standardized. Most OSS solution developers are making their products compliant with the MTOSI or OSS/J interface standards.
In the current industry, compliance with standards is a necessity. Let us take the example of a simple fault management module. The network provisioning module might request active fault alarms using the interface call getAlarmList(), the inventory management module might request active fault alarms using the interface call getActiveAlarms(), and the event management module might request active fault alarms using the interface call getFaultLogs(). On the other end, the fault management module might be exposing the interface getActiveFault() for other modules to query and get the active alarm list as a response.
One solution to this problem is to write an adapter that translates the call between modules, for example translating getAlarmList() from the network provisioning module into getActiveFault(), which the fault management module can understand. In the example
we are discussing, the solution of developing adapters will result in three adapters for the fault management module alone. So in a service provider space, where there will be many OSS solutions, the number of adapters required would make interoperability costs huge. Again, when a new OSS solution is introduced, a set of adapters needs to be developed for the modules it is expected to integrate with. So adapters are not a solution that is cost effective for the service provider.
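To make the adapter idea concrete, the sketch below shows a thin translation layer in C (used here only for consistency with the implementation chapters; the types and function signatures are hypothetical and not taken from any standard):

typedef struct alarm_list { int count; } alarm_list_t;   /* simplified alarm list */

/* Interface actually exposed by the fault management module */
static alarm_list_t *getActiveFault(void)
{
    static alarm_list_t alarms = { 0 };
    return &alarms;
}

/* Adapter exposed to the network provisioning module: translates its
 * getAlarmList() call into the fault management module's getActiveFault() */
alarm_list_t *getAlarmList(void)
{
    return getActiveFault();
}

A real adapter would also translate parameter formats and result structures, and a similar wrapper would be needed for every non-standard caller.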
Another option to fix the interoperability problem is to have standard interfaces. Let us look into how this works for the example we are discussing. If the standardizing body decides that getActiveFault() is the standard interface for getting active alarms from the fault management module, then the service provider can request compliance with the standard for the modules that interact with the fault management module. That is, the fault management module will respond only to a getActiveFault() request, and any standards-compliant module that interacts with the fault management module is expected to use this standard interface. When a new standards-compliant module is introduced, interoperability with the fault management module will not be an issue.
This chapter explains the fundamentals of MTOSI and OSS/J so that the reader
can understand the significance of interface models and how an interface model
can be used in developing applications that are compliant with standards. The com-
mon terminologies used in the standard documents on interface models are also
discussed in this chapter.
25.2 MTOSI Overview
Before we discuss MTOSI, let us first give a brief overview of MTNM, the predecessor of MTOSI. MTNM was formed by combining the ATM information model (ATMIM) team, which was formed to address ATM management, and the SONET/SDH information model (SSIM) team. MTNM was one of the most deployed interface suites of the TM Forum. It includes a business agreement document titled TMF513, an information agreement titled TMF608, a CORBA solution set titled TMF814, and an implementation statement titled TMF814A. Popular NMS solutions like AdventNet Web NMS provide support for TMF MTNM interfaces. MTNM mainly focused on management of connectionless networks, with an emphasis on Ethernet and control plane-based networks.
When most OS suppliers started converting MTNM IDL to XML, the TMF started the MTOSI group. This group took a subset of MTNM and generalized the procedures for OS-OS interactions (see Figure 25.1) into an XML-based interface that was most required by the telecom market. A major MTOSI goal is to make the XML interface independent of the transport and to gradually support more transports.
Use of MTOSI has many benefits for the different players in the telecom value chain. To equipment vendors, MTOSI results in reduced development costs and provides a faster time to market. It also lowers deployment risks and enables
(Figure 25.1, not reproduced here: operations systems interconnected through MTNM CORBA and MTOSI interfaces, with an NMS above and the network elements below.)
announce the entity. The discovering OS and the naming OS need not be the same
OS for any particular entity.
The following attributes of the entities are visible across the interface:
◾◾ Name: The entity instance on the CCV can be uniquely identified using the
“name” attribute. Once the naming OS sets the attribute, its value does not
vary for the life of the entity.
◾◾ Discovered name: In the case where the OS that publishes the entity on the
CCV is not the naming OS, the name of the entity is known as the discov-
ered name. This attribute may be left empty if the naming OS publishes the
entity first on the CCV. The value of the name attribute and the discovered name attribute of the entity can be set to the same value or to different values.
◾◾ Naming OS: The OS that is responsible for setting the “name” of the entity
also sets up the naming OS attribute and this attribute represents an identi-
fier for the steward of the entity. There is a unique naming OS for each entity
published on CCV.
◾◾ Alias name list: The attribute is a list of name-value pairs for the entity. The
name refers to the type of alias and the value component holds the alias itself.
◾◾ User label: It is an optional attribute that represents the “user friendly” name
for the entity.
◾◾ Owner: It is an optional attribute that is used to identify the owner of the
entity.
◾◾ Vendor extensions: This attribute is used by vendors to further qualify and
extend the entity characteristics.
MTOSI provides steps to discover new network entities such as managed elements
and topological links in cases where two different suppliers are providing the net-
work inventory management OS and the network discovery/activation OS. Another
interesting feature in MTOSI is backward and forward compatibility.
25.3 MTOSI Notifications
In this section, the notification feature is taken as a sample MTOSI feature for
discussion. The event notifications provided by MTOSI include object creation notifications, attribute value change notifications, and object discovery notifications, among others.
The interface offers capability for both single-event inventory notifications, like the creation of a particular entity or attribute value changes for several attributes of a given entity, and multievent inventory notifications, which are a direct method for getting bulk inventory updates from the source OS to other interested OSs.
To get a feel of notification generation like object creation notification (OCN)
and attribute value change (AVC) notification, let us discuss two methods by which
a new network entity is announced on CCV. The first method requires the discovery
OS to keep track of the planned entities, and the network entities to be planned and
named at the interface before they are discovered. The steps in the first method are:
In the second method the new network entity is first announced on the CCV by
the discovery OS before it is given a unique name. Here we will deal with object
discovery notification (ODscN) and object creation notification (OCN). The steps
in the second method are:
1. A network entity that has not been announced by the inventory OS at the
interface is directly detected by the discovery OS.
2. The discovery OS then sends an object discovery notification to the notifica-
tion service.
Service interfaces and data entities define the interface implementation. The service interfaces consist of operations and notifications that have parameters; data entities are made up of attributes. Additional service interfaces, like operations and notifications, are added as the interface is extended. Interface elements like operations will not be deleted when passing from one version to the next unless an error with an existing element is identified. Instead of the term delete, deprecate is used with respect to parameters for operations and notifications, since it may be easier to leave the operation/notification signature alone and just deprecate usage of a particular parameter.
25.4 OSS/J Overview
While the NGOSS program of the TM Forum focuses on the business and systems aspects, essentially the problem and solution statement of OSS-solution delivery, the OSS/J program focuses on the implementation, solution realization, and deployment aspects using Java technology. The OSS through Java (OSS/J) initiative defines a technology-specific set of API specifications. In addition to API specifications, OSS/J also includes reference implementations, technology compatibility kits, and multitechnology profiles for OSS integration and deployment.
The OSS/J API specifications use the core business entities (CBE) model of the Shared Information/Data (SID) model, which helps to provide a technology-neutral implementation view of the NGOSS architecture. The Java 2 Platform Enterprise Edition (J2EE) and the EJB-based application architecture are the enabling technologies in OSS/J, used as tools for developers to easily develop applications. The J2EE platform offers faster solution delivery with a reduced cost of ownership. The Enterprise JavaBeans (EJB) component architecture is designed to hide complexity and enhance portability by separating out the complexities inherent in OSS applications.
Rapid integration and deployment of a new OSS application is possible with the OSS through Java initiative's framework, because the standard EJB components are already equipped with connectors. Application integration capabilities are provided for server-side applications by APIs that are defined within the initiative using the J2EE platform and EJB architecture.
A standard set of APIs using Java technology for OSS and BSS can be defined and implemented using the OSS through Java initiative. Members of OSS/J leverage standards from the 3rd Generation Partnership Project (3GPP), the Mobile Wireless Internet Forum (MWIF), and the TeleManagement Forum (TMF) to
implement common Java APIs and provide source code that can be made freely available to the industry. OSS/J implementation groups, formed to develop and implement interoperable and interchangeable components for building OSS applications, include telecom equipment manufacturers, software vendors, and systems vendors.
Addressing the lack of interoperable and interchangeable OSS applications in the OSS industry is the goal of the initiative. Developing and implementing common APIs based on an open component model, with well-defined interoperable protocols and deployment models, serves to accomplish this goal.
The OSS/J initiative is carried out under the open Java Community Process (JCP). Using the JCP, the initiative develops an API specification, a reference implementation, and a technology compatibility kit for each application area.
Use of OSS/J has many benefits for the different players in the telecom value chain. To equipment vendors, it facilitates integration of equipment with more vendor products. Service providers are able to develop agile solutions and improve interoperability.
Before we conclude the discussion, let us look into OSS/J certification. The OSS/J API specifications and reference implementations are available for free download. OSS solution developers can use the API specification and reference implementation code to develop the OSS/J APIs in products that are planned to be certified. To certify the developed API, the solution developer has to download the technology compatibility kit within the Java Community Process, which is a test suite that is available for free.
The test suite needs to be run against the API developed by the solution developer. The test execution generates a test report that specifies whether the test run was successful. On obtaining a successful test run in the report, the solution developer sends the test execution report to the OSS/J certification committee. The certification committee validates the report and the process used to create it. Once the certification committee acknowledges the certification of the product, the certification can be published. The published certificates also carry a reference to the test execution report. The publication of the test report eases the procurement of certified products for faster integration.
(Figure 25.2, not reproduced here: a generic OSS/J framework using an MTOSI adapter to interact over MTOSI with the EMS and the network elements below it.)
OSS/J offers technology-independent APIs, with separate APIs for the various OSS functions. MTOSI provides OSS interfaces for a multitechnology environment and is based on specific models; the MTOSI models are defined in TMF608. OSS/J, being associated with a generic set of operations, can have an adapter written when a specialized set of operations needs to be executed with MTOSI. This is shown in Figure 25.2, where the generic OSS/J framework has an MTOSI adapter for interaction wherever a defined set of operations is to be executed.
MTOSI and OSS/J have some marked differences. While OSS/J is Java based, MTOSI is XML based with an emphasis on Web services. MTOSI does have guidelines on using JMS, though its usage is not mandatory for implementing MTOSI-compliant APIs. OSS/J also supports XML, but the XML API is wrapped with a Java API.
The MTOSI back-end implementation is not tied to a specific technology, whereas the OSS/J back-end implementation is in Java. As MTOSI evolved from MTNM, most of the issues in the interface APIs were fixed, and the definitions offer a service-oriented interface in a heterogeneous platform environment. So when multitechnology, multinetwork support is required, MTOSI would be a good choice, while in an open source based environment OSS/J would be the popular choice. Support for both these interface standards can be implemented in the same OSS solution.
25.6 Conclusion
An application programming interface (API) is an interface that defines the ways by which an application program may request services from another application. APIs provide the format and calling conventions the application developer should use to access the services. An API may include specifications for routines, data structures, object classes, and protocols used to communicate between the applications.
APIs are mostly abstract interface specifications and control the behavior of the objects specified in the interface. The application that provides the functionality described by the interface specification is the implementation of the API. This chapter gives the reader an overview of the management interface standards used in OSS/BSS development. It considers MTOSI and OSS/J for developing applications in the customer, service, and network layers of OSS/BSS. The basic concepts of these two popular interface standards are discussed without diving into the details of a specific topic.
Additional Reading
1. www.tmforum.org
(Latest version of the documents can be downloaded from the Web site)
TMF608: MTOSI Information Agreement
TMF517: MTOSI Business Agreement
TMF854: MTOSI Systems Interface: XML Solution Set
TMF608: MTNM Information Agreement
TMF513: MTNM Business Agreement
TMF814: MTNM IDL Solution Set
TMF814A: MTNM Implementation Set and Guidelines
OSS APIs, Roadmap, Developer Guidelines, and Procurement Guidelines
2. Software Industry Report. Sun & Leading Telecom Companies Announce Intention To
Start Next Generation OSS Through Java. Washington, DC: Millin Publishing Inc.,
2000.
3. Kornel Terplan. OSS Essentials: Support System Solutions for Service Providers. New York:
John Wiley & Sons., 2001.
4. Dr. Christian Saxtoft. Convergence: User Expectations, Communications Enablers and
Business Opportunities. New York: John Wiley & Sons., 2008.
5. John Strassner. Policy-Based Network Management: Solutions for the Next Generation.
San Francisco, CA: Morgan Kaufmann, 2003.
IV
IMPLEMENTATION GUIDELINES
Chapter 26
Socket Communication
This chapter is about the basics of network programming. It introduces the reader
to the concept of writing client–server code. The client usually acts as the agent
sourcing data to the server that listens for data from multiple clients. Out of the dif-
ferent techniques that are used for client–server programming, this chapter specifi-
cally deals with socket programming. Programming for TCP and UDP transport
between client and server is also discussed.
26.1 Introduction
To complete this book on fundamentals of EMS, NMS, and OSS/BSS, it is essential
to discuss the implementation basics for developing solutions using the concepts
discussed in preceding chapters.
Some of the important aspects in developing management applications are:
◾◾ These applications mainly deal with the application layer. Management pro-
tocols usually lie in this layer.
◾◾ There will be multiple agents or clients running on the network elements that will collect data and send it to a central server. Even for the upper layers, there will be business and service level applications that take feeds from multiple NMS clients.
◾◾ There are different techniques of communication based on environment.
Socket-based programs, RPC-based programs, and web-based programs
are quite common in implementing application interactions for manage-
ment solutions.
This part of the book is intended to cover these topics for developing management applications, including a case study on the design approach to be followed in developing an NGNM solution and then customizing the solution for a specific network. The coverage still limits the discussion to the fundamentals, with pointers to additional reading for detailed information on specific topics.
This chapter gives details on the application layer and client–server programming. The socket interface is then discussed in detail, showing how TCP and UDP communication is achieved with socket programming. The programming language used in this chapter is C, considering that most initial implementations of socket programming were done with socket APIs (application programming interfaces) in the C programming language. It is worth noting that the C programming language was initially developed for the UNIX operating system by Dennis Ritchie.
26.2 Application Layer
The application layer is the top layer in the OSI model and TCP/IP model. The
application layer manages communication between applications and handles issues
like network transparency and resource allocation. The layers below the application
layer provide the framework for communication between application programs in
the application layers.
The important aspects of the application layer programs are:
◾◾ Services: There are different types of services that can be offered in the
application layer. These make use of the client–server paradigm for message
exchange. Some of the popular protocol–based services are:
−− Services using simple mail transfer protocol (SMTP): The SMTP service allows the transfer of messages between two users on the Internet via electronic mail.
−− Services using file transfer protocol (FTP): The FTP service allows the exchange of files between two elements.
−− Services using hypertext transfer protocol (HTTP): HTTP facilitates the use of the World Wide Web.
◾◾ Addressing: The client message includes the address of the server as the desti-
nation address and its own address as the source address for communication.
Similarly for messages from the server, the server address is used as the source
address and the address of the client is the destination address. The application
layer has its own addressing format, which is different from the address for-
mat used by the underlying layers. For example, the electronic mail service uses a host address in the form of an e-mail address (such as user@domain), and the World Wide Web service uses a host address format like www.crcpress.com.
The application program uses an alias name as the address rather than an
actual IP address. The main part of the alias is the address of the remote host
and the remaining part is related to the port address of the server and direc-
tory structure where the server program is located. Instead of using the IP
address, the application program uses an alias name with the help of another
entity called DNS (domain name system).
◾◾ Capabilities: There are a standard set of capabilities that are expected in any
application layer program. Three of the capabilities from the standard set
that are discussed here are reliability of data transfer, high throughput, and
minimal delay in communication. Not all application programs implement
these capabilities even though these are implied requirements in commercial
grade management solutions.
−− Reliability of data transfer: Some application programs include reliability as part of their protocol implementation or use the services of a reliable transport layer protocol like TCP. There are applications that do not require reliable data transfer, as in intermittent data transfer over UDP.
−− High throughput: Having a high throughput ensures transmission of a
maximum amount of data in a unit of time. In most cases a high through-
put will require high bandwidth. When the volume of data to be trans-
ferred is high as in transmission of video files, high throughput becomes
a necessary requirement.
−− Minimum delay: Some applications are very sensitive to delay. For exam-
ple, communication delay can adversely affect an interactive real-time
application program.
26.3 Monitoring
Networking protocols help to manage the network and make it easier to work on the network. Some of the utility protocols may not be required for every network application, though it might be possible to add unique features to solutions using these protocols. Applications based on DNS, ICMP (Internet control message protocol), TELNET, and other utility protocols in the TCP/IP suite, like ARP (address resolution protocol) or RIP (routing information protocol), can be effective tools for an operator performing network troubleshooting. Some of these methods are discussed here (see also Figure 26.1).
Domain name system: This is a client–server application that identifies each host on the Internet with the help of a unique user-friendly name referred to as a domain name. This functionality is useful because most people find it difficult to remember strings of numbers like the IP address of a host. The DNS converts domain names into IP addresses by mapping an easily recognizable domain name to an IP address. A worldwide network of DNS servers stores the mapping of domain names to IP addresses.
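A minimal sketch of such a lookup in C is given below, using the classic gethostbyname() resolver call (newer code typically uses getaddrinfo(); the host name chosen here is just an example):

#include <stdio.h>
#include <netdb.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    /* resolve a domain name to its IP address through DNS */
    struct hostent *hptr = gethostbyname("www.crcpress.com");
    if (hptr == NULL) {
        fprintf(stderr, "name lookup failed\n");
        return 1;
    }
    /* print the first IP address returned for the domain name */
    printf("%s resolves to %s\n", hptr->h_name,
           inet_ntoa(*(struct in_addr *)hptr->h_addr_list[0]));
    return 0;
}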
Internet control message protocol: The PING utility is based on this protocol, which is used to report broken network connections or other router-level problems that end hosts might need to know about. The PING utility available on most computers helps to identify whether another computer in the network is switched on and how much delay there is over the connection when sending messages to it.
TELNET: This protocol is used for debugging servers and for investigating new
TCP-based protocols. To debug a server, a TCP connection is made to the server
Figure 26.1 Working with PING, ARP, and TELNET in MS-DOS (Microsoft disk
operating system).
that is to be debugged using the telnet utility. Since most telnet clients facilitate connection to ports other than the default port 23, the telnet utility can also be used
to investigate new TCP-based protocols. Most operating systems include telnet
clients, so telnet is rarely used programmatically.
Address resolution protocol resolves IP addresses into their equivalent MAC
addresses and routing information protocol helps to identify the number of hops
to a destination. There are many more utility applications that can aid in network
troubleshooting.
(Figure, not reproduced here: multiple clients communicating with a server over the Internet.)
terminate before another client is started by the machine. When the clients are run
concurrently, the machine allows running two or more clients at the same time.
Similarly, the servers can be run on a machine either iteratively or concurrently. An iterative server accepts, processes, and sends the response to the client before accepting another request; that is, it processes a single request at a time. A concurrent server processes a large number of requests at the same time, sharing its time between them. The transport layer protocol and the service method determine the server operation. UDP, a connectionless transport layer protocol, or TCP, a connection oriented transport layer protocol, is commonly used.
Based on the number of requests the server can handle and the type of connec-
tion, servers can be classified into four types:
◾◾ Family: This field defines the protocol family, like IPv4, IPv6, or UNIX domain protocols.
◾◾ Type: This field defines the type of the socket. Socket type can be raw socket,
stream socket, or datagram socket.
◾◾ Protocol: This field defines the protocol used. It can be TCP for connection
oriented communication and UDP for connectionless communication.
(Figure, not reproduced here: the raw socket, stream socket, and datagram socket interfaces between the application program and the underlying TCP, UDP, and IP layers.)
◾◾ Local socket address: This field is a combination of the local IP address and
the port address. It is the socket address of the local application program.
◾◾ Remote socket address: This field defines the combination of the remote IP
address and the port address. It is the remote socket address of the remote
application program.
◾◾ Raw socket: Sockets that directly use the IP services are referred to as raw sockets. Protocols like ICMP or OSPF that use the IP services directly use neither the stream socket nor the datagram socket, but the raw socket.
◾◾ Stream socket: Stream sockets are used with a connection oriented protocol like TCP. A pair of stream sockets connects an application program with another application program on the network.
◾◾ Datagram socket: Datagram sockets are used with a connectionless protocol like UDP. Messages are sent from one application program to another application program on the network using pairs of datagram sockets (see the sketch after this list).
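As a small illustration of the three socket types, the calls below create one socket of each kind (a sketch only; error handling is omitted and the raw socket usually requires administrative privileges):

#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
    /* stream socket: connection oriented transport (TCP) */
    int streamSock = socket(AF_INET, SOCK_STREAM, 0);
    /* datagram socket: connectionless transport (UDP) */
    int dgramSock  = socket(AF_INET, SOCK_DGRAM, 0);
    /* raw socket: direct access to the IP services (here for ICMP) */
    int rawSock    = socket(AF_INET, SOCK_RAW, IPPROTO_ICMP);
    return (streamSock < 0 || dgramSock < 0 || rawSock < 0);
}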
The two types of transport protocols commonly used for socket communication are TCP and UDP. Some of the important socket API functions are described below.
1. socket: This function is used to create a socket of a given family, type, and protocol. It returns the socket descriptor.
#include <sys/types.h>
#include <sys/socket.h>
int socket (int domain, int soc_type, int protocol);
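// "domain" is the protocol family (for example, AF_INET).
// "soc_type" is the socket type (stream, datagram, or raw).
// "protocol" selects the protocol within the family (0 for the default).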
2. bind: This function is used to associate the socket id with an address to which
other processes can connect.
#include <sys/types.h>
#include <sys/socket.h>
int bind (int socID, struct sockaddr *addrPtr, int
length);
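// "socID" is the descriptor of the socket to be bound.
// "addrPtr" is a pointer to the local address (IP address and port).
// "length" is the size of the address structure.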
3. listen: This function is used by the server to specify the maximum number of connection requests allowed; that is, the server queue size for connections from clients.
#include <sys/types.h>
#include <sys/socket.h>
int listen (int socID, int size);
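// "socID" is the descriptor of the listening socket.
// "size" is the maximum number of pending connection requests queued.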
4. accept: This function is used to identify the socket id and address of the client connecting to the server socket. It waits for an incoming request and, when one is received, creates a socket for it.
#include <sys/types.h>
#include <sys/socket.h>
int accept (int socID, struct sockaddr *addrPtr, int
*lengthPtr);
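// "socID" is the descriptor of the listening socket.
// "addrPtr" is filled with the address of the connecting client.
// "lengthPtr" holds the size of the client address structure.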
Iterative server: In this type, only one socket is opened by the server at a time. After completion of processing on that connection, the next connection is accepted. Having only a single connection usually leads to poor performance and utilization.
Forking server: In this type, after a connection is accepted from a client, a child process is forked off to handle the connection. Forking allows multiple clients to be connected to the server, with a separate forked process handling each client connection. If data sharing is required, then forking should be combined with multithreading.
Concurrent single server: In this type, the server can simultaneously wait on multiple open socket ids. The server process wakes up only when new data arrives from a client. This reduces the complexity of thread implementations (a sketch follows this list).
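One common way to realize the concurrent single server is the select() call, which lets a single process wait on the listening socket and all client sockets at once. The sketch below illustrates the idea (the helper function and its parameters are hypothetical, and error handling is omitted):

#include <sys/select.h>
#include <sys/socket.h>

void serve_with_select(int listenID, int *clientIDs, int numClients)
{
    fd_set readSet;
    int i, maxID = listenID;
    FD_ZERO(&readSet);
    FD_SET(listenID, &readSet);          /* watch for new connections */
    for (i = 0; i < numClients; i++) {
        FD_SET(clientIDs[i], &readSet);  /* watch every open client socket */
        if (clientIDs[i] > maxID)
            maxID = clientIDs[i];
    }
    /* the server process sleeps here and wakes up only when data arrives */
    select(maxID + 1, &readSet, NULL, NULL, NULL);
    for (i = 0; i < numClients; i++) {
        if (FD_ISSET(clientIDs[i], &readSet)) {
            /* recv() from clientIDs[i] and process the request */
        }
    }
    if (FD_ISSET(listenID, &readSet)) {
        /* accept() the new connection and add it to clientIDs */
    }
}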
5. Send: This function is used to send data. The return value of the function
indicates the number of bytes sent. The send function is used in connection
oriented implementation.
#include <sys/types.h>
#include <sys/socket.h>
int send (int socID, const char *dataPtr, int length,
int flag);
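// "socID" is the descriptor of the socket on which data is sent.
// "dataPtr" is a pointer to the data and "length" is the number of bytes.
// "flag" allows the caller to control details.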
6. Recv: This function is used to receive data. The return value of the function
indicates the number of bytes received. The receive function is used in con-
nection oriented implementation.
#include <sys/types.h>
#include <sys/socket.h>
int recv (int socID, char *dataPtr, int length, int flag);
// “socID” is the descriptor of the socket from which the data is to be received.
// “dataPtr” is a pointer to the data.
// “length” is the buffer size for data received.
// “flag” allows the caller to control details.
7. Sendto: This function is used to send data. The return value of the function
indicates the number of bytes sent. The sendto function is used in a connec-
tionless protocol implementation.
#include <sys/types.h>
#include <sys/socket.h>
int sendto (int socID, const void *dataPtr, size_t
dataLength, int flag, struct sockaddr *addrPtr,
socklen_t addrLength);
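// "socID" is the descriptor of the socket used to send the datagram.
// "dataPtr" points to the data and "dataLength" is its size in bytes.
// "flag" allows the caller to control details.
// "addrPtr" and "addrLength" give the destination socket address.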
8. Recvfrom: This function is used to receive data. The return value of the func-
tion indicates the number of bytes received. The recvfrom function is used in
connectionless protocol implementation.
#include <sys/types.h>
#include <sys/socket.h>
int recvfrom (int socID, void *dataPtr, int dataLength,
int flag, struct sockaddr *addrPtr, int *addrLength);
9. shutdown: This function is used to disable further send operations, receive operations, or both on a socket.
#include <sys/types.h>
#include <sys/socket.h>
int shutdown (int socID, int flag);
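// "socID" is the descriptor of the socket to be shut down.
// "flag" selects which direction to disable: receive, send, or both.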
10. Connect: This function is used by the client to connect to a server that is
listening for connection.
#include <sys/types.h>
#include <sys/socket.h>
int connect (int socID, struct sockaddr *addrPtr, int
addrlen);
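// "socID" is the descriptor of the client socket.
// "addrPtr" is a pointer to the server socket address.
// "addrlen" is the size of the server address structure.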
11. close: This function is used to close connection corresponding to the socket
descriptor and free the socket descriptor.
#include <sys/types.h>
#include <sys/socket.h>
int close (int sockID);
The server can use only one well-known port at a time for communication, but several client connections may be open at the same time, which requires the presence of several ports. To overcome this problem, the server issues several ephemeral ports along with the well-known port. The connection made to the well-known
(Figure, not reproduced here: TCP socket call sequence. The server calls socket, bind, listen, and accept, then exchanges data with send and receive; the client calls socket and connect, then exchanges data; both sides close the connection.)
port is assigned to the temporary port so that the data transfer takes place between
the temporary port at the client site and the temporary port at the server site. This
helps to free the well-known port so that the server can make a connection with
another client.
/* ==================================== *
 * Connection Oriented (TCP) Server Program *
 * ==================================== */
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#define SELECTED_PORT 5000   /* example port to use for listening */
int main(void)
{
int responseListenID;
int responseAcceptID;
int pid;
socklen_t clientAddrLen;
struct sockaddr_in serverAddr;
struct sockaddr_in clientAddr;
/* Create Socket */
responseListenID = socket(AF_INET, SOCK_STREAM, 0);
memset (&serverAddr, 0, sizeof (serverAddr));
/* define port (SELECTED_PORT) to use for listening */
serverAddr.sin_family = AF_INET;
serverAddr.sin_port = htons(SELECTED_PORT);
serverAddr.sin_addr.s_addr = htonl(INADDR_ANY);
/* Bind the socket */
bind (responseListenID, (struct sockaddr *)&serverAddr, sizeof (serverAddr));
/* Listen for connect request from client */
listen (responseListenID, 1);
clientAddrLen = sizeof(clientAddr);
/* infinite "for" loop to accept connections from client */
for (; ;)
{
/* accept connection from client */
responseAcceptID = accept(responseListenID,
    (struct sockaddr *)&clientAddr, &clientAddrLen);
/* fork to handle each client request as a separate process */
pid = fork();
if (pid > 0)
{
/* this is the parent process of the fork operation */
close(responseAcceptID);
continue;
}
else
{
/* this is the child process and it handles the specific client */
close(responseListenID);
/* Implement read and write, for communication with client */
close(responseAcceptID);
return 0;
} /* if-else ends here */
} /* for loop ends here */
}
/* ==================================== *
 * Connection Oriented (TCP) Client Program *
 * ==================================== */
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#define DATABUF 1024
#define SELECTED_PORT 5000   /* port the server is listening on */
int main(void)
{
int commSocket;
struct sockaddr_in hostAddr;
char buf[DATABUF];
struct hostent *hptr;
/* Create Socket */
commSocket = socket (AF_INET, SOCK_STREAM, 0);
memset (&hostAddr, 0, sizeof(hostAddr));
hostAddr.sin_family = AF_INET;
/* define port (SELECTED_PORT) to use for the connection */
hostAddr.sin_port = htons(SELECTED_PORT);
/* resolve the domain name (address) of the server */
hptr = gethostbyname ("address");
memcpy((char *)&hostAddr.sin_addr.s_addr, hptr->h_addr_list[0],
    hptr->h_length);
/* connect to server */
connect (commSocket, (struct sockaddr *)&hostAddr, sizeof(hostAddr));
memset (buf, 0, DATABUF);
/* Loop to send and receive information */
while (fgets(buf, DATABUF, stdin))
{
/* Implement read and write, for communication with server */
}
close (commSocket);
return 0;
}
(Figure, not reproduced here: UDP socket call sequence. The server and the client each create a socket and bind, exchange datagrams, and then close.)
/* ==================================== *
 * Connectionless (UDP) Server Program *
 * ==================================== */
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#define DATABUF 1024
int main(void)
{
int socketID;
char buf[DATABUF];
struct sockaddr_in serverAddr, clientAddr;
socklen_t clientAddrLen = sizeof(clientAddr);
/* Create Socket */
socketID = socket (AF_INET, SOCK_DGRAM, 0);
memset (&serverAddr, 0, sizeof (serverAddr));
serverAddr.sin_family = AF_INET;
serverAddr.sin_port = htons(5000);        /* example well-known port */
serverAddr.sin_addr.s_addr = htonl(INADDR_ANY);
/* Bind the socket to the well-known port */
bind (socketID, (struct sockaddr *)&serverAddr, sizeof(serverAddr));
for (; ;)
{
/* receive a datagram and reply to the sender with sendto() */
recvfrom (socketID, buf, DATABUF, 0,
    (struct sockaddr *)&clientAddr, &clientAddrLen);
}
}
/* ==================================== *
 * Connectionless (UDP) Client Program *
 * ==================================== */
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#define DATABUF 1024
int main(void)
{
int socketID;
char buf[DATABUF];
socklen_t remoteAddrLen;
struct sockaddr_in remoteAddr;
struct hostent *hptr;
/* Create Socket */
socketID = socket (AF_INET, SOCK_DGRAM, 0);
memset (&remoteAddr, 0, sizeof (remoteAddr));
remoteAddr.sin_family = AF_INET;
remoteAddr.sin_port = htons(5000);        /* example server port */
hptr = gethostbyname ("address");         /* resolve the server name */
memcpy(&remoteAddr.sin_addr.s_addr, hptr->h_addr_list[0], hptr->h_length);
remoteAddrLen = sizeof(remoteAddr);
while (fgets(buf, DATABUF, stdin))
{
/* send the datagram and receive the reply with recvfrom() */
sendto (socketID, buf, strlen(buf), 0,
    (struct sockaddr *)&remoteAddr, remoteAddrLen);
}
close (socketID);
return 0;
}
26.9 Conclusion
Most legacy NMS developments are client–server based, and these solutions usually use socket communication for interaction. In the client–server model, UDP is used as the transmission mechanism only when intermittent communication is involved; when a continuous, reliable connection is required, TCP is used.
This chapter gives the reader a basic understanding of socket programming
by introducing the reader to the basic terminologies. The sample code given in
this chapter is intended to bridge the gap between theory and implementation, by
first explaining the program and then supplementing the discussion with code.
Comments have been added in the code, so that the reader can easily follow the
program execution.
Additional Reading
1. Lincoln D. Stein. Network Programming with Perl. United Kingdom: Addison-Wesley
Professional, 2001.
2. W. Richard Stevens, Bill Fenner, and Andrew M. Rudoff. Unix Network Programming,
Volume 1: The Sockets Networking API. 3rd ed. United Kingdom: Addison-Wesley
Professional, 2003.
3. Elliotte Rusty Harold. Java Network Programming. 3rd ed. California: O’Reilly Media
Inc., 2009.
4. Michael Donahoo and Kenneth Calvert. TCP/IP Sockets in C: Practical Guide for
Programmers. San Francisco, CA: Morgan Kaufmann, 2000.
5. Richard Blum. C# Network Programming. Sybex, California: Sybex, 2002.
Chapter 27
RPC Programming
This chapter is about remote procedure call (RPC) programming. It introduces the reader to the concept of writing RPC client–server code. Remote procedure call is a powerful technology for creating distributed client–server programs. The RPC runtime libraries manage most of the details relating to network protocols and communication. With RPC, a client can connect to a server running on another platform; for example, the server could be written for Linux and the client could be written for Win32. RPC is intended for distributed environments.
27.1 Introduction
Programs communicating over a network need a paradigm for communication. RPC is used in a distributed environment where communication is required between heterogeneous systems. It follows the client–server model of communication. In remote procedure call based communication, a client makes a procedure call to send a request to the server. The arrival of the request causes the server to dispatch a routine, perform the service the client has requested, and send back the response to the client.
The machine implementing the set of network services is called the server, and the machine that requests the service is the client. The server can support more than one version of a remote program. Control moves through two processes in the remote procedure call model: the caller process and the server process. The caller process sends a message containing the procedure parameters to the server process and waits for a reply message that contains the procedure results. On arrival of the result, the caller process resumes execution. The server side process is dormant until it gets a call message. When the server process gets a request, it
will process it and send a response. It can be seen that only one of the two processes is active at any given time in the RPC model.
The RPC call model can support both synchronous and asynchronous calls. Asynchronous calls permit the client to perform useful work while waiting for the reply from the server. To ensure effective utilization of the server, the server can create a task to process an incoming request, so that it remains free to receive other requests. A collection of one or more remote programs, implementing one or more remote procedures, where the procedures, their parameters, and their results are documented in the protocol specification of the program, is called an RPC service. The network client initiates remote procedure calls to access the services.
In order to be forward compatible with changing protocols, a server may support more than one version of a remote program. In the local model, the caller places the arguments to a procedure in a well-specified location and transfers control to the procedure. When control returns to the caller, it extracts the results of the procedure from the well-specified location and continues execution. This chapter gives an overview of RPC with emphasis on programming. XDR (eXternal Data Representation) is used in the examples as the presentation layer to wrap data in a format that both the client and the server can understand, since the RPC model involves data transfer between heterogeneous systems.
processing until either a reply is received or the request times out and the client
program continues as soon as the RPC call is completed.
A remote procedure is identified by a combination of program number, version
number, and procedure number. In a group of related remote procedures, each
remote procedure is identified by a unique procedure number in the program. The
program is further identified using the program number, which is unique. Each
version of a program consists of a collection of procedures that are available to be
called remotely. Multiple versions of an RPC protocol can be made available using
the version numbers.
In order to develop an RPC application, first the protocol for the client–server
communication needs to be defined. This is followed by the development of the
client program and the server program. The programming aspects to be looked into
for RPC development are discussed in this chapter.
primarily by the client RPC layer to match replies to requests and the client application
may choose to reuse its previous transaction ID when retransmitting a request.
The server cannot examine this ID in any other way except as a test for equality.
A server takes advantage of the transaction ID that is packaged with every RPC
request to ensure that the procedure is executed at least once. The transaction ID
can also ensure that a previously granted request from a client does not get granted
again. If the application runs on a reliable transport, it can infer that the procedure
was executed exactly once from the reply message. If no reply message is received, it
cannot assume the remote procedure was not executed. An application needs tim-
eouts and reconnection to handle server crashes even with a connection oriented
protocol like TCP.
27.3.2 Presentation
The XDR is a popular presentation layer in implementing RPC applications. XDR
can be used to transfer data between diverse computer architectures and enables
communication between diverse machines like Sun Solaris, VAX, or AIX from
IBM. For communication between heterogeneous systems, a common format that
is understood by both systems is required. XDR enables RPC to handle arbitrary
data structures, in a machine independent manner regardless of the byte order or
structure layout conventions used. This machine dependency is eliminated by con-
verting the data structures to XDR before sending (see Figure 27.2).
Using XDR, any program running on any machine can create portable data
by translating its local representation into the XDR representation and vice versa.
Data conversion with XDR involves two tasks: encoding, in which the sender translates data from its local representation into the XDR representation before transmission, and decoding, in which the receiver translates the XDR representation back into its local representation. Intricate data formats can also be described using the XDR language. Using a data description language like XDR provides less ambiguous descriptions of data.
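As a minimal illustration (this is not part of the book's example programs), the following C fragment encodes an integer into an XDR memory stream and decodes it back using the xdrmem_create and xdr_int routines from the classic Sun RPC library:

#include <rpc/rpc.h>   /* XDR types and routines */
#include <stdio.h>

int main(void)
{
    char buf[4];                  /* XDR encodes an int in 4 bytes, big-endian */
    XDR enc, dec;
    int value = 42, decoded = 0;

    xdrmem_create(&enc, buf, sizeof(buf), XDR_ENCODE);
    xdr_int(&enc, &value);        /* local representation -> XDR stream */

    xdrmem_create(&dec, buf, sizeof(buf), XDR_DECODE);
    xdr_int(&dec, &decoded);      /* XDR stream -> local representation */

    printf("decoded = %d\n", decoded);
    return 0;
}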
The client program written by the programmer makes what appear to be local procedure calls to the client skeletons, which in turn invoke the remote program. The rpcgen compiler allows programmers to mix low-level code with high-level code. After writing the server procedures, the main program can be linked with the server skeletons to produce an executable server program. Handwritten routines can easily be linked with the rpcgen output.
The client skeleton interfaces with the RPC library and hides the network from its caller. The server skeleton hides the network from the server procedures that are to be invoked by remote clients. The most important file for conversion when using
rpcgen is the protocol definition file. This file has a description of the interface of
the remote procedures. It also maintains function prototypes and definition of any
data structures used in the calls. These definitions will include both argument types
and return types that need to be converted. The protocol definition file can also
include “C” (programming language) code shared by client and server.
/* ===================
Before Conversion
=================== */
type1 PRO1(operands1)
type2 PRO2(operands2)
/* ===================
After Conversion
Client Stub
=================== */
/* ===================
After Conversion
Server Procedure
=================== */
/* ==============
Example
============== */
program TEST_PROG
{
version TEST_VERSION
{
type1 PRO1(operands1) = 1;
type2 PRO2(operands2) = 2;
} = 3;
} = 2000055;
// In this example:
// >> the program with name "TEST_PROG" has program number 2000055
// >> the version with name "TEST_VERSION" has version number 3
// >> the procedure with name "PRO1" has procedure number 1
27.3.6 Broadcasting
In broadcast RPC-based protocols, the client sends a broadcast packet to the network and waits for numerous replies. Broadcast RPC uses unreliable, packet-based protocols like UDP as its transport. Servers that support broadcast protocols respond only when the request is successfully processed; they remain silent when errors occur. Broadcast RPC uses the port mapper to convert RPC program numbers into port numbers.
While normal RPC expects one answer, broadcast RPC expects many answers, one from each responding machine. A major issue with broadcast RPC is that when there is a version mismatch between the broadcaster and a remote service, the broadcast RPC never detects the error.
Also services need to first register with the port mapper to be accessible using
the broadcast RPC mechanism. Broadcast requests are limited in size to the maxi-
mum transfer unit (MTU) of the local network.
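As a hedged sketch (not the book's code), a broadcast call can be issued with the clnt_broadcast routine of the classic Sun RPC library; the program, version, and procedure numbers below are placeholders, and exact prototypes vary slightly between platforms:

#include <rpc/rpc.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <stdio.h>

#define PROG 0x20000001   /* hypothetical program number   */
#define VERS 1            /* hypothetical version number   */
#define PROC 1            /* hypothetical procedure number */

/* Called once for every reply; return 1 to stop waiting, 0 to keep going */
static bool_t each_reply(caddr_t out, struct sockaddr_in *addr)
{
    printf("reply from %s\n", inet_ntoa(addr->sin_addr));
    return 0;
}

void probe_network(void)
{
    enum clnt_stat stat;
    stat = clnt_broadcast(PROG, VERS, PROC,
                          (xdrproc_t)xdr_void, NULL,   /* no arguments */
                          (xdrproc_t)xdr_void, NULL,   /* no results   */
                          (resultproc_t)each_reply);
    if (stat != RPC_SUCCESS && stat != RPC_TIMEDOUT)
        fprintf(stderr, "broadcast failed\n");
}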
27.3.7 Batching
Batching allows a client to send a sequence of call messages to a server. It uses reliable
byte stream protocols like TCP for transport. The client never waits for a reply from
the server, and the server does not send replies to batch requests. A sequence of batch
calls is usually terminated by an RPC that flushes the pipeline. Since the server does not
respond to every call, the client can generate new calls in parallel, with the server
executing previous calls. In addition, the reliable connection oriented implementa-
tion can buffer many calls and send them to the server in a single write system call.
The RPC architecture is designed so that a client sends a call to a server and
waits for a reply that the call has succeeded. Clients do not compute while serv-
ers are processing a call, which is inefficient if the client does not want or need an
acknowledgment for every message sent. The RPC batch facilities make it possible
for clients to continue computing while waiting for a response.
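As a hedged sketch of the idea (not the book's code), batched calls are typically issued on a TCP client handle with a NULL result routine and a zero timeout, and the pipeline is flushed by one ordinary call that does wait for a reply; the procedure numbers and argument type used here are assumptions:

#include <rpc/rpc.h>

enum { PROC_LOG = 1, PROC_FLUSH = 2 };   /* hypothetical procedure numbers */

void send_batch(CLIENT *clnt, char **msgs, int n)
{
    struct timeval zero = {0, 0}, flush_wait = {25, 0};
    int i;

    /* Queue the calls; no reply is expected for any of them */
    for (i = 0; i < n; i++)
        clnt_call(clnt, PROC_LOG,
                  (xdrproc_t)xdr_wrapstring, (caddr_t)&msgs[i],
                  (xdrproc_t)NULL, NULL, zero);

    /* Flush the pipeline with a normal call that waits for a reply */
    clnt_call(clnt, PROC_FLUSH,
              (xdrproc_t)xdr_void, NULL,
              (xdrproc_t)xdr_void, NULL, flush_wait);
}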
27.3.8 Authentication
Identification is the means to present or assert an identity that is recognizable to the
receiver. Authentication provides the actions to verify the truth of the asserted iden-
tity. Once the client is identified and verified, access control can be implemented.
Access control is the mechanism that provides permission to allow the requests
made by the user to be granted, based upon the user’s authentic identity. Access
control is not provided in RPC and must be supplied by the application. Several
different authentication protocols are supported in RPC. A field in the RPC header
indicates what protocol is being used. The RPC protocol provides the fields neces-
sary for a client to identify itself to a service and vice versa. The call message has
two authentication fields, the credentials and verifier. The reply message has one
authentication field, the response verifier.
The RPC protocol specification defines all three fields as the following opaque
type:
enum auth_flavor {
AUTH_NONE = 0,
AUTH_UNIX = 1,
AUTH_SHORT = 2,
AUTH_DES = 3,
AUTH_KERB = 4
/* and more to be defined */
};
struct opaque_auth {
auth_flavor flavor;
.....
};
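As a minimal sketch (assuming the classic Sun RPC client library; the host name and program number are placeholders), AUTH_UNIX credentials can be attached to a client handle as follows; a freshly created handle starts out with AUTH_NONE:

#include <rpc/rpc.h>

CLIENT *make_authenticated_client(void)
{
    CLIENT *clnt = clnt_create("mgmt-server", 0x20000001, 1, "tcp");
    if (clnt == NULL)
        return NULL;

    auth_destroy(clnt->cl_auth);               /* drop the default AUTH_NONE */
    clnt->cl_auth = authunix_create_default(); /* uid/gid based credentials  */
    return clnt;
}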
The external data representation (XDR) is a data abstraction needed for machine
independent communication. The client and server need not be machines of the
same type. Simple NULL-terminated strings can be used for passing and receiving
the directory name and directory contents. The program, procedure, and version
numbers for client and servers can be set using rpcgen or relying on predefined
macros in the simplified interface.
Program numbers are defined in a standard way: the range 0x00000000 to 0x1fffffff is administered by Sun and reserved for well-known services, 0x20000000 to 0x3fffffff is available for user-defined applications, 0x40000000 to 0x5fffffff is for transient (dynamically assigned) programs, and the remaining ranges are reserved for future use. The files involved in building an RPC application are:
Client proxy file and server stub file: These can be generated using protocol compilers, midl.exe on Windows and rpcgen on Unix.
Interface header: Also generated by the protocol compiler from the interface file. Include this header in both the client and server code.
Client and server code: Written by the programmer to create a handle for communication, register procedures, and so on.
XDR file: Unix uses XDR (external data representation) in the presentation layer for RPC. This file is also created by the protocol compiler.
2. First let us create the interface file (msg.x) and run the protocol compiler on it:
>>rpcgen -N -a msg.x
Output files:
-rw-r--r-- 1 jithesh root 210 Aug 16 23:38 msg.x // IDF
-rw-r--r-- 1 jithesh root 597 Aug 16 23:38 msg_xdr.c // XDR File
-rw-r--r-- 1 jithesh root 4398 Aug 16 23:38 msg_svc.c // Server stub
-rw-r--r-- 1 jithesh root 671 Aug 16 23:38 msg_clnt.c // Client Proxy
-rw-r--r-- 1 jithesh root 606 Aug 16 23:38 msg.h // IDF header
getstate_1_arg1.nodeNumber = 3;
switch(result_1->nodeState)
{
case 1: printf("Node State is Active \n");break;
case 2: printf("Node State is Busy \n");break;
default: printf("Node State is Faulty \n");break;
}
6. Compile/linking:
/usr/local/bin/gcc -c msg_xdr.c
/usr/local/bin/gcc msg_client.c msg_clnt.c msg_xdr.o -o client -lnsl
/usr/local/bin/gcc msg_server.c msg_svc.c msg_xdr.o -o server -lnsl
Output:
-rwxr-xr-x 1 jithesh root 8096 Aug 17 00:20 client*
-rwxr-xr-x 1 jithesh root 11720 Aug 17 00:20 server*
7. Some points to be noted:
Just as in socket programming, connection handling is involved here as well.
We need to create a handle, register the RPC, and destroy the handle after use. The protocol compiler generates code for this, or the programmer can write the code.
The programmer should also call xdr_free to release memory, though this is not mandatory for the program to execute.
8. Testing
# ./server
# ./client 122.102.201.11
Node State is Active
27.7 Conclusion
Remote procedure call defines a powerful technology for creating distributed client-
server programs. The RPC runtime libraries manage most of the details relating to
network protocols and communication. With RPC, a client can connect to a server
running on another platform. For example, the server could be written for Linux
and the client could be written for Win32. This makes RPC programming different
from normal socket based communication.
It should be noted that RPC is not commonly used when working with Web pages over the Internet. One of the most important concepts to understand about
the Web services framework (WSF) is that it is not a distributed object system.
Web services communicate by exchanging messages, more like JMS than RMI.
The WSF doesn’t support remote references, remote object garbage collection, or
any of the other distributed object features developers have come to rely upon in
RMI, CORBA, DCOM, or other distributed object systems. Web communication
deals with a browser type of client process and a Web server type of server process (socket programming). RPC, in contrast, is designed for distributed environments. This concluding
paragraph is intended to give the reader an understanding on how RPC is different
from socket programming that is discussed in the preceding chapter and Web
communication that will be discussed in the next chapter.
Additional Reading
1. Per Brinch Hansen. The Origins of Concurrent Programming: From Semaphores to Remote
Procedure Calls. New York: Springer, 2002.
2. John Bloomer. Power Programming with RPC. California: O’Reilly Media, Inc., 1992.
3. David Gunter, Steven Burnett, Gregory L. Field, Lola Gunter, Thomas Klejna, Shankar
Lakshman, Alexia Prendergast, Mark C. Reynolds, and Marcia E. Roland. Client/Server
Programming With RPC and DCE. United Kingdom: Que, 1995.
4. Guy Eddon. RPC for NT Building. United Kingdom: CMP, 2007.
Chapter 28
Web Communication
28.1 Introduction
The World Wide Web (WWW) is a flexible, portable, user-friendly service that is
a repository of the information spread all over the world yet linked together using
the Internet framework. The World Wide Web was initiated by CERN (European
Laboratory for Particle Physics) for handling the distributed resources for scientific
research. It has now developed into a distributed client-server service where a client
can access the server service using a browser. Web sites are the locations where the
services are distributed.
Information published on the Web or displayed on a Web client like a browser
is called a page. The Web page is a unit of hypertext or hypermedia available on
the Web. The hypertext stores the information as document sets (text) that are
linked through pointers. A link can be created between documents, so that the
user browsing through a document can move to another document by clicking the
link to another document. Hypermedia contains graphics, pictures, and sound.
Homepage is the main page for an individual or organization. A server can provide
information on a topic in one or more Web pages. Being a single server, this is an
undistributed environment. Multiple servers can also provide the information on
[Figures: an HTTP transaction consists of a request message from the client and a response message from the server; the request message is made up of a request line (request type, URL of the form Method://Host:Port/Path, HTTP version), headers, a blank line, and an optional body.]
The request line contains three fields: the request type categorizes the request into various methods, the URL is a standard
for specifying any information on the Internet facilitating document access from
the World Wide Web, and version indicates the HTTP version used. Version 1.1
is the latest version of HTTP. The older versions of HTTP like the 1.0 and 0.9 are
also in use.
The URL is not limited to use in HTTP. It is a Web address defining where a
resource is available. The URL in the request line defines the protocol (method) used to retrieve the document, the host on which the document resides, the optional port number of the server, and the path and name of the file on the host.
The request type field present in the request message defines different messages
(methods). Some of these methods are:
◾◾ GET: The GET method is the main method for retrieving a document from
the server. When a GET request is issued by the client, the server responds
with the document contents in the body of the response message.
◾◾ POST: Information can be provided by the client to the server using the
POST method.
◾◾ HEAD: By using HEAD method it is possible to get information about a doc-
ument. The server response to the HEAD method does not contain a body.
◾◾ LINK: Document can be linked to another location by specifying the file
location in the URL part of the request line and the destination location in
the entity header using the LINK method.
◾◾ UNLINK: Links created by the LINK method are deleted by the UNLINK
method.
◾◾ PATCH: This method is a request for a new or replacement document. The
PATCH method contains the list of differences that should be implemented
in the existing file.
◾◾ MOVE: File location can be moved by specifying the source file location in
the URL part of the request line and the destination location in the entity
header using the MOVE method.
◾◾ OPTION: Information on the available options is requested by the client to
the server using the OPTION method.
◾◾ DELETE: Document on the server can be deleted using the DELETE
method.
◾◾ COPY: A file can be copied to another location by specifying the source file
location in the URL part of the request line and the destination location in
the entity header using the COPY method.
The HTTP response message (see Figure 28.3) has a status line that includes HTTP
version, status code, and the status phrase. The HTTP version in the status line and
the request line are the same. A three digit code representing the status is the status
code and the status code is explained in text form by the status phrase.
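As a minimal sketch (not tied to the book's server example later in this chapter), the following C fragment shows the shape of a request line and headers, and prints the status line of the reply; it assumes "sock" is a TCP socket already connected to port 80 of the hypothetical host www.example.com:

#include <stdio.h>
#include <string.h>
#include <unistd.h>

void fetch(int sock)
{
    const char *request =
        "GET /index.html HTTP/1.1\r\n"   /* request type, path, version */
        "Host: www.example.com\r\n"      /* request header              */
        "Connection: close\r\n"
        "\r\n";                          /* blank line ends the headers */
    char reply[512];
    ssize_t n;

    write(sock, request, strlen(request));
    n = read(sock, reply, sizeof(reply) - 1);
    if (n > 0) {
        reply[n] = '\0';
        /* The first line of the reply is the status line, e.g. "HTTP/1.1 200 OK" */
        printf("%.*s\n", (int)strcspn(reply, "\r\n"), reply);
    }
}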
Information can also be exchanged between the client and the server using head-
ers. The client can request the document be sent in a special format or the server can
send additional information about the document using headers. One or more header
lines make up the header and each header line consist of the header name, a colon, a
space, and a header value. A header line belongs to the following four categories:
◾◾ General header: Gives general information about the message and can be present in both request and response messages.
◾◾ Request header: The client’s configuration and its preferred document format
are specified by the request header. The request header is present only in the
request message.
◾◾ Response header: The server’s configuration and its special information about
the request are specified by the response header. The response header is pres-
ent only in the response message.
◾◾ Entity header: Information about the document body is provided by the
entity header. It is usually present in response messages. Request messages that have a body, such as those using the POST method, can also use this header.
The request messages contain only general header, request header, and entity
header and the response messages contain only general header, response header,
and entity headers.
An important aspect of HTTP interaction is the proxy server. It is a server that
keeps the copies of responses to recent requests. When a request is received from
the HTTP client, the proxy server first checks its cache and sends the request to
the corresponding server only if the response is not stored in the cache. This helps
to reduce the traffic and the load on the original server as well as to reduce latency. The proxy server collects the incoming response and stores it for future
requests from other clients. The client can use the proxy server only if it is config-
ured to access the proxy.
The HTTP connection can be a persistent or nonpersistent connection. Let us
discuss the difference between these connections and the HTTP version that sup-
ports the specific connection.
◾◾ Static documents: The documents that are created and stored inside the server
are called static documents. The server can change the document contents
but the user cannot change it. A document copy is sent to the client through
document access and the document is displayed using a browsing program
by the user.
◾◾ Dynamic documents: The Web server runs an application program for the
creation of a dynamic document on the arrival of a request. Here the server
checks the URL to find if it defines a dynamic document. If so, the pro-
gram is executed by the server and the program output is sent to the client.
These documents do not exist in a predefined format. The program output
is returned to the browser that requested the document as a response. The
dynamic document content differs from one request to another, as a fresh document is created for each request.
◾◾ Active documents: Programs that must be run at the client site come under active documents. A program that creates animated graphics on the screen, or one that interacts with the user, must be run at the client site where the animation or interaction takes place. Active documents are not run on the server; the server only stores the active document in the form of a binary document.
Next let us look into the programming aspect of each of these document types.
Some of the HTML tags used in a static (HTML) document are:
Image: <IMG> </IMG> defines an image
Hyperlink: <A> </A> defines an address (hyperlink)
Executable contents: <APPLET> </APPLET> indicates that the document is an applet
The common gateway interface (CGI) is a set of standards that defines how a dynamic document should be written, how data should be supplied to the program, and how the output result should be used.
The term common in CGI indicates that the set of rules are common to any
language like C, C++, Unix Shell, or Perl. The term gateway in CGI indicates
that a CGI program is a gateway used to access other resources like databases
and graphic packages. The term interface in CGI indicates that there exists a
set of predefined terms like variables and cells that can be used in any CGI
program. Codes that are written in one of the languages that support CGI are
known as a CGI program. A programmer can write a simple CGI program by
encoding a sequence of thoughts in a program using the syntax of any of the
languages specified above.
3. Programming for active documents
The active documents are compressed at the server site so that when the client requests the active document, the server sends a document copy in the form of byte code. The document is then decompressed and run at the client site. The conversion to the binary form saves the
bandwidth and transmission time. The client can also store the retrieved
document so that it can run the document again without making a request
to the server.
The steps from the creation to the execution of an active document are:
The programmer writes the program to be run on the client.
The program is compiled and the binary code is created.
The compressed document is stored in a file at the server.
The client sends a request for the program.
The requested copy of the binary code is transported in a compressed
form from the server to the client.
The client changes the binary code into executable code.
The client links all the library modules and prepares it for execution.
The program is run and the result is presented by the client.
Java is an object-oriented language that is popular for developing Applet pro-
grams to be run on the client as active documents. Java has a class library
allowing the programmer to write and use an active document. An active
document written in Java is known as an Applet. Java is a combination of
high level programming language, a run time environment, and a class
library allowing the programmer to write an active document that can be run
on a browser. It should be noted that a Java program does not necessarily need a browser and can be executed as a stand-alone program.
Private data, public methods, and private methods can be defined in the
Applet. An instance of this applet is created by the browser. The private meth-
ods or data are invoked by the browser using the public methods defined
in the Applet. In order to execute the Applet, first a Java source file is cre-
ated using an editor. From the file, the byte code is then created by the Java
compiler.
1. At the base of an HTTP server is a TCP server. The server has to be multithreaded, so an array list of sockets is declared to handle multiple connections.
2. Every HTTP server has an HTTP root, which is a path to a folder on the
hard disk from which the server will retrieve Web pages. This path needs
to be set. Then initialize the array list of sockets and start the main server
thread.
while(true)
{
    Socket socketHandler = tcpListener.AcceptSocket();
    if (socketHandler.Connected)
    {
        listSockets.Add(socketHandler);
        ThreadStart thdstHandler =
            new ThreadStart(handlerThread);
        Thread thdHandler =
            new Thread(thdstHandler);
        thdHandler.Start();
    }
}
4. The first task this thread must perform, before it can communicate with the
client, is to retrieve a socket from the top of the ArrayList. Once this socket
has been obtained, it can then create a stream to this client by passing the
socket to the constructor of a NetworkStream. A StreamReader is used to
read from the incoming NetworkStream. The incoming line is assumed to be an HTTP GET in this example; the same approach can be extended to handle HTTP POST and other methods.
The physical path also needs to be resolved. It can be read from disk and
sent out on the network stream. Then the socket is closed. The response file-
Contents needs to be modified to include the HTTP headers so that the cli-
ent can understand how to handle the response.
StreamReader reader;
NetworkStream networkStream =
new NetworkStream(socketHandler);
reader = new StreamReader(networkStream);
streamData = reader.ReadLine();
input = streamData.Split(" ".ToCharArray());
// The input line is assumed to be: GET <some URL path> HTTP/1.1
// Parse the filename using input
// Add the HTTP root path
fileName = path + fileName;
FileStream fs = new FileStream(fileName, FileMode.
OpenOrCreate);
fs.Seek(0, SeekOrigin.Begin);
byte[] fileContents = new byte[fs.Length];
fs.Read(fileContents, 0, (int)fs.Length);
fs.Close();
// Modify fileContents to include HTTP header.
socketHandler.Send(fileContents);
socketHandler.Close();
}
28.5 Conclusion
In a typical client-server model, each application has its own client program that has to be separately installed on the client machine. Only after the client application is installed on the machine is the user interface available. Any upgrade of the client application would require an upgrade of the solution running on each individual user workstation as well. This results in a huge maintenance cost and decreased productivity. Also, with the widespread use of the Internet, there is an increased demand for
making applications available anytime and anywhere.
Additional Reading
1. Ralph F. Grove. Web-Based Application Development. Sudbury, MA: Jones & Bartlett
Publishers, 2009.
2. Leon Shklar and Rich Rosen. Web Application Architecture: Principles, Protocols and
Practices. 2nd ed. New York: Wiley, 2009.
3. Susan Fowler and Victor Stanwick. Web Application Design Handbook: Best Practices for
Web-Based Software. San Francisco, CA: Morgan Kaufmann, 2004.
4. Ralph Moseley. Developing Web Applications. New York: Wiley, 2007.
5. Wendy Chisholm and Matt May. Universal Design for Web Applications: Web Applications
That Reach Everyone. California: O’Reilly Media, Inc., 2008.
Chapter 29
Mail Communication
This chapter is about mail communication. It introduces the reader to the
concept of writing programs for mail service. It is quite common in current telecom
management applications to have alerts and summary reports sent by mail from
the EMS, NMS, and OSS/BSS solutions to the concerned operator or business
manager. SMTP (simple mail transfer protocol) has been used as the mail service
protocol to explain the concepts of communication by mail.
29.1 Introduction
The standard mechanism for electronic mailing on the Internet for sending a single
message that includes text, video, voice, or graphics to one or more recipients is
known as simple mail transfer protocol or SMTP. The electronic message called
mail has an envelope containing the sender address, the receiver address, along
with other information. The message has a header part and a body. The header has
information on destination and source, while the body of the message holds the
content to be read by the recipient.
The header has:
◾◾ The sender address: Details of the address from which the message is sent.
◾◾ The receiver address: Details of the recipient address to which the message is
to be sent.
◾◾ Message subject: An identifier for the message.
An Internet user with an e-mail ID receives mail in a mailbox that is periodically checked by the e-mail system, which informs the user with a notice. If the
user is ready to read the mail, a list containing information about each mail, such as the sender address, the subject, and the time the mail was sent or received, is displayed. The user can select a message of choice and the contents of the message are displayed on the screen.
E-mail addresses are required for delivering mail. The addressing system used
by SMTP consists of the local part and the domain name that are separated by an
@ symbol (local_identification@domain_name). Some of the terminologies that
the reader will encounter in this chapter are:
◾◾ User agent: The user mailbox stores the received mails for a user for retrieval
by the user agent.
◾◾ Email address: The name of the user mailbox is defined in the local part of
the email address.
◾◾ Mail exchangers: These are the hosts that receive and send e-mail. The domain name assigned to each mail exchanger may be recorded in a DNS database or may be a logical name.
This chapter has details on the user agent, mail transfer agents, and the mail
delivery. SMTP is used for explaining the concepts. Perl programming language
is used to show a programming example on implementing mail communication.
Standard SMTP libraries are available in most of the high level programming
languages. These libraries make client and server programming for mail communi-
cation easy. This chapter also uses an SMTP library in Perl programming language
to code the example program for mail communication.
Mail transfer with SMTP involves three phases:
1. Connection establishment
2. Message transfer
3. Connection termination
The SMTP server starts the connection phase after the client has established a
TCP connection to the well-known port 25. Exchange of a single message takes
place between the sender and one or more recipients after the establishment of
a connection between the SMTP client and server. The connection between the
SMTP client and server is terminated by the client after the successful transfer of
the message between them.
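As a minimal sketch in C (the book's own mail example later in this chapter uses Perl), the message transfer phase consists of a short command dialog; the host and addresses below are placeholders, and a real client must check the numeric reply code returned after each command:

#include <string.h>
#include <unistd.h>

static void smtp_send(int sock, const char *cmd)
{
    write(sock, cmd, strlen(cmd));   /* send one SMTP command line */
}

/* "sock" is assumed to be a TCP socket already connected to port 25 */
void send_mail(int sock)
{
    smtp_send(sock, "HELO client.example.com\r\n");
    smtp_send(sock, "MAIL FROM:<oss@example.com>\r\n");
    smtp_send(sock, "RCPT TO:<operator@example.com>\r\n");
    smtp_send(sock, "DATA\r\n");
    smtp_send(sock, "Subject: Alarm summary\r\n\r\nNode 3 is faulty.\r\n.\r\n");
    smtp_send(sock, "QUIT\r\n");
}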
The transfer of mails from the sender to the receiver involves the following
stages (see Figure 29.2):
a. Stage 1: The mail is transferred from the user agent that uses the SMTP client
software to the local server that uses the SMTP server software. There would
be another remote SMTP server as the connectivity to the remote server may
not be available at all times from the SMTP client. The mail is stored in the
local server before it is transferred to the remote server.
b. Stage 2: The local server now acts as the SMTP client to the remote server
that is now the SMTP server. Data needs to be sent from the local server to
the remote server that is now the SMTP server. This mail server receives the
mail and stores the mail in the user mailbox for later retrieval.
c. Stage 3: In this stage the mail access client uses mail access protocols like
POP3 (Post Office Protocol, version 3) or IMAP4 (Internet Mail Access
Protocol, version 4) to access the mail so as to obtain the mail.
29.3 Mail Protocols
a. Simple mail transfer protocol (SMTP): This protocol pushes the messages
to the receiver without the server requesting a transfer. So SMTP is a push-
based protocol. It is used to send data from client to a server where the mails
are stored. Some other access protocol is to be used by the recipient to retrieve
the messages from the mail server mailbox where it was stored.
b. Post office protocol, version 3 (POP3): This is a mail access protocol. The
recipient computer has client POP3 software and the mail server has the
server POP3 software. The client can access mail by downloading e-mail
from the mail server mailbox. A connection is opened between the user agent
or the client and the server on TCP port 110. The user name and password
are sent for authentication.
The delete mode and the keep mode are the two modes of POP3. In the delete mode, the mails are deleted from the mailbox after retrieval; they are stored on the user's local workstation, where the user can save and organize the received mails after reading or replying. In the keep mode, the mails remain inside the mailbox even after retrieval. The keep mode is employed when the user accesses the mails from a system away from the local workstation, so that the mail can be read now and maintained for later retrieval and organizing. However, POP3 has several limitations.
c. Internet mail access protocol, version 4 (IMAP4): It is a mail access proto-
col. IMAP4 is a powerful and complex mail access protocol used to handle
the transmission of email. Using IMAP4 the user can create the mailbox
hierarchy in a folder for e-mail storage or even create, delete, or rename the
mailboxes on the mail server. In cases where the bandwidth is low and the
e-mail contains multimedia with high bandwidth requirements, the user can
partially download the e-mail through IMAP4. It is also possible to check the
e-mail header or search for a string of characters in the e-mail content before
downloading.
◾◾ Reading messages: The incoming messages are read by the user agents. A user agent, when invoked, checks the mail in the incoming mailbox, referred to as an inbox. For each received mail, the user agent shows a one-line summary with fields like the number field, a flag indicating whether the mail has been read and replied to or read but not replied to, the sender name, the subject field if the subject line in the message is not empty, and the message size.
◾◾ Replying to messages: The user agent allows the user to send a reply message
to the original sender of the message or to all the recipients such that the
reply message contains the original message for quick reference and the new
message.
◾◾ Forwarding messages: The user agent allows the message to be sent to a third party apart from the original sender or the recipients of the copy, with or without adding any extra comments.
There are two types of mailboxes, called the inbox and outbox. These are actu-
ally files with a special format that are created and handled by the user agent (see
Figure 29.3). The received mails are kept inside the inbox and all the sent e-mails
are kept inside outbox. The messages received in the inbox remain there until the
user deletes the messages.
User agents can be of two types:
◾◾ Command driven user agents: The command driven user agents were used
in the earlier days but are still present as underlying user agents in servers. It
performs its task by accepting a one-character command from the keyboard.
By typing a predefined character at the command prompt the user can send
a reply to the sender of the message and by typing another predefined char-
acter at the command prompt the user can send reply to the sender and all
the recipients.
◾◾ GUI-based user agents: The GUI (graphical user interface) based user agents contain user interface components like icons, menu bars, and windows that allow the user to interact with the software using both the keyboard and the mouse, thereby making the services easy to access.
There are user agents from multiple vendors that can be used for day-to-day mail
communication. Some of the most popular application-based user agents include Microsoft Outlook and Mozilla Thunderbird. Some vendors offer Internet mail by
hosting mail server on their server and providing the user with Web-based user
agent like Gmail, Yahoo Mail, Microsoft MSN Mail, Rediff Mail, and so on.
In Web-based mail, the mails are transferred to the receiving mail server from
the sending mail server through SMTP and HTTP is employed in the transfer
of messages to the browser from the Web server. The user sends a message to the
Web site to retrieve the mails. The Web site asks the user to specify the user name
and password for authentication. The transfer of messages from Web server to the
browser happens only after user authentication. The messages are transferred in
HTML format from the Web server to the browser that loads the user agent.
[Figure: MIME transforms non-ASCII data from the user agent into 7-bit ASCII for delivery by SMTP, and transforms it back to the original data at the receiving side.]
MIME (multipurpose Internet mail extensions) supplements SMTP, which can carry only 7-bit ASCII, and defines five headers that can be added to the original mail header section:
1. MIME-Version => This header defines the version of MIME used:
MIME-Version: 1.0
2. Content-Type => The data types used in the body of the message are defined by this header. The type and the subtype of the content are separated by a slash. The header may contain other parameters depending on the subtype (see Figure 29.5).
3. Content-Transfer-Encoding => This header defines the method used to encode the message body for transfer, such as 7-bit, 8-bit, binary, Base64, or Quoted-Printable. In Base64 encoding, each 6-bit group of data is mapped to one printable character as shown below (values 0 to 25 map to uppercase letters, 26 to 51 to lowercase letters, 52 to 61 to digits, 62 to +, and 63 to /):
Value Code   Value Code   Value Code   Value Code   Value Code   Value Code
0     A      11    L      22    W      33    h      44    s      55    3
1     B      12    M      23    X      34    i      45    t      56    4
2     C      13    N      24    Y      35    j      46    u      57    5
3     D      14    O      25    Z      36    k      47    v      58    6
4     E      15    P      26    a      37    l      48    w      59    7
5     F      16    Q      27    b      38    m      49    x      60    8
6     G      17    R      28    c      39    n      50    y      61    9
7     H      18    S      29    d      40    o      51    z      62    +
8     I      19    T      30    e      41    p      52    0      63    /
9     J      20    U      31    f      42    q      53    1
10    K      21    V      32    g      43    r      54    2
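As a minimal sketch (not part of the book's Perl example), the following C function encodes three input bytes at a time into four characters from the table above, padding with '=' when fewer than three bytes remain:

#include <stdio.h>
#include <string.h>

static const char tbl[] =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

void base64_encode(const unsigned char *in, size_t len, char *out)
{
    size_t i, o = 0;
    for (i = 0; i < len; i += 3) {
        unsigned v = in[i] << 16;                 /* pack up to 3 input bytes */
        if (i + 1 < len) v |= in[i + 1] << 8;
        if (i + 2 < len) v |= in[i + 2];
        out[o++] = tbl[(v >> 18) & 63];           /* emit 4 output characters */
        out[o++] = tbl[(v >> 12) & 63];
        out[o++] = (i + 1 < len) ? tbl[(v >> 6) & 63] : '=';
        out[o++] = (i + 2 < len) ? tbl[v & 63] : '=';
    }
    out[o] = '\0';
}

int main(void)
{
    char out[16];
    base64_encode((const unsigned char *)"Hi!", 3, out);
    printf("%s\n", out);   /* prints SGkh */
    return 0;
}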
4. Content-Id => This header uniquely identifies the content in a multiple-message environment:
Content-Id: id = <content-id>
5. Content-Description => The nature of the body, that is, whether the body is an image, audio, or video, is defined by the Content-Description header.
Content-Description: <description>
The Base 64 and Quoted-Printable are preferred over 8-bit and binary encod-
ing. MIME is an integral part of communication involving SMTP client and
SMTP server.
29.6 Implementation
In this example, the Perl programming language is used for implementing SMTP
server and client. The implementation is not tied to a specific programming lan-
guage and the SMTP client and server can be implemented in other program-
ming languages as well. This example is just to illustrate the necessary components
required to implement the SMTP server and client.
The Net::SMTP library has been used in this example as it implements the SMTP functions required to implement SMTP applications in the Perl programming
language. Most high level programming languages have similar libraries for imple-
menting applications that require mail communication.
# ----------------- #
# SMTP Client #
# ----------------- #
# Include Net::SMTP library
use Net::SMTP qw(smtp);
# Connect to the mail server and send a message (host/addresses are placeholders)
my $smtp = Net::SMTP->new('mail.example.com');
$smtp->mail('oss@example.com');
$smtp->to('operator@example.com');
$smtp->data("Subject: Alarm summary\n\nNode 3 is faulty.\n");
$smtp->quit;
# ------------------ #
# SMTP Server #
# ------------------ #
# Include libraries
use Carp;
use Net::SMTP::Server;
use Net::SMTP::Server::Client;
use Net::SMTP::Server::Relay;
# Minimal loop (per the Net::SMTP::Server synopsis): accept, process, relay
my $server = new Net::SMTP::Server('localhost', 25) || croak("Unable to create server: $!\n");
while (my $conn = $server->accept()) {
    my $client = new Net::SMTP::Server::Client($conn) || next;
    $client->process || next;
    my $relay = new Net::SMTP::Server::Relay($client->{FROM}, $client->{TO}, $client->{MSG});
}
29.7 Conclusion
The intent of this chapter is to introduce the reader to the basic concepts of mail
communication. The chapter started with a basic overview of mail messaging and
mail delivery process. Then some of the mail protocols for data transfer and access
were discussed. MIME, which is an important concept in mail delivery, was also
handled. The chapter concludes with an implementation example of SMTP server
and client using the Perl programming language. Event handling is an essential activity in an operation support solution, and mail communication is a way of notifying operators and business managers of critical events.
Additional Reading
1. Pete Loshin. TCP/IP Clearly Explained. 4th ed. San Francisco, CA: Morgan Kaufmann,
2003.
2. John Rhoton. Programmer’s Guide to Internet Mail: SMTP, POP, IMAP, and LDAP.
Florida: Digital Press, 1999.
3. Rod Scrimger, Paul LaSalle, Mridula Parihar, and Meeta Gupta. TCP/IP Bible.
New York: John Wiley & Sons, 2001.
4. Laura A. Chappell and Ed Tittel. Guide to TCP/IP. Kentucky: Course Technology, 2001.
5. Candace Leiden, Marshall Wilensky, and Scott Bradner. TCP/IP for Dummies. 5th ed.
New York: For Dummies, 2003.
Chapter 30
File Handling
This chapter is about file transfer, which is a common method of data exchange. Performance and other data records are collected in the network elements at regular intervals and stored in files. These files need to be transferred from the network elements to the OSS solutions, where the data in the records is analyzed and plotted in a user-friendly manner. File transfer protocol (FTP) is the most common protocol
used for transfer of files across systems. This chapter introduces the reader to file
handling concepts and the use of FTP in file transfer.
30.1 Introduction
The transfer of files from one system to another is provided by the application
layer protocol called FTP. This means that FTP can be used in telecom manage-
ment solutions for file transfer between agents running on network elements and
management server. FTP involves a client–server based interaction. FTP avoids
the common problems present during a file transfer and favors easy file transfer by
taking care of differences between the two systems in file name conventions, in the way text and data are represented, and in directory structures.
Using TCP services, FTP establishes one connection in port 20 for data transfer
known as the data connection. This connection can be used to transfer different
types of data. It establishes another connection in port 21 for the control informa-
tion like commands and responses, known as the control connection. A line of command or a line of response is transferred at a time across the control connection.
[Figure: FTP uses two connections between client and server over TCP/IP, a control connection between the control processes and a data connection between the data transfer processes.]
The two systems involved must communicate with each other to facilitate file transfer. FTP is the protocol that enables file transfer between two systems that may or may not possess the same properties, such as operating system, character set, or file structure.
FTP uses different communication approaches over the two connections: the control connection carries the commands and responses, while the data connection carries the files themselves, with the transfer governed by the file type, the data structure, and the transmission mode described below.
Files of different types can be transferred across the data connection. In the case of ASCII files, which is the default format for transferring text files, the original file is transformed into ASCII characters at the sender site and the ASCII characters are transformed back into the receiver's own representation at the receiver site. The
files can also be transferred by EBCDIC encoding if either one end or both the ends
of the connection use EBCDIC encoding. Binary files like compiled programs or
images encoded as zeros and ones can be transferred as continuous streams of bits
without any interpretation or encoding.
Printability of the file encoded in ASCII or EBCDIC can be defined by adding
the following attribute:
◾◾ Nonprint format is the default format used for transferring a text file that will
be stored and processed later as the file lacks any character to be interpreted
for the vertical movement of the print head.
◾◾ Files in TELNET format contain ASCII vertical characters like NL (new
line), VT (vertical feed), CR (carriage return), and LF (line feed) indicating
that the file is printable after transfer.
The different data structures that can be used for transferring files through data
connection are:
◾◾ Record structure: Text files can be divided into records by using record
structure.
◾◾ Page structure: This divides the file into pages such that each page has a page
number and a page header facilitating the pages to be stored and accessed
sequentially or randomly.
◾◾ File structure: In this structure, the file is a continuous stream of bytes having
no structure.
The different transmission modes that can be used for transferring files through
data connection are:
◾◾ Block mode: In the block mode the files are delivered in the form of blocks
to TCP such that each block is preceded by a 3-byte header. The first byte, called the block descriptor, provides a description of the block. The second and third bytes define the block size in bytes.
◾◾ Compressed mode: In compressed mode the big files can be compressed by
replacing the data units that appear consecutively by one occurrence and
the repetition number. Blanks are compressed in text files and null charac-
ters are compressed in binary files. Run-length encoding method is used for
compression.
◾◾ Stream mode: In the stream mode the data reaches TCP as a continuous stream of bytes that is then split into segments of appropriate size. If the data consist of many records, each record has a termination character, and the sender closes the data connection at the end of the transfer. There is no need for a termination or end-of-file character in cases where the data is simply a stream of bytes. The stream mode is the default mode.
◾◾ File retrieval: This is the process of copying the file from the server to the
client.
◾◾ File storage: This is the process of copying the file from the client to the
server.
◾◾ Directory list or file names: These are treated as a file that can be sent from
the server to the client over data connection.
In file storage, the control commands and responses are exchanged across the control connection (see Figure 30.5). Then each of the data records is transferred across the data connection.
[Figure 30.5: Storing a file and retrieving a list both use the control connection for commands and responses and the data connection for the data itself.]
30.6 Implementation
Four example programs are discussed in this section. All the programs in this section are implemented in the Perl programming language.
1. Read file: In this example, the contents of a file are read into a data structure and the data structure is printed to display the contents read from the file.
# --------------------------- #
# Read Contents of File #
# --------------------------- #
# Specify the file that needs to be read
$fileName = '/root/test.txt';
2. Write data to file: In this example, data is written to a file opened in write mode. After the write operation with this program, the contents of the file can be verified using the read file program above.
# --------------------------- #
# Write Data to File #
# --------------------------- #
# --------------------------- #
# File Manipulation #
# --------------------------- #
# --------------------------- #
# FTP Client #
# --------------------------- #
30.7 Conclusion
The intent of this chapter is to introduce the reader to the basic concepts of file
transfer. The chapter started with a basic overview of file transfer using FTP. Then
some of the details on communication with FTP are discussed along with other
methods for file sharing. The chapter concludes with implementation examples
using the Perl programming language. File handling and transfer is an essential
activity in an operation support solution.
Additional Reading
1. Candace Leiden, Marshall Wilensky, and Scott Bradner. TCP/IP for Dummies. 5th ed.
New York: For Dummies, 2003.
2. Pete Loshin. TCP/IP Clearly Explained. 4th ed. San Francisco, CA: Morgan Kaufmann,
2003.
3. Rod Scrimger, Paul LaSalle, Mridula Parihar, and Meeta Gupta. TCP/IP Bible. New
York: John Wiley & Sons, 2001.
4. Laura A. Chappell and Ed Tittel. Guide to TCP/IP. Kentucky: Course Technology. 2001.
Chapter 31
Secure Communication
31.1 Introduction
The study of transforming messages in order to make them secure and immune
to attacks is known as cryptography. The original text message without any trans-
formation is said to be in plaintext format. After applying transformation on the
original message to make it secure, the message is called ciphertext. The plaintext is
transformed to the ciphertext by the user through the encryption algorithm and the
ciphertext is transformed back to the plaintext by the receiver through the decryp-
tion algorithm (see Figure 31.1).
The process of ciphering (encrypting or decrypting) is performed using an algorithm called the cipher together with a key, which is actually a number or a value. The key is kept secret and requires protection. In the encryption process the encryption key along with the
encryption algorithm transforms the plaintext to the ciphertext and in the decryp-
tion process the decryption key along with the decryption algorithm transforms the
ciphertext to the plaintext (see Figure 31.2). Symmetric key or secret key cryptogra-
phy algorithms and the public key or asymmetric cryptography algorithms are the
two groups of cryptography algorithms.
[Figures 31.1 and 31.2: The sender's encryption algorithm, driven by the encryption key, transforms plaintext into ciphertext; the receiver's decryption algorithm, driven by the decryption key, transforms the ciphertext back into plaintext.]
Let us take DES and Triple DES as examples of symmetric key cryptography.
There are many other symmetric key algorithms like AES (Advanced Encryption
Standard), RC4, IDEA (International Data Encryption Algorithm), Twofish,
Serpent, Blowfish, and CAST5.
[Figure: DES transforms a 64-bit plaintext block into a 64-bit ciphertext block under a 56-bit key, and Triple DES applies DES three times. Figure: a hash function condenses a message of variable length into a fixed-length digest.]
Public key algorithms are more complex than symmetric key algorithms and are not efficient for large amounts of text. Also, the relationship between an entity and its public key must be verified in the public key method.
The digital signature that is used for authenticating the sender of documents
and messages uses public key cryptography. Either the whole document or a digest
(condensed version of the document) can be signed. Public key encryption can be
used for signing an entire document where the private key is used by the sender for
encryption and the public key of the sender is used by the receiver for decryption.
Signing the whole document using public key encryption is inefficient for long messages. To overcome this inefficiency, a digest of the document is created using a hash function and then signed (see Figure 31.5). A digest is a condensed version of the document.
Message Digest 5 (MD5), which produces a 128-bit digest, and Secure Hash Algorithm 1 (SHA-1), which produces a 160-bit digest, are two common hash functions.
The hash operation is a one-way operation; that is, the digest can only be created from the document, while the reverse is not possible. Also, a good hash function behaves practically like a one-to-one function, in that there is very little probability that two different messages will create the same digest.
The digest so created is encrypted by the sender using the private key and attached
to the original message and sent to the receiver who separates the digest from the
original message and applies the same hash function to the message to create a
second digest.
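As a minimal sketch, assuming OpenSSL's libcrypto is available (this is not the book's example, which uses C# later in the chapter), a digest of a message can be computed as follows; signing the digest with the sender's private key is omitted here:

#include <openssl/sha.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *msg = "alarm report";
    unsigned char digest[SHA_DIGEST_LENGTH];   /* 20 bytes = 160 bits */
    int i;

    SHA1((const unsigned char *)msg, strlen(msg), digest);

    for (i = 0; i < SHA_DIGEST_LENGTH; i++)
        printf("%02x", digest[i]);             /* print the digest in hex */
    printf("\n");
    return 0;
}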
The receiver uses the public key of the sender to decrypt the received digest.
If the two digests are the same then message authentication and integrity are
preserved. The digest, which is a representation of the message, has not changed, indicating that integrity is provided. The message is authentic as the message and its digest come from the true sender. Also, the sender cannot deny the message, as the sender cannot deny the digest.
There is an access to every user’s public key in the case of public key cryp-
tography. So as to prevent the intruder attack, the receiver can ask a certification
authority or CA, which is a federal or state organization that binds a public key to
an entity and then issues a certificate. This will help the receiver to give his public
key to the users so that no user can accept a public key forged as that of the receiver.
The problem behind public key certification is that one certificate may have the
public key in one format and another certificate may have the public key in another
format. To overcome this ITU has devised the X.509 protocol that describes the
certificate in a structured way. When public keys are used universally, many certificate servers, much like DNS servers, are required to answer queries; these servers are arranged in a hierarchical relationship called the public key infrastructure (PKI).
31.4 Message Security
The following services are offered under message security: confidentiality, integrity, authentication, and nonrepudiation.
Before starting a communication, the sender identity is verified using the authenti-
cation service. User authentication can be done by both symmetric key cryptogra-
phy and public key cryptography.
◾◾ User authentication with public key cryptography: Here the sender encrypts
the message with its private key and the receiver can decrypt the message
using the sender’s public key and authenticate the sender. But the intruder
can either announce its public key to the receiver or encrypt the message
containing a nonce with its private key, which the receiver can decrypt with
the intruder’s public key (assumed to be that of the sender).
The transport mode and the tunnel mode are the two modes that define where
the IPSec header is added to the IP packet. In transport mode, the IPSec header is
added between the IP header and the rest of the packet and in tunnel mode, the
IPSec header is added before the original IP header and a new IP header is added
before the IPSec header. IPSec header, IP header, and the rest of the packet consti-
tute the payload of the new IP header.
Authentication header (AH) protocol and the encapsulating security payload
(ESP) protocol are the two protocols defined by IPSec.
Security in the World Wide Web can be met through the security protocol at the
transport level known as transport layer security or TLS. The TLS lies between the
application layer and the transport layer (TCP). The handshake protocol and the
data exchange or record protocol are the two protocols that make up the TLS.
Now coming to the topmost layer, the application layer, there are several protocols that offer
application level security. One of the application layer security (ALS) protocols is
PGP. The pretty good privacy (PGP) protocol is used at the application layer to
provide security including privacy, integrity, authentication, and nonrepudiation
in sending an e-mail. Combination of hashing and public key encryption is called
digital signature and provides integrity, authentication, and nonrepudiation.
// Cryptography namespace
using System.Security.Cryptography;
3. The file selected for encryption is termed “fileName” in this example. This
can be changed to the specific file to be encrypted. Next we write the func-
tion to encrypt. The code has been commented so that the reader can follow
the steps involved.
/* --------------------------
Encrypt function
--------------------------- */
void Encrypt_Function( )
{
// Output file has the same file name with extension ".enc"
string encryptedFile = fileName + ".enc";
// Encrypt data ("descriptor", "fileRead", and "input" are set up
// in the earlier steps of this example)
ICryptoTransform desencrypt = descriptor.CreateEncryptor();
CryptoStream output = new CryptoStream(fileRead,
    desencrypt, CryptoStreamMode.Write);
output.Write(input, 0, input.Length);
output.Close();
/* --------------------------
Decrypt function
--------------------------- */
void Decrypt_Function( )
{
31.8 Conclusion
The intent of this chapter is to introduce the reader to the basic concepts of
security implementation. Some of the fundamental concepts of symmetric and
asymmetric cryptography were discussed. The security services and protocols will
give the reader an understanding of security implementation at different layers. The
implementation example discussed in this chapter uses 3DES.
Additional Reading
1. Neil Daswani, Christoph Kern, and Anita Kesavan. Foundations of Security: What Every
Programmer Needs to Know, New York: Apress, 2007.
2. Vasant Raval and Ashok Fichadia. Risks, Controls, and Security: Concepts and
Applications. New York: Wiley, 2007.
Chapter 32
Application Layer
This chapter is about the NETCONF protocol and its implementation. The basic
features in NETCONF are discussed along with the XML message for performing
NETCONF operations. Considering that most legacy implementations are based
on SNMP and the recent technology is centered around XML, a sample program
to convert SNMP MIB to XML is also discussed.
32.1 Introduction
In the preceding chapters we have discussed the implementation of several application
layer protocols like FTP, SMTP, and HTTP. While the application layer protocols
discussed so far were suited for a specific management activity like transfer of files,
event handling as mails, and so on, these protocols do not fall in the category of a
network management protocol for message exchange between the agent and server.
In order to be used as a management protocol, the protocol needs to have func-
tions to perform the following activities between server and agent:
There are several protocols, like SNMP and NETCONF, that satisfy these conditions and are used for network management. Considering that SNMP is mostly
phased out with the advent of XML-based protocols, this chapter is focused on
implementation with NETCONF, which is an XML-based protocol. It should be
noted that most legacy NMS implementations are still SNMP based. A program
to convert an SNMP event MIB to XML is also discussed. The program uses the libsmi library to access SMI MIB modules.
The features of NETCONF that make it the next generation management
protocol are:
Most industry leaders in operation support space have embraced XML-based pro-
tocols. NETCONF is one of the most popular XML-based protocols and hence has
been used in this chapter to discuss management operations.
32.2 NETCONF
NETCONF, short for network configuration, is based on the client–server model and uses a remote procedure call (RPC) paradigm to facilitate communication. The
messages are encoded in XML and are sent over a secure, connection-oriented ses-
sion. The contents of both the request and the response messages are described
using XML. The client is a script or application typically running as part of a net-
work manager and the server is the agent in the network device.
The layers in NETCONF are (see Figure 32.1):
◾◾ Content
◾◾ Operations
◾◾ RPC
◾◾ Transport
The configuration data depends on the type of device being managed and varies
based on the device implementation. The transport layer can be any basic transport
protocol that satisfies some requisites of NETCONF. In this implementation, the
XML-based messaging patterns of the RPC model and the NETCONF operations
are discussed.
32.4 NETCONF Operations
Some of the basic protocol operations in NETCONF are discussed in this section.
II. <edit-config> This operation is used to load all or part of a specified con-
figuration to a defined target configuration. The new configuration can be
expressed inline or using local/remote file.
<rpc message-id="101"
     xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<edit-config>
<!-- target configuration datastore; <running/> assumed here -->
<target>
<running/>
</target>
<config>
<interface>
<name>ExternalRouter 6</name>
<router>15</router>
</interface>
</config>
</edit-config>
</rpc>
<rpc message-id="102"
     xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<delete-config>
<target>
<!-- The current or <running>
configuration cannot be deleted -->
<startup/>
</target>
</delete-config>
</rpc>
VII. <lock> This operation is used to lock the configuration source. The lock will
not be granted if a lock is already held by another session on the source or the
previous modification has not been committed.
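A lock request follows the same XML pattern as the other operations; a minimal sketch of locking the running configuration, using the base namespace defined in the NETCONF specification (the message-id value is arbitrary), is:

<rpc message-id="101"
     xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <lock>
    <target>
      <running/>
    </target>
  </lock>
</rpc>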
The example program that converts an SNMP MIB to XML is written in C using the libsmi library. First, include the required headers and initialize the library:
#include <stdio.h>
#include <smi.h>
#include <unistd.h>
// ………
// Initialize
smiInit(NULL);
// ……………...
i = optind++;
modulename = smiLoadModule(argv[i]);
smiModule = modulename ? smiGetModule(modulename):
NULL;
if ( ! smiModule ) {
fprintf(stderr, "Cannot locate module '%s'\n",
argv[i]);
}
else
{
// Get total module count
modules[moduleCount++] =
smiModule;
}
// ……………...
fclose(file);
// ……………...
Step 3: Convert the data to XML. The output needs to describe each event with
the object identifier, event identifier, event label, and description. Read through the
comments to understand program flow.
SmiNode* smiNode;
SmiNode* tmpNode;
SmiElement* smiElem;
int i;
int j;
int len;
char* logmsg;
{
fprintf(file, "<event>\n");
fprintf(file, "\t<objIdf>");
len = smiNode->oidlen;
fprintf(file, "</objIdf>\n");
fprintf(file, "</event>\n");
c. Run the executable with the MIB file as input and check the converted XML
output.
# mibtrap2xml SNMPv2-MIB.txt
# mibtrap2xml IF-MIB.txt
d. Sample outputs
Example: Output for IF-MIB
<event>
<objIdf>.1.3.6.1.6.3.1.1.5.4</objIdf>
<eventIdf>linkUp</eventIdf>
<eventLabel>IF-MIB defined trap event: linkUp
</eventLabel>
<description> A linkUp trap signifies that the
SNMP entity, acting in an agent role, has detected
that the ifOperStatus object for one of its
communication links left the down state and
transitioned into some other state (but not
into the notPresent state). This other state
is indicated by the included value of
ifOperStatus.</description>
</event>
<!-- End of xml generated from MIB: IF-MIB -->
32.6 Conclusion
The intent of this chapter is to introduce the reader to the basic concepts of imple-
menting NETCONF. The chapter started with a basic overview of NETCONF.
Then the layers that are relevant for NETCONF application development were
discussed. The chapter concludes with an implementation example for converting
a MIB file to XML using the libsmi library. Implementing communication between
the client and server using a management protocol is the base framework over which
the business logic of an operations support solution is added. The basic operations
remain the same when using another management protocol: for example, the
NETCONF <get-config> operation is similar to the SNMP "GET", and the SNMP
"SET" is similar to the NETCONF <edit-config> operation.
Additional Reading
1. Alexander Clemm, Marshall Wilensky, and Scott Bradner. Network Management
Fundamentals. Indianapolis, IN: Cisco Press, 2006.
2. Robert L. Townsend. SNMP Application Developer’s Guide. New York: Wiley, 1995.
3. Larry Walsh. SNMP MIB Handbook. England: Wyndham Press, 2008.
4. Douglas Mauro and Kevin Schmidt. Essential SNMP. 2nd ed. California: O’Reilly
Media, Inc., 2005.
Chapter 33
OSS Design Patterns
This chapter is about design patterns that are commonly used in programming
management applications. Several design patterns can be applied in the development
of specific modules of a management application; the discussion in this chapter is
limited to a few of the common design patterns used in management application
development.
33.1 Introduction
This chapter details some of the design patterns commonly used in the context of
developing management applications. The C++ programming language has been
used to implement the patterns. Each pattern description has three parts in this
chapter. The first part gives the pattern overview. The second part details a specific
management application area where the pattern can be used. The third part gives
the implementation of the pattern. Readers who are not familiar with C++ may skip
the implementation details.
Implementation:
//---------------------------------------------------------
// Singleton Design Pattern
// ver0.0 Jithesh Sathyan Aug 02, 2009
//---------------------------------------------------------
#include <iostream>
using namespace std;

class singleton
{
private:
    static singleton *p;      // the single shared instance
    singleton()               // private constructor prevents direct instantiation
    {
    }
public:
    static singleton* returnAdd()
    {
        // Additional check (locking) required for multithreaded applications
        if (p == NULL)
        {
            p = new singleton;
            cout << "New object created" << endl;
            return p;
        }
        else
        {
            cout << "Old object returned" << endl;
            return p;
        }
    }
    static void cleanUp()
    {
        delete p;
        p = NULL;
    }
};
singleton* singleton::p = NULL;

int main()
{
    singleton *p1 = singleton::returnAdd();   // creates the instance
    singleton *p2 = singleton::returnAdd();   // returns the same instance
    singleton::cleanUp();
    return 0;
}
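The comment in returnAdd() notes that an additional check is required for multi-
threaded applications. One common way to address this, shown here as a sketch that
is not part of the original listing, is to rely on a function-local static, whose
initialization is thread safe in C++11 and later:
class safe_singleton
{
private:
    safe_singleton() {}
public:
    static safe_singleton& instance()
    {
        // Initialization of a function-local static is thread safe in C++11 and later
        static safe_singleton obj;
        return obj;
    }
};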
#include <iostream>
using namespace std;

// Codes identifying which module can service a help request
// (the values below are assumed for this listing)
enum { general = 1, hlr = 2, hss = 3 };

class Alarm
{
// Add constructor :-)
public:
    virtual void needHelp(int x)
    {
        // Implement in child
    }
};
class HSS : public Alarm
{
private:
    Alarm *emp;   // next handler in the chain
public:
    HSS(Alarm *pass)
    {
        emp = pass;
    }
    void needHelp(int x)
    {
        cout << "In HSS Alarm module" << endl;
        if (x != hss)
        {
            cout << "No help in HSS Alarm module" << endl;
            emp->needHelp(x);   // pass the request down the chain
        }
        else
        {
            cout << "Help identified in HSS Alarm Module" << endl;
        }
    }
};
class HLR : public Alarm
{
private:
    Alarm *emp;   // next handler in the chain
public:
    HLR(Alarm *pass)
    {
        emp = pass;
    }
    void needHelp(int x)
    {
        cout << "In HLR Alarm module" << endl;
        if (x != hlr)
        {
            cout << "No help in HLR Alarm module" << endl;
            emp->needHelp(x);   // pass the request down the chain
        }
        else
        {
            cout << "Help identified in HLR Alarm Module" << endl;
        }
    }
};
// General: end-of-chain handler (definition and message text assumed for illustration)
class General : public Alarm
{
public:
    void needHelp(int x)
    {
        cout << "Help handled in General Alarm module" << endl;
    }
};

int main()
{
    General *gen = new General();
    HLR *ana = new HLR(gen);
    HSS *man = new HSS(ana);
    // Try requesting help for different values
    man->needHelp(2);
    return 0;
}
#include <iostream>

class Observer
{
// Add constructor :-)
public:
    int flag_set;
    virtual void update()
    {
    }
} *obsr[3];
// Would suggest to store the observers in a vector
// so that we don't have to keep track of the number of observers
class Subject
{
// Add constructor :-)
public:
    void registerObserver(Observer *o);
    void removeObserver(Observer *o);
    void notifyObserver();
};
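As the comment above suggests, the observers can be kept in a vector so that the
number of observers need not be tracked separately. The following is a minimal
sketch of how the Subject operations could then be implemented; the class and
member names are illustrative and not part of the original listing.
#include <vector>
#include <algorithm>

class VectorSubject
{
private:
    std::vector<Observer*> observers;   // no fixed-size array to manage
public:
    void registerObserver(Observer *o)
    {
        observers.push_back(o);
    }
    void removeObserver(Observer *o)
    {
        observers.erase(std::remove(observers.begin(), observers.end(), o),
                        observers.end());
    }
    void notifyObserver()
    {
        // Push the change to every registered observer
        for (size_t i = 0; i < observers.size(); i++)
            observers[i]->update();
    }
};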
Implementation:
//---------------------------------------------------------
// Factory Design Pattern
// ver0.0 Jithesh Sathyan Aug 03, 2009
//---------------------------------------------------------
#include <iostream>
using namespace std;

// Connection type codes (the values are assumed for this listing)
enum { Oracle_type = 1, MySql_type = 2 };

class connection
{
public:
    virtual void description()
    {
        cout << "Generic connection" << endl;
    }
};

// Concrete connection classes (reconstructed; the message text is assumed)
class Oracle : public connection
{
public:
    void description()
    {
        cout << "Oracle connection" << endl;
    }
};

class MySql : public connection
{
public:
    void description()
    {
        cout << "MySql connection" << endl;
    }
};

class Factory
{
private:
    int connectType;
public:
    Factory(int t)
    {
        connectType = t;
    }
    connection* createConnection()
    {
        // Decision making implemented inside the Factory
        switch (connectType)
        {
        case Oracle_type: return new Oracle();
        case MySql_type:  return new MySql();
        default: cout << "Error. Contact Jithesh" << endl;
                 return NULL;
        }
    }
};

int main()
{
    Factory *fact;
    fact = new Factory(Oracle_type);
    connection *con = fact->createConnection();
    con->description();
    return 0;
}
Implementation:
//---------------------------------------------------------
// Adapter Design Pattern
// ver0.0 Jithesh Sathyan Aug 06, 2009
//---------------------------------------------------------
#include <iostream>
#include <string>
using namespace std;

class server_msq
{
public:
    string snmpver_msg;
    void setvalue(string msg)
    {
        snmpver_msg = msg;
    }
    void getvalue()
    {
        //-----
    }
};

class newClient
{
public:
    string version;
    string msg;
    void setvalue()
    {
        //--------
    }
    void getvalue()
    {
        cout << "Version: " << version << endl;
        cout << "Message: " << msg << endl;
    }
};

// adapter (reconstructed sketch): presents the server_msq message in the
// version/message form expected by newClient; the splitting logic is assumed
class adapter
{
private:
    string version;
    string msg;
public:
    void setvalue(server_msq *serv)
    {
        // Take the first token as the version and the rest as the message text
        string s = serv->snmpver_msg;
        size_t pos = s.find(' ');
        version = s.substr(0, pos);
        msg = (pos == string::npos) ? "" : s.substr(pos + 1);
    }
    void getvalue()
    {
        cout << "Version: " << version << endl;
        cout << "Message: " << msg << endl;
    }
};

int main()
{
    server_msq *serv = new server_msq();
    serv->setvalue("v2 Test Message");
    adapter *adp = new adapter();
    adp->setvalue(serv);
    adp->getvalue();
    // ----- now adp output can be used to set the newClient object
    delete serv;
    delete adp;
    return 0;
}
#include <iostream>
#include <vector>
#include <string>
using namespace std;

class employee
{
public:
    string name;
    string dept;
    employee(string n, string d)
    {
        name = n;
        dept = d;
    }
    employee(const employee& pt)              // copy constructor
    {
        name = pt.name;
        dept = pt.dept;
    }
    employee& operator=(const employee& ptx)  // assignment operator
    {
        name = ptx.name;
        dept = ptx.dept;
        return *this;
    }
};

class Division
{
private:
    string dept;
    vector<employee> v;
    vector<employee>::iterator itrv;
public:
    Division(string n)
    {
        dept = n;
    }
    void add(string emp)
    {
        v.push_back(employee(emp, dept));
    }
    void displayDetails()
    {
        for (itrv = v.begin(); itrv != v.end(); itrv++)
        {
            cout << "Name: " << itrv->name << " " << "Section: "
                 << itrv->dept << endl;
        }
        cout << "-----------------------------" << endl;
    }
};

int main()
{
    Division *d1 = new Division("Management");
    d1->add("Dan");
    d1->add("Phil");
    d1->add("Steve");
    d1->displayDetails();
    Division *d2 = new Division("Techie");
    d2->add("Mary");
    d2->add("Rick");
    d2->add("Susan");
    d2->displayDetails();
    delete d1;
    delete d2;
    return 0;
}
//---------------------------------------------------------
// Mediator Design Pattern
// ver0.0 Jithesh Sathyan Aug 08, 2009
//---------------------------------------------------------
#include <iostream>
using namespace std;

// Mediator interface: the pages interact only through this interface
class MedInf
{
public:
    virtual void navigate() {};
};

class Home
{
private:
    MedInf *med;
public:
    Home(MedInf *x)
    {
        med = x;
    }
    void go()
    {
        cout << "In home page" << endl;
        med->navigate();
    }
};
class Purchase
{
private:
    MedInf *med;
public:
    Purchase(MedInf *x)
    {
        med = x;
    }
    void go()
    {
        cout << "In Purchase page" << endl;
        med->navigate();
    }
};

class Search
{
private:
    MedInf *med;
public:
    Search(MedInf *x)
    {
        med = x;
    }
    void go()
    {
        cout << "In Search page" << endl;
        med->navigate();
    }
};

class Exit
{
private:
    MedInf *med;
public:
    Exit(MedInf *x)
    {
        med = x;
    }
    void go()
    {
        cout << "Exiting ......" << endl;
    }
};
class Mediator : public MedInf
{
private:
    Home *home; Purchase *pur;
    Search *sur; Exit *ext;
public:
    Mediator()
    {
        home = new Home(this);
        pur = new Purchase(this);
        sur = new Search(this);
        ext = new Exit(this);
    }
    void navigate()
    {
        int n;
        cout << "Make a selection [1]Home [2]Purchase [3]Search [4]Exit" << endl;
        cin >> n;
        switch (n)
        {
        case 1: home->go(); break;
        case 2: pur->go(); break;
        case 3: sur->go(); break;
        case 4: ext->go(); break;
        default: cout << "Wrong choice" << endl;
                 ext->go(); break;
        }
    }
};

int main()
{
    MedInf *x = new Mediator();
    x->navigate();
    return 0;
}
33.9 Conclusion
The key point the author wants to convey in this chapter is that knowledge of the
business logic alone is not enough to develop a good management solution. Service
developers also need a good understanding of the programming language used, the
best practices of that language, and the design patterns that can be applied to
implement an effective OSS solution for commercial use.
This chapter details only some of the design patterns used in OSS development;
there are many more. For example, when the service developer wants to hide the
connection details of a server module, a remote proxy design pattern can be used,
and when state transitions are involved, a state method or state object pattern can
be used (a minimal sketch of the latter follows). The choice of design pattern is
based on the specific problem that is to be addressed.
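As an illustration of the state object pattern mentioned above, the following minimal
sketch models an alarm whose behavior changes as it moves between a raised and a
cleared state. The class names, state names, and messages are illustrative and not
taken from the text.
#include <iostream>
using namespace std;

class AlarmState
{
public:
    virtual void handle() = 0;
    virtual ~AlarmState() {}
};

class RaisedState : public AlarmState
{
public:
    void handle() { cout << "Alarm is active; notify fault management" << endl; }
};

class ClearedState : public AlarmState
{
public:
    void handle() { cout << "Alarm cleared; archive the event" << endl; }
};

class ManagedAlarm
{
private:
    AlarmState *state;                             // current state object
public:
    ManagedAlarm(AlarmState *s) { state = s; }
    void setState(AlarmState *s) { state = s; }    // state transition
    void process() { state->handle(); }            // behavior delegated to the state
};

int main()
{
    RaisedState raised;
    ClearedState cleared;
    ManagedAlarm alarm(&raised);
    alarm.process();            // behaves as a raised alarm
    alarm.setState(&cleared);   // transition on clear
    alarm.process();            // behaves as a cleared alarm
    return 0;
}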
Additional Reading
1. Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides. Design Patterns:
Elements of Reusable Object-Oriented Software. Addison-Wesley Professional, 1994.
2. Elisabeth Freeman, Eric Freeman, Bert Bates, and Kathy Sierra. Head First Design
Patterns. California: O’Reilly Media, Inc., 2004.
Chapter 34
Report: NGNM
Framework and
Adoption Strategy
34.2 Introduction
With rapid improvements happening in the architecture of core and access net-
works, it is quite clear that legacy NMS (network management system) solutions
with proprietary interfaces and protocols will become obsolete: the cost and effort
required to maintain such systems is very high, they do not scale with the require-
ments, and it is difficult to integrate other components with them. The way forward
for a next generation network is a scalable NMS suitable for handling both legacy
and next generation networks, with special customizations added specifically for
the different networks.
This report presents an NGNM framework incorporating the most popular
standards currently available in the NMS domain. The discussion of the suggested
NGNM framework is organized into four main phases. The first phase details the
framework for developing a generic network management solution based on SOA
(service oriented architecture). To this framework, generic functionalities are added
based on open standards and interface definitions. Once the architecture and design
details of a scalable generic network management solution are ready, the report
details the second phase of adding the functionalities of NGNM (next generation
network management) as defined by the latest standards on next generation network
management. The architecture for building these functionalities is also discussed,
with an example of the use of EDA (event driven architecture) to satisfy the
requirement of minimal manual intervention.
At the end of phase two, a next generation network management solution is ready.
The next two phases involve specialization of the solution for a specific network;
IP multimedia subsystem (IMS) is considered as the example network in this
report. Some of the NMS functionalities specific to IMS are discussed and the
architecture for developing them is detailed. An example of a phase-three function-
ality is an HSS (home subscriber server) provisioning agent that sends SOAP
(simple object access protocol) triggers to the HSS and also updates a common sub-
scriber store based on a generic user profile for a converged network.
The functionalities added are in line with SOA. This ensures that the addi-
tions will not require changes in the generic solution. The specialized functions can
be managed as separate services. The fourth phase involves customization specific
to IMS, like the log format with IMS log parameters and registers for IMS per-
formance data collection based on 3GPP standards. The discussion on NGNM
is followed by an analysis of the adoption strategies for working on the NGNM
framework.
34.3 Legacy NMS
The various modules in a typical legacy NMS are (see Figure 34.1):
Platform components
Management function components
Legacy NMS cannot easily adopt these services, as the fault functionality does
not isolate "alarm handling by setting a specific threshold" from "forced clear-
ing of an alarm" at implementation. The interactions between these modules are
tightly coupled, making the addition of new services difficult without major
rework in code.
4. Nonstandard interfaces for interaction: Proprietary interfaces are used by
legacy NMS to communicate with the NE/EMS on the southbound interface
and with the OSS on the northbound interface. The communications within
the different layers of the NMS also use proprietary interfaces. When there is a
need to support a new NE/EMS/OSS, the complete pipeline in the legacy NMS
has to be modified to manage it. Proprietary interfaces make the NMS very
tightly coupled and difficult to maintain.
5. Dependency on management protocols rather than a standard informa-
tion/data model: The NMS should not be hardwired to the protocol used by
the EMS from which it collects data. The functionalities in the NMS must be
implemented against a standard information/data model, unlike the way it is
done in legacy NMS.
6. Nonscalable architecture: An NMS supports multiple functionalities, and in
most cases these functionalities may not be implemented by the same vendor.
Legacy NMS is not built on a scalable architecture such as SOA, which makes
adding functionality tedious and allows only minimal reuse of components.
34.5 NGNM Solution
34.5.1 NGNM Framework
Phase 1 of the NGNM solution development starts with the creation of a generic
NGNM framework. This is the NGNM framework shown in Figure 34.2, without
any specialized NMS functions. The NGNM framework suggested in this report
supports the following:
a. SOA for architecture: The functionalities in the NMS are designed as sepa-
rate services with standard interfaces for interaction and an XML file for any
customization. There is no hard coding of network-element-specific informa-
tion such as table names. New functionalities can be added as separate services
without disturbing or changing existing functions. For example, the perfor-
mance module may interact with the fault module but is independent of the
fault module's implementation. With this technique, when an open source
component such as an independent subscriber provisioning function is added
to the NMS and interacts with the fault or performance module, retesting of
the existing functions is not required. SOA leads to smooth integration and a
functionality-rich NMS. A minimal sketch of such an interface-only inter-
action is given below.
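To illustrate the point, the following sketch shows two management functions
that interact only through an abstract service interface, so one can be replaced or
extended without retesting the other. The interface, class, and method names are
illustrative and not part of the report.
#include <iostream>
#include <string>
using namespace std;

// Abstract service contract: other modules depend only on this interface
class FaultService
{
public:
    virtual void reportAlarm(const string &element, const string &severity) = 0;
    virtual ~FaultService() {}
};

// One possible implementation; it can be swapped without touching its clients
class BasicFaultService : public FaultService
{
public:
    void reportAlarm(const string &element, const string &severity)
    {
        cout << "Alarm from " << element << " severity " << severity << endl;
    }
};

// The performance module uses the fault module only through the interface
class PerformanceService
{
private:
    FaultService *fault;
public:
    PerformanceService(FaultService *f) : fault(f) {}
    void checkThreshold(const string &element, double value, double limit)
    {
        if (value > limit)
            fault->reportAlarm(element, "major");  // no dependency on the implementation
    }
};

int main()
{
    BasicFaultService faultModule;
    PerformanceService perfModule(&faultModule);
    perfModule.checkThreshold("HSS-1", 97.5, 90.0);
    return 0;
}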
(Figure: event-driven processing — data from the SOA and standards-based NMS,
where events are aggregated and presented in a specific format, is placed on an event
bus and passed to process handling toward the SBI.)
The generic framework and functionalities designed so far can be used with
different types of networks, not just IMS.
(Figure: IMS-specific provisioning — OSS/BSS systems and PC clients reach the
NMS server over HTTP; the HSS provisioning agent on the NMS server sends
SOAP triggers to the HSS and, through a SOAP client, updates the common
subscriber store (GUP).)
34.6 Adoption Strategy
Adoption strategy is a discussion of how the NGNM framework can be adopted
without doing away with the existing legacy NMS solution.
(Figure: the NGNM framework connected to the existing legacy NMS through a
mediation layer.)
Years of development have made the legacy NMS highly specialized, but each
time a new network is to be managed or a new application needs to be added,
considerable work is involved to make the legacy NMS usable for the new
requirement.
In this case there would be many code segments and functionalities in the legacy
NMS that can be reused, and writing an adapter would mean losing the opportu-
nity to reuse them effectively. A complete rewrite from the legacy NMS to NGNM
by porting applications will result in a majority of applications not working prop-
erly, or in major hard coding and tight coupling in the new framework, which still
may not be scalable. This makes the staged approach a good solution for migrating
from legacy NMS to NGNM in a cost-effective manner.
The staged migration has four main stages:
Framework components
Framework
Published interface agreements
Mediation layer
(Figure: unified NMS framework — GUI/NBI components such as state/service
discovery, event handling, threshold set, single sign-on, subscriber setting, fault
correlation, access management, and a data plug-in, exposed over RMI/IIOP
through a SID + OSS/J NBI; platform components such as the report engine,
backup handler, event handling, threshold set, and the SBI toward the NEs; all
connected through a service bus with a service registry.)
Enhancements
Now add additional components, like a service registry, and make the functional
framework based on the SOA architecture (see Figure 34.11). This way, we can
migrate from the legacy NMS to the unified NMS framework. The advantage of the
phased approach is that each stage can be tested and, for more control, split further.
This makes the transformation less risky, more manageable, and more cost effective.
The staged migration can also be performed as a single-stage activity, where the
single stage represents a product release that transforms the legacy NMS product
to the NGNM framework.
The migration can be performed in line with existing product development,
merging the changes with the main code base after the changes are made for each
stage (see Figure 34.12). This way the product functionalities can be augmented
while the transformation of the architecture happens. Each stage can be tested and
stabilized without affecting delivery to the existing clients of the NMS product.
34.8 Conclusion
It is not cost effective for NMS developers to continue using their legacy solutions,
and sooner or later the adoption of NGNM is inevitable. The major challenge is to
adopt NGNM in a manner that best addresses the business needs of the company.
This report presents an NGNM framework formulated from the existing technology
standards in the domain. The adoption methods specified in the report need to be
evaluated by different companies, and each one should come up with the strategy
that best satisfies its goals. Suggested future work includes white papers that present
detailed case studies on issues identified in the NGNM framework and its adoption
in different companies, and on how these issues can be avoided. It should be under-
stood that any development on an NGNM solution should ensure that the existing
scalable architecture is maintained and that no new hard coding or dependencies
between modules are introduced.
Supporting Publications
1. Transformation of Legacy Network Management System to Service Oriented
Architecture Conference. Next-Generation Communication and Sensor
Networks, 2007, USA.
2. Strategies for Developing Next Generation Network Management System
Conference. International Seminar and Exhibition on Strategies for Future Defense
Networks and Relevance of EMI/EMC in Future Battlefields, 2007, India.
References
1. ITU-T SG4: M.3060 (Principles for the Management of Next Generation Networks).
2. 3GPP TS 32.111-2: Telecommunication management; Fault Management;
Part 2: Alarm Integration Reference Point: Information Service (IS).
3. 3GPP TS 32.409: Telecommunication management; Performance Management
(PM); Performance measurements IP Multimedia Subsystem (IMS).
4. TR133-NGN Management Strategy: Policy Paper and associated Addenda, Release
1.0 of TeleManagement Forum.
5. Venkatesan Krishnamoorthy, Naveen Krishnan Unni, and V. Niranjan. Event-Driven
Service Oriented Architecture for an Agile and Scalable Network Management System.
In Proceedings of the 1st International Conference of Next Generation Web Services
Practices. Seoul, Korea, August 2005.
6. Matt Welsh, David Culler, and Eric Brewer. SEDA: Architecture for Well-Conditioned
Scalable Internet Services. In Proceedings of the 18th Symposium on Operating
Systems Principle, Alberta, Canada, October 2001, pp 230–45.
7. Andreas Hanemann, Martin Sailer, and David Schmitz. 2004. Assured Service Quality
by Improved Fault Management. In Proceedings of the 2nd International Conference
on Service Oriented Computing, New York, 2004.
8. Boris Gruschke. Integrated Event Management: Event Correlation Using Dependency
Graphs. Proceedings of the 9th IFIP/IEEE International Workshop on Distributed
Systems: Operations and Management, Newark, DE, October 1998, pp 130–41.
9. Zeng Bin, Hu Tao, Wang Wei, and Li ZiTan. A Model of Scalable Distributed Network
Performance Management. In Proceedings of the International Symposium on Parallel
Architectures, Algorithms and Networks (ISPAN 2004).
10. Jean-Philippe Martin-Flatin, Pierre Alain Doffoel, and Mario Jeckle. Web Services for
Integrated Management: A Case Study. In Proceedings of the European Conference on
Web Services, September 27–30, 2004.